Marty's 2018 Software Diary

Table of Contents

The complete diary

1 January 2018

Here's the "copy last year to this year trick":

$ set swdiary-201{8,9}.org; (sed 4q $1; grep '^[#*$][ +]' $1) | sed 's/2018/2019/' | tee $2

which has to be tweaked just a bit for next year.
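The tweak amounts to bumping both the brace expansion and the year substitution. A hypothetical 2019-to-2020 edition, runnable as written (a tiny sample file stands in for the real diary; the assumed layout is a 4-line header followed by outline lines beginning with '#', '*', or '$'):

```shell
# hypothetical next-year variant of the copy trick; the sample file
# below stands in for the real swdiary-2019.org
cd "$(mktemp -d)"
printf '%s\n' "Marty's 2019 Software Diary" '' 'Table of Contents' '' \
    '* 1 January 2019' 'body text' > swdiary-2019.org
set swdiary-20{19,20}.org
(sed 4q $1; grep '^[#*$][ +]' $1) | sed 's/2019/2020/' | tee $2
```

The header passes through with its year bumped; only the outline lines (and none of the body) are carried forward.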

1.1 Monday, 1st   libsave lib_crunch

I'm in the midst of introducing a few functions to update my function libraries. Yesterday, I completed the low-level stuff:

  • libsave – LibraryName Function …

    which updates each Function … in LibraryName, updating only those functions whose current instance differs from the copy the library now holds.

    There is a caveat: the update places a date tag in the leading comment section, so the comparison excludes date tags from both copies; the no_datetags function filters both the current definition and the planned update.

    Unchanged functions are not touched.

  • chg_nondata – LibraryName Function

    called for each function, does the comparison, returning the name of a changed or previously non-existent function, i.e. a function to be updated in the library.

  • lib_crunch – LibraryName.

    As updated functions are appended to the library, new functions are unique in the library, but changed functions exist in two copies: the original and the update. The update has been appended, so when the library is sourced ( e.g. $ . LibraryName ) the later definition, the update, is the operative one. That is fine in routine use, but it's more than tidiness to remove the now-obsolete definition. The library may get bulky; more importantly, a quick scan of an un-crunched library may turn up the obsolete copy. Consider:

    $ view +/Function.()/  LibraryName
    
    

    opportunities to mislead should be cleared up when they appear.

    In the absence of a defined initialization function according to the SHELF proposal, the library is sourced. This is a defense against repeated, recursive sourcing of the library.

    Having been sourced, so that the later, just-included definitions are preferred, the function definitions are collected with the declare -f idiom, and the fun_starter convention is applied, yielding

    LibraryName_init 1>&2
    
    

    as the only executable statement in the library.

    The crunching begins and ends with calls to back up the prior and updated copies of the library.
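A minimal sketch of the crunch itself, under the conventions described above (the function name, the use of grep to list definitions, and the appended init line are all assumptions; the real lib_crunch surely differs, not least in its backup calls):

```shell
# hypothetical sketch of lib_crunch: source the possibly-duplicated
# library so the later definitions win, then rewrite it from the live
# definitions, leaving the LibraryName_init call as the only
# executable statement.
lib_crunch_sketch () {
    local lib=$1
    source "$lib"
    # every name defined as "name () ..." anywhere in the file, once
    local funs=$(grep -o '^[A-Za-z_][A-Za-z0-9_]* ()' "$lib" |
                 awk '{print $1}' | sort -u)
    { declare -f $funs
      echo "$(basename "$lib")_init 1>&2"
    } > "$lib".new && mv "$lib".new "$lib"
}
```

After the crunch each function appears exactly once, in the canonical declare -f format.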

1.2 Wednesday, 3rd

My currently active families:

  • m4
  • generic
  • libmgmt
  • smart, public – the basis for library curating

What about the SHELF standard?

Now the smart-public combine may offer the possibility of a small collection of libraries: the always library, sourced in the .profile; a development library, with library management, shell documentation, etc.; and an application library with rdb, finance, …

The always library would include the util, smart, public, trace, and report families.

After some struggle, the smart-public pair is taking shape.

smart_source () 
{ 
    : mfg: smart_locality;
    : date: 2018-01-03;
    ${@:-echo} /Users/applemcg/Dropbox/commonplace/textindex
}

with needed support from the libmgmt: change_nondata and libsave.
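The ${@:-echo} idiom in smart_source is worth noting: called with no arguments, the function echoes its path; called with a command, it applies that command to the path. A throwaway demonstration (demo_file and its path are stand-ins, not part of the library):

```shell
# the ${@:-echo} "file function" idiom from smart_source: no
# arguments means echo the name, otherwise the arguments are a
# command applied to the name.
demo_file () { ${@:-echo} /tmp/demo_file.$$.txt; }

demo_file                      # echoes the path
date > /tmp/demo_file.$$.txt
demo_file cat                  # cats the file
demo_file wc -l                # counts its lines
```

The same pattern shows up later in the ddup_input, ddup_saved, and similar functions.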

Today's innovation is the elimination of the general local_init, since fun_starter is now:

fun_starter () 
{ 
    : hdr: -- PUBLIC ---------------------------------------- UTIL Functions --;
    : from: fun_starter;
    report_notfile $1 && return 1;
    function initstmt () 
    { 
	printf "%s 1>&2\n" $1
    };
    foreach initstmt $(functions $1 | grep _init)
}

Since libsave now concludes with a lib_crunch, the easy way to stamp out or distribute necessary copies of a function is to

$ libsave {needed update library} function ...

1.3 Wednesday, 24th

Yesterday, in my new "Code book", I thought of a complement function. Given a function argument which returns a selection of its remaining arguments, this function returns the complement of that selection, e.g.

$ complement last_names this Smith that Jones other Johnson # returns
this that other

Here is the function

complement () 
{ 
    : ~ selectionFunction function ...;
    : returns logical complement of selection function ...;
    : a utility;
    : date: 2018-01-23;
    report_notargcount 2 $# selection function ... && return 1;
    comm -23 <(wpl ${*:2} | sort)       \
             <($1  ${*:2} | wpl | sort)
}

Key to the presentation is the parallel structure.

The latter invocation is the selection function applied to the remaining arguments. wpl returns a Word Per Line; each stream is then sorted into the proper form for the comm utility. By default comm produces three columns: lines unique to the first <( … ) file argument, lines unique to the second, and lines common to both. The "minus 2 3" flag says remove the second and third columns, leaving those unique to the first file, in this case the command-line arguments not in the selected list.
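The comm behavior can be checked in isolation, using the same names as the complement example above:

```shell
# comm expects sorted input and by default prints three columns:
# lines unique to the first file, lines unique to the second, and
# common lines.  -23 suppresses columns 2 and 3.  Smith, Jones, and
# Johnson appear in both lists, so only this/that/other survive.
comm -23 <(printf '%s\n' this Smith that Jones other Johnson | sort) \
         <(printf '%s\n' Smith Jones Johnson | sort)
```

This is exactly the pipeline inside complement, minus the wpl step.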

2 February 2018

2.1 Saturday, 10th

Yesterday I finally overcame my uncertainty and made full use of git. I'd taken the time to read the Git Book. Its common-sense explanation of using git as a local (non-networked) version control tool erased my fears about its routine use. I've spent a month using git locally to develop the Smart-Public facility I'm building inside my bash repo on github.

This means I can relegate my backup function to application uses, and not for command line file backup.
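The local, no-remote workflow is worth sketching, since it is what makes the backup function redundant for command-line use (the file and commit messages here are made up):

```shell
# a minimal local (non-networked) git history: version control with
# no remote at all, replacing ad-hoc backup copies of files.
cd "$(mktemp -d)"
git init -q
git config user.email 'me@example.com'   # local identity for the demo
git config user.name  'Me'
echo 'f () { echo one; }' > mylib
git add mylib
git commit -q -m 'first cut of mylib'
echo 'f () { echo two; }' > mylib
git commit -q -am 'revise f'
git log --oneline
```

Every prior version of mylib is recoverable from the history, with no network involved.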

3 March 2018

3.1 Saturday, 3rd   pandoc commonmark file_share

Today I'm working on deciding which type of data goes where. For our Men's Club, I've just sent Jerry Mirelli a letter asking for his experience with and preferences for:

  • Dropbox
  • OneDrive
  • Google Drive
  • iCloud

And the important part as it affects my work: I'm moving Men's Club board and membership meeting minutes to Markdown, Gruber style. After trying to leap over the hurdles that pandoc and commonmark place in the way of embedded HTML, I said the heck with it. For messages being used in the next week and month, and then going into the public record, it's Markdown.

Pandoc retains its utility in converting the front end to the preferred format, either from/to markdown or org-mode, but not in the HTML generation.

I guess I'd like to settle on Dropbox for the sharing mechanism. It's the most agnostic, or rather least vendor-specific, of the mechanisms. My work with MS docx and xlsx files suggests that Google Docs is the preferred tool-kit for documents and spreadsheets, but that needn't influence the use of Google Drive.

We'll see.

3.2 Tuesday, 6th   box dropbox INteractive_Systems

3.2.1 letter to Jerry Mirelli

Jerry,

a momentary downer for Box.

On the good side, Box Drive installed easily and is effective, competitive with OneDrive, Google Drive, and iCloud. And it has a similar appearance in the top folder display: you open the icon, the directory structure appears, and GUI operations are identical.

Box also has a tool I think I'd prefer: Box Sync. Installing either Sync or Drive warns you not to be using the other; they have overlapping features which may conflict. That's the good news: you only need one or the other for many, if not most, uses.

However, I first tried to install Sync, so there was no interference from Drive. I could not log in to Sync, even after I reset my password through the online browser interface and re-logged to verify I'd not made a mistake. When I did log in on Drive, I was able to use the credentials I'd established on the browser. Chasing some online help about this, it appears people have been having difficulty since macOS went to the latest release, High Sierra I believe it's called (*).

So, … it's not possible for me to tell if Box Sync, when installed, has the same features as Dropbox. Namely, that the files occupy the namespace visible to all the tools. It would seem to me that this is an essential feature. Which is to say, when I access a Dropbox file with any editor or tool, it doesn't need the mediation of the GUI. And it's likely the reason for the conflict between Box Sync and Box Drive. If Box Sync worked, I'd imagine it would be competitive with Dropbox.

=*+[]+ Marty McGowan 908 230-3739

http://alum.mit.edu/www/mcgowan

.sig changes due to MGD, 11/22/18, and JFH "jr" on 1/17/18

(*) One could blame Apple for the lack of compatibility, except they have no interest in providing an interface to what is really a competitive tool for them, i.e. Box vs iCloud. And Box, if it is being courted by MS or any other Big Tech, couldn't care less about supporting a feature that has no real effect except in Apple environments. That said, on Dropbox's behalf: when they first appeared, they had already solved the architectural differences between the two O/S families: Windows, where the architecture is "Drives" (when was the last time you had an A: or B: drive? :-), and Unix, where the architecture is mountable file systems (any hierarchy may be mounted at any level of a tree in some other hierarchy).

There is one feature that Dropbox will never (never say "never") support that is the great distinction between a Unix file and a Windows file: Unix has a most powerful, if not widely used, feature called a "link". The link preceded the internet link (the URL) by 25 years. A Unix link allows a single file to be addressed by many names. I used this feature 34 years ago, and still use it today for my own backups. At INteractive Systems, we were porting Unix to different hardware sets, for different vendors, and supporting two main releases of the OS. Unix was written in the C language, which allows conditional compilation directives: i.e. it was possible to have two or more hardware-specific pieces of code in the same file. We had naming conventions in the file system to enforce the build distinctions. So… we had fixed names for the OS, Chip Set, and Project name. A given developer might only be working on projects all with a like OS and Chip Set. They would have separate folders for ProjectA, ..B, .. etc.

so, they would have a file in a tree:

../myDirectory
   /source
      /common
      /ProjectA
      /ProjectB
        ...
../integrationDirectory
    /ChipSetZ
       /OSn
          /common
          /ProjectA
          /ProjectB



The point there is that with the Unix link it was possible to have the identical source in more than one place: not copies of the source, but the identical file. The hierarchies didn't need the same structure, since some other developer might have:

...
   /ProjectA
       /ChipSetZ
   /ProjectB
       /OSn


There were about 20 "frontier" projects with 120 developers at 4 US sites in 3 time zones. I triggered a nightly build at 12:10 - 1 am Eastern Time. The first week there were ~ 200 editing collisions (more than one developer trying to edit the same file) per day out of 1700 files. By the third week we had it down to a handful, and any developers likely to be working on the same file at the same time knew well whom to call with an alert. This coordination was possible because of naming conventions at two levels, in the code and in the directory names, plus sharp developers and the Unix link. You, of all folks, would appreciate the organization this took.
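A hard link can be demonstrated in miniature, with a toy stand-in for the source/integration trees above:

```shell
# a Unix hard link: one file under two names in two different
# hierarchies (directory names are a toy stand-in for the
# source/integration trees described in the text).
cd "$(mktemp -d)"
mkdir -p source/ProjectA integration/ChipSetZ/OSn/ProjectA
echo 'int main(void) { return 0; }' > source/ProjectA/main.c
ln source/ProjectA/main.c integration/ChipSetZ/OSn/ProjectA/main.c
# an edit through either name is seen through the other, because
# both names point at the identical file:
echo '/* patched */' >> integration/ChipSetZ/OSn/ProjectA/main.c
cmp source/ProjectA/main.c integration/ChipSetZ/OSn/ProjectA/main.c && echo identical
```

The link count in ls -l (the second column) shows how many names the file has.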

Pardon this considerable digression. I have friends in the Unix world who don't consider Windows to be an operating system, and to be honest, I don't either. When I paid careful attention to performance statistics, there were challenges to see how long (inter-network) hosts went before rebooting. The top 100, at least, were all a flavor of Unix; there were no instances of an MS/Windows machine in the list. It recalls the late '40s, when IBM outlawed the Dvorak keyboard for the national speed-typing contests: the Navy-trained Dvorak typists were beating the IBM qwerty typists by 20-40 words/minute, just when IBM was designing electric typewriters.

3.2.2 format update   markdown pandoc

Decision.

To produce Men's Club Minutes from the latest meetings, 2/26 and 3/1 for the board meeting, I've settled on the original Markdown. After futzing with commonmark and pandoc markdown, I'm back to vanilla Markdown. Why? Easy: pandoc is the Swiss Army knife of format converters, so it has an opaque method for embedding HTML. And commonmark, in an effort to have a standard syntax, introduces hurdles right at the point where you'd be satisfied to allow raw HTML.

The original Gruber Markdown only requires a blank line in the source file preceding and following the raw, or native, HTML. It's much less elegant, and therefore simpler.

Pandoc is still useful as a first-format converter, e.g. org-mode to markdown, html to org-mode, etc. HTML output is either in org-mode or my local copy of markdown.

3.3 Thursday, 15th

In the last few days, I've introduced cp and mv functions, presumably upwardly compatible with the commands of the same name, with one added feature: destination files are backed up before and after the copy or move. Here are the three main functions in the cpmv family:

cpmv_do () 
{ 
    : cpmv_do xx file ... directory;
    : cpmv_do xx file file;
    : cpmv_do xx -flag ...;
    trace_call $*;
    case $2 in 
        -*)
            trace_call COMMAND $*;
            command $*;
            return
        ;;
    esac;
    eval local destination=\$$#;
    [[ -d $destination ]] && { 
        cpmv_directory $1 $destination ${*:2};
        return
    };
    [[ $# -gt 3 ]] && { 
        error "cpmv_do file file";
        return 2
    };
    cpmv_backup $*
}
cp () 
{ 
    report_notargcount 2 $# && return 1
    cpmv_do cp $*
}
mv () 
{
    report_notargcount 2 $# && return 1
    cpmv_do mv $*
}
cpmv_directory () 
{ 
    : ~ cmd directory file ... directory;
    local destination=$2;
    trace_call destination: $destination $*;
    for file in ${*:3};
    do
        [[ $file = $destination ]] && break;
        [[ -d $file ]] && continue;
        local bn=$(basename $file);
        cpmv_backup $1 $file $destination/$bn;
    done
}
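cpmv_backup itself, which all three functions rely on, isn't shown above. A plausible sketch of what it does, before and after the real command (the backup helper, its .bak directory, and the whole shape are guesses at the real library's conventions, not its code):

```shell
# hypothetical sketch of cpmv_backup: snapshot the destination (if it
# already exists) before and after running the real cp or mv.  The
# backup helper here keeps uniquely-named copies in ./.bak; the real
# library's backup function surely differs.
backup () {
    [[ -f $1 ]] || return 0
    mkdir -p .bak
    cp "$1" "$(mktemp .bak/"$(basename "$1")".XXXXXX)"
}
cpmv_backup () {
    local cmd=$1 src=$2 dst=$3
    backup "$dst"                  # preserve any prior destination
    command "$cmd" "$src" "$dst"   # the real cp or mv
    backup "$dst"                  # and preserve the result
}
```

Two snapshots per overwrite: the file as it was, and the file as it became.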

4 April 2018

4.1 Thursday, April 5th   DRY TiddlyWiki ftplib

In the last month, I've recovered my TiddlyWiki mojo, and found it helpful. Too helpful, I might say. Software-wise, my accomplishment in support of it and these diaries is an ftplib. See ftp_doc for its documentation. This shows off some of TiddlyWiki's assets.

Not to mention confusing the markdown syntax.

The library has a companion app. It's testing the WET - DRY line, where the latter acronym stands for Don't Repeat Yourself. In practice this means it's OK to repeat yourself in places where you'd otherwise be forcing the generality issue. It's better to decouple modules than to introduce too many external dependencies. This, of course, is an experiment.

Right now, the app is version 0.9, awaiting a final shakedown: for example, delivering this file to its on-line home.

4.2 Saturday, April 7th   dedoop tangle tangleMode

I wrote this app to use dedoop on my Sites directory. Presuming dedoop has been downloaded, here's the source:

ddup_cnfg  () { setenv DDUP_PWD $PWD; setenv DDUP_DIR $1; setenv DDUP_DATA $HOME/$2; }
ddup_id    () { nava DDUP_PWD; nava DDUP_DIR; nava DDUP_DATA; }
ddup_do    () { ddup_cnfg $*; dedoop $1 $DDUP_DATA; }
ddup_init  () { echo "cd $PWD; $ ddup_do someDir saveDataDir # wrt $HOME "; }
ddup_help  () { set | grep -i '^ddup_' | sed 's/ .*//'; }
ddup_input () { ${*:-echo} $DDUP_DATA/data.json; }
ddup_saved () { ${*:-echo} $HOME/lib/ddup_data.txt; }
ddup_error () { ${*:-echo} $HOME/lib/ddup_data.err; }
ddup_sumry () { ${*:-echo} $HOME/lib/ddup_summary.txt; }
ddup_popd  () { awk '{ print $2 }' | sort | uniq -c; }
ddup_sumarize ()
{
    : rolls up a ddup_favor report by
    : individual files, frequency count, reporting
    :  same and TU, total Unique files
    :           SF, separate files, and
    :        TU+SF, duplicated files total
    :
    sort -n -k2 | awk '

     {
       tu += $1; sf += $1 * ($2-1);
       printf "%5d %3d %5d %5d %6d\n", $1, $2, tu, sf, tu+sf

     }
'
}
ddup_favor () {  awk -v ddir=$DDUP_DIR '

    BEGIN {
            logfmt = "%5d %2d %s %s\t%s\n";
            stderr = "/dev/stderr"
          }

    function logrec(c, f, s) {

          printf logfmt, NR, c, f, s, $0 > stderr
    }     

                             { logrec(c,"", ""); }
    $1 ~ /"path":/           { gsub("\"", ""); gsub(",",""); path = $2 }
    $1 ~ /"original_paths":/ { count = 1; next; }
    count > 0                { gsub("\"", ""); file[count++] = $1; }
    index($1, ddir)          { gsub("\"", ""); pref          = $1; }
    $1 ~ /\],/               {

          c = count - 2;

          first = file[c]
          second = pref

          logrec(c, first, second) 

          if (pref == file[c]) {

              first = pref
              second = ""
              logrec(c, first, second)        


          } else if (pref != "" ) {

              first = pref
              second = file[c]
              logrec(c, first, second)                 

          }
          printf "%10s\t%d\t%s\t%s\n", path, c, first, second
          count = 0
          pref = ""
    }


'
}
ddup_code () 
{ 
    ${*:-open} http://mcgowans.org/pubs/marty3/commonplace/software/swdiary-2018.html#dd_code
}
ddup_manpp () 
{ 
    ${*:-open} 'http://mcgowans.org/pubs/marty3/commonplace/MyIdeaWarehouse.html#dd_app(1)'
}
ddup_run () 
{ 
    ddup_input cat | ddup_favor | sort -k3 | ddup_saved tee | ddup_popd | ddup_sumarize | ddup_sumry tee
}
ddup_tofix () 
{ 
    : date: 2018-05-04;
    printf "1. parameterize _input\n2. and Sites in _favor\n"
}

See the note for 5.1

4.3 Saturday, April 21st    directory sitemap

Today I was cleaning up our local Directory, the contact list from our management company, and found a few email addresses needing cleanup. Three types of problem had occurred when, apparently, the account owners typed extra information into their directory records:

  • a closing right parenthesis
  • extraneous control characters after the address
  • omitting the domain name

The first two could be handled by the same type of RE; the latter required knowing the missing domain's primary name.

This script leaves a list of the emails on the standard output. The email is in the fifth CSV field; cawk is "Comma AWK":

cawk () 
{ 
    : date: 2017-07-18;
    trace_call $*;
    awk -F',' "$@"
}
pickema () 
{ 
    set $(today).csv;
    [[ -f $1 ]] || { 
	cat SBApril2018Directory.csv | cawk '

	    BEGIN { OFS = "," }

	    NR > 1 {

	     gsub(/\.com.*/,".com",$5)
	     gsub(/\.net.*/,".net",$5)
	     gsub(/gmail$/,"gmail.com",$5)

	     print

	   }    ' > $1
    };
    cawk '{ print $5 }' $1
}
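The three repairs can be exercised on throwaway addresses (these are made up, and the demo works on whole lines rather than pickema's fifth CSV field):

```shell
# the three pickema repairs on sample addresses: a stray closing
# parenthesis, trailing junk after the domain, and a bare "gmail"
# missing its domain.
printf 'jane@x.com)\njoe@y.net junk\npat@gmail\n' |
awk '{
    gsub(/\.com.*/, ".com")
    gsub(/\.net.*/, ".net")
    gsub(/gmail$/,  "gmail.com")
    print
}'
```

Note the rule order matters: the .com truncation runs before the gmail completion, so a repaired gmail.com is never re-truncated.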

Now to begin the ordering, or cleanup, of my web space. Time to develop a map of where files are placed on http://mcgowans.org/pubs/marty3

#  from $HOME/Dropbox   to   http://mcgowans.org/pubs/marty3

./christmas*.html      ./     # should go to ./Family

# see ftp_common 

./org/*.html           ./Dropbox/org
./commonplace/{\1}     ./commonplace/{...}

and the few functions used to reconcile the directory trees:

hsrc () 
{ 
    for f in ${*:-*.html};
    do
        g=${f%.*};
        ls $g.*;
    done | cat
}
htcl () 
{ 
    egrep -v '/(.(bak|ver|old|state))|ar|upload|update|mcindex|oldbook|index.200'
}
htdict () 
{ 
    find . -name '*.html' | htcl | awk -F/ '{ printf "%-25s\t%s\n", $NF, $0 }' | sort
}
htmatch () 
{ 
    cd $HOME
    cd Sites/mcgowans_org/marty3;
    htdict | tee ~/lib/htmlsitedict.txt;
    cd $HOME
    pushd Dropbox;
    htdict | tee ~/lib/htmldropdict.txt
}

4.4 Monday, April 30th

Today, the breakthrough is the app_fmlib function, shown here:

app_fmlib () 
{ 
    : turn LIB into an APP;
    : 1. collect functions used by FAMlib,;
    : 2. write an edit script for 3.;
    : 2. define them as FAM_function ...;
    : 3. edit the lib functions to call redefined FAM_function;
    : 4. announce differences between _NXT and existing _APP;
    : date: 2018-04-30;
    :;
    : family_ functions: functions, library, notlib, and utilities
    : become family members
    function family_functions () 
    { 
        : date: 2017-05-29;
        : date: 2017-08-10;
        case $# in
        0 )
            [[ -p /dev/stdin ]] || {
                echo family_functions is NOT reading a PIPE 1>&2;
                echo try family_{library,utilities,notlib}   1>&2;                
                return 1;
            }
            ;;
        * )
            [[ -f $1 ]] || {
                 echo $1 is NOT a file to family_functions 1>&2;
                 return 2;
            }
            ;;
        esac        
        awk '$2 ~ /^[(][)]$/ && !printed[$1]++ { print $1 }' ${*:--}
    };
    function family_library () 
    { 
        family_functions $(which familylib) | sort
    };
    function family_notlib () 
    { 
        : date: 2018-04-29;
        comm -23 <(set | family_functions | grep -i family_) <(family_functions $(which familylib) | sort)
    };
    function family_utilities () 
    { 
        command comm -23 <(family_functions $(which family_app)| sort) <(family_library)
    };
    set ${1:-ftp};
    set ${1%lib};
    fam=$1;
    lib=$(which ${1}lib);
    app=${lib%lib}_app;
    nxt=${lib%lib}_nxt;
    bas=.$(basename $lib);
    [[ -f $lib ]] || { 
        echo No LIB: ${1}lib;
        return 1
    };
    :;
    source applib 2> /dev/null;
    source $lib;
    : 1. app_fun collects list of used functions;
    : the functions "functions, library, notlib, utilities";
    : identify the library and utility functions, and are;
    : relabled to belong to the library family;
    [[ -f $bas ]] || { 
        ( echo family_{functions,library,notlib,utilities} | tr ' ' '\n';
        app_fun $(functions $lib) ) | sort > $bas
    };
    : 2. writes the edit script;
    comm -23 $bas <(functions $lib|sort) | tee .needed | awk -v fam=$fam '

            { printf "s/%s/%s_%s/g\n", $1, fam, $1 }
        END {
              printf "s/family_/%s_/g\n",     fam
              printf "s/familylib/%slib/g\n", fam             
              printf "s/%s_%s/%s/g\n",        fam, fam, fam
          }

        ' > .sedscript;
    cat $lib <(declare -f $(< .needed)) | sed -f .sedscript > $nxt;
    :
    : --------- assure canonical format --      
    : 
    lib_crunch $nxt
    :
    [[ -f $app ]] || { 
        cp $nxt $app
    };
    flcomm -2 $nxt $app;
    echo mv $nxt $app;
    wc $lib $nxt $app
    echo "to refresh, rm -f $bas"
}

Updated in 5.1

5 May 2018

5.1 Thursday, May 3rd   app

See notes on 4.4 and 4.2

Today, I've tuned up the app_fmlib function. I'll begin maintaining an app from its library root as a canned, repeatable procedure. While this function is not yet "apped", it soon should be; there are some high-level functions in there which will bring along quite a bit of the underlying function-maintenance tools, I think.

First, the local "family" functions: functions, library, notlib, and utilities. Each new app gets a personal copy of these functions.

But this function app_fmlib now uses functions from other libraries:

  • flcomm
  • lib_crunch
  • functions
  • app_fun

These four represent a sea of function structure. Time to

$ app_fmlib somelib  # which has functions app_fmlib ...

and see what happens.

The bonus of this factoring is that the hierarchy is now flattened: the lib and its app. Cataloging all apps built this way suggests recording the supplying function libraries for each app; when a supplying library is updated, then so is the app.

Oh, and one other thing: the app needs a starter's pistol, so that when sourced, it announces its entry points and user documentation.

It's worth noting the _notlib function is available to identify functions using the family name which are not defined in the _library. And before closing that chapter, decide whether _notlib should consider the _utilities along with the _library. Maybe that means _notlib should be _notapp.

5.2 Wednesday, May 6th   shbang

A few days ago, in my paper tech diary, I conceived a three-tier structure for the function library. This note updates the thoughts there. My view of the library, based on the SHELF principles, now includes:

  • the library, conventionally named {something}lib
  • its application, named {something}_app, and
  • commands, which I'd thought would be {something}_cmd.

This last notion has given way to a command interface for the library, invoked by the {something} command, which might look like this:

#!/bin/sh
something=$(basename $0)
source ${something}_app
report_notfunction ${something}_$1 && exit 1
eval ${something}_$1 ${@:2}

This something could be linked to all other such somethings, since the script is completely general. It is an amazing bit of code: the only sh-bang you will ever need.
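The linking claim can be demonstrated end to end. This is a trimmed, runnable variant of the dispatcher (bash rather than sh for the ${@:2} slice, the app sourced from the current directory, and no report_notfunction guard; the greet family is made up):

```shell
# the dispatch keys off $(basename $0), so one copy serves every
# family: a link under a new name is a new command whose subcommands
# are that family's functions.
cd "$(mktemp -d)"
mkdir bin
cat > bin/sh-bang <<'EOF'
#!/bin/bash
something=$(basename $0)
source ./${something}_app
eval ${something}_$1 ${@:2}
EOF
chmod +x bin/sh-bang
ln bin/sh-bang bin/greet                    # "greet" is now a command
echo 'greet_hello () { echo hello $1; }' > greet_app
PATH=$PWD/bin:$PATH greet hello world       # dispatches to greet_hello
```

No edit to the script was needed to create the new command, only the link and the family's _app file.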

The above sh-bang file and its companion library, banglib, are tangled with org-babel-tangle to a bin directory on the user's PATH.

This latter banglib supplies user functions:

  • bang_iam – an app generated from a function library asserts itself as a sh-bang executable. Its name is saved in .shbang.log; bang_saved uses the log to restore links to an updated sh-bang.
  • bang_who – lists the sh-bang executables in the current directory.
  • bang_saved – restores updated sh-bang to previously linked clients.

This banglib, and any other lib aspiring to become an application, follows this procedure. At the moment, app_fmlib is the subject of testing to make the procedure one-stop shopping.

$ . banglib       # source the library, loading the functions 
$ app_fmlib  bang # loads the bang_app with lower level functions 
$ bang_iam bang   # makes "bang" a bang-able app!  
$ bang test       # runs a near trivial test

If there is a function library named somefamlib, then an application built from the library:

$ app_fmlib somefam    # creates somefam_app
$ bang iam  somefam    # links it to the_only_sh-bang_youll_ever_need
$ somefam              # execution may proceed

This app is sourced in the default section of the case options. It contains all the functions needed to run any sub-function of the app.

This notion of an application supersedes the family-name function. I'll have to retreat to unwind the notion that the function family name is a master function itself, and leave that functionality to the just-installed sh-bang.

5.3 Sunday, May 13, Mother's Day

Picking up on the latest idea, today I begin to unwind the fam_generic usage. The important function, that of turning $ fam sub x y z into $ fam_sub x y z, is now taken over by the Only sh-bang you will ever need. Which leaves the hard work: it's not editing the functions, no, it's rather the production piece. It now seems the smart-public feature was a little over-wrought.

So, the two jobs at hand:

5.3.1 TODO pick up fam_generic

  • have fam_generic do nothing quietly
  • excise fam_generic from fam_iam.

5.4 Sunday, May 20   subject object useby

Today begins the end of an unnecessary plethora of function names dealing with Who calls Whom!

I've collected any number of functions matching the RE: ^fun_*use*. Today I'm using fuse, which reads "Functions who USE", and fun_uses which reads "this FUNction USES …"

Here is fuse_context, and its use on fuse and fun_uses:

bin.$ declare -ff fuse_context; fuse_context fuse fun_uses
fuse_context () 
{ 
    printf "%-9s\t%-9s\tcontext\n" function "used by" 1>&2;
    for f in $*;
    do
        fuse $f | sed "s/  */   /; s/^/$f       /";
    done
}
function        used by         context
fuse    clf             set -- $( args_uniq $(fuse ${1:-clf} | field 1 ));
fuse    do_fuse         echo $1 $( fuse $1 )
fuse    fapi            set -- $( args_uniq $(fuse $1 | field 1 | sed 4q ));
fuse    fapi            _fun_show fuse $1 | _trim_hash
fuse    fnuse           declare -f $1 $( fuse $1|field 1 )
fuse    fun_create              fuse eval | grep '()' | sed 's/ *()/ ()/; s/ *\$/$/' | awk '$4 ~ /()/'
fuse    fun_maker               set | fuse eval | grep ' () ' | field 1
fuse    fun_useme               fuse ${1:-ff} | awk '
fuse    fuse_context            fuse $f | sed "s/  */   /; s/^/$f       /";
fuse    fuse_lib                fuse $1 | sed "s/^/$1   /"
fuse    fuse_lib                foreach _fuse_one $( functions $1 ) | tee fuse.out | field 1 | uniq -c | awk '$1 < 2 { print $2 }';
fuse    fuse_lib                wc fuse.out 1>&2
fuse    isclf           set -- $( args_uniq $(fuse ${1:-$(myname )} | field 1));
fuse    sfuse           fuse $1
fun_uses        app_uses                fun_uses $* | awk -v notthese="$notthese" -f $( awk_file but_not_these )
fun_uses        appuses         fun_uses $* | awk -v notthese="$notthese" -f $( awk_file but_not_these )
fun_uses        fun_call                fun_uses $1 | awk "\$1 ~ /^$1\$/ { next }; { print \"$1\", \$1 }"
fun_uses        fun_level               foreach fun_uses $( $* ) | grep -v trace_call | sort -u
fun_uses        fun_useby               echo $1 $( fun_uses $1 | grep -v "^$1\$" )

It seems prudent to whip up a uses_context

I can't tell you how much time I've wasted by failing to come to grips with function creep in this area.

And one more thing: fuse produces its context; fun_uses only identifies the functions it uses, or calls on.

On a bit of final reflection, my problem was an inability to settle on the language problem: Who is the subject and who is the object?

  • fuse – list the functions who use me:
  • fun_uses – this function uses these functions.
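Neither definition is reproduced in this entry; minimal sketches of the pair, keeping that subject/object reading straight, might look like the following (a crude word-match over declare -f stands in for real call analysis, and the real versions print context lines, as the table above shows):

```shell
# hypothetical sketches keeping the grammar straight:
#   fuse F     -- functions who USE F    (F is the object)
#   fun_uses F -- functions that F USES  (F is the subject)
fuse () {
    local f
    for f in $(declare -F | awk '{print $3}'); do
        [ "$f" = "$1" ] && continue
        declare -f "$f" | grep -qw "$1" && echo "$f"
    done
}
fun_uses () {
    local f
    for f in $(declare -F | awk '{print $3}'); do
        [ "$f" = "$1" ] && continue
        declare -f "$1" | grep -qw "$f" && echo "$f"
    done
}
```

The symmetry is the point: the two are the same scan with the roles of caller and callee swapped.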

6 June 2018

6.1 Saturday, June 9th

In which I think it's time to clean up the use functions. A few aliases are helping out.

7 July 2018

8 August 2018

9 September 2018

10 October 2018

11 November 2018

12 December 2018

13 references

Author: Marty McGowan

Created: 2019-05-26 Sun 20:33
