Marty's 2015 Software Diary
Table of Contents
1 2015
1.1 January 2015
1.1.1 Saturday, 31st rdb callgraph
Somewhere in my reading (Jonathon Green, Jan-Feb '15 Atlantic) I was inspired to create an index, a dictionary of my shell functions. Here's the idea as it stands.
By Chapters:
- Major themes – collect many of the short functions, the one- and two-liners, by principal command: e.g. sed, grep, and other short utilities, some of which might be awk. There is, however, a major distinction: a good deal of awk is as likely to be a program in its own right.
- Application libraries – lately, I've pushed many functions back to their originating directory, into a local lib. The new dotlib function shows off the evolving methodology:
dotlib () {
    for f in $(shell_onlyfiles *lib | grep -v fixlib) $(shell_onlyfiles fixlib); do
        source $f
    done
}
- Command line only – some (many?) functions have no other use than the command line. These seem to be of two varieties: "I just invented this, and I know it's not an application function", and something devoted to the function practice. Though this latter has an evolving funlib collection, it doesn't seem to span many dimensions of the problem; e.g. the dotlib above is such an example.
- RDB – I've invested quite a bit of time this month in pushing my use of /rdb, and its promise of "the shell is the query language". At the moment, my regular use is for our expense record, the investment tracking, and my just-started Book Club for the MIT Club of Princeton. Though this latter is modest (dozen-record tables) to start, I thought a ground-up approach would be helpful to establish a practice.
- A practice of collecting a set of functions into an application, and the ability to produce a callgraph of the application. Here's the callgraph document. Here's a function app_fun in the applib to do just that, from which I've removed some trace functions for ease of understanding. (A sample run follows this list.)
app_fun () {
    set -- $(for_nomar app_uses $* | tail -1)
    shift
    echo $* $(app_trace $*) | wpl | sort -u
}
for_nomar () {
    while true; do
        trace_call $# $*
        num=$#
        set -- $1 $($*)
        echo $*
        [[ $num -eq $# ]] && break
        read a
    done
}
- A thorough index to all the functions – challenged by some sense of use, and the ability to routinely index and cull the list.
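For illustration, a hedged sample of app_fun at work – the output names here are hypothetical, assuming a backup application whose top function is backup:

$ app_fun backup
backup
backup_here
needir
newest
trace_call

for_nomar iterates app_uses to a fixed point: each pass appends the functions used by the current list, stopping when a pass adds nothing new; app_fun then merges in the trace and reports one name per line, sorted and unique.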
A lesson learned in this intervening time, especially with the fresh start on the Book Club, is how I need to be more careful in my function writing. I remember in the early days, when it was cards through the window, how carefully we desk-checked our coding. I need to recapture some of that discipline in this too-easy-to-fix-mistakes era.
1.2 February 2015
1.2.1 Sunday, 8th
I just learned the easiest way to gather and pair off functions built either for the command line, or those used in other functions (which may also be command line functions). I realize I've a function here to build apps (almost). Let's see if I've talked about app_fun yet. …
Stack PUSH !
This function, I just hammered out for this occasion:

findswd_ymd () {
    cat software.mkd | awk -v find=${1:-app_fun} '
        $1 ~ /\*/ && $2 ~ /day/ { wkday = $2; day = $3 }
        $1 ~ /^##$/             { year = $2 }
        $1 ~ /^###/             { month = $2 }
        index($0,find)          { print wkday, month, day, ", " year }
    '
}

results in:

Tuesday, November 18th , 2014
Saturday, January 31st , 2015
Sunday, February 8th , 2015
Stack POP !
The one liability with app_fun is it doesn't yet treat functions passed as arguments. I've some thoughts on that problem.
In any case, back to today's challenge. It arose while splitting command-line functions from library functions in my 'expn' folder. I'm patting myself on the back since I've finally adopted – thanks, Chip(azzo) – the idea of the flow diagram for stages of data munging. I'd studied this decades ago, back when Chip was but a decade old himself, and have come to reclaim the territory. I do need to share the 'make' paradigm with him before we move on, but …
So the "lib" problem: separate commands from library functions.
- a command has no visible functions using it
- a library function has other functions using it.
The two problems with this simple view:
- how do you record whether or not a command is ever used?, and
- is the library function used only by a chain of function calls which is never invoked, because the top command is itself never used?
Fortunately, I've built a callgraph to display the command hierarchy. Here's its document.
1.2.2 Saturday, January 31st
So, the procedure today to split commands from library functions:
- source the two libraries.
- produce a unique list of functions: `functions cmdlib proglib | tee .y`
- find the functions (if any) which use these:

foreach do_sfuse $(< .y) | tee .bothlib

- split the references into those used, and not used, by other functions:

awk 'NF == 1' .bothlib | tee .cmdlib
awk 'NF > 1 { print $1 }' .bothlib | tee .funlib

- rebuild the libraries:

fbdy $(< .cmdlib) > .clib; fbdy $(< .funlib) > .flib

- and copy these over their respective originals.
Later clean-up to follow.
1.2.3 Thursday, 12th
Today, I rewrote fbdy, sacrificing the one-line function on behalf of the simpler, canonical form produced by `typeset -f`.
function fbdbash {
    typeset -f ${*:-fbdbash fbdksh fbdy isKsh}
}
function fbdksh {
    fbdbash $* | awk '
        NF == 2 && $2 ~ /\(\)/ { printf "function %s\n", $1; next }
        { print }
    '
}
function fbdy { fbdksh $*; }
function isKsh { one=1; [ one -eq 1 ] 2> /dev/null; }
Attempts to get cute with `typeset -f .. fbd{bash,ksh,y}` were more trouble than they were worth. I've further simplified. Now the canonical output isn't `typeset -f` but the ksh standard `function name`, since bash is kind enough to accept that definition as well. The above now reflects the default output of ksh. And this was the most recent version:
former_fbdy () { fbd$(isKsh && echo ksh || echo bash) $* }
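To see the effect, a small demonstration (the sample function is my own illustration here):

$ sample () { echo hi; }
$ fbdy sample
function sample
{
    echo hi
}

fbdksh only rewrites the leading name () line; the body comes through typeset -f untouched.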
1.2.4 Friday, 13th
An idea just dawned: score functions on a value system of modernity. For the moment, the un-weighted attributes (a scoring sketch follows the list):
- use of the `set -- …` idiom
- presence of `trace_call`
- object_method format, membership in an ..
- no file side-effects
- cmdlib presence for commands, i.e. no visible usage other than command line
- locality of use – this one's hardest to get at.
- date of origin registered
- ksh format – low value since it's now an automatic
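Here's a minimal sketch of such a scorer – fun_score is hypothetical, not (yet) in any library – counting just the three attributes visible in a function body as delivered by fbdy:

fun_score () {
    fbdy $1 | awk '
        /set --/     { set_idiom = 1 }   # the set -- idiom
        /trace_call/ { traced    = 1 }   # presence of trace_call
        /^function / { ksh_form  = 1 }   # ksh format
        END { print set_idiom + traced + ksh_form }
    '
}

The harder attributes – locality of use, file side-effects – need the callgraph, not a line scan.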
1.2.5 Tuesday, 17th
For the last few days, I've revisited my wrap on markdown.
I'm solving lots of problems, like how to reference across the top folders.
A picture is in order. This is the source tree, where content is created:
HOME
    Dropbox
        commonplace
        Family
    git
so, for just the documents – the HTML, with locally referenced links – this is the document tree:
HOME
    Dropbox
        html
            commonplace
            Family
            "BITMELD"
            … individual files
The major difference is that the html documents, and any local dependencies, are copied (or **ln**ked) from their source location to the destination.
E.g. a file created in, say, `HOME/Dropbox/commonplace/subject/some.mkd`, using the md function (the [markapp][] library for the code), produces the local HTML file, where it may be tested for local links, anticipating its ultimate place in the `HOME/Dropbox/html/commonplace/subject/some.html` file.
So, "commonplace" and "Family" have similar properties here. The "git" tree is a little different. First, its location in the source. Both [dropbox][] and [git][] are file-sharing hubs. In my case, the git hub, is for code development. I've been a proponent (discoverer, with Anderson and Brumm) of the bitmeld structure or convention. Since any git component, for example, the [markapp][], will have **b**in, **l**ib, **m*an, and **d**oc structure, their document components: manual pages, user guides, code (from bin) and library elements will be stored in the BITMELD portion of the document tree.
Again, e.g. scripted code from `HOME/git/mark/bin/*lib` will be shown in its document location as `HOME/Dropbox/html/bin/*lib`.
1.2.6 Monday, 23rd
I'm closing in on a function template. An example is pictured below. The function will:
- trace its execution: `trace_call $*`
- report missing arguments: `report_needargs N $*`
- report missing file arguments: `report_nonfile $N`
- report missing function arguments: `report_notfunction $N`
- condition its input arguments as an object: `set -- …`
And this sample shows the preferred way to do this:
md_fmDoc () {
    report_needargs 1 $* && return 1
    set -- $1.txt ${1}_doc
    report_notfunction $2 && return 1
    trace_call $*
    $2 > $1
    md $1
}
Since the `report_ ..` functions return TRUE when the condition is not satisfied (e.g. "need 1 argument") and announce the calling function, the `trace_call` may be postponed until the pre-conditions have been satisfied.
The argument conditioning is done by the `set -- ..` idiom. The positional parameters are set from the input argument(s). Since, in this example, a function {stem}_doc produces a file to [markdown][], the first argument is the stem of the function name. And the `.txt` file is merely the holding place for the text to markdown. (At this time, the md function does not read the standard input.)
Note, in this example, no file is explicitly required. Presumably the function is available from a function library which has been `source`d, or read into the user's shell.
For the record, here is the handful of supporting functions:
report_needargs () {
    n=$1; shift
    [[ $# -ge $n ]] && return 1
    usage need at least $n arg[s]
}
report_nonfile () {
    trace_call $# $*
    [[ -f $1 ]] && return 1
    usage $1 is NOT a file
    return
}
report_notfunction () {
    isfunction $1 && return 1
    usage $1 should be a function
}
usage () { comment USAGE $(myname 2) $*; }
myname () { echo ${FUNCNAME[${1:-1}]}; }
where I note two things:
- we needn't trace the execution of a reporting function, and
- the normal return isn't necessary at the end of a function, since the thing we are interested in is the failure, e.g. `return 1` in the case of the false assertion. i.e. if the argument is a file, return FALSE. The user tests for the unwanted condition; the report_ function announces it through `usage` and returns TRUE, so the calling function may return immediately, the report_ function having announced the failed condition. A bit convoluted, but look back at the user code to appreciate the clarity and concision.
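In skeleton form, the calling pattern looks like this (my_command is illustrative only):

my_command () {
    report_nonfile $1 && return 1   # on failure: usage announced, TRUE returned
    trace_call $*                   # pre-conditions satisfied, now trace
    # ... the work
}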
1.2.7 Wednesday, 25th softwarereview
Today's story – adding a spell-checker for emacs – has been recorded in my softwarereview (now defunct, copies on disc). It was a success, thanks to the references recorded there.
1.2.8 Friday, 27th
Not only is the spell-checker a success – I just finished checking my personal diary – but I'm now using [Aquamacs][] as my emacs editor of choice. Though it's worth reporting the installation was accompanied by a few fits and starts, it appears to be working fine now.
1.3 March 2015
1.3.1 Sunday, 22nd
But Aquamacs has trouble starting on a busy system. I've got a pending order on iStore for a new 27" iMac. Time to pull the trigger.
Today's note is about a function api collector: fun_api. Here's a sample:
$ (fsfg report_; fbdy fun_api) 2>/dev/null | quietly fun_api
report_needargs     [[ $# -ge $n ]] && return 1;
report_needcount    [[ $2 -ge $1 ]] && return 1;
report_nonaccount   isaccount $1 && return 1;
report_nonfile      [[ -f $1 ]] && return 1;
report_notTabsEQ    report_nonfile $2 && return 1;
report_notTabsEQ    N file;
report_notTabsEQ    [[ -s .tabseq ]] || return 1;
report_notfunction  isfunction $1 && return 1;
fun_api             stdin has function bodies;
$
and the function itself:
function fun_api {
    trace_call $*
    quietly comment API stdin has function bodies
    awk '
        function ofapi() { printf "%-14s\t%s\n", name, $0 }
        $1 ~ /^function$/ { name = $2; next }
        /return [1-9]/ { n = sub(/^[ ]*/,""); ofapi(); }
        $1 ~ /quietly/ && $2 ~ /comment/ && $3 ~ /API/ { n = sub(/.*API/,""); ofapi(); }
    '
}
with these features:
- a leading `function name` identifies a function, saving the name
- a `quietly comment API text` line is reported for the name
- non-zero returns are reported for the name
The example above is for fun_api itself and the class of **report_**{need…} functions, which are themselves used as assertions, or usage checks against a function's arguments.
1.4 April 2015
1.4.1 Saturday, 4th
In which I've begun to work with – "master" being too strong a word – the use of Org Mode. I've constructed my master worklist in
~/Dropbox/dbx.org
The key feature is the ability to create links to files and web pages.
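For instance, Org's bracket-link syntax (the targets here are my own illustration):

[[file:~/Dropbox/dbx.org][the master worklist]]
[[http://orgmode.org][the Org Mode site]]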
1.4.2 Tuesday, 7th
1.4.3 Thursday, 9th
Indeed. The first thing I think of is "how much effort am I saving by using org mode", not having to do maintenance on markapp. I'll know it's history when:
- I've converted all markapp processes to Org Mode HTML export, and
- I pull markapp from github.
A quick peek at the resulting html here says I've got a few problems to solve:
1.5 May 2015
1.5.1 Friday, 1st
1.5.2 Sunday, 31st mobileorg dropbox sync
You'll have to consult the other references to see where the time has gone. But today I learned a few things. Maybe two, three. Enumerating should help. It all began with an attempt to make sense of Mobile Org Mode.
Solution: https://github.com/MobileOrg/mobileorg/wiki/Troubleshooting
In any case, those instructions both identify and solve the problem I was having. While it's still not working, that's because I haven't really exercised the interface. It looks like some more learning is involved.
One problem was of my own creation. In order to sync the Org files thru Dropbox, this command got things started:
find commonplace org Family -name '*.org' | egrep -v '\.(bak|ver)' | cpio -pdluvm Apps/MobileOrg
Somewhere along the way in trying to get this to work I clobbered all the org files. The good news: Time Machine to the rescue. Using this a.m.'s copies I'm quite sure all is well. But that raises another question about the link to Apps/MobileOrg. Which I guess is where the learning will come in as I attempt to use the files or data on the Mobile App.
The other problem du jour was clearing up, or trying to reconcile, my two flavors of Emacs: Aquamacs and GNU Emacs. With my new screen real estate on the new iMac, I'm taking advantage of expanded font size. The problem with Aqua is that, following the 'set mouse-…' which pops up a font-selection window, the status buffer and the directory-browsing windows assume the requested font, but the text files don't follow suit. Again, a lesson for the emacs study I'll give myself. So, there's a vote for GNU. Except, GNU wasn't until just a bit ago able to run the spell-checker. With the same ~/.emacs file for both flavors, Aqua was ready and able with the few ispell options I needed. But for some reason GNU was complaining about ispell not found.
- FOTD date expr
I won't wade you thru the details, but after probably 45 minutes of hunting I found the fix for my ~/.emacs. A variable needs the full path to the ispell program, not the elisp library. An incidental example made it clear. So, the resolution is "Use GNU emacs", where the font-scaling works consistently, and now the spell-checker is ready for my mistakes. Let's try it out on today's contribution.
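For the record, the fix was of this shape – from memory, the variable is ispell-program-name, and the path is whatever `which ispell` reports on your machine:

(setq ispell-program-name "/usr/local/bin/ispell")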
Oh yes, today's FOTD: nMonthsOld, with examples:
function oldReports {
    column symbol report < rawOne551.rdb | row "report < $(nMonthsOld ${1:-10})"
}
function tenMonthsOld { nMonthsOld 10; }
function nMonthsOld {
    report_needcount 1 $# "nMonths 1..12" && return 1
    set -- $(date "+%Y %m %d") $1
    trace_call $*
    quietly comment use result for YYYYmmDD Less, Greater, but NOT Equal comparison
    set -- $1 $(expr $2 - $4) $3
    while [[ $2 -lt 1 ]];  do set -- $(expr $1 - 1) $(expr $2 + 12) $3; done
    while [[ $2 -gt 12 ]]; do set -- $(expr $1 + 1) $(expr $2 - 12) $3; done
    expr 10000 \* $1 + 100 \* $2 + $3
}
In this case oldReports lists those symbols (stocks) which haven't had an annual report in 10 months. And with a little tweaking, it was easy to allow nMonthsOld to accept numbers larger than 12, and negative as well; e.g. a negative twenty-five (-25) argument is two years and one month in the future. The caution about the use of the comparison is really just a warning for usage on days, in longer months, where the resulting month has no such day number. But this is okay; just don't use the result in a test for equality.
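A worked example, taking today, May 31st, as the starting date:

$ nMonthsOld 10       # 2015 05 31, less 10 months
20140731
$ nMonthsOld -25      # 25 months forward: 2017, month 6
20170631

The second result names no real date – June has 30 days – which is exactly the caution above: 20170631 still orders correctly against any real YYYYmmDD value, so use it for less/greater tests only, never equality.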
1.6 June 2015
1.6.1 Sunday, 7th remember
It's easy to see why emacs is both loved and ignored. No one hates emacs. If you know it, you love it. Today's discovery is remember. By the time you read this, the top menu bar on this page will likely be down to three. The "notes, records" should be the "remember" builtin.
Time to do some more reading. The answer is "capture", called "org-capture". It's an Org overlay.
Before getting carried away, it's worth pondering: Why bother? The short and easy answer:
As I learn more about emacs' features, the extensions, and the ability to customize, the vision and architecture tell me my time investment is worth it.
(Simplify this thought.)
1.7 July 2015
1.7.1 Saturday, July 4th tutorial
A great deal of growth on the OrgMode front. The Tutorial, Tasks shows what a re-run of the tutorial can do.
1.7.2 Monday, 5th comment
Friday, I think it was, I got the lesson for a decade.
In Shell Functions I'd published the notion that "Semantic Comments" were necessary to defend against the shell builtin declare -f's propensity to eliminate the # sharp comment. Not so; the easiest defense is to use the ancient : colon comment. It's now time to correct that mistake in the book and in the practice.
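A quick demonstration (the sample function is my own illustration):

$ sample () {
    # a sharp comment: stripped when the shell re-prints the function
    : a colon comment is a real command, so it survives
    echo hello
}
$ declare -f sample
sample ()
{
    : a colon comment is a real command, so it survives;
    echo hello
}

The colon builtin does nothing but expand its arguments, so the "comment" is retained in the function body.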
1.7.3 Tuesday, 7th
Today, I've established the method going forward. The invention of auxlib, in support of The Only Backup You'll Ever Need, sets the method. Imagine there is an application, which might be a shell script – one of the ugliest words in the lexicon. In this case, it's backup. But backup is no more than a function, supported by other functions. Some of the functions are peculiar to the backup application; some are not. The totally generic functions in an application now go in auxlib. Most of the particular functions go in the application library.
So, the next function library to sweep into the fold is the applib, which collects the user-support functions from a list of top-level, or user, command-line functions.
The thoughts running thru my mind are two, which are now TODO's.
But first, on to applib! A "comm" of the functions in the two libraries produces a three-column listing:
app_fun applib | tee .applib
comm .applib .auxlib
where column 3 is the list of functions used by both applib and already logged in auxlib. Good. Column 1 is the list of functions in applib not yet in auxlib. This list should be examined for likely candidates to include in auxlib, i.e. functions sufficiently generic to be used by any other libraries or applications. Column 2 lists the functions already in auxlib not used, in this case, by applib. A function appearing here too often needs to be tested for its generic potential.
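As a reminder of comm's column convention, a toy run:

$ printf '%s\n' a b c > .applib; printf '%s\n' b c d > .auxlib
$ comm .applib .auxlib
a
		b
		c
	d

Column 1 (here a) is unique to the first file, column 2 (d, one tab over) unique to the second, and column 3 (b and c, two tabs) common to both; comm requires both inputs sorted, which app_fun's sort -u guarantees.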
Comparing applib with auxlib yields this list for "column 1":
alltype app_fun app_trace app_uses fbdbash fbdksh fbdy for_nomar fun_candidates fun_clean fun_names fun_uses tpl trace_state wpl
of which only "tpl" and "wpl" (tokens and words) per line, appear to be sufficiently generic to promote to auxlib. Notable app_trace, and trace_state, which look generic on the surface, are peculiar to the business of identifying useful hierachies in the user functions.
And, to peek ahead, I know fbdy and its user functions represent another library collection. So, we can anticipate a library object-model, or hierarchy, which starts to look like this:
- backup
- aux
- app # APPlication lib
- fun # FUNction lib
- aux
For me, this is the taxonomy I've been looking for, for at least six years.
1.7.4 Wednesday, 8th
The colon comment is really a shell builtin for *
1.7.5 Thursday, 9th
Today's function of the day: higher_version.
It only needs to be run every few days, and from a "higher" directory, or when the number of updated files is considerable. It relies on frequent backup in the various directories.
What's "higher"?
It could be one's HOME directory, or HOME/src if you have a source tree of projects. Or HOME/src/projectA if the project itself has many nodes and the development cycle warrants. In any case, here's the code of higher_version
higher_version () {
    local version=.ver/$(ymd_hms)
    echo 'needir () { [[ -d $1 ]] || mkdir -p $1; echo $1; }'
    files $* | grep /\.bak | grep -v .bak/.bak | sed '
        s/\/\.bak\// /
        s/^\.\///
    ' |
    while read dir file; do
        local ver=$version/$dir
        echo "ln $dir/.bak/$file \$(needir $ver)/$file"
    done
}
The local version looks like .ver/2015_0709_142558. The function finds all files in the current directory which are the primary backup (just one .bak in the path) and creates a link from the primary backup to a file in the .ver/…/path directory whose path is identical to the primary backup.
Since the function produces the necessary shell command lines, you need to execute the output:
higher_version | sh -x
to produce the backup.
And, since the files command takes find arguments, you can produce a list of the planned work. It should be possible to use the modification time of the latest .ver directory to produce a version of just the updated files and not the whole tree. Something like:
higher_version -mtime -5
# or
higher_version -newer $(ls -d .ver | tail -1)   # ?? untested
1.7.6 Friday, 10th band meyer
Bill Anderson shared Bertrand Meyer's technology blog, this issue on "Theory of Programs".
1.7.7 Saturday, 18th command rdb
The new stuff of late, still a WiP, is what I call the command format for a function. The /RDB has two formats: the (standard) table, where the first line has the tab-separated field names, the second line has corresponding dashes to offset the field names, and each subsequent line is the tab-separated data. In the list format, records are blank-line separated, and each data cell is on a Name, Tab, Value line.
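Side by side, with a hypothetical two-field, two-record table (fields are tab-separated):

the table format:

name	nargs
----	-----
backup	1
restore	2

and the same data in the list format:

name	backup
nargs	1

name	restore
nargs	2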
Here is an empty record in a command.txt file:
name
inputs
nargs
command
mode
output
dispose
Where the name is the name of a function, which might be derived from the command name, an already existing command, a function, or a file. The inputs is a list of input file names, which may be a file-matching expression. The nargs is a minimum number of arguments, defaulting to 1. The mode is a string, defaulting to always, alternatively newest or append; "newest" only executes the command when the single output file is not newer than any of the inputs, and append (not yet functioning) is suitable for logging, or appending to the output. A dispose command may be used to post-process the output, such as backup, or archive.
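A hypothetical filled-in record, in the list format – the values are invented for illustration, and sorttable is a stand-in for whatever builds the table:

name	expenses
inputs	2015_*.csv
nargs	1
command	cancsv | sorttable
mode	newest
output	expenses.rdb
dispose	backup expenses.rdb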
As an exploration, the best use of this command format is to wrap existing functions with the file environment. This frees the internal command from any need to specify inputs and output files. There is nothing in the rule book which precludes the command from having file side-effects. But the newest paradigm restricts the update requirement to the single output file.
A command file, cmd2fun, invoked from command_libis, converts each record in command.txt into a suitable function, storing the result in a file argument, defaulting to mfgcmdlib. Also, the command_table function displays the list-format command.txt in a table format for easy viewing. I use a jm function to post-process the output: "justify, pipe to more".
I think of this part of the process as
wrapping the command bubbles with the file inputs and output,
which serves to separate the process bubbles from the data flow.
function command_libis {
    set -- command.txt ${1:-mfgcmdlib}
    report_notfile $1 && return 1
    trace_call $*
    newest $2 $1 || { cmd2fun < $1 | onlyChanged $2; }
    . $2
    functions $2
}
function command_table {
    set -- command.txt
    trace_call $*
    report_notfile $1 && { command_record | tee -a $1; }
    column < $1 | awk NF
}
Here is the key to having the code in an Org file saved as the real thing:
#+BEGIN_SRC sh :tangle filenameToSave
... the code
#+END_SRC
extracted by executing this command:
C-c C-v t "org-babel-tangle"
extracts the code block to the :tangle file. Here are the instructions for Extracting-source-code
1.7.8 Sunday, 19th
At the risk of sounding too carried away, I can't say enough about the productivity I'm feeling from OrgMode. In a just-concluded re-organizing of my Report Library (probably to be re-named Assertion), my next clean-up operation will be on my implementation of the public-domain Unix Relational DataBase management system. I got my start from Rod Manis, one of the authors of the founding text. The implementation I'll expose is based on the Bash Shell. When I got a peek at the pre-published copy of Manis' book, it was a Ksh implementation, largely because ksh had already been ported to the PCs.
Oh, and why move to RDB? Easy, since a good part of the methods in the Report library depend on an RDB implementation. And if you are "building from the base", it's time to recognize and share my dependency, and hopefully, attract other users to that remarkable tool.
And how will I do it?
Move to my Commonplace Book and insert an Org file for my RDB library. Check in, that's where I'll be.
1.7.9 Monday, 20th
First thought on the RDB repair: I've left too much redundancy in my conversion from command files to functions. The standard I'm adopting is
- prefer implementation as a function rather than a command
- awk programs more than a few lines ("few" is TBD) are either command files or awk source files: "awk -f …", where the clear choice of command file over awk file is when the RDB connection is strong; e.g. see report2fun (a sketch follows this list).
- and then prefer a command where the function is simply too long. I'll have to find the best example. This implies filtering all my functions using awk, doing a triage for commands, and harvesting the remaining functions by length. I know, who ever said line-count is a legitimate metric?
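A sketch of the "awk -f" choice – the names here are hypothetical, not the real report2fun:

some_report () {
    : the awk program lives in its own source file, next to the library
    awk -f somelib/some_report.awk ${*:--}
}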
1.7.10 Saturday, 25th
Lots of progress on the Commonplace Book, the center of software action.
1.7.11 Sunday, 26th
Here are two working files, funGT9lines.txt and funDeprecate.txt which at this writing are the functions with more than nine lines of working text from selected libraries, and then culled from that list, the ones whose use should be deprecated, if not eliminated.
Here is the list of libraries considered:
- auxlib
- backuplib
- cmdlib
- funlib
- programlib
- funGT9lines.txt
abk backup_here cancsv comm f2file ffwo flcomm fun_api fun_clean fun_tordb fun_wc funtype funtype_1 help_msg_help help_msg_test higher_version html_update_help is_leap_year my_callgraph newest nf ppt prTsEntry qsudo reY script_path stack tosetlib trace_summary twocolcmp
- funDeprecate.txt
ct dotty fixlib_help fnArrayIndex fnArrayLast gff integA integ_doc programlib_qrf qrf_insert todo programlib_ck whch getpara
- include – pre-processor
The include pre-processor is now in the combine lib:
#!/bin/bash
# http://mcgowans.org/pubs/marty3/commonplace/software/swdairy.html
cat ${*:--} | awk '
    BEGIN {
        stderr = "/dev/stderr"
        logfil = "include.not"
        errfmt = "TRACE include.%s(%s)\n"
    }
    function this_line (res) {
        printf "%3d %d %s\n", NF, res, $0 > stderr
        return res
    }
    function with_file (file) {
        printf errfmt, "with_file", file > stderr
        if (this_line( (getline line < file) == -1 )) {
            printf "needfile %s\n", file > logfil
            print
        } else {
            # side effect, "file" is opened, so,
            # close it to be able to read it:
            close(file)
            printf errfmt, "about to INCLUDE", file > stderr
            system( "include " file)
        }
        # and since we may have just read it, close it again
        # if we opened it, or for the first time if we did not.
        # why? an app may include a file more than once.
        close(file)
    }
    function may_include() {
        # printf errfmt, "may_include" "" > stderr
        # HTML Comment, replace with 7/26/15, mcgowan@alum.mit.edu
        # return this_line($1 == "<!--" && $2=="include" && $4 == "-->")
        # iawk-style include, not bound simply awk programs
        return this_line($1 == "@include")
    }
    {
        if (may_include(file)) {
            # with_file( $3 ), ex 7/26/15
            # e.g. @include filename
            with_file( $2 )
            next
        }
        print
    }
'
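Usage, then, looks like this (the file names are invented): given a book.txt containing

heading text ...
@include chapter1.txt
trailing text ...

then

include book.txt > book.out

replaces the @include line with the contents of chapter1.txt; a missing file is logged to include.not and the line passes through unchanged, and included files are themselves scanned – the recursion is the system("include " file) call.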
1.7.12 Tuesday, 28th
After yesterday's include process, I'm seeing a place to use it in my function practice. The big challenge to this point is how to easily turn a function library into a stand-alone application. The need is to round up the dependent library functions. Very few libraries can have no other dependencies. So far, just my auxlib is a stand-alone proposition. The other challenge is when I demonstrate a top-library function such as backup. There are three functions for the command-line or user script. These need to be exposed. However, there may be functions in the user library whose exposition is too much for the reader, until they load a complete application. So, in backup's case, to create the application, you need the top functions, the rest of the supporting backup library, and the remainder of the dependent library, in this case just auxlib.
Then there is the case of things in the library which needn't be in the application. Typically, I employ an "init" function, e.g. backup_init, as the sole executable statement in backuplib. Its job is to source the dependent library, and any other initialization for both the first-time user and any other run-time considerations. This suggests the library initialization may have two components: the always-necessary application initialization, and the purely library initialization.
Since the application contains all the needed functions, there may be a piece left over for run-time initialization. My first experiment will be extending this to a seamless manufacture of the backupapp; a sketch follows the requirements list.
Oh yes, the requirements:
- single source – no copied code, other than
- manufactured by shell commands, and functions, which
- are available to the developer, and which
- needn't be delivered to a user.
- with written instructions for both a developer and user.
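One way the pieces could fit, riding on yesterday's include pre-processor – backupapp.src and this recipe are a sketch, not the finished method:

$ cat backupapp.src
@include auxlib
@include backuplib
backup_init
$ include backupapp.src > backupapp

This honors the single-source requirement – the libraries are pulled in by reference – the manufacture is a shell command available to the developer, and only backupapp itself need be delivered to a user.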
1.8 August 2015
1.8.1 Friday, 7th
Lots happening lately.
- time to cut down on open shell libraries.
- develop a practice for building thru OrgMode

The implication is that only one shell library is the dumping ground for command-line created functions. The remainder are managed out of an OrgMode file.
To summarize what's happening: lib_crunch is the FOTD; the implication, then, is that the "dumping ground" is the only library which may be crunched.
lib_crunch () {
    report_notfile ${1:-MissingArgument} && return 1
    set -- $1 .l
    backup_lib $1
    . $1
    ( ff $1; fun_starter $1 ) > $2
    mv $2 $1
    backup_lib $1
}
Here, .l is the alternate (temporary) library. While the first argument is any file, we assume it's the "dumping ground". The library is first "backed up" in its current condition. It's then "source"d, i.e. read into the shell, where the current function definitions are loaded. The parenthesis syntax ( … ) executes the commands in a sub-shell. In this case ff produces the function body of each function in the library. fun_starter is the name of the function which supplies the {libname}_init call for the library.
In this case ff takes advantage of the fact that the dbsave function appends functions to the end of the library; therefore, the last definition appended to the library is presumed to be the "latest and greatest". Remind me to not repeat that phrase.
So, beyond the "dumping ground" all other libraries are managed from their OrgMode compoents. That will happen in two phases: first, the simple copying of the individual libraries to an appropriate OrgMode, and then factor the sub-components into their pieces, within single or separate OrbModes.
1.8.2 Saturday, 8th
Actually, two dumping grounds are needed. Now they travel in the guise of cmdlib and bitbucketlib. You can guess each library's purpose.
So, I'll modify dbsave into cmdsave and bucketsave. As I dump what I regard as obsolescent functions into the bitbucket, and discover I may need one elsewhere, the f2file function easily unpacks a library into its constituent functions.
function f2file {
    report_notfile ${1:-MissingFirstArgument} && return 1
    set -- $1 .$(basename $1)
    trace_call $*
    rm -fr $2
    mkdir $2
    cat $1 | awk -v dir=$2/ '
        BEGIN {
            funfile = "/dev/null"
            stderr = ".f2file.err"
            fmta = "NR, n, f, b, t: "
            fmtb = "NR, 1, 2, t, funfile: "
            fmtc = "%12s %4d %s\n"
        }
        {
            f = $1 ~ /^function$/
            n = $2 ~ /^[a-zA-Z0-9_-]*$/
            b = $2 ~ /^\(\)$/
            r = ((b)? $1: f? $2 : r)
            t = dir r
            print fmta, NR, n, f, b, t > stderr
        }
        b { close(funfile); funfile = dir $1 }
        f && n {
            close(funfile)
            funfile = t
            print fmtb, NR, $1, $2, t, funfile > stderr
        }
        {
            printf fmtc, funfile, NR, $0 > ".f2file.out"
            print > funfile
        }
    '
}
This function is a good candidate for what I'll call "awk-brevity" – putting the awk code in a separate file, since it's more than a line-at-a-time process.
1.9 September 2015
1.9.1 Thursday, 23rd
bin.$ time for i in {1..1000}; do cmd_repeated .bak/ 12; done > /dev/null

real    0m0.889s
user    0m0.331s
sys     0m0.527s

bin.$ time for i in {1..1000}; do pro_repeated .bak/ 12; done > /dev/null

real    0m0.191s
user    0m0.182s
sys     0m0.009s

bin.$ cat .x
cmd_repeated () {
    printf "$1"'%.s' $(eval "echo {1.."$(($2))"}")
}
pro_repeated () {
    count=${2:-2}
    result=
    while [ $count -gt 0 ]; do
        result=$result$1
        let count-=1
    done
    echo $result
}
bin.$
answer, clearer and faster: programlib.repeated.
1.10 October 2015
1.10.1 Sunday, 4th
Today's FOTD: fun_subfunctions
fun_subfunctions () {
    set |
    egrep "^(${1}_[^ ]*|copyright_$1|_local_.*_$1) ()" |
    sed 's/...$//'
}
I've recently decided that rather than lib copyright, it's more proper to copyright_lib. So, functions belonging to the family are found in the active functions delivered by set, and therefore filtered by leading:
- lib_,
- copyright_lib, and
- _local_{name}_lib
where "lib" is the handle of the library, e.g. "aux" for auxlib
1.10.2 Sunday, 11th
Today's FOTD: replacement
replacement () { eval "$3 () { set $1 \$@; comment USE $1; \$@; }"; }

replacement stdevAvg FOR sumsqAvg

stdevAvg () { comment WAS sumsqAvg ...
I've been looking for a cheap way to automate the replacement of one function by another. In this case, I'd written a function sumsqAvg, and by the time I'd implemented it, the return values were the standard deviation and the average of a single column of numbers. Realizing it is good in general, and not just where it was first used, it seemed better to say what it is really returning. So, today's stdevAvg provides the functionality. There are, however, two dozen likely instances where I'm still using sumsqAvg for that functionality. The challenge: rename the existing function to the new and most proper name while retaining the functionality through the call to the former.
Enter replacement.

I thought it useful to design with a throw-away 2nd argument, so to use replacement, see the example above, where it almost reads like a sentence. The only thing missing is the verb "is" or preposition "by"; e.g. this last might read:

replacement BY stdevAvg FOR sumsqAvg

So, what does replacement do? It defines a function, its third argument, sumsqAvg, whose functionality is replaced by stdevAvg.
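To check the eval, here is bash's re-print of the manufactured function:

$ replacement stdevAvg FOR sumsqAvg
$ typeset -f sumsqAvg
sumsqAvg ()
{
    set stdevAvg $@;
    comment USE stdevAvg;
    $@
}

A call to sumsqAvg now announces the USE of stdevAvg, then dispatches to it with the original arguments.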
1.11 November 2015
1.11.1 Saturday, 7th family dotty local_data
I made some notes for a table family of functions in my moleskin diary on 10/23/15, on my way to Thad Usowicz' funeral in San Francisco on the following day.
The main_function defines a function family. The rdb_table function assumes and adds to its functionality. Let's see if the latter is upward compatible before too much time passes.
See the entry for Monday, April 25th 2016: I've replaced main_function with om_iam, which, in yoda-speak, says Object Method, I Am! In this entry, all the references to the "family" refer to the object.
In any case, the family:
A family is a function collection whose member names follow the pattern:

family_member

where family is common, and the member is unique among the family. Either main_function or rdb_table, with one argument:

main_function family

defines the family and supplies the default functions family and family_help.

In both cases, family_help is the default behavior of family. Further family functions, defined as

family_this () { ... }; family_that () { ... }; family_ ...

may be invoked from the command line as:

family this arg ...

but, when used in scripts or other functions, should always include the underscore, e.g.:

family_this arg ...

I've allowed this behavior just for typing on the command line. The space is always easier to find than the underscore.
I've included the dotty and local_data functions, where a user-supplied local_data can supply the table location.
main_function () {
    h=${1}_help
    isfunction $h || eval "$h () { echo $1 functions:; fun_functions $1 | sed 's/^/ /'; }"
    eval "$1 () { f=$1_\${1}; isfunction \$f || { ${1}_help; return; }; shift; \$f \$*; }"
}
rdb_table () {
    : tableName
    : ~ default, concatenate the table
    : ~ column s/a default
    : ~ echo display the table name
    : ~ fields display the field names
    : ~ jm after the same command
    : ~ {fieldname} ... project the name ... columns
    : ~ {localfunction} if r_name is a function, or
    : ~ {function} ... any other function
    :
    r=$(myname 2)
    f=$(local_data $r.rdb)
    report_notfile $f && return 1
    trace_call r $r, f$f, $*
    set -x
    case $#.$1 in
        0.*)             cat $f ;;
        1.column)        column < $f ;;
        1.echo | 1.name) echo $f ;;
        1.fields)        rdbfields $f ;;
        1.help)          sfg ${r}_ ;;
        1.jm)            cat $f | column | justify | more ;;
        *.*)
            fields=$(rdbfields $f)
            if [ "${fields/$1}" != "$fields" ]; then
                column $* < $f
            else
                local fun=${r}_$1
                set +x
                isfunction $fun && { shift; $fun $*; set +x; return; }
                report_notfunction $1 && return 1
                local fun=$1
                shift
                $fun $f $*
            fi ;;
    esac
    set +x
}
local_data () { trace_call $*; dirname $(dotty bin); }
dotty () {
    set ${1:-bin}
    trace_call $*
    local d=.
    while true; do
        pushd $d > /dev/null
        rooted && { popd; return; }
        ignore popd
        [[ -d $d/$1 ]] && {
            pushd $d/$1 > /dev/null
            set $(dirs)
            popd > /dev/null
            eval echo $1
            return
        }
        d=$d/..
    done
}
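A toy session, assuming isfunction and fun_functions from the libraries above are loaded; the demo family is invented for illustration:

demo_hello () { echo hello, $*; }
main_function demo

demo hello world      # command-line form: dispatches to demo_hello
demo_hello world      # the form for scripts, underscore included
demo                  # no member named: falls back to demo_help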