Saturday, December 22, 2018

Great example of partial pattern completion

In this text the word "the" is cut off, so it could be read as "me" - except "me" does not work in the sentence:

Wednesday, December 19, 2018

A peck of Oysters from Chappaquoit

Pretty easy work for 2 people in an hour.

Friday, December 14, 2018

A little shellfish adventure

Here we are Dec 14 and I cannot wait for the waders I ordered, so I went out "on foot" to Gansett beach, a three-minute walk, and spent 20 minutes looking at the rocks and, eventually, moving them around with the clam rake. My feet got wetter and wetter but it was not unpleasant.

In the end I found several illegal-sized quahogs, maybe one legal one, a single mussel, a single small oyster, and a couple of soft-shelled clams. A handful of diverse bivalves. I took 'em home and steamed them for a couple of minutes and just finished a snack with melted butter. 10 AM now. I think I am definitely getting used to eating such things and want to say: "Yum".

There were periwinkles and evidence of scallops. Also the odd limpet. But I am happy to find a mussel and an oyster - hope I find more of those and -yeah- the soft-shelled clams are good.

It's really kind of a minor moment, except I felt good all day and it might be the little adventure followed by snack that did the trick.

Tuesday, December 4, 2018

Eiders resting by Nobska Lighthouse

Also saw red-breasted mergansers, surf scoters, and goldeneyes.

Tuesday, November 20, 2018

Some thoughts about neurons

Thought #1
It seems that the basis for referred pain (e.g. when my rib muscles hurt and I experience heartburn, or my hip muscles hurt and I experience abdominal cramps) is that the nerve bundles coming from those voluntary muscles pass close to other autonomic nerve bundles from the abdominal muscles, and the proximity of the bundles causes cross talk and confusion at the other end of the nerve signals, in the brain.
With cross talk being so prevalent for simple muscle->brain signaling, how could the nerves of the brain possibly work - packed in as they are? Why isn't cross talk a problem in brain->brain signaling? The obvious answer is that there is a different kind of 'insulation' (signal isolation) for nerves in the brain than for nerves in the peripheral system. But another possibility is that cross talk, the very lack of signal isolation, is exactly how the brain works. Might the brain use very broad band filtering? How to test this?

Thought #2
In conjunction with the previous thought, one wonders if signaling during so-called "sensation" corresponds to some sort of raw data, such that it is pure signal until it gets to the brain and becomes "information". Alternatively, the sensory nerves could be creating different types of signals, mapping to meaningful differences in the information - so the brain passes and mixes but is not solely responsible for creating information.
I know that is pretty vague but I believe there is a version of cognitive theory supporting the idea that thinking and information processing begin peripherally, not just centrally. How to test this?

Thought #3
Increasingly I have come to believe that "completing the incomplete" is a primary function in cognition, supported at the level of individual nerves or, at least, very low-level entities. I get this belief in its purest narrative form from Truism 7, which says: that which is blocked [read "incomplete"] will become unblocked:
X*::X 

But I also get this belief when thinking about Berkeley's analysis of the moon illusion and the perceived magnitude being affected by dimming and de-focusing of the light [which is more extreme when the moon is at the horizon than overhead]. Years ago, I noticed the same phenomenon when viewing an object through a gauzy curtain (I thought a seagull looked like an albatross) and, still later, realized the same is true when an object is viewed behind a bush or some other kind of grating that partially occludes the view. In other words, the visual boundary caused by the occlusion of an object produces a perception of object magnification.
When I think about these things, I imagine an abstract field of view, with a boundary dividing what is seen of an object in the scene, versus what is occluded - and in my imagination the incomplete side of the boundary is "hot" and uncomfortable. We don't like it, we want it to go away, we strive to remove the boundary in our thoughts. This feels very similar to truism 7 but at different scales - a bit of visual scene versus an entire expectation of story line.

Knowing that ethical behavior can (sometimes, per Bloom's psychology experiments with infants) be generated by narrative preference for Truism 7, and that the moon illusion is a byproduct of reasonably low-level cognitive processing, is quite suggestive. It would mean that ethical behavior is a byproduct of a low-level sensation processing requirement. This begins to put ethics and, say, visual edge detection into the same neuro-psychological framework.

Update: I find that #3 is called Friston's free energy principle

Monday, November 19, 2018

Derivatives of the associated function

I gave this up in graduate school because I did not know how to program. I think I could do it today but cannot remember the details of the first lemmas:
Let theta be the angle between the tangent at t and the chord from t to t+s on an [arc-length parameterized] convex curve. Let r be the distance between the points at t and t+s. Let psi be the angle between the chord and the tangent at t+s.
The problem is to find the nth derivative in r of cos(theta) as r->0.
Let D = d()/dr, Dn the n-th such derivative, and let L = lim as r->0. So we want L( Dn( cos(theta) ) ).

Lemma 1:
D(theta) = -tan(psi)/r
D(psi) = D(theta) - kappa*sec(psi)
where kappa is the curvature at t+s
Dn+1(kappa) = -Dn(kappa)*sec(psi)

Lemma 2:
L(theta) = 0
L(psi) = PI
L(Dn(kappa)) = kn  (a definition)

One needs to apply l'Hospital's rule and do a bit of algebra to solve for each Dn(theta). I find
L(D1(theta)) = -k0/2
L(D2(theta)) = k1/3 [or is it -(k0/2)^2??]

L(D3(theta)) = k0*k1/2

I propose that the successive answers are formulas in the ki's and you need a computer to find them, lacking cleverness.
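For what it is worth, here is a minimal sympy sketch of the kind of computer check I have in mind, using a circle of radius R as the test curve (so k0 = 1/R and all higher ki vanish); the curve, the variable names, and the orientation are my choices, and the sign of the answer depends on that orientation:

import sympy as sp

s, R = sp.symbols('s R', positive=True)

# Unit-speed circle of radius R, starting at the origin with horizontal tangent,
# so kappa = 1/R everywhere: k0 = 1/R and k1 = k2 = ... = 0.
gamma = sp.Matrix([R * sp.sin(s / R), R * (1 - sp.cos(s / R))])

# Chord from gamma(0) to gamma(s): r is its length, theta is the angle it makes
# with the tangent (1, 0) at the base point.
r = sp.sqrt(gamma.dot(gamma))
theta = sp.atan(gamma[1] / gamma[0])

# D(theta) = d(theta)/dr, computed through the arc-length parameter s.
D_theta = sp.diff(theta, s) / sp.diff(r, s)

print(sp.limit(sp.simplify(D_theta), s, 0))   # prints 1/(2*R), i.e. k0/2 up to sign

For the circle the limit comes out as k0/2 in magnitude, matching L(D1(theta)) above; a real check would use a curve with non-constant curvature and push the same machinery up to D2 and D3.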

Speculation: the successive answers, given in terms of polynomials in the ki's, form a Tauberian collection of polynomials, sufficient to reconstruct the complete original kappa - and hence the curve [usually!] If I had been a smart mathematician, I would have proved "Tauberian" without the explicit calculations.

Thursday, November 1, 2018

First Halloween in Woods Hole

Made my pumpkins, bought the candy, got no visitors. But I had fun with the pumpkins.

On Buzzards Bay Ave:


On Gardiner Rd:
 That's a bluefish, right?
An interesting follow up to this is that two days later, someone stole this pumpkin. I cannot imagine why, but hope it is because of the nice design.

Tuesday, October 30, 2018

Tornados in Massachusetts

Pretty rare: I think there were two in the last five years. Funny thing, I was in both of them: in Concord, about 1/4 mile away, and in Woods Hole, where my house was right underneath the thing as it dissipated - quite a puff of wind.

Thursday, October 11, 2018

Little Adventures in Woods Hole

Each day has a little something different. Hopefully that continues. Yesterday it was seeing False Albacore  - turquoise kite shapes, gone before the blink of an eye, disdaining my lure in passing. Today it was a weird giant "needle fish":
Other recent moments of fun: losing two lures in as many minutes on my first fishing venture, by dinghy, in Buzzards Bay.
Also: 5 deer in the yard one time, 18 turkeys another time. Gray seals in the water of Lackeys Bay behind Nonamesett.
Also: my first bluefish and how to find clams at Ram Island (with a friend of a friend named Tyler):

Sunday, October 7, 2018

The stirring of new ideas

Talking with youths - Tyler Boone and David Levy, and a bit with the "cool kids" out at Intuition Machines, who are friends of my son David - gets me back to thinking harder about hierarchies. My "merge split append" algorithm (which might be called a "silver" algorithm [versus the final "golden" algorithm]) does its work using an existing hierarchy.

So I have been thinking that the way you use a hierarchy must be pretty similar to the way the hierarchy is created in the first place. Mechanisms involved with creating the hierarchy are becoming interesting. To which end, this figure. I am striving to understand how dPerception/dValue can be an independent variable in a learning formula.
I'll figure it out. Gimme a little while.

Hebbs versus Persig

Let us start with separate individual entities (vagueness about this is a problem, but in some versions, the entities are somebody's idea of a "neuron") that are bound together progressively by a Hebbian principle of "what fires together, wires together" [very poetic]. The result is that clusters arise as groups of associated individuals. But we can arrive at the same groupings by a "Persigian" principle that starts with a single large group of all the individuals, then splits up that group, a bit at a time.

I just realized something cool about the difference between a bottom-up Hebbian grouping and a top-down Persigian grouping: Persigian grouping leaves behind a sequence of larger groups so, as a procedure, it produces a hierarchy in the course of generating the final grouping. The Hebbian procedure does not. In other words, all the looser associations that are a byproduct of the top-down procedure are absent from a bottom-up procedure.
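A toy sketch of the contrast, under my own simplifying assumptions (points on a line, bottom-up merging of adjacent clusters versus top-down splitting at the widest gap); the only point is that the top-down procedure remembers the intermediate groups while the bottom-up one does not:

def hebbian_grouping(points, join_dist):
    # Bottom-up: start with singletons and repeatedly merge any two adjacent
    # clusters whose gap is at most join_dist.  Only the final clusters survive.
    clusters = [[p] for p in sorted(points)]
    merged = True
    while merged and len(clusters) > 1:
        merged = False
        for i in range(len(clusters) - 1):
            if clusters[i + 1][0] - clusters[i][-1] <= join_dist:
                clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]
                merged = True
                break
    return clusters          # no record of how the clusters grew

def persigian_grouping(points, min_gap, hierarchy=None):
    # Top-down: start with one big group and split it at the widest gap,
    # recursively.  Every intermediate group is remembered in 'hierarchy'.
    if hierarchy is None:
        hierarchy = []
    group = sorted(points)
    hierarchy.append(group)
    gaps = [group[i + 1] - group[i] for i in range(len(group) - 1)]
    if not gaps or max(gaps) <= min_gap:
        return [group], hierarchy
    cut = gaps.index(max(gaps)) + 1
    left, _ = persigian_grouping(group[:cut], min_gap, hierarchy)
    right, _ = persigian_grouping(group[cut:], min_gap, hierarchy)
    return left + right, hierarchy

data = [1, 2, 3, 10, 11, 30, 31, 32]
print(hebbian_grouping(data, join_dist=2))
final, tree = persigian_grouping(data, min_gap=2)
print(final)   # the same final grouping...
print(tree)    # ...plus the looser intermediate groups: a hierarchy for free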

Friday, September 14, 2018

My discoveries and inventions

My math PhD result, that you can identify a shape from the statistics of its chord length distribution, is important but only superficially treated. I was not smart enough to create the inverse transform, although I know it's out there, like a Green's function.

In college I discovered the "read symbol" which is an ornament on a name, indicating the removal of quotation marks.

In college I invented the AOdiagram - where attribute categories are lined up as columns, each with a row of dashes indicating a particular attribute value ("trait"), and where objects are indicated with zig-zag lines between the dashes. Going through a dash means the object has that attribute value. When an object line passes through a dash, its identity is not lost or merged with other object zig-zags that are passing through the same dash. The search for elegant examples of AO diagrams leads into questions of block design. Again, I was not smart enough, although I was toying with the Kirkman schoolgirl problem. I did however invent a way to convert an AODiagram into a decision tree. At the time there was an existing algorithm from a guy named Quinlan, which did not perform as well as mine.

Later I invented Data Equilibrium. According to its theory you start with an AODiagram populated empirically, apply a forcing function to the dashes and then let it come to equilibrium assuming the force propagates outward from a 'dash' to another 'dash' in proportion to the number of objects in the data that have both traits. This handles incomplete data elegantly, and eliminates the problem of Decision Trees which force an artificial prioritization of categories.

A minor invention at Polaroid was a least squares best fit method to find a grid of values, such that the multi-linear interpolation between the grid points provided a minimum error approximation to an arbitrary function evaluated at non-grid data points. I think this is superior to "kriging".
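A minimal numpy sketch of the idea in one dimension: the grid values are the unknowns, each data point contributes a row of linear interpolation weights, and least squares solves for the grid. The function and grid here are made up for illustration; this is my reconstruction, not the Polaroid code:

import numpy as np

def fit_grid(x, y, grid):
    # Each data point x[i] lies in some grid interval [grid[j], grid[j+1]].
    # Its predicted value is a linear blend of the two unknown grid values,
    # so row i of the design matrix A holds the two interpolation weights.
    A = np.zeros((len(x), len(grid)))
    for i, xi in enumerate(x):
        j = int(np.clip(np.searchsorted(grid, xi) - 1, 0, len(grid) - 2))
        w = (xi - grid[j]) / (grid[j + 1] - grid[j])
        A[i, j] = 1 - w
        A[i, j + 1] = w
    values, *_ = np.linalg.lstsq(A, y, rcond=None)
    return values   # grid values whose interpolant best fits the data

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))           # non-grid data points
y = np.sin(x) + 0.1 * rng.normal(size=x.size)  # an arbitrary (noisy) function
grid = np.linspace(0, 10, 6)                   # coarse grid of unknown values
print(fit_grid(x, y, grid))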

I played around with using relative Chi-squared to compare patterns. There is real magic hidden in this about how persons judge similarity. So it must be related to the logarithm and Fechner's law. You have to read the patent I filed on this [it is online although rejected]. The patent creates a type of geometry based on point scatters rather than solids.

But let me tell you what I consider my greatest discoveries and inventions.

At IVS I wrote an algorithm for parsing a one dimensional optical signal. It is simply a "moving hysteresis" algorithm which reads along the signal from left to right and, in one state, is looking for a minimum but, in the other state, is looking for a maximum. Whenever the difference between the current value and the most recent extreme exceeds a threshold, it changes states. This is called the "ripple" algorithm. Looking at the spectrum of outcomes, as the threshold value dials between 'low' and 'high', gives a complete parsing of the data and shows the correct way to handle signal versus noise - as an arbitrary dial setting.
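A small Python sketch of what I mean by the moving hysteresis (my reconstruction from memory, not the IVS code; the state flag and names are mine): track the most recent extreme, and when the excursion away from it exceeds the threshold, record the extreme and flip states.

def ripple(signal, threshold):
    # Parse a 1-D signal into alternating maxima and minima ("turning points").
    # state = +1 means we are looking for a maximum, -1 for a minimum.
    turns = []
    state = 1
    extreme_val, extreme_idx = signal[0], 0
    for i, v in enumerate(signal):
        if state * (v - extreme_val) > 0:
            extreme_val, extreme_idx = v, i      # a new extreme in the current direction
        elif state * (extreme_val - v) >= threshold:
            turns.append((extreme_idx, extreme_val))   # record it and change states
            state = -state
            extreme_val, extreme_idx = v, i
    return turns

data = [0, 1, 3, 2, 1, 4, 6, 5, 1, 2, 0]
print(ripple(data, threshold=2))   # [(2, 3), (4, 1), (6, 6)]

Dialing the threshold from low to high makes the smaller ripples disappear first, which is the spectrum of parsings mentioned above.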

In "The Elements of Narrative" I create a symbolism to describe narrative patterns. It opens many doorways, especially empirical ones. So I discovered 8 'truisms' patterns that seem quite fundamental to human reasoning - or at least natural versus formal thought.

In "De-Serializing Text into a Context Hierarchy" I discover the principles of conserved context and immutable record.

It would be unfair to myself to leave off the discovery of Native American burial mounds. Apparently no one knew what they looked like. I poked around in the Middlesex woods so energetically, systematically, and enthusiastically, that I believe I have uncovered the truth of this matter.

And that's all folks. We'll see what kind of juice is left after turning 66.

Artichokes and plant neurobiology

Just read that they found calcium being used to propagate electrical signals in plants - quickly. For the record, I predict that when they study artichokes, sunflowers, and other composite flowers, they will find pattern recognition properties of the array. They will ultimately find that artichokes are intelligent, after a fashion.

Thursday, September 6, 2018

Handling the 3 line stanzas of the Dies Irae

Measure a Requiem composer's creativity by how they handle the 3 line stanzas of the Dies Irae. Music is in 2s and 4s, so whatayagonnado? You could try to make music using 3s. You could repeat the last line to make 4 lines. You can sing during 3 measures and use only instrumental on the 4th. In the "Requiem for an Impossible Love" I use a stretching of the last line to take up 2 measures of music. Another way you can do it is by repeating the 1st line on the 3rd measure, then singing the last line on the 4th. I just heard Gounod do something I have tried: a delay in the 3rd measure turning it into 4 measures, while 3 lines of verse are being sung.
Let's go see how Verdi did it. I know I'll end up humming his "Inter Oves..."! Ah, I hear Verdi taking an entire movement to do the first stanza of the poem. So the "Dies Irae" first line repeats make the treatment of the other 2 lines possible, as a rhyme. See what I mean? Ah and here in the "Inter Oves" he does something like my last suggestion, but repeating both first and second lines.

September....gotta get back to work!

Peter starts his marketing push today. Sending copies of "De-Serializing Text into a Context Hierarchy" to ex-colleagues, MIT professors, and IBM VPs whom I do not know, just on the off chance it will lead to fun.

Friday, August 31, 2018

Contented Laziness

I am overwhelmed by a sense of contented laziness. What is to become of the intellectual life?

Thursday, August 30, 2018

If I was going to build a robot...

Watched a movie, "Hot Bot", about a sex doll robot. Lately, watching things like that has led to distracting thoughts about how I would go about building a piece of software that, at least linguistically, behaves like a person in a particular environment (a narrow world). What is surprising is that this thought even crosses my mind. With working theories of language at hand, I sense that a great deal of the "intelligence" in conversation is already embodied in the word knowledge - per the word trees and structured relations that I know how to define. And I think it gets me close enough to have the thought cross my mind and wonder: what else I need in order to create a chatbot that can, itself, fall in love (as in the movie)? I am wondering how software should model "pain"?

Monday, August 27, 2018

Atmospheric "filament"

From "Jos" at weerrecords, via David Sands:
"The Earth's atmosphere has a tendency to form filaments, very long thin streams of air with similar properties, for example this one from yesterday with North Amnerican wildfire smoke.
Rarely seen such a nice one. The bloody thing must be more than 10,000 km long. #TROPOMI #S5P"
[Peter says: proud to be cc'd by David Sands]

Sunday, August 19, 2018

God is in heaven

And my context code is checked into GitHub

Thursday, August 16, 2018

Topic Trees and Chatbot Conversational Context published

Here, in another "fly by night" Asian journal, I got  my article published.

I should mention the main motivation was to be able to reference the article in another article that I am writing.

Also: the paper is not that good. I was mixing together adjective and noun word trees. That is computationally reasonable but epistemologically wrong.

Tuesday, August 14, 2018

Small footprint NLU

A news article just handed me a 'selling point' for Narwhal: my trees are extremely small footprint resources.

The mainstream systems have to store weights on individual example expressions. Yet they are beginning to talk about "small footprint natural language understanding" - something which already has a name: "narrow world" NLU.

Monday, August 13, 2018

Starting to settle in, in Woods Hole

First you must get some art on the wall!

Friday, August 10, 2018

Studies in neuro imaging #1

I have been to quite a few lectures this summer in Woods Hole that purported to be studying how the brain works by taking on the subjects of 'memory' and 'synaptic plasticity', along with brain 'regions'. Besides being hopelessly vague, this contains the reductionist fallacy of projecting an abstraction onto an anatomical distinction.
My point really is that we can do better at considering cognitive functions. Consider a couple of experiments:
Experiment #1: You try to find anatomic changes corresponding to someone making a plan that combines an image with a linguistic pattern.
Experiment #2: You touch someone on the leg, then ask them to touch the same place.

Thursday, August 9, 2018

Is conscious thought suspended while dreaming?

Or is it stopped, to resume based on the dream?

Is this even a meaningful question?

Two linguistic systems?

Current theory is that language processing has a global sequence of "operations" driven by the incoming sequence of topic nouns, combined with a local sequence of operations driven by surrounding adjectives and verbs. A real distinction could underlie these two different aspects of processing.

Wednesday, August 8, 2018

FAQ chatbot diary 2

Playing with an FAQ for an HR department, where the "answers" guide a user to the web page resource. The topic tree:

ROOT
   FAQPERSON
   FAQTOPIC
       PERSONAL
           CONTACT
           PHOTO
       EXPENSES
       BENEFITS

I propose to use:
DESIRES is a VAR with keywords like "I want..., can I..., how do I...".
INFO is a VAR with sub-VARs for ABOUT, CHANGE, FIND, each having its own keywords.

And a narrative of the form
action(DESIRE,  FAQTOPIC, INFO)

Producing a record of the form
(topicID, [ info, status ] )
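Just to make the shapes concrete, a plain-Python sketch of the tree, the VARs, and the record; this is not Narwhal syntax, and the keyword lists and function name are made up for illustration:

# Topic tree as nested dicts; the leaves are the FAQ topic IDs.
# (Shown for structure; the matching below uses only the keyword lists.)
TOPIC_TREE = {
    "ROOT": {
        "FAQPERSON": {},
        "FAQTOPIC": {
            "PERSONAL": {"CONTACT": {}, "PHOTO": {}},
            "EXPENSES": {},
            "BENEFITS": {},
        },
    }
}

# VARs as keyword lists; INFO has sub-VARs.
DESIRES = ["i want", "can i", "how do i"]
INFO = {"ABOUT": ["what is", "tell me about"],
        "CHANGE": ["change", "update"],
        "FIND": ["where", "find"]}
TOPICS = {"CONTACT": ["phone", "address"], "PHOTO": ["photo", "picture"],
          "EXPENSES": ["expense", "reimburse"], "BENEFITS": ["benefit", "401k"]}

def read_faq(text):
    # Narrative action(DESIRE, FAQTOPIC, INFO) -> record (topicID, [info, status])
    t = text.lower()
    desire = any(k in t for k in DESIRES)
    topic = next((tid for tid, kws in TOPICS.items() if any(k in t for k in kws)), None)
    info = next((name for name, kws in INFO.items() if any(k in t for k in kws)), None)
    status = "complete" if desire and topic and info else "partial"
    return (topic, [info, status])

print(read_faq("How do I change my address?"))   # ('CONTACT', ['CHANGE', 'complete'])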

Tuesday, August 7, 2018

FAQ Chatbot Diary

I can come back and edit this as time goes by, although I should really start a separate blog for it. The idea is to keep track of how a Conversational Context may get created. Having gotten a preliminary read() algorithm working, I am ready to try it on something other than dentistry [which is my "go to" example of word trees]. I gave up previous chatbot development when I got to the FAQ chatbot example. At the time it seemed much harder than it should be, causing me to want to either give up or re-think things. So the re-thinking has taken place; let's see how easy it is to create an FAQ chatbot. However, I had the thought last night that there is no reason to think any particular Conversational Context tree is "easy" to come up with. If it is like a number system used to describe one part of the conversational world, one should not assume it is generic and just like some other part of the conversational world. We don't know. Maybe later. For now, I was thinking it would be fun to force myself to work on FAQs, and to watch if there is any development.

My starting point has been a document called "Guidelines to HR's FAQ" I got from my employer. I have been looking at the topics listed there and playing with the idea that some topics involve one or another type of question, and toying with the idea that there is a 'client' and an 'agency'. So I wrote down some of the topics, organized a bit under "personal" versus "informational". And today I am starting to wonder why not simply get started building a tree with:

ROOT
    client
    agency
    questions
    topics

But you can see that is not right. Questions are attributes of context. And the client is not really a conversational frame. So...first steps are hard.

Analyzing geometric singularities linguistically

Well, here is an idea that propagates backwards from linguistics to differential geometry:

The higher order derivatives of position on a manifold (or at least an embedded manifold) can be used to find an object which "best fits" the manifold at a point. The family of objects used for the fitting is restricted to entities having constant such derivatives after a specified order. For example, curves in 3D space can be fit with lines, osculating circles, and helices of constant radius and torsion. Surfaces in 3D can be fit with planes, spheres, and....

Singularities in a manifold are an opportunity to "best fit" with other families of entities besides the ones with constant higher order derivatives. That one can choose from among different such families is an idea - the best model - that is more evident in linguistics than in geometry - but there it is - a way to analyze singularities.

Monday, July 30, 2018

Merge, Split, or Append - By George I think I've got it!

Here I sit in Woods Hole, having done a clean job implementing the theory and feeling quite good about the "Merge, Split, or Append" algorithm. I think it is one of those fundamental things, like the division algorithm, which is the basis for a calculation. Time will tell what abstract mathematical questions will come up for this - as we are only now able to do the simple arithmetic.
But I want to complain that, aside from my wife, there is no one to tell my result to. No one to gasp and exclaim - "By George, you have done it, Waksman". So I have to say it to myself. I do plan to start some promotion in the fall. What a great summer! What a lonely summer!

Sunday, July 29, 2018

Deep Flow now supported in Narwhal

I can guess the direction of Google's marketing. So let me pre-emptively use this term to describe my conversational AI - the new context manager is called "Deep Flow". Now I have the copyright!

Saturday, July 28, 2018

Merge, Split, or Append - another version of the rules for the moving topic

What has been hard to clarify is how you start with a tree of context IDs and follow rules for saving records of statements about the IDs. You always have a list of "current IDs" starting from the top level [where a generic "ledger" is always part of a chain of more and more detailed IDs], and when a new ID occurs you either merge, split, or append it as follows:
If the new ID is already in the current chain of IDs then you merge the new info into the old, unless there is an overwrite conflict - in which case you split off a duplicate set of records below the new (these duplicated details are "SOFT" and can be overwritten).
Otherwise the new ID is outside the current chain of IDs, so you find one in that chain that is the ancestor nearest to the new ID, and append a bit of chain below that, through the intermediary IDs (recorded as "EMPTY" details), and append the new record to the end of the new chain.
After merging, splitting, or appending, the newly written detail is "HARD" and cannot be overwritten.
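A rough Python sketch of one reading of these rules; the example tree (dental -> order -> tooth -> material), the flag handling, and the decision to drop the old tail on an append are all my guesses at the intent, simplified to one detail value per ID:

TREE = {"dental": None, "order": "dental", "tooth": "order", "material": "tooth"}

def ancestors(node):
    # node, its parent, its grandparent, ..., up to the root
    path = [node]
    while TREE[path[-1]] is not None:
        path.append(TREE[path[-1]])
    return path

def merge_split_append(chain, new_id, value):
    # chain: list of [id, value, flag] from the root down; flags are HARD/SOFT/EMPTY
    ids = [c[0] for c in chain]
    if new_id in ids:
        i = ids.index(new_id)
        if chain[i][2] != "HARD" or chain[i][1] in (None, value):
            chain[i][1:] = [value, "HARD"]                     # MERGE
        else:                                                  # conflict: SPLIT
            tail = [[c[0], c[1], "SOFT"] for c in chain[i + 1:]]
            chain[i + 1:] = [[new_id, value, "HARD"]] + tail
    else:                                                      # APPEND
        up = ancestors(new_id)
        j = next(k for k, a in enumerate(up) if a in ids)      # nearest ancestor in chain
        keep = ids.index(up[j]) + 1
        empties = [[a, None, "EMPTY"] for a in reversed(up[1:j])]
        chain[keep:] = empties + [[new_id, value, "HARD"]]
    return chain

chain = [["dental", None, "EMPTY"]]
merge_split_append(chain, "material", "zirconia")   # appends order/tooth as EMPTY details
merge_split_append(chain, "tooth", "#14")           # merges into the EMPTY tooth detail
merge_split_append(chain, "material", "e.max")      # conflict: splits off a SOFT duplicate
print(chain)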

Equivalence of "empty" details with their abstraction

Don't know if this is significant but when a color 'exists' but is not yet specified, it is pretty close to being the same as the simple abstraction of 'a color'.
Why is this OK:
"I am slowly getting to know her" 
and this is not OK:
"I am getting slowly to know her"

This example is a lot like adjective order - rigid, yet seemingly pointless. With something so rigid but seemingly without purpose, it makes you think there is something wrong with your understanding of "purpose".

Friday, July 20, 2018

Circular saw use in pre-dynastic Egypt

YouTube is full of videos that show pre-dynastic stonework that is far more precise than anything done by the pharaohs - surfaces too smooth to have been produced with hand tools, and circular saw cuts so obvious you have to laugh at the Egyptologists saying these were produced with copper chisels. Similar saw cuts occur at Baalbek and at Teotihuacan.
Let me be the first to propose an obvious way to do this: use a "spinning button" but much bigger than the toy. Those suckers can go 125,000 rpm, much much faster than industrial stone cutting saws of today. At those speeds you could cut rock with a string or with water. A super weed-whacker would work, or a toothed saw made of just about anything and using powdered stone abrasive.

Thursday, July 19, 2018

Woods Hole in 2018

I have been coming to Woods Hole since 1956; my grandfather was a scientist at the Oceanographic and my father rented a desk at the MBL library and was involved with some of the courses. I have been going to Friday night lectures since I was a teenager. So today, semi-retired and getting a good look at the place, these are the kinds of people I see:

The "Big Science" MBL scientists get to rent nice properties down Gardiner Rd from me. Meanwhile the U of Chicago has brought in a continually renewing undergraduate population and the doors at the MBL are unlocked more often. The Friday night lectures continue a tradition of emphasizing credentials and the social benefits of science, rather than the science. So for example, a talk on a human disease, can involve routine application of conventional tools, and garner praise because the disease is important [I should talk] and because the grants are large. Amazing images can be praised without any understanding of the underlying processes being observed. Occasional new ideas slip in.

Another group is the retired engineers and "alte kaker" sitting across from Pie-in-the-Sky, watching people go buy (on Thursdays I go and order Chai). I am just feeling out this group. I have known some for many years but am trying to remember names. These guys, if they aren't still working, are trying to entertain themselves with boats. I want to call them the "boys", as it is a resumption of childhood fun.

Then I am bothered by another group, which I am on the fringe of, consisting of urban folks who seem to huddle together for warmth. Waiting - I guess - for death and passing time shoring up their sense of status du jour. Well, I am not urban.

Another group are the successful people, the landed gentry, over on my side of town. They seem confident and easing into age - although I should be a fly on the wall!

Friday, July 13, 2018

The Moving Topic - some first principles of context

Principle 1: Given what you have been talking about and what you are currently talking about, you use the narrowest context that is broad enough to encompass both.

Principle 2: Regardless of word order, process the information from broad ("top") to narrow ("bottom") when translating text to content.

Principle 3: When keeping a record of statements, an entire top-down "path" is needed to process every sub-context, starting from the broadest and leading down to the sub-context. So when child context information appears, blank information is back-filled into a sequence of containing parent contexts. If attributes for those blanks occur in the following text, they are filled in, and to avoid overwriting a value an entire duplicate of the path will split off with only the new value changed. However, when legacy values have been duplicated this way and they are to be overwritten, that can happen without further splitting. [I still don't have this "right". Perhaps you split if a parent is to be overwritten, but do not split, and just override, when a child is to be overwritten.]

Update: The principles of the Moving Topic
  1. Contexts are processed in the order in which they are detected in the text.
  2. Given the previous context you have been talking about and the context you are currently talking about, you use the narrowest context that is broad enough to encompass both.
  3. To avoid overwriting a previously detected context detail, we split off a duplicate of that detail and all its current children, and overwrite the duplicate. Meanwhile the children that were copied contain details that were not detected and can be overwritten.
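Principle 2 is essentially "lowest common ancestor" on the context tree; a minimal sketch with a made-up fruit tree:

PARENT = {"ROOT": None, "fruit": "ROOT", "apple": "fruit",
          "orange": "fruit", "color": "apple"}

def path_to_root(node):
    path = []
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return path

def narrowest_common(previous, current):
    # The narrowest context broad enough to encompass both (Principle 2).
    prev_chain = set(path_to_root(previous))
    for node in path_to_root(current):
        if node in prev_chain:
            return node
    return "ROOT"

print(narrowest_common("color", "orange"))   # 'fruit'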

Wednesday, July 11, 2018

There are lies and damned lies and....

I want to say: there are lies, damned lies, and machine learning [scratch "statistics"]. For truly fraudulent efforts, see the usage "deep".

Saturday, July 7, 2018

Why not develop a real search engine?

Right now, Google search is a joke and the other search engines are incapable of finding relevant matches, even though the searches can be quite clearly specified and....assuming this is true: it is a big, wide world of internet out there.

Why the f*ck doesn't someone build a real internet search engine?

Rudimentary language understanding is not that complicated - for finding out what people are searching for. But perhaps indexing the web in a meaningful way is not so straightforward. Obviously the whole strategy of promoting the 'popular' must collapse inward under its own weight. We are left not knowing how to find things.

Example: I search for articles on mathematics about language but all Google finds (with minimal exception) is articles on the language of mathematics. Google has no ability to differentiate. All it can do is key-word pair processing through its search tree. And even that is wrong, because you add search terms and get more results!

Wednesday, July 4, 2018

Formulaic Conventions of Recitative (Tropes of Recitative)

It is very frustrating reading musicologists discuss recitative and seeing them focus on the history of the genre while failing to describe the conventions of the form, other than being "secco" or "Italian", etc. One thing I do see them discussing is the way recitative can be blended with aria, so let's not go into that. But here are some conventions that are part of the recitative formula, and which seem to receive very little attention from musicologists:
  1. The recitative is a dialog alternating between singer and instruments
  2. Usually the instruments make a 'statement' and the singer gives a "reply" or makes their own statement
  3. At the end, instruments finish the cadence.
  4. Singer can use rhythms of their own choosing; instruments usually stick to more rigid meter
  5. The decoupling of voice from instrument occurs through following different rules of meter. So when recitative blends back into aria it can do so, gradually, as the voice does or does not begin to coordinate with the more rigidly metered instrumental parts.
Fun violations
The cadence of (3) above is usually a 5-1. You occasionally hear Bach do a 4-1. You can break this rule in lots of obvious ways: 2-1, 3-1, or whatever you like.

I am writing a recitative where I am enjoying breaking several rules. At the beginning, I am letting the voice go first and be answered by the instruments. For the cadence, I am using 1-5 (F-C) in one place and 1-(minor)5 (C-Gmin) in another. And then by repeating the phrases I can end on the voice, then echo with an ending in the instruments. Finally I go into an aria "al attacca" when I do get back to the '1', rather than ending the recitative.

Most composers I listen to did some experiments with the form. Mozart stands out as a composer who did not.

Update: Another flouting of the recitative conventions is to have that two chord cadence actually serve both as a cadence and as a motif for the initial melody. So the recitative I am writing, called a "reverse recitative", starts with a motif of notes: 1-6. I am thinking of ending on 2-8.

Tuesday, July 3, 2018

Recording a new piece of information in nested contexts - makeRecord()

I wrote the details of an algorithm today, called makeRecord(), that handles the task of processing context specific text and recording it in the data structure of nested contexts that I have been developing. The algorithm makes some pretty heavy assumptions:

  • that the context can be updated in the order of most general to least general, and 
  • that the new context is the least general common context containing both the new and the previous details
  • that text can be de-serialized into a nested data structure by these assumptions
The reason it is "heavy" is the proposed uniform and simple way to parse meaning, more or less independent of grammar and syntax. 

Equivalence of action and speech

I guess my philosophy is that the way I think and plan to act is equivalent to the way I talk about things. As I explore the possibility of analyzing 'reality' in terms of nested, conceptual-linguistic, contexts, I am starting to see a world around me that I can speak of in more than one way and which tries to organize itself as nested entities within entities. One of the reasons the "Windows" UI was such a success is that it really is intrinsic to reality, in the sense of the above.

Now, let me come back to the puzzle of 'single part with attribute' equivalence with 'single attribute'. I do believe I can construe the same thing in two different ways and that I am designed to use one or the other when I am planning. Sorry to be vague.

Another short term payoff for thinking in terms of nested contexts is the possibilities of cognitive theorems like this one: changing properties of a part of a context cannot cause a switch in context.

Monday, July 2, 2018

He said...

Then I felt my brain drifting off in the direction of Cuttyhunk

A principle of nested frames

Setting the properties of part of a context does not change the context. For example, in the context of a discussion of fruit, I cannot change to discussing oranges when I describe an apple's color. [Not sure that example works....how about:] When changing the material on an abutment, that does not change the tooth number.

Alexa "skill" for chopping vegetables

Saw this being marketed and think: so, narrow world language programming is gaining a foothold.

Thursday, June 28, 2018

Whatever happened to those flower arrangements?

They meant a lot more than I let on. (See here and here)

GitHub - where are my stars?

I have a hard time finding the Narwhal repo on GitHub if I search for it using tags. Other repos with thousands of stars are better placed in the search results. Poor Narwhal has something like only 8 stars total. So I just looked at an "nlp"  project that had 1.4K stars and it was a list of links to documents and source code on nlp.
Really? That is 2 orders of magnitude more stars than Narwhal and zero independent content. I think this shows that stars are only one metric and GitHub [Microsoft are you listening] should provide other ranking methods.

Wednesday, June 27, 2018

Are adjectives just simplified parts?

In the format where a context can have a 'modifier' and can also have a 'part', the ambiguity is strongest when the 'part' has just one modifier; or when the original whole has only one part and its modifiers are more simply handled as modifiers of the original whole.
For example you can talk about the max speed of a car as a simple modifier of 'car' or you can make it one of the modifiers of the 'performance characteristics' - part of the car. If you were going to talk about acceleration and other performance characteristics it would be better to have the max speed not be a simple modifier of 'car'.
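In data terms the ambiguity is just a choice between a flat attribute and a nested part. A tiny sketch with the car example (the field names and the lookup rule are mine):

# Flat: max speed as a simple modifier of 'car'.
car_flat = {"id": "car", "mods": {"color": "red", "max_speed": 155}}

# Nested: max speed as a modifier of the 'performance' part of the car.
car_nested = {
    "id": "car",
    "mods": {"color": "red"},
    "parts": {"performance": {"id": "performance",
                              "mods": {"max_speed": 155, "acceleration": "5.2s"}}},
}

def max_speed(car):
    # Treat a part's modifier as if it were a modifier of the whole:
    # look in the car's own mods first, then in each part's mods.
    if "max_speed" in car.get("mods", {}):
        return car["mods"]["max_speed"]
    for part in car.get("parts", {}).values():
        if "max_speed" in part.get("mods", {}):
            return part["mods"]["max_speed"]
    return None

print(max_speed(car_flat), max_speed(car_nested))   # 155 155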

Saturday, June 23, 2018

Language Principle: you can say whatever you like

I like the idea that you can intend  to speak about something a certain way, even when it is not natural to the thing. For example "the collection of circle" or "you know, I wish last week". Not only can you intend to speak this way, you can sometimes even make a bit of sense out of the nonsensical. This gives rise to a general principle, which is that you can speak about something any way you choose.

This principle might help the denouement of the adjective/part distinction. Let me explain: it is not completely crisp that having "parts" (namely sub-contexts) is not the same as having modifiers (individual attributes). Take for example a "car" that has a color attribute. But how fast the car can go is not really a simple modifier. It is a modifier of the car's "performance" sub-context [I call sub-contexts "parts"]. I don't think it helps the study of philosophical ontology to confuse the self attributes with the part's self attributes.

Nonetheless, the principle says: you can talk about a part's attributes as if they were the self attribute of the original object. You can talk about maximum speed as an attribute of the car. In that sense: does it matter if you get the ontology wrong? Perhaps it does not matter if you are trying to communicate, because subtle context switching is going on all the time when we interpret speech.

But you have to be more careful if you are writing a program. The "subtle" context switching can come later. In the shorter term, you must distinguish carefully between modifiers and parts of a context "object".

Wednesday, June 20, 2018

Teasing apart the structure of a "context"

Starting with a vague "lump" of desire for conversational context to be defined, I have come so far as to define a ContextFrame (a template) and a ContextRecord (A snapshot of a partially filled template). Now I am getting clearer, slowly day by day, on some of the details. So we have

ContextFrame:
   ID (with a self vocab)
   ENV (another ID)
   MODS (each with a tree of vocabs)
   PARTS(each with its own ID)
   RELS (narrative structures used to fill MODs)

ContextRecord:
   (a printout of ID and MOD values)

Generally I will assume a tree of ContextFrames, doubly linked via ENV and PARTS and with a single parent. Later we can talk about dynamic re-assignment of ENV.
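Translating that into skeletal Python, just as a sketch of how I might declare it (the dental example and the field types are placeholders, nothing is settled):

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ContextFrame:
    ID: List[str]                               # self vocabulary
    ENV: Optional["ContextFrame"] = None        # the parent frame
    MODS: Dict[str, list] = field(default_factory=dict)      # each mod has a tree of vocabs
    PARTS: Dict[str, "ContextFrame"] = field(default_factory=dict)
    RELS: list = field(default_factory=list)    # narrative structures used to fill MODs

@dataclass
class ContextRecord:
    frame_id: str
    mod_values: Dict[str, str] = field(default_factory=dict)  # a printout of ID and MOD values

# A doubly linked parent/part pair: the part's ENV points back at its parent.
tooth = ContextFrame(ID=["tooth"], MODS={"number": [], "material": []})
order = ContextFrame(ID=["order"], PARTS={"tooth": tooth})
tooth.ENV = order
print(ContextRecord(frame_id="tooth", mod_values={"number": "#14"}))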
**********************************************

Where it is starting to get subtle is in the observation that a linked collection of ContextFrames can include both a tree of ID vocabularies and a collection of trees: one per MOD, per each frame.

To be honest, although I am pretending that there is nothing real or anatomical about the format that I am defining, in the back of my mind I hope there is something correct about these definitions that transcends my current programming needs. I am being merciless with myself, trying to be as clear as possible. So two types of trees emerge within definitions and I wonder about neural anatomy. Since the contexts are almost nothing but 'wiring' it is easy to imagine it.

Monday, June 18, 2018

Three weeks into retirement and coding resumes

I have been trying hard to force myself back into real thoughts about real subjects - related to my 'language automation' interests. Finally got close enough that I am about to start writing new Python code - a milestone I have been hoping to achieve, three weeks after quitting work and going into a full-time "vegetative" state.
Update: A week later I have a working context object containing lots of trees with VARs. Now I have to start passing text through it to explore and implement switching between sub contexts.
Update 2: [bearing in mind that I am trying to justify having retired and trying to reassure myself that it won't be that bad] Another week and I have a reasonable starting implementation of ContextFrames and am about to start implementing ContextRecords, which save instances of attached Frames.

Did I mention the insight that a frame attachment is simply the evaluation of certain variables of a frame? So, e.g., the base point of a Frenet Frame is "attached" when it is set to the position on a curve at time t.

Saturday, June 16, 2018

More Geometry of Language: Attaching ContextFrames at a point of a Conversation

My overall metaphor has been that a narrative structure is fit to a piece of incoming text. I am changing that a bit and making the ContextFrame the entity 'fit' to incoming text. The metaphor is that understanding text is a frame attachment.

The 'relations' in a ContextFrame are narrative structures used to create a record (a ContextRecord) from the template of the frame and the incoming data. This filling of the record is the exact equivalent of what happens in differential geometry when attaching or 'adapting' a frame to data. E.g., when a Frenet frame is attached to a curve, that means setting a base point and direction vectors equal to values.

As I pointed out in Best Models, the Frenet frames are iterative and progressive. They stop after 3 dimensions because it is a complete coordinate system. However contacts of line, circle, spiral, and all higher order curves are all available for fitting with additional iterations, using higher and higher order derivatives. In the same way the ContextFrames can be more or less attached - although I am not clear about iteration order and the need to start with the most general frame.

The point is that ContextFrames are attached to language data in the same way as geometric frames are attached to geometric data - by evaluation. The template represented by the frame's definition is filled in to create an attachment. I believe the metaphor is good.

Update: Barb asks good questions: "How is this unique? What is if good for?"
Answer is: it is a new way to talk about reality with context replacing object as the center of discussion. Also, it closes the loop with the mathematical ideas of adapted frames, where now language takes its place besides visual reality as subjects of geometric thinking. What it is good for is to organize context within the language and data part of a chatbot program, rather than organizing context within the program logic of a chatbot. 

Faint justification for quitting work

I am feeling guilty about quitting and am trying to justify it to myself. I am currently working as hard as I can (not too hard) on understanding conversational context in a broader sense than the previous work on 'short term' conversational context. It is hard going because it is extremely abstract, because I have lost a lot of mental energy, and because the motivation is being formed out of boredom and residual ambitions that have not gone away while becoming slowly more absurd. Weak tea.

But I am making progress and am somewhat proud of the new ideas coming together, and I am pretty sure I could not do this during off hours from a job, on weekends and at the end of a weekday. I am having a hard time focusing but it would be impossible if I was tired and thinking about work. So I think husbanding my energies and spending them only on what is left of my creativity is the right thing to do, because I can afford it. My parents left me just enough.

So, a couple days later and the definition of ContextFrame from the previous post has remained unchanged. It is a stable idea. Now I am trying hard to look at examples and test assumptions. Like this:
Rules related to ContextFrame
1. Want equivalent (?): a context with N parts, or a context with a mod having N values
(This is stressed below, eg 2 and 4. So the equivalence begins to break down)

2. Want access to part mods but not part parts.

3. Want access to parent mods? parts?
Maybe want no access to parent mods. The parts can be accessed in the 'relations' but let's try assuming that access to a parent mod forces a context switch.

4. The values of a mod should be mutually exclusive (but parts are not)

Monday, June 11, 2018

Refactoring "context"

Retired and with plenty of time to try to pull together some kind of focus, I have been thinking about "context". I have been treating it as a property of a conversation, so that chatbot architecture would be responsible for it and that is wrong. In a conversation there is one context per participant but each participant is operating within a structure [whatever it is] that is self contained and meaningful without being generated by a conversation. In other words context is really something that should be handled properly in core Narwhal files.

I have been trying hard to get this stuff straight. It seems there is a concept of context frame that can contain other context frames and has a form something like this:

ContextFrame
  • id (a self vocabulary list)
  • Environment (properties and parts of the parent frame)
  • modifiers (a tree of categories and values)
  • parts (contained context frames)
  • relations (narratives for connecting text to filled values)
I am trying to get straight things like: how does a "currentContext" pointer get switched from one context to another? How can data be stored in parallel instances of a template that defines the contextFrame contents?

In terms of "old" Narwhal ideas, what this whole program of understanding "context" as an attribute of language underlying conversation amounts to is a reformulation of the "vault" concept; so the vault will have much more structure. It will store these elusive template instance records but the language program for interpreting text can be designed around narrative structures that are automatically used to relate those templates....blah blah blah.

One important clarification is that to "attach" a context frame to the incoming text is the same as assigning values to its data template.

Thursday, May 24, 2018

Tomorrow is my last day

Tomorrow is my last day of feeling like I must be employed, then I am heading to Woods Hole. (That is a sentence I cannot imagine typing and, yet, there it is.)

Tuesday, May 22, 2018

Digital Customer Service - note to self

I am retiring in 3 days from my employment, after 6 1/2 years - truly one of the best jobs ever. I have a colleague who has been abusing the concept of "machine learning" so one of my motivations for leaving is mathematical revulsion; another is that I am old enough to collect Social Security; another is that there is such an obvious need to develop a team and systems engineering that integrate Customer Service tightly with internal production systems, that I want to be employed doing that. I had to resign to make it happen. I'll explain:

As a low level employee with no power it is hard to lead and make change, so I pulled the only lever I have: bargaining my job against them listening to the proposal. I said I was leaving unless they were willing to discuss customer service. Having little to lose, I have been more aggressive and more articulate with my managers, and the managers in general, and actually succeeded in communicating. You'd be proud of me. I passed the CEO (of our division) in the hall and asked if he had 10 minutes, he proposed my finding him later in the week but I pushed and said "your door is open but usually you are not there", ending by getting a scheduled meeting, later in the day. Got ready and found him to be more than able to hear what I was saying because he knows well what I am talking about.

I have been communicating an extremely abstract point using a diagram of our production workflow. It is a diagram that hangs in a lot of cubicles and everyone knows it. As a rhetorical device it is great. I have been waving it around and asking: How did it get this way? Who built it?

The people who were there while the production system was being created [including the CEO] know very well that it was an incremental and collaborative effort between engineering and operations. They understand in total abstraction how you create such things. That allowed me to say that customer service must be a window into the production system that evolves. In other words the diagram not only shows what customer service needs to provide a window into; it also reminds the executives of how a complex system is developed - incrementally and collaboratively.

Monday, May 21, 2018

"Trusted News"? Google you are joking, right?

Love the "free" blogger - but let's not forget who pays for content.

I was shocked to see who Google considers trusted news sources providing "balanced points of view". The lineup: "The Hill", "Fox", "CNN", "Washington Post", "Wall Street Journal" and "NPR". An occasional "MSNBC" and once I saw a "HuffPo" article.

Has anyone actually listened to NPR lately? They are completely in the tank to corporate ideologies - about the same as the Wall Street Journal.  Apparently if Republican talking points are delivered with an east coast snobbish accent, then it makes up some sort of balance to Fox News which shares the same ideology but without the "liberal" pronunciation of words.

For example today on the subject of the latest high school mass gun murder, I saw no stories in favor of getting rid of guns. On the contrary, the NYT is busy publishing letters to the editor about how we need better mental health care for the shooters.

Really, Google should specify their new method for selecting headline stories: trusted "conservative" mouthpieces, some of whom speak with east coast accents - balancing out the mid-western, western, and southern accents one finds in other news sources.

I am done with these guys. I'll get my news elsewhere.

Monday, April 23, 2018

Narwhal's punch holes in the ice

Geez, you would think they knew that! The tip of Narwhal tusks is often polished white - obviously being used for something. [Yeah, the other kind of 'Narwhal'.]
See here for "Mysterious Ice Circles in the Arctic" - which might have been created by narwhals or by something else, like a warm up-welling beneath.

Monday, April 2, 2018

My kind of dialog

Aren't you a robot?
No - I am a robot.

Wednesday, March 28, 2018

A key sentence

from my "Topic Tree" paper:
Word trees can be thought of as an application-specific numbering system, with text matching considered an act of measurement.

Tuesday, March 20, 2018

Word of Tooth (again)

I had a little hope that some of my language UI ideas would be picked up at work and that I could ease into retirement, while doing what I love. But I had several disappointments today making me think I am more likely to simply ease into retirement without any language UI employment.
Disappointment #1 - customer service discussion in Waltham is being dominated by proponents of the status quo. No action likely in the short term.
Disappointment #2 - a customer service discussion in Charlotte is being dominated by not so bright people.
Disappointment #3 - a colleague who has something of an external company I was hoping to work with has been a bit too dismissive and, correspondingly, is fast losing any attractiveness as a partner.
Looking forward, let's get Word of Tooth going, at least in imagination. It will be a "company" with the mission of creating dental language UI interfaces - structured or conversational - to be used in the dental industry, supporting ordering, FAQ, and other customer services, as well as internal issue handling.

Saturday, March 17, 2018

Flexible architecture for adding application functions to instant messaging

With the chatbot capable of either imitating language or being a plain command interpreter, you have a great deal of flexibility about how to add application functionality into an instant message.

Chickadees and Titmouses feeding

I see buds on the maple trees
and chickadees feeding on something obscure
in the twigs of my unpruned apple
where the snow melts off the lichen

Tuesday, March 6, 2018

My father's naturalism

I always thought my mother, with her family roots in the woods of Michigan, would be the source of a true connection to nature; and that my father's naturalism was adopted. And he adopted vigorously, and passed it on to me. But I find Russian colleagues at work who used nature to feed themselves and medicate themselves - so much so that I joke that Americans go to the doctor but Russians go to the kitchen. In fact my father's naturalism may well have come from his father, who enjoyed toying with the fermentation process, and may have been close to nature in the same way as these modern Russians.

Saturday, March 3, 2018

Collaborative Coding?

Coding is basically still a one-person task, where the person wrestles not only with what they need to get done but also with how it fits in with the idiosyncratic choices of other individual coders.

With the new emphasis on messaging and collaborative environments, one wonders if it might be possible to redefine the way teams and coding are integrated - such that they work together most of the time? "Collaborative coding" - what would that be?

Monday, February 26, 2018

Gating item on acceptance of chatbots as a service

The gating item is that people don't believe a "robotic" language assistant can be anything more than a parlor trick. So I guess I have it to do: to explain why it is true language understanding and not a parlour trick.
But on the other hand, since you can always use structured language, you want to remind people that ZorkZero used language based UI thirty years ago. So: not to worry.

Thursday, February 22, 2018

With chatbots the message can itself be interactive

That is the point. There is no room in an instant message for an application's menu-ing system. But a chatbot occupies 0 screen real estate, yet is a gateway to anything you want - menu, link, or answer. So the chatbot becomes the application UI in an instant message.

Tuesday, February 20, 2018

Chatbot architecture built on solid foundations

In case anyone is interested in my bragging, I want to claim that my chatbot architecture is getting pretty advanced - with things like built in context capabilities, sentiment, and multi-line handling. So consider:
Chatbots depend on Narwhal
Narwhal depends on the ProtoSemantics [link above and to the right]
ProtoSemantics depends on BestModels [link above also]

And it is all just another type of logarithm.

The ideal chatbot does not need to be human, just logical

There comes a time when the chatbot [I mean one created for a narrow business purpose] begins to seem realistic. But it can be logical as opposed to human. What I mean is, it may have an answer in every situation it is presented with (e.g. my current chatbot says "hmm?" when confused). Such things are mathematical possibilities.

Like a chatbot that could respond to greetings or be confused - alternating between "Hi" and "Huh?"

Friday, February 16, 2018

An alternative to slanted political campaign trolling

How about making it illegal to comment electronically about an election without first declaring a bias? That way, if you are unwilling to admit your bias then you cannot comment publicly. Instead you can go comment on web sites that share your bias implicitly and explicitly declare their bias. Cuz let's face it: no one leaving a comment is politically neutral.

That leaves nothing except web sites a voter can visit or personal contacts a campaign can make with a voter.

Monday, February 5, 2018

Adding Service to a Group Messaging Platform

I have seen the light here and am confident that group messaging enhanced with knowledgeable chatbots is the way to go to deliver services, parts, and materials, and case tracking, in the dental industry. Of course my colleagues don't read this blog.
It is a good thing no one reads this blog. The idea of delivering services via a group messaging app is compelling. Do it in medicine. Do it everywhere else that communication between people is part of the consumption of goods. Wait till insurers get into it. Imagine that the web page becomes a thing of the past.
UPDATE: More generally one observes that wherever people are interacting with data they want application level interaction - not just copy and view but also modify. Currently the messaging platform is like the early internet, when web pages were static and, only slowly, through advances in HTML and then Java, did web pages become able to act like applications. Well, one can assume the same trajectory for messaging. Its data content (not the exchanged text) is static. But soon application level interactions will become possible. I am pretty sure the software mechanism for adding a menu-less application to a message is to have it respond to messaged commands. So that is a chatbot. To repeat: chatbots play the role of Java in enabling applications within messaging.

Saturday, February 3, 2018

The puzzle of German word order

In English: "I saw the girl"
In German: "I have the girl seen"

Since I propose the "moving topic" as a substitute for grammar-n-syntax, the matter if word order being different in German and in English becomes another kind of puzzle. Let's stipulate that, regardless of one's preferred language, the ultimate narrative structure [after the sentence is complete] is 'I-saw->girl' and it is not clear if there is any basic difference between the languages as far as the basic meaning of the phrase. But there is the difference in word order.

I propose that the sentence, in either language, is designed for maximum drama. So in English the drama is the fact that it is a girl. In German the drama is that the speaker is taking action.

If one explores the moving topic, it shows how a German mind would be organized differently from an English mind. For me in English, the "I saw" has integrity missing from "I have the girl". Can I assume that a German has the reverse feeling of integrity? Since I eat lunch with a German woman and other people who constantly argue about language, we'll do the experiment.

Wednesday, January 31, 2018

"I wonder" instead of "hey Alexa!"

I wonder if anyone knows what I mean? The phrase "I wonder" is an English version of announcing a statement. I hereby copyright its use as a header on communication with a language agent.

Thursday, January 25, 2018

Are narrative forms instinctual?

I want to say that Truisms are instinctual. Truism 7, X*::X, for example, is probably universal. Doesn't everyone like a happy ending? Similar for the simple patterns one sees in "The Elements of Narrative". Ouch, it hurts my head to think about!

 But I was thinking about music and how it has very explicitly articulated narrative structure - but it is not necessarily instinctual, or at least not entirely. Those highly evolved and complex musical narratives need no other analog in our experiences.

Friday, January 19, 2018

The Planning Process Made Visible in the Snow

I went out to look at some tracks, adventurously in my bare feet, and they were deer tracks going uphill from left to right. Later I was noticing my own footprints and how I seemed to weave a pretty sketchy path from here to there and back. 
But looking more closely you can see the whole logic of it. On the way out I went in a straight line out and down the steps and a pace or two further. Then I started to focus on the shortest path over to the deer tracks and headed that way. But in the process the local conditions included a patch of grass that looked like less reliable footing, so I curved around slightly to the right of it. Then I looked at the deer tracks, took a step uphill, then reversed direction. I remember that after a few steps back the way I came, I was having trouble putting my new footprints into the old ones so, after that, I am kind of weaving my way around, up onto the deck and in.