Sunday, December 31, 2017

Word of Tooth

Wifey is thinking of starting an Instagram account, so I named it preemptively.

Saturday, December 30, 2017

A tree organization principle

If you are trying to organize a subject into a set of topics and subtopics - to form a VAR tree - consider the narratives you will be forming with the nodes of the tree, and follow this principle: if two nodes are connected by a narrative, they should be siblings rather than parent and child. For example:

PATIENT
    CASE
        MOUTH
        HISTORY
        PRODUCT
SCHEDULE
BILLING

Here we make 'PATIENT' and 'SCHEDULE' siblings because it is natural to talk about where the patient is in the schedule. At the same time, all the dental details are localized to the children of 'CASE'.
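The principle can be sketched in a few lines of Python. This is a toy illustration, not Narwhal's actual VAR classes: the tree is just nested dicts, and `are_siblings` is an invented helper for checking the rule.

```python
# Hypothetical sketch of the tree above as nested dicts.
TREE = {
    "PATIENT": {
        "CASE": {"MOUTH": {}, "HISTORY": {}, "PRODUCT": {}},
    },
    "SCHEDULE": {},
    "BILLING": {},
}

def parent_of(tree, name, parent=None):
    """Return the name of the parent node of `name`, searching depth-first.
    Top-level nodes (and unknown names) return None."""
    for key, children in tree.items():
        if key == name:
            return parent
        found = parent_of(children, name, key)
        if found is not None:
            return found
    return None

def are_siblings(tree, a, b):
    """True when two nodes share the same parent (both top-level counts)."""
    return parent_of(tree, a) == parent_of(tree, b)

# 'PATIENT' and 'SCHEDULE' appear together in narratives, so they are siblings:
assert are_siblings(TREE, "PATIENT", "SCHEDULE")
# The dental details stay localized as siblings under 'CASE':
assert are_siblings(TREE, "MOUTH", "HISTORY")
```

A narrative-driven reorganization then amounts to moving a node up or down until every pair of narrative-connected nodes passes the `are_siblings` check.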

Wednesday, December 27, 2017

Geometry - more verbal than visual

Exaggerating in order to make a point: We tend to think of Geometry as the science of visual data, and think of geometric objects as things we can see. I believe that is not correct; rather, Geometry is the description of what is visible.
In my example of a triangle, I acknowledge we can perceive sides, corners, angles, and such. We may also perceive size, a bit like 'area'. These words name things that are visual perceptions, but naming their combination a "triangle" is an added step; and so is saying something that relates these named perceptions to each other. We do not perceive the triangle; rather, it is the "invisible web of words and phrases" that makes up our understanding of it.

Saturday, December 16, 2017

Me and my rock piles

(picture by Gail Coolidge)

Wednesday, December 13, 2017

Visual Chatbots / Visual Conversations / Visual Conversational Agents

With the help of Python's Tkinter I have a program with a main event loop that iterates each time some text is entered on a form. The program updates a Narwhal class with the text, then displays a graphic of that class's internal data, and also responds with output text. So text goes in, the class is modified, and the program displays the class's contents.
For example:
It is a fun little concept to play with - language-controlled VI in its cheapest form. It's on GitHub.
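The text-in / update-class / display-state cycle can be sketched without the GUI. In this toy version (all names are invented; the real program and Narwhal's classes differ), a stand-in class plays the role of the Narwhal class, and the handler function is what the Tkinter `<Return>` event would call:

```python
class BouncyState:
    """Stand-in for the Narwhal class that the entered text updates."""
    def __init__(self):
        self.color = "white"

    def update(self, text):
        # crude keyword spotting in place of real Narwhal reading
        for color in ("red", "green", "blue"):
            if color in text.lower():
                self.color = color
        return f"The ball is now {self.color}."

def on_text_entered(state, text):
    """In the Tkinter version this would be bound to the Entry widget's
    <Return> event; here it is just a plain function call."""
    reply = state.update(text)
    # ...redraw the graphic from state.color here...
    return reply

state = BouncyState()
print(on_text_entered(state, "make the ball red"))   # -> The ball is now red.
```

The real program's main event loop does exactly this on each entry: route the text into the class, then redraw from the class's internal data.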

Narwhal reader architecture is not great

I admit I spent much of a year refactoring different reading capabilities into a sequence of "reader" objects - NWReader, NWNReader, NWSReader... It does not really matter, because the even higher-level classes use simple APIs to that layer and deal with their own data and structure. At the same time, the lower-level Narwhal classes are evolving. But the crappy middle layer isn't changing much.
Also, to be honest, the RecordSlotEvents is a replacement for the original core reading methods, and keeping the event sequence is equivalent to having a vault. But RecordSlotEvents is stuck looking at NARs of order <= 1 [waiting for a golden algorithm], and the original core reader vaults aren't perfect either.
So we limp along. It would be fun to get a programming language genius like the author of Python to come clean up the API.

Friday, December 8, 2017

Joking about old UI concepts

They take a subject which must originally be conceived linguistically, convert it into a tabular format, and then ask the user to abandon their linguistic constructs in favor of menus, radio buttons, etc. that connect to the table values. Now that we can access the direct linguistic description, it is a no-brainer that we should also abandon the tabular format.
But whatever the data format, it is a crime to ignore how people want to think about something and, I am telling you, that can only be about how they speak about it. (I might back away from this extreme position, but I think it is a practical assumption.)

Thursday, December 7, 2017

I am excited about TChats

Pieces of my NLU world are coming together. Especially around this idea of a TChat that supports the '+' operator and the SetData() method. The TChat is supposed to be a minimal "text in/text out" entity that uses a topic-specific tree and a set of topic-specific narrative patterns. The TChat extracts information from the text and formulates a response. The '+' operator allows creating more complex TChats from simpler ones. It is more than likely that binding the input and the output within the same object is not quite right; but never mind. What is important is this whole business of "what is the data?". If you can answer that, you have a start.
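A minimal sketch of the idea, assuming nothing about Narwhal's real TChat beyond what is described above (keyword lists stand in for the topic tree and narrative patterns, and the class and method names are only the ones the post mentions):

```python
class TChat:
    """Toy sketch of a minimal text-in/text-out TChat."""
    def __init__(self, keywords, reply):
        self.keywords = keywords   # stand-in for a topic-specific tree
        self.reply = reply         # stand-in for response formulation
        self.data = {}

    def SetData(self, data):
        self.data = data

    def respond(self, text):
        hits = [k for k in self.keywords if k in text.lower()]
        return self.reply(hits, self.data) if hits else None

    def __add__(self, other):
        """Compose two TChats: try self first, fall back to other."""
        combined = TChat([], lambda hits, data: None)
        combined.respond = lambda text: self.respond(text) or other.respond(text)
        return combined

margin = TChat(["margin"], lambda h, d: "Talking about the margin.")
shape  = TChat(["shape", "round"], lambda h, d: "Talking about the shape.")
abutment = margin + shape
print(abutment.respond("make it round"))  # -> Talking about the shape.
```

The point of '+' is that the composite handles anything either part handles, so complex topics are assembled from simple ones rather than written monolithically.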

The TChat construct gets close to the heart of an important truth about geometric objects (or for that matter any data type that can be diagrammed or formulated into a DB schema) - which is that a geometric object is composed of a few visual perceptions and a lot of language constructs. For example Euclid's 'triangle' relies on intuitions of points, lines, and angles - which are glued together with words. What does a TChat have to do with this? It is about constructing more complex objects from simpler ones using '+', in a way that mirrors something of what builds understanding of the geometric object. There would be a tree of words related to triangles, and a collection of things that are said using the vocabulary of this tree.

The exciting corollary is that designing a language understanding system about a geometric object is simply a matter of writing down a definition for the geometric object. Quite literally, the language understanding [software] has the same organization as the geometric object, because that object is linguistically organized in the first place! This is a huge simplification in the software design process.

So there. When you get harmony between your implementation of a topic and the natural way of thinking about it (sometimes called "business logic") you are on a roll.

But that is not all. Over the last couple days, a colleague named Serge Gomert from Belgium has been showing me what he has learned about NLU with Alexa and Microsoft LUIS. I just took a closer look and darned if Microsoft isn't headed in the right direction. But the good news is that they are still way behind Narwhal. And the last excitement is this: I can get a Javascript widget from Microsoft, designed to embed in a web page, and I am up and running with their voice-to-text accessible to me - where I can do my own NLU and, in particular, do TChats. Those Microsoft suckers may have given me a platform to leapfrog them.

Makes me want to quit my job and focus on creating a TChat wizard.

Parenthetically, I believe LUIS is way behind Narwhal for reasons big and small. The first is that their 'intent' construct seems to require MxN rather than M+N variations when there are M choices for one part and N choices for a second part of a two-part intent. Further, although they have a score, it is not signed - which to me is another way of saying that they do not have proper support for sentiment. Still further, they do not support narratives like relation, event, sequence, or becoming. Still further, they do not allow any form of narrative nesting. So they are at the bottom of the ladder. I am three rungs up, waiting for someone smart to figure out the "Golden Algorithm".
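The MxN vs M+N point can be made concrete with a toy example (this is illustrative Python, not LUIS or Narwhal code; the sentence template is invented):

```python
colors = ["red", "green", "blue"]   # M = 3 choices for part one
shapes = ["ball", "cube"]           # N = 2 choices for part two

# Intent-per-utterance (the MxN approach): one pattern per full sentence.
mxn_patterns = [f"show the {c} {s}" for c in colors for s in shapes]
assert len(mxn_patterns) == len(colors) * len(shapes)   # 6 patterns

# Part-wise recognition (the M+N approach): one matcher per slot value.
def parse(text):
    color = next((c for c in colors if c in text), None)
    shape = next((s for s in shapes if s in text), None)
    return color, shape

assert parse("show the green cube") == ("green", "cube")
# Only M + N = 5 values had to be listed, yet all 6 sentences are covered.
```

With realistic vocabularies the gap is dramatic: 50 choices per part means 2,500 enumerated intents versus 100 part matchers.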

So isn't there a commercial opportunity there? I mean if I could insert myself between Microsoft and the public, I might be able to afford a bigger retirement home but, more importantly, be able to continue feeling useful for a few more moments.

Wednesday, December 6, 2017

Oh My! Microsoft publishes Goodness of Fit score for matching text to intent

Just looked at their Luis.
By gosh, that is a goodness of fit score they are reporting. Seems the concept of an 'intent' is fast creeping up on that of a narrative. Maybe they'll get sophisticated, maybe they will beat me at my own game. I'll go to bed not knowing.

Tuesday, December 5, 2017

More about augmented reality and words

What I was saying previously about the "invisible web of words and ideas" surrounding a virtual object may point to a (I want to say) deep connection between the words and the actual geometric thoughts - each arranged in the same kind of structure.
An example of what I mean: working on the different parts of the data that define options for an abutment design (base shape and width, margin depths, core height and thickness), each part has its own linguistic world, with its own specific words and phrases that people use to describe the option alternatives. I am ending up building a "chatbot" for each lowest-level option, and building up more complex chatbots following exactly the same [hierarchical] structure by which the parts make up the whole of the abutment. The proposition then is: design the abutment language software in the same form and arrangement as the parts of the abutment.

In this view, the geometric shape is a linguistic composite, it is no more than what we can say about it. [I am not sure this is right, but that is the extreme version of the idea.]

Sunday, December 3, 2017

Exhortation to Dental AR/VR

It is not just about flying around a dental design in 3D. It is about doing it with another person.

Virtual and augmented realities are about collaboration as much as personal visualization. For that you need to understand how, as it hovers in mid-air, the abutment is surrounded by an invisible web of words and ideas. Those words and ideas need to be virtualized and augmented for an AR/VR concept to work. This is the rationale for considering a conversational agent as part of the AR/VR product concept.

Wednesday, November 29, 2017

Lighting a pipe with a computer mouse

An interesting error. Each such error gives important clues about what is going on in cognitive processing. In this case, I want to light my pipe and reach for - not the lighter - but the mouse.

I conclude that I reach out for an object that will help me do an action. If I am not paying attention, I can engage this at too abstract a level and end up picking up something suited to one of the other actions available to me at the desk. It is a lot like the previous post, where one response to a missing piece is to ask for it. In this case, a response to an unfulfilled action is to pick up a tool.
Anyway, when you do that, you could be filling in a narrative with VARs that are categories containing sub-VARs, whose more specific meaning is needed.
The error in picking up the mouse occurs when we climb the tree of available action types and follow the wrong branch while trying to get more specific. I make the error because I am thinking of other things and only apply part of the necessary energy in detailing the tree climbing.

Third game of the world series - a context challenge for chatbots

I have been puzzling about how to retrieve context in the classic Siri fail of:

ME: Who won the world series?
Siri: The Houston Astros
ME: Who pitched in the third game?
Siri: Here is what I found online about that.

The good thing is that context retrieval seems to follow its own mechanisms that do not need to be too entwined with existing code. It is a bit confusing that you need a non-language mechanism to answer the specific question "Who won the third game of the world series?" Language's only responsibility here is to recognize the incompleteness of "third game" and seek, in context, a specific value of 'game'. The non-language part would have to take over after that.

I also like the slowly dawning realization that when such things cannot be located in the recent context, the right thing to do is ask a question. Siri might have said: "Which game exactly?"
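The division of labor described above - language spots the gap, context supplies the referent, a non-language lookup takes over, and a question is the fallback - can be sketched like this (every name here is invented for illustration):

```python
# Toy context left behind by the earlier exchange about the World Series.
context = {"series": "World Series", "winner": "Houston Astros"}

def lookup_pitchers(series, game):
    # stand-in for the non-language step: a database or web lookup
    return f"(looking up game {game} of the {series}...)"

def handle(question):
    if "third game" in question:
        # language side: recognize that "third game" is incomplete
        series = context.get("series")      # seek the missing value in context
        if series is None:
            return "Which game exactly?"    # can't resolve it: ask a question
        # non-language side takes over from here
        return lookup_pitchers(series, game=3)
    return "Here is what I found online about that."
```

The point is that the context machinery sits beside the language machinery: the parser only has to notice incompleteness and name the missing type ('game' in a 'series'), not perform the lookup itself.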

Tuesday, November 28, 2017

Any chance Apple is lying about facial recognition working?

How could they get away with lying? Well, if they have the phone's location, they may be able to deduce who is holding it. Obviously it would be problematic to search the whole database for matching faces - so perhaps their "AI" does a location filter.

This thought was prompted by an anecdote where someone's cousin was showing off their new iPhone at Thanksgiving but the facial recognition software was not working, although it "usually does".

Friday, November 24, 2017

Building word-to-picture conversational agents

I want to call them chatbots  - for changing a scene and navigating within it. I built one for a colored ball placed next to a scale. That is the "bouncy" chatbot on Narwhal. I just started another one, called "MyAbutment". Here is what the graphics will look like:

This shows the placement of the margin with respect to the gum line, adjacent teeth, and/or implant interface. Left to right: subgingival, at the interface, supra gingival.
The solid lines, other than the abutment margin, represent the gum line, adjacent teeth, and opposing teeth.

Monday, November 6, 2017

The Expected-but-Missing in Context Retrieval

It is a bit of a puzzle how to look back over past word exchanges between a chatbot and its client (the context) to find the meaning of an indeterminate word in the current input, like "it" or "both". It turns out to be reasonably simple in the case of an incomplete input pattern with a missing-but-expected part that is referenced by the indeterminate word. You simply go back over the context and filter out everything except words of the expected-but-missing type. Then retrieve as many as are called for by the indeterminate word, if they are available in that filtered result.

Saturday, November 4, 2017

The Expected but Missing in a narrative pattern

I am particularly proud of these lines of code:

  if 0.25 <= node.GOF <= 0.75:
      # find the missing node (if there is just one)
      EBM = node.nar.getExpectedButMissing()
      if EBM == NULL_VAR:
          continue

      # convert the context into children of that node
      a = self.C.getAll()
      a2 = EBM.filter(a)
      if len(a2) == 0:
          continue


The routine then proceeds to grab as many entries from a2 as needed, if available, and inserts them into the incomplete narrative.

Wednesday, November 1, 2017

Bitcoin is a pyramid scheme

Early players got easy-to-get coins, later players got harder-to-get coins, and the value of the coins has grown. Sure it is geeky, but it is a classic pyramid scheme. Is that legal?

Wednesday, October 18, 2017

Another Monday Dawn at work in Waltham

You can see the DentsplySirona flag in the background:

Barbara Waksman - 2017

and in 2018:

Wednesday, October 11, 2017

Oct 9 Flowers

With colleagues Rohit Mohan and Farah Hashmi. Rohit is helping me build a chatbot, at work.

Thursday, October 5, 2017

Jet trail stuck in a cloud

An example, early in the morning going in the door at work:

Monday, October 2, 2017

Oct 7 Flowers

Getting near the end of the season:

Thursday, September 28, 2017

Flowers Sept 28

Getting hard to find any I want to pick:

Wednesday, September 27, 2017

Old dogs and impossible loves

Well, I have a fleece jacket that got too smelly to wear. When sweat dries and is not washed, it becomes very musty smelling, like mildew. It got to the point where I was in the car with the fleece next to me on the seat, smelling so bad I had to put it into the trunk. I figured one washing would clear out the BO smell but it had no effect at all - cuz our washing machine is too poor to work with. So I washed the fleece by hand in the tub, and so much filth came out that it formed erosion patterns on the porcelain. I did that several times, then used Head-and-Shoulders (it was there next to the tub) to wash it more. Then it stank of Head-and-Shoulders. So I washed it in rainwater a couple more times.

When I finally got rid of my BO smell, it still smelled bad. When I got rid of the filth it smelled perfumey from the shampoo. And when I got rid of those smells there was still the faint smell of old dog. I guess my wife, who wanted to throw out the fleece, finally succeeded in getting rid of all the residual smells by soaking the damn fleece in baking soda and leaving it in the sun.

But what is interesting, and the subject of this post, is how the smells were in layers. Getting rid of one strong smell only made it possible to smell the remains of some older smell. And the last smell to go was this old dog smell. Now something similar is happening in my brain as I train it to stop thinking about an impossible love relation. At first it was "red brain" - all signals were tied to one face. It took two months for those to die down, and a similar time for her being near [I know her footsteps] to stop hurting. And my brain definitely went into a quieter mode recently. I had a dream of being near and comforted by her presence even though my discipline, even in a dream, did not permit me to look at her face. But the recent dream shows a milder, residual form of obsession. As the red brain recedes, like the BO smell, it unmasks older, simpler, less obsessed versions. Today I am only sad when I hear her footsteps, but it does not hurt anymore and the face does not keep popping into mind constantly.

As the song "Ain't No Tellin'" says:
I'm standing at the station
Wondering where's that train?
I lost my darling
Now I've got all red brain.

Saturday, September 23, 2017

Simplified bread recipe

2 cups of King Arthur Flour plus 1 tsp of salt in a large bowl
1 cup + 2 tbs of lukewarm water and 1 tsp of yeast (e.g. Fleischmann's "Active Dry") in a small bowl
[note: the 2 tbs are reserved for helping the dough form a single lump at the end of mixing].

Stir the yeast into the water in the small bowl until dissolved. Then pour it into the large bowl and mix until most material comes off the sides of the bowl and the dough forms one lump. Dump it out on a surface to rest 5 minutes.

Use a large flat knife to fold the dough once, then again at ninety degrees. Then put it back into the (clean) bowl.

3-4 hours rise in covered bowl, remove and fold twice and return to (cleaned) bowl
3-4 hours rise in covered bowl, remove GENTLY fold twice and return to bowl
2-3 hours cooling in fridge, then remove GENTLY
Form loaf GENTLY and place on (cornmeal covered) pan
2 hours rise on pan
Bake in moist (spritzed) oven at 425 for 10 min.
Spritz again and continue bake at 410 for 18 min.

Wednesday, September 20, 2017

Who created "robotics" at USC?

Poking around I see that professor Bekey is claiming paternity for the USC Robotics Research Lab. I did not know about him but here is my story:

When I was an assistant math prof in the early 80's, I was working on visual shape recognition. Another assistant prof named Jamie Milner and I wrote some papers about the retina and lateral inhibition as a means for edge detection. One of the full-time professors - Bob-something - latched on to our thoughts, came up with a model for the nerve, and created a little "breakfast group" called "Visionery". It was supposed to be like "bakery" but a place to go to get information about human vision. Dean Warner got wind of it, got interested, and created the research lab as a consequence. He offered me a job but I had already accepted a position at U. of Rochester... and that was that.

Monday, September 18, 2017

Sept 18 Flowers. Sept 19

another vase:
(photo by Eric Cunnigham)

Sunday, September 17, 2017

Thursday, September 14, 2017

Korean TV hypocrisy

(Not that I mind, it's funny.) On the show I am watching, "Misaeng", they make a point of showing how male-chauvinistic some of the workers at the company are. Yet when they cut to the female character, the music switches to tinkly pink pony music.

Thursday, September 7, 2017

Requiem For An Impossible Love

Anyone writing the Dies Irae part of a Requiem is confronted with how to match musical tempo with a poetic tempo of rhymed triplets. Music wants to be in rhymed couplets or such. So, the composer has a problem. A low grade solution is to repeat the last line of the poem as part of a second musical couplet, or simply repeat the musical couplet with an accompaniment, while the voice remains silent. Sometimes you get lucky and the melody simply takes over and the words fall where they do, and it works.
Such is the case for my first Requiem - both the low-grade version and the lucky one that occurs in "Recordare Jesu Pie". But now I am writing a new Requiem. Sponsored in part by a lack of creative output in other directions, and also by a sad example of "no fool like an old fool", my emotions are at a high point and I am writing music to compensate. And this time I solved the rhyming triplet problem in another way that works but also has a rational explanation:
There is a device you can use in melody where you echo a last phrase, extending the musical tempo by an extra measure or two. I found that slowing down the relation between syllables and musical beats, at the same time as doing such an echo, allows the obnoxious third line to be spread out over an extra unit of the musical tempo. Surprisingly, this redefines its relation to a correct musical tempo and seems to work.

Wednesday, September 6, 2017

Wednesday, August 30, 2017

More about adjective order - some are more noun-y than others

Consider the phenomenology of it: you can take two random adjectives that apply to the same kind of noun and there is always a sense of which comes first. If you ask me rationally: which comes first: 'kind' or 'athletic' I would not know until I tried to say it: a kind athletic person. Sometimes there is a strong sense of order and sometimes a weaker one. For example, I find the relation between "fragrant" and "red" to be fainter than that between "big" and "red". But the sense of order is always there.
Also, it feels like for any pair of adjectives, one will be more "nouny" than the other. So we try "fragrant redness" and "red fragrance". I note that the former could be, but that the latter could not be. Let's try another: kind/old becomes a "kind oldness" or an "old kindness". Ignoring the pun, the latter is not possible; whereas the former would be stated as a "kind elder". Is "old" more nouny than "kind"? Why is "red" more nouny than "fragrant" and much more nouny than "big"? Or, using the other explanation: a redness could be big but a bigness cannot be red.

For whatever reason, this order of adjectives is there in our head - a direct linear ordering. Coming back to the idea that some last-letter/first-letter combos are easier to say than others: there would be evidence for it if we took random words from different adjective categories, noted the order chosen between one word of each, and noted the ease of speaking it.

Tuesday, August 29, 2017

Another (weird) hypothesis about adjective order

Thinking a bit desperately about "big red" working but "red big" not working, I am noticing how the "joint" between the words is "gr" in the first case and "db" in the second - it is no wonder one order sounds better than the other.

So suppose that adjective order was determined by fluidity at the word-word joints. Since there is a more or less real established order of adjective types, this could only be the case if the words in those categories typically had endings and beginnings that tend to lock together in a tongue-pleasing way.

Probably more nonsense. But then I am thinking about "kind erudite" versus "erudite kind" and what possible general rule could explain the very distinct sense that the first is correct and the second is not? It would have to be something about the muscles engaged.

Flowers

Monday, August 28, 2017

A hypothesis about adjective order

It reflects the subset ordering of the categories of the words. What is important is what the categories can apply to, as opposed to which value from the category is being spoken of:
Ah Ha! I have been looking for an example of where adjective order is not strongly felt. How about "a red fragrant flower" versus "a fragrant red flower"?
NOTE: THIS REALLY DOES NOT WORK: the sweet red apple. The taste-able does not contain the visible. Could it have something to do with "big" and "sweet" being more subjective than "red"? I continue to be at a loss.

Thursday, August 24, 2017

Adjective order is puzzling

It truly is. Joe and I spent an hour or so trying to figure it out. In the end it remains unclear if we just learn things in an order, like a "sweet red candy" but not a "red sweet candy" or whether there is a logic behind it. I prefer to believe there is - I just haven't figured it out.
In the case of 'sweet', it modifies nouns in the category of thing in my mouth. Those are entirely within the applicability range of red/not_red. So it is not about what is true or false, rather about how "sweet" applies to a subset of what "red/not_red" applies to.

Barbara and Peter ~1980 1982

Tuesday, August 22, 2017

Wednesday, August 16, 2017

Monday, August 7, 2017

Juxtaposition - a most elementary of mental operations

I cannot get any further back in my mind than to the place where I posit an entity by naming it or in some other fashion. And soon after that comes the idea of juxtaposing multiple entities, where I bring up several things in "my mind's eye". When I juxtapose two things, one of several events occurs: compatibility, alternation, or grouping. [Or sequencing.]

When I juxtapose two noun objects, they sit side by side.
When I juxtapose two types of attribute - like red and square - they may form a composite attribute.
When I juxtapose two values of the same attribute - like red and blue, or circular and square - they cannot merge and, at best, split a prior object into parts.

Some of the point here is that your VARs that act as parents over a collection of children should be organized so the children are one of the above: alternative, compatible, or groupable. Per what I was saying about AODiagrams.

Les Fleurs du Jour

Saturday, August 5, 2017

Context-grabbing VARs

I don't know if I have the gist of it but consider this AODiagram interpretation of some basics:
If Narwhal is using a concept tree whose nodes could be labeled as (parents of) compatible, group-able, or alternative children, then I could implement code that checks recently mentioned VARs to see if any are found under a parent labeled in one of those ways.
The purpose of this is to manage indefinite words that refer to context (e.g. 'it', 'both', 'compare', 'choose', ...). "It" and "both" refer to group-able entities. "Both" also refers to compatible entities. "Choose" and "What is the difference" refer to alternative entities.
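A sketch of that check, with everything invented for illustration (the labels follow the post, but the lookup tables and `resolve` helper are not Narwhal's real API):

```python
# Parent labels per the AODiagram idea: each parent node is labeled with
# how its children relate to each other.
PARENT_LABEL = {"COLOR": "alternative", "TEETH": "group-able"}
PARENT_OF = {"red": "COLOR", "blue": "COLOR",
             "molar": "TEETH", "incisor": "TEETH"}

# Which parent labels each indefinite word can refer to.
REFERS_TO = {"it": {"group-able"},
             "both": {"group-able", "compatible"},
             "choose": {"alternative"}}

def resolve(word, recent_vars):
    """Scan recently mentioned VARs, newest first, for one whose parent
    carries a label that `word` can refer to."""
    wanted = REFERS_TO.get(word, set())
    for var in reversed(recent_vars):
        parent = PARENT_OF.get(var)
        if parent and PARENT_LABEL.get(parent) in wanted:
            return var
    return None

context = ["molar", "red"]          # most recent mention last
print(resolve("choose", context))   # -> red   (under an 'alternative' parent)
print(resolve("it", context))       # -> molar (under a 'group-able' parent)
```

So the indefinite word never matches a VAR directly; it matches the label on the VAR's parent, which is what makes the scheme topic-independent.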

Wednesday, August 2, 2017

The Vatican should be coming out with a chatbot any day now

So you can talk to God, or maybe the voice of God.

Monday, July 31, 2017

Flowers - July 31

Might as well keep a record:

Wednesday, July 26, 2017

Flower arrangements at work

A month or so ago a colleague and I moved a table and chairs into a spot in the 2nd floor atrium, to create a little place where people could sit and talk. Later I started putting wildflowers in a vase on the table, and I have been doing more and more careful arrangements. Here is my most recent one, which I like a lot:
Lupine, Queen Anne's Lace, and some kind of ground pea.

In the form of wampum:

Monday, July 24, 2017

Narwhal makes it easy

I gotta brag. Programming a chatbot with Narwhal turns out to be a breeze. Everything falls into place easily, expansions occur as needed, in a balanced and maintainable way, and the ideas feel mellifluous.

Saturday, July 22, 2017

An asymmetry of gender grammar

I note that "him" is distinct from the possessive "his", but that "her" is not distinct from its possessive form. So it is easy in English to distinguish between talking about a man and talking about his possessions, but it is not easy to do that with a woman - as if it did not matter.

Wednesday, July 19, 2017

Topic/Subject matter in language is the same as 'key' in music

From the edge of sleep comes this insight: that jumping without modulation from one key to another is very similar to jumping from one topic to another. Modulation in music would have its equivalent in language in the 'sequitur' from one topic to another.

Monday, July 17, 2017

Another joke about MBL Friday Evening Lectures

Went to a Friday Night Lecture where one of the speaker's favorite methods was to "find out what happens in mice". I was dozing off thinking "what happens in mice stays in mice".

Friday, July 14, 2017

Overheard at the beach

As a guy, I really don't know what is in women's minds, or what sorts of things they talk about when guys are not around. So it was a treat at the beach where I was standing up to mid-chest in water, and two teenage girls paddled by, talking and not noticing that I was there. They were looking over at a rock [Paradise Rock - we're at Stoney Beach in Woods Hole] where some people they knew were diving in and playing around. The dialog was this:

 - Who are the boys?
 - No idea
 - Maybe cute?
 - Let's go find out

Tuesday, July 4, 2017

The Future of Digital Dentistry is "mind share"

I want to tell a certain stalwart of the Dental industry (for whom I have a certain affection) that this is not a winning business model:
Compete by having a more sincere desire than competitors to do the right thing.
It is a way to guarantee a stable stock price, or a slowly declining one. However, to triple the stock price and compete with the companies currently earning 5x as much in the Dental industry (namely Patterson and Schein), this stalwart company must take on the business model of:
Occupy the minds of more dentists, labs, and customers, than any other company.
Now: you can only occupy the mind with a product that has the capacity to occupy minds - namely a communication product, not a family of well marketed dental parts. Hence dental communication must become the product.

Tuesday, June 27, 2017

User's selecting a personality type for the chatbot they talk to

I was thinking it would not be hard to have a number of auxiliary responses that can be peppered into an otherwise straight chatbot dialog for product ordering. By grouping such responses into different types you could easily achieve a sense of "businesslike" (no added phrases); "playful" (added phrases are jokes); or "abusive" (added phrases make fun of the user).
No sooner thought than the obvious comes up: how many people will choose an abusive dialog?
Joking aside, another type is "helpful" (added phrases are recommendations and pieces of relevant information).
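A minimal sketch of the peppering mechanism, with the personality names taken from the post and everything else (the phrases, the `respond` helper, the probability knob) invented for illustration:

```python
import random

PERSONALITIES = {
    "businesslike": [],   # no added phrases
    "playful": ["Fun fact: narwhal tusks are actually teeth."],
    "helpful": ["Tip: most customers pick the standard base here."],
}

def respond(core_reply, personality, pepper_chance=0.5):
    """Return the straight ordering-dialog reply, sometimes with an
    auxiliary phrase appended, depending on the chosen personality."""
    extras = PERSONALITIES.get(personality, [])
    if extras and random.random() < pepper_chance:
        return core_reply + " " + random.choice(extras)
    return core_reply

# 'businesslike' never adds anything:
print(respond("Order received.", "businesslike"))   # -> Order received.
```

The ordering logic never changes; personality is purely a layer of optional phrases grouped by type, which is what makes it cheap to offer as a user setting.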

Sunday, June 25, 2017

What is a conversation?

There are surely many ways to look at "what is a conversation?" One would be from the outside where you classify conversation modes into "harmonious" or "discordant". Another might be to acknowledge that different conversational roles are possible for the conversants. So free-form discussion between "equals" is different from instruction from teacher to pupil, or from boss to assistant.

Perhaps a flaw in generic chatbot thinking is that no such role definition is needed, or that discussion between equals is the correct default. But for a good ordering chatbot, my vision is that it takes instructions from the boss, asks intelligent questions, and offers helpful reminders and suggestions. It does not need any more of an agenda than to keep the discussion focused on an order being completed and moving forward.
Update: Another conversational mode is where two people share an agreement - whether about something they observe together, or something they speculate on - painting pictures for each other.

Friday, June 23, 2017

We still look ok

 
I guess I like this picture. Taken by Gail Coolidge.

Wednesday, June 21, 2017

Narwhal as a chatbot platform

I put Narwhal in place with the idea of inventing a computer language in which to write English language recognition software. It works fine and I can build a chatbot with it. However, having a computer language is not the same as knowing how to write good programs in it. This gives me a new hill to climb - a hill of how topics can shift dynamically during a conversation, and how that becomes an architecture supported by Narwhal.
As I think about it, it all amounts to the question: what is a conversation? [Which has not yet been within Narwhal's scope.]

Monday, June 19, 2017

Does the tree know about the branch?

I think the tree knows about the branch but the branch does not always know about the tree.

"Liquid, out of my body"

I had a rare moment before being completely awake this morning, in the shower, when a thought was not yet gone by the time I made a mistake based on it - causing me to focus and recall the details of the thought. You are always wondering if you think only in pictures, or only in words, or whether there are nonverbal, non-image modalities. But usually the thought is gone by the time you get around to considering these questions.

In detail: I have an infected ear that I was trying to soothe by standing in the shower and angling my head so the shower spray could reach as far as possible into my ear. I was also brushing my teeth - which is a little awkward with your head tilted a bit on its side. So when it came time to empty my ear, un-tilt my head and have the water drip out, I made a mistake: I spat out the toothpaste foam in my mouth instead. Immediately I set about looking back at the thought I had had that created the plan so poorly executed: the thought was a collection of pictures, held together in a narrative structure: I picture "Liquid" as a vesicle in my head, and "out from body" is a channel opening from that vesicle to the outside of my body, and the liquid escaping that way. For some reason I engaged the mouth "vesicle" rather than the ear "vesicle".
I do not know if "Liquid, out of my body" is a combination of ideas already resident in my mind, ready at any time to be used in situations of expelling liquids, as a fixed part of my planning/action repertoire. Or if such a plan could be formed on the fly, meeting the needs of the situation creatively. In either case, the thought takes the form of pictures in a matrix of narrative.
So there Wittgenstein!

Sunday, June 18, 2017

Rethinking chatnode architecture

As long as it is the conventional way to do something, it is probably wrong. Or at least it could be done better. You can look at the diagrams I posted previously and they may even be correct but what is wrong is the idea that these are "chat" nodes. I think the more correct idea is "data" nodes.
But here is the best insight: that responses can be based on which combination of data nodes are activated by a given input text.
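In sketch form (everything here invented for illustration): the input text activates some set of data nodes, and the response is keyed by which combination lit up.

```python
# Data nodes are activated by keywords in the input text; the response
# is chosen by the *combination* of activated nodes, not by any one node.
DATA_NODES = {
    "TOOTH": ["tooth", "molar", "incisor"],
    "SHADE": ["shade", "color", "a2", "b1"],
    "MATERIAL": ["zirconia", "emax", "gold"],
}

RESPONSES = {
    frozenset(): "What would you like to order?",
    frozenset({"TOOTH"}): "Got the tooth number. What material?",
    frozenset({"TOOTH", "MATERIAL"}): "Tooth and material noted. What shade?",
    frozenset({"TOOTH", "MATERIAL", "SHADE"}): "That completes the order.",
}

def activated(text):
    """Which data nodes does this input text light up?"""
    words = text.lower().split()
    return frozenset(n for n, kws in DATA_NODES.items()
                     if any(k in words for k in kws))

def respond(text):
    return RESPONSES.get(activated(text), "Tell me more.")

print(respond("a zirconia crown on that molar"))
```

The point is that the chat logic lives in the response table, while the nodes themselves are just data.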

Friday, June 16, 2017

Peter Waksman 1977 - 2017

My favorite picture of me:
And here we are 40 years later

Monday, June 12, 2017

[Better] Chat node architecture

Here the "responder" R is an object with a member R.response(). There is only one responder in the program so, if you are in its namespace, it can be accessed directly as a global variable and its response() method can be swapped for different things.
Things inside the namespace can pass R as an argument to sub chats outside the namespace. Inside the namespace, you can set the "global" R.response() back to a parent value after sub-processing. Outside the namespace you also set it back to your parent, but using a passed-in version of R rather than accessing a global.
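A bare-bones sketch of the swappable responder (hypothetical names, not Narwhal's actual API):

```python
# One global responder R; its response() member is swapped for sub chats
# and restored to the parent behavior afterwards.
class Responder:
    def __init__(self):
        self.response = self.default_response  # swappable member
    def default_response(self, text):
        return "home: " + text

R = Responder()

def yes_no_response(text):
    return "yes/no: please answer yes or no"

parent = R.response            # remember the parent handler
R.response = yes_no_response   # hand control to the sub chat
assert R.response("maybe") == "yes/no: please answer yes or no"
R.response = parent            # restore control after sub-processing
assert R.response("hello") == "home: hello"
```

Because Python methods are just attributes, the swap is a plain assignment; no subclassing is needed.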

Friday, June 9, 2017

[WRONG] ChatNode architecture

A [WRONG] rudimentary version:
It should be understood that a chat node has a member called Responder that is currently responsible for handling text in and out. The outermost Responder is always in use but sometimes is set to one of the sub nodes or sub-sub nodes. It is still a brittle architecture.

Here a chat node "home" constructs a sub node during its __init__(). When it constructs the sub node, it passes itself into the sub node as a member 'Parent' of the sub node. Later events can pass control to the sub node via setResponder() and back to the home via restoreControlToParent().
In this way the home node can decide when to handoff responsibility to the sub chat, but it might not get control back. In practice the home node delegates a sub topic to the sub node and gives it responsibility for what to do next.
Example: a Yes/No sub chat can be initiated from a home node, store the result, and return control to the home. It is up to the home node to not lose the context that the question applied to. ETC ETC.
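A toy sketch of that hand-off; the class and method names follow the post, but the internals are guesses:

```python
# A home node constructs a sub node, passing itself in as Parent.
# Control moves to the sub node via setResponder() and comes back via
# restoreControlToParent().
class ChatNode:
    def __init__(self, parent=None):
        self.Parent = parent
        self.active = self          # which node currently responds
    def setResponder(self, node):
        self.active = node
    def restoreControlToParent(self):
        if self.Parent is not None:
            self.Parent.setResponder(self.Parent)

class YesNoNode(ChatNode):
    def respondText(self, text):
        answer = text.strip().lower()
        if answer in ("yes", "no"):
            self.result = answer
            self.restoreControlToParent()   # hand control back to home
            return "noted: " + answer
        return "please answer yes or no"    # app-specific non-answer handling

class HomeNode(ChatNode):
    def __init__(self):
        super().__init__()
        self.sub = YesNoNode(parent=self)   # pass self in as Parent
    def askYesNo(self):
        self.setResponder(self.sub)
    def respondText(self, text):
        if self.active is not self:
            return self.active.respondText(text)
        return "home heard: " + text

home = HomeNode()
home.askYesNo()
print(home.respondText("maybe"))   # sub node keeps control
print(home.respondText("yes"))     # sub stores result, returns control
print(home.respondText("hello"))   # home is back in charge
```

As noted above, the home node decides when to hand off, but it is the sub node that decides when to give control back.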

Thursday, June 8, 2017

Why you want a sub-chatnode to come and go

A good example is where you need to ask the client a yes/no question. It would be nice to hand-off control to a chatnode designed for that. How it handles non-answers could be app specific.

Tuesday, June 6, 2017

Doing some more coding

So I coded a ChatNode base class in Python that has a respondText() method. The object contains a set of underlying NARs and wants to be a "center of language processing" for a particular topic. Topics and sub-topics appear to be organized like VARs and so maybe the ChatNodes should be organized in the same way. But what is the natural mode of coexistence for a collection of ChatNodes? I think it might be smart to think of it more in terms of the pieces of data that are maintained within different customized ChatNodes and how those pieces of data are interrelated.

I give you: the abutment order, which is part of a case, which is part of an account. And this order has a past, a present, and a future. And in this order there may be an implant, and an abutment, and a crown or bridge. But the order may also exist in time and space, and even contain a whole discussion of pros and cons, telephone calls back and forth, dentists consulted, box contents, and advice about handling. Not to mention questions.

But that is it. It is doable because, luckily, the most efficient way to store all those intersecting attributes of the "order" is in the narratives that you use to capture the request.

Doing some coding

Man, I did some hard work yesterday and introduced integer variables, which I call "unknowns", into the text, VAR, and NAR handling for Narwhal. This comes hard on a previous week where I introduced recordSlotEvents() and got to 1-segmented text. This segmentation shows the data that has been accumulated on the way through a sentence by a given NAR. So today I introduced floating point unknowns and I am proud that the architecture of Narwhal allows adding unknowns in just a couple of afternoons, after work.
So, when I use recordSlotEvents() for a NAR with a slot containing an integer unknown (INTx) - sure enough - the integer entered in the text re-appears in the data. Same for a floating point unknown (FLOATx).
The reason we need this in Narwhal is so it can handle communications with information that is to be relayed rather than absorbed. (I guess it is like delayed recognition.) This is the case for the chatbot I am building for work (at home).

I won't take the time to say this correctly but there is an interesting thing about the "boundary" between when you can get away with a collection of constants versus when you have to deal with actual unknowns. For tooth numbers, input by a user, I can look for the words "one", "two", up to "thirty two" and I can have each one in my program as part of a VAR tree of constants. But I cannot do the same thing for an arbitrary 8 digit number - not for a fundamental reason but for one of convenience. If you had the time, you could avoid using "unknowns".
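In toy form, the boundary looks like this: a closed word list you can enumerate as constants, versus a pattern-matched "unknown" for values you can't enumerate. (The names here are mine, not Narwhal's.)

```python
import re

# Small closed sets can be enumerated as constants; an arbitrary numeral
# like an 8-digit case number needs an "unknown" matcher instead.
TOOTH_WORDS = {"one": 1, "two": 2, "three": 3}   # ...up to "thirty two"

def read_tooth(token):
    """Closed vocabulary: feasible as a VAR tree of constants."""
    return TOOTH_WORDS.get(token)

def read_unknown_int(token):
    """Open vocabulary: match any 8-digit number as an integer unknown."""
    return int(token) if re.fullmatch(r"\d{8}", token) else None

print(read_tooth("two"))             # 2
print(read_unknown_int("20170523"))  # 20170523
```

Thirty-two entries is an afternoon of typing; ten million is not, which is the "convenience" boundary in practice.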

Friday, June 2, 2017

If the first twenty years of searching for data....

were about finding something somewhat "relevant" to your search, then perhaps the next twenty years of searching for data will be about providing structured information for your search.

A mission statement

I thought of this because it made sense for certain dental industry stalwarts for whom I have a certain affection:
To get customers the knowledge and information they need to place a correct order

I was thinking that a way to do that is:
To be inside the minds of all the Dental and Dental Lab professionals

 I was thinking a way to do that is:
To be on every desktop in a Dental Office or Lab, as a daily tool

Actually I was thinking of the "wave" of communication of the recent past: from phones to the internet, to social media, and [I hope] to language interfaces. It suggests that the dynamic and successful Dental Product vendor will be the one riding the wave. Which means it succeeds by enhancing the types of communication - like training, "other customers also ordered this", good option management during ordering, etc. - in ways that are possible through the newest technologies: not just the internet, not just social media, but voice activated assistants.
For example your social media strategy needs to be coordinated with your training and information sharing functions.
At the root of that is a piece of software on the desktop of everyone who seeks dental information in general. The "Smart Catalog". That would be a steep slope to climb: to be better than Google.

Thursday, June 1, 2017

Design informed by the nature of reality

It is not every day that in the middle of program design you stop to consider the nature of reality. But that happens with this language programming stuff. I am thinking about "slot events" where a partially filled NAR "finds" a new slot to fill, in the text it is reading.

Wednesday, May 31, 2017

Math about Language is an "unsearchable expression" in Google

An unsearchable expression is an expression which, when entered into Google search, is always misinterpreted. Even when it is not completely unsearchable, if the answer you seek is several search pages deep it might as well be. My wife's name used to be pretty unsearchable. Such is the case when I search for discussions of the many interesting math problems associated with language and...I cannot even tell if such exist because the phrase "math about language" is (mis)interpreted as "language + math" and Google insists that what is relevant are discussions of math as a language. Which is a poor interpretation of the throwaway word "about".

So I am somewhat sorry for Google. They appear to have no f*-ing clue about the difference between search and single word association. Oops, another emperor without clothes! On the other hand perhaps 95% of the world is happy with single word association.

Tuesday, May 30, 2017

The Mind's Eye - Who has the patent on voice controlled VR?

There is no internal mental picture when looking at an external picture. So the key to getting into someone's mind's eye is to have them look at something external.

The "mind's eye" is a surprisingly unused term. So let me copyright its use in describing or naming a VR/AR display that is language controlled - so you see what you speak about. I wonder who has the patent on voice controlled graphics and VR/AR in general? If nobody does, then I am claiming it right here. Sure they have voice controlled navigation but not voice controlled definition and changes to the content of the scene.

Imagine that as I speak of faeries they dance before my eyes?

Monday, May 29, 2017

Augmented Reality Design - using voice commands

It seems counterintuitive that it might be easier to do a 3D design with language than with 3D interactive tools. But as long as the object being designed has well named parts and relationships it might be easier to say something. Like for an abutment where you say: "make the core a little thicker"; "increase the mesial tilt"; or "make the lingual margin a little deeper".
So to go whole hog: imagine wearing AR goggles but using voice to design an object.
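A sketch of how such spoken phrases might be parsed into design edits - the part names and parameters are entirely invented:

```python
# Map a spoken design phrase onto a (part, parameter, direction) edit.
PARTS  = {"core", "margin", "tilt"}
PARAMS = {"thicker": ("thickness", +1), "thinner": ("thickness", -1),
          "deeper":  ("depth", +1),     "shallower": ("depth", -1)}

def parse_command(text):
    """Return the edit implied by a phrase, or None if nothing matched."""
    words = text.lower().split()
    part  = next((w for w in words if w in PARTS), None)
    param = next((PARAMS[w] for w in words if w in PARAMS), None)
    if part and param:
        return {"part": part, "parameter": param[0], "delta": param[1]}
    return None

print(parse_command("make the core a little thicker"))
```

The filler words ("make the", "a little") are simply ignored, which is exactly why a well-named part vocabulary does most of the work.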

Negatives are a reversed polarity without sentiment

I was just thinking about a Narwhal app where there is no use of the exclusive '|' for VAR building. So all the VARs have exclusive = False and there is no sentiment involved. However, in processing sentences with NOT/BUT and such things, there will still be a sign value associated with a filled NAR. So in this case those controls behave more like booleans.
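In toy form, the sign can be just the parity of negations, with no sentiment anywhere:

```python
# NOT flips a boolean polarity on the filled nar even when no VAR
# carries sentiment; two negations cancel.
NEGATIONS = {"not", "no", "never"}

def polarity(text):
    """+1 or -1 depending on the parity of negation words in the text."""
    flips = sum(1 for w in text.lower().split() if w in NEGATIONS)
    return -1 if flips % 2 else 1

print(polarity("the margin is ready"))      # 1
print(polarity("the margin is not ready"))  # -1
```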

Sunday, May 28, 2017

Augmented Reality Dialog: Sharing a mental picture during a conversation

There is a really deep idea here but I only know it from an example.

Suppose a customer wants to talk to someone about an abutment design. They have the conversation and can agree and then share a sketch of the abutment. The design sketch does not update during the conversation but rather the designer renders the sketch afterwards and turns it around as quickly as possible to show the customer.
Now suppose that rather than a designer, the customer is interacting with a chatbot that could perform the rendering automatically as the customer speaks, to confirm an understanding of what was said. Then this would be like the customer having a conversation (in this case with an automated assistant) and seeing or sharing the mental picture as part of the conversation.

With the possibilities of Augmented Reality (AR) and Virtual Reality (VR) these days, it is not too far a leap to imagine two people conversing this way and using headsets to display a shared concept formed during the conversation. Unlike a game the view updates per the conversation not per actions and the scripted scenario of a game. We'll call this Augmented Reality Conversation - as if it existed.

Saturday, May 27, 2017

Free Fall Coding

Bug free design and good procedures such as unit testing, code reviews, and QA are considered good software development practice. But I want to say that sometimes it is more important to create the bugs than to avoid them. There is an even wilder approach to software development which we can call free fall development. You do no regression testing because you want regression. And then, even worse, you operate with no version control. This allows you to totally break everything routinely and deal with the panic. Such shakeups are good for the organic and robust development of new ideas.

I don't seriously propose free fall development but if you want a hip software team, they should incorporate some of its principles.

Tuesday, May 23, 2017

I am a victim of beauty

I am a victim of beauty
Held hostage by obsession

Monday, May 22, 2017

Hunting for rock piles everywhere else

I got good at looking for arrowheads here in the near barren fields of Concord - a tough regime. So now I am able to find stone tools anywhere on the planet. In most places the people do not know how to see such things so you can find hand axes in the roadside debris.
But rock piles are something that I assume have much less global span than stone tools in general. The same principle holds: I have learned how to see something that most people do not know how to see. It leads to wondering: where else in the US are there rock piles? They could be pretty inconspicuous. Can they be found coast to coast?

Does each narrative structure support a fixed set of queries?

Suppose you have a reader with nars like X_/A or X->Y. After a read operation you ought to be able to query the reader in forms that correspond to the nature of those nars. Some examples:
if X->Y is in the reader you should be able to ask: where, when, how? For X_/A you should be able to ask about intensity or some such. 

Update: Since a NAR has four sub-nars (thing, action, relation, and value) you should be able to say:
nar.Thing(), nar.Action(), nar.Relation(), or nar.Value(). Each of these is a query that should return VARs that fill the identified slot. This gets confusing for nars with ORDER>1. Working on it...
Update June 2: Got it working nicely via a concept called "lastConst".
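A toy version of those per-slot queries - the method names come from above, but the internals are invented:

```python
# A nar has four sub-nars (thing, action, relation, value); each query
# method returns whatever fills the identified slot.
class Nar:
    def __init__(self, thing=None, action=None, relation=None, value=None):
        self._slots = {"thing": thing, "action": action,
                       "relation": relation, "value": value}
    def Thing(self):    return self._slots["thing"]
    def Action(self):   return self._slots["action"]
    def Relation(self): return self._slots["relation"]
    def Value(self):    return self._slots["value"]

# After reading "Jon shot a goose", an event nar might be queried:
n = Nar(thing="Jon", action="shot", value="goose")
print(n.Thing(), n.Action(), n.Value())
```

An X->Y nar would fill thing and action; an X_/A nar would fill thing and value - so the available queries do follow the nar's form.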

Sunday, May 21, 2017

Scalable expectation

I was thinking that expectation might have an intensity scale. So I could be driving and the story is driving, driving, driving or (get there)*, (get there)*, (get there)* and it might be reasonable to consider this on a scale. The idea is that you could be a little impatient.
Then I was thinking about the concept of "home" and how being at "home" has the property that I can relax and stop thinking about how to change my location. If I am not home I am trying to get home. On a side track, I think that 'home' has a special place in narrative structure, like 'I'.

Slot Events, Short Commas, and the pursuit of the Golden Algorithm

The "Golden Algorithm" is the correct (but elusive) mechanism for filling N-segmented text, as I was discussing here. So I have been thinking harder about low level things and a couple of flaws in previous thinking are as follows:
  • When a slot is to be filled and has already been filled, it is a kind of error condition, and simply overwriting the slot while closing one's eyes to it must be wrong.
  • When a NAR gets completed it should be vaulted in association with the current segment index, not the index of the subsequent control.
  • There is no mechanism for saying: enough time has gone by, let's vault if we have something.
Anyway, a slot event is where we go to fill a slot and the event results in a change according to the prior state of that slot as well as the others (it could be filled or not, they could be filled or not). A short comma is one whose scope of prior text has a small number of words (below some threshold). The short comma is helpful when handling lists, like lists of word/value pairs.

As for the Golden Algorithm, it requires that slots of higher level narratives be scored according to the scores for the lower level narratives in the slot. That requires a lot more local vaulting of partial results and a different feel. So those new ideas are coming, along the way.
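A sketch of what a slot event might look like, under the assumptions above (a refill is a signal rather than an overwrite, and a completed NAR vaults at the current index):

```python
# Filling a slot produces an event that depends on the prior state of
# that slot and of the others, instead of silently overwriting.
def fill_slot(slots, name, value):
    """Return an event describing what happened when filling `name`."""
    if slots.get(name) is None:
        slots[name] = value
        event = "filled"
    else:
        event = "refill"          # already filled: an error-ish signal
    if all(v is not None for v in slots.values()):
        event = "complete"        # whole nar filled: vault now, at the
                                  # current segment index
    return event

slots = {"thing": None, "action": None}
print(fill_slot(slots, "thing", "goose"))    # filled
print(fill_slot(slots, "thing", "duck"))     # refill
print(fill_slot(slots, "action", "cooked"))  # complete
```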

Friday, May 19, 2017

Here is a joke: replace government with AI

It is a joke because governing is quite complicated and AI is quite incompetent. But you do see several articles a week about how "AI will revolutionize X"; so I bet you could get away with writing an Onion, tongue-in-cheek article where X=government.

Wednesday, May 17, 2017

Combinatoric complexity of NLP versus simplicity of Narwhal

Taking an absolutely canned example of someone wanting to order a product of name X. Here are some simple forms:

order X
I need to order X
I want to order X
we want X
please make me an X

This small variety already stresses out the combinatoric, part-of-speech-based match algorithm, and never comes to grips with the concepts involved: AGENCY {I, me, we}; MOTIVE {need, want}; dull words {to, an, please}; the ORDER {order, make}; and the undefined object X. So in Narwhal (which doesn't actually support variables like X, but let's pretend and call it 'x') you write
GIMME = attribute(AGENCY,MOTIVE) 
followed by
event([GIMME], x, ORDER)
This gets a score of 1.0 on every example except "we want X". Since that sentence is missing the ORDER verb, it gets no score under the current implementation. One workaround is to add a narrative attribute(GIMME, x), which does get a 1.0.
So at the expense of every keyword list being in the right place and thinking through the necessary narratives, the Narwhal programmer can accomplish a lot in a very few simple lines that require them to actually understand the concepts being programmed as concepts not as words.
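A toy, non-Narwhal rendering of the same scoring idea, with plain keyword groups standing in for the VARs:

```python
# A narrative is a sequence of concept groups; the score is the fraction
# of groups found in the text, ignoring the dull words entirely.
AGENCY = {"i", "me", "we"}
MOTIVE = {"need", "want"}
ORDER  = {"order", "make"}

def score(text, narrative):
    """Fraction of the narrative's concept groups present in the text."""
    words = set(text.lower().replace(",", "").split())
    hits = sum(1 for group in narrative if words & group)
    return hits / len(narrative)

gimme_order = [AGENCY, MOTIVE, ORDER]   # like event([GIMME], x, ORDER)
gimme       = [AGENCY, MOTIVE]          # the fallback attribute(GIMME, x)

print(score("I need to order X", gimme_order))   # 1.0
print(score("we want X", gimme_order))           # 2/3, missing ORDER
print(score("we want X", gimme))                 # 1.0 via the workaround
```

All five example sentences above score 1.0 against one narrative or the other, with no pattern for the filler words anywhere.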

If I was not such a lazy intellectual I would try to make this point exactly and publish it. After spending a week playing with AIML, I find that the majority of the programming effort goes into handling the variations in the words that are the least important. Quite literally, AIML is designed to spot the pattern of words around the key nouns in the input, so those same nouns can be substituted [without their meaning being important] into different patterns of words in the output. It is designed to not care about the meaning of the topic-defining words. Narwhal could not be more opposite in that regard - it is focused entirely on locating important topic words while remaining as oblivious as possible to the varying pattern of irrelevant words around the topic words.

Tuesday, May 16, 2017

Is there a STANDARD word tree?

Riffing on the previous post and some mulled ideas, imagine putting words into PyDictionary and getting back definitions, then taking words from the definitions and feeding them back - again - into PyDictionary. You'll get cycles and cascades and all the fun you could hope for - especially when you drop out words of high frequency (short words). Somehow the structure of the set of cycles you get, whatever the heck it is, embodies PyDictionary's concept of word meanings.
What I want to know is how to turn a cycle diagram like that into an organized tree of related meanings.
But it is not just the building of it. Suppose you already had such a tree, or a little piece of it: could it be a standard? Would people agree? It would be a meaning standard.
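Here is a toy version of the loop, with a four-word stand-in dictionary in place of PyDictionary (so the cycle is guaranteed to show up):

```python
# Feed each word's definition words back in, keeping only words the
# dictionary itself knows (a stand-in for dropping short frequent words),
# and follow the chain until a word repeats.
TOY_DICT = {
    "sound": "something heard",
    "heard": "perceived as sound",
    "noise": "unwanted sound",
    "loud":  "high in sound volume",
}

def defining_words(word):
    return [w for w in TOY_DICT.get(word, "").split() if w in TOY_DICT]

def find_cycle(start):
    """Follow the first defining word until a word repeats."""
    seen, word = [], start
    while word not in seen:
        seen.append(word)
        nxt = defining_words(word)
        if not nxt:
            return None
        word = nxt[0]
    return seen[seen.index(word):]

print(find_cycle("noise"))   # ['sound', 'heard']
```

With a real dictionary you would follow every defining word, not just the first, and the structure of interest is the whole graph of cycles and cascades.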

Guess the topic tree structure from a sentence with its keywords

I am browsing someone's GitHub project (https://github.com/natsheh/sensim), which is about a metric of similarity between sentences, and am thinking: how would I do this calculation?

Since Narwhal is about goodness of fit between a narrative and a sentence, it is tempting to calculate a distance between sentences by regarding one of them as a narrative and the other as a sentence (the answer could be different if you reversed the roles of the two). But what is missing in this is how does one reconstruct a topic tree that encompasses the 'narrative' sentence?

Or maybe a better question is about how to build a tree from a whole corpus of sentences. So go like this: find the most unique words and look them up in PyDictionary to get their synonyms. Now go discover some corpus that is rich in these same words and their synonyms. So: given two lists of synonyms A and B and a cloud of sentences enriched with all the synonyms [not just A and B], how would you know when to have B below A in the tree?

The example is: "loud" implies a sound; and "noise" implies a sound. So if "loud", "noise", and "sound" are in synonym lists, then "loud" and "noise" should be below "sound". Can this be deduced automatically somehow?
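A toy version of that deduction, assuming we already have the synonym lists in hand:

```python
# Heuristic: "loud" belongs below "sound" if "sound" shows up among
# the synonyms of "loud" while "loud" does not show up among the
# synonyms of "sound". The lists here are invented.
SYNONYMS = {
    "loud":  {"noisy", "deafening", "sound"},
    "noise": {"din", "racket", "sound"},
    "sound": {"tone", "audio"},
}

def below(a, b):
    """Should word a sit below word b in the tree?"""
    return b in SYNONYMS.get(a, set()) and a not in SYNONYMS.get(b, set())

print(below("loud", "sound"))   # True
print(below("sound", "loud"))   # False
```

The asymmetry of the synonym lists is doing all the work, which is why the relation feels like a fact about words rather than about reality.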

You might ask: is there anything out there in 'reality' that guarantees these words should be in this relationship? I think the answer must be "no" since they are words. I cannot see how you would construe the relation between "loud" and "sound" as factual. But it sure does a good masquerade of it.

Monday, May 15, 2017

OS vulnerabilities are unnecessary and are just a money saving strategy for OS vendors.

A lot of huffing and puffing about the "WannaCry" virus attacking the world - reminds me that the whole computer 'virus' thing is based on making an OS vulnerable deliberately so it can receive cheap upgrades. In fact, OSs do not need to be soft-writable but could be burnt into silicon and as invulnerable as a light bulb. Problem is that Microsoft is addicted to a plastic, writable, operating system so it can roll out upgrades at little to no cost. They fix things when they get around to them and roll out a patch - business as usual.

Here is the thing: if PC motherboard architects wanted to, they could design a system that was largely constant and which only allowed writable memory in constrained "sandbox" areas. All files could be recoverable at all times. The question is: why don't they? My guess is: no business case for it and a lot of conventional thinking. A smart computer scientist could solve this problem.

I think OS upgrades should be delivered the way Kodak "Brownie" flash bulbs were delivered: in packages of several disposable bulbs per package. Unscrew the old OS and plug a new one into the socket. Meanwhile the only vulnerable part of the computer would be a file repository that you could lose and not care, while routinely backing it up.

As I wrote (somewhere) a "flash bulb" strategy for a disposable OS is quite financially problematic for companies like Microsoft and Apple; as the "light bulb socket" would require an API spec that eliminated the monopolies these companies enjoy.

Sunday, May 14, 2017

Friday, May 12, 2017

The urge for home

Is there not such a thing?

My learning stages for language processing

It is so idiosyncratic (not counting my math background):
1. Did the automated reading project, the "note reader", for customer order notes. [C++]
2. Built a noise reader at home, re-implementing ideas from work. [C++ and Python]
3. Created Narwhal - the long term project that should contain all. [Python]
4. Started designing the "naomi" chatbot to accomplish more than the note reader: have to have language responses; cannot bail with an "alarm" category...or much less often; have to know when enough has been said to proceed; have to alter responses on the way through; etc. [Python and maybe - with a colleague - JavaScript.]

We'll see how it goes with my learning curve. At my age, all learning curves are races with senility.

Sunday, April 30, 2017

White Throated Sparrow

The high whistle of a white throated sparrow takes me all the way back.

Tuesday, April 25, 2017

The Rock Piles blog must be influential

I was just backtracking visitor logs on the Rock Piles site and people are visiting for all kinds of different articles - no doubt the result of searches they are doing. The articles they find are from all different periods of the blog's history. It is an archive not an active news site. As I look at it, the posts cover a reasonably wide variety of subjects centered around ceremonial stonework, but also with arrowheads and other more general archeology or Native American cultural topics - as posted by my guest authors. There are thousands of articles and thousands of readers every month. Right now it stands at around 4.8 thousand readers this month.
I don't know how many of those are repeat visitors but most of them are not. That means I am reaching a large number of people interested enough in such archeological topics to do online searches. I find it sort of weird to put some minor random thought into a post and have someone in Oregon read it the next day. I also find it weird to hear ideas that originated with this blog coming back at me from some other direction - like an echo. For example, I have been hammering away at certain concepts - like rectangular mounds with hollows being burials and *blink* it seems to now be an established fact.

N-segmented text

Continuing the idea from here: if we pass several first order NARs across the segmented text, they may individually or together get a result at any "control" in the segment. The hard part would be higher order NARs getting passed across such a combination of lower order NARs and unconsumed VARs in the original segment. I need to develop an infrastructure for managing such overlays of higher and higher order NARs "above" the original segmented text.

The rolling up of the text into the segment (sequence of VARs) and the rolling up of the segment into higher and higher order NARs - leads to the idea that once the text is prepared, all the meaning has already been established. In other words, the rolled up input is the output.

I am grasping but do believe there is a "beautiful theorem" in there somewhere.

Monday, April 24, 2017

Verbs mixed with adjectives and the power of the '[ ]' notation for implicitness

I was making up an example of a statement that combined verbs and adjectives and the following, not entirely natural, example came up:

Jon shot a goose that cooked up pretty good

A couple of sort-of interesting things come up as I try to "diagram" it using proto semantics. 

Jon-shot->Goose, [We]-cooked->Goose, [?]_/good

I am using '?' to indicate the ambiguity of whether the cooking or its result was good. Since it is implicit, and since there is a truism that makes them somewhat equivalent, you can see why it is easier to just leave it implicit.

Something like an inserted "We" is needed. Which suggests the general rule of narrative continuity allowing arbitrary insertion of "I" or "We". This is allowed because they are always in context, just as the subject of a story is always in context. [Added: in other words they are global variables]

I think it is good that the proto semantic notation stumbles on exactly the ambiguities that are present in the sentence. The word "that" leaves us uncertain whether the entire situation is being described as 'good' or a sub-part of it. One does, in fact, sense that ambiguity but also that the ambiguity is not important; which is because of Truism 8: "If an action is described as a success, its outcome is assumed to be good".

Sunday, April 16, 2017

"Sounds Big: The Effects of Acoustic Pitch on Product Perceptions,"

The article about the article did not mention a discussion of why low-pitched sounds are associated with the perception that things are larger.

It sounds like the same thing as the moon on the horizon looking bigger. The connection would be that low-pitched sounds carry further, so their sources are judged to be further away - hence appearing larger. Same as light being fainter on the horizon. Thanks, G. Berkeley.

Tuesday, April 11, 2017

Dear Reader from Italy

Regular readers are so rare, I would be grateful if you would leave a comment. 

Saturday, April 8, 2017

Nesting of narratives and text processing

The puzzle is how to fit a complex narrative to a text, where the complex narrative nests sub narratives. I had the following thought while riding the Red Line somewhere around Central Square: 

 - First text is processed by being broken up into tokens. The result is called "tokenized text."
 - Next tokens are processed by conversion into a sequence of VARs, including NULL_VAR for tokens that are not recognized. The result is called "segmented text".
 - Now the smallest pieces of sub narrative can be "rolled" over the segmented text to produce a processing of the text into higher and higher order results - you could call them "n-segmented text"....and so on with n increasing with the order of the narrative.
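A toy rendering of those three steps - the names here are illustrative, not Narwhal's actual API:

```python
# Step 1: tokenize. Step 2: segment into VARs, with NULL_VAR for
# unrecognized tokens. Step 3: roll a small narrative over the segment.
VARS = {"jon": "PERSON", "shot": "SHOOT", "goose": "BIRD"}
NULL_VAR = "NULL_VAR"

def tokenize(text):
    return text.lower().replace(".", "").split()

def segment(tokens):
    return [VARS.get(t, NULL_VAR) for t in tokens]

def roll(seg, narrative):
    """Does the narrative's VAR sequence occur, in order, in the segment?"""
    it = iter(seg)
    return all(any(v == want for v in it) for want in narrative)

tokens = tokenize("Jon shot a goose.")
seg = segment(tokens)
print(seg)                                     # ['PERSON', 'SHOOT', 'NULL_VAR', 'BIRD']
print(roll(seg, ["PERSON", "SHOOT", "BIRD"]))  # True
```

Repeating the roll step with higher order narratives over the results is where the "n-segmented text" would come in.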

Tuesday, April 4, 2017

More semantic personality profiling - how people summarize and how narratives nest inside narratives

The previous post considered asking different people who had experienced the same thing (a subway ride, buying a dress, etc.) to give a short description. This was to be followed by quantitative study of the notational description of the text - via sentence structure (SnG) and/or via proto semantics. Here is a different question, that can be part of the study:

Ask a participant in the study to give a very brief summary. Then ask for a longer one. Then ask for more details. What forms are present in reversing the summary? You could also ask participants to read a short paragraph and provide a summary, or read a sentence and provide an imaginary fleshing out of the story's details.

The purpose of this study is to gather information about how narratives are nested within narratives - and how that varies from person to person.

Saturday, April 1, 2017

How much semantic variation is there person-to-person?

I am railing against the paucity of vocabulary for describing mental states and how it has left us, we who wish to analyze language, with a complete lack of information about how diverse the use of - say - native English is. The problem is that most folks don't know proto semantic notation, so they have little hope of capturing specific meanings (absent their representation by words and phrases, i.e. syntax-N-grammar), let alone person-to-person variation. So here is the experiment I would want to do. There is a PhD thesis here:
As much as possible ask 10 or so people for a short description of the same thing that they experienced (a ride on the subway, or a trip to buy groceries, or etc.) The questions to answer are:
  • Do different people use different concepts to describe the same things? How much variety is there?
  • Do different people use different phrases to describe the same concepts?
  • Do different people use the same phrases to describe different concepts?
The approach is to take a short paragraph from each person. Transcribe the paragraph into proto semantic notation and transcribe it into traditional S-n-G notation. The answer to the questions come from quantifying the comparison of these pieces of notation, person-to-person.

We could call this a profiling of language personality. It is an intrinsically interesting topic - the empirical classification of people by their language-use parameters. You could have someone with a broad vocabulary saying nothing but a few routine things, and someone with a limited vocabulary using a broad variety of concepts and saying a good deal. You could have bell-shaped curves. The point is that these interesting things cannot be studied with S-n-G notation alone, but they can be studied with that plus proto semantic notation.

Like any other form of profiling, doing it for language personality would be a basis for discrimination. Is it OK to discriminate based on language personality? Clearly not, because your references are always based on some other discrimination. But analyzing differences is not the same as basing discrimination on them.

Thursday, March 30, 2017

Airplane in E minor

I was watching a Netflix show and, out of the blue, decided to stop and play the bit of music - a requiem - that I had been trying to compose recently. I always try to anticipate the sound of the first note I hit, so I sing it beforehand. In this case, I turned on the keyboard and was about to do that when I realized that the note I was about to sing was actually the engine note of a passing airplane.

But I went ahead and sang it anyway, then played the keys for the required E minor and - what do you know - it was exactly the right note. The keyboard E matched the airplane E matched the voice E. And that must be why, out of the blue, I thought to stop watching TV and start playing music. I don't have any noticeable perfect pitch, but I might have some perfect-pitch-based associations.

Wednesday, March 29, 2017

My great grandfather

A Mitnik:

The innocence of ageism and corporate mono-culture

I am past 60, easily flustered, and often make inexplicable mistakes with details. I try to catch the mistakes using new error-checking strategies - but not entirely successfully. I almost always get the inequalities backwards the first time I code something. Or I can spend several minutes trying to copy, rename, and move four files using Windows Explorer on a crowded desktop. Something routine like that can be quite error prone. It is embarrassing.

Today I took an employment interview "quiz" with a time limit. About 15 minutes into the allowed 45 minutes I started feeling fatigued. I continued and soon got stumped on a question involving filling in missing entries of a table. This question required me to read and understand the column headings (which were multiply nested), and I took too long. A few minutes later, with time almost up, I was having trouble taking 5% of 600,000 while wanting to check my work. I assure the reader I know how to do arithmetic, but that is how flustered I get. It is too bad. It is embarrassing.

So here is the ageism: They expect me to learn quickly and understand a piece of data with complex layout - and they expect me to do it at the same speed as a smart college student. No fair! They also expect me to keep my cool under pressure. I never could, but the impact is far worse today than it was when I was twenty five. No fair!

I think that is the end of that interview. It was my mistake to put myself in the path of its youthful bias. I understand it because I might not hire me either.

There was a different sort of bias on display with today's quiz, aside from ageism. As a geometer I tend to think visually. As an experienced engineer I often know the answers in general and I have no trouble being inventive when needed. Those are the things I would brag about. But on today's "quiz" there was no geometry; no testing of knowledge; and never mind about creativity. Instead they tested a kind of algebra and logic reflecting the mental skills of the test designers - skills that are poorly aligned with mine, independently of my age. This is how an employment culture filters out the "different" and becomes a mono-culture. My guess is that mono-culture is not healthy for the company but, in any case, I do not expect potential employers to solve the problem of my getting old.

Thursday, March 16, 2017

Just sketching an idea

I am beginning to acknowledge that syntax-n-grammar is not a completely good-for-nothing subject. From a slightly more enlightened position, I can see that syntax-n-grammar gives the mind an opportunity to see patterns in the words being used, so that a recognized pattern of words can be used to fill in a narrative - almost independently of which narrative is being filled. Syntax could fill more than one narrative. The point is that it functions to predict the next words in the same way that narratives function to predict the next meaning.
I like the idea that syntax is a means for extrapolating to future words.
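The "extrapolating to future words" idea above can be made concrete with a toy word-pattern model. This is only an illustrative sketch - a bigram model that predicts the next word from the current one, with invented training sentences - not a claim about how syntax actually works:

```python
# Minimal sketch: word-level patterns predicting the next word,
# independently of which narrative is being filled.
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count which word follows which across the training sentences."""
    follows = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.split()
        for w1, w2 in zip(words, words[1:]):
            follows[w1][w2] += 1
    return follows

def predict_next(follows, word):
    """Extrapolate: return the most frequent successor of `word`, if any."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train_bigrams([
    "the duck is on the pond",
    "the pony is in the field",
])
print(predict_next(model, "duck"))  # "is"
```

Note that the model predicts "is" after "duck" whether the narrative is about ponds or fields - the word pattern is reusable across narratives, which is the point of the paragraph above.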

Tuesday, March 14, 2017

Morgan Leslie-Waksman Learning Language

My granddaughter is going through games with her parents; for example, looking at a picture book and saying "where is the duck?" and answering "there is the duck" (while pointing). And then "where is the pony?", etc. Soon Morgan can play the game and answer correctly. Then her parents play a more complicated game of "what sound does the duck make?" "quack, quack". And "what sound does the pony make?" "neigh, neigh".
So the game gets more complicated, and each game can build on top of the last. At first she may be learning the word "duck" and noticing the similar patterns of duck on the page or in different contexts where the word is used. She also learned the phrase "where is" and learned about pointing. In the next game she learned to connect the word "duck" to a different kind of pattern for the "quack" sound.
Some things Morgan needs to learn these games are: an ability with pointing; an ability to see similarity; and a desire to imitate. In the process she acquires the language of animal names, their sounds, their appearances, as well as the game words themselves "where" and "what sound". Possibly she picks up words like "the", "and", "a" - imitating the overall form of expressions she hears.
Then Morgan begins two-word sentences. For example, she says "Two cars" or "Two boats" but does not use the word "three". Her use of "two" may be related to a desire she has for symmetry - when I draw on one of her hands, she wants me to also draw on the other. Two-word sentences like these are descriptions (to this adult observer).
She also says "Bye Pete" and "Bye ..." to describe something going out of view or changing its relation to her (to this adult observer).

LittleShift

This is transformative: a chatbot that, although artificial, helps you talk out the things that are causing you stress. https://www.littleshift.co/?ref=botlist

It didn't work the way I wanted it to but it is a great idea.

Thursday, March 9, 2017

Driverless cars are a waste of time

Point #1 is that it may be comparatively simple to automate driving down a straight road with no oncoming traffic, pedestrians, or any other sort of confusion. That is one extreme. The other is all the unexpected events that an adult human has learned to handle, both specifically and in general. Sure, you can write programs to do any one of those things, but we are not much closer to creating that sort of artificial intelligence than we were before the advent of the computer. So Point #1 is that driverless cars are much further from reality than the AI clowns can admit.

Point #2 is that cars are a stupid way to move large numbers of people anyway, and trying to automate a dumb solution is an even dumber solution. What works for public transit is buses and trains. Point #2 is: we should figure out how to merge the best of the train/bus concept and the best of the car concept, with these goals: minimum commute times [I assume this is about commuting, not road trips], minimum adult supervision, maximum freedom to use the system or not.

Proposal: Suppose cars and highways had built-in "networked" functionality, and suppose you drive up in the second-from-fastest lane. You push a button, which tells the networked system to please take over and move you into the fast lane. It does, and for a while your car is part of a train and you can doze, read, or watch TV. When you want to exit the fast lane, you push a button and the system returns you to the non-automated lane. After which you control the vehicle.
In a crowded city, you would request to leave your parking space; once on the city road "grid", everything is managed as a single network application. You ask to park when you get to your destination. [Ignoring the obvious problem that there may be no place to park. But of course the network already knows about that.] Traffic lights are coordinated with the system and optimized as much as possible. Here "opting out" of the system is more problematic.
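The button protocol in the proposal above can be sketched as a tiny state machine. Everything here - the class name, the states, the method names - is invented for illustration; the real system would obviously involve an actual network handshake:

```python
# Hypothetical sketch of the join/exit button protocol: a car toggles
# between manual control and network-managed "train" mode.

class NetworkedCar:
    MANUAL, REQUESTED, AUTOMATED = "manual", "requested", "automated"

    def __init__(self):
        self.state = self.MANUAL

    def press_join(self):
        # Driver pushes the button asking the network to take over
        # and merge the car into the fast lane.
        if self.state == self.MANUAL:
            self.state = self.REQUESTED

    def network_accepts(self):
        # The network confirms the merge; the car is now part of a "train"
        # and the driver can doze, read, or watch TV.
        if self.state == self.REQUESTED:
            self.state = self.AUTOMATED

    def press_exit(self):
        # Driver pushes the button to return to the non-automated lane
        # and resume control of the vehicle.
        if self.state == self.AUTOMATED:
            self.state = self.MANUAL

car = NetworkedCar()
car.press_join()
car.network_accepts()
print(car.state)  # automated
car.press_exit()
print(car.state)  # manual
```

The key design point is that the driver only ever requests transitions; the network decides when the merge actually happens, which is what makes "opting out" easy on the highway and harder on a fully managed city grid.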