Thursday, December 29, 2016

More Narwhal progress

I have been debugging Narwhal steadily over Christmas vacation. Wrote a preliminary regression test and bogged down in further bugs. But today I was free enough from deep bugs to begin using the regression test to tune the NoiseApp. Just now I took a chance and fired it up on a pretty long sentence I have been waiting to try:

 "Not one to complain normally I was able to overlook this however I was not able to overlook the fact that the walls are paper thin-every footstep, toilet flush, tap turned, and word spoken was heard through the walls and to top it all off we were unfortunate to have a wedding party staying on our floor."

The result is quite excellent:
A 'sound' narrative is certainly being told, as well as a 'proximity' narrative. It would be great if this was the way it worked on most examples - even the 'affect' narrative gets a bit of a hit. And this starts to give me a perspective on the app narratives fitting around the text together.
Update: This was not as "new" a sentence as I thought. I had debugged it.

Wednesday, December 28, 2016

Revised GOF formula

I had an original gof formula (see here):
gof = (u/n) *( r/f)
where 
  • u = num used slots of narrative
  • n = num slots of narrative
  • r = num words read (corrected for control words, dull words, and anything else I can skip)
  • f = (last word read index) - (first word read index) + 1 
This had an issue when the narrative is a single VAR that is found in the text: the formula gives it a "perfect match" compared with what usually happens with multi-slot narratives. So I engaged in the unpleasant exercise of compensating for the total number of words, thus penalizing all the other narratives in order to handle single-VAR narratives.
After a bit of soul searching: is it reasonable to think that a single VAR is not really a narrative? In the proto semantics I took the simplest narrative fragment to be a 'thing'. Perhaps it should be 'thing has attribute'? If so, the slot count should really be at least 2. So instead of leveraging the total word count into a formula that applies to everything, let us compensate with this version of the formula, which differs only for single-VAR narratives:
gof = (u/max(n,2)) * (r/f)
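As a sketch (the function and variable names are mine, not Narwhal's API):

```python
def gof(u, n, r, f):
    """Goodness of fit: slot usage times reading density.

    u: used slots, n: total slots in the narrative, r: words read,
    f: span from first to last word read (inclusive).
    A single-VAR narrative (n == 1) is treated as having at least
    2 slots, so it can never claim a "perfect match".
    """
    return (u / max(n, 2)) * (r / f)
```

So a lone VAR that matches one word scores gof(1, 1, 1, 1) = 0.5 instead of 1.0, while a full three-slot match over a tight span still scores 1.0.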

Tuesday, December 27, 2016

Language is precise

I think one reason mathematicians do not consider language the proper subject of mathematics is that they believe language is vague. By contrast, I think language is precise, and what seems vague is the use of implicit language. However, implicit language follows very specific rules (laid out as the "Truisms") and it is highly efficient. When truisms are explicitly re-inserted into the sentences where they apply, the result is an object with well-defined mathematical properties. By which I mean: a sentence has an exact description as a formal expression that is subject to well-defined transformations by substitutions that preserve narrative role.

When I'm 64

That would be today.

Saturday, December 24, 2016

Antitrust law should apply to Big Company Chatbots

Just read an article where some law professors are warning about the forms of user discrimination that large-company "personal assistants" can embed. It seems totally obvious that a platform AI like Google Assistant should protect the customer, not the vendor, but Google's profit model is in conflict with customer protection. This is true in general: a large company with a profit agenda should not be allowed to also "advise" on what purchases to make. It is a conflict of interest and bad for the consumer.

Thursday, December 22, 2016

Worst mixed metaphor of 2016

(found on Venture Beat, without any preamble):
"The very best marketing campaigns grab them by the gonads. But it takes data to figure out exactly what goes into reaching out and touching someone."

What happened to editors?

Monday, December 19, 2016

Up late

Had some good experiences with Narwhal yesterday and today. It has been like a goose reluctant to lay a golden egg - that just started laying. Enough of Narwhal is working to begin to show strength - but I will continue to be paranoid that a show-stopping conjunction or punctuation shows up to spoil everything. Or that it will be too hard to define the narratives of a narrow world.
Update: actually what happens is that the mechanism of cause(X,Y) is challenged by the need to both work with bi-directional syntax ('as' versus 'so') and be able to mis-parse a partial sentence that has lost an earlier part.

Monday, December 12, 2016

Wow, my code is working

I just tried the equivalent of SOUND_/[SOURCE]_/[INTENSITY]_/[TOD] using the highly nested, and untested, function Attribute(X, Y) along with the 'implicit' notation "[]" in the syntax:

s = Attribute(SOUND, [Attribute(SOURCE, [Attribute(INTENSITY, [TOD])])])

And, what do you know? It seems to have worked, cuz the standard score on the standard sentence went up a bit. But it was slow!

Thursday, December 8, 2016

Probability of a subsequence

I am a little surprised to find no quick online answer for the question: what is the probability of finding a fixed 'search for' sequence of length K within a longer 'search in' sequence of length N? 
Assume a common alphabet - say - {0,1}. The total number of sequences of length N is 2^N (and there are N - K + 1 starting positions where the 'search for' sequence could occur). The number of these that begin with the 'search for' sequence is 2^(N-K). Call that set "D".

Consider applying the shift operator to the elements of D, and how D has an orbit, within the whole space of sequences, that is the same as the set of sequences containing the subsequence of interest.

The total number of things you get that way divided by 2^N is the desired probability. I suppose it is complicated because not all sequences have the same orbit size, so depending on where you start in D you get a different orbit length. So actually, rather than caring how to count these things, we should be more interested in how they fit together geometrically. You are looking at the points of a discrete surface within a discrete volume, so counting points may not be as interesting as other geometric properties of the sets. Not going to figure it out though. I see why now.

However we can say that the maximum orbit size for a point in D, under shifts, is N - cuz that is how many possible shifts are available. So the probability is < N*2^(N-K) / 2^N . So an estimate is:
probability < N / 2^K
Unfortunately this is a lousy estimate. It misses the subtler point that periodicity of 'search for' within 'search in' must bring the numerator down a lot.
Update: If N is a prime number then only a constant sequence like 0000000 can be self-similar and have an orbit under the shift operator that is shorter than N.
Update: The proof is that if shift^n has a fixed point, then the orbit of that point has size dividing the prime N, so the orbit size is either 1 (a constant sequence) or N.
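A brute-force check (my own throwaway code, not from the post) confirms the bound and shows how loose it is:

```python
from itertools import product

def subseq_probability(pattern, n):
    """Exact probability that a uniformly random 0/1 sequence of
    length n contains `pattern` as a contiguous block (brute force)."""
    k = len(pattern)
    hits = sum(
        any(seq[i:i + k] == pattern for i in range(n - k + 1))
        for seq in product((0, 1), repeat=n)
    )
    return hits / 2 ** n

p = subseq_probability((0, 1), 6)   # = 57/64, about 0.89
bound = 6 / 2 ** 2                  # the N / 2^K estimate = 1.5
```

Here the true probability is under 0.9 while the estimate exceeds 1, so the estimate says nothing at all for short patterns.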

Wednesday, December 7, 2016

Implicit NARs in Narwhal

Not debugged but seems straightforward: implicit sub narratives are implemented in Narwhal through the clunky api of
nar.makeImplicit()
The result is that all sub-nars, and all VARs at the bottom, will have a self.explicit field set to False. The consequence shows up in GOF slot counting, where we use the "active" slots in the denominator instead of the total number of slots. The "used slots" count is as before, and the total "num slots" is as before, but now "num active" is defined as found-or-explicit, and this is used in the denominator. This allows an unused, implicit part of the narrative to be ignored in the GOF - hence making the nar more multi-purpose.
There is a reward for filling implicit sub narratives: more words from the text are read.
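A toy sketch of the "active" counting, with invented stand-in fields (the real Narwhal attributes and tree walk differ):

```python
from collections import namedtuple

# Stand-in for a VAR at the bottom of a nar tree.
Slot = namedtuple("Slot", "explicit found")

def count_active(slots):
    """A slot is "active" if it was found or is explicit; an implicit
    and unused slot drops out of the GOF denominator entirely."""
    return sum(1 for s in slots if s.found or s.explicit)

# Four slots: only the implicit, unused one (last) is not counted.
slots = [Slot(True, True), Slot(True, False),
         Slot(False, True), Slot(False, False)]
```

With these four slots the denominator is 3 rather than 4, which is the whole point of makeImplicit().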

Monday, December 5, 2016

Getting closer

Toot toot! (my own horn). After a lot of debugging of the NWObject, my first attempt at running the noise application gives good results:
I don't dare stress test it. But I'll have to. [In fact the next thing I tried failed.]
I was depressed thinking how hard it will be to market Narwhal to people who already think they know about language interfaces. Then I had a cheerful thought: I can write an article about the need for language interfaces to understand product reviews - going into details of the noise complaint as an example of the psychology of a specific "product" review. After all, I am an expert, and there is nothing technical about the topic.

Sunday, December 4, 2016

The "final" meaning

I puzzle over whether a filled narrative is the final form of information or whether that can be transformed, one last time, into a better, static data structure. Mostly, I conclude the efficient static structure is the filled narrative itself together with ifound[] information about what words were 'read' in the text.
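A sketch of that static structure, with invented names (not Narwhal's actual classes):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FilledNarrative:
    """The "final" form of the information: the filled slots plus the
    ifound[] record of which word indices were read in the text."""
    slots: Dict[str, str]
    ifound: List[int] = field(default_factory=list)

fn = FilledNarrative({"SOUND": "noisy", "SOURCE": "bar"}, ifound=[3, 7])
```

Nothing else seems to be needed: any downstream consumer can recover both what was concluded and where in the text it came from.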

Monday, November 28, 2016

Narwhal reaches "Design Complete" stage

This means I have done all the basic typing I expect to do and now have to debug. In principle, the tough ideas and design choices are behind us, and Narwhal is at "Alpha". Of course I fear the debugging still ahead but, as I wrote in my GitHub commit, the NWObject works in at least one case. That is the Narwhal object.
As far as that goes, here is an encouraging first impression:
Of course a moment later it falls on its face - ah well.

Friday, November 18, 2016

Slicing the "search" problem differently

Suppose that Google focused on algorithms that were entirely personal and connected with a user's profile, while at the same time it built a neutral backend for its indexed data. I suppose the problem is that you cannot index without making assumptions. Anyway, if the profile became the thing, then you could always step out of it and do neutral exact searching if desired.

Wednesday, November 16, 2016

"End of Life" for a neural net

Because they are black boxes giving no window into the multi-dimensional measurement space where "clusters" are forming during machine learning, you will never see just how non-convex the classification regions are - how topologically different they are from the original regions in object space. Unjustified assumptions of convexity, and a blind belief in the applicability of an arbitrary Euclidean metric, have created a situation where, inevitably, a sample will get added from "category A" that is closer to the known examples from "category B". From there, it is only a matter of time before the two categories start merging and the system delivers more and more incorrect classifications.
It seems to me this is almost inevitable.
At first a neural net system seems great. A small number of examples have been added to the system and they are far enough apart to work as nearest-neighbor classifiers. But then we start adding other examples for greater "accuracy". From personal experience, two things are happening now. Counter examples are starting to show up and so they are added as new training examples. Also the developer is beginning to be hypnotized (PRADHS - "pattern recognition algorithm developers hypnosis syndrome") into believing objects belong in the category, if their system tells them that is where the object belongs. This leads to the addition of more and more boundary case examples. Rather than becoming more accurate, the system has actually become useless and incapable of delivering accuracy greater than a not-very-good level like 65%. That is machine learning "end of life".

I believe that Google may have reached end of life in its search algorithm. You can always find straw in a haystack but I am afraid that you can no longer find a needle there. As far as I am concerned, when I search for "Barbara Waksman" and they return pages containing both words rather than only pages where the words are adjacent, what I am seeing is a whole lot of false positives. The world seems much too accepting of these Google errors. When Netflix makes the same error it is SO bad that I can utilize their bogus search results as a backdoor search for random movie titles that are not available otherwise - a different Netflix error.

Tuesday, November 15, 2016

current GOF formula

(min(u,r)/N) * (r/F)
Where:
N = num slots in narrative
u = num slots used
F = num all words between first and last word read
r = number of words read

Update: Skip the min(u,r) but DO use the total number T of words, so the latest version of the formula is:
(u/N)*(r/F)*(r/T)
This makes a single word match less good than a multi word match; and it favors longer narrative patterns.

I should mention that dull words are discounted in F and in T. For no particular reason, they are added to the numerator of (r/F) and subtracted from the denominator of (r/T). Either way, the scores I am getting now look better (eg "1.0").
Update: I changed it again, so the score in the numerator of (r/T) is corrected by replacing r with r + ur, where ur is a count of dull and control words in the full segment of text.
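A sketch of this latest version, leaving out the dull-word corrections described in the updates (names are mine, not Narwhal's):

```python
def gof(u, n, r, f, t):
    """(u/N)*(r/F)*(r/T): slot usage, reading density over the span,
    and the share of the whole text that was read.

    u: used slots, n: total slots, r: words read,
    f: span from first to last word read, t: total words in the text.
    The (r/t) factor makes a single-word match score lower than a
    multi-word match and favors longer narrative patterns.
    """
    return (u / n) * (r / f) * (r / t)
```

For a ten-word text, a full two-slot match reading three words scores 0.3, while a lone one-word match scores only 0.1, which is exactly the penalty the update is after.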

Monday, November 14, 2016

Artificial Intelligence, embedded intelligence, and real intelligence in a computer

Count me as an AI skeptic. When I hear about the program that beat a Go master, I think: there is no way that program arose by machine learning or any "AI" technique. Instead, the programmers knew how to play the game well and embedded their intelligence into a computer. Given sufficiently good Go-playing programmers, and a machine doing what they do but with the advantage of seeing many steps ahead, it is not surprising the program beats the master. All you need is near-master-level programmers. This was real intelligence that they embedded into a computer game. But calling it an "AI success" is claiming a mantle of success that had nothing to do with so-called "AI".

On the other hand, I propose that a computer that brings me a Coke when I say "bring me a Coke", is exactly as intelligent as needed. It satisfies my requirement, and so it is real intelligence.

As for artificial intelligence, I guess it deserves its own title. Anyone know examples that are 85% accurate?
Update: I see "embedded intelligence" is already taken, how about  "embedded cognition"? There's a buzzword for ya. OOPS! That's taken too. Can I go with "machine cognition"? Nope. All combinations of these words have been taken. So I might as well stick with "embedded intelligence".

Friday, November 11, 2016

Thursday, November 10, 2016

Exact string match searches are no longer possible on Google

I have been trying to get Google to "find" my photos of Barbara, here on Sphinxmoth. I believe Google does index this blog, so why don't they know about her? Frustration over this led me to search for "how to get exact matches?" and to discussions of how Google switched to matching "close variants" instead of exact matches at the end of 2014. There doesn't seem to be any explanation except that it is somehow beneficial to their advertisers.

So I have a couple of thoughts about why Google had to stop providing exact matches. First off, why should a user care about Google's Ad revenue; and shouldn't Google prioritize users before customers?* Is it possible that exact search would conflict with broader "variant" searches that match customers to Ads? Given the difficulty of blending different algorithms without ad hoc decisions, I still wonder: why not add a button, to allow a choice of "exact matching"? 

 I suspect the reasons are deep and, in fact, would be embarrassing to admit: Google has been suckling too long on the 'bottle' of neural nets. They made the fundamental mistake of thinking they were "learning" and, in fact, they were only averaging. After a while the averages turn to mud. The coefficients become bloated with contradictory data. You add some 'poison' samples and (I know from personal experience) your entire library becomes corrupt, because it contains samples from too many diverse populations. Try treating a multi-modal distribution as a simple Gaussian!

If I am right, Google is doomed. So is Apple. Separately I see that Google's head of R&D is a neural net guru. How delightful.

(*) What part of "do no evil" did they forget?

Wednesday, November 9, 2016

A rare time I agree completely with Chomsky

"So if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language. "

Narhwal and the Virtual Articulator

Yesterday at work I saw my Virtual Articulator (C++) perform beautifully for the first time. Last night at home I saw my Narwhal (Python) outer loop "readText()" perform beautifully for the first time. For the Virtual Articulator, it is more or less fully developed. For Narwhal, the worst of it may be over, but there remain miles to go before I sleep.
I have been agonizing about these two pieces of software for a while. The VA, at work, for most of the last year and Narwhal, at home, since August. It is interesting that these two have been on a sort-of parallel trajectory. Here is a VA demo.

Tuesday, November 8, 2016

AI discussions are like Science Fiction

Except the discussion authors seem to forget that it is fiction. Reading about how capitalism will fail because of AI. AI is going to take your job. I don't see it. Until a robotic hamburger delivery system is cost competitive with a human hamburger delivery system, why fictionalize? Cuz reality is boring?

Sunday, November 6, 2016

righ....t?

Several ways to say the word "right" as a sharing of emotional state with the listener. In one case I say "right?" with a drawn out increasing inflection that asks the listener to share my puzzlement. In another case I say "right!" with a sharp termination that asks the listener to share my enthusiasm.

Interestingly, these different narratives are given by both a written word spelled r i g h t and by an inflection.

Saturday, November 5, 2016

Why does Google not see my photos of Barbara Waksman?

I posted so many. EG here. No reason why Sphinxmoth cannot help promote these photos of Barbara Waksman (I am writing it over and over for the damn search engines.) Barbara Waksman, Barbara Jones Waksman.

Friday, November 4, 2016

New Bread Recipe

2 cups of King Arthur Flour plus 1 tsp of salt in a large bowl
1 cup + 2 tbs of lukewarm water and 1 tsp of yeast (eg Fleischmann "Active Dry") in a small bowl

Stir the yeast into the water in the small bowl until dissolved. Then pour it into the large bowl and mix until most material comes off the sides of the bowl. Dump it out on a surface to rest 5 minutes. [It is a moist dough and the moisture is redistributing evenly]. Clean the bowl.

Use a large flat knife to fold the dough once, then again at ninety degrees. Then put it back into the (clean) bowl. Cover it and let it sit for 3-4 hours at 65-70 F. It should inflate 3X or 4X.

After waiting, pull the dough away from the sides of the bowl, plopping it into the center (like Jacques Pepin) until it somewhat separates from the bowl and you can dump it onto a flat surface. Clean the bowl. Use the flat knife to fold the dough twice again. By now there are bubbles in the dough, so handle it gently and don't squash it while folding. Put the dough back in the bowl. The main difference with Pepin is that he seems to punch his dough down too much for my flour. I have to be gentler with it.

Let it rise for 3-4 hours again at 65-70F. Again pull it gently out of the bowl and fold it twice with a flat knife. Then put it in the fridge for 4-5 hours. (Any longer and the bread gets rubbery.) After this rest, take the bread out of the fridge, let it come to room temperature, and let it rise again for 2-3 hours.

Then gently take the dough out of the bowl and form it into a loaf on a surface - either on the final baking pan (which you previously coated with corn meal or with baking paper), or on a board from which you can transfer it to said baking pan after another hour. Then get ready to bake:

Oven at 425, spritz the bread and the oven interior with water and bake for 10 minutes. Reduce heat to 405, spritz again, and bake for another 17-18 minutes.

To cool the bread, do not put it on a horizontal surface. Instead, prop it up on something so it cools standing vertically, or on a side. That way the bubbles are less deformed during the cooling.

Alternate ending: In summer when the rising times are ~3 hrs, I can do the cooling/rest stage not in 4-5 hours but in less than 2 hrs; then form the loaf, place it on the baking pan, and let it rise for another hour or so. Then bake, and eat for dinner the bread you started at 7 AM. So we have:
 7-11 1st rise
11-2 2nd rise
2-4 cooling and rest
4-4:15 loaf rises on baking pan
5:15-5:45 baking

Keeping GitHub up to date

This blog was showing up as a "referring site" in my GitHub repository
 https://github.com/peterwaksman/Narwhal
Perhaps, when it gets archived, the current version of the blog main page no longer contains the link and no longer gets seen by GitHub as a referrer? So maybe I have to keep it fresh. Consider this an experiment.
Update: Success. The link from Narwhal has just returned.

Thursday, November 3, 2016

What is the "information model"? A summary narrative?

The underlying idea of Narwhal is: if you have a model for information to be found in text, you can start with the model, then see how much of it is filled from the text. This notion of an information model is close to what I understand as the database format they use at FrameNet to store the semantic frame alternatives.

So here is an anecdote: I am moving towards developing the higher level work of the Narwhal class - the work where multiple narratives interact. So I was thinking about an underlying information model, and it is very tempting to come up with a class definition for "Hotel" with sub classes for "Room" and all kinds of structure around descriptions of sound. Trying to diagram it, it quickly becomes a confusing mess of boxes and arrows. But that approach is an alternative to using a summary narrative, which captures all possible stories. For noise, various versions of this parent narrative occur, such as
Sound->Me :: Me_/affect , Me->staff : staff_/ action
I am starting to understand that this narrative format is much more concise than any attempt to break it out as a collection of connected boxes.

The key discovery this summer, which enabled designing Narwhal, was the realization that this parent narrative rarely occurs; instead a variety of smaller partial narratives appear. These "empirical narratives" are to be read, and then corresponding parts of the information model are to be set and delivered. However, in light of the anecdote above, it is highly recommended to think of the totality of information as itself a narrative - the "summary/parent" narrative - and to translate the smaller partial narratives into versions of the parent. Hence the Narwhal developer can skip designing a "schema" for the information, and instead focus on determining a parent narrative and rules for translating the partial narratives into it.

Tuesday, November 1, 2016

Where is the chatbot-chatbot interaction concept?

I am mystified that with all the instant experts, none has grasped the need for data exchange between chatbots. There is lots of talk of collections of chatbots (on the subject of searching for a chatbot) but none considers the obvious: chatbots will interact with each other.

Most importantly the personal chatbot "space" must include a concept where my chatbot talks to a travel website chatbot. More generally, my chatbot does my shopping for me.

Thursday, October 27, 2016

Later Hosen

As with "see you later alligator" you could say: "lederhosen".

Other ways to count slots in a narrative

You can count all the VARs appearing in a NAR and that is what I am doing. You just climb into the tree and return by adding sub counts. You do the same to count the "used" slots by checking the state of the underlying VARs and adding up the sub counts.

However it might make sense to consider weighting sub narratives equally, even when they contain different numbers of slots. E.g. if a two-part narrative (A,B) has A=(X,Y,Z) and B=(U,V), then the count of slots for (A,B) should be 1/2*(count for (X,Y,Z)) + 1/2*(count for (U,V)) = 3/2 + 2/2 = 2.5.
The "used" would be weighted in the same way.
Update:  Or you could still have the total "amount" of slots equal to 5, but with 1/2 of the 5 devoted to A,B equally, and the other half to X,Y,Z equally. So the slots have individual weights as follows:
A and B each has weight 2.5/2  = 1.25
X, Y, and Z each has weight 2.5/3 = .8333...
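One consistent reading of this weighting can be sketched recursively, using a nested tuple as a stand-in for a NAR (the real Narwhal classes differ): each level splits its weight equally among its parts, and the leaf VARs collect the resulting individual weights. With a total "amount" of 5, the three slots under one half each get 2.5/3 and the two slots under the other half each get 2.5/2.

```python
def slot_weights(nar, weight=1.0):
    """Split `weight` equally among sub-narratives at each level;
    leaf VARs (non-tuples) receive the resulting individual weights."""
    if not isinstance(nar, tuple):   # a leaf VAR
        return [weight]
    share = weight / len(nar)
    out = []
    for sub in nar:
        out.extend(slot_weights(sub, share))
    return out

# (A, B) with A = (X, Y, Z) and B = (U, V); total "amount" of slots = 5.
weights = slot_weights((("X", "Y", "Z"), ("U", "V")), weight=5.0)
```

The weights always sum back to the total, so "used" counts computed this way stay comparable across narratives of different shapes.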

Tuesday, October 25, 2016

Keywords and Keynarratives

I guess that is the minimal description of Narwhal.

Monday, October 24, 2016

Google's Cloud Analytics

The hype: "Understand the sentiment in a block of text."

My problem is this: blocks of text do not contain a sentiment but a mixture of different sentiments, about different things. No way they would catch the difference between "and" and "but". Talk about "a dull knife"!

Actually: The idea of doing statistical analysis of blocks of text to derive a (non-existent) average sentiment is utter nonsense. It is guaranteed such "tools" will be useless. How long must we toil under an emperor with no clothes? I cannot express the level of distress I have about "researchers" who pass their text data through a black statistical box without understanding the math they are using and without bothering to read the text either. If they read the text, it would quickly become evident that opinions, if subtle, are mixed: consumers "like A" and "dislike B" all at the same time. So trying to turn this into an average opinion is pointless....except to the people who think they are doing research, using off-the-shelf garbage and maybe even getting PhD's. Weirdly enough, companies like Microsoft brag about tools that incorporate "decision tree algorithms". That is 2 strikes against them at the start: using crap math [decision trees aren't even good statistics], and believing in average opinions.

The poor innocent public thinks AI is a done deal and will be more and more in our lives. I strongly suspect the emperor will prevent other contenders for the throne - causing lasting damage, well beyond the short sighted, Bayesian, dreams of the moment.

Sunday, October 23, 2016

Coming back from Narwhal's "Darkest Hour"

What I called it here. I have turned the corner and am writing version 3 of the "outer loop" with so many small but difficult lessons learned in the process.
 - how scoring could pass from the sub narratives up to the whole, 'linearly'
 - how different scoring could be done in the inner loops, separately from return values and from vaulting in the outer loop
 - how vaulting does not need to occur when a narrative is complete, as there should be a chance at the next control event
 - nor does vaulting need to happen after 'forgetting', for the same reason. [Though there might be a reason to vault after an empty segment of text.]
 - that testing every possible segment of text for goodness of fit is WAY too slow, requiring a more relaxed "moving topic" that steps between control indicators in the text. This feels "right"er than version 2.

Saturday, October 22, 2016

Chatbot to Chatbot Interaction and Community

I am not sure why this isn't an obvious idea, but as soon as software developers realize that bots can interact with other bots, they will be seduced by the artificial-life aspect, and chatbot development will get at least temporarily sidetracked away from the purpose of enhancing my internet experience.

Thursday, October 20, 2016

Two ways chatbots can exchange data

Through their own language interfaces and through standardized data exchange formats. It could be like a secret handshake.
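The "secret handshake" could be as simple as probing with structured data: if the peer answers in the format, use it; otherwise fall back to plain language. A toy sketch (all schema names and fields here are invented for illustration):

```python
import json

def respond(message):
    """Reply in the structured format if the incoming message uses a
    recognized schema tag; otherwise fall back to plain language."""
    try:
        data = json.loads(message)
        if isinstance(data, dict) and data.get("schema") == "travel-query/v1":
            # The handshake succeeded: answer bot-to-bot.
            return json.dumps({"schema": "travel-offer/v1", "price": 99})
    except ValueError:
        pass  # not JSON: treat it as human language
    return "I'm sorry, could you rephrase that?"
```

A human typing "hello" gets the language path; a bot sending the tagged JSON gets data back, with no prior coordination beyond the shared format.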

An idea from S Donaldson's "The One Tree"

He repeatedly expresses the idea that when you love something and it is stolen, it is even worse to get it back, broken.

Over on my other blog, I am distressed that readers still are wholly confused about the simple things I have been writing about for years. With whatever sort of spirituality I own, loving the beauty of the woods and all, and with a reasonable sense of the current state of scientific knowledge on this particular subject, I find it repugnant to hear of people lying down in burial mounds and looking up at the stars.

The thing I love is the discovery of these mounds. To show them to people and have them apply ideas which only work for a negligible fraction of the available samples - a population of samples far broader and variable than most people realize - is hard. I am afraid I don't have much patience for it.

Wednesday, October 19, 2016

"That" and "It"

I recently put "that" into a list of causality words because the phrase "we were over a bar that was noisy" seems to carry a sense that the bar 'causes' the noise. This sense of causality puts one in a particular narrative framework that is not necessarily related to the old grammatical 'part of speech' definition for the word "that". The 'narrative' reductio is alien to the old grammatical definitions and, hopefully, cuts across them. We'll see if it turns out I am sorry I put the word into that list.
Update: Perhaps it is simpler to consider "that" to be an attributor like "with", describing a property of the bar. But it is also a property of the experience and its influence on the writer. Still unclear...
UPDATE: Bad idea. It took me two weeks to find out.

Tuesday, October 18, 2016

Aw AIML!

Going in exactly the wrong direction in some things but, otherwise, probably the authoritative approach to communicating topic context. Gotta get that in Narwhal someday!

I was playing with online demos of ALICE and of CLAUDIO today at Alicebot.org. Here is a fun experiment: what happens if you feed output from ALICE as input to CLAUDIO and vice versa? Do they reach a fixed point? They would have to, unless those programs can generate longer and longer texts.

Monday, October 17, 2016

From an article about Apple hiring AI experts

"Salakhutdinov has published extensively on neural networks, a branch of AI critical for voice and image recognition technology..." [here]

The word "critical" is incorrect and shows the risk of sucking on the bottle of neural nets. I am familiar with applying neural nets to all of the kinds of image recognition tasks I have encountered in my career*, and I have always felt that neural nets are a disappointment. My whole program is about the necessity for model-based recognition. For years, companies like Cognex sold image recognition systems using simple template matching, never mind anything more sophisticated. Those work fine, within limits, and use no neural nets.

(*) Including gray-scale 2D video in general; video microscopy; character recognition; semiconductor defect recognition; defect pattern detection; and 3D feature recognition.

Update: Also reading about Apple dropping its efforts to develop self-driving cars. Is it that they could not get their neural networks to work, so they are doubling down on that technology? Since AI innovation is not going to come from that direction, I wonder whether there is anyone left at Apple with a broad enough technical vision, or whether they are just listening to the conventional wisdom about how AI is supposed to be done. In any case it is time to sell your Apple stock and buy IBM.

Saturday, October 15, 2016

"The Elements of Narrative" published

I have to admit I am a bit pleased. Let's go have a look. Ouch! a cheap looking website. Anyway, here is the email I got:

  Congratulations! Your paper has been published at October 2016 issue in IOSR Journal of Engineering.

Your paper has been published on following link:

Thanks and Welcome!!!

Friday, October 14, 2016

Dreaming about the mathematics

Given such semi-mathematical ideas as narrative continuity and invariant transformations of narrative, and given that the entire pattern recognition framework is in the context of a fiber bundle, it is almost guaranteed that there are Euler numbers and beautiful theorems to find.

Thursday, October 13, 2016

The Goodness of Fit scores I ended up with

During recursive narrative processing, one needs a goodness-of-fit score for a narrative that is linear, so the sum of the sub-narrative scores equals the score of the whole, OR one needs a score that is superimpose-able, so the sub-narrative scores can superimpose as the score of the whole. I do both, using U = number of used slots as the linear aspect of the score, and the found indices ifound[] as the superimpose-able aspect. At any time you can compare U to the total number, N, of slots. Also you can compare the number of words read, R, to the span of indices (F = last - first + 1) where they occur.

If all N slots of a narrative are used then U=N and (U/N)=1. Similarly if every word is read between the first and last indices then (R/F)=1. Thus I propose the formula
GOF = (U/N)*(R/F)
This is between 0 and 1 and it is equal to 1 if and only if every word is read and every narrative slot is filled. (There are minor adjustments to F, for dull words and known control words. Also the high-level vaulting permits multiple occurrences of the narrative to be counted if they occur repeatedly.)
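A minimal sketch of this formula in Python (the counts would come from Narwhal's reading machinery; the function and argument names here are mine, not Narwhal's):

```python
def gof(slots_used, total_slots, words_read, first_index, last_index):
    """GOF = (U/N) * (R/F), between 0 and 1.

    U = slots of the narrative that were filled, N = total slots,
    R = words read (after skipping dull and control words),
    F = span of word indices covered = last - first + 1.
    """
    span = last_index - first_index + 1
    return (slots_used / total_slots) * (words_read / span)

# All 3 slots filled by 5 contiguous words at indices 2..6: a perfect fit
print(gof(3, 3, 5, 2, 6))  # 1.0
```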

But that GOF score is not linear and does not transfer up from the sub-narratives to the whole. So when we come to needing a goodness-of-fit score during recursion, the linear/superimpose-able aspects need to be used. But how? It only matters when reading the two-part narratives: sequence(a,b) and cause(a,b). What I do is try splitting the text into textA followed by textB, let U_A be the number of slots used when reading textA with the narrative 'a', and let U_B be the number of slots used when reading textB with the narrative 'b'. Now we seek to maximize
g = U_A * U_B
over all possible ways of dividing the text into two consecutive pieces. It is tricky because the return value from the reading of this text will be U_A + U_B (using plus! for linearity) where g was maximized. This formula for g favors dividing the text into equal-size pieces but the sum does not.
Update: It occurs to me, after explaining that the linear and superimpose-able aspects are preserved in a recursion regardless of what formula you use for g, that I can see no reason not to use the full GOF formula for g as well. I'll have to think about it.
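A sketch of that split search, assuming toy slot-counting readers in place of the real narrative reading:

```python
def best_split(tokens, read_a, read_b):
    """Cut `tokens` into every possible left/right pair and maximize
    g = U_A * U_B, where read_a/read_b report how many narrative
    slots each sub-reading fills.  The product chooses the split;
    the sum U_A + U_B is what gets returned upward, keeping the
    score linear across the recursion."""
    best = (1, -1, 0)  # (cut, g, U_A + U_B)
    for cut in range(1, len(tokens)):
        u_a, u_b = read_a(tokens[:cut]), read_b(tokens[cut:])
        if u_a * u_b > best[1]:
            best = (cut, u_a * u_b, u_a + u_b)
    return best[0], best[2]

# Toy readers that just count keyword hits for each sub-narrative:
read_a = lambda ts: sum(t in {"room", "near", "elevator"} for t in ts)
read_b = lambda ts: sum(t in {"noise"} for t in ts)
cut, total = best_split("room near the elevator noise".split(), read_a, read_b)
print(cut, total)  # 4 4
```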

Narwhal is now on GitHub

I believe this is public:
https://github.com/peterwaksman/Narwhal

A bit hard to find using "search" but words like "narwhal" and "tripadvisor" will do it.

Tuesday, October 11, 2016

The Internet of Words

I guess I should consider myself the copyright owner of the phrase "Internet of Words". It means an internet powered by language interfaces. I have no intention of enforcing a copyright but hope, rather, that someone does use the phrase.
The Chronicle of Higher Education has me beat but they are referring to something else.

Where did all the instant "experts" on chatbots come from?

A few months ago I was writing about the internet of words and was aware of Siri and some efforts underway by Mark Zuckerberg. Now, only a few months later, the use of language interfaces has pole-vaulted into first place in the techworld's discussions of what is cool. What shocks me is to read "Venture Beat" with one article after another about "Chatbots", written by the cool kids who - since they are the authors - seem to think they are experts.

They are experts like grocery store shoppers are experts on canned carrots. I should be pleased that people are beginning to see the internet of words but am a little revolted, as one gets, when personal thoughts get popularized in the mainstream.

I also note with a mixture of fear and skepticism that the highest praise these "experts" have is for chatbots with "... natural language and machine learning features..." and "big data". Fear because deep bullshit is hard to dislodge, and skepticism because I am getting closer to launching Narwhal, a narrow-world language processing toolkit that works with small data, and with geometry rather than statistics as the underlying technology. (I better get that launched asap.)

One last pair of snide comments:
  • narrow worlds are exactly what chatbots are for, and so the deep Bayesian statistics approach is doomed.
  • it is not about individual chatbots but about how a community of chatbots will work together
So far the "experts" have understood neither.

And versus But

I always questioned the Symbolic Logic assertion that "and" and "but" have the same meaning. It took years to articulate the basic difference: "and" preserves good/bad value whereas "but" reverses the good/bad value of the previous statement. Similarly "although" reverses the value of the next statement.

But I encountered a more profound difference in the dynamic meaning of "and" versus "but", exemplified by how "Bob and Ted and Mary" makes sense where "Bob but Ted but Mary" does not. It is something like this: "but" always signals that it was preceded by a complete prior statement, while "and" keeps the statement going - allowing it to be preceded by an incomplete prior statement. So the moving read point implementation for "and" does not trigger changes as deep as the one for "but".
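One reading of these value rules as a toy polarity propagator (my own sketch, not Narwhal's moving-read-point code):

```python
GOOD, BAD = +1, -1

def clause_values(clauses):
    """'and' preserves the running good/bad value, 'but' reverses the
    value of the previous statement, and 'although' reverses the value
    of the statement that follows it.  Each clause is a
    (connective, value) pair; the first connective is None."""
    values = []
    for conn, val in clauses:
        if conn == "but" and values:
            values[-1] = -values[-1]    # flip the prior statement
        elif conn == "although":
            val = -val                  # flip the statement that follows
        values.append(val)
    return values

# "the room was clean but the walls were thin":
# 'but' flips the first clause, so both clauses end up negative
print(clause_values([(None, GOOD), ("but", BAD)]))  # [-1, -1]
```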

Sunday, October 9, 2016

Things Squirrels Do

I think I have seen:
 - one squirrel dragging another that was wounded out of the road
- a male squirrel fooling with a female squirrel, in missionary position and fondling the tits
- a squirrel placing a leaf across a nest entrance as some kind of signal
Update: namely: compassion, sexual intimacy, and symbolic communication.

Friday, October 7, 2016

Chatbot communities are necessary for them to work together

[Late night reading about "chatbots" - emergent communities of software agents] Narwhal is a design tool for creating conversational interfaces. The whole topic of creating a community of interacting chatbots has gone un-noticed. I presume such a community will work in a way that enhances the total effect, with individual classes able to announce their capabilities to each other and, perhaps, some forms of emergent behavior. The development of environmental controls and of 'learning' features would be interesting challenges.

Thursday, October 6, 2016

First "Goodness Of Fit" measure

I am having a good experience using Microsoft's "Python Tools for Visual Studio". In any case, getting a goodness of fit score ("GOF") for the first time is great. Certainly, I have been aiming for this for months, so getting it to work in one case is a start. The GOF score is 0.5714...
The text is: "the hotel was near to the border and far from downtown" and the narrative pattern I was looking for was 'room/hotel in proximity to noise sources'. The poor fit is because the word "border" is not in any of the noise source dictionaries. However the word "downtown" is a known noise source.

Tuesday, October 4, 2016

Debt swindle

As far as I can tell, a famous presidential candidate's tax swindle amounts to subtracting a negative and still getting a negative. Where is the bad math? Who took the absolute value of debt?

Monday, October 3, 2016

Glib AI put down

When AI researchers understand the difference between hearing and listening, they will understand what they have been missing. "Deep data" assumes the machine will discover patterns in the data (hearing) but does not understand the need to impose patterns on the data (listening).

Saturday, October 1, 2016

We were so young... we were so beautiful

Circa 1982 Barbara Jones Waksman:
Circa 1984, Peter Waksman, at USC:
I have a song about Woods Hole:
Do you remember?
Remember when
In the summer
Of two thousand seven

We were so young
We were so beautiful
Let's look back and
Know that we were

Oh, Woods Hole
Is the place of my Dreams
Where my skin's been caressed
By the gentle ocean breeze

And although my friends
Are no longer my friends
I look forward to when
We'll all be friends again

Dumping off "Deep Learning" in Open Source

Here is an example. They use the verb "open source" which I call "dumping off". Either way, Yahoo is not giving up on classifying pornographic images, just their lost investment.

Yahoo open-sources a deep learning model for classifying pornographic images

Friday, September 30, 2016

Hate Speech may be a "Narrow World"

I hope to study hate speech and build a Narwhal class to read it if it uses a limited set of narratives. Using Narwhal to read hate speech would seem to be a natural application. Hope so, cuz it looks like the internet needs it.

I am up late reading about "AI and Hate Speech" (my Google search terms) and about something that removes 90% of the offending speech. That is probably as good as it will ever get, being model-less.

(A few minutes later after reading more) It looks way more complicated than simply finding keywords and narrative patterns. So perhaps my "hope" is absurd.

Refloated my boat

Sent the current version of "The Elements of Narrative" to a journal that solicited a submission, called "IOSRJEN" - the International Organization of Scientific Research Journal of Engineering. I'll probably regret it as the organization is pretty 3rd world. The paper has been significantly cleaned up. So we'll see.
Update: I am caught flat-footed by the fact that it got accepted. Ah well, I could write shorter versions for other journals.

Friday, September 23, 2016

Lots of blogging over at "Rock Piles"

Cranked out ~40 pages of writing and pictures over at my main blog. Ach! I am putting off coding the readText() function with recursive temporary vaulting.
Update: nah! recursive, but vaulting only at the highest level.

Origins of the word "Narwhal"

Got this from a comment in Dictionary.com [here] by a person named Kald:

And by the way, “narr” also means both a “fool” and “jester” in Norwegian… I would guess it came from the Latin word “narrare” (a story or a tale). Jester = storyteller. And I would not be surprised if that is the origin of your Hebrew word as well.

Update: So it looks like I am completing the circle of the word back to its roots. And while we are on the subject of its roots, how about the 4-beat "5 3-6 5 3" sung with the words "nah nah-na nah nah". I would imagine it to be older than these languages. I would not be surprised if birds sing something like it. [Or I could be wrong and crazy.]

(I am using 3-6 to mean dotted quarter note on three and eighth note on six.)

Update II: Korean song: 5-5 3-3 5, 3-3 4-4 3. Sing it and you will see a family relation with the English version.

Thursday, September 15, 2016

The Pre-History of Narwhal

Only cuz I hope it will be of interest to someone:

I work at a dental company where we do lots of 3D geometry pattern recognition. I did this with the best of them for a few years before learning that the right approach is often through some kind of model fitting process. Recently a colleague developed a "model" for the entire pair of upper/lower arch of teeth, by doing principal component analysis on many examples of upper/lower arches. Using this model to fit new examples was a totally different exercise. Not like what you do in calculus, where the best fit is calculated, but rather something more like an exhaustive search through the possibilities. Nevertheless, the results are very effective.

I had similar success measuring the profiles of teeth and coming up with 2D polygon models, to fit to those profiles. In my case I could measure widths and heights of features in the profiles, and use these numbers to parameterize the 2D models. This gave a shortcut to finding a best model and a best fit. Before I developed a model based approach I spent a month or so trying to do pattern recognition solely with algebraic relations between the feature measurements. It just didn't work. But the models did (or did significantly better).

Somewhere around in there I started thinking about the ideas I published in the "Best Models" paper. While trying to write about them, I was also noticing how my own cognitive processing handled routine tasks: getting up in the morning and going to the kitchen or, one time, starting to read a paragraph. I noticed how the leading sentence of the paragraph gave me a frame of reference for the following sentences. That planted a seed.

It was in this intellectual environment that I was given a new assignment: To sort incoming text from the customers into "design" and "non-design" statements. It turns out customers use a text feature to override what they ordered or to ask for custom features. Sometimes they simply say "Hello" or "Merry Christmas". All of which makes order fulfillment harder. I was bound and determined to come at this problem using a best model approach.

Now everyone was an "expert" in Natural Language Processing and they were all braying about Bayesian Statistics (stuff I know how to do using AO Diagrams and Data Equilibrium). But I was convinced the best model approach could be applied to language, and set about hacking together something that used keywords, tried to use key phrases, and even tried to contain an information model for statements of particular importance. At some point I wished I had a dictionary of word patterns I could use, like a keyword dictionary. For example:
"As [blank] as possible"
I did not have that sort of generality and did the best I could. I learned that our customers had about 14 different topics they wrote about, and came to call this "corpus" of text a narrow world. In a narrow world you want to sort text into its appropriate sub-topic and then try, as best as possible, to understand and act on whatever is expressed about that subtopic. Not only were the possibilities quite limited but, as a matter of fact, most people express themselves the same way or in one of several different ways. Since the possibilities are finite, processing text accurately should be possible.
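The kind of pattern dictionary I wished for might look like a slotted template matcher; this is a hypothetical sketch, not the code I had then:

```python
def match_pattern(pattern, tokens):
    """Match a word pattern with '*' wildcard slots against a token
    list, returning the words captured by the slots, or None if there
    is no match.  E.g. ['as', '*', 'as', 'possible'] captures the blank
    in "As [blank] as possible"."""
    if len(pattern) != len(tokens):
        return None
    captured = []
    for p, t in zip(pattern, tokens):
        if p == "*":
            captured.append(t)
        elif p != t.lower():
            return None
    return captured

print(match_pattern(["as", "*", "as", "possible"],
                    "as thin as possible".split()))  # ['thin']
```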

I spent part of a year building an even better reader at home. I studied hotel reviews as a narrow world and tried to read what people were writing when they complained about noise. It was an interesting subject and I built up some good word lists. But, at the time, there was much I did not understand about approaching language with geometric preconceptions. The best model approach lays out a specific Fibre Bundle description for the relation between a total 'Pattern Space' and a base 'Measurement Space' [see illustration here]. I never could understand which was which in the case of words. My noise reader was OK but got messier and messier and, in the end, Trip Advisor told me I could not mine their reviews. So dreams of money ended.
 
Then I spent a year thinking harder about word patterns, calling them narrative patterns, and deriving a new formalism called proto semantics for analyzing story structure. I wrote it up in a paper, "The Elements of Narrative", and got it rejected by a couple of journals. (I am revising it.) In fact I got distracted from the pursuit of language programming by some actually startling discoveries about simple narrative patterns we live with. There is an entire world of meaning between words and thoughts that has never been discussed, because there was never an adequate notation for it. So proto semantics opens other doors it is always tempting to wander through (even in my dotage). Hence this Sphinxmoth blog.

Then I got sick and badly depressed for two months. Recovered with a bubbling up of joy and went to Woods Hole, promising myself I would take another close look at the noise complaint examples I still have. So this summer I looked hard at the examples and tried to translate them into the new proto semantic notation. As it turned out there were about 6 distinct stories being used to describe noise. Like:
"The room was near the elevator"
or:
"Open windows let in the noise"
I showed an image of the six recently here.

Then I started thinking harder about how to organize keyword dictionaries and had an important insight: once you arrange keyword lists into hierarchies you have, effectively, a number system. Then I realized that the "Measurement Space" of the best model approach is this hierarchy of words, and that the "Pattern Space" contains the meanings and the ideal objects defined in proto semantics that are to be fitted to those meanings. So a sequence of words in a text is a path not just through the tree of words but a path, lifted up, into the space of meanings. The final hurdle in this approach is to have a goodness of fit calculation to compute how well a narrative pattern fits an incoming text. This is the key difficulty in Narwhal at present. (I just realized it has to be recursive, and am writing this instead of thinking about that.)

I want to say that pattern recognition really does work better with models. It is inevitable that one starts by measuring the data and looking at how relationships between the numbers relate to relationships between the patterns of interest. But it is a mistake to get stuck in the "Measurement Space", and it always goes better when you include an understanding of the "Pattern Space".

I was in Woods Hole a different time later in the summer and was looking at the noise complaint narrative structure and was able, for the first time, to conceive of how to write an automated "reader" in general. Once I saw the possibility, I decided it should be named after "Narrow World" and "Narrative" and picked the name "Narwhal". It is also a nod to computer languages with animal names. Also it is nice we are talking about a marine mammal, since I had so much fun with it in Woods Hole this last summer. Glenway and Barb let me rant about it.

Tuesday, September 13, 2016

Bad becomes Good

Let's suppose a rule for the narrative X::Y that says if one of the two is Bad and one is Good, then the whole statement is Bad. If they are both Good, or both Bad then that is Good. I am assuming '::' has an interpretation as "if...then".

Here is an example that seems correct:
"if you like noise then you will like this hotel" (Bad-to-Good, a negative about the hotel)
"if you like noise then you will not like this hotel" (Bad-to-Bad, a positive about the hotel)
"if you like quiet you will not like this hotel" (Good-to-Bad, a negative about the hotel)
"if you like quiet you will like this hotel" (Good-to-Good, a positive about the hotel).

If this is the general case, then value or sentiment transmits through an "if...then" by this rule:

X\Y      Good   Bad
Good  |  Good   Bad
Bad   |  Bad    Good

Interestingly this can be derived from Truism 4 - "Things remain the same." The stories about things remaining the same are positive and the ones where things change are negative.
Update: I am not sure if this works in general. It may be because of the "like".
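The table above is just logical XNOR on good/bad values; a minimal sketch:

```python
GOOD, BAD = True, False

def ifthen_value(x, y):
    """Value of 'if X then Y' (X::Y): Good when X and Y share a
    good/bad value, Bad when they differ - i.e. XNOR."""
    return x == y

# "if you like noise then you will like this hotel": Bad::Good -> Bad
print(ifthen_value(BAD, GOOD))  # False
```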

Saturday, September 10, 2016

The "n" and the "d"

I was driving and trying to make a joke of the words "Black Widow" I saw on the back of a passing car. I tried "Black Window" and, for some reason, this really did not work. Pausing to think about it, it may be because the sounds "win" and "wid" are so different. It feels as if the "n" and "d" endings are almost opposites of each other.
Just a thought to follow up on sometime: the exclusive "|" of Narwhal is related to the reversal of sentiment value, as per the "block()" commands and the implementation of ' *'. The same thread of narrative exclusion may serve to describe very low level aspects of behavior and language. In other words the "n" versus "d" sounds may live in a world (of phonemes) that can also be explained using narrative patterns.

Tuesday, September 6, 2016

Narwhal's darkest hour

...writing the UpdateVault() routine

Wednesday, August 31, 2016

The close integration of different perceptions within our experience

About dogs and crushes on girls.
Scientists have found that dogs perceive kind words differently from the same words spoken without kindness. Nor are dogs fooled by kind-sounding gibberish. Instead, based on MRI scans of their right/left hemispheres while being spoken to, it is observed that dogs use both hemispheres in perceiving the positive reinforcement of praise. This has numerous interesting consequences. I see it as suggesting that the spoken language of humans shares an emotional component with the language of dogs. Just as we seem to share musical scales with birds. And these things come naturally to all of us species.

Written language is a poor thing. Bereft of the emotional content carried by tone and enunciation, it simply stands to remind us of what the spoken language was like. In fact we must fill it in and "speak to ourselves while reading" to get a full sense of what is written. Still, we might get it wrong, with something like sarcasm, or reading a script out loud. This non-emotional content is huge for humans but may be different for dogs and birds.

We had that conversation about "common coding" theory, saying that the muscles used to say words are connected with the memories associated to those words, and connected with how the memories are stored. The consequence, based on the dog observation, is that we have similar muscles to dogs.

Perhaps separating thought and musculature creates a false dichotomy. Perhaps language and its function of communication are too closely tied to everything else that we act on, perceive, and experience. I don't know but I want to mention one more example.

When you have a strong crush on a girl and are trying hard to put her out of your mind, physical sensations break through the barrier. Physical touch becomes a trigger for recalling the person. Could it be that there is a pattern of 'woman' so built in to my act of physical sensation that each touch speaks her name? If that were so, then presumably the most primitive creatures would experience love. It is not much fun.

Wednesday, August 24, 2016

Port Huron Float Down - illegal "floaters" from USA to Canada, Aug 24 2016

The question comes up as to whether the Americans who were 'rescued' were also tagged before they were 'released' back into America. This in turn leads to the speculation of using "tag and recapture" to estimate the number of Americans in terms of the ratio of re-captured individuals.
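That speculation is the classic mark-and-recapture (Lincoln-Petersen) estimate; a sketch with made-up numbers:

```python
def lincoln_petersen(tagged, caught, recaptured):
    """Estimate a population from tag-and-recapture:
    N ~= tagged * caught / recaptured.  'tagged' floaters are released,
    then a later sample of 'caught' individuals contains 'recaptured'
    tags."""
    return tagged * caught / recaptured

# Hypothetical: 100 tagged, a later sample of 50 contains 10 tags
print(lincoln_petersen(100, 50, 10))  # 500.0
```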

Tuesday, August 23, 2016

Spell Hacker

I have a premise for a story: a guy living in a world of magic, who does not himself have magical ability. But it turns out that spells are like software and almost invariably are buggy. So this fellow learns how to hack the bugs in spells. For example, he and his group are immobilized by a spell but he leans slightly forward and back a few times and cracks himself out, then thinks: "let's see...there usually is a bush somewhere with a...there's one...[he looks underneath the bush]....there it is! This spell is a piece of crap." He fiddles with something....there is a "poof"...and his friends are freed as well.

Friday, August 19, 2016

Road layout determines driving behavior independent of traffic density

You could say that driving behavior consists of speeding up, slowing down, and changing lanes. I note - while driving past Route 20 going south on 128 at 1:30PM Friday - that people are making the same lane changes that, later on, gunk up the traffic, but which do not at the lower density. The behavior is still there but it does not affect traffic at this density.
On the other hand, speed changes and opportunistic lane changes (the ones that aren't necessary) are related to density. 

Thursday, August 18, 2016

How do we take the measure of a word?

I was stuck on this for a long time but I now believe the answer is:
 - Within a hierarchy of word categories that surround the word in a particular semantic frame. The location of a word in this hierarchy is thoroughly analogous to how numbers are located in smaller and smaller half intervals of [0,1]. But words by themselves are not infinitely subdivisible.
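The analogy can be made literal: a path down a binary hierarchy of word categories picks out nested half intervals of [0,1], except that a word's path stops at finite depth. A sketch (the example path is hypothetical):

```python
def path_to_interval(path):
    """Map a path through a binary word hierarchy onto nested half
    intervals of [0, 1]: 0 = take the left child, 1 = the right.
    The deeper the path, the narrower the interval - but unlike a real
    number, a word's path stops at finite depth."""
    lo, hi = 0.0, 1.0
    for branch in path:
        mid = (lo + hi) / 2
        if branch == 0:
            hi = mid
        else:
            lo = mid
    return lo, hi

# e.g. a word two levels down, right child then left child:
print(path_to_interval([1, 0]))  # (0.5, 0.75)
```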

Wednesday, August 17, 2016

how to handle a polarity change deep inside the sub narrative tree?

A totally idiosyncratic question, but the answer is theoretical: it postulates that a narrative can have only a positive value or a negative value. You are not waiting for an ambiguous good/bad to occur but are operating with a current one.
So the answer to the question is: whenever a deep polarity change occurs, the whole evolving narrative should get its polarity changed as well.
Update: It is particularly interesting to suppose a collection of alternative interpretations that widens and narrows on the way through a text but with one component - the value - being shared by all the alternatives. This 'value' is always current and is not projected as a possible future. 
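A minimal sketch of that rule, with a toy narrative tree sharing one current polarity (my own illustration, not Narwhal code):

```python
class Narrative:
    """Toy narrative node; the root holds the single shared
    polarity (+1 good / -1 bad) for the whole evolving narrative."""
    def __init__(self, parent=None):
        self.parent = parent
        self.polarity = +1

    def sub(self):
        return Narrative(self)

    def flip(self):
        """A polarity change at any depth flips the whole narrative:
        walk up to the root and negate the shared value there."""
        node = self
        while node.parent is not None:
            node = node.parent
        node.polarity = -node.polarity

root = Narrative()
deep = root.sub().sub()
deep.flip()             # a change two levels down...
print(root.polarity)    # -1  ...flips the whole narrative
```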

Thursday, August 11, 2016

Owl hoots, the clave, and narrative patterns

I was having a sleepless night and tried to turn it to good use thinking about how to implement the "moving topic" in Narwhal. I had three good ideas, which I will write more about later. The basic idea of the "moving topic" is that you move through the words of a sentence with a collection of hypothetical meanings that gets added to and narrowed down as you go.

Then there is the clave (or "stick") which Glenway Fripp has been telling me about. Apparently it serves as a fixed rhythmic pattern - whether or not you hear it - and this fixed pattern replaces the regular "beat" of European music, in all Latin American music. This idea of an irregular rhythm being the basis for variation (of the overlayed melody, not the clave "beat") has been in my mind recently - since I started hearing about it in July.

And then around 4:30 AM a Great Horned Owl started hooting. It has a three-note rhythm at the beginning, followed by a more complex rhythmic sequence of hoots. The latter hoots (as I listened carefully to them) were varying in 'attack'/'duration'/'intensity'/'inflection', and in enough different ways that I was not sure whether the owl was repeating itself. A casual listener would conclude the song was the same each time, with only subtle [and presumably meaningless] differences in the vocalization of each hoot.

I bet that is not right. I bet that three-note prelude allows sync'ing to it and guarantees the rhythm expectation (of any listener) for the more variable part of the pattern to follow. I also bet that if you recorded the same owl hoot over and over, you would see a very deterministic variation in the second sequence of notes. As a computer programmer, I know how much information can be encoded in a binary sequence. And with the clave-like sequence of owl hoots, given the idea of an evolving narrative pattern, it totally makes sense that the hoot variations would be within a fixed rhythmic sequence, but with more and more inflection variation towards the end of the sequence.

There is absolutely no mathematical basis for the assumption that all owl hoot sequences are the same, or that their songs are aesthetic but without content. There is plenty of room in that data for subtle meanings. In fact there is plenty of room for a collection of hypothetical narratives to be narrowed down to a final confirmed meaning.
Update: The point is that if the theory is that birds sing for aesthetic reasons and to show off - but not to communicate - then that theory has to explain a change in variability part way through the song. I don't see how it can.

Saturday, August 6, 2016

Value propagation in verb and adjective narratives

We have relations like
A -v-> B
and
A _v/ B
The question is: how do the values associated to A, v, and B become a value for the whole expression? Let 0=bad and 1=good; then, with arithmetic mod 2, we can write rules like the following.
Always ignore the value of A and use:
value( A -v-> B ) = value(B) + value(v) + 1
value( A _v/ B ) = value(A)*value(B)
For fun you can write:
1 - val( A -v-> B ) = val(v) + val(B)
     val( A _v/ B ) = val(A) * val(B)
Also, I think we should use:
val( A, B ) = val(B)
and
val( A::B ) = val(B)
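With 0 = bad, 1 = good, and arithmetic mod 2, these rules can be written out directly (a sketch; the function names are mine):

```python
def val_verb(v, b):
    """A -v-> B: ignore A and take (val(B) + val(v) + 1) mod 2,
    which is good exactly when verb and object agree in value."""
    return (b + v + 1) % 2

def val_adj(a, b):
    """A _v/ B: the product - good only when both parts are good."""
    return a * b

def val_seq(a, b):
    """A,B and A::B: the value of the second part wins."""
    return b

# A bad verb acting on a bad thing reads as good ("blocks the noise"):
print(val_verb(0, 0))  # 1
```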

Friday, August 5, 2016

GOF - the goodness of fit formula [a work in progress]

I think this is right, and took the time to draw it. (Actually it is not; see below.)

We consider all possible assignments of words of text to slots of a narrative pattern, including not using some of the slots. We are interested in assignments that use as many slots as possible. Let p_used be the number of slots used in a given assignment and let |text| be the number of words of text. Let "delta i" be the difference between the first and last indices of words that are used, plus one. Then a measure of goodness of fit (or "GOF") for the assignment is:

GOF = ( p_used / |text| ) * ( p_used / delta i )
This rewards for having extra slots that match but does not punish for having extra slots that are un-matched. To do that, pre-multiply by pattern length divided by text length.
Update: as usual there is a bit of confusion as something like this starts to finalize. In fact the formula confuses the number of slots used, out of the total available in the narrative pattern, with the number of indices used (in the many-to-many mapping of slots to indices), out of the available indices in the text. If we take p_used in the above formula to be the number of text token indices used in pattern matching, then the missing piece to penalize for un-used slots is the factor (u/n), where u is the number of slots of the pattern that are used and n is the total available number of slots. So, less elegantly but more correctly, we can let
|p_used|= num pattern slots used 
|p| = num pattern slots available
|text_used| = num text token indices used
|text| = num text token indices available
di = (last index used - first index used + 1)
then define
GOF = ( |p_used|/|p| )  *  (|text_used|/|text|) * (|text_used|/ di)
Where the first factor measures how much of the pattern is used. Second factor measures consumption of the text. Third factor measures how clustered is the use. But note that the "use" of the pattern may be smeared out over the text. 
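The three-factor formula as a quick sketch (names mine):

```python
def goodness_of_fit(p_used, p_total, text_used, text_total, first, last):
    """GOF = (|p_used|/|p|) * (|text_used|/|text|) * (|text_used|/di):
    pattern coverage, text consumption, and clustering of use,
    with di = last index used - first index used + 1."""
    di = last - first + 1
    return (p_used / p_total) * (text_used / text_total) * (text_used / di)

# Full pattern, whole text read contiguously: a perfect 1.0
print(goodness_of_fit(3, 3, 5, 5, 0, 4))  # 1.0
```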

Currently (in Sept): I am favoring one optimization involving u = number of slots used of a narrative, n = total number of slots, r = words read, and f = lastwordread - firstwordread + 1. The formula for goodness of fit is gof = (u/n)*(r/f). A different version, used in recursion, is simply 'u', along with trying to read as many words as possible.

Tuesday, August 2, 2016

Things rich people say - continuing the series

Overheard:
"...the neighborhood was kind of run down, I mean for a million bucks you expect a little more.."

see also here.

More about the 'story" noun type

In principle this may not be too hard to add to the proto semantics. 
Verbs of 'story'
person "tell" story   (write: person-tell->story)
person "listen to" story (equivalently: story-listened to by->person )
thing "evokes" story  (thing-evokes->story)

Adjectives of 'story'
There is one adjective qualifying a story as "about" another narrative pattern. So we write
story _/ narrative

There are also some semantic rules about plans of action being converted to story and vice versa.

Sunday, July 31, 2016

Goodness of fit metric for text matching

I believe it involves the quantity:

g = (num words read)^2 / (last - first + 1)
where a pattern is used to match text, and 'first' and 'last' are the first and last indices of words read in the text.
Here is why. Increasingly I think pattern matching should require all of the pattern to be filled in some way, so the number of words consumed will just be a function of how many of them are in the pattern. Hence (num words read) / (last - first + 1) simply measures how spread out those same words are in the incoming text. The additional factor of (num words read) in the numerator gives greater weight to longer patterns. You might want to consider G = g/(num words in text) so the quantity is never more than 1.
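As a sketch (function names are mine):

```python
def g_score(words_read, first, last):
    """g = (num words read)^2 / (last - first + 1).  One factor of
    words_read measures how spread out the matched words are over the
    index span; the second rewards longer patterns."""
    return words_read ** 2 / (last - first + 1)

def G_score(words_read, first, last, text_len):
    """Normalized by the text length so the score is never more than 1."""
    return g_score(words_read, first, last) / text_len

# 3 pattern words matched contiguously at indices 2..4:
print(g_score(3, 2, 4))  # 3.0
```

For the normalized version, G_score(3, 2, 4, 9) divides the same g by the 9-word text length.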

Saturday, July 30, 2016

Barbara Waksman

How to verify computer generated proofs that are too long for a human to read?

I propose that a proper implementation of Narwhal could solve this problem. In the previous post I showed how an "if...then" statement could be translated into a proto semantic expression. This would give Narwhal an opportunity to process arbitrarily long statements and, in particular, it might be set up to read the proof automatically to determine its correctness. As it crunched along automatically through what would take a lifetime for a human: if an incoherent statement occurs in the proof, or if it finishes and says the proof is true or false, then the final verification would be to verify Narwhal itself.
Verifying Narwhal might be considerably more in the scope of a lifetime. I am sure you could do it by verifying the lowest levels and the inductive/recursive mechanisms for increasing complexity.

Hypotheticals are contrasted, multi-part, 2nd order narratives

Example:
If you have time after work then you should come over for a drink 
This is a hypothetical chronology. Actually of the form
((you have time after work), (you come over for drinks))*
There is no need for a separate concept of 'hypothetical' as it already resides within the contrast operator and the forces derived from Truism #7. Given a stated contrast, there will be a desire to resolve it.
The narrative inside the parentheses that is contrasted is a second order narrative with first order sub narratives. [Concept of "order" is new to proto semantics but needed for the Narwhal implementation]

So this gives us that "if A then B" should translate into the proto semantic statement
(A,B)*
So for example "unless A, B" or "if not A then B" becomes:
 (A*,B)*
 Obviously the Narwhal engine has its work cut out for it, making these translations to text internally.
Update: I am not sure about this. It could also be interpreted as A::B. Perhaps the logician usage is a bit of a corruption between the two interpretations [yeah, don't expect me to be nice.]
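The proposed translations can be sketched as simple string builders. This is a hypothetical illustration of the forms above (the function names are mine; the '*' suffix is the contrast operator):

```python
# Hypothetical sketch of the "if...then" translations into proto semantics.
def if_then(a, b):
    """'if A then B'  ->  (A,B)*"""
    return f"({a},{b})*"

def unless(a, b):
    """'unless A, B' or 'if not A then B'  ->  (A*,B)*"""
    return f"({a}*,{b})*"

print(if_then("you have time after work", "you come over for drinks"))
```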

Good vs Bad in proto semantics

Faced with how language builds up more complex statements from simpler ones, and given how easy it is for expressions to become combinatorially difficult to analyze, the ancients found that the True/False concept provided relief. As the thinking goes: True/False can be assigned to the smallest part of an expression, and the value percolates up through all the complexity to reveal a single True/False "meaning" for the more complex expression. Maybe it was pretty clever of them to realize the problem and solve it this way.

I have the same problem with proto semantics. Given a complex statement, what exactly has been said? I take relief in the idea that something like a hotel review is an opinion. It expresses a value that is, in the end, either positive or negative, good or bad. So a Good/Bad value is meaningful for the smallest unit of an opinion statement and percolates up through the complexity to provide a single Good/Bad value for the whole. [Proto semantics also allows the "opinion" to be off topic or un-intelligible.]

This means a Narwhal class must contain a "polarity" member variable that retains the overall value of an input text. It defaults to "1", which is equivalent to "good" or "true" or any other polarity that might be around. I am learning lots of things from trying to implement proto semantics in this Narwhal language.

But not all statements are value statements. Lots of technical language describes things with attributes, chronologies, and interactions and is not concerned with the positive or negative values. They are just informational narratives. In that case the polarity can remain at its default ("Good") [where it can be ignored].
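A minimal sketch of the polarity idea: only the "polarity" member and its default of 1 come from the text above; the class and method names are illustrative assumptions.

```python
# Illustrative sketch (class/method names are mine, not Narwhal's).
class NarrativeReading:
    def __init__(self):
        # Defaults to 1 ("good"/"true"); informational narratives can ignore it.
        self.polarity = 1

    def negate(self):
        """Flip the retained value when a negation is read in the input text."""
        self.polarity = -self.polarity

    def is_good(self):
        return self.polarity > 0

reading = NarrativeReading()
reading.negate()          # e.g. after reading "the room was NOT quiet"
print(reading.is_good())  # False
```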

Tuesday, July 26, 2016

If Narwhal is right...

If Narwhal is right then I can hope geeks, from among the 4 billion people on the planet, will set about building little pockets of definition and after 5 years there will be standards committees and the collection of existing (and connected) pockets will spread out and provide a broad basis for understanding most language use. After 10 years it will be broad and deep and the internet will be able to understand what you type.
For example, someone might put their way of thinking on the internet, so people could use it without learning much about it.

You can't make this stuff up

Man fatally shoots doctor before killing himself at Berlin hospital

Monday, July 25, 2016

if A then B

A couple of thoughts. One is about how logicians co-opted a chronological term ("then") for a timeless logical relation. But actually it is like this: all mathematical certainty comes down to the same thing, namely that I am playing a game with someone or I am not. For example:
P is playing a game with B: when he hands her a ball, she will hand it back.
P will know B is not playing if the ball does not come back in a timely manner.
For another example: Suppose we take a game with this rule: whenever I hand you an assumption A, you will hand me back a predictable result B. I then hand you an assumption A. Now I know you will hand me back a result B, or know that you are not playing the game. Was it Aristotle who wrote this down?
A
A=>B
B
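The "game" reading of this schema can be sketched in code. This is my own framing, purely illustrative: the shared rule maps assumptions to promised results, and a missing result reveals that the other party is not playing.

```python
# Illustrative sketch: 'if A then B' as a game with a shared rule.
def play(rule, assumption):
    """Hand over an assumption; receive the promised result, or None,
    which tells us the other party is not playing the game."""
    return rule.get(assumption)

rule = {"A": "B"}        # the shared rule: given A, hand back B
print(play(rule, "A"))   # B    -- the schema: A, A=>B, therefore B
print(play(rule, "C"))   # None -- no rule for C; not playing
```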

Update: So what is at the root of the certainty that 'I am either playing a game or not'? I believe it derives from the same principle that defines the notion of a "channel" - where you can switch between channels but only watch one at a time. This happens in our heads, for any category that works exclusively - like a color channel, or a shape channel, or an intensity channel, or a position channel (two things can't be in the same place), or an arbitrary True/False channel. We know that two different channel types are compatible when they can be superposed. I cannot tell if this derives from language or from perception, or what all.

 Playing a game with rules that are shared is also an underlying principle of communication - as Grice would have it. Beings that depend on communication for their prosperity have evolved to this mental capacity of creating new channels and then using them on purpose.

Saturday, July 23, 2016

Circling around the semantic underpinnings of symbolic logic

It is a goal of mine to better understand how mathematical necessity comes about. In some cases it seems to come from cultural games that develop expectations (like counting). Others arise naturally in the vocabulary of boxes and containers and, generally, 'thing' and 'place' words. But above all, I would hope to derive the necessity of symbolic logic ideas from simpler semantics. That hasn't happened yet, but I am getting some insights into how the proto semantic operators are related to language usage, in comparison with how logic operators co-opt the same usages.

The ',' of proto semantics is equivalent to the natural usage of the word "then", as in "we went to town, then we came home, then we ate lunch". Pretty close to "we went to town and we came home, and we ate lunch". Now this same "and" was borrowed by symbolic logic to mean a version of sequence that is independent of order. To question whether order is important in a sequential statement is to create a straw horse, meaningful to the logicians' co-opted version of "and" but not part of the original natural usage of "and".

Very much the same is true for "or" which does not really have a representation in proto semantics. It means, generally, to make a choice. To ask about whether it is an "exclusive" choice creates a straw horse, meaningful to the logicians who co-opted the term but not part of its original natural usage. A choice is a choice and the additional ("and not both") is an artificial add-on from logical usage. So why is "or" not part of proto semantics? I suppose it is too close to requiring a concept of 'collection' that is not available in "proto" land.

Finally the '::' of proto semantics is the "because" or "so" of natural language. Its closest analog in logic is the "therefore" of syllogism. But interestingly, it often corresponds to a different natural usage of the word "and": "He sailed beyond the horizon and that was the last they saw of him".

Update (in favor of proto semantics): the two natural meanings of "and" are captured by the notations ',' and '::'. Proto semantics has no concept of set, so no concept of "or" and choice - although that might be added in a post-proto semantics. Logicians have added the (unnatural) assumption of order-independence to the definition of "and" and the (unnatural) assumption of "both/not both" to the definition of "or". Meanwhile the natural word "then" is captured as ',' and is not locked into the logicians' warm embrace of the "if....then..." format. Also the word "if" is in no way special in proto semantics. We say "if you are not busy after work then you should come over for drinks" and take this as a statement with a ',' in it for the "then". The "if" partakes more of choice availability - something at the level of "or" and a bit out of reach for proto semantics.
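The correspondence between natural connectives and the two operators can be sketched as a lookup table. This is a hypothetical illustration (the table and function are mine, not part of proto semantics itself):

```python
# Hypothetical mapping of natural connectives to proto-semantic operators:
# ',' for the unconstrained sequence ("then", sequential "and"),
# '::' for the consequence ("because", "so", consequential "and").
CONNECTIVES = {
    "then": ",",
    "and": ",",       # the sequential "and"; the consequential "and" maps to "::"
    "because": "::",
    "so": "::",
}

def diagram(clause_a, connective, clause_b):
    op = CONNECTIVES[connective]
    return f"{clause_a} :: {clause_b}" if op == "::" else f"{clause_a}, {clause_b}"

print(diagram("we went to town", "then", "we came home"))
print(diagram("I untied my shoe", "so", "I took the shoe off"))
```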

Sunday, July 17, 2016

The Narwhal Language - a Python extension that implements semantic concepts

I started thinking about promoting my "proto semantics" via a computer language for narrow world language programming. It is fun and convenient to call this the Narwhal Language, and thinking about it has clarified several things.
At a theoretical level, I finally see that the base space in a Best Model implementation of language recognition is a structure of keyword dictionaries.
Entities like this play the same role that numbers and measurements play in geometric pattern recognition. But it is the narrative patterns that play the role of geometric figures. These narrative patterns live in the total space of meanings. For noise, we have these functional narratives:
     SOUND
     SOUND_/TOD
     PROBLEM_/SOUND, (PROBLEM_/SOUND)*
     LOC _RELATION_/ SOURCE
     MATERIAL -OPACITY-> SOUND
    (SOURCE,LOCATION)_/INTENSITY :: SOUND


So Narwhal is designed around making these ideas accessible to a programmer who wants to write text-aware classes but wants to focus on the details of his subject, not on generalities about how language works.
At a practical level, one discovery helps me see how Narwhal could be implemented. This is the separation of the summary narrative:
        (SOUND->[ME] :: [ME]_/AFFECT)
from the functional ones and the realization that the functional narratives need a hard coded mapping to the summary narrative. But once you see how to do that and see how text can be filtered through the functional narrative patterns, an implementation starts to become visible on the horizon.
Also at a practical level, it is worth clarifying the tree of keyword dictionaries described above. One basic concept is the difference between OR'ing and XOR'ing of sub-dictionaries. Another is the difference between a sub-dictionary and a child dictionary. These are needed to specify the structure that I think is required.
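One way to picture the OR/XOR distinction is a small recursive class. This is a sketch under my own assumptions about what OR'ing vs XOR'ing might mean: an OR node matches a word if any sub-dictionary matches it, while an XOR node matches only if exactly one sub-dictionary does (the subs are mutually exclusive).

```python
# Illustrative sketch of a keyword-dictionary tree; all names are mine.
class KeyDict:
    def __init__(self, words=(), subs=(), exclusive=False):
        self.words = set(words)      # keywords held at this node
        self.subs = list(subs)       # sub-dictionaries (children in the tree)
        self.exclusive = exclusive   # True: XOR the subs; False: OR them

    def matches(self, word):
        if word in self.words:
            return True
        hits = sum(1 for sub in self.subs if sub.matches(word))
        return hits == 1 if self.exclusive else hits > 0

# Example: an INTENSITY dictionary whose sub-dictionaries are exclusive.
loud = KeyDict(words=["loud", "noisy"])
quiet = KeyDict(words=["quiet", "silent"])
intensity = KeyDict(subs=[loud, quiet], exclusive=True)
print(intensity.matches("noisy"))  # True
```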
Back to the theoretical level for a moment: incoming text is seen as a path through the base space and its interpretation is a lifting to the total space. By using a goodness of fit measure that counts the number of words consumed, one hopes that the best fit lifting is a reasonable approximation to the "true" meaning that will forever be stored, untouchably, in different people's different minds.
Update: Not long after seeing that a computer language was possible, I was noodling around in a tongue-in-cheek sort of way trying to think of a name for the language. "Narwhal" works for narrative patterns as well as for narrow worlds and, given the number of computer languages that are named after animals, it seemed like a winner. In fact, as soon as I had a name to use, I started using it and the project was launched.

Saturday, July 9, 2016

The abstract form of the noise statement

But interestingly, this is not the form that appears in natural expressions which mostly take the form of cause and effect statements: "the room was near the elevators and we were kept awake all night by the clanking of the old equipment". Or "the windows did little to block the sounds of heavy traffic from I-270".
This suggests the need for two levels of semantic processing in an automated reading system. A first level reads natural expressions using smaller "functional" story forms. These are then hard coded to fill in the larger general form. [For hotel rooms there are only about 5 of these smaller story forms that are about noise.]
Update: This basic idea of two levels of semantic processing was the beginning of believing I might be able to implement these things in software.
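The two-level idea can be sketched in miniature. Everything here is invented for illustration (the phrases, labels, and slot names are assumptions): the first level matches small functional story forms in the natural expression, and a hard-coded mapping then fills slots of the larger general form.

```python
# Illustrative sketch of two-level semantic processing; all names are mine.
FUNCTIONAL_FORMS = {
    # phrase fragment -> (reading, slot of the general form it fills)
    "near the elevators": ("PROXIMITY", "source"),
    "kept awake":         ("AFFECT", "effect"),
    "paper thin":         ("OPACITY", "source"),
}

def read_noise_statement(text):
    """Level 1: match functional forms; level 2: fill the general form."""
    general = {"source": None, "effect": None}
    for phrase, (reading, slot) in FUNCTIONAL_FORMS.items():
        if phrase in text:
            general[slot] = reading
    return general

print(read_noise_statement(
    "the room was near the elevators and we were kept awake all night"))
```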

Tuesday, June 28, 2016

selfie with a spicebush hat

I call it "soapbush" and rubbing it on yourself sort-of helps keep off the bugs. But the bugs were annoying so I tried to make a hat. It still did not do much good.

Tuesday, June 14, 2016

Fool enough to think I can spot a woodcock

A woodcock flew up ahead of me and landed quickly, a few feet away. I thought I could photo it, but filming is the only thing fast enough.
The woodcock is thinking "he must consider himself able to spot a woodcock".

Friday, June 10, 2016

Planning

Since describing an action gives a narrative for it - a plan - I want to examine the idea that a plan is a narrative with narrative structure similar to the action itself. Since the actions precede language, must not language evolve along lines that already exist in the mind or "motor-sensory" system?

In other words the underlying system of narrative structure would predate language.

Saturday, May 28, 2016

Bureauware

I might as well coin the term "bureauware" for software that replaces a bureaucracy.

"For English press (1)"

Friday, May 27, 2016

Pausing before taking off the second shoe

I notice that as I take off my shoes there is a moment, after I have taken off one shoe, when I am at rest. I resume taking off the other shoe without a definite schedule. This is reflected in a "," when diagramming the expression "I took off one shoe and then the other":
I took off one shoe, I took off the other
Compare with the expression "I untied my shoe and then took it off". Which diagrams as
I untied my shoe :: I took the shoe off
The "," conveys a sequence that can proceed at an unconstrained rate, so I can relax for a moment before taking off the other shoe. When I untie a shoe, though, I do not rest until the shoe is off. 

I find other moments of rest, during the day. For example, getting out of my car there is a related set of actions: pull in, take car out of gear, brake, stop, pull parking brake, undo seat belt, open car door. THEN there is a pause before I exit the car, where I feel like I can rest a moment.  

These moments of pause are represented by the "," of narrative structure describing the action. I understand why it is natural to occur between shoes, but I am less clear why it works that way with stepping out of the car. In fact, I sometimes put one foot out of the car and on the ground, then pause. I guess moving a leg from inside to outside the car is a complete action and that "," is natural between complete actions. These thoughts reinforce the comparison between actions and narratives that can be called "planning". An action is not necessarily described in words but when it is the narrative should reflect underlying structure for the action. By this view, the structure of language simply represents the structure of planning and of action. Which allows language to be a window into our mental processes.

Thursday, May 26, 2016

The Boy and the Dragon

Act 1
Scene: The boy, named Tom Solomon, is fishing along the river, only a few feet from the nose of an old female French Swimming Dragon (the ones that lay elongated eggs; these dragons are green and reflective and well camouflaged when they hold still). The female dragon is peacefully occupied looking for large prime numbers in her head and spearing an occasional passing fish with her tongue. Nearby a young female dragon, daughter of the older female, plays in a field. Her name is “Esque”, as in ‘statuesque’, ‘arabesque’, and ‘picturesque’ but not as in ‘grotesque’ or ‘Kafkaesque’. The boy is glad it is Sunday and he can play outside.

Scene: the King’s advisors talk of how the King’s authority is being eroded by a strong middle class and powerful merchants. Originally the King’s family was famous for slaying dragons. “Too bad there aren’t any dragons left, or we could have the King defeat one… What, that old fart? We’d have to fake the fight…..hey why not fake the dragon….can we do that? Let’s ask the magicians and special effects department….” Eventually they come up with a fake dragon and launch it: a Red Fire Dragon, swooping back and forth over the town and countryside breathing fire and terrorizing everyone. Among all the advisors, one (The Naysayer) says: “No, you should not fool around with dragons…”

Scene: everyone cowers under the swooping presence of the Fire Dragon. The boy scrambles out of his boat and runs to hide under a nearby bridge. The old dragon shows no signs of even noticing the disturbance overhead. The young female, Esque, cowers for a moment in the field. She is too young to realize how well hidden she is in the grass, and she runs to hide under the same bridge – back to back with the boy. They meet and converse.

Scene: The mother dragon is called “Llel” as in ‘parallel’ and ‘Low Level Energy Laser’*. Esque talks to her mother about the scary dragon overhead and Llel says: “Oh! That? You can tell it is fake – see how it keeps repeating the same pattern – three flaps, a swoop and a turn to the left; then three flaps, a fireball, two flaps and a turn to the right…what a piece of junk….it’s a wonder the thing hasn’t crashed.” “But mother, it is frightening, can you make it stop?”
Reluctantly the mother agrees, shakes herself off, and jumps into the air. These French Swimming Dragons are graceful and delicate. Once airborne the mother becomes quite hard to see, reflecting the color of the sky and clouds. Occasionally a metallic glint is visible but never an outline. She flies alongside the fake dragon – matching it flap for flap, swoop for swoop – and then shreds it with her claws and extinguishes the fire with a well-placed spout of water. She shakes herself off and glides to a landing on a lake a few miles away.

Act 2
Scene: The Naysayer and Merchants sit discussing events. One argues that the military is loyal to the King, another argues that a few select generals control the army’s loyalty.

Scene: the King’s advisors are only confused briefly by the destruction of the fake red dragon; then they see their chance. They call up the generals to send out the troops looking for a “green dragon”. Most of the troops don’t believe there is any such thing.

Scene: mother dragon swims back up the river, past the town, back to her quiet spot by the river. The troops ride up and down.

The troops capture Esque. The boy Tom gets into the castle and gets her out of her cage. As a young dragon, Esque does not have the ability to fly or shoot water. Somehow the stresses of capture initiate the maturation process. In a desperate moment Tom climbs on her back, and she is able to fly out with him.

Act 3
They return to the mom. She gets angry and trashes the King’s troops.

Much confusion ensues, possible dethroning of the King. In the end the Naysayer says: “I told you so”.

*Senior Dragons are occasionally allowed to claim an acronym for their name. It is not common.