Tuesday, May 23, 2017

I am a victim of beauty

I am a victim of beauty
Held hostage by obsession

Monday, May 22, 2017

Hunting for rock piles everywhere else

I got good at looking for arrowheads here in the near-barren fields of Concord - a tough training regimen. So now I can find stone tools anywhere on the planet. In most places people do not know how to see such things, so you can find hand axes in the roadside debris.
But rock piles, I assume, have a much smaller global span than stone tools in general. The same principle holds: I have learned how to see something that most people do not know how to see. It leads to wondering: where else in the US are there rock piles? They could be pretty inconspicuous. Can they be found coast to coast?

Does each narrative structure support a fixed set of queries?

Suppose you have a reader with NARs like X_/A or X->Y. After a read operation, you ought to be able to query the reader in forms that correspond to the nature of those NARs. Some examples:
If X->Y is in the reader, you should be able to ask: where, when, how? For X_/A, you should be able to ask about intensity or some such.
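To make that concrete, here is a toy Python sketch of NAR-dependent queries. The Reader class, the type tags, and the query names are all hypothetical, my own stand-ins rather than actual Narwhal API:

    # Hypothetical sketch: a reader exposes only the queries that the
    # narrative structure (NAR) it matched can support.
    CAUSAL = "X->Y"       # cause/effect narrative
    ATTRIBUTE = "X_/A"    # thing-with-attribute narrative

    # Query forms each NAR type is assumed to support
    QUERIES = {
        CAUSAL: {"where", "when", "how"},
        ATTRIBUTE: {"intensity"},
    }

    class Reader:
        def __init__(self):
            self.matches = []  # (nar_type, details) pairs from read()

        def read(self, nar_type, details):
            self.matches.append((nar_type, details))

        def query(self, nar_type, question):
            """Answer only the questions this NAR type supports."""
            if question not in QUERIES.get(nar_type, set()):
                raise ValueError(nar_type + " does not support " + question)
            for t, details in self.matches:
                if t == nar_type:
                    return details.get(question)
            return None

    r = Reader()
    r.read(CAUSAL, {"where": "kitchen", "when": "noon", "how": "slowly"})
    print(r.query(CAUSAL, "when"))  # -> noon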

Sunday, May 21, 2017

Scalable expectation

I was thinking that expectation might have an intensity scale. So I could be driving and the story is driving, driving, driving or (get there)*, (get there)*, (get there)*, and it might be reasonable to put this on a scale. The idea is that you could be a little impatient.
Then I was thinking about the concept of "home" and how being at "home" has the property that I can relax and stop thinking about how to change my location. If I am not home, I am trying to get home. On a side track, I think that 'home' has a special place in narrative structure, like 'I'.
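A toy sketch of what such an intensity scale might look like, purely my own invention:

    # Hypothetical sketch: an expectation whose intensity grows each
    # time the story repeats it, e.g. (get there)*, (get there)*, ...
    class Expectation:
        def __init__(self, goal):
            self.goal = goal
            self.intensity = 0  # 0 = relaxed, higher = more impatient

        def repeat(self):
            """The narrative repeats the unmet expectation."""
            self.intensity += 1

        def satisfy(self):
            """Arriving home: the expectation relaxes completely."""
            self.intensity = 0

    e = Expectation("get there")
    for _ in range(3):
        e.repeat()
    print(e.intensity)  # 3: a little impatient
    e.satisfy()
    print(e.intensity)  # 0: at home, relaxed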

Slot Events, Short Commas, and the pursuit of the Golden Algorithm

The "Golden Algorithm" is the correct (but elusive) mechanism for filling N-segmented text, as I was discussing here. So I have been thinking harder about low level things and a couple of flaws in previous thinking are as follows:
  • When a slot is to be filled and has already been filled, it is kind of an error condition; simply overwriting the slot and closing one's eyes to it must be wrong.
  • When a NAR gets completed, it should be vaulted in association with the current segment index, not the index of the subsequent control.
  • There is no mechanism for saying: enough time has gone by, let's vault if we have something.
Anyway, a slot event is where we go to fill a slot and the event results in a change according to the prior state of that slot as well as of the others (it could be filled or not, they could be filled or not). A short comma is one whose scope of prior text contains a small number (below some threshold) of words. The short comma is helpful when handling lists, like lists of word/value pairs.
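Here is a toy sketch of a slot event in Python; the vault-on-overwrite policy is just one possibility I am considering, not the actual implementation:

    # Hypothetical sketch: instead of blindly overwriting, the action
    # taken depends on the prior state of this slot and its neighbors.
    def slot_event(slots, index, value, vault):
        """Fill slots[index] with value, reacting to prior state."""
        if slots[index] is None:
            slots[index] = value  # normal case: empty slot
            return
        # Slot already filled: treat it as a boundary rather than an
        # error to ignore. If every slot is filled, vault the completed
        # NAR and start a fresh set of slots.
        if all(s is not None for s in slots):
            vault.append(list(slots))
        slots[:] = [None] * len(slots)
        slots[index] = value

    vault = []
    slots = [None, None]
    slot_event(slots, 0, "I want", vault)
    slot_event(slots, 1, "order X", vault)
    slot_event(slots, 0, "we need", vault)  # vaults the old pair
    print(vault)  # [['I want', 'order X']]
    print(slots)  # ['we need', None]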

As for the Golden Algorithm, it requires that slots of higher-level narratives be scored according to the scores of the lower-level narratives in those slots. That requires a lot more local vaulting of partial results, and a different feel. So those new ideas are coming along the way.
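To show the shape of that recursive scoring, a toy sketch under my own assumptions about how narratives and scores are represented (this is not the Golden Algorithm, just the bookkeeping it seems to require):

    # Hypothetical sketch: a higher-level narrative's score is derived
    # from the scores of the lower-level narratives in its slots, so
    # every sub-score gets vaulted locally as a partial result.
    def score(nar, partial_vault):
        """nar is (name, float) for a leaf or (name, [subnars])."""
        name, content = nar
        if isinstance(content, float):  # leaf: pre-scored match
            s = content
        else:                           # interior: score the slots
            subs = [score(sub, partial_vault) for sub in content]
            s = sum(subs) / len(subs) if subs else 0.0
        partial_vault[name] = s         # local vaulting
        return s

    vault = {}
    gimme = ("GIMME", [("AGENCY", 1.0), ("MOTIVE", 1.0)])
    order = ("ORDER-EVENT", [gimme, ("x", 1.0), ("ORDER", 0.0)])
    print(score(order, vault))  # 0.666..., the ORDER verb is missing
    print(vault["GIMME"])       # 1.0, kept as a partial result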

Friday, May 19, 2017

Here is a joke: replace government with AI

It is a joke because governing is quite complicated and AI is quite incompetent. But you do see several articles a week about how "AI will revolutionize X"; so I bet you could get away with writing an Onion-style, tongue-in-cheek article where X=government.

Wednesday, May 17, 2017

Combinatoric complexity of NLP versus simplicity of Narwhal

Take an absolutely canned example of someone wanting to order a product named X. Here are some simple forms:

order X
I need to order X
I want to order X
we want X
please make me an X

This small variety already stresses the combinatoric, part-of-speech-based match algorithm, and never comes to grips with the concepts involved: AGENCY {I, me, we}; MOTIVE {need, want}; dull words {to, an, please}; the ORDER {order, make}; and the undefined object X. So in Narwhal (which doesn't actually support variables like X, but let's pretend and call it 'x') you write
GIMME = attribute(AGENCY,MOTIVE) 
followed by
event([GIMME], x, ORDER)
This gets a score of 1.0 on every example except "we want X". Since that sentence is missing the ORDER verb, it gets no score in the current implementation. One workaround is to add a narrative attribute(GIMME, x), which does get a 1.0.
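To spell the example out, here is a toy Python sketch. Only attribute() and event() mirror the lines above; the keyword lists, the pretend variable x, and the naive scorer (lowercased text, verb required for an event) are my own stand-ins, not real Narwhal code:

    AGENCY = ["i", "me", "we"]
    MOTIVE = ["need", "want"]
    ORDER = ["order", "make"]
    x = ["x"]  # the pretend product-name variable

    def attribute(a, b):
        return ("attribute", [a, b])

    def event(subject, obj, verb):
        return ("event", [subject[0], obj, verb])

    GIMME = attribute(AGENCY, MOTIVE)

    def score(nar, words):
        """Fraction of slots found; an event needs its verb."""
        kind, slots = nar
        hits = []
        for slot in slots:
            if isinstance(slot, tuple):  # a sub-narrative
                hits.append(score(slot, words) == 1.0)
            else:                        # a keyword list
                hits.append(any(w in words for w in slot))
        if kind == "event" and not hits[-1]:
            return 0.0  # missing ORDER verb: no score
        return sum(hits) / len(hits)

    print(score(event([GIMME], x, ORDER), "i want to order x".split()))  # 1.0
    print(score(event([GIMME], x, ORDER), "we want x".split()))          # 0.0
    print(score(attribute(GIMME, x), "we want x".split()))               # 1.0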
So, at the expense of putting every keyword list in the right place and thinking through the necessary narratives, the Narwhal programmer can accomplish a lot in a very few simple lines, lines that require them to actually understand the concepts being programmed as concepts, not as words.

If I were not such a lazy intellectual I would try to make this point precisely and publish it. After spending a week playing with AIML, I find that the majority of the programming effort goes into handling variations in the words that matter least. Quite literally, AIML is designed to spot the pattern of words around the key nouns in the input, so those same nouns can be substituted [without their meaning being important] into different patterns of words in the output. It is designed not to care about the meaning of the topic-defining words. Narwhal could not be more opposite in that regard: it is focused entirely on locating the important topic words while remaining as oblivious as possible to the varying pattern of irrelevant words around them.
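To illustrate the contrast in plain Python (not real AIML syntax, just the spirit of each approach):

    import re

    # AIML-style: match the pattern of words AROUND the key noun,
    # capture the noun as a wildcard, and reuse it without knowing
    # what it means. A new phrasing needs a new pattern.
    def aiml_style(text):
        m = re.match(r"i want to order (\w+)", text)
        return "One " + m.group(1) + " coming up!" if m else None

    # Narwhal-style: locate the meaningful topic words and ignore
    # the pattern of dull words around them.
    def narwhal_style(text):
        words = set(text.split())
        return bool(words & {"i", "me", "we"}
                    and words & {"need", "want"}
                    and words & {"order", "make"})

    print(aiml_style("i want to order x"))      # works
    print(aiml_style("we need to order x"))     # None: pattern breaks
    print(narwhal_style("we need to order x"))  # True: concepts found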