An unsearchable expression is an expression which, when entered into Google search, is always misinterpreted. Even when it is not completely unsearchable, if the answer you seek is several search pages deep it might as well be. My wife's name used to be pretty unsearchable. Such is the case when I search for discussions of the many interesting math problems associated with language and...I cannot even tell if such discussions exist, because the phrase "math about language" is (mis)interpreted as "language + math" and Google insists that what is relevant are discussions of math as a language - which is a poor interpretation of the throwaway word "about".
So I am somewhat sorry for Google. They appear to have no f*-ing clue about the difference between search and single-word association. Oops, another clothes-less emperor! On the other hand, perhaps 95% of the world is happy with single-word association.
Wednesday, May 31, 2017
Tuesday, May 30, 2017
The Mind's Eye - Who has the patent on voice-controlled VR?
There is no internal mental picture when looking at an external picture. So the key to getting into someone's mind's eye is to have them look at something external.
The "mind's eye" is a surprisingly unused term. So let me copyright its use in describing or naming a VR/AR display that is language controlled - so you see what you speak about. I wonder who has the patent on voice controlled graphics and VR/AR in general? If nobody does, then I am claiming it right here. Sure they have voice controlled navigation but not voice controlled definition and changes to the content of the scene.
Imagine that as I speak of faeries, they dance before my eyes.
The "mind's eye" is a surprisingly unused term. So let me copyright its use in describing or naming a VR/AR display that is language controlled - so you see what you speak about. I wonder who has the patent on voice controlled graphics and VR/AR in general? If nobody does, then I am claiming it right here. Sure they have voice controlled navigation but not voice controlled definition and changes to the content of the scene.
Imagine that as I speak of faeries they dance before my eyes?
Monday, May 29, 2017
Augmented Reality Design - using voice commands
It seems counterintuitive that it might be easier to do a 3D design with language than with 3D interactive tools. But as long as the object being designed has well-named parts and relationships, it might be easier to say something, like for an abutment: "make the core a little thicker"; "increase the mesial tilt"; or "make the lingual margin a little deeper".
So to go whole hog: imagine wearing AR goggles but using voice to design an object.
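As a toy illustration of how such commands might map onto a parameterized model - the part names, step size, and matching logic below are all hypothetical assumptions of mine, not any real CAD API:

    # Hypothetical sketch: nudging named design parameters from spoken phrases.
    abutment = {"core thickness": 1.0, "mesial tilt": 5.0, "lingual margin depth": 0.5}

    # adjective -> (parameter, direction); purely illustrative vocabulary
    MODIFIERS = {"thicker": ("core thickness", +1), "thinner": ("core thickness", -1),
                 "deeper": ("lingual margin depth", +1), "shallower": ("lingual margin depth", -1)}

    def apply_command(command, model, step=0.1):
        command = command.lower()
        if "increase" in command or "decrease" in command:
            sign = +1 if "increase" in command else -1
            for param in model:                  # e.g. "increase the mesial tilt"
                if param in command:
                    model[param] += sign * step
                    return
        for word, (param, sign) in MODIFIERS.items():
            if word in command:                  # e.g. "make the core a little thicker"
                model[param] += sign * step
                return

    apply_command("make the core a little thicker", abutment)
    apply_command("increase the mesial tilt", abutment)
    print(abutment)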
Negatives are a reversed polarity without sentiment
I was just thinking about a Narwhal app where there is no use of the exclusive '|' for VAR building. So all the VARs have exclusive = False and there is no sentiment involved. However, in processing sentences with NOT/BUT and such things, there will still be a sign value associated with a filled NAR. So in this case those controls behave more like booleans.
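A minimal sketch of that behavior, with class and method names that are my assumptions for illustration rather than the actual Narwhal code:

    # Illustrative only: a filled NAR carries a sign that controls flip, boolean-style.
    class FilledNAR:
        def __init__(self):
            self.sign = True          # polarity only; no sentiment scale attached

        def apply_control(self, token):
            if token.lower() in ("not", "but"):   # polarity-reversing controls
                self.sign = not self.sign

    nar = FilledNAR()
    for tok in "the part was not correct".split():
        nar.apply_control(tok)
    print(nar.sign)   # False: a single NOT flipped the polarity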
Sunday, May 28, 2017
Augmented Reality Dialog: Sharing a mental picture during a conversation
There is a really deep idea here but I only know it from an example.
Suppose a customer wants to talk to someone about an abutment design. They have the conversation, come to an agreement, and then share a sketch of the abutment. The design sketch does not update during the conversation; rather, the designer renders the sketch afterwards and turns it around as quickly as possible to show the customer.
Now suppose that rather than a designer, the customer is interacting with a chatbot that could perform the rendering automatically as the customer speaks, to confirm an understanding of what was said. Then this would be like the customer having a conversation (in this case with an automated assistant) and seeing or sharing the mental picture as part of the conversation.
With the possibilities of Augmented Reality (AR) and Virtual Reality (VR) these days, it is not too far a leap to imagine two people conversing this way and using headsets to display a shared concept formed during the conversation. Unlike a game, the view updates per the conversation, not per the actions and scripted scenario of a game. We'll call this Augmented Reality Conversation - as if it existed.
Saturday, May 27, 2017
Free Fall Coding
Bug-free design and good procedures such as unit testing, code reviews, and QA are considered good software development practice. But I want to say that sometimes it is more important to create the bugs than to avoid them. There is an even wilder approach to software development, which we can call free fall development. You do no regression testing because you want regression. And then, even worse, you operate with no version control. This allows you to totally break everything routinely and deal with the panic. Such shakeups are good for the organic and robust development of new ideas.
I don't seriously propose free fall development, but if you want a hip software team, they should incorporate some of its principles.
Tuesday, May 23, 2017
Monday, May 22, 2017
Hunting for rock piles everywhere else
I got good at looking for arrowheads here in the near-barren fields of Concord - a tough regime. So now I am able to find stone tools anywhere on the planet. In most places people do not know how to see such things, so you can find hand axes in the roadside debris.
But rock piles are something that I assume have much less global span than stone tools in general. The same principle holds: I have learned how to see something that most people do not know how to see. It leads to wondering: where else in the US are there rock piles? They could be pretty inconspicuous. Can they be found coast to coast?
Does each narrative structure support a fixed set of queries?
Suppose you have a reader with nars like X_/A or X->Y. After a read operation you ought to be able to query the reader in forms that correspond to the nature of those nars. Some examples:
If X->Y is in the reader you should be able to ask: where, when, how? For X_/A you should be able to ask about intensity or some such.
Update: Since a NAR has four sub-nars (thing, action, relation, and value) you should be able to say:
nar.Thing(), nar.Action(), nar.Relation(), or nar.Value(). Each of these is a query that should return the VARs that fill the identified slot. This gets confusing for nars with ORDER>1. Working on it...
Update June 2: Got it working nicely via a concept called "lastConst".
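A sketch of what that query surface might look like; the slot storage and method bodies are assumptions of mine for illustration, not the real Narwhal internals:

    # Hypothetical sketch of per-slot queries on a filled NAR.
    class NAR:
        def __init__(self, thing=None, action=None, relation=None, value=None):
            self._slots = {"thing": thing or [], "action": action or [],
                           "relation": relation or [], "value": value or []}

        def Thing(self):    return self._slots["thing"]      # VARs in the thing slot
        def Action(self):   return self._slots["action"]     # VARs in the action slot
        def Relation(self): return self._slots["relation"]   # VARs in the relation slot
        def Value(self):    return self._slots["value"]      # VARs in the value slot

    nar = NAR(thing=["noise"], action=["woke"], value=["loud"])
    print(nar.Thing(), nar.Value())   # query whichever slots the narrative supports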
Sunday, May 21, 2017
Scalable expectation
I was thinking that expectation might have an intensity scale. So I could be driving, and the story is driving, driving, driving or (get there)*, (get there)*, (get there)*, and it might be reasonable to consider this on a scale. The idea is that you could be a little impatient.
Then I was thinking about the concept of "home" and how being at "home" has the property that I can relax and stop thinking about how to change my location. If I am not home I am trying to get home. On a side track, I think that 'home' has a special place in narrative structure, like 'I'.
Slot Events, Short Commas, and the pursuit of the Golden Algorithm
The "Golden Algorithm" is the correct (but elusive) mechanism for filling N-segmented text, as I was discussing here. So I have been thinking harder about low level things and a couple of flaws in previous thinking are as follows:
- When a slot is to be filled and has already been filled, it is kind of an error condition; simply overwriting the slot and closing one's eyes to it must be wrong.
- When a NAR gets completed, it should be vaulted in association with the current segment index, not the index of the subsequent control.
- There is no mechanism for saying: enough time has gone by, let's vault if we have something.
As for the Golden Algorithm, it requires that slots of higher level narratives be scored according to the scores for the lower level narratives in the slot. That requires a lot more local vaulting of partial results and a different feel. So those new ideas are coming, along the way.
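A toy version of that scoring idea, where a higher-level slot's score is derived from the scores of the lower-level narratives vaulted into it - the structure here is assumed for illustration:

    # Illustrative only: propagate slot scores up a narrative hierarchy.
    def slot_score(slot):
        # a leaf slot was scored directly against the text
        if not slot["children"]:
            return slot["score"]
        # a higher-level slot averages the scores of its vaulted sub-narratives
        return sum(slot_score(c) for c in slot["children"]) / len(slot["children"])

    leaf_a = {"score": 1.0, "children": []}
    leaf_b = {"score": 0.5, "children": []}
    parent = {"score": 0.0, "children": [leaf_a, leaf_b]}
    print(slot_score(parent))   # 0.75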
Friday, May 19, 2017
Here is a joke: replace government with AI
It is a joke because governing is quite complicated and AI is quite incompetent. But you do see several articles a week about how "AI will revolutionize X"; so I bet you could get away with writing a tongue-in-cheek, Onion-style article where X=government.
Wednesday, May 17, 2017
Combinatoric complexity of NLP versus simplicity of Narwhal
Take an absolutely canned example of someone wanting to order a product named X. Here are some simple forms:
order X
I need to order X
I want to order X
we want X
please make me an X
This small variety already stresses out the combinatoric, part-of-speech-based match algorithm, and never comes to grips with the concepts involved: AGENCY {I, me, we}; MOTIVE {need, want}; dull words {to, an, please}; the ORDER verb {order, make}; and the undefined object X. So in Narwhal (which doesn't actually support variables like X, but let's pretend and call it 'x') you write
GIMME = attribute(AGENCY,MOTIVE)
followed by
event([GIMME], x, ORDER)
This gets a score of 1.0 on every example except for "we want X". Since this sentence is missing the ORDER verb, it gets no score under the current implementation. One workaround is to add a narrative attribute(GIMME, x), which does get a 1.0. So at the expense of every keyword list being in the right place and thinking through the necessary narratives, the Narwhal programmer can accomplish a lot in a very few simple lines that require them to actually understand the concepts being programmed - as concepts, not as words.
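For a self-contained feel of what the snippet above is doing, here is a crude proxy of mine for the narrative matching - not Narwhal's real scoring, just the containment checks the example relies on:

    # Crude proxy, not the Narwhal implementation:
    # event([GIMME], x, ORDER) needs an ORDER verb; attribute(GIMME, x) needs
    # an AGENCY word plus a MOTIVE word.
    AGENCY = {"i", "me", "we"}
    MOTIVE = {"need", "want"}
    ORDER = {"order", "make"}

    def matches_event(sentence):
        return bool(set(sentence.lower().split()) & ORDER)

    def matches_gimme(sentence):
        words = set(sentence.lower().split())
        return bool(words & AGENCY) and bool(words & MOTIVE)

    for s in ["order X", "I need to order X", "we want X"]:
        print(s, "| event:", matches_event(s), "| attribute:", matches_gimme(s))
    # "we want X" fails the event narrative (no ORDER verb) but is caught by
    # the attribute(GIMME, x) workaround.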
If I was not such a lazy intellectual I would try to make this point exactly and publish it. After spending a week playing with AIML, I find that the majority of the programming effort goes into handling the variations in the words that are the least important. Quite literally, AIML is designed to spot the pattern of words around the key nouns in the input, so those same nouns can be substituted [without their meaning being important] into different patterns of words in the output. It is designed to not care about the meaning of the topic-defining words. Narwhal could not be more opposite in that regard - it is focused entirely on locating important topic words while remaining as oblivious as possible to the varying pattern of irrelevant words around them.
Tuesday, May 16, 2017
Is there a STANDARD word tree?
Riffing on the previous post and some mulled ideas, imagine putting words into PyDictionary and getting back definitions, then taking words from the definitions and feeding them back - again - into PyDictionary. You'll get cycles and cascades and all the fun you could hope for - especially when you drop out words of high frequency (short words). Somehow the structure of the set of cycles you get, whatever the heck it is, embodies PyDictionary's concept of word meanings.
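A rough sketch of that feedback loop, assuming PyDictionary's meaning() call (which returns definitions keyed by part of speech); the stopword handling and cutoff are simplifications of mine:

    # Sketch: feed definition words back into PyDictionary to expose cycles.
    from PyDictionary import PyDictionary

    dictionary = PyDictionary()
    STOP = {"that", "with", "from", "this", "which"}   # plus other high-frequency words

    def definition_words(word):
        meanings = dictionary.meaning(word) or {}
        words = set()
        for defs in meanings.values():
            for d in defs:
                # drop short, high-frequency words as suggested above
                words.update(w for w in d.lower().split() if len(w) > 3 and w not in STOP)
        return words

    frontier, seen = {"sound"}, {"sound"}
    for _ in range(2):   # two rounds already start closing cycles
        frontier = {w for word in frontier for w in definition_words(word)} - seen
        seen |= frontier
    print(len(seen), "words reached; any word that re-derives an earlier one closes a cycle")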
What I want to know is how to turn a cycle diagram like that into an organized tree of related meanings.
But it is not just the building of it. Suppose you already had such a tree, or a little piece of it: could it be a standard? Would people agree? It would be a meaning standard.
Guess the topic tree structure from a sentence with its keywords
I am browsing someone's GitHub project (https://github.com/natsheh/sensim), which is about a metric of similarity between sentences, and am thinking: how would I do this calculation?
Since Narwhal is about goodness of fit between a narrative and a sentence, it is tempting to calculate a distance between sentences by regarding one of them as a narrative and the other as a sentence (the answer could be different if you reversed the roles of the two). But what is missing here is how one reconstructs a topic tree that encompasses the 'narrative' sentence.
Or maybe a better question is how to build a tree from a whole corpus of sentences. So go like this: find the most unique words, look them up in PyDictionary, and get their synonyms. Now go discover some corpus that is rich in these same words and their synonyms. So: given two lists of synonyms A and B, and a cloud of sentences enriched with all the synonyms [not just A and B], how would you know when to put B below A in the tree?
The example is: "loud" implies a sound; and "noise" implies a sound. So if "loud", "noise", and "sound" are in synonym lists, then "loud" and "noise" should be below "sound". Can this be deduced automatically somehow?
You might ask: is there anything out there in 'reality' that guarantees these words should be in this relationship? I think the answer must be "no" since they are words. I cannot see how you would construe the relation between "loud" and "sound" as factual. But it sure does a good masquerade of it.
Monday, May 15, 2017
OS vulnerabilities are unnecessary and are just a money-saving strategy for OS vendors
A lot of huffing and puffing about the "WannaCry" virus attacking the world reminds me that the whole computer 'virus' thing is based on making an OS deliberately vulnerable so it can receive cheap upgrades. In fact, OSs do not need to be soft-writable; they could be burnt into silicon and be as invulnerable as a light bulb. The problem is that Microsoft is addicted to a plastic, writable operating system, so it can roll out upgrades at little to no cost. They fix things when they get around to them and roll out a patch - business as usual.
Here is the thing: if PC motherboard architects wanted to, they could design a system that was largely constant and which only allowed writable memory in constrained "sandbox" areas. All files could be recoverable at all times. The question is: why don't they? My guess: no business case for it, and a lot of conventional thinking. A smart computer scientist could solve this problem.
I think OS upgrades should be delivered the way Kodak "Brownie" flash bulbs were delivered: in packages of several disposable bulbs per package. Unscrew the old OS and plug a new one into the socket. Meanwhile the only vulnerable part of the computer would be a file repository that you could lose and not care about, while routinely backing it up.
As I wrote (somewhere) a "flash bulb" strategy for a disposable OS is quite financially problematic for companies like Microsoft and Apple; as the "light bulb socket" would require an API spec that eliminated the monopolies these companies enjoy.
Sunday, May 14, 2017
Friday, May 12, 2017
My learning stages for language processing
It is so idiosyncratic (not counting my math background):
1. Did the automated reading project, the "note reader", for customer order notes. [C++]
2. Built a noise reader at home, re-implementing ideas from work. [C++ and Python]
3. Created Narwhal - the long-term project that should contain it all. [Python]
4. Started designing the "naomi" chatbot to accomplish more than the note reader: have to have language responses; cannot bail with an "alarm" category...or much less often; have to know when enough has been said to proceed; have to alter responses on the way through; etc. [Python and maybe - with a colleague- Java Script.]
We'll see how it goes with my learning curve. At my age, all learning curves are races with senility.