stepping out of character:
Definition: Prosperity is the flow of dollars accompanied by the [reverse] flow of goods and services.
One dollar moving fast enough could keep us all rich. Unfortunately the government cannot create prosperity. But it can create dollars. Creating dollars and giving them to a bank has no effect on prosperity. Instead, the government should use the creation and gift of dollars as a pump/vacuum to stoke the exchange of goods and services directly. A simple way to do that is to go on a buying spree.
Update: if money is a policed exchange rate for goods and services, then without actual goods and services being exchanged there is no functional involvement of money [or say the money has no "traction"]. With a government failing to police "traction", real money is replaced with some kind of poker chip that is only good temporarily, in some casino.
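For reference, the textbook equation of exchange says the "one dollar moving fast enough" point in symbols:

$$ M V = P T $$

where M is the money stock, V its velocity of circulation, P the price level, and T the real volume of goods and services traded. Dollars created and then parked at a bank have V near zero, so they add nothing to the flow PT; a buying spree raises V and T directly.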
Saturday, March 30, 2013
Saturday, March 23, 2013
Personal Internet Agents
[For more on the topic of "itelligence", see here.]
I can't imagine that the internet will be the same in 20 years, or that today's big players - the Amazons, Twitters, Facebooks, travel planners, games, search engines, etc. - will remain the big players of tomorrow. If I had anything to do with it, it would be to return identity to people and fight the broader trend of people becoming sub-functions of corporations. I am a DFH (dirty f'ing hippie) and proud of it.
At the root of these things is the concept of personal profiles, which define the internet presence of a person from the point of view of some vendor. Let's hope for a future where people control their own profiles. If that were to work out, the big players of the internet would still be there but would be resources for people, not the other way around. To facilitate that we will need some form of personal robotic agent that serves as a virtual person when the real person is doing something else. I imagine the personal robot is my online surrogate, but it can only process the information I teach it to process. From my point of view it is a virtual friend - a chat bot. It can interact with other personal agents and, in some circumstances, a virtual relationship can become a real one.
This technology will be enabled by automatic text understanding. [What I find on the internet is called "meaning recognition".]
Update: Call it the "internet of words".
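To make the surrogate idea concrete, here is a minimal Python sketch of a user-owned agent. The class, its profile format, and the teach/answer methods are hypothetical illustrations of the idea, not an existing API.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalAgent:
    """A user-owned surrogate: it can only process what its owner taught it."""
    owner: str
    profile: dict = field(default_factory=dict)  # held by the user, not by a vendor
    lessons: dict = field(default_factory=dict)  # question -> taught response

    def teach(self, question: str, answer: str) -> None:
        """The owner explicitly teaches the agent how to respond."""
        self.lessons[question.lower()] = answer

    def answer(self, question: str) -> str:
        """Respond on the owner's behalf; decline anything never taught."""
        return self.lessons.get(question.lower(), "My owner hasn't taught me that.")

    def share_profile(self, fields: list) -> dict:
        """Release only the profile fields the owner chooses to expose."""
        return {k: v for k, v in self.profile.items() if k in fields}

# Two agents interacting while their owners do something else.
alice = PersonalAgent("alice", profile={"city": "Boston", "email": "hidden"})
alice.teach("free this weekend?", "Saturday afternoon works.")
bob = PersonalAgent("bob")
print(bob.answer("free this weekend?"))    # never taught -> polite refusal
print(alice.answer("free this weekend?"))  # taught -> answers as surrogate
print(alice.share_profile(["city"]))       # user-controlled profiling
```

The share_profile method is the whole game here: the profile lives with the user, and a vendor gets only what the user releases.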
Shoehorning semantics into a "best model" framework
I'll just ramble on here for a second and emphasize that the method of best models (see here) is valuable because of the way it divides the data into variant and invariant portions. [It is necessary for geometry and, I warrant, for everything else as well.] For shape congruence, position is considered a variant - to be measured away and then ignored. For shape similarity, the size is considered a variant - to be measured away and then ignored. As we strip away the variants, the invariants are left to define the true nature of the object.
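As a concrete illustration of "measure away the variant, keep the invariant", here is a small Python sketch that normalizes a 2-D shape: position is measured as the centroid and removed, size is measured as the RMS radius and removed, and what remains is the invariant form in which similar shapes coincide. The normalization recipe is standard geometry; reading the steps as best-model measurements is the gloss this post suggests.

```python
import math

def measure_away_variants(points):
    """Strip position and size from a shape; return (position, size, invariant form)."""
    n = len(points)
    # Measure the variant "position" as the centroid, then remove it.
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    centered = [(x - cx, y - cy) for x, y in points]
    # Measure the variant "size" as the RMS distance from the centroid, then remove it.
    size = math.sqrt(sum(x * x + y * y for x, y in centered) / n)
    normalized = [(x / size, y / size) for x, y in centered]
    return (cx, cy), size, normalized

# Two similar triangles: same shape, different position and size.
small = [(0, 0), (2, 0), (0, 1)]
big = [(10, 10), (14, 10), (10, 12)]
_, _, inv_small = measure_away_variants(small)
_, _, inv_big = measure_away_variants(big)
# Once the variants are stripped, the invariant coordinates agree.
assert all(math.isclose(a, b) and math.isclose(c, d)
           for (a, c), (b, d) in zip(inv_small, inv_big))
```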
The same needs to be part of the thinking about language and sentence structure. What are the variants and invariants? I think they change while we are reading. As I think about it, it is the hierarchical, sequential, and essentially dynamic nature of best models reasoning that comes into play here. So much linguistic analysis begins only after a sentence has been read that perhaps it throws the baby out with the bathwater by ignoring the sequential and chronological nature of text understanding.
All I have at the moment is a sense that initial word meanings are measurements. They are used to create a framework (an ideal sentence) for measuring later words. So the meaning of a sentence is to be broken up into sentence structure and semantic structure, in a way that is independent of the actual contents of the words. Sure, I am not going to dodge an issue that has been around since Plato, but it seems like "meaning" comes and goes in the chronology of understanding.
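A toy version of that: the sketch below reads a sentence word by word, and each word both gets measured against the current frame and resets the frame the next word will be measured against. The tiny lexicon and expectation table are invented purely for illustration.

```python
# Toy incremental reader: early words set up a frame ("ideal sentence")
# that later words are measured against.
EXPECTATIONS = {
    "the": ["noun"],
    "dog": ["verb"],
    "chased": ["determiner", "noun"],
}
CATEGORY = {"the": "determiner", "dog": "noun", "cat": "noun", "chased": "verb"}

def read_incrementally(sentence):
    expected = []  # the current frame: what the next word is measured against
    for word in sentence.split():
        fit = CATEGORY.get(word, "?") in expected if expected else True
        print(f"{word!r}: expected {expected or 'anything'}, fit={fit}")
        # The word just read becomes a measurement that resets the frame.
        expected = EXPECTATIONS.get(word, [])

read_incrementally("the dog chased the cat")
```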
Here is a (radical?) idea: we give language way too much credit for having subtle structure, when it relies mostly on the pre-existing structure of the world. Bertrand Russell, as a young man in Principles of Mathematics (not the older Russell of Principia Mathematica), was puzzled by the possible ambiguity of the words "or" and "and". I find he gave up too quickly in the rush for mathematical simplification. Actually the nature of "and" and "or" is the same: they are both juxtapositions, which vary by the nature of the things juxtaposed (juxtaposed adjectives will merge; disjoint objects will not) and by how we are interacting with the collection formed by the juxtaposition (take one from / take all of).
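A sketch of that reading of the connectives: "and" and "or" build the same juxtaposed collection and differ only in how we interact with it (take all of it vs. take one from it). The functions are my own illustration, not a claim about any formal system.

```python
def juxtapose(*items):
    """Both connectives build the same collection."""
    return list(items)

def take_all(collection):
    """The 'and' interaction: engage with every member."""
    return collection

def take_one(collection):
    """The 'or' interaction: engage with a single member."""
    return collection[0]

drinks = juxtapose("tea", "coffee")
print(take_all(drinks))  # "tea and coffee" -> ['tea', 'coffee']
print(take_one(drinks))  # "tea or coffee"  -> 'tea'
```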
Another example: in an introductory symbolic logic course at BU, I balked at the statement that "but" and "and" have the same meaning. It took years to realize that "but" negates an unstated, implicit phrase and is used to alert the reader/listener to suspend their semantic expectations. For example, "there was snow on the ground but he went out barefoot" includes an unstated expectation that snow implies reasonable footwear. Negating that expectation is the purpose of the "but".
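Here is a minimal sketch of "but" as and-plus-cancelled-expectation: it asserts both clauses and additionally retracts an implicit expectation raised by the first. The expectation table is a made-up stand-in for real world knowledge.

```python
# "X but Y" = "X and Y" plus the retraction of an implicit expectation
# raised by X. The table stands in for world knowledge.
EXPECTATIONS = {
    "snow on the ground": "he wears reasonable footwear",
}

def interpret_and(x, y):
    return {"asserted": [x, y], "retracted": []}

def interpret_but(x, y):
    reading = interpret_and(x, y)
    implicit = EXPECTATIONS.get(x)
    if implicit:
        # "but" alerts the listener: suspend this expectation.
        reading["retracted"].append(implicit)
    return reading

print(interpret_but("snow on the ground", "he went out barefoot"))
# {'asserted': [...], 'retracted': ['he wears reasonable footwear']}
```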
Linguistics and a more correct symbolic logic must include the implicit statements that make up the semantic context of explicit language. So buckle your seat belts, laddies, this trip says: Boolean logic is wrong and will never come close to imitating humans. It needs to be rewritten from the ground up to include the implicit and (I hope) a standardized use of best models.
A proto-language with primitive semantics
I wrote a paper about best model reasoning that ends with the sentence:
"...the main limitation in our ability to program an artificial intelligence will be our ability to parameterize ideal objects in the world that intelligence is supposed to encounter.".
So now, moving on to the obvious questions about how language and semantics relate to the whole business of "best models", one needs to pause and ask: what world is the automatic language processor supposed to encounter? And how, in that world, are idealized objects parametrized? Well, I don't know yet, but I think it is worth thinking about a world of language structure populated both by natural text and idealized text, where the idealized text needs to be parametrized somehow. I am looking for "pre-narrative" semantic structures - kind of atomic stories, kind of a semantic meta-language. The goal is a natural and explicit meaning structure, not mathematical tidiness, that can be used to analyze and translate other language. Here is what I've got so far:
Nouns
- person (me; or things I lend me-ness to)
- thing
- place
There are single-word sentences, like "thing", but let us ignore that for now.
Note that person can become thing, almost routinely. Thing can become "personified" but it is less common. This does not need to be explicit for now because the syntactic context is unambiguous.
Verbs
There are semantic structures of the form
person go place
person want thing
person assigns value to thing
person see thing
thing in place
place contains thing
thing causes thing
thing acts on thing
Caveat: I am not sure if other verbs are needed for person. Do we need a "make" verb?
Caveat: it is a bit artificial to construe a "person acting on thing" as something that requires the "person" first to be turned into a "thing" for the purpose of the analysis. I am taking the view that it is only the things which express personhood (indicators of what my son calls "agency") that require the meaning of a person. Otherwise the person functions semantically like a thing.
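To see the inventory as data, here is one possible Python encoding of the primitive nouns and verb frames. Only the noun kinds and the verb list come from the tables above; the type names and the frame check are my own choices.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):          # the three primitive noun types
    PERSON = "person"
    THING = "thing"
    PLACE = "place"

@dataclass(frozen=True)
class Noun:
    name: str
    kind: Kind

# The primitive verb frames: (subject kind, verb, object kind).
FRAMES = [
    (Kind.PERSON, "go", Kind.PLACE),
    (Kind.PERSON, "want", Kind.THING),
    (Kind.PERSON, "assigns value to", Kind.THING),
    (Kind.PERSON, "see", Kind.THING),
    (Kind.THING, "in", Kind.PLACE),
    (Kind.PLACE, "contains", Kind.THING),
    (Kind.THING, "causes", Kind.THING),
    (Kind.THING, "acts on", Kind.THING),
]

def well_formed(subj: Noun, verb: str, obj: Noun) -> bool:
    """Check a statement against the primitive frames."""
    return (subj.kind, verb, obj.kind) in FRAMES

grunk = Noun("Grunk", Kind.PERSON)
shower = Noun("shower", Kind.PLACE)
print(well_formed(grunk, "go", shower))   # True
print(well_formed(shower, "want", grunk)) # False: places don't want
```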
Adjectives
There are semantic structures of the form
thing has attribute
thing has thing (pretty similar to: place has thing)
Structural elements
( ) groups compound entities into single ones, treated as nouns
[ ] expresses implicit or optional parts of the structure (this helps us be explicit)
:: transformation
, sequence
+ binding (an abbreviation of multiple "has", "in" or "contains" statements, or more?)
Allowed syntax
X with attribute Y
[ ] with attribute Y (for example the sentence "hot!")
X acts on Y, [Y::Z]
X wants Z, [X acts on Y, [Y::Z]]
(H) + X (can also be written (H+X))
(H), X (can also be written (H, X))
X :: [X]+Y
other?????
Examples
Original text: Grunk takes ball into shower
Structure: ("Grunk"+"ball") go "shower" (an example of personification)
Original text: Grunk has ball in shower
Structure ("Grunk"+"ball") in "shower"
Original text: Grunk throws ball into shower
Structure: "Grunk" acts on "ball", "shower":: ("shower"+"ball")
Reverse examples. It is not the intention of this pre-narrative to be taken and used as a language, or as a playground for mathematical reasoning. The important point remains: how does something like this work in the context of best models? What are the measurements, ideal objects, and classifications to be made? But, as a dry exercise, let's look at some examples of syntax that may or may not be meaningful in the pre-narrative language.
What is the difference between (X::Y) and [X::Y]? The first expresses a transformation that has become a noun. The second is an implicit transformation - not at all the same thing. Can transformations become nouns? It seems there are examples in English: "I watched my son's growth with amazement."
"I" see ("son"::(["son"]+"growth")), "I" :: (["I"]+"amazed")
This is a start. It is easy to get distracted by the details of this proto-language and to forget its actual purpose - which is to provide an example of parametrized ideal sentence structure.
Update: This is really a proto-syntax
Update II: NO, it is a proto-semantics.
"...the main limitation in our ability to program an artificial intelligence will be our ability to parameterize ideal objects in the world that intelligence is supposed to encounter.".
So now, moving on to the obvious questions about how language and semantics relate to the whole business of "best models", one needs to pause and ask: what world is the automatic language processor supposed to encounter? And how, in that world, are idealized objects parametrized? Well, I don't know yet but I think it is worth thinking about a world of language structure populated both by natural text and idealized text, where the idealized text needs to be parametrized somehow. I am looking for "pre-narrative" semantic structures, kind of atomic stories, kind of a semantic meta language. The goal is natural and explicit meanings structure, not mathematical tidiness, that can be used to analyze and translate other language. Here is what I've got so far :
Nouns
- person (me; or things I lend me-ness to)
- thing
- place
There are single word sentences like thing but let us ignore that for now.
Note that person can become thing, almost routinely. Thing can become "personified" but it is less common. This does not need to be explicit for now because the syntactic context is unambiguous.
Verbs
There are semantic structures of the form
person go place
person want thing
person assigns value to thing
person see thing
thing in place
place contains thing
thing causes thing
thing acts on thing
Caveat: I am not sure if other verbs are needed for person. Do we need a "make" verb?
Caveat: it is a bit artificial to construe a "person acting on thing" as something that requires the "person" to first be turned into a "thing" for the purpose of the analysis. I am taking the view that it is only the things which express personhood (indicators or what my son calls "agency") that require the meaning of a person. Otherwise the person functions semantically like a thing.
Adjectives
There are semantic structures of the form
thing has attribute
thing has thing (pretty similar to: place has thing)
Structural elements
( ) groups compound entities into single ones, treated as nouns
[ ] expresses implicit or optional parts of the structure (this helps be explicit)
:: transformation
, sequence
+ binding, (an abbreviation of multiple "has", "in" or "contains" statements, or more??]
Allowed syntax
X with attribute Y
[ ] with attribute Y (for example the sentence "hot!")
X acts on Y, [Y::Z]
X wants Z, [ X acts on Y, [Y::Z ]
(H ) + X (can also be written (H+X)
(H), X (can also be written (H , X)
X :: [X]+Y
other?????
Examples
Orginal text: Grunk takes ball into shower
Structure: ("Grunk"+"ball") go "shower" (an example of personification)
Original text: Grunk has ball in shower
Structure ("Grunk"+"ball") in "shower"
Original text: Grunk throws ball into shower
Structure: "Grunk" acts on "ball", "shower":: ("shower"+"ball")
Reverse examples. It is not the intention of this pre-narrative to be taken and used as a language, or as a playground for mathematical reasoning. The important point remains: How does something like this work in the context of best models. What are the measurements, ideal objects, and classifications to be made? But, as a dry exercise lets look at some examples of syntax that may or may not be meaninful in the pre-narrative language.
What is the difference between (X::Y) and [X::Y] ?? The first expresses a transformation become a noun. The second is an implicit transformation - not at all the same thing. Can transfomations become nouns? Seem like there are example in English: "I watched my sons growth with amazement"
"I" see ("son"::(["son"]+"growth")), "I" :: (["I"]+"amazed")
This is a start. It is easy to get distracted with the details of this proto-lanuage, and to forget its actual purpose - which is to provide an example of parametrized ideal sentences structure.
Update: This is really a proto-syntax
Update II: NO it is a proto semantics.
Wednesday, March 13, 2013
When the internet becomes intelligent
OK, now that I've got all this cool technology, let's think about a time when the internet becomes intelligent - or supports intelligent life in some way. The idea of one huge intelligence is repulsive to me, so I'd rather plan for a future with lots of different intelligent organisms vying for attention. So, how to build an intelligent organism that lives on the internet and makes money for my family?
Update: Maybe a spec like this:
An internet intelligence... an intertelligence... the "itelligent agent".
- a url that expresses a person
- has public, private, and semi-public "dna".
- expresses user settings in a profound way
(Meaning things like: the user can invent their own types of settings, like UI widgets, using public or private h files, and ultimately what their agent does is up to them.)
- has a universal API for communication with other itelligent agents (see Microsoft's IPerson interface).
- can go shopping, do a search, or plan a trip
- can find friends
- can push information at the user
- can be taught things by the user, naturally and correctly
- gives the user total control of their own profiling.
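As a sketch of what that spec could look like as an interface, here is a hypothetical Python rendering. The method names simply mirror the bullet list above; none of this is an existing API.

```python
from abc import ABC, abstractmethod

class ItelligentAgent(ABC):
    """Hypothetical interface mirroring the spec above: a URL-addressable
    agent whose profile is controlled entirely by its user."""

    url: str  # a url that expresses a person

    @abstractmethod
    def dna(self, visibility: str) -> dict:
        """Return the agent's 'dna' for visibility in {'public', 'semi-public', 'private'}."""

    @abstractmethod
    def communicate(self, other: "ItelligentAgent", message: str) -> str:
        """Universal agent-to-agent communication."""

    @abstractmethod
    def shop(self, query: str) -> list: ...

    @abstractmethod
    def search(self, query: str) -> list: ...

    @abstractmethod
    def plan_trip(self, destination: str) -> dict: ...

    @abstractmethod
    def find_friends(self) -> list: ...

    @abstractmethod
    def push_to_user(self, info: str) -> None: ...

    @abstractmethod
    def learn_from_user(self, lesson: str) -> None: ...

    @abstractmethod
    def export_profile(self) -> dict:
        """Total user control: the profile lives with the user, not a vendor."""
```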