Friday, December 12, 2014
I take comfort in the idea that the language I speak goes all the way back to the beginning of language, just like my genome has that sort of continuity with the past. When I think about it, though, my genome did not follow the same path as my language: I don't speak Russian, but I do speak French. English is my native language but not my native genome.
Update: and every one of my ancestors survived!
Tuesday, December 9, 2014
I was in the right place at the right time for moving frames
[A brief autobiographical note] I had the great good fortune to fall under the influence of the theory of moving frames that was brought to light by Frenet, Darboux, Cartan, Weyl, and others. My thesis adviser William Pohl was a student of S.S. Chern, and Pohl claimed Chern was a student of Cartan. (This is not what the math "genealogy" says, so perhaps it is wrong, or perhaps the relationship was different.)
I was always enamoured of the logarithm, especially the complex logarithm, and I used to see how to define it using cut-and-paste methods that also work to generate other things, like the Hopf bundle (which is a circle bundle over the torus, generated by puncturing the torus and sewing a circle of circles into the wound, using the circle as a group). It is especially nice to see these objects in terms of moving frames and sections of fiber bundles.
I got lucky and was perhaps first to notice that anatomical descriptions could be done in terms of anatomical frame standards, and I was definitely lucky to be at a job where I invented a useful fiber bundle and was not able to publish it - which forced me to think of it in the abstract and develop a general form of classification: as a choice of sections of a fiber bundle induced by measurement of variants. To be able to shoehorn Berkeley's "esse est percipi" into the same mathematics as the logarithm is pretty sweet.
I just wish someone could understand what I am talking about. I am trying hard to write up "Best Model Classification" clearly. I hope to send it out by Christmas and let's hope it works out.
Update: 3 journal rejections later I am trying again and ready to try once more after that.
Sunday, December 7, 2014
More about false positives
This feels like a similar topic to the previous post: If someone asks me if such-and-such is correct and I can only answer in terms of good or bad (e.g. my mouth is full and I can give a "thumbs up" but cannot say "correct") then I say "good" as a substitute for "correct". The good/bad axis and the correct/incorrect axis are neither orthogonal nor co-linear.
Saturday, December 6, 2014
Was that a bluebird? Ignoring observations during classification
I have a little story that illustrates possible approaches to handling false positives in a classification system with limited alternative categories.
Driving in Bedford, a small bird with a rust-red underbelly flies past the car and I get a brief (.3 sec?) glimpse.
Analysis: It was too small to be a bluebird, and there are no birds that small around here with rust-red breasts.
- If the bird had been a bit bigger it could have been a bluebird or, even bigger, a robin.
- If it wasn't really red then it could have been a Junco or a sparrow.
- If it was an accidental European bird, it might be one of those.
- If I misjudged the size as well as the position of the red color, it could have been a Towhee or Catbird.
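Here is a minimal sketch of this kind of limited-category classifier, with an explicit "no match" outcome (the sizes and colors are rough illustrative values of my own, not field-guide data):

# Toy classifier with a fixed set of alternative categories and an
# explicit 'no match' outcome; feature values are made-up illustrations.
CANDIDATES = {
    'bluebird': (7.0, 'rust-red'),   # (size in inches, breast color)
    'robin':    (10.0, 'rust-red'),
    'junco':    (5.5, 'white'),
    'sparrow':  (6.0, 'brown'),
}

def classify(size, color, size_tol=1.0):
    matches = [name for name, (s, c) in CANDIDATES.items()
               if abs(s - size) <= size_tol and c == color]
    # no match: either reject the observation or loosen a tolerance
    return matches or ['no known bird - re-examine the observation']

print(classify(5.0, 'rust-red'))   # too small for a bluebird -> no match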
Monday, November 3, 2014
Who let the algebraists write the Wikipedia geometry definitions???
It must take a certain type of personality to slave for hours over a hot keyboard to give Wikipedia and the world a clear definition of some mathematical term - hard to format and complicated. But apparently this personality type tends towards the "French" formalism of - what was it? ...yes! - the Erlanger program [OOPS! THAT'S NOT IT - try Bourbaki]. I see little effort made in Wikipedia to make simple math ideas accessible, simply.
I am reading Wikipedia for their definition of "fiber bundle" and it seems to me it used to be a bit more comprehensible. Then I look up "double fibration" and hit a bunch of stuff about Twistor space, like:
$\Omega(x) = \omega^A - i\, x^{AA'} \pi_{A'}$
and I think - geez! what happened to the geometry? I mean pictures. In general, Wikipedia mathematical definitions are abundantly formal, often failing to give the common-sense meaning. For example, do they show the picture for Lagrange multipliers? Actually they do, but not the right picture.
Lagrange multipliers find where the optimum-valued isotherm touches the constraint for the last time. The key idea is that you can look for where the red and green vectors coincide.
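In symbols (a standard formulation, mine rather than Wikipedia's): at that last touching point the isotherm, a level curve of the objective f, is tangent to the constraint curve g(x) = c, so the two gradient vectors are parallel:

$$\nabla f(x) = \lambda\, \nabla g(x), \qquad g(x) = c$$

where the multiplier $\lambda$ is just the ratio of the lengths of the two gradients.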
Saturday, October 25, 2014
Continuous Croissants
My croissant recipe - derived from Julia Child's video - is no big deal. What is fun is that, although the process is at least 8 hours long and involves multiple repeated steps, there are many opportunities to pause the sequence by putting the dough in the fridge or the freezer. As a result you can start a batch at almost any time of day and, with a bit of planning, have them be fresh baked at any time of day as well. Since I love to eat croissants and am enjoying baking them, I could easily go off my diet and have non-stop heartburn.
Note: croissants are basically bread, with two risings. The dough includes milk, sugar, and oil [which is a bit different from bread] and there is a whole bunch of messing around with butter and rolling pins between the two risings, but it is still the same basic idea - just a variation.
Recipe:
MIXING and FIRST RISING
Take out a big bowl and a small one.
Put 1 tsp yeast [or use 1/2 tsp and double the rise times below] in the small bowl and add 1/2 cup of warm milk, 1/4 cup warm water, 1 tsp sugar.
While you have the sugar and tablespoon in hand, start adding to the big bowl:
Put 1 tsp sugar in the big bowl, add 1 tsp salt, 3-4 tbs of cooking oil, 2 cups of flour.
[I used King Arthur, tried mixing in some pastry flour - which was a disaster - and got good results with Gold flour. Am now trying a mix of King Arthur and Gold.]
Add the small bowl contents to the big bowl. Mix thoroughly until it is all one lump of dough stuck to the fork (or mixer). Add liquid/oil if it is too crumbly to form a single lump. Now knead the heck out of it: maybe three minutes by hand until it becomes silky. Let this dough rise at room temperature until the ball is about 2 times as wide. Takes a bit less than 2 hours at 70F room temperature.
After this rising, punch it down and put aside in the fridge, briefly, while you soften the butter.
SOFTEN BUTTER
[Following Julia C]
Take one stick of unsalted butter out of the fridge and hammer it out with a rolling pin, then use the heel of your hand to squash down the bumps and warm it slightly. Scrape it together a few times. Use the heel of your hand a second time... you kind of want the butter and dough to be equally soft, maybe the butter a little harder than the dough. Scrape it together into a 5"x4" rectangle.
Key points: kneading enough, and softening enough - with no lumps in the butter.
FOLDING and ROLLING
Take dough from fridge, flatten into a 14" diameter circle. Put the butter rectangle in the middle and fold up the edges (without pulling on them) to make an envelope around the butter. Pinch the seams. Roll carefully into 14"x7" rectangle, turning it over a few times, and fold the long dimension in 3. Then turn the dough 90 degrees clockwise and, in the direction that was previously the 7" width, roll it out again into 14"x7" rectangle, fold in 3 again. That is two "turns".
Let dough rest in fridge for 2 hours. Then repeat the rolling and folding two more times. That is two more "turns". Now let dough rest another 2 hours in fridge.
SECOND RISING AND BAKING
Dough is now ready to be rolled out 1/8" thick. Following Julia, I cut the dough in half and put one half in the fridge while rolling out the other. Cut that half again and put a quarter in the fridge while rolling out the other quarter to 1/8" thickness. While working with a small amount of dough, the rest remains cool.
At 1/8" thickness, cut into triangles, roll them up and bend into crescents. When rolling dough into a tube towards the point of the triangle, stretch the dough a little in the direction you are rolling.
Put on a baking sheet and allow to rise again for 2-3 hours. When ready, they become jiggly.
Heat the oven [note: you can use a hotter oven without moisture, or a cooler one with more moisture. I have settled on 450F with a small splash of water on the coil, to make steam].
Bake for 9 minutes. Just before they are ready, they smell good. If you are burning the bottoms you can start to smell it - so keep a nose out for these things.
Let cool. They are best about 1/2 hour later.
WHERE TO PAUSE:
- any time the dough is frozen it stops the sequence. It resumes when the dough is back at fridge temperature.
- any time it is rising you can slow it down, beyond 2 hours, by putting it in the fridge. For example, form the croissants before bed, put them in the fridge, and use a warm oven to accelerate the rising in the morning. You can have 'em for breakfast.
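For planning, here is a back-of-envelope scheduler built from the step times stated above (the step breakdown, and the padding for shaping and oven heat-up, are my own rough assumptions):

# Back-compute when to start mixing so the croissants are fresh at a
# chosen hour; durations in minutes, taken from the recipe above.
from datetime import datetime, timedelta

STEPS = [
    ('mix and first rise',        120),  # a bit less than 2 hours at 70F
    ('soften butter, 2 turns',     30),  # assumed working time
    ('fridge rest',               120),
    ('2 more turns',               15),  # assumed working time
    ('fridge rest',               120),
    ('roll out, cut, shape',       30),  # assumed working time
    ('second rise',               150),  # 2-3 hours
    ('bake and cool',              45),  # 9 min bake + ~1/2 hour cooling
]

def start_time(serve_at):
    total = sum(minutes for _, minutes in STEPS)
    return serve_at - timedelta(minutes=total)

serve = datetime(2014, 10, 26, 8, 0)          # fresh at 8am
print('start mixing at', start_time(serve))   # about 10.5 hours earlier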
Saturday, September 27, 2014
Form follows content
The language theory is that meanings are in your head and find expression in what is said and what is heard. In other words, the meaning content of an expression is used to parse and articulate the form and structure of the expression.
Having a "theory" doesn't mean you know all the answers. I am implementing a program that allows meaning to evolve on the way through a sentence and I can see that there would be different approaches to implement such a thing - some more atomic than others.
Having a "theory" doesn't mean you know all the answers. I am implementing a program that allows meaning to evolve on the way through a sentence and I can see that there would be different approaches to implement such a thing - some more atomic than others.
Sunday, August 10, 2014
The black hole at the birth of the universe
http://phys.org/news/2014-08-black-hole-birth-universe.html
I wrote about something like this in a college term paper in cosmology - a generalization of the donut shape in 4D.
Friday, August 1, 2014
Traffic Equations
If you are like me, during a long drive you'll drift into thinking about traffic. It seems to me there is an important function that relates the speed and density of the cars: the minimum safe distance min_safe_dist(s) for a speed s. Below this distance you cannot stop in time, so the average distance between cars is >= min_safe_dist(s). If we think of density as the reciprocal of average distance, then (a toy calculation follows the list):
- (1/traffic density) >= min_safe_dist(s)
- higher density lanes tend to merge into lower density ones
- as many cars enter an interchange as leave it.
- people usually drive as fast as they can
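Putting the first bullet to work (the stopping-distance form and constants below are my own illustrative assumptions, not measurements): flow past a point is speed divided by headway, headway is at least min_safe_dist(s) plus a car length, and you can scan for the flow-maximizing speed.

# Toy flow model: flow = speed / (min safe distance + car length).
def min_safe_dist(s, reaction_time=1.5, decel=6.0):
    # reaction distance plus braking distance; s in m/s, result in meters
    return s * reaction_time + s * s / (2.0 * decel)

def max_flow(s, car_length=4.5):
    # upper bound on cars per second past a point at speed s
    return s / (min_safe_dist(s) + car_length)

best = max(range(1, 45), key=max_flow)   # scan integer speeds 1..44 m/s
print(best, 'm/s ->', round(max_flow(best) * 3600), 'cars/hour/lane')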
Thursday, July 17, 2014
Split-brain, or what are the hemispheres for?
I see behavioral studies of patients with severed corpus callosum describing one impairment or another of recognition functions.
"Speech", "Visual Recognition", and "Tactile Recognition" are easy-to-label groups of functions. Simply noting that the majority of such are localized in different hemispheres yields the obvious point: that coordination is interrupted when you cut the connecting nerves. Possibly you could just as well have localized them on one side. Perhaps there is an advantage to having two thoughts about the same thing?
Is there an origin of two brain hemispheres? I want to think there is some "first" hemi-brained creature with some "first" function not possible with only a single brain. But the logic is weak. Hemispheres could have arisen as redundant organs, followed by specialization.
Are there comparable creatures with and without dual hemispheres? If so, are there functional behaviors which differ and seem to have been enabled by the dual hemispheres?
"Speech" and "Visual Recognition and "Tactile Recognition" are easy-to-label groups of function. Simply noting that the majority of such are localized in different hemispheres yields the obvious point: that coordination is interrupted when you cut the connecting nerves.Possibly you could have just as well localized them on one side. Perhaps there is an advantage to having two thoughts about the same thing?
Is there an origin of two brain hemispheres? I want to think there is some "first" hemi-brained creature with some "first" function not possible with only a single brain. But the logic is weak. Hemispheres could have arisen as redundant organs, followed by specialization.
Are there comparable creatures with and without dual hemispheres? If so, are there functional behaviors which differ and seem to have been enabled by the dual hemispheres?
Tuesday, July 1, 2014
Microsoft are you listening?
[Surely not]. I totally buy into the idea that we can structure information in terms of rectangles - nested, overlapped, chained, whatever... and that it is the same as the hierarchical structures that come so naturally. But you implemented it wrong, and a desktop covered with open folders (after a few minutes' work) is a mess.
Get rid of double clicks; use left click to activate. Right click for selection and everything else. Double click can then mean: activate and keep the (f-in) window open behind me... a rarely used option.
And while I am criticizing, why don't you build something for old people? And why don't you build something where I feel like I am in control of everything, from unasked-for updates to pop-ups and scams? Not to mention our government watchers.
A nice example of the cloud jet trail phenomenon
I can now refine the question:
If jet trails diffuse more slowly in areas of higher humidity, what is the difference between the visible cloud where the jet trail diffusion is slowed and the invisible part around the cloud where it is equally slowed?
Saturday, June 28, 2014
A hotel review "Camera"
Distancing myself from the idea of language and "language processing", I find a very good metaphor in the idea of a camera capturing information and focusing it into structured data. Here is a slide from a findaquiethotel.com presentation
Currently I claim findaquiethotel.com has capture > 95%, focus >90%, and is assuming "white light" that ignores differences among guests.
I have built the technology (for noise and wifi statements in reviews), now I am shifting into hype mode.
The Moving Topic
The Moving Topic is the hypothesis that grammar and syntax are largely irrelevant to the analysis of meaning and that, instead, one needs to consider the accumulation of information and projected expectation that occurs, word by word, as an expression is being understood. The current state of such information and expectation is called the "moving topic". According to best model reasoning, there can be multiple simultaneous instances of different moving topics when there is ambiguity. The multiplicity gets narrowed down or broadened as the moving topic progresses, hopefully resolving to a single possible meaning once the moving topic reaches a point where the ambiguity is settled. I want to call this a neo-Whorfian (or anti-Chomskian) hypothesis.
As far as I can tell the words "grammar" and "syntax" do not have a clear definition. So the following theorem is, in turn, not particularly clear but here it is:
Any grammatical rule related to the proper formation of sentences can be replaced by a rule for how the moving topic changes and is resolved.
In terms of translating expressions into meaning, this means that the benefits from studying grammar are a subset of the benefits from studying the moving topic. As such, who cares what the words "syntax" and "grammar" mean?
But parenthetically, if grammar and syntax are irrelevant for meaning, what are they for? I say: for a poetic, pleasant sound. Consider for example the two phrases "floods maroon thousands" vs "flood maroons thousands". It is simpler to think of the "s" at the end of "floods"/"maroons" as having poetic use but no semantic purpose. [Another: "plants you seeded" versus "seeds you planted".] Rules of grammar and syntax become aesthetic rules, divorced (at least a bit) from semantics.
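To make the word-by-word narrowing concrete, here is a toy sketch (the lexicon, roles, and actor-action-target pattern are a made-up narrow world of my own, not an implementation of the full hypothesis):

# The 'moving topic' as the set of candidate readings still alive after
# each word; candidates that no longer fit are pruned as words accumulate.
LEXICON = {
    'flood':     [('flood', 'noun'), ('flood', 'verb')],
    'floods':    [('flood', 'noun'), ('flood', 'verb')],
    'maroon':    [('maroon', 'verb'), ('maroon', 'adjective')],
    'maroons':   [('maroon', 'verb'), ('maroon', 'noun')],
    'thousands': [('thousands', 'noun')],
}

PATTERN = ['noun', 'verb', 'noun']   # actor, action, target

def moving_topic(sentence):
    topics = [[]]                    # all partial readings so far
    for word in sentence.split():
        topics = [topic + [(sense, role)]
                  for topic in topics
                  for sense, role in LEXICON[word]
                  if role == PATTERN[len(topic)]]
        print(word, '->', len(topics), 'live reading(s)')
    return topics

# both phrasings resolve to the same single reading, flood acting on
# thousands - the trailing 's' carries no independent meaning here
moving_topic('floods maroon thousands')
moving_topic('flood maroons thousands')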
I propose this extreme anti-Chomskian view, not because I know it is true, but because it is easier to work from in the analysis of narrow world expressions. In this extreme approach one also denies the usefulness of the general concept of "language". Rather than working with a pure, ideal, Platonic concept, it is easier to deny such idealization and focus, instead, on "expression" and the anthropology of how people express themselves.
Friday, June 27, 2014
Thursday, June 19, 2014
Find a quiet hotel dot com
If you haven't already done so (you legions of fans) check out the web app I delivered:
findaquiethotel.com
A self referential act
I was doing a search for the expression "Goodman's Hill" Quartzite. Something very unusual in the search results is that only one link was found.
But since I am writing at this moment about the topic, no doubt it is a matter of a short time before Google finds it out and the search result stops being unique.
Where else can expressing an observation falsify it?
Oops I did not mis-spell it in the same way. So: "Goodmans Hill" Quartzite
Friday, May 9, 2014
Deriving the additive properties of the integers from their multiplicative properties
Usually formal arithmetic begins with additive properties, such as 0 and the +1 operator. Along the way, multiplication n*m gets defined as the successive addition of n, m times. This leads to the notion of primes and the topic of the distribution of primes.
Instead I want to look at the integers in terms of their multiplicative properties and see if additive properties (in particular the standard ordering of the positive integers) can be derived. So begin with a set of prime numbers, given in order 1, 2, 3, ... with 1 as multiplicative identity, and other numbers formed by stringing together primes with multiplications. The question is: how might these composites be ordered? So here is a very incomplete thought.
We will try to derive an ordering using the symbol “n” to mean immediate neighbor, so that if B is the immediate neighbor to the right of A we can write: A n B.
If there is a sequence of zero or more numbers C1, C2, C3, ... such that A n C1 n C2 n C3 ... n B then we write: A nn B.
Note that A n B implies A nn B, and that A nn B nn C implies A nn C, by definition.
We assume the arithmetic properties of multiplication: presence of the identity “1”, associativity, commutativity. We also assume the standard ordered sequence of primes 2, 3, 5, 7, 11, etc.
Simple Axioms
- (1) The relation ‘n’ and the relation ‘nn’ are not reflexive (A n A is false, A nn A is false)
- (2) The relation ‘n’ is not transitive (but ‘nn’ is transitive, as noted above)
- (3) If A n B then C*A nn C*B (and not C*A n C*B unless C == 1)
- [added] Every A has a B such that A n B
Key Axioms
- (4) (ISOPERIMETRY) If A n B and C n D where A nn C, we must have A*D nn B*C
- (5) (PARSIMONY) Composites occur as early in the sequence as possible without violating (4)
Assume 1 nn 2 nn 3 nn 5, etc.
Axiom (4) is like the isoperimetric concept: of all rectangles with the same perimeter, the square is the one with the largest area. Axiom (5) says that composites are packed as closely together as possible; alternatively, that primes are introduced as infrequently as possible.
Partial Theorem
1 n 2 n 3 n 4 n 5 n 6 n 7 n 8 n 9 n 10 n 11 follows from the axioms and the order of the primes.
Theorem (unproved)
The conventional additive order of the (composite) integers can be derived from these axioms and the order of the primes.
Proof of the partial theorem is something like this: 1 has a neighbor but it cannot be a composite using just 1, so it must be 2; hence 1 n 2.
Now axiom (5) says we should try to put 2*2 as soon as possible after 2, but that would give us 1 n 2 n 2*2. If we apply axiom (4) this says that 1*(2*2) nn 2*2, which is false by axiom (1). Hence we need another prime next to 2, call it “3”, so we have 1 n 2 n 3. Now we must have 3 n 2*2 because of axiom (5), so we have 1 n 2 n 3 n 2*2. We cannot have 2*2 n 2*3 without violating (4), and all other composites are even larger than 2*3, so we must have another prime “5” after 2*2. Now we have 1 n 2 n 3 n 2*2 n 5. (The argument continues???)
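As a quick sanity check (my own test, not part of the post): in the conventional order, A n B means B = A + 1 and A nn C means A < C, so axiom (4) can be verified against that order by brute force.

# Brute-force check of axiom (4) on the conventional ordering 1 n 2 n 3...:
# if A n B and C n D with A nn C, then A*D nn B*C, i.e. A*D < B*C.
N = 50
for A in range(1, N):
    B = A + 1                  # A n B
    for C in range(A + 1, N):  # A nn C
        D = C + 1              # C n D
        assert A * D < B * C   # A*(C+1) < (A+1)*C exactly when A < C
print('axiom (4) holds on the order 1..%d' % N)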
Word clouds for product reviews
In case anyone wants to claim they thought of this, especially when the words are hyperlinks.
Thursday, March 27, 2014
Why geometry of language?
The theory of best model classification is essentially geometric, and it seemed so successful in that domain that there was a strong desire to apply it to other things - along with an impression that it could be applied to reading text. Luckily I work at a place that needed language automation as well as geometry automation, and I got to try out an approach informed by those ideas in the narrow world of custom part design.
But at home, I am trying to really understand what is involved. This requires pursuing the analogy in more detail. What are the space of points and the data fitting that can apply a best-model approach to text? That question drove the formulation of a proto semantics and its definition of narrative fragments, the "geometric objects" of this linguistics. I have yet to carry out the complete program, including a goodness-of-fit metric (something to do with the number of slots filled in a fragment).
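For the goodness-of-fit metric, here is one minimal reading of "number of slots filled" (the scoring rule and the example fragment are my guesses, not a settled definition):

# Score a narrative fragment by the fraction of its slots the text filled.
def goodness_of_fit(fragment):
    # fragment: dict mapping slot name -> matched word, or None if unfilled
    filled = sum(1 for word in fragment.values() if word is not None)
    return filled / len(fragment)

# 'the fan was noisy': source and quality slots filled, location left open
fragment = {'source': 'fan', 'quality': 'noisy', 'location': None}
print(round(goodness_of_fit(fragment), 2))   # 0.67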
Even in college, after reading Bertrand Russell, I became convinced that the way ideas merge together was often more intrinsic to the type of idea than to the grammar that combined words for the ideas. So "or" and "and" took their meaning largely from whatever was being juxtaposed. I was fascinated by how I cannot think of a square and a circle as the same object, but can easily have a square and a redness as the same object. If I try to make a single object red and green, it splits the color in two. I still don't understand how color and shape channels are what they are.
But although I do not understand this built-in logic that comes with our use of words and thoughts about things, there is a reasonable, practical way to use data-fitting ideas with text, provided you narrow the topic enough. Then the word definitions are what you make of them, and reside in your dictionaries and C++ class designs. Language recognition built in this way is real because the class definition is concrete and shared.
Friday, March 21, 2014
Linguistics in the 20th century
Reading first some Chomsky and then, with diminishing respect, looking for something better at the dear old Concord Library and discovering Whorf, I find it interesting that so much energy was spent by the former defeating academic adversaries rather than addressing the topics at hand.
It is too bad they lived in an age before it was possible to imagine teaching a computer to understand some limited part of a natural language. In narrow world language processing the question is how to capture meaning and be able to extract it automatically from examples of natural language. The idea is that if you have a sufficiently well defined "world" object, its member variables can set themselves. So you have to put your money where your mouth is and write software that fills in data objects from language samples. Then do something with the objects. You have to confront the notion of word meaning at a mathematical level to do it successfully. I am trying to do that with best models and the proto semantics.
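To make "its member variables can set themselves" concrete, here is a toy sketch (the class, fields, and cue words are hypothetical, and keyword spotting stands in for the real machinery):

# A toy 'narrow world' object that fills its own fields from a language
# sample; the fields and cue words are made up for illustration.
class HotelNoiseReport:
    CUES = {
        'source':  ['traffic', 'elevator', 'bar', 'fan'],
        'verdict': ['quiet', 'noisy', 'loud'],
    }

    def __init__(self, text):
        words = [w.strip('.,!?') for w in text.lower().split()]
        for field, cues in self.CUES.items():
            found = [w for w in words if w in cues]
            # each member variable 'sets itself' from the sample
            setattr(self, field, found[0] if found else None)

report = HotelNoiseReport('Room was noisy, right over the hotel bar.')
print(report.source, report.verdict)   # bar noisy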
But the controversies of the 20th century did not go away. I hope Whorf would approve my scheme of proto semantic shapes, filled with words having native context for each speaker of the language. I hope he would be sympathetic to my view that many of our primary abstract words are developed from games in childhood which are culture specific. I wonder if he would also approve of my idea that the cultural differences get built on top of simpler meaning entities that maybe are universal.
For example, do not all cultures include: "person", "place", "thing", "want", "ask", and action verbs, and things like transformation, sequence, and grouping? Note, it does not really matter if it is universal or not in terms of programming for a single language.
But there is something to what Chomsky is saying that I feel is very true. The words in my mind are encoded with musculature - what you could call the phonemes [or is it morphemes?]. As I dozed off last night, the word "only" split off an "-ly", which brought a sensation of deep meaning. It would not be surprising at all if the concepts that use "ly" were physically manifested in my brain as a thinking mechanism that includes those muscles. But that is the implementation of meaning, not the form or content of it. So yeah, Chomsky, meaning is there in the physical implementation of language, and the use of such muscles is critically important - the same way my computer program uses silicon dioxide and larger objects like transistors to perform the operations I want in a computer program. But it is the computer program which is of most interest, with the elusive "meaning" we want to understand.
Separately, the algebraic rules of grammar are necessary for extracting subtleties of meaning. They apply during parsing and the ordering of the input into narrative structures. But how important are they? Aren't single concepts usually described with adjacent words? It is a tough subject.
Update: My son David mentions that Saussure wrote that the form and sounds of words cannot be connected to their meaning. I agree completely. At the same time, a system that stores meanings in an efficient way will use a strong degree of parallelism between the word forms/sounds and the word meanings.
Saturday, March 8, 2014
How do words get their meanings?
[Obviously not a complete answer (from a doc I am writing):] No discussion of semantics can escape the central mystery of how words acquire their meanings and contexts. The narrative structures [of proto semantics] are simple by comparison and define roles for words independent of the native meanings. So let us acknowledge two kinds of word meaning: native meaning context and role within a narrative structure. For the sake of discussion, I assume words acquire their native meaning context through repeated use in a single role within multiple occurrences of a narrative structure in varying physical contexts. That single role is a word’s native role.
The “mystery” of word learning is why some parts of the context are perceived as constant and become associated with the word, while other parts are perceived as varying and become part of the expected narrative usages. For example, to use the word “sun”, for me, creates a vague picture of an outdoors with a sky and a specific picture of a sun. The vague picture is waiting to be clarified.
Saturday, March 1, 2014
The Turing Test Asks the Wrong Question
The Turing Test, as I understand it, asks if a computer program could ever fool a person in a blind exchange of statements and replies. I believe the answer is clearly "yes", such a program could exist. Even if the computer had much thinner, context-less definitions for words, real communication might still be possible. But that is the wrong question.
Instead let's ask if a computer program, dressed as an android, could be here with me so as to fool me in every way as to it being such. If this includes my being able to mate with it, then as far as I am concerned it is a person not a robot, and I am not fooled. [I guess that is the point they are making in "Bladerunner".]
Wednesday, February 19, 2014
My Proto Semantics is nearing a final form
Took a look at Wikipedia's definition of "semantics" today. It is a huge mish-mash of form and content, syntax, grammar, vocabulary, semiotics, linguistics, and language differences - all supposedly in pursuit of the subject of "meaning".
Very little of it seemed to have to do with what I consider narratives, or stories. Sure, a Chinese speaker may have some narratives different from mine, but we also must have many that are the same and are about the world around us and the insides of our minds. It is precisely those worlds that we share, and the common narratives about them, that ought to be the proper subject of semantics.
So I slice it and dice it differently from Wikipedia. Form is represented by a proto semantics, as per below. Content is represented by word meanings and the larger mystery: how do words get their meanings?
A PROTO SEMANTICS
Nouns
There are three kinds:
- person (me, or things I lend me-ness)
- thing
- location
These are denoted by single letters, or groups of things in parentheses: X, Y, etc.
Adjectives
Two kinds:
- feeling (attributed only to persons)
- attribute
These are denoted by A, B, etc. To express that a noun X has or feels an adjective A we write:
X__/A
Verbs
These involve a pair of nouns called actor and target:

actor\target | person           | thing                   | location
person       | love, understand | want, assign_value, see | go, indicate, find
thing        | cause_to         | act_on, compare_to      | in, at
location     | affects          | contains, on            | connect_to
To express that a noun X acts on a noun Y we write:
X-->Y
To express the idea that the same verb occurs in more than one part of a narrative, superscript the arrow like this: ‘-->a‘.
Note that noun and adjective types are automatically converted by usage. To say “the dog loves his owner” or “Niagara Falls loves to see tourists at all seasons” lends personhood to these non-person nouns. Similarly we will be able to put attribute words in the locations of nouns (e.g. “red is shirt”). Although almost nonsense, such constructs do carry slight meaning.
Narrative Fragments, Connectors, and Grouping
Narrative fragments are:
- noun
- noun__/adjective
- noun-->noun
- two narrative fragments joined by a comma ‘,’. This is a connector that means ‘consecutive’.
- two narrative fragments joined by a ‘::’. This is a connector that means ‘becomes’.
- any narrative fragment in parentheses. This means ‘treated as a noun’ or ‘treated as an adjective’ depending on the usage.
- any narrative fragment in square brackets. This means an ‘implicit’ noun or adjective depending on usage.
Rule of precedence
For simple expressions the operators bind in this order: ‘__/’, ‘-->’, ‘::’, ‘,’. Otherwise use parentheses to avoid ambiguity.
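As a concreteness check, here is one way the notation above could be rendered as data types (a sketch of my own; the class names and encoding are not part of the definition):

# Proto-semantic narrative fragments as Python data types.
from dataclasses import dataclass
from typing import Union

@dataclass
class Noun:
    name: str
    kind: str                  # 'person', 'thing', or 'location'

@dataclass
class HasAdjective:            # X__/A
    noun: 'Fragment'
    adjective: str             # a 'feeling' or an 'attribute'

@dataclass
class ActsOn:                  # X-->Y, optionally tagged like '-->a'
    actor: 'Fragment'
    target: 'Fragment'
    tag: str = ''

@dataclass
class Joined:                  # ',' means consecutive; '::' means becomes
    left: 'Fragment'
    right: 'Fragment'
    connector: str

Fragment = Union[Noun, HasAdjective, ActsOn, Joined]

# 'the dog loves his owner' - usage lends personhood to 'dog':
dog, owner = Noun('dog', 'person'), Noun('owner', 'person')
fragment = ActsOn(dog, owner)  # dog-->owner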
Update: Most of the arguments in semantics seem to have to do with whether words can be used in narrative roles that do not match the words' natural definitions. Duh! I realize that there is such a cacophony of nonsense out there about this subject that good new ideas will never be heard unless they become the basis of commercially successful applications. This seems like a reasonable test, if evaluated in the long term.