(although maybe SNOWMAX is not the right stuff)
Wednesday, November 20, 2013
He who publisheth the most [online]
He who publisheth the most wins the information battle, if Google be the judge.
Friday, November 15, 2013
Cloud Questions
Cloud History:
What happens to a cloud after it releases rain?
If a cloud persists after raining, can it "re-hydrate"?
How old is a cloud? (days, weeks, months, years...?)
Cloud Shape:
Why are clouds at different altitudes different shapes?
Why are summertime clouds different from wintertime ones?
Why are some clouds flat on the bottom (are they maintaining a thermal barrier or what)? I.e., what is the basis for the observed cohesion/adhesion?
Clouds are observed to suddenly change visibility (they can vanish or appear in just a few seconds). Does this correspond to a change in humidity?
Strange shapes:
Some clouds appear to have a two-tone upper and lower part, with a dark lower part. If this is not a shadow, what is it?
Some clouds appear to be trailing curved tendrils - is this just rain seen at a distance? Or is it a hanging garden of organic fibre?
Some clouds have a "soft" surface while others have a "hard" one. How could the wind be responsible? The bottom of a cloud becomes "soft" just before rain. Also, with the above "two-tone" clouds, the lower, darker part is always softer. [The suggestion that wind currents, being different above and below the cloud, produce this difference in visual texture is unsupported.]
Cloud Behavior:
Can a cloud change altitude without changes in atmospheric conditions around it?
Are clouds "bathymetric", able to change effective volume/density - rising or falling in the atmosphere?
Metabolism, Energetics, Signalling:
Do clouds store energy? I.e., does the energy entering a cloud (via sunlight) get completely converted via cooling and reflection?
Can energy be stored in polymer bond angles as well as in sugars? Could a cloud "flex" like a muscle?
Are clouds in thermal equilibrium with their surrounding air?
Do clouds contain electric currents and the corresponding magnetic fields?
Why do clouds start raining "all at the same time"? Is a quorum sensing mechanism present?
If quorum sensing occurs is it chemical or electric? Given an answer to this, does it explain why one cloud rains while the next one over does not?
What holds a cloud up in the first place since it is denser than the air around it? Again, is it a thermal barrier?
Monday, October 28, 2013
another cloud argument - from contrails
Today I saw a contrail left behind in the sky, with a clean beginning and ending. It was a white line segment against the dawn. Now one end might be because the pilot downshifted, but not both ends of the segment. So, logically, the sharp demarcation at one of the two ends must have been caused by a sharp demarcation in the "evaporative environment". But there was nothing visible around the contrail - and a pure change in humidity would not have a sharp boundary.
Increasingly, the idea of an amorphous aerosol staying stuck together because of a physical arrangement of different-sized water particles sounds like nonsense - unsupported by any evidence.
Saturday, October 5, 2013
eDNA of atmospheric pseudomonas - velcro for water molecules
If you get a bunch of dead Pseudomonas you'll get a mat of organic polymers and water-trapping proteins. Are not large masses of such stuff accumulating right now overhead?
My friend from Carlisle came up with the velcro phrase when I described it to him.
Friday, October 4, 2013
What I have been up to artistically
is colored pencil drawings. I call this kind a "quadrych".
This is a quadrych of a locust, dogwood, skyline, and plane.
Here is one of a tree line, from mid afternoon till dusk (woodpecker, dragonfly, sparrow, satellite).
The rule for a quadrych is that, in some way, the lower right one should be over the top.
Tuesday, October 1, 2013
Cows make fog
The title is sort of a joke but contains a truth: I was driving to work admiring the fog hovering over the grass (clear sky above) and noticing it was denser over the marsh and low lying areas. No surprise there. But as I got onto 2A from Lexington Rd, heading east with a cow field on the right and an open, mowed field on the left, it was surprising to see the fog rising thickly from the cow field and not at all from the mowed field.
It is likely the grass was not the same length or shape from mowing as from mooing (cows), but the difference in fog/no fog was stark enough to require a serious explanation - not a tossed-off assumption about physical differences in the grass blade shapes. What to me is far more likely is that bacteria are needed for ground fog, and there are lots of them in a cow field and not so much in a mowed field.
Tuesday, September 24, 2013
Clouds are biofilms (rolled up)
There. Somebody had to take the extreme position.
If we are to suppose that bacteria have a role in precipitation, and add that the structure of clouds also looks like a fibrous mat, then we might as well also assume bacteria have a role in the formation of clouds - that the whole thing is bacterial, from start to finish.
Update: To put it more briefly: clouds are alive.
Friday, September 20, 2013
Contrails
Here is a picture illustrating contrail dissipation rate being influenced by clouds:
According to the site where I borrowed this, the contrails dissipate more slowly in higher humidity - namely when they are crossing clouds. They also mention some contrails spreading out and lasting a long time, blocking the sun. Do they become bacterial? It should be easy to find out.
Update: It turns out this is a well known phenomenon, but because it is promoted by conspiracy theorists, no scientist dares discuss it. Persistent contrails are called "chemtrails" and are supposed to be a government plot. It is as if, anywhere there is a conspiracy theory about the government, you might get close to the truth by replacing the word "government" with the word "bacteria". Anyway, that is what happens when observant people do not have the tools of knowledge and science. But no excuse exists for those with those tools who do not use them on "suspect" observations. Of course in academics, it is about self-preservation.
Wednesday, September 18, 2013
Visual Impression as a Quantitative Measure
Why shouldn't one argue from visual impression of a material to its intrinsic properties?
For example, the shape of a bubble - or better yet, the dynamic shape as the bubble emerges - is compatible with an elastic material or a material with surface tension (like gum or water), but incompatible with a granular isotropic material (like sand). So if you see a bubble shape (or a growing bubble) it is very fair to conclude the material is more like gum/water than like sand. Can you make the argument quantitatively? [I believe yes, as per best models/chi-squared; see the sketch below.]
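Here is a minimal sketch of how that quantitative version might go, assuming you have sampled profile points of the observed shape. The two candidate models (a circular arc for a surface-tension material, a straight-sided pile for a granular one) and the sample data are invented for illustration:

```python
import numpy as np

# Sketch: is an observed profile more like a surface-tension bubble (circular
# arc) or a granular pile (straight sides)? Fit both idealized shapes and
# compare chi-squared residuals. Profile data and models are illustrative.

x = np.linspace(-1.0, 1.0, 21)
observed = np.sqrt(1.2**2 - x**2)            # an arc-like sample profile

def chi_squared(y, y_model, sigma=0.05):
    """Standard chi-squared misfit, assuming uniform measurement error sigma."""
    return float(np.sum(((y - y_model) / sigma) ** 2))

# Model 1: circular arc (elastic / surface-tension material), radius fitted
radii = np.linspace(1.0, 2.0, 101)
arc_chi2 = min(chi_squared(observed, np.sqrt(r**2 - x**2)) for r in radii)

# Model 2: straight-sided pile (granular material), slope fitted by least squares
slope, intercept = np.polyfit(np.abs(x), observed, 1)
pile_chi2 = chi_squared(observed, slope * np.abs(x) + intercept)

print(arc_chi2 < pile_chi2)  # True: the arc model wins for this profile
```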
Another example is sweeping up dog hair versus sweeping up sand. You sweep up dog hair and it starts to roll and form tubes. You sweep up sand and it stays sand.
Why not have this be a legitimate way to say clouds are more like dog hair?
David Sands
I am happy to say the world's expert in bio-precipitation, David Sands, talked to me for a half hour on the phone. I was wondering about inter-bacteria signalling, and later emailed: "If it starts to rain all at once, then the bacteria must be signalling each other." During the conversation I mentioned fibers made from polymers; he mentioned fibers of DNA/RNA almost 10' long in one bacterium. That'll tangle a cloud together for ya. Also lightning! I am thrilled.
Update: This is the 2nd email I sent him:
Several different observations support the idea that clouds are being held together by bacteria and are a bit "sticky".
The best information comes from comparing normal clouds with jet trails - which are effectively sterile clouds. Jet trails dissipate quickly in the same sky where clouds persist. More significantly, one observes an occasional jet trail intersecting with a cloud. The part of the trail outside the cloud continues to dissipate while the part inside the cloud does not. Hence there must be forces holding the cloud together that are protecting the enclosed jet trail from dissipation. Those forces cannot simply be "water-to-water" forces.
There are other, less compelling reasons to think clouds are a bit sticky. (1) One observes a fast cloud overtake a slower one, tear off a piece of the slower cloud, and continue on its way. (2) The upwind side of a cloud tends to bunch together in a way that looks more like tissue paper than like disconnected and homogeneously dispersed water particles. It just looks like a fibrous mat being rolled up. Over and over one sees the visual appearance of a non-uniform, non-homogeneous, fibrous, and sticky structure.
Bacteria do contain long fibres (DNA and RNA, as you suggested in our phone conversation) and they are capable of producing oils, polymers, and other products. That would make clouds sticky. Many types of bacteria live in biofilms.
I do not know how to test this more quantitatively. One might try to:
(a) Locate free-floating by-products, as well as bacteria, in the cloud
(b) Sample over a surface, over a very short time span, and then do an assay to study the physical distribution of the organisms and by-products within the cloud. It might be easier to do this with fog at ground level.
Also on the subject of bacterial by-products: is inter-bacteria signalling important for rain formation? Certain bacterial group behaviors, like disease and bioluminescence, require a given population density before being triggered. The question is: what signals, if any, do rain-forming bacteria (Pseudomonas) use or require for rain formation? Could one suppress or enhance ice formation with the right signals?
evolution of bird flight
If you ever see a turkey round a corner while running, you'll get some other ideas about the use of wings. Using their wings as they run, turkeys can turn on a dime.
Monday, September 16, 2013
Clouds and Jet Trails
I have been trying to see if there is any observable evidence that clouds are held together by bacteria. I noticed that jet trails dissipate in a way that clouds do not.
Yesterday I saw something compelling: a jet trail intersected a cloud. As I watched, the part of the jet trail outside the cloud dissipated. But the jet trail inside the cloud did not dissipate - it remained visible long after the rest of the trail was gone. This supports the idea that there is some non-water-related force holding the cloud together.
Update: Perhaps as significant is that I see examples where the jet trail stays visible for a short distance outside the cloud. This suggests to me a coating of adhesive fibers that is not as visible because it is not as humid. If the evidence for humidity is the visible cloud, then the invisible one has to have something else going for it.
Saturday, September 14, 2013
Role of bacteria in cloud formation
How is a cloud different from "vapor" from a kettle? Clouds do not dissipate the same way. How much do you want to bet polymer- and oil-producing bacteria hold a cloud together?
Update: How are clouds different from jet contrails?
Wednesday, August 28, 2013
The Anti-Virgin
When you have sex for the first time and "lose your virginity" it is a big deal and marks a transition point in your life. You are well aware of it. But the last time you have sex, when you become an anti-virgin, probably occurs without you knowing it, without much fanfare.
Wednesday, August 14, 2013
A bacteria scanner
A technology goal: to create a scanner capable of probing the bacterial population at a variable depth within tissue. How could you probe without damaging tissue? Maybe with a thin enough glass tube you could sample many bacteria while killing only a few tissue cells. For example, a thin glass tube with small pores and an internal thread or "sponge". You insert the tube into tissue, pull back on the thread to get a fresh sample, then pull back a little more to isolate the sample, then remove the tube. Later, pull the thread more to expose the samples.
Wednesday, August 7, 2013
Monitoring Soil Volumes
With respect to the previous post "Handling bacterial and genomic diversity", I have devised an experiment: drill a hole in the soil; put sterile sand in the hole; wait a fixed length of time; do a spectral analysis of the sand before and after. This determines which chemicals have leached into the sand over that time frame - the by-products of organisms living around the hole.
One person with a mass spectrometer and a microscope could do a lot. Or, instead of spectrum, you could culture the sand as a way of monitoring. They must do this somewhere.
Update: or some kind of probe could be used to take a core sample from living tissue.
Saturday, August 3, 2013
Handling bacterial and genomic diversity
I went to the MBL Friday Night Lecture last night. It was by Richard Roberts, entitled "Why I love bacteria". Essentially it was a review of the zoology of bacteria and contained little to no new science [my how the Friday Night Lectures have been dumbed down!]. But over and over it was reiterated that bacteria are present in most biological reactions; that their colonies are hugely diverse; that their genomes are more numerous and complex than the genomes of their hosts (us); and that little is known about these complex realities.
Listening in, it seemed to me that we are being limited by the concept of individual organisms and that we need new ways of thinking about these hugely complex biological processes in a way that bypasses the complexities. Let me make an analogy with physics: firstly there is an uncertainty principle that forces physicists to ignore the lowest level details; secondly they have the approach of statistical thermodynamics allowing macroscopic properties to be derived from aggregate behavior of the smaller parts. In other words, modern physics has ways to avoid losing the forest for the trees. Maybe biology needs something like that.
(Returning momentarily to the quality of recent Friday Night Lectures, one of the visible manifestations is of older scientists talking about old research and then segueing into extensive discussions of their laboratory protocols. The tools for studying the questions have replaced the questions themselves, as the focus of their research interest. Here we see from a more personal perspective the "lose forest for trees" phenomenon.)
So this lecture got me wondering how we could get away from thinking about individual organisms and their diversity - how we could break the combinatoric deadlock faced by the biological sciences. Borrowing from physics, what kind of statistical biodynamics might be possible? I propose that such a discipline could be based on biological tissue volumes and their macroscopic material properties - namely the chemical and genetic characteristics of what flows across the surface demarcating a given tissue volume; or perhaps a study of how the volume changes shape.
You might want to know how those fluxes change when the volume contains more or fewer of a given organism. It might never happen that different organism mixtures would produce the same macroscopic characteristics but if it did, you might be glad to ignore details that, in the end, did not matter.
And so what? How would such "statistical" thinking change anything? I say it would change the type of experiments you might do with bacteria. Rather than isolate a tubercle bacillus you might instead compare two volumes of soil - one with and one without these creatures. Or you could look at a piece of lung tissue. Figure out how to monitor the flow across the (arbitrary) volume boundary. Since bacteria tend to live in films, the boundary geometry stays relatively simple. And while I am ranting away, what organisms cooperate with TB? You could experiment with the composition of these volumes (adding and subtracting different components) and with their geometries. New forests! New trees!
Update: Suppose two volumes of tissue with TB. In one volume the TB is quiescent, in the other the TB is happily reproducing. The question is: what is the difference in the chemical or organism flux at the edges of such tissue volumes?
Monday, July 22, 2013
We identify objects after compensating for variability
I am writing a paper about best model estimation and finally came to a very simple initial statement:
"Geometric invariants are considered important in mathematics as the quantities that identify geometric objects [Weyl]. They are also important in shape recognition and robotics where an object is to be identified or recognized independently of its position, distance, or angle of presentation to the viewer – which is to say independently of its variants. This paper assumes we can measure an object’s variants and expresses the simple idea that recognition is a composition of a variant factor with an invariant one. In terms of measuring the variants, it states the obvious: we identify what is left after taking out the variability. "
"Geometric invariants are considered important in mathematics as the quantities that identify geometric objects [Weyl]. They are also important in shape recognition and robotics where an object is to be identified or recognized independently of its position, distance, or angle of presentation to the viewer – which is to say independently of its variants. This paper assumes we can measure an object’s variants and expresses the simple idea that recognition is a composition of a variant factor with an invariant one. In terms of measuring the variants, it states the obvious: we identify what is left after taking out the variability. "
Sunday, July 7, 2013
Converting syntax to semantics
Think of reading/understanding as a process of converting words into an overall meaning. As we progress through the (say) text, words are read into a semantic "structure", creating anticipations for subsequent words and meanings. So syntax is transferred into semantics, one word at a time.
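As a toy illustration of that one-word-at-a-time conversion (the lexicon and expectation tables below are invented for illustration, not a real grammar):

```python
# A toy sketch of reading as incremental semantics: each word is folded into a
# growing meaning structure that also carries anticipations for what comes
# next. The lexicon and expectation tables are invented for illustration.

EXPECTATIONS = {"person": ["verb"], "verb": ["thing", "place"],
                "thing": [], "place": []}
LEXICON = {"grunk": "person", "throws": "verb", "ball": "thing"}

def read(words):
    meaning, expecting = [], ["person"]          # anticipate a subject first
    for w in words:
        role = LEXICON.get(w.lower(), "thing")
        meaning.append((role, w))                # word read into the structure
        expecting = EXPECTATIONS.get(role, [])   # new anticipations
    return meaning, expecting

print(read(["Grunk", "throws", "ball"]))
# ([('person', 'Grunk'), ('verb', 'throws'), ('thing', 'ball')], [])
```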
Sunday, June 2, 2013
The False God of Syntax
I am mostly convinced that semantics is hard to study without routinely and repeatedly getting sidetracked into considerations of syntax - of form rather than content. For example, I spent several days being daunted by the variations in "as (blank) as possible":
- make it as low as possible
- make it low as far as possible
- make it as close to the implant as possible
- make it as close as possible to the implant.
These variations are not on behalf of a subtle meaning. They are, I think, almost entirely on behalf of being poetic.
But most people's vision of meaning recognition is syntax recognition plus statistics. I think that is wrong.
My sense is that the best approach to meaning recognition in text is to have a clear sense of the meanings and to simply fill them in from the text. But one tends to get suckered into thinking about syntax instead. I was reading a supposedly wise discussion of such things on Phys.org and they were talking about the difficulty of handling the word "run" which has more meanings than any other word in the English language.
I think they are doing this backwards. It is easy to be fooled into thinking about form not content.
Thursday, May 23, 2013
Some ideas about language
1. Speech and text understanding are dynamic- meaning changes during reading/listening and is not a function of the whole.
2. Language works because of a shared world, not because of its internal structure (which is nearly arbitrary).
3. The key cognitive capabilities are (a) naming; and (b) deriving word types from narrative. [over both the external world and the internal one]
Update: If best models come into this, it is in describing how we maintain multiple possible meanings, and resolve them by reading more words (i.e. making "additional measurements")
Saturday, March 30, 2013
Prosperity
stepping out of character:
Definition: Prosperity is the flow of dollars accompanied by the [reverse] flow of goods and services.
One dollar moving fast enough could keep us all rich. Unfortunately the government cannot create prosperity. But it can create dollars. Creating dollars and giving them to a bank has no effect on prosperity. Instead they should use the creation and gift of dollars as a means of creating a pump/vacuum to stoke the exchange of goods and services directly. A simple way to do that is to go on a buying spree.
Update: If money is a policed exchange rate for goods and services, then without actual goods and services being exchanged there is no functional involvement of money - say the money has no "traction". With a government failing to police "traction", real money is replaced with some kind of poker chip, good only temporarily, in some casino.
Saturday, March 23, 2013
Personal Internet Agents
[More on the topic of "itelligence", see here]
I can't imagine that the internet will be the same in 20 years, or that today's big players - the Amazons, Twitters, Facebooks, travel planners, games, searches, etc. - will remain the big players of tomorrow. If I had anything to do with it, it would be to return identity to people and fight the broader trend toward becoming sub-functions of corporations. I am a DFH (dirty f'ing hippie) and proud of it.
At the root of these things is the concept of personal profiles - that define the internet presence of a person from the point of view of some vendor. Let's hope for a future where people control their own profiles. If that were to work out then the big players of the internet would still be there but would be resources for people, not the other way around. So to facilitate that we will need some form of personal robotic agent that serves as a virtual person, when the real person is doing something else. I imagine the personal robot is my online surrogate but it can only process the information I teach it to process. From my point of view it is a virtual friend - a chat bot. It can interact with other personal agents and, in some circumstances a virtual relationship can become a real one.
This technology will be enabled by automatic text understanding. [What I find on the internet is called "meaning recognition".]
Update: Call it the "internet of words".
Shoehorning semantics into a "best model" framework
I'll just ramble on here for a second and emphasize that the method of best models (see here) is valuable because of the way it divides the data into variant and invariant portions. [It is necessary for geometry and, I warrant, for everything else as well.] For shape congruence, position is considered a variant - to be measured away and then ignored. For shape similarity, the size is considered a variant - to be measured away and then ignored. As we strip away the variants, the invariants are left to define the true nature of the object.
The same needs to be part of the thinking about language and sentence structure. What are the variants and invariants? I think they change while we are reading. As I think about it, it is the hierarchical, sequential, and essentially dynamic nature of best models reasoning that comes into play here. So much linguistic analysis begins after a sentence has been read that perhaps they are throwing the baby out with the bath water by ignoring the sequential and chronological nature of text understanding.
All I have at the moment is a sense that initial word meanings are measurements. They are used to create a framework (ideal sentence) for measuring later words. So the meaning of a sentence is to be broken up into the sentence and semantic structure, in a way that is independent of the actual contents of the words. Sure, and I am not going to dodge any issue that has been around since Plato, but it seems like "meaning" comes and goes in the chronology of understanding.
Here is a (radical?) idea: we give language way too much credit for having subtle structure when it relies mostly on the pre-existing structure of the world. Bertrand Russell, as a young man in Principles of Mathematics (not the older Russell of Principia Mathematica), was puzzled by the possible ambiguity of the word "or" and the word "and". I find he gave up too quickly in the rush for mathematical simplifications. Actually the nature of "and" and "or" is the same: they are both juxtapositions, which vary by the nature of the things juxtaposed (juxtaposed adjectives will merge; disjoint objects will not) and by how we are interacting with the collection formed by the juxtaposition (take one from / take all of).
Another example: in an introductory symbolic logic course at BU, I balked at the statement that "but" and "and" have the same meaning. It took years to realize that "but" negates an unstated, implicit phrase and is used to alert the reader/listener to suspend their semantic expectations. For example, "there was snow on the ground but he went out barefoot" includes an unstated expectation that snow implies reasonable footwear. The negating of that expectation is the purpose of the "but".
Linguistics and a more correct symbolic logic must include the implicit statements that make up the semantic context of explicit language. So buckle your seat belts laddies, this trip says: Boolean logic is wrong and will never come close to imitating humans. It needs to be re-written from the ground up to include the implicit and (I hope) a standardized use of best models.
A proto language with primitive semantics
I wrote a paper about best model reasoning that ends with the sentence:
"...the main limitation in our ability to program an artificial intelligence will be our ability to parameterize ideal objects in the world that intelligence is supposed to encounter.".
So now, moving on to the obvious questions about how language and semantics relate to the whole business of "best models", one needs to pause and ask: what world is the automatic language processor supposed to encounter? And how, in that world, are idealized objects parametrized? Well, I don't know yet, but I think it is worth thinking about a world of language structure populated both by natural text and idealized text, where the idealized text needs to be parametrized somehow. I am looking for "pre-narrative" semantic structures - kind of atomic stories, kind of a semantic meta-language. The goal is a natural and explicit meaning structure, not mathematical tidiness, that can be used to analyze and translate other language. Here is what I've got so far:
Nouns
- person (me; or things I lend me-ness to)
- thing
- place
There are single-word sentences, like "thing", but let us ignore that for now.
Note that person can become thing, almost routinely. Thing can become "personified" but it is less common. This does not need to be explicit for now because the syntactic context is unambiguous.
Verbs
There are semantic structures of the form
person go place
person want thing
person assigns value to thing
person see thing
thing in place
place contains thing
thing causes thing
thing acts on thing
Caveat: I am not sure if other verbs are needed for person. Do we need a "make" verb?
Caveat: it is a bit artificial to construe a "person acting on thing" as something that requires the "person" to first be turned into a "thing" for the purpose of the analysis. I am taking the view that it is only the things which express personhood (indicators or what my son calls "agency") that require the meaning of a person. Otherwise the person functions semantically like a thing.
Adjectives
There are semantic structures of the form
thing has attribute
thing has thing (pretty similar to: place has thing)
Structural elements
( ) groups compound entities into single ones, treated as nouns
[ ] expresses implicit or optional parts of the structure (this helps be explicit)
:: transformation
, sequence
+ binding (an abbreviation of multiple "has", "in" or "contains" statements, or more??)
Allowed syntax
X with attribute Y
[ ] with attribute Y (for example the sentence "hot!")
X acts on Y, [Y::Z]
X wants Z, [X acts on Y, [Y::Z]]
(H) + X (can also be written (H+X))
(H), X (can also be written (H, X))
X :: [X]+Y
other?????
Examples
Original text: Grunk takes ball into shower
Structure: ("Grunk"+"ball") go "shower" (an example of personification)
Original text: Grunk has ball in shower
Structure ("Grunk"+"ball") in "shower"
Original text: Grunk throws ball into shower
Structure: "Grunk" acts on "ball", "shower":: ("shower"+"ball")
Reverse examples. It is not the intention of this pre-narrative to be taken and used as a language, or as a playground for mathematical reasoning. The important point remains: How does something like this work in the context of best models? What are the measurements, ideal objects, and classifications to be made? But, as a dry exercise, let's look at some examples of syntax that may or may not be meaningful in the pre-narrative language.
What is the difference between (X::Y) and [X::Y]? The first expresses a transformation become a noun. The second is an implicit transformation - not at all the same thing. Can transformations become nouns? It seems there are examples in English: "I watched my son's growth with amazement"
"I" see ("son"::(["son"]+"growth")), "I" :: (["I"]+"amazed")
This is a start. It is easy to get distracted by the details of this proto-language, and to forget its actual purpose - which is to provide an example of parametrized ideal sentence structures.
Update: This is really a proto-syntax
Update II: NO it is a proto semantics.
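As a sanity check on the notation, here is a minimal sketch of how these structures could be represented in Python; every class and function name is hypothetical scaffolding, not part of the proto-language itself:

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical Python rendering of the proto-semantics above: nouns are
# strings, "+" binding is Compound, "::" is Transform, "," is a plain tuple
# (sequence). These class names are invented for illustration.

@dataclass(frozen=True)
class Compound:            # (X + Y): entities bound into a single noun
    parts: Tuple

@dataclass(frozen=True)
class Transform:           # X :: Y
    before: object
    after: object

@dataclass(frozen=True)
class Relation:            # the verb structures: person go place, thing in place, ...
    verb: str
    subject: object
    obj: object

# "Grunk takes ball into shower"  ->  ("Grunk"+"ball") go "shower"
takes = Relation("go", Compound(("Grunk", "ball")), "shower")

# "Grunk throws ball into shower" ->
#   "Grunk" acts on "ball", "shower" :: ("shower"+"ball")
throws = (Relation("acts on", "Grunk", "ball"),
          Transform("shower", Compound(("shower", "ball"))))

print(takes)
```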
"...the main limitation in our ability to program an artificial intelligence will be our ability to parameterize ideal objects in the world that intelligence is supposed to encounter.".
So now, moving on to the obvious questions about how language and semantics relate to the whole business of "best models", one needs to pause and ask: what world is the automatic language processor supposed to encounter? And how, in that world, are idealized objects parametrized? Well, I don't know yet but I think it is worth thinking about a world of language structure populated both by natural text and idealized text, where the idealized text needs to be parametrized somehow. I am looking for "pre-narrative" semantic structures, kind of atomic stories, kind of a semantic meta language. The goal is natural and explicit meanings structure, not mathematical tidiness, that can be used to analyze and translate other language. Here is what I've got so far :
Nouns
- person (me; or things I lend me-ness to)
- thing
- place
There are single word sentences like thing but let us ignore that for now.
Note that person can become thing, almost routinely. Thing can become "personified" but it is less common. This does not need to be explicit for now because the syntactic context is unambiguous.
Verbs
There are semantic structures of the form
person go place
person want thing
person assigns value to thing
person see thing
thing in place
place contains thing
thing causes thing
thing acts on thing
Caveat: I am not sure if other verbs are needed for person. Do we need a "make" verb?
Caveat: it is a bit artificial to construe a "person acting on thing" as something that requires the "person" to first be turned into a "thing" for the purpose of the analysis. I am taking the view that it is only the things which express personhood (indicators or what my son calls "agency") that require the meaning of a person. Otherwise the person functions semantically like a thing.
Adjectives
There are semantic structures of the form
thing has attribute
thing has thing (pretty similar to: place has thing)
Structural elements
( ) groups compound entities into single ones, treated as nouns
[ ] expresses implicit or optional parts of the structure (this helps be explicit)
:: transformation
, sequence
+ binding, (an abbreviation of multiple "has", "in" or "contains" statements, or more??]
Allowed syntax
X with attribute Y
[ ] with attribute Y (for example the sentence "hot!")
X acts on Y, [Y::Z]
X wants Z, [ X acts on Y, [Y::Z ]
(H ) + X (can also be written (H+X)
(H), X (can also be written (H , X)
X :: [X]+Y
other?????
Examples
Orginal text: Grunk takes ball into shower
Structure: ("Grunk"+"ball") go "shower" (an example of personification)
Original text: Grunk has ball in shower
Structure ("Grunk"+"ball") in "shower"
Original text: Grunk throws ball into shower
Structure: "Grunk" acts on "ball", "shower":: ("shower"+"ball")
Reverse examples. It is not the intention of this pre-narrative to be taken and used as a language, or as a playground for mathematical reasoning. The important point remains: How does something like this work in the context of best models. What are the measurements, ideal objects, and classifications to be made? But, as a dry exercise lets look at some examples of syntax that may or may not be meaninful in the pre-narrative language.
What is the difference between (X::Y) and [X::Y] ?? The first expresses a transformation become a noun. The second is an implicit transformation - not at all the same thing. Can transfomations become nouns? Seem like there are example in English: "I watched my sons growth with amazement"
"I" see ("son"::(["son"]+"growth")), "I" :: (["I"]+"amazed")
This is a start. It is easy to get distracted with the details of this proto-lanuage, and to forget its actual purpose - which is to provide an example of parametrized ideal sentences structure.
Update: This is really a proto-syntax
Update II: NO it is a proto semantics.
Wednesday, March 13, 2013
When the internet becomes intelligent
OK, now that I've got all this cool technology, let's think about a time when the internet becomes intelligent - or supports intelligent life in some way. The idea of one huge intelligence is repulsive to me, so I'd rather plan for a future with lots of different intelligent organisms vying for attention. So, how to build an intelligent organism that lives on the internet and makes money for my family?
Update: Maybe a spec like this:
An internet intelligence...an intertelligance...the "itelligent agent".
- a url that expresses a person
- has public, private, and semi-public "dna".
- expresses user settings in a profound way
(Meaning things like: the user can invent their own types of settings, like UI widgets, using public or private h files and ultimately what their agent does is up to them)
- has a universal API for communication with other itelligent agents (see Microsoft's IPerson interface).
- can go shopping, do a search, or plan a trip
- can find friends
- can push information at the user
- can be taught things by the user, naturally and correctly
- gives the user total control of their own profiling.
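A hedged sketch of how the spec above might look as a single class; every name here is a placeholder invented for illustration (in particular, nothing below reproduces Microsoft's actual IPerson interface):

```python
# A hypothetical sketch of the "itelligent agent" spec as a Python class.
# All names are invented placeholders; this is not a real API.

class ItelligentAgent:
    def __init__(self, url, public_dna, private_dna, semi_public_dna):
        self.url = url                            # a URL that expresses a person
        self.dna = {"public": public_dna,         # public, private, semi-public "dna"
                    "private": private_dna,
                    "semi_public": semi_public_dna}
        self.profile = {}                         # owned and controlled by the user
        self.lessons = []                         # things the user has taught it

    def communicate(self, other, message):
        """Universal agent-to-agent messaging (placeholder protocol)."""
        return other.receive(self.url, message)

    def receive(self, sender_url, message):
        return {"from": sender_url, "ack": True}

    def shop(self, query):                        # go shopping, search, plan a trip
        raise NotImplementedError

    def find_friends(self):
        raise NotImplementedError

    def push_to_user(self, info):                 # push information at the user
        raise NotImplementedError

    def teach(self, lesson):
        """The user teaches the agent; it only processes what it was taught."""
        self.lessons.append(lesson)
```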
Monday, February 25, 2013
The general solution to a discrete expert system - the Best Model Estimate
Here mu is a set of measurements, the phi_i are sections of mu (one for each i in {1, 2, ..., K}) thought of as parametrizations, and e is a real classification to be estimated. It is pretty easy to show that the average success rate of the estimator is the sum of the volumes of the V_i intersected with the inverse image of i under e(). So when X has measure 1, the total error is:
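[The formula itself was an image in the original post and is lost; a plausible reconstruction from the surrounding definitions, assuming V_i is the subset of X on which the estimator returns classification i:]

\[ \mathrm{error} \;=\; 1 \;-\; \sum_{i=1}^{K} \mathrm{vol}\!\left( V_i \cap e^{-1}(i) \right) \]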
I guess it makes sense to call this a best model estimate.
Thursday, February 21, 2013
Best fit step
Consider "fitting" a step shape (in red) to data (in blue) by simply matching step height to data height and choosing the first place the (lower) level encounters the data.
Once you have aligned the abstraction (in red) over the data (in blue) you can do a least squares fit. [This is the only evidence I have of the value of least squares best model.]
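A minimal sketch of that procedure, assuming the data is a 1-D array with a lower level followed by a higher one; the 25% edge fractions used to estimate the two levels are an assumption for illustration:

```python
import numpy as np

# Sketch of the step fit described above: take the step's two levels directly
# from the data, put the jump at the first place the data leaves the lower
# level, then score the aligned step by least squares.

def fit_step(data, edge_fraction=0.25):
    data = np.asarray(data, dtype=float)
    n = len(data)
    k_edge = max(1, int(n * edge_fraction))
    low = data[:k_edge].mean()               # lower step level, from the left edge
    high = data[-k_edge:].mean()             # upper step level, from the right edge
    threshold = (low + high) / 2.0
    jump = int(np.argmax(data > threshold))  # first place the level meets the data
    model = np.where(np.arange(n) < jump, low, high)
    residual = float(np.sum((data - model) ** 2))  # least-squares fit quality
    return jump, low, high, residual

jump, low, high, residual = fit_step([0.1, 0.0, 0.2, 0.1, 0.9, 1.1, 1.0, 0.95])
print(jump)  # 4: the step is placed where the data first crosses the midpoint
```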
A prediction game
Hey this is kind of blogworthy:
Suppose that person A and person B agree that A will classify incoming objects into, say, n categories. And they agree that A will follow a systematic procedure. Person B has the task of predicting what person A will do when a new object arrives to be classified. Person B gets to examine the object first but must be in a different room from person A (so they cannot observe each other).
The essence of what an expert system must do is exactly this, but with the further burden that "person B" must be a computer equipped with rulers/calipers/sensors/detectors/etc to process sensor data about incoming objects. So an expert system is an automated person B.
Sunday, February 17, 2013
An imaginary conversation
A customer walks into the store and speaks to the clerk:
Customer: Do you have any yellow enamel paint?
Clerk: What kind of yellow?
Customer: A pale, butter yellow.
Clerk: What is it for?
Customer: The eye of a merganser.
...there is a pause...
Clerk: Then you won't be needing very much.
Thursday, February 7, 2013
Pattern recognition by the method of best models
I have been hunting for words in some of the previous posts. Let me try again:
Suppose you have a space of objects, each given by data that can be measured and you wish to use the measurements to help recognize the object. Here is the method: use a discrete dictionary of parametrized ideal objects, called models, whose measurements are set to match the measurements of a given object you wish to recognize. There may be several models in the dictionary with these same measurements. Each such model is itself an object in the object space. Because the model is in the same space as the object to be recognized, comparison metrics can (and should) be based on model-to-object distance there, not on any distance concepts in the space of the measurements. The best model is the one closest to the object in object space, and having the same measurements as the object to be recognized.
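A minimal sketch of the method just described, under toy assumptions invented for illustration: objects are lists of floats, the "measurement" is (mean, range), and the dictionary holds two parametrized ideal objects. The point is that the winner is chosen by distance in object space, not in measurement space:

```python
import math

# Toy best-model recognition. Everything here (the object space, the
# measurement, the two models) is an illustrative assumption.

def measure(obj):
    return (sum(obj) / len(obj), max(obj) - min(obj))

def object_distance(a, b):
    """Distance in object space (not measurement space)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Each model maps measurements back to an ideal object in the same space as the data.
MODEL_DICTIONARY = {
    "constant": lambda mean, rng, n: [mean] * n,
    "ramp": lambda mean, rng, n: [mean - rng / 2 + rng * i / (n - 1) for i in range(n)],
}

def best_model(obj):
    mean, rng = measure(obj)  # set each model's measurements to match the object's
    candidates = {name: make(mean, rng, len(obj))
                  for name, make in MODEL_DICTIONARY.items()}
    # the best model is the candidate closest to the object in object space
    return min(candidates.items(), key=lambda kv: object_distance(kv[1], obj))

name, model = best_model([0.0, 0.26, 0.49, 0.77, 1.0])
print(name)  # "ramp"
```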
In particular, you want to avoid the trap of defining recognition in terms of regions in the measurement space. That can lead to expert systems with training instability and an infinity of corner cases. This new, better way of looking at it - with a "Total Space" of objects, a "Base Space" of measurements, and a method for inverting the measurements - is reminiscent of creating a section of a fibre bundle (like the logarithm).
But what is most evocative to me, is that this recognition takes place in a context where measurement is possible - a context with some form of coordinate system and some mechanism for aligning the coordinates to the objects to be recognized, in order to perform measurements. Hence the recognition is a byproduct of a perception (the measurement) and a projection (the forming of models being compared to the data). If you think about it, this is a reasonable fit for how we navigate the world about us in a continuous feedback loop of perception and projection. But here is the hardest part of the idea: the initial measurement depends on a prior coordinate frame attachment which, itself, is a best model result. The process is inherently hierarchical and (I suppose) will work best when the pattern dictionaries are nested in the same way that details are related to the whole.
Since getting my PhD, I have been fascinated not just with the mathematics of moving frames but also with some of the applications of attaching coordinate frames to data (e.g., "Anatomical Frame Standards" for medical imaging). So, to discover these ideas connected to those of another old friend, the logarithm, all in a way that is a reasonable description of a cognitive process - is quite gratifying. I'm sure it sounds looney but I did use it successfully to solve a problem at work involving automatic feature detection for surfaces in 3D. So here is the full-on crazy: