Monday, March 28, 2016
Friday, March 25, 2016
Leibniz's indiscernibles and realism
I think Leibniz is looking at it wrong when defining the ways things become discernible. I would rather imagine an innocent baby seeing nothing but diversity around itself and only slowly separating out constancy of things and, even later, similarity of things. How this works is what needs to precede discussions of AI.
Suppose, for vagueness, that we are considering some phenomenon, that it is given to us by data, and that we propose to process the data "using AI" to recognize or handle it in some way. Now I believe that when humans are doing that sort of thing, as adults, we are operating with an underlying concept of what the data represents. It is all very well to be a realist and to suppose that the underlying concept exists in the data and can be discovered there using generic tools (as if!). But what if you are wrong and the data contains no concept? Then you will not discover it by applying any generic tools.
Monday, March 14, 2016
Putting yourself inside the narrative
I've written about this before: part of understanding the 'story' noun type is its special verbs and operators. If I did not already say this, there is a special relationship - not necessarily semantic - of putting yourself into the story. Call it what you want, it is necessary for the story to be understood, let alone believed. I suppose we should mull on and relish this: taking yourself into the story and back out to where you only hear the words.
UPDATE: This is completely wrong! I have to believe that the behavioral psychology of an event is more primitive to the human experience than capturing it as a narrative. We do not "sink into" the event but, rather, circle back around after creating a narrative. The need for narratives comes after the behavior. Interestingly, narratives are discrete but the actual underlying stories of experience must be continuous (although they could still be abstract).
A generic story
You can string truisms together:
X->person :: person_/feeling :: person->Y :: (person->Y)_/GOOD :: Y_/GOOD
In terms of truisms
X->person _/ [place,time,manner] (T1)
:: (T2)
person_/feeling :: (T3)
person->Y :: (T8A)
(person->Y)_/GOOD :: (T8B)
Y_/GOOD
Example: It was a nice day so Peter felt like joking around, so he sought out a friend and made them laugh.
Since that story makes no use of T4-T7, there are obviously some other generic possibilities.
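To make the chaining concrete, here is a minimal sketch in Python of how I imagine a generic story like this could be held as a data structure: each truism becomes a labeled step, "::" is read as plain sequencing, and the bindings are taken from the Peter example. The class and function names are placeholders of mine, not part of the notation.

# A hypothetical sketch only; the labels follow the breakdown above.
from dataclasses import dataclass

@dataclass
class Step:
    truism: str   # which truism the step instantiates (T1, T3, T8A, ...)
    pattern: str  # the schematic form, written in the notation above

GENERIC_STORY = [
    Step("T1",  "X->person _/ [place,time,manner]"),
    Step("T3",  "person_/feeling"),
    Step("T8A", "person->Y"),
    Step("T8B", "(person->Y)_/GOOD"),
    Step("",    "Y_/GOOD"),   # no truism label given for this final clause above
]

def instantiate(story, bindings):
    """Fill the schematic variables with concrete story elements."""
    steps = []
    for step in story:
        text = step.pattern
        for var, value in bindings.items():
            text = text.replace(var, value)
        steps.append(text)
    return " :: ".join(steps)

print(instantiate(GENERIC_STORY, {
    "person": "Peter",
    "feeling": "feeling like joking around",
    "X": "the nice day",
    "Y": "a friend",
}))

Run as-is, it just prints the Peter story back in the same notation with the variables filled in; the point is only that the generic pattern and its instances can be kept separate.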
Sunday, March 13, 2016
Where is all that anger supposed to go?
I am not much of a historian, but it seems to me that the way people are aroused during the "drumbeat of war" is one of the standard initial stages of a war. Rousing the rabble. If "Gone With the Wind" correctly portrays the South before the Civil War, or if the history of discussions in Concord, Lexington, and other Middlesex communities is correct, then they certainly have in common the same sort of chest thumping, strutting, and anger generation that came from George Bush before he sent young Americans off to die for his oil buddies.
Does this kind of aroused anger ever die down without first giving itself expression in violence? Because the rabble certainly are being aroused here, in the US in 2016, and I wonder if that anger must express itself somehow. Blood is already being spilled at campaign events. What happens to the anger if the angry people lose the election?
Thursday, March 10, 2016
Monday, March 7, 2016
About Leibniz's "Identity of Indiscernibles"
According to Wikipedia, Leibniz said something like:
"Two distinct things cannot have all their properties in common."
That certainly is the issue. Let me ask:
- how do we know sameness?
- how do we know difference?
and
- why is the experience broken into separate sensory channels?
[I know I am no Leibniz, but a guy can wonder].
Update: I think of examples where my mind suddenly differentiates two people's faces I have been confusing. It happened with Susan Sarandon and Sigourney Weaver, until I saw them together or somehow finally grasped the difference. More recently, Katy Perry and Zooey Deschanel. Something about brunettes?
IBM's Watson - do we get a hint of what they are doing?
Take Healthcare. We can assume IBM has access to lots of individual health records, as well as all the scientific literature about - I guess - the corresponding health problems. Suppose they set about pairing the phrases used in the literature with examples of individual symptoms?
With a good understanding of the narrative patterns in the scientific literature, you might say that the correlation engine's job would be to pair stories to symptoms, and come at the narrative of diagnosis in a different way, through the back door.
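To be clear about what I mean by "pairing", here is a deliberately naive sketch: score each phrase from the literature against each recorded symptom by simple word overlap and keep the best match. This is only my toy illustration of the general idea, not a claim about how Watson actually does it, and the phrases and symptoms below are invented.

def overlap_score(phrase, symptom):
    # crude similarity: fraction of the symptom's words that appear in the phrase
    p, s = set(phrase.lower().split()), set(symptom.lower().split())
    return len(p & s) / max(len(s), 1)

literature_phrases = [
    "persistent dry cough with low grade fever",
    "elevated fasting blood glucose",
]
patient_symptoms = ["dry cough", "fever", "high blood glucose"]

for symptom in patient_symptoms:
    best = max(literature_phrases, key=lambda ph: overlap_score(ph, symptom))
    print(symptom, "->", best)

Any real system would of course need something far richer than word overlap, which is exactly where an understanding of the narrative patterns in the literature would come in.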
Sunday, March 6, 2016
Deep AI and the Homunculus - more critique of realism and the deep stupidity it gives rise to
You know the idea of the homunculus? He is a little man looking at a screen inside your head, who sees pictures for you. He is invoked by people trying to explain how human vision works. It is an example of bad realistic philosophy and you should see the idea is circular and totally stupid. Anyway, the story is that an MIT professor (could have been Minsky) assigned computer vision as a student summer exercise. How, you might ask, could anyone be so naive?
Not only does the "homunculus" fail to explain anything, you have to acknowledge that people who slip into that way of thinking really don't understand the problem they are trying to solve. The problem is not how we recognize objects in the world; the problem is how we turn the world into objects. Until we get started on that, the Deep AI stuff (which assumes objects already exist and is a correlation engine) will be dumb no matter how wealthy the "intellectual" who hypes it.
Thursday, March 3, 2016
Why Deep AI will always be deeply dumb
I have been trying to put my finger on the difference between a superb correlation engine and what I call "intelligence". I gather that "Deep AI" is a re-branding of multi-layer neural nets with back propagation [not sure what that means] and that, as such, it provides a correlation engine that is more or less useful. But it feels like something is missing. It is a matter of philosophy [I plan to oversimplify]. You see, there are realists and idealists.
Realists believe that facts exist in the universe and that our intelligence lies in our ability to discover the pattern of those facts. Most people tend to be realists: scientists, computer scientists and, particularly, AI researchers trained in Cambridge MA.
Idealists, by contrast, believe that we can only discover those facts or patterns of fact in the universe that we are prepared to observe and that, through the limitations of how we perceive, we only perceive what our organic design is capable of. An idealist might believe, for example, that "facts" are purely human facts and that, given another intelligent life form, facts could be entirely different. Where a realist might think about it and conclude that facts are finite - due to the finite number of particles and laws of the universe - an idealist would conclude that facts are infinite, because bending the universe to our perception is an ongoing process.
I (as an idealist) never quite understood why the notion that "what you see is seen through your eyes" is such a controversial idea and so repugnant to the masses. It is something which I think AI researchers must ignore with a slight pang of guilt - because it is so obviously true.
As far as I know, the Greeks did not really think in these terms. For Plato (I guess) the ideal was better than the real, and the role of the person within the universe was never considered. If I had to say, Plato would be the idealist and Aristotle the realist. The position and definition of idealism was clearly given by George Berkeley (my man) at a time when Newton was - presumably - a significant proponent of realism. It would be interesting to know what Leibniz thought about this subject. The other deep thinkers of Germany, Kant and Hegel, had views which I am sure I do not understand. (At this point, I would be surprised if anyone understood what Hegel was thinking.) More recently, Descartes's "I think therefore I am" and "mind versus body" suggest he might have been a realist, treating the body like a machine. At least at one time there were idealists and realists making their ideas known in England. Meanwhile, on the continent, it seems like a mixed bag. Lately, the French philosopher Bergson would need to be considered an idealist because he routinely railed against the supposed facts of our perception and tried to point out that there was a more dynamic, fluid, and pre-linguistic flow of time which could never be understood through our perceptions - or at least not through what we say about our perceptions. In America, we had our own schools of thought that were a bit different. The only philosophers I have tried to read, Peirce and James, must have been the equivalent of idealists. At least I want to think so, because I like what they wrote about.
The reason for talking about this history is to put into perspective the events following World War I that brought English and continental ideas to America [and perhaps stamped out what was left of American philosophy - or drove it out of Massachusetts towards California]. I want to say that the realist and British way of thinking (as well as German bomb making) came to Cambridge MA and the prominent universities of Harvard and MIT. This left the idealists and phenomenologists to migrate south of the Charles River into Boston and the university backwaters there, like Boston University (my college), and I don't know where else. Boston University got the Hegel scholars and the Bergson experts. In Cambridge, they picked up from where the dons of Oxford and Cambridge (England) left off. I am sure that over in Boston we were given a somewhat different picture of these philosophical ideas.
The point I want to make is that because of this history, Cambridge MA became the home of the most advanced thinking in realistic philosophy. They got all the money and all the best students, while a weaker flame of idealistic philosophy flickered, not dead, across the river in Boston. So realism is the basic philosophy behind AI research. And that is what is wrong with it. I'll explain.
Leaving philosophy aside for a moment, let's look at a more concrete topic in engineering and mathematics: fitting model objects to data. The classic example is the "regression line" in statistics, but engineers also occasionally need to fit circles or hyperbolas to data and know that each different sort of ideal object ("model") has its own methods for fitting and its own value to the user. A good engineer will tell you that the data itself is not inherently "linear" or "circular", but the question can come up as to which model is a better fit to the data - which is a more useful way to think about the data. From my own work I can tell you that you can also use piece-wise linear fitting and get better fits by allowing more and more pieces. Ultimately, you can go all the way and have a different 'piece' for every consecutive pair of data points - so that you get a meaningless zig-zagged mess that passes through each data point and tells you nothing useful. These practitioners know very well that the data, by itself, does not have any spine and that you have to choose one if you want to interpret the data usefully. I doubt they would all agree to the statement that data fitting is basically an idealistic process, but with a little thought they might agree that correlations in the data, by themselves, are not particularly useful. What is useful is the correlations that exist in the idealized model, after it has been fit to the data.
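As a small illustration of that last point, here is a sketch in Python with numpy, on made-up sine-plus-noise data: the total fitting error falls as you allow more pieces, and with one piece for every consecutive pair of points the error is essentially zero even though the "fit" is the meaningless zig-zag described above. The function name and the data are mine, for illustration only.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

def piecewise_linear_sse(x, y, n_pieces):
    """Fit an independent least-squares line to each of n_pieces equal chunks
    and return the total squared error."""
    sse = 0.0
    for chunk in np.array_split(np.arange(x.size), n_pieces):
        slope, intercept = np.polyfit(x[chunk], y[chunk], 1)
        sse += np.sum((y[chunk] - (slope * x[chunk] + intercept)) ** 2)
    return sse

for n in (1, 2, 5, 10):   # 10 pieces on 20 points = one line per consecutive pair
    print(n, "pieces ->", round(piecewise_linear_sse(x, y, n), 4))
# With 10 pieces each chunk has 2 points, every little line passes exactly
# through its pair, and the error is essentially zero: a perfect but useless fit.

The data has no opinion about how many pieces to use; that choice, the spine, comes from us.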
It is exactly that spine which is missing from neural networks and other "deep" technologies. Leave it to the realists to go about looking to discover new patterns in the data, rather than finding pre-existent ones there. So in the end the problem with Deep AI is that it springs from a naive realism that says the data contains the patterns, and from the untested, incorrect hypothesis that knowledge is the pursuit of facts that exist in the data. I suspect smart people like Marvin Minsky started thinking about machines before they were done with childhood, and long before they had achieved any maturity in understanding the word "intelligence". With a guilty shrug of the shoulders they must have begun the grand adventure of making a better correlation machine - and have been busy with implementation details and teraflops ever since. It is too bad, because if they had better philosophy they would build better machines.
I watched an episode of Chronicle (a Boston TV show highlighting one or another aspect of the New England community) about Deep AI and they mentioned IBM's "Watson" machine. An interviewee, perhaps the director, mentioned how that machine uses human language and studies correlations there. That is potentially a far more viable approach as it looks to build up an understanding of existing human patterns to devise its engine. Good luck to them and a pox on the other "exponential growth of technology" which is frequently alluded to as bringing us to the edge of intelligent machines...soon but just not yet.
Tuesday, March 1, 2016
Multivariate correlations without underlying models may be a waste of time
I know multivariate correlations are hard to figure and perhaps, without models, not particularly meaningful.
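Here is a minimal sketch of what worries me, using numpy on pure noise (the sizes are arbitrary): with many variables and few samples, the raw correlation matrix already contains impressive-looking pairwise correlations that mean nothing, because there is no model behind them.

import numpy as np

rng = np.random.default_rng(1)
n_samples, n_vars = 20, 50
data = rng.standard_normal((n_samples, n_vars))   # no structure at all by construction

corr = np.corrcoef(data, rowvar=False)            # 50x50 sample correlation matrix
off_diag = corr[~np.eye(n_vars, dtype=bool)]      # drop the trivial diagonal
print("largest spurious correlation:", round(np.abs(off_diag).max(), 2))
# Typically prints a value well above 0.5 even though the columns are independent.

Without an underlying model to say which of those correlations could matter, the numbers by themselves tell you nothing.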