Thursday, February 26, 2015
We have two clocks in the kitchen, and I checked the time on one of them to see if it was OK to knead my dough again. It was 3:09 - not ready. I left the room for a few moments, came back, and checked the other clock, which now said 3:08. Of course I swung my head immediately to the first clock, and it was still 3:09. The funny thing is that briefly, as my head was swinging, I was dazed and confused and entertaining various hypotheses, one of which was that time might have gone backwards while I was out of the kitchen.
That this is a natural reaction seems significant. It means that the spatial metaphor for time, as measured by a clock, brings with it spatial properties (like bi-directionality) that do not belong - and they never stop being inappropriate.
Monday, February 23, 2015
How landscape is de-emphasized in English
Interested in semantics as I am, I came across a couple of ways in which the concepts of place, or landscape, get the "short end of the stick" in our analyses of language - of English in particular. The first example is in my verb table (shown at the end of this post), where I try to come up with verbs where a place is the 'actor'.
The table entry where "place affects person" is pretty lame, and generally the 'place' row is sparse. So English is quite poor in its vocabulary and concepts for how landscape and position act upon us.
A second example has to do with adjectives that are meaningful for a place but not for a person or thing - for example, the word "windy". I find that in English we have the generic term "feeling" to describe the attributes of a person, and we have the word "attribute" or "property" to describe the attributes of a thing. Yet there is no word for adjectives that are particular to place and landscape. "Sad" is an example of a feeling, and "windy" is an example of a ...[there is no named category]. Thus English is poor in its vocabulary and concepts for how landscape fits within its own semantics.
I propose to call it a landscape "setting", generically. So "windy" will be a landscape-setting term.
actor\target | person           | thing                         | place
person       | love, understand | want, assign value, see, find | go, indicate
thing        | cause to         | act on, compare to            | in, at
place        | affects          | contains, on                  | connects to
Monday, February 16, 2015
Computer Virus Vulnerability is Unnecessary - the Standard Architecture is Absurd
The NYT makes the following foolish statement while discussing spyware (here) found by Kaspersky in most PCs:
"Firmware is about the closest to the bare metal you can get — a coveted position that allows the attacker not only to hide from antivirus products but also to reinfect a machine even if its hard drive is wiped."
They are wrong, because you can get all the way to the "bare metal": the "bare metal" can get along just fine without firmware. You do not need firmware to program hardware - firmware is just cheaper than building a chip with the capabilities baked in. And this brings up one of my pet peeves: that computers are designed to be vulnerable, with a writable OS sitting on an unprotected hard disk. Totally unnecessary.
Instead, you could make the OS like a light bulb that plugs into an actual socket. Want an upgrade? They send you a new one; you unscrew the old one and screw in the new one. A standard PC would need both vulnerable and invulnerable memory. The latter would not be writable at all, or would be writable only by the OS. I believe all this virus protection is needed only so Microsoft can save on shipping costs.
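To make the idea concrete, here is a minimal sketch in C, assuming a Linux-style system with mmap and mprotect; the buffer name os_image is just a stand-in I made up for illustration. This only mimics in software what I am proposing in hardware - a real socket-and-bulb design would put the write protection in the memory device itself, where no program could undo it - but it shows the behavior I want: the OS region can be read, never modified.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 4096; /* one page, standing in for the whole OS image */

        /* Writable at first, so the OS can be "installed" once. */
        char *os_image = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (os_image == MAP_FAILED) { perror("mmap"); return 1; }
        strcpy(os_image, "trusted OS code");

        /* "Screw in the bulb": seal the region so writes fault. */
        if (mprotect(os_image, len, PROT_READ) != 0) { perror("mprotect"); return 1; }

        printf("running: %s\n", os_image); /* reading still works */
        /* os_image[0] = 'X';    a write here would raise SIGSEGV */
        return 0;
    }

The difference, of course, is that mprotect can be reversed by sufficiently privileged code, which is exactly why the protection belongs in the hardware, not in software.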
It should be no big deal, just more expensive for Microsoft. But as I think about it, it is bad for Microsoft in several ways: (1) it makes their OS more easily replaceable, because the light-bulb-to-socket interface would be public; and (2) it draws a clear line, at that interface, between OS and non-OS. I guess this would be difficult for a company that is so fond of bundling. To be clear: our virus vulnerabilities are connected to profit motive, not to necessity.
"Firmware is about the closest to the bare metal you can get — a coveted position that allows the attacker not only to hide from antivirus products but also to reinfect a machine even if its hard drive is wiped."
They are wrong because you can get all the way to the "bare metal". The "bare metal" can get along just fine without firmware. You do not need firmware to program hardware - it is just cheaper than building a chip with baked-in capabilities. And this brings up one of my pet peeves: that computers are designed to be vulnerable - with a writable OS sitting on an unprotected hard-disk. Totally unnecessary.
Instead, you can make the OS be like a light bulb, that plugs into an actual socket. Want an upgrade? They send you a new one, you unscrew the old one and screw in the new one. A standard PC would need both vulnerable and invulnerable memory. The latter would not be writable, or could be writable only by the OS. I believe all this virus protection is needed only so Microsoft can save on shipping costs.
It should be no big deal, just more expensive for Microsoft. But as I think about it, it is bad for Microsoft in several ways: (1) It makes their OS more easily replaceable - because the light bulb-to-socket interface would be public; and (2) It draws a clear line, at that interface, between OS and non-OS. I guess this would be difficult for a company that is so fond of bundling. To be as clear: our virus vulnerabilities are connected to profit motive and not to necessity.
Wednesday, February 11, 2015
Fear of AI
I read that Bill Gates, Elon Musk, and Stephen Hawking are worried about AI. I am not sure what the fear is exactly - that robots will take over the world in a realization of the Terminator story?
Why would anybody listen to these three? None has worked in AI, and only credulous people think that there is such a thing. Hate to disappoint you, but AI is minimal - some Google translation, a database query or two, Dragon speech recognition 20 years on, and some optical character recognition. What else? Don't count the facial recognition software you see in police TV shows. Perhaps some stock trading programs are more or less effective bags of tricks; perhaps someone capitalizes on an observed correlation. Mostly it is not real intelligence.
I believe you can always build machines that will kill people, and you can experiment with making those machines autonomous and especially dangerous. But there is no way to judge a machine as more intelligent than a human - who would devise the test? In the end, if it is not human then it is just mining equipment. So I think there is a fallacy in fearing [superior] AI - since there may never be any such thing.
Update: I asked my son George about this, and he said he thought they were concerned about emergent phenomena in a complex system. I only realized later that it is unlikely that what emerges first would be a violent anti-human killing machine (or whatever). Since you are starting off with the AI of mud, I would expect the emergence to have to go through stages, like evolution - a long time frame in which these complex systems would have to be rewarded for increasingly dangerous behavior towards us.
Update #2: The three people - Gates, Musk, Hawking (Gates and Musk, anyway) - are examples of what America now gives far too much credit to: elite job creators. Considering them experts in all subjects follows from a new idea that I hear expressed so frequently lately at the Woods Hole Friday night lecture: that if something makes money then it must be true. I want to call this the "idol of the bank" - which consists of errors arising from confusing financial success with wisdom.
Update #3: I may be wrong about AI beyond human intelligence not being possible, in that "expensive mining equipment" can get pretty fancy. I was reading about a machine that reverse engineered how something like an amoeba optimizes some task. Pretty fancy mining all right.