Thursday, March 30, 2017

Airplane in E minor

I am watching a Netflix show and, out of the blue, decide to stop and play the bit of music - a requiem - that I was trying to compose recently. I always try to anticipate the sound of the first note I hit, so I sing it beforehand. In this case, I turned on the keyboard and was about to do that when I realized that the note I was about to sing was actually the engine note of a passing airplane.

But I went ahead and sang it anyway, then played the keys for the required E minor and - what do you know - it was exactly the right note. The keyboard E matched the airplane E matched the voice E. And that must be why, out of the blue, I thought to stop watching TV and start playing music. I don't have any noticeable perfect pitch, but I might have some perfect-pitch-based associations.

Wednesday, March 29, 2017

My great grandfather

A Mitnik:

The innocence of ageism and corporate monoculture

I am past 60, easily flustered, and often make inexplicable mistakes with details. I try to catch the mistakes using new error-checking strategies - but not entirely successfully. I almost always get inequalities backwards the first time I code something. Or I can spend several minutes trying to copy, rename, and move four files using Windows Explorer on a crowded desktop. Something routine like that can be quite error-prone. It is embarrassing.

Today I took an employment interview "quiz" with a time limit. About 15 minutes into the allowed 45 I started feeling fatigued. I continued and soon got stumped on a question involving filling in missing entries of a table. The question required me to read and understand the column headings (which were multiply nested), and I took too long. A few minutes later, with time almost up, I was having trouble taking 5% of 600,000 while wanting to check my work. I assure the reader I know how to do arithmetic, but that is how flustered I get. It is too bad. It is embarrassing.

So here is the ageism: they expect me to learn quickly and understand a piece of data with a complex layout - and they expect me to do it at the same speed as a smart college student. No fair! They also expect me to keep my cool under pressure. I never could, and the impact is far worse today than it was when I was twenty-five. No fair!

I think that is the end of that interview. It was my mistake to put myself in the path of its youthful bias. I understand it because I might not hire me either.

There was a different sort of bias on display with today's quiz, aside from ageism. As a geometer I tend to think visually. As an experienced engineer I often know the answers in general, and I have no trouble being inventive when needed. Those are the things I would brag about. But on today's "quiz" there was no geometry; no testing of knowledge; and never mind about creativity. Instead they tested a kind of algebra and logic reflecting the mental skills of the test designers - skills that are poorly aligned with mine, independently of my age. This is how an employment culture filters out the "different" and becomes a monoculture. My guess is that monoculture is not healthy for the company but, in any case, I do not expect potential employers to solve the problem of my getting old.

Thursday, March 16, 2017

Just sketching an idea

I am beginning to acknowledge that syntax-n-grammar are not completely good-for-nothing subjects. From a slightly more enlightened position, I can see that syntax-n-grammar give the mind an opportunity to see patterns in the words being used - so that a recognized pattern of words can be used to fill in a narrative, almost independently of which narrative is being filled. The same syntax could fill more than one narrative. The point is that syntax functions to predict the next words, in the same way that narratives function to predict the next meaning.
I like the idea that syntax is a means for extrapolating to future words.
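The idea above can be sketched in code: a toy bigram model learns which word tends to follow which, and then "extrapolates to future words" from the pattern alone, with no notion of any particular narrative. The corpus and word choices here are purely illustrative, not anything from Narwhal.

```python
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count, for each word, which words were seen following it."""
    follows = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Extrapolate to the most likely next word, or None if the word is unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Illustrative corpus only.
corpus = [
    "where is the duck",
    "where is the pony",
    "there is the duck",
]
model = train_bigrams(corpus)
print(predict_next(model, "is"))   # the
print(predict_next(model, "the"))  # duck
```

Note that the model predicts "the" after "is" regardless of whether the sentence is about ducks or ponies - the syntactic pattern fills more than one narrative.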

Tuesday, March 14, 2017

Morgan Leslie-Waksman Learning Language

My granddaughter is going through games with her parents; for example, looking at a picture book and asking "where is the duck?" and answering "there is the duck" (while pointing). And then "where is the pony?", etc. Soon Morgan can play the game and answer correctly. Then her parents play a more complicated game: "what sound does the duck make?" "quack, quack". And "what sound does the pony make?" "neigh, neigh".
So the game gets more complicated, and each game can build on top of the last. At first she may be learning the word "duck" and noticing the similar patterns of duck on the page, or in different contexts where the word is used. She also learned the phrase "where is" and learned about pointing. In the next game she learned to connect the word "duck" to a different kind of pattern, for the "quack" sound.
Some things Morgan needs in order to learn these games are: an ability with pointing; an ability to see similarity; and a desire to imitate. In the process she acquires the language of animal names, their sounds, and their appearances, as well as the game words themselves: "where" and "what sound". Possibly she picks up words like "the", "and", and "a" by imitating the overall form of expressions she hears.
Then Morgan begins two-word sentences. For example she says "Two cars" or "Two boats" but does not use the word "three". Her use of "two" may be related to a desire she has for symmetry - when I draw on one of her hands, she wants me to also draw on the other. Two-word sentences like these are descriptions (to this adult observer).
She also says "Bye Pete" and "Bye ..." to describe something going out of view or changing its relation to her (to this adult observer).


This is transformative: a chatbot that, although artificial, helps you talk out the things that are causing you stress.

It didn't work the way I wanted it to, but it is a great idea.

Thursday, March 9, 2017

Driverless cars are a waste of time

Point #1 is that it may be comparatively simple to automate driving down a straight road with no oncoming traffic, pedestrians, or any other sort of confusion. That is one extreme. The other is all the unanticipatable events that an adult human has learned to handle, specifically as well as in general. Sure, you can write programs to do any one of those things, but we are not much closer to creating that sort of artificial intelligence than we were before the advent of the computer. So Point #1 is that driverless cars are much further from reality than the AI clowns can admit.

Point #2 is that cars are a stupid way to move large numbers of people anyway, and trying to automate a dumb solution is an even dumber solution. What works for public transit is buses and trains. So Point #2 is: we should figure out how to merge the best of the train/bus concept and the best of the car concept, with these goals: minimum commute times [I assume this is about commuting, not road trips], minimum adult supervision, and maximum freedom to use the system or not.

Proposal: Suppose cars and highways had built-in "networked" functionality, and suppose you drive up to the lane next to the fast lane. You push a button that tells the networked system to please take over and move you into the fast lane. It does, and for a while your car is part of a train and you can doze, read, or watch TV. When you want to exit the fast lane, you push a button and the system returns you to the non-automated lane, after which you control the vehicle.
In a crowded city, you would request to leave your parking space; once on the city road "grid", everything is managed as a single network application. You ask to park when you get to your destination. [Ignoring the obvious problem that there may be no place to park. But of course the network already knows about that.] Traffic lights are coordinated with the system and optimized as much as possible. Here, "opting out" of the system is more problematic.
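The highway half of the proposal amounts to a small control hand-off protocol, which can be sketched as a state machine. The state and event names below are my own illustration of the idea, not any real vehicle protocol.

```python
# Control states for one car: who is driving right now.
MANUAL, REQUESTED, NETWORKED, EXITING = "manual", "requested", "networked", "exiting"

# Legal transitions: (current state, event) -> next state.
TRANSITIONS = {
    (MANUAL, "request_join"): REQUESTED,       # driver pushes the button
    (REQUESTED, "network_accept"): NETWORKED,  # car merges into the "train"
    (REQUESTED, "network_reject"): MANUAL,     # system declines; driver keeps control
    (NETWORKED, "request_exit"): EXITING,      # driver pushes the button again
    (EXITING, "handback_done"): MANUAL,        # system returns the car to the manual lane
}

def step(state, event):
    """Advance the car's control state; unrecognized events leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

# A full trip through the fast lane and back.
state = MANUAL
for event in ["request_join", "network_accept", "request_exit", "handback_done"]:
    state = step(state, event)
print(state)  # manual
```

The key design point the sketch captures is that the driver only ever requests a hand-off; the network decides when the transfer actually happens, so there is never a moment when nobody is in control.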

Tuesday, March 7, 2017

My semiconductor "event" pattern recognition patent

All I really wanted was to have my patent application US 20040175943 A1 get out there. I guess it did, since I just found this on Google. The chi-squared formula applied by relative area, as in the above, is a key mechanism for measuring event position data against specific reference regions, such as illustrated:
Clearly the collection of fixed regions, with chi-squared calculated for each, gives you a mechanism for embedding patterns of dot scatter into a vector space - one dimension per region. Also you can see that there are families of regions (the rows in the second picture) that differ by a group operation or "symmetry".

Here is part of the point: a program can be carried out with this way of measuring shape that goes further than the one I tried in grad school. Imagine representing a region by a pale gray transparent value inside the region. Imagine the darkness of the gray is a function of the chi-squared of the region versus a fixed scatter of event dots. Now superpose the transparencies of many different regions over the pattern, and I am confident the result is a sketch of the pattern. Thus, in a real sense, a pattern is the weighted sum of all its sub-regions, weighted by chi-squared. If I were a stronger mathematician, I would defend the formula.
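The "chi-squared by relative area" idea above can be sketched in a few lines: under a uniform scatter, a region's expected dot count is proportional to its relative area, and chi-squared scores the deviation from that expectation; one score per region gives the vector-space embedding. This is my own minimal reading of the mechanism, with an illustrative two-region family, not code from the patent.

```python
def chi_squared(points, in_region, window_area, region_area):
    """Score how strongly a dot scatter deviates, in one region, from
    the uniform expectation given by the region's relative area."""
    n = len(points)
    expected = n * (region_area / window_area)
    observed = sum(1 for p in points if in_region(p))
    return (observed - expected) ** 2 / expected

def embed(points, regions, window_area):
    """One chi-squared per region: embed the dot pattern in a vector
    space with one dimension per region."""
    return [chi_squared(points, f, window_area, a) for f, a in regions]

# A two-region family: the left and right halves of the unit window.
left = (lambda p: p[0] < 0.5, 0.5)
right = (lambda p: p[0] >= 0.5, 0.5)

# All four dots fall on the left, so both halves deviate equally from
# the uniform expectation of two dots each.
dots = [(0.1, 0.2), (0.3, 0.7), (0.2, 0.5), (0.4, 0.9)]
print(embed(dots, [left, right], window_area=1.0))  # [2.0, 2.0]
```

With a richer family of regions (translates and rotations of a few shapes, as in the rows of the figure), the same vector would capture the scatter's shape, and weighting each region's gray value by its score gives the superposed "sketch" described above.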

Saturday, March 4, 2017

Mathematics as a Language versus Mathematics about Language

Good luck getting Google to make that distinction! It does eventually provide links to the latter subject, but they are scattered among a large number of links to the former. My question is: why can't we expect a search engine to distinguish between Math as Language and Math about Language? Why must "as" and "about" be ignored by the search engine?

Friday, March 3, 2017

The "I have no f*cking idea" tax deduction for the IRS form

The IRS tax form should allow you to declare anything of value less than $3,000 in a category called "I have no fucking idea how to declare this".

Thursday, March 2, 2017

Narwhal TODO

I haven't worked on Narwhal since releasing v2.0. I need a problem to solve to push into new areas. In general, some new areas include:
  • Proper handling of lists and numbers - so design info can be read systematically
  • The noun-less sort of conversational "echoing" of the chatbot world suggests a concept of "context", with actors and objects. Narwhal might consider doing this, or re-instituting the "total narrative" idea.
  • Stories inside of stories. How is it so simple that a story like "I am making toast" can nest sub-stories like "I am getting sliced bread out of the freezer", "I am taking a slice out of the package", "I am putting the slice in the toaster and starting the toaster"?
  • How to deploy machine learning? This is a tool looking for an application, but I am sure there are possibilities for automatic detection of keywords, and also automatic detection of narratives. Although it is unclear how, following up on a clue from Gamalon, the goodness-of-fit score could appear as a weight in a diagram. Not sure if that is a useful direction for thought.

Wednesday, March 1, 2017

Things rich people say (next)

"The '61 is slightly more tannic than the '59."
Variants from my old friend David Kabat:
 - "You can still smell the oak barrel"
 - "The bouquet is a bit flowery"