I was surprised to read an academic study showing that AI does not have human-like shape recognition. The authors use the trope of a teapot with golf-ball texture and come to the long-since obvious conclusion that the software is prioritizing texture over shape. I first heard about this effect when sea-turtle texture was identified as a sea turtle across a wide range of non-sea-turtle shapes.
I construe neural nets as providing a piecewise-linear mapping over coordinate rectangles which, at sufficiently high resolution, will be able to approximate any smooth curve. There is no explicit provision for multi-curve problem spaces. There is no understanding of differentiability. One real problem is that the neural layers are essentially training on small groups of pixels. The larger the group, the longer it takes to "learn," and one approach is to toss on new layers for larger and larger groups. But -GOD- mathematicians knew long ago that shape is a global property, and common sense says there is rarely a guarantee of "smoothness" or of the sorts of convexity assumed in their vague theories of interpolation/extrapolation. Trying to deduce shape from texture is complete nonsense. Trying to use raw pixels is the stupidest thing imaginable, from the point of view of position-invariant shape. What has become clear is that the data->result mapping is often discontinuous.
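As a toy, numpy-only sketch of that last complaint (the shapes, sizes, function names, and numbers here are my own illustration, not from any study): on raw pixels, the exact same shape shifted a few pixels ends up farther from the original than a genuinely different shape drawn in the same place.

import numpy as np

def square(size=32, top=8, left=8, side=10):
    """Binary image containing a filled square."""
    img = np.zeros((size, size))
    img[top:top + side, left:left + side] = 1.0
    return img

def disk(size=32, cy=13, cx=13, r=6):
    """Binary image containing a filled disk."""
    yy, xx = np.mgrid[0:size, 0:size]
    return ((yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2).astype(float)

a = square()                 # a square
b = square(top=14, left=14)  # the exact same square, shifted 6 pixels
c = disk()                   # a different shape in roughly the same spot

def pixel_distance(x, y):
    """Euclidean distance between raw pixel arrays."""
    return np.linalg.norm(x - y)

print("square vs shifted square:", pixel_distance(a, b))  # about 13.0
print("square vs disk:          ", pixel_distance(a, c))  # about 4.8
# In raw-pixel space the shifted copy of the *same* shape is farther from
# the original than a different shape sitting in the same place, which is
# exactly what a pixel-level learner has to fight against.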
So I wonder a couple of things, in a snide sort of way: why don't these guys go learn some math? And what about all those other AI "successes" we are hearing about, like medical diagnostics? How do we know these aren't subject to the same errors - correlating a global outcome with an easy-to-compute local property that is, in fact, coincidental? I see some risk ahead when no one can check whether the emperor's new clothes are real - when the problem domain is harder to check than image recognition.
Went to a WHOI seminar where the man in charge of AI spending at MIT Lincoln Labs gave a superficial summary of AI that touched on its successes and failures. Weirdly, he claimed "98% accurate" image recognition in one part of the talk, then mentioned later how easy it is to trick the system. Which leads to the question: 98% of what?
Anyway, short of smart feature extraction (of global features), there will be no robust pixel-based image recognition.
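To make "global feature" concrete, here is a minimal numpy sketch (again my own toy example, not a recipe from the study or anyone's production system): second-order central image moments are computed from every pixel of the shape at once, and moving the shape around the frame does not change them.

import numpy as np

def rect(size=32, top=8, left=8, h=10, w=10):
    """Binary image containing a filled axis-aligned rectangle."""
    img = np.zeros((size, size))
    img[top:top + h, left:left + w] = 1.0
    return img

def central_moments(img):
    """Second-order central moments: a crude global shape descriptor.
    Every pixel of the shape contributes, and translating the shape
    does not change the result."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    mass = img.sum()
    cy, cx = (yy * img).sum() / mass, (xx * img).sum() / mass  # centroid
    mu20 = ((yy - cy) ** 2 * img).sum() / mass
    mu02 = ((xx - cx) ** 2 * img).sum() / mass
    mu11 = ((yy - cy) * (xx - cx) * img).sum() / mass
    return np.round([mu20, mu02, mu11], 2)

sq       = rect()                           # 10x10 square
moved_sq = rect(top=16, left=14)            # the same square, moved
thin_bar = rect(top=10, left=4, h=4, w=25)  # same pixel count, new shape

print("square:        ", central_moments(sq))        # about [8.25  8.25  0.]
print("shifted square:", central_moments(moved_sq))  # identical to the square
print("thin bar:      ", central_moments(thin_bar))  # about [1.25 52.    0.]

Nothing in those three numbers depends on where the shape sits in the frame, which is exactly the property raw pixels lack.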