Friday, March 25, 2016

Leibniz's indiscernibles and realism

I think Leibniz is looking at it the wrong way round when defining the ways things become discernible. I would rather imagine an innocent baby seeing nothing but diversity around itself, only slowly separating out the constancy of things and, later still, the similarity of things. How this works is what needs to precede discussions of AI.
Suppose, vaguely, that we are considering some phenomenon, that it is given to us by data, and that we propose to process the data "using AI" to recognize or handle it in some way. Now I believe that when humans do that sort of thing, as adults, we operate with an underlying concept of what the data represent. It is all very well to be a realist and to suppose that the underlying concept exists in the data and can be discovered there using generic tools (as if!). But what if you are wrong and the data contain no concept? Then you will not discover one by applying any generic tools.
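One way to make the worry concrete (a minimal sketch, assuming only numpy; the data, parameters, and the choice of k-means as the "generic tool" are my own invention for illustration): a clustering algorithm will happily report structure in uniformly random points, so the mere fact that a generic tool returns an answer is no evidence that a concept was in the data to be discovered.

```python
import numpy as np

rng = np.random.default_rng(0)
# Pure noise: 500 uniformly random 2-D points, with no concept behind them.
data = rng.uniform(0.0, 1.0, size=(500, 2))

def kmeans(points, k, iters=50):
    # Start from k randomly chosen data points as centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points
        # (keeping the old centroid if a cluster happens to be empty).
        centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

labels, _ = kmeans(data, k=3)
# The tool dutifully reports three "clusters" carved out of noise.
print("cluster sizes:", np.bincount(labels, minlength=3))
```

The point of the sketch is not that k-means is broken; it is that the tool cannot tell you whether the partition it found corresponds to anything real, which is exactly the question the realist assumes away.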
