It is my belief that the current state of A.I. research and development is flawed.
A.I. research, which is currently dominated by neural networks, seeks to
replicate some part of human function as closely as possible to the real thing.
One popular example is computer vision, where neural networks are trained to recognize
various objects. These are used in self-driving systems to allow the car to recognize
stop signs, pedestrians, etc. Almost all of these neural networks share the same basic
properties: they are given a set of inputs that activate the network, and they
deliver a set of outputs that mark its end. In this way,
modern neural networks are almost exactly like conventional functions in programming
languages.
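To make the analogy concrete, here is a minimal sketch of a trained network's forward pass as an ordinary pure function: fixed weights in, a deterministic mapping from inputs to outputs, and nothing else. The weights and sizes here are made up purely for illustration.

```python
import math

# Hypothetical "trained" parameters for a 2-in, 2-out layer.
W = [[0.5, -0.2], [0.1, 0.8]]
b = [0.0, 0.1]

def forward(x):
    """Inputs activate the network; the outputs mark its end.
    Given the same x, this always returns the same result --
    exactly like a conventional function in a programming language."""
    return [
        1 / (1 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + bias)))
        for row, bias in zip(W, b)
    ]

outputs = forward([1.0, 2.0])
```

Nothing about the network persists between calls; all its "knowledge" is frozen in `W` and `b`, which is the point of the comparison.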
The current theory in A.I. research is that the continual development of these
function-like neural networks will eventually lead to true intelligence. If a computer
gets good enough at recognizing images, it must at some point develop understanding
of those images, right? I posit that this way of thinking is wrong. Functions can be
compared to logic circuits, and in much the same way, the continued development of a
single logic circuit will never lead to a computer. One cannot take a simple
addition circuit and make it add so well that it becomes a computer, or if one somehow
could, it would take an obscene amount of time to accomplish. For A.I. research
to advance to the next stage, we need to stop thinking about neural networks as
single functions and think more about how combinations of them can form a
computer.
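The circuit analogy can be sketched in code. A one-bit full adder built from logic gates computes exactly one fixed function; chaining several of them gives a multi-bit adder, which is still just a bigger fixed function. This is a toy illustration of the argument above, not anyone's proposed architecture.

```python
def full_adder(a, b, cin):
    """A single logic circuit: one fixed mapping from (a, b, carry-in)
    to (sum bit, carry-out), built from XOR, AND, and OR gates."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, bits=4):
    """Composing full adders yields a 4-bit adder -- a more capable
    circuit, but still a single fixed function. No amount of refining
    it produces the state and control flow that make a computer."""
    carry, total = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total

result = ripple_add(5, 6)  # adds two 4-bit numbers
```

The jump from adder to computer comes from adding registers, memory, and a control unit around many such circuits, which is the kind of composition the post is arguing for.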
There are a few improvements that I think could be a start to this change in thought.