It is my belief that the current state of A.I. research and development is flawed.
A.I. research, which is currently dominated by neural networks, seeks to replicate
individual human functions as closely as possible to the real thing. One popular
example is computer vision, where neural networks are trained to recognize various
objects. These are used in self-driving systems to allow the car to recognize stop
signs, pedestrians, and so on. Almost all of these neural networks share the same
basic properties: they are given a set of inputs that activate the network, and
they deliver a set of outputs that mark the end of the computation. In this way,
modern neural networks are almost exactly like conventional functions in programming
languages.
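To make the analogy concrete, here is a minimal sketch (the layer sizes and names are my own, purely illustrative) showing that a trained network really is just a stateless function: fixed inputs go in, fixed outputs come out, and the computation ends.

```python
import numpy as np

# Illustrative weights standing in for a "trained" network.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # output layer

def network(x):
    """A set of inputs in, a set of outputs out -- just like any function."""
    h = np.maximum(0, W1 @ x + b1)  # ReLU hidden layer
    return W2 @ h + b2              # the output marks the end of the computation

y = network(np.array([1.0, 0.5, -0.2]))
print(y.shape)  # a fixed-size output vector
```

Calling `network` twice with the same input gives the same output; nothing persists between calls, which is exactly the function-like property the essay describes.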
The current theory in A.I. research is that the continual development of these
function-like neural networks will eventually lead to true intelligence. If a
computer gets good enough at recognizing images, it must at some point develop an
understanding of those images, right? I posit that this way of thinking is wrong.
Functions can be compared to logic circuits, and in much the same way, the continued
development of a single logic circuit will never lead to a computer. One cannot take
a simple addition circuit and make it add so well that it becomes a computer, or at
least, if one could, it would take an obscene amount of time to accomplish. For A.I.
research to reach the next stage, we need to stop thinking about neural networks as
single functions and think instead about how a combination of them could form a
computer.
There are a few improvements that I think could be a start to this change in thought.
First, a neurological computer must be built on a self-feedback loop: input and
output must be indistinguishable. A brain works constantly, 24 hours a day, 365 days
a year; there is no stopping point to its calculations. It follows that the feedback
mechanism must be coded into the loop and operate at runtime. The system must change
as it runs and evolve as it exists.
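The self-feedback idea above can be sketched as a toy loop (the update rule, sizes, and names here are my own assumptions, not an established design): the output of each step becomes the input of the next, and the weights change while the loop runs, so there is no separate training phase and no final output.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=0.1, size=(n, n))   # recurrent weights
state = rng.normal(size=n)               # input and output are the same vector

def step(state, W, lr=0.01):
    new_state = np.tanh(W @ state)
    # Hebbian-style runtime update (an assumption for illustration):
    # co-active neurons strengthen their connection as the loop runs.
    W = W + lr * np.outer(new_state, state)
    return new_state, W

for _ in range(100):          # in principle the loop never finishes;
    state, W = step(state, W) # 100 steps only for the demo

print(state.shape)
```

Note that `step` returns both the new state and the new weights: the feedback mechanism is part of the loop itself, not a separate offline procedure.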
Second, individual neural networks must move away from replicating external biological
functions and focus instead on mechanisms for learning and evolving. Theoretically, if
a proper neurological computer is built, it should be able to create its own neural
nets for vision, language, even sound. The most important aspect to replicate is
versatile learning. A human can lose a limb, their sight, or their hearing, and the
brain will adapt. Brains are used by all animals, and yet a spider is vastly different
from a human. To create A.G.I., versatility must be the constant; I/O can be learned
later.
Third, two-dimensional layers will no longer suffice as the basis for networks;
networks must move into the third dimension. Neurons may even have to be shared among
multiple networks at the same time. This also means parallel processing may be the key
to this new kind of network, so it might be very helpful to use some sort of functional
programming language in its development.
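One way to picture neuron sharing is a single hidden layer used by two different networks at once. In this sketch (the head names and all sizes are illustrative assumptions), each head is a pure function of the shared activations, which is what makes it safe to evaluate the heads in parallel, echoing the functional-programming point above.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
W_shared = rng.normal(size=(16, 8))   # neurons shared by both networks
W_vision = rng.normal(size=(4, 16))   # hypothetical "vision" head
W_sound  = rng.normal(size=(3, 16))   # hypothetical "sound" head

def shared(x):
    return np.maximum(0, W_shared @ x)  # shared ReLU neurons

def head(W, h):
    return W @ h                        # pure function: no hidden state

x = rng.normal(size=8)
h = shared(x)                           # computed once, used by both heads
with ThreadPoolExecutor() as pool:      # side-effect-free, so parallel is safe
    vision, sound = pool.map(head, [W_vision, W_sound], [h, h])

print(vision.shape, sound.shape)
```

Because the heads share `h` but write nothing, neither evaluation can interfere with the other; that independence is the property a functional language would enforce by construction.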
In conclusion, A.I. R&D needs a paradigm shift. This is not to say that A.I. as it
currently exists is bad; any task that doesn't need intelligence shouldn't use it. But
if we want to achieve true A.I., we must evolve too.