>>558333
>Is there a single program that can write, on its own and in response to a problem facing it, a functional program it has never been in contact with before? Is there even a single program that can scan another program, then copypaste the parts of the other program that help it to perform a given task?
Hey, I was right, so I'll just reiterate what I already said. What you are asking for here is a general artificial intelligence, which doesn't exist.
>Are you talking about the IBM computer that "learned" chess?
No, Deep Blue isn't anywhere close to modern AI. I was talking about AlphaZero, which was given the rules of chess and mastered it by playing games against itself. It did the same with Go and Shogi.
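Since you keep asking what "learning on its own" even means mechanically, here's a toy of the self-play idea. To be clear, this is NOT how AlphaZero works internally (it pairs Monte Carlo Tree Search with a deep neural network); it's just crude tabular self-play on tic-tac-toe that I wrote to show the shape of it: the program gets nothing but the rules and improves purely by playing itself.

import random
from collections import defaultdict

# Toy self-play learner for tic-tac-toe. Everything here is my own
# made-up sketch, not AlphaZero's actual method.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if ' ' not in board else None

Q = defaultdict(float)  # (board, move) -> value estimate for the mover

def choose(board, moves, eps):
    if random.random() < eps:                       # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])  # exploit

def self_play(episodes=50000, alpha=0.3, eps=0.2):
    for _ in range(episodes):
        board, trace, player = ' ' * 9, [], 'X'
        while winner(board) is None:
            moves = [i for i, c in enumerate(board) if c == ' ']
            m = choose(board, moves, eps)
            trace.append((board, m, player))
            board = board[:m] + player + board[m + 1:]
            player = 'O' if player == 'X' else 'X'
        result = winner(board)
        for state, move, mover in trace:  # nudge every move toward the outcome
            r = 0.0 if result == 'draw' else (1.0 if result == mover else -1.0)
            Q[(state, move)] += alpha * (r - Q[(state, move)])

self_play()

Swap the lookup table for a neural net and the epsilon-greedy picks for tree search and you have the rough silhouette of what DeepMind actually did.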
>Because it didn't learn chess, it knew all of the rules of chess already
I know Deep Blue didn't learn chess, but I don't see how being given the rules is an indictment of any chess AI. Are you saying that for an AI to be considered "intelligent" it needs to be able to extract the rules of a system from a demonstration, or from being told them in plain English, rather than having them hardcoded by a programmer?
If you want AI that can understand human speech, dive into the field of Computational Linguistics, in particular Natural Language Processing.
If you want AI that can learn to perform actions by observing, there's Baxter, made by the company "Rethink Robotics", which is a complete package: robot and AI.
If you want AI that can work out the rules of a system by observation and interaction, look into "Cooperative Inverse Reinforcement Learning".
However, I don't think any of the above is necessary for Big Dog to be useful in the field. I was just bringing up AlphaZero to show that comparing an organic brain to a mechanical one the way you did isn't a useful comparison. AlphaZero is pretty good at board games, but outside of them it's retarded.
>Talks about their benefits, if they existed, which they fucking don't.
Here's an excerpt from page three of the paper:
>One of the first GAs inspired by bacterial evolution was the Microbial GA (Harvey, 1996, 2001, 2011). This is a steady-state, tournament based GA that implements horizontal microbial gene flow rather than the more standard, vertical gene transfer from generation to generation.
>The Pseudo-Bacterial GA (PBGA) (Nawa et al., 1997) and the Bacterial Evolutionary Algorithm (Nawa and Furuhashi, 1998) are two GAs that use a genetic operator which they call the ‘Bacterial Operator’. This operator attempts to mimic gene transduction which is one process by which bacteria can horizontally transmit parts of their genome to other bacteria. The goal of implementing gene transduction in a GA is to try to speed up the spread of high fitness genes through the population.
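To make the horizontal transfer concrete, here's a minimal sketch of a Microbial-GA-style tournament, going off the description above: pick two random individuals, compare fitness, and instead of breeding, the loser gets partially overwritten by ("infected with") the winner's genes, then mutated. The fitness function and every parameter here are placeholders I picked, not values from the paper.

import random

GENE_LEN, POP_SIZE = 20, 30
REC_RATE, MUT_RATE = 0.5, 0.05   # infection and mutation probabilities

def fitness(genome):
    return sum(genome)           # toy objective: maximise the number of 1s

pop = [[random.randint(0, 1) for _ in range(GENE_LEN)] for _ in range(POP_SIZE)]

for _ in range(5000):
    a, b = random.sample(range(POP_SIZE), 2)   # random pair, steady-state
    win, lose = (a, b) if fitness(pop[a]) >= fitness(pop[b]) else (b, a)
    for i in range(GENE_LEN):
        if random.random() < REC_RATE:         # horizontal gene transfer:
            pop[lose][i] = pop[win][i]         # loser is "infected" by the winner
        if random.random() < MUT_RATE:
            pop[lose][i] ^= 1                  # point mutation on the loser

print(max(fitness(g) for g in pop))            # should approach GENE_LEN

The point of doing it sideways like this is that high-fitness genes spread through the living population immediately instead of waiting for a generational turnover, which is exactly the speed-up the Bacterial Operator papers are after.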
>Path prediction, based on understanding the human it's following and where they're likely to go if it ever loses sight of them.
Sorry, I misunderstood.
Predicting where humans will go isn't unheard of. Here's an excerpt from a Stanford AI research paper:
>Humans are much more predictable in their transit patterns than we expect. In the presence of sufficient observations, it has been shown that our mobility is highly predictable even at a city-scale level [1]. The location of a person at any given time can be predicted with an average accuracy of 93% supposing 3 km^2 of uncertainty.
That paper is "Learning to predict human behaviour in crowded scenes" by Alexandre Alahi, Vignesh Ramanathan, Kratarth Goel, Alexandre Robicquet, Amir Abbas Sadeghian, Li Fei-Fei, and Silvio Savarese.
That passage is itself just summarizing the paper [1] refers to, which is "Limits of Predictability in Human Mobility" by Chaoming Song, Zehui Qu, Nicholas Blumm, and Albert-László Barabási.
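If you want the flavour of how the simple end of that works: a first-order Markov chain over observed locations already captures a scary amount of human routine. The toy below is mine, not from either paper (they use much heavier machinery, entropy bounds on mobility data in one, learned models of crowd behaviour in the other), and the check-in data is obviously made up.

from collections import Counter, defaultdict

transitions = defaultdict(Counter)   # place -> counts of the next place seen

def observe(trajectory):
    for here, nxt in zip(trajectory, trajectory[1:]):
        transitions[here][nxt] += 1

def predict(here):
    seen = transitions.get(here)
    return seen.most_common(1)[0][0] if seen else None

for _ in range(5):                    # a boringly regular work week
    observe(['home', 'cafe', 'office', 'gym', 'home'])
observe(['home', 'office', 'home'])   # one irregular day

print(predict('office'))              # -> 'gym', seen 5 of 6 times after the office

Point being, people are regular enough that even this dumb counter gets the next stop right most of the time; the 93% figure above is about how far that regularity goes in principle, given enough observations.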