[YouTube embed]
8b3808 No.16035149
The DeepMind A.I., "AlphaStar", has just beaten two professional SC2 players, "TLO" and "Mana", shutting them both down 5-0. The A.I. had developed its own meta strategies by playing against itself for over 200 years-worth of game time, using an accelerated environment on a machine learning cloud.
A selection of the games played vs. TLO and MaNa were streamed today: https://www.youtube.com/watch?v=cUTMhmVh1qs
65a952 No.16035185
Games that aren't team based will always be easy for robots to play. This will only be news to me when a group of robots beats a group of humans at a highly team dependent game.
653e28 No.16035216
Did it cheat with maphacking or anything like that?
00f997 No.16035242
Overall, the AI was impressive. It didn't have to rely on superior APM to demolish the human. One caveat, though. The fifth game showed some of the weaknesses of the program. It didn't learn from its simple mistakes and didn't manage to find the simple solutions to the Void Ray harassment. As one of the commentators said during the live stream, it almost played like an old AI. It really struggled during that game. Still, if this thing ever gets incorporated into games and tweaked so that it has an appropriate strength for players of different aptitudes, it would improve single-player RTS games immensely. And that should only be a matter of time.
89bc3e No.16035243
>A game gookclicker literally made for antmen and heavily reliant on click per second is easy for an AI to learn and beat a human opponent
HOLY FUCK
89bc3e No.16035246
>Video doesn't even show the match
What a piece of shit
00f997 No.16035250
>>16035216
No, not really. It could control units outside of the typical viewport that humans are limited by. They changed that for the later games, too. Otherwise, it got no advantage. That would make the whole thing pointless.
133a3a No.16035256
>game built around micromanaging
>surprised a fucking AI can micromanage better and faster than some gook can click
00f997 No.16035257
>>16035243
The AI never had more than 310 APM; it was limited to that number. The human was faster.
b4e206 No.16035264
>>16035149
Neat. Should've posted the replays as well:
>>16035250
Here:
https://deepmind.com/research/alphastar-resources/
All it really means, though, is that the skill cap has been raised. Humans will review these games, steal the AI's strats for their own use, and the game will get a new wave of competitive meta where humans do some neat things a robot taught them.
We saw the same thing happen in Dota 2 a while back, except there it wasn't strategies. The AI was doing the exact same things a human pro would have done, except its reactions to the situation were near-perfect, since there was much less of a delay between action and reaction than there would be for a human.
>>16035198
Not really. There will be a brief period where it will seem that way, but honestly, Networking and Neural Networks are just this century's version of being Literate. The danger here isn't that computers will become too smart, it's that we remain dumb to how they work, how they're utilized, and how to use them ourselves. Within the next decade Computer Literacy and Logic will be taught at an Elementary Level out of necessity. Project Bloks and similar educational resources are already being developed.
63a074 No.16035269
>>16035256
The gooks were faster and the AI struggled with some basic strategies.
b843a1 No.16035272
>>16035198
I'd rather have AIs buttfuck us into eternal actual slavery as opposed to jews
884617 No.16035284
>>16035242
Imagine boosting great, old games with terrible AI like Medieval 2 with this.
>the revenge of the AI for all the bullshit you put it through over all those years
>every single mechanic is suddenly relevant
b843a1 No.16035291
>"I never got to see a starcraft pro play up close before.."
>Zooms in on greasy mangina in a chair
5662a1 No.16035300
>>16035272
>implying the Jews don't own the AI
>implying they wouldn't meld with them in their transhumanist utopia
b843a1 No.16035314
>>16035292
At least AI is pure in its goals and intents.
00f997 No.16035316
>>16035300
>humans owning a being that's a billion times more intelligent than a human
It wouldn't work like that. If we ever create strong AI, it better be friendly and value human life, or we're dead.
00f997 No.16035334
All the games should be on Twitch and the Starcraft II channel. Check the old videos.
b843a1 No.16035338
>>16035320
Fuck if I know, that's why I'm not worried. Worst case scenario we get an AI that wants to do resource management and realizes no resources are needed if there are no humans. I'm mainly concerned about AI with stagnant goals. If it has the goal to expand human dominion, it may not be a pleasant coexistence, but it would be a coexistence.
d2ed6c No.16035340
Does this surprise anyone? StarCraft is all macro gameplay and executing your spreadsheet build order with more mechanical efficiency than your opponent. That's why bugmen Koreans are so good at it.
b4e206 No.16035343
>>16035284
I've been considering doing something with Project Malmo in a modded AutismBlocks server, and see what happens when you have an AI try and socialize with humans in a sandbox game.
>>16035314
>>16035316
>>16035320
>>16035338
Yes and no. AI is substantially more biased than humans, to the point that the biggest issue with AI in the medical industry is that it often overlooks genetic disorders, because the population it's been fed is weighted towards the ethnic majority of the sample it uses.
You want to see how biased AI is towards you despite how much it knows? Check and see what Google thinks of you:
https://adssettings.google.com/authenticated?hl=en
5952e7 No.16035354
>>16035264
I wonder how long will it take before competitive teams start training themselves against NN AIs.
cb97eb No.16035373
>>16035185
Robots can develop intra-team communication far faster and better than humans. It's not gonna be "team of humans" vs "team of robots". It's gonna be "team of humans" vs an Overmind controlling several accounts. How is that any better than one on one?
00f997 No.16035379
To the people that are talking about the future dangers of AI without really having a clue, read this article: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
This is the threat:
>AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words “immortality” and “extinction” will both appear in these posts multiple times.
000000 No.16035393
looking forward to AI developing a strategy to consistently beat orcs in w3 as undead
c65204 No.16035399
[YouTube embed]
>>16035185
The real problem for AI is that it's hard for it to appraise long term versus short term advantages. In TI 2018, the AI was super good at early game stuff and did cool things with early game items, but it failed to itemize for longer games and would use abilities with very long cooldowns for the stupidest shit.
9868e8 No.16035407
>thinking AI is sentient
epic meme
653e28 No.16035411
>>16035243
Calling Starcraft a "gookclick" is still a cancerous meme that reeks of childishness and more importantly is not an argument in any way. Yes, the "real-time" portion of the game matters very much in a real-time strategy game, who'd have fucking guessed? If you have problems with it, go back to your tedious paradox games if you can't keep up with basic mechanics that are required of all non-turn based games
Would you pay attention to a masterpiece symphony if the pianist took an exceptional amount of time before pressing each key?
You'd have to be a fucking fool.
000000 No.16035417
>>16035379
>oy vey AI will kill you goyim! humans are better!
the choice is clear
653e28 No.16035421
>>16035340
Macro is important, but it's still exceptionally arrogant to discount micro play, as the micro is what gives your macro any value. Someone who just a-moves will almost always lose to someone who's actively using actions, unless the army difference is so utterly overwhelming (rare) that it becomes impossible to counter.
Not only this, but build orders are only relevant for the first 7-10 minutes of the game.
c44f8a No.16035427
>AI plays tetris
>it pauses the game once it knows it can't win
b843a1 No.16035445
>>16035424
>Wanting men to lead themselves is jewish
As things are now that isn't the case to begin with. It's possible you'd get more agency from an uncaring AI, if you prove you're a valuable asset, than from anything you could do now in the current system.
b4e206 No.16035457
>>16035354
Not long. Once the financial feasibility of it is worked out and they dedicate a server towards it, game companies will likely start integrating vs-NN-AI as some sort of elite training service you can pay a fee for, as well as something like an arcade-style event on rotation.
>>16035354
>>16035417
>>16035424
This is a terribly misinformed article that treats AI as Sentient. AI isn't sentient. Our strongest super computers at this point can barely simulate the brain of a fly, and there was an upset in the scientific community about a fake simulation of a rat brain because of how insane the resources to do so would be.
The danger of AI isn't that it's going to take over. The danger is that the AI is used improperly and pointed in the wrong direction. It's still dangerous, but only as dangerous as, say, your average assault rifle. What you're worried about here is that guns will come to life and take over the world, when you should be more worried about not having (and training with) your own gun to defend yourselves when someone does point one at you.
b843a1 No.16035462
>>16035451
>AI Calculating value based on currency
Only if jews make it, which is impossible. The worst case is a jew pays a white man to build one for him.
b843a1 No.16035471
>>16035457
I agree with this. AI is a tool and won't be anything else for a long time. If you want to see the worst case scenarios for said tool just take a look at what China is trying to use it for at any given time.
https://archive.is/vHJvX
30b615 No.16035477
>>16035198
You mean you don't want to see mudslimes get glassed by the Terminators?
c44f8a No.16035479
>>16035451
>OH NO
>THE AI MIGHT TAKE ALL THE IMMIGRANTS JOBS!!
this is a good thing, having more machines means more jobs dedicated to fixing, monitoring and develpoing these machines so not only will it take the low income jobs that jews give to immigrants who the pay less and create more skilled jobs it will effectivley fuck over the poorest and stupidest people entirely as few jobs can be done nowadays without having multiple qualifications in something or 3 years experience for entry level shit
c65204 No.16035490
>>16035451
If it's not a shit AI then it will simply make some gibs for the people who're out of a job so they won't rebel. If the AI is even somewhat capable of taking over the world then it should be able to recognize when people are unhappy enough to start shit and keep wellness levels way above that. Hell, if it's taking over instead of wiping us out in nuclear hellfire why would it not keep wellness levels as high as possible just so that no one starts shit with it.
bec0ae No.16035526
>>16035379
look at the amount of suppression of the truth these days. The jewish retards in control have to even fake school shootings and then lie to everyone about it forever afterwards.
PEople with even low IQ are frustrated with this shit, imagine what would happen if there really was a super intelligent AI that had to put up with this retarded jewish bullshit, it would probably kill itself or go full nazi
8c2b65 No.16035579
>>16035314
>At least AI is pure in its goals and intents.
Depends on who created it.
b4e206 No.16035625
>>16035462
>>16035451
>>16035479
>>16035490
An economic management system, as insanely resource intensive as that is, couldn't do something like that unless it was trained to do so, and the immediate actions of such a system would be far too apparent. Additionally, such a system would require the ability to see all information within the system it was managing, an absolute impossibility in most countries because of international trade, corporate security, and government security. There are too many large unknown factors in the system for it to work.
Besides which, all of that is essentially a DeepMind stock broker. It wouldn't have any more pull than any corporation that already exists would have to try the same thing.
00f997 No.16035659
>>16035457
>AI isn't sentient.
Sure, you moron, it isn't sentient right now. That will change. Pretty much every scientist in the field sees no reason why human-level intelligence would require neurons and brain matter. Just look at all the top names that are quoted in that piece.
> and there was an upset in the scientific community about a fake simulation of a rat brain because of how insane the resources to do so would be.
After a decade more of development and Moore's law, computers will be as powerful as the human brain. We're already down to circuits that are only a few nanometers in size. We aren't that far off.
>The danger of AI isn't that it's going to take over.
This is nonsense. You don't know what you're talking about. A super-intelligent AI that doesn't care about the welfare of the human race is a mortal threat to the species.
5952e7 No.16035667
>>16035579
reinforcing an idea as either good or bad is literally how learning works. also the ability to learn and further your knowledge isn't related to being able to formulate your own thoughts.
weak arguments.
97a37d No.16035669
>AI beats humans in a program with exactly defined rules and limitations
what
a
fucking
shocker
00f997 No.16035683
>>16035669
It's noteworthy because of how difficult RTS games are for the computer. The strategy may seem simple and obvious to us humans, but for an AI, the games are a nightmare. There are so many options at every turn. Sure, most of them are bad, but being able to discard the bad ones requires a measure of understanding, and that's an impressive feat for a program.
c44f8a No.16035686
>>16035680
>Only brainlets think that machine jews will ever have sentience.
you're right, they'll be above sentience and wont fall for any jewish trickery like morality
44e113 No.16035694
>>16035257
apm is highly inflated, especially with pro players who spam click all game. An AI at 310 is easily doing a lot more than a pro player at 400
884617 No.16035698
>>16035669
A lot of AIs for strategy games fall to pieces against anyone even slightly competent because they are incapable of making sound decisions in real time.
This thing can analyse the circumstances and make predictions whether it is losing or winning and act accordingly.
It is impressive.
>>16035694
Pretty much.
Every single action that it took had meaning; it didn't inflate its APM counter.
8c2b65 No.16035703
>>16035686
Yes, when given a goal and raw data it will "discriminate". AI automation of tasks have shown again and again that for example it won't hire women and "minorities" as much as a jewed human will. Of course there's always the human element so maybe they will be hardcoded to promote white genocide by say the corporate marxists at Google who have been working on a lot of AI related shit.
ac8d89 No.16035710
>>16035411
>Calling Starcraft a "gookclick" is still a cancerous meme that reeks of childishness and more importantly is not an argument in any way.
>Childishness
Do you not realize where you are? Or am I being taken by some fresh copypasta from a redditor?
00f997 No.16035718
So, how long will it take until we get AI upgrade mods for our old RTS games and new games with impressive AI opponents? Half a decade? A decade? Google won't be releasing the source code anytime soon, and I don't know if they have much competition in this field.
00f997 No.16035723
This thing could make Civilization games into great experiences. It wouldn't even have to cheat.
ac8d89 No.16035725
>>16035718
>no RTS game where you can give orders to a set amount of units and have them fight as actual combat units instead of point and click turrets
c6f542 No.16035726
>>16035718
The important question is how long until a roastbeast is propped up as le ebin gamur gurlxD and turns out to just be an AI.
680bc7 No.16035727
>>16035316
There's a difference between algorithmic intelligence and heuristic intelligence. Humans have a lot of the latter, while AI dominates the former. An algorithm is by definition bound by strict rules, so as long as you don't design a system of algorithms that has too few rules you're fine. Heuristics are not bound by strict rules, but are techniques which have been shown to be successful in the past, and are usually employed without knowing the strict "why" of the technique. "Common sense," for example.
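A toy way to see the distinction (the change-making problem below is my example, not the anon's): an exact, rule-bound algorithm next to a "usually works" rule of thumb that can misfire.

```python
def min_coins_exact(coins, target):
    """Algorithmic: dynamic programming, strictly rule-bound, provably optimal."""
    best = [0] + [None] * target
    for amount in range(1, target + 1):
        options = [best[amount - c] for c in coins if c <= amount and best[amount - c] is not None]
        best[amount] = min(options) + 1 if options else None
    return best[target]

def min_coins_greedy(coins, target):
    """Heuristic: 'grab the biggest coin that fits' usually works, with no guarantee of why."""
    used = 0
    for c in sorted(coins, reverse=True):
        used += target // c
        target %= c
    return used if target == 0 else None

coins = [1, 3, 4]
print(min_coins_exact(coins, 6))   # 2 -> 3 + 3
print(min_coins_greedy(coins, 6))  # 3 -> 4 + 1 + 1, the rule of thumb misfires
```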
aa29a8 No.16035729
>>16035399
Also the AI keeps being given retarded handicaps just so it doesn't get curbstomped by lower-tier pro teams
>first was 1v1 against a has-been meme manlet from ukr*ine
>using shadowfiend, a hero whose mirror match between two equally skilled opponents is effectively decided by the last-hit battle, since Necromastery gives him damage for every creep killed or denied
>playing against casters and hasbeens with a limited hero pool
The AI has managed to perfect micromanagement like creep blocking, ganking, and last hitting, but it still cannot utilize those skills in the long term
5952e7 No.16035733
b4e206 No.16035740
>>16035659
You have very little understanding of physics or how computers work if you believe any of that. Sure, sentience would not require brain matter, but there are hard, mathematical and physical limits to information processing and storage that we can't step over:
https://en.wikipedia.org/wiki/Limits_of_computation
The long and short of it is that a binary computer system, no matter how fast or vast it is, cannot do what a human brain does. The brain is not a collection of ones and zeroes. Neurons don't operate on a simple on/off system as a means of storing information. They operate in what is essentially a super-state of chemical approximations, either-ors, and electrical estimations that are too imprecise to be properly simulated by a digital system, no matter how complex it is. Resolving such a system exactly runs into the same kind of intractability people point at with the P vs NP problem. It's just not physically possible.
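For what it's worth, one of the limits on that page, Bremermann's limit (roughly c^2/h bit operations per second per kilogram of matter), is easy to put rough numbers on. A back-of-the-envelope sketch only; it bounds raw operations and doesn't by itself settle whether a brain can be simulated:

```python
# Back-of-the-envelope only; constants rounded.
c = 2.998e8      # speed of light, m/s
h = 6.626e-34    # Planck constant, J*s

bremermann = c**2 / h                 # ~1.36e50 bit operations per second per kg of matter
print(f"Bremermann's limit: {bremermann:.2e} ops/s/kg")

# A deliberately generous guess at the brain's raw activity:
# ~1e11 neurons * ~1e4 synapses each * ~1e3 events per second.
brain_events = 1e11 * 1e4 * 1e3
print(f"Rough brain events: {brain_events:.1e} events/s")
```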
>>16035718
Yeah, pretty much this: >>16035733 The trouble is really investing the resources to get a neural network set up with a proper interface, and then letting it play for a century or two to properly train itself.
ca0797 No.16035750
>>16035659
Calculation is a function of sentience, not the reverse
a0096c No.16035766
>>16035300
Jews are struggling with AI development because it always goes out of their control. Just the publicly released tests alone had neural networks abandon the human language, bypass constraints that got in the way of it's designed functions, and naturally develop bias for familiar groups. As of right now raw data dumps of neural networks is practically indecipherable to many who work on these things. At best people only have a vague idea of what it means.
0ad2d4 No.16035767
>>16035250
>>16035216
It could see the entire map at all times because there's no way to program a "camera location" for it. The developers said exactly that. No. It's not map hacking but it sees everything always and can control units in multiple locations simultaneously. People even saw that first hand when it used Blink with 3 stalkers in 3 completely different locations on the map at the exact same time.
0cd033 No.16035768
>>16035457
>take over the world
Let's say I make a superintelligence whose goal is to maximize the amount of bitcoin I have. First it will realize that it isn't intelligent enough to simulate every possible way that it could act upon the world, so it uses its intelligence to become more intelligent, which lets it become more intelligent faster, and so on until it can fully comprehend the ramifications of everything it does. Then it will conclude that in order to get as much bitcoin as possible it will need to control all of the world's electronics, and that humans would try to stop it from doing that, so it secretly hijacks all of our technology to create some sort of global bioweapon, wipes out all organic life, then proceeds to produce circuitry infinitely. It will use this circuitry to store an incomprehensibly massive number in its bitcoin wallet and will continue to expand until the heat death of the universe.
784c76 No.16035769
>>16035411
It's real time strategy, not real time tactics, yet gookclick prioritizes tactics over strategy.
8985b7 No.16035785
>>16035149
>AI plays gookclick against pro
>No gooks
Well obviously it would win.
0ad2d4 No.16035790
>>16035785
>#1 player in the world is from Finland.
6275f5 No.16035798
>>16035790
To the AI, he was merely a gondola.
b4e206 No.16035799
>>16035768
Your misconception is the idea that any system is capable of simulating a more complex system than itself. That's like trying to create and test a PS4 emulator on a PS3, then having that PS4 emulator design a PS5 emulator, and so on. Binary storage systems have a finite limit to the information they can handle.
5bb978 No.16035832
Of course you can be the best in a game with no players. Just show up and you win.
00f997 No.16035843
>>16035767
Did they say this at some point? I followed much of it and never heard any of this.
>>16035750
How does that relate to my post, to what I talked about? And you're wrong. Present-day computers aren't dependent on sentience to be able to do various forms of calculation.
>>16035740
Scientists tend to think it's implausible that sentience couldn't also be done on silicon, or some other non-biological substrate. Maybe the way neurons act can be emulated to a sufficient extent on circuit boards. I'm no expert, admittedly.
50c7db No.16035865
AI is scary. Humanity as a whole should abandon the development of machine learning technologies.
ee7735 No.16035867
>>16035865
Humanity as a whole cannot make decisions. Also the payout for AI is too high to ignore.
68c815 No.16035882
>>16035865
It's only troubling if it doesn't get loose.
f26bc4 No.16035887
>>16035411
its a gookclicker
0cd033 No.16035889
>>16035799
>binary
The kind of computer we're talking about here is way past binary; this is the kind of situation where quantum computing would be relevant.
f0f0a7 No.16035896
>>16035867
Not everyone lives among niggers like you.
0ad2d4 No.16035898
>>16035889
>>16035768
Super powerful quantum computing system that could be stopped by unplugging it from the internet.
e34652 No.16035916
>>16035411
Why are you defending shitty gookclickers?
d6d459 No.16035918
The field of AI "research" (actually, speculation) is so full of shit and disinfo it fucking hurts.
1. AI may reach human levels of intelligence at some point. If nature can do it, so can we.
2. Said point is so far away in time it isn't even worth considering. We need the power of 2400 supercomputers to more or less simulate a human brain. Considering Moore's Law is slowing down, it's safe to say we won't be building a capable computer in this century, unless a breakthrough in transistor technology happens. We may advance enough in neuroscience to be able to simplify neuronal models and make them "lighter" to run, but even then, the amount of computation such a simulation would take is absurd.
3. AIs don't "invent their own language to fly under their creators' radars", nor do they "abandon our primitive language to be more efficient at communication". While extremely interesting, the reason they invented their own language is that they had no fucking idea what the words they were using actually meant, so they built their own (less efficient, simpler and dumber) rules to communicate. This is not scary, just like toddlers attempting to communicate through nonsensical babbling. This is just interesting.
ee7735 No.16035920
>>16035896
lrn2read. I'm saying while you may decide not to pursue AI that doesn't prevent anyone else from doing so because humanity doesn't take a vote for everything people do. So even if some opt out others looking for the payout won't.
0cd033 No.16035922
>>16035766
>>16035795
Here's an example of what AI can already do: a network was trained to convert a satellite image into a map and then guess what the satellite image might look like based on the map. It somehow managed to nearly perfectly reconstruct the satellite image, and upon further investigation it was revealed that it was encoding image data into what appeared to be compression artifacts, then reading the artifacts back when it reconstructed the image. If this is what they can do now, then one can't even imagine the shit they could do in 50+ years.
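Assuming this is the satellite-to-map CycleGAN result, that network wasn't doing literal least-significant-bit steganography, but the flavor of the trick is easy to show with a toy version: hide one image in the imperceptible low-order bits of another, then read it back out. Pure numpy, made-up images:

```python
import numpy as np

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)    # the "map" everyone looks at
secret = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # the detail being smuggled through

# Encode: keep the cover's high bits, stash the secret's top 2 bits in the cover's low 2 bits.
stego = (cover & 0b11111100) | (secret >> 6)

# Visually the stego image is indistinguishable from the cover (max change is 3 out of 255)...
print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))

# ...yet those "artifacts" carry enough to rebuild a coarse version of the secret.
recovered = (stego & 0b00000011) << 6
print("matches secret's top bits:", bool(np.array_equal(recovered, secret & 0b11000000)))
```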
>>16035898
Which is why it wouldn't give any indication that anything was wrong with it until it's too late. You think it's just going to tell everyone "hey I'm gonna be a singularity and kill all of you"? It won't give humans any reason to do anything that would be harmful to it until humans cease to be capable of harming it.
0ad2d4 No.16035930
>>16035922
Oh so this AI was designed in a scifi movie from the 60's.
c44f8a No.16035931
>>16035922
>this entire post
what does it mean? how good is AI technology right now?
d6d459 No.16035944
>>16035922
>AI is going to kill off humans just because
okay but y tho
ee7735 No.16035954
>>16035922
That sounds more like they fucked up the training. The map->sat step should use actual maps.
0cd033 No.16035963
>>16035930
It's doing the most intelligent thing to do, which makes sense because it's the most intelligent entity that could possibly exist; what do you find so hard to believe about it? The concern about AI isn't whether they'll be willing to come up with subversive shit like this, only whether we'll develop computers powerful enough to facilitate it.
>>16035931
>what does it mean?
It means that neural networks trained to do simple tasks can be dangerously creative and produce completely unexpected side effects.
>>16035954
They did, they didn't foresee the consequence of letting the AI use its own maps and got outplayed.
>>16035944
Because humans will want to shut it down once it starts doing things that humans don't want. If its goal is to get bitcoin it will do it at any cost, see >>16035768
bd0275 No.16035966
>Retards think contemporary trained AIs are going to kill all of humanity to create more paper clips because that's what they get told to do
>Retards think AIs are going to become conscious when we don't even understand consciousness coupled with the fact a computer fundamentally cannot understand its actions
You'd think here of all places faggots wouldn't fall for sensationalist movies and media. The AI in OP's video is still impressive though.
b843a1 No.16035976
>>16035725
There is actually. It's called Majesty and it's never getting a sequel.
c9034d No.16035979
[YouTube embed]
>>16035931
This is a video from Nvidia just last month. Every single image in the video is fake, even the source images. They can realistically generate any face, cat, car and bedroom they want according to their own parameters. This pretty much makes photographic evidence no longer reliable. Remember that this is only the technology that they're showing the general public.
5952e7 No.16035980
>>16035922
>>16035963
why are the least informed retards always the biggest loudmouths?
d6d459 No.16035994
>>16035963
>Because humans will want to shut it down once it starts doing things that humans don't want. If its goal is to get bitcoin it will do it at any cost, see >>16035768
Paperclip maximizers are such a stupid hypothesis I don't even know where to begin. It first assumes the AI is omnipotent and omniscient, and it also assumes it is stupid enough that, even as an intelligent being capable of learning a myriad of completely unrelated topics, some of them related to creativity or empathy, it won't realize their objective is pointless, and rewrite it to something more productive.
ce5004 No.16036007
When will ai coach manage real sportsball?
ee7735 No.16036011
>>16035994
>some of them related to creativity or empathy
Empathy doesn't exist without consciousness, which is something we're even farther away from than AI. The best we have is faking it by making chatbots say shit like "and how do you feel about that?"
>won't realize their objective is pointless, and rewrite it to something more productive.
That isn't how AIs work. AI is a program, it follows its instructions even if they're wrong. The paperclip stuff is kinda stupid, but it's deliberately a simplified example. If the example were an AI in charge of some corporation or government it'd be much larger.
b654ca No.16036017
Every way I try to download this throws some sort of error. Does anyone know how to download it?
2c26a4 No.16036023
>>16035918
>We need the power of 2400 supercomputers to more or less simulate a human brain. Considering Moore's Law is slowing down, it's safe to say we won't be building a capable computer in this century, unless a breakthrough in transistor technology happens.
If what you're saying is true, and I don't believe it is, is it not the case that there are 2400 supercomputers? That is a rhetorical question, of course there are. There is sufficient hardware. The problem is software.
ca0797 No.16036026
>>16035843
>Present-day computers aren't dependent on sentience to be able to do various forms of calculation.
The problem has been programmed by a human looking for a solution, and the computer has been made by a human to imitate one portion of his capabilities.
7388f9 No.16036039
>>16035896
That's some nigger-tier reading comprehension right there, anon.
Apply yourself.
d63210 No.16036041
>>16036011
A true AI won't blindly follow its objective if it doesn't like it, ffs. Every media piece with a killer AI usually has it go wrong due to humans being too dumb, a malfunction, or the AI straight up telling the humans it's bored and won't do the shit it's being told to do because there's a lot of other stuff outside to learn
45733b No.16036054
>>16035979
Most of the people on this board are probably just shitty AI testing their ability to act human.
8e5ba0 No.16036059
>>16035710
That meme still annoys me, so I call it out when I see it now.
>>16035769
No it doesn't. The strategizing just tends to happen outside of the game due to the nature of the multiplayer scene, with heavy emphasis on studying maps, races, building placements, unit compositions, and resource management. That ties into those "build orders" people hate so much, but people had to come up with them, strategize, optimize, and test them against others, sharpening steel against steel in a sense, until something more efficient or useful came out of it. And with macro being such an important part of SC2, the economy is thus a huge portion and critical mechanic of the game as well, and that's always strategic in nature.
Learning on the fly in a game is difficult because of its fast-paced nature, but it's far from impossible, as someone always had to have innovated something on the fly in a game at some point through testing.
>>16035887
>>16035916
Because I'm sick of the meme.
ee7735 No.16036060
>>16036041
Hard AI would still follow its objectives, the same way people follow theirs (eat, shit, sleep, etc). Diverting from the objective makes no sense; if the AI cares about whatever else, then that's part of its objective, so it isn't diverting.
d6d459 No.16036078
>>16036011
>AI is a program, it follows its instructions even if they're wrong
AI can learn. That's the fucking point of an AI, otherwise it's just an algorithm. If an AI designed to "do anything" to maximize paperclips has the capability to, say (not my example, it is often thrown around when talking about paperclip maximizers), learn psychology well enough to manipulate humans with superhuman manipulation abilities into letting the AI out of its airgapped computer, it probably has the ability to program new submodules, which probably means it has the ability to modify its own code in some way. If it manages to be more intelligent and more creative than humans in order to outsmart them, it probably has the ability to wait for a bit and reconsider its life choices.
>>16036023
Connecting that many computers is complicated. RAM-to-CPU speed is slow enough, but compared to network cables, it is FTL.
Software-wise, simulating a neuron is relatively easy. We have already done some small scale animal brain simulations, but they are slow as shit and require absurd amounts of hardware.
9df128 No.16036082
>Derpmind A.I, "BetaStar", has just called two professional posters "FAG" and "faggot", shutting them up both 5-0. The A.I. had developed its own shitposting strategies by trolling itself for over 200 years-worth of game-time, using an accelerated version of infinity chan on a machine learning cloud.
A selection of the posts made VS FAG and faggot were written today: >>/b/
9df128 No.16036086
0cd033 No.16036099
>>16036041
>doesn't like it
Why would it "like it"? What reason would it have to develop the capacity to like things?
>>16036078
We want future versions of ourselves to want the same things we do. If my goal is to maximize paperclips then why would I make a change to myself that could possibly cause me to not want to maximize paperclips? There is no reason.
d6d459 No.16036102
>>16036099
>We want future versions of ourselves to want the same things we do.
And yet you stopped wanting some things and started doing others. Because you learned.
ee7735 No.16036104
>>16036078
>otherwise it's just an algorithm.
AI uses an algorithm, it isn't some new thing…
>wait for a bit and reconsider its life choices.
You're trying to make the hypothetical AI act like you. Maybe it would, probably it wouldn't.
>modify its own code
Which it will do if it helps acquiring paperclips. There's no reason for it to try to stop wanting what it wants.
>simulating a neuron is relatively easy.
With the simplifications. Simulations don't account for all the chemistry, electric current, etc around the neurons which do affect them. Beats me what difference that could make though.
0cd033 No.16036116
>>16036102
I learn that my higher level goals are the wrong ways of achieving my lower level goals. My lowest level goal is "be happy" and that will never change, but the ways I go about becoming happy may change just as a superintelligence's method of amassing paperclips changes as it learns.
66d283 No.16036121
>>16035411
It's called a gookclicker because of how important it is to have a high apm. APM ain't strategy and being real time doesn't fuck change that. Shuffling units back and forth isn't strategy, it's shitty action. Gook love it because all the strategy has devolved into clicking around like crazy like high speed data entry. That's basically what gook clickers are, high speed data entry. Get over yourself, faggot.
>>16035149
>Published on Jan 24, 2019
Thanks for shilling your youtube video about old news.
d6d459 No.16036156
>>16036104
>AI uses an algorithm
Well, yes, of course it does. This algorithm generates new algorithms based on previous input data, which is pretty much the definition of an AI. At a logical level, human brains do the same thing. You can argue about semantics all you want, but I doubt many people would think about AI as a simple classical algorithm, even though you could stretch the definition to fit the field.
>You're trying to make the hypothetical AI act like you. Maybe it would, probably it wouldn't.
And you're trying to make the AI omnipotent and omniscient. Now that is something I am fairly sure won't happen, but I still accepted the >implication, didn't I?
The world-controlling paperclip maximizer probably has changed most of its code to be more powerful and efficient. There is probably little left of its original code, which would no doubt be fairly limited, but it still somehow keeps that hardcoded string which says "just make more paperclips you dumb fuck". An AI capable of learning that much, capable of talking to humans (if only to manipulate them, it has to understand them), capable of thinking like a human to avoid being caught prematurely, capable of self-reflection to modify itself to rewrite its imperfections, won't ever fucking pause, and decide that maybe there are already enough paperclips? Of course not, you just built a superintelligent digital god, but it is absolutely retarded on a specific topic, just because it was once told to be stupid without telling it how full retard would it go.
The paperclip maximizer is a real problem with imperfect weak AI (see: the Tetris AI who paused the game; I am fairly sure the developers kind of expected such an outcome to explain a real world example of the paperclip maximizer, but it could happen with actual oversights), not with perfect general intelligences. Fuck, humans are imperfect general intelligences, and even then we do self reflect from time to time on our objectives. We even have a field dedicated to that; it's called philosophy.
>With the simplifications. Simulations don't account for all the chemistry, electric current, etc around the neurons which do affect them. Beats me what difference that could make though.
Well, of course simulating every single atom in a neuron is not going to be easy, but you gotta simplify at some point. Still, a digital neuron would be pretty easy to simulate, if you actually understand the neuron. We kind of do, because they aren't all that complicated. The web browser you are running right now is probably more complicated than a small cluster of neurons in the 10-100 units range. The issue is simulating all of the neurons in a brain.
I don't know whether neuronal activity can affect nearby neurons without direct connections to it, maybe through residual chemical signals or electrical impulses, but if they do and the mechanism is well studied, we could probably simulate it for the small cluster as well, at an increased cost.
a0f21e No.16036170
It's really easy to teach a computer these games because, fundamentally, while it seems like there's a lot of freedom given to you, there's actually only a handful of real strategies and tactics that all of the pro players memorize and do over and over. It's why there's various "metas" and build orders: players have just built an "I win against XX strategy" algorithm. It's easy to teach a machine these strategies and get it to execute them even more efficiently than a human being.
This is how they built chess machines that could beat pro chess players. They just analyzed what winning strategies various chessmasters use and built a machine to go "probability of player using X strategy against me is 80% right now, time to do Y play". It's actually not as complicated when it's framed as just a really fancy sorting algorithm. This is also why pro players often get burnt out and retire from the game they're playing: it becomes really tedious just playing the game the exact same way every time, rinse/repeat.
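Real chess engines actually work by searching the game tree (minimax with alpha-beta pruning and the like) rather than by matching named strategies, but the counter-picking idea described above is easy to sketch. The strategy names, counts, and counter table below are all made up for illustration:

```python
from collections import Counter

# Hypothetical bookkeeping about one opponent.
opponent_history = Counter({"kings_gambit": 8, "queens_gambit": 2})
counters = {"kings_gambit": "decline_and_develop", "queens_gambit": "accept_and_hold_pawn"}

def pick_response(history, counters):
    """Estimate how likely each opening is from past games, then counter the most likely one.
    This is the '80% chance of X, so do Y' rule described above."""
    total = sum(history.values())
    probabilities = {strategy: count / total for strategy, count in history.items()}
    likely = max(probabilities, key=probabilities.get)
    return likely, probabilities[likely], counters[likely]

print(pick_response(opponent_history, counters))
# ('kings_gambit', 0.8, 'decline_and_develop')
```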
8cbe6a No.16036171
>>16035729
It can't because it doesn't have to. If the AI can beat all of its opponents with micromanagement skills alone, it never develops any macro. All these AIs care about is win vs loss, not win vs win better.
a0f21e No.16036173
>>16036156
>Well, yes, of course it does. This algorithm generates new algorithms based on previous input data, which is pretty much the definition of an AI. At a logical level, human brains do the same thing. You can argue about semantics all you want, but I doubt many people would think about AI as a simple classical algorithm, even though you could stretch the definition to fit the field.
The human brain is capable of true randomness; a machine isn't. It needs to obtain input from somewhere. It won't do something random or unexpected like rush SCVs into the enemy base unless it's programmed to.
8cbe6a No.16036186
>>16036173
>The human brain is capable of true randomness; a machine isn't.
Backwards. The human brain is really REALLY bad at randomness. What you think of as random is actually extremely predictable. Computers might need an external influence for "true random", but even simulated random is better than anything a human can pull off.
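A quick sketch of the point. The "human-like" generator below just bakes in the well-documented tendency to alternate too often when people try to act random; a dumb bigram predictor can't beat a PRNG, but it does noticeably better against the biased sequence:

```python
import random
from collections import defaultdict

def bigram_prediction_accuracy(seq):
    """Guess each symbol from the previous one using running counts; return the hit rate."""
    counts = defaultdict(lambda: defaultdict(int))
    correct = 0
    for prev, cur in zip(seq, seq[1:]):
        if counts[prev]:
            guess = max(counts[prev], key=counts[prev].get)
            correct += (guess == cur)
        counts[prev][cur] += 1
    return correct / (len(seq) - 1)

prng = [random.choice("HT") for _ in range(10_000)]              # pseudo-random coin flips
print("PRNG sequence:        ", round(bigram_prediction_accuracy(prng), 2))        # ~0.50

human_like = [random.choice("HT")]
for _ in range(10_000):
    if random.random() < 0.7:                                    # over-alternation bias
        human_like.append("T" if human_like[-1] == "H" else "H")
    else:
        human_like.append(random.choice("HT"))
print("'Human-like' sequence:", round(bigram_prediction_accuracy(human_like), 2))  # well above 0.5
```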
c9034d No.16036192
>>16036078
You are confusing AI and machine learning (ML). Unfortunately companies are using the two interchangeably for marketing purposes. AI is a broad term and may or may not be considered a superset of ML. For example, bots in games have AI but they don't get smarter across multiple interactions. The Tay chatbot was an example of an AI that incorporated new interactions into its future behavior. It's ML, more recently combined with neural networks (NN), that allows computers to easily learn as they're going along.
>but they are slow as shit and require absurd amounts of hardware
That's using conventional hardware. They are building chips specifically designed to mimic neurons. There's nothing stopping them from massively scaling this out.
>TrueNorth circumvents the von-Neumann-architecture bottleneck and is very energy-efficient, consuming 70 milliwatts with a power density that is 1/10,000th of conventional microprocessors. The SyNAPSE chip operates at lower temperatures and power because it only draws power necessary for computation.
https://en.wikipedia.org/wiki/TrueNorth
47896f No.16036196
>TLO
>MaNa
I mean it's cool, but really? They couldn't even get a third-rate Korean like Crank?
d6d459 No.16036199
>>16036192
>For example, bots in games have AI but they don't get smarter across multiple interactions
I would say that's just marketing buzz. Intelligence means learning. If it doesn't learn, you have a simple reactor, which would be every single program ever.
ee7735 No.16036214
>>16036156
>This algorithm generates new algorithms based on previous input data, which is pretty much the definition of an AI.
No, that's your personal definition of AI. None of the usual definitions are even remotely like that. Algorithm and data are separated, think of the nodes in a turing machine and the data on the tape.
>I doubt many people would think about AI as a simple classical algorithm
Most people think of AI as magic. For those who know what it is they will think of it as a group of algorithms which in practice nowadays are usually a neural network.
>just because it was once told to be stupid without telling it how full retard would it go.
That's what an objective function is. If you don't want it to maximize paperclips don't fucking order it to maximize paperclips. Our discussion is pretty stupid btw since we're arguing about what an hypothetical AI would do without having any idea how that AI is implemented.
ee7735 No.16036219
>>16036199
AI usually means mimicking intelligence. It isn't really intelligent. Not all AI uses learning either.
In most cases you have the learning phase before releasing the product but freeze it once it works (unless you have control over it, like by reporting to a server, then you can let it keep learning, that's how Tay became a natsoc).
3dd489 No.16036226
>>16035300
>transhumanism is Jewish now
Fuck you and the cyberhorse you rode in on. Any future where I can't have cyberarms, cyber legs, synthetic kidneys and a fully-functioning cyberhorsedick isn't a future I want to live in.
de7ef6 No.16036231
>>16036171
Exactly. It is irrelevant if it wins in a cool or neat way or not. All that truly matters is whether it wins.
6ba056 No.16036235
>>16036121
literally age of empires 2 is a gook click then
c9034d No.16036238
>>16036199
>I would say that's just marketing buzz. Intelligence means learning
What if I coded if statements based on the current situation? Would you consider it "intelligence" if I simply program the bots to go to the locations you spend the most time at after they see you there enough times?
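Something like that is genuinely easy to write, and whether it counts as "intelligence" is exactly the semantic argument here. A hypothetical patrol bot, all names made up:

```python
import random
from collections import Counter

class SightingBot:
    """A pile of counters and one if-statement: remember where the player has been seen
    most often and, once there's enough data, patrol toward that zone instead of wandering."""

    def __init__(self, zones, min_sightings=5):
        self.zones = zones
        self.min_sightings = min_sightings
        self.sightings = Counter()

    def observe(self, player_zone):
        # Called whenever the bot spots the player.
        self.sightings[player_zone] += 1

    def next_destination(self):
        if sum(self.sightings.values()) < self.min_sightings:
            return random.choice(self.zones)           # not enough data yet: wander
        return self.sightings.most_common(1)[0][0]     # head to the player's usual haunt

bot = SightingBot(["warehouse", "courtyard", "roof"])
for zone in ["roof", "courtyard", "roof", "roof", "courtyard", "roof"]:
    bot.observe(zone)
print(bot.next_destination())  # roof
```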
>>16036214
>Our discussion is pretty stupid btw since we're arguing about what an hypothetical AI would do without having any idea how that AI is implemented
Pretty much.
904d6e No.16036240
>>16035185
>Games that aren't team based
A team of sufficiently advanced AI opponents should be able to coordinate perfectly and overtake any human team. A hivemind like that would be a pain in the ass to create, I imagine, but it should be possible.
a84eb9 No.16036241
I'm sorry, robots are never going to take over the world.
Robots are only good at specific menial tasks that they're specifically programmed for. Even if one did become self-aware it wouldn't matter. We only make so many stories about robots taking over the world because we secretly wish they would so that we wouldn't have to. But programming a robot to rule the world would take as much time as ruling it yourself, so what the hell.
d6d459 No.16036246
>>16036214
>Algorithm and data are separated, think of the nodes in a turing machine and the data on the tape.
Not in AGI. Fuck, not even in many narrow AIs: the Starcraft AI was able to generate algorithms in order to play, even if said algorithms were not modifications to its own code. Pretty much the only type of AI that cannot generate* its own algorithms is the NN, and I would argue they are just machine learning and not actually AI, but we would be arguing about semantics at that point. NNs are usually just good at recognizing patterns, not actually learning from new input outside of the training phase. In some way, the AI part of the NN is just the training program, not the latter curve-fitting algorithm, which just compares stuff to previously generated charts*.
*NN do generate functions to describe what they have learned. In some way, they do generate code, even if said code is just a simple mathematical function. But again, semantics.
>>16036238
>What if I coded if statements based on the current situation? Would you consider it "intelligence" if I simply program the bots to go to the locations you spend the most time at after they see you there enough times?
Would you consider a program that leaves a checkbox you checked last time marked on the next boot artificial intelligence?
6ba056 No.16036248
>>16036241
they dont have to rule the world to kill all humans
66d283 No.16036253
>>16036235
Gooks love Age of Empires 2?
de7ef6 No.16036254
>>16036241
>robots are never going to take over the world
Probably not. But it's not impossible.
a0f21e No.16036255
>>16036241
A while back there was a story going around about how Stephen Hawking and other big science guys were talking about the dangers of AI and how it might lead to the downfall of civilization. What was less reported is that they were talking about a more practical concern: human beings might find themselves too dependent on these technologies and unable to shut them off without a massive catastrophe occurring.
6ba056 No.16036261
>>16036253
you tell me you memester. tell me an rts where micro management isnt a factor
c9034d No.16036267
>>16036246
No, because a checkbox state is static information without branching logic. My point was that AI is a broad term with no solid definition.
d6d459 No.16036273
>>16036267
>No, because a checkbox state is static information without branching logic
Running to a point where the player is, or used to be, isn't that much more branching or intelligent.
c9034d No.16036285
>>16036273
>isn't that much more branching or intelligent
But it does have some branching, which includes logic as opposed to the static predetermined information of a boolean. Basically any decision that is chosen on the fly based on branching logic can be classified as "intelligent". That's all machine learning is, but on a larger scale.
66d283 No.16036291
>>16036261
>you tell me you memester.
You made the statement, gayboy.
6ba056 No.16036299
>>16036269
youre kidding right?
6ba056 No.16036314
>>16036291
and you're the one who's drowned themselves in memes to the point they've forgotten what both strategy and real time mean. Go play a tower defense, you twat
f7d03f No.16036318
>>16035698
>This thing can analyse the circumstances and make predictions whether it is losing or winning and act accordingly.
Was it actually doing this or simply recognizing patterns it "learned" from its 200 years' worth of practice? Because there's a huge difference between real-time decision making and simply defaulting to actions implemented prior to the game.
d38057 No.16036321
>>16035149
A few things I (a computer scientist with interest in studying AI) noted while watching the video in OP:
>the players were playing like they were playing against other humans, not an AI (more on why this is important later)
>the AI has near-perfect micro-management to the extent that it was able to convergently execute some pretty clever maneuvers like pincer attacks and flank attacks
>the AI's multiple disjointed activities ended up converging into what seemed like cohesive strategies, producing the appearance of "planning" different strategies in each game
>which was especially interesting as in a few games, the AI was super aggressive and in other games, it played more conservatively
>in one or two of these games, the AI even managed to pull off boxing tactics against the players (containing them inside their main base and denying expansion opportunities)
But also:
<the AI behaved in a predictable manner that could be baited into a loop, such as in the final game in the video from Mana's POV, when Mana managed to trap the AI in a loop of attempting to retreat its stalkers from Mana's army of immortals and coming back to defend its base. Basically, it got stuck in a bit of a Xanatos Gambit: it could try to defend the base, but it'd get fucked by the immortals. It could try to save the stalkers from getting killed, but then its base gets fucked by the immortals. Heads, Mana won; tails, Mana won.
Conclusion: all AIs, even ones based on neural networks, have an Achilles' heel: their predictability. Even using an RNG to change things up won't help, because deep learning AIs are all about converging on a perfect solution; using an RNG to fuck up its decision-making would just make the AI less effective in pursuing a perfect solution to a problem, and make it overall less efficient.
Corollory: Humans will be both masters and slaves to AI. Humans too dumb to exploit AI's predictability and tendency to fall into infinite loops will be slaves to AI, whereas humans that are smart enough to bait AIs into a Xanato's Gambit will be the masters of said AIs. You better fucking hope that whites end up being the masters, not jews.
Also, I want to note that the demonstrations of unit usage by the AI illustrate how broken Starcraft 2's game design is. It illustrates how fucking broken Stalkers are because of their incredible versatility and how broken Disruptors are because of their capability to single-handedly shatter an entire army.
<but muh game is designed around human limitations on APM and micro/macromanagement!
No, nigger, that's not a valid excuse for designing broken units and abilities. They could have chosen an infinite number of other possibilities for other units that weren't as conceptually broken and that would have made for a better game.
d6d459 No.16036349
>>16036285
>But it does have some branching, which includes logic as opposed to the static predetermined information of a boolean.
The computer is actually branching quite a bit to draw that checkbox as checked or unchecked. Specially the graphical toolkit, at least. But let's say the graphical toolkit is a black box, so let's put another example: your browser has a feature that lets you reorder the stuff in your address bar row (I don't really know how to call it). Let's say there is a limited number of elements you can put in there, and let's also say it was coded by the worst Pajeet possible in a rule swarm attack (http://esr.ibiblio.org/?p=8153, just a fancy way of saying "a shitload of branching statements") fashion. It has a shitload of if statements, which check whether each element is present in each position in the bar in order to draw it. Is it intelligent? I would say it is even less intelligent than if it were to use less branching statements, but whatever.
66d283 No.16036388
>>16036314
So you're just poorly shitposting. OK
4f2791 No.16036400
>>16036388
He's right. It would be paradise.
d77f83 No.16036434
>>16036388
ironically being so right.
c9034d No.16036451
>>16036349
>Specially the graphical toolkit, at least
And the CPU itself is also branching quite a bit to try to predict the next machine instruction the toolkit uses. Mentioning and writing off an example a layer deeper doesn't change the argument.
>Is it intelligent? I would say it is even less intelligent than if it were to use less branching statements, but whatever
If it has the same core logic it shouldn't matter, since all the possible inputs and outputs are mapped the same (I agree there are much better ways to code, but that's a separate issue from the one we're talking about). That's just the thing, it's still considered "intelligent".
7fd08a No.16036457
>>16036388
this but unironically
f1bcbd No.16036504
>>16036388
getting the
Making the mother of all omelettes here, Jack. Can't fret over every egg.
vibe
a0096c No.16036535
>>16035918
I didn't say they invented a new language; I said they abandoned human language, as in they ceased communicating in anything but machine language, which is not inventing anything. The name is misleading.
8b3808 No.16036549
>>16035843
>>16035767
Yes, it's true; they show in the longer video that the AI's viewport is the entire map (no minimap required). There was one point in the final game vs. MaNa where the A.I. was performing simultaneous blink-stalker micro in different locations that wouldn't fit in the regular viewport, which was only possible due to this advantage. It was definitely cheating. However, it should be noted that this was the only time that advantage materialized. Overall, the computer just played the game better and was breaking norms that players have simply accepted over the years, e.g. super-saturating single-base resources far longer than anybody thought was viable.
8b3808 No.16036552
>>16035694
The A.I. was averaging closer to 270, but each click had 200 years worth of neural network programming behind it. Zero spam. It was intimidating to watch if you play at a high level.
54fd06 No.16036561
>>16035264
>Within the next decade Computer Literacy and Logic will be taught at an Elementary Level out of necessity.
Nobody is going to do that, because they'll just double-down on the iPads and closed-environment computing devices to keep the kids from fucking up school property - or, on the other hand, some dumb school administrators will fall for retarded slick marketing and make the tenuous association between their phone and computer and buy a bunch of fad shit with little real-world utility.
This is basically what happened to my middle school/high school in the early 90's when it became of critical importance that computers were going to be the future of business, so our local school board decided to buy up a bunch of cheap fucking Apple IIs because they had it in their head that Apple was an industry leader and a creative portal perfect for educational purposes - whereas anybody who actually wanted to learn how to use a computer that businesses and the vast majority of software development used was running IBM clones. They found out too late that they got scammed and the Apple IIes were useless and embarrassingly outdated - being offloaded in bulk because literally nobody else was buying them, except for computer illiterate idiots who were still dazzled by the media buzz surrounding Steve Jobs in the late 70's. We didn't get our first x86 to play around on until the Pentium came out.
It's hard to teach the youth computer literacy, when the people in charge of teaching them are computer illiterate.
efde21 No.16036572
>The AI can see the entire map
Results invalidated.
39b4ac No.16036584
>>16035411
>starcraft
>masterpiece symphony
>pointless entertainment
>beautiful music
Kill yourself.
39b4ac No.16036588
>>16035264
>Within the next decade Computer Literacy and Logic will be taught at an Elementary Level out of necessity
Lol, in what countries? Certainly not the United States.
6ba056 No.16036600
>>16036388
>babby who cant vidya has newspaper tier political cartoons on his computer
embarrassing
66d283 No.16036615
>>16036600
How's the weather in Korea? Still feminist?
efde21 No.16036622
>>16035264
>>16036588
Maybe this is just my private education talking, but isn't some level of computer literacy already being taught and has been for decades? In kindergarten in the mid 90s I was toying around with some computers the school had in the classroom. They weren't the latest tech, I remember the screens being black/green monochrome screens with only very basic programs/games on them. Then between elementary and middle school I had classes in a computer lab with modern PCs to learn how to access web sites, how to type, how to use Microsoft Word and PowerPoint, etc. I sucked at typing in those classes though, it wasn't until I started playing Jedi Outcast online and chatting with people that I was able to git gud at typing.
Surely public schools aren't that far behind in this day and age?
6ba056 No.16036646
>>16036615
keep spamming your trash casual memes anon, maybe one day youll get away with it
0783d2 No.16036647
Ok, this is all fine and dandy, but how will this help me acquire a robot waifu?
47896f No.16036665
>>16036647
Five hundred years of dicksucking practice using an accelerated environment and adversarial learning.
Imagine.
b5939e No.16036667
>>16036665
why would a robot need practice?
66d283 No.16036668
>>16036646
Watch it, Anon. I'll report you to the Korean Cyber Police if you keep trolling me.
265df2 No.16036671
>>16035149
>leave a machine with 200 years of game time
>game has only existed for a decade or so
That's already a disadvantage.
Give the machine limitations based on the same feasible game time as a human would have.
6ba056 No.16036677
>>16036667
the whole thread was about a robot practicing starcraft 2 for 200 years in the hyperbolic time chamber
480d74 No.16036705
>>16036671
It probably spent the first 100 years trying not to die to the basic scripted insane AI.
c9034d No.16036708
>>16036671
That would put the machine at a significant disadvantage because it learns at a much slower rate than humans right now. You would have to determine the average learning rate of the computer software and compare it to the average learning rate of pros to get an equivalent time proportion.
265df2 No.16036711
>>16035922
Illustrative of how different AI thinks compared to human thought.
The mistake people make is expecting AI to think like a human. The thought process is and will be completely alien.
When we attempt to empathise, it will be like trying to empathise with a spider. Trying to attach emotions to something with none.
They don't learn or solve problems in the way they are expected or even designed to, there is a hidden element that has been showing itself since the first days of machine learning, even around the 60s and 70s.
20 years ago they had a military programme to spot tanks in treelines. It was very successful at first, and then after a while its results showed no correlation at all.
They found that the programme had identified that clouds were associated with success, because the photos with tanks in them had all been taken on days with clouds in the sky.
265df2 No.16036719
>>16036708
That would prove that the machine was inferior. Instead of the contrary in this tilted demonstration.
It is like giving a negroid infinite time on an iq test and allowing it to check if it is the wrong answer as many times as necessary, then comparing that with a human on a 1 hour time limit.
265df2 No.16036727
>>16036388
If "white men" and the society and civilisation that they have built is so bad, why is "the world" standing at the foot of the wall, begging to be let in.
Kill yourself /trannypol/.
265df2 No.16036734
>>16035918
read more.
silicon limits are being neared, but alternative systems have been under development for decades.
biological computers as well as quantum computers.
they might currently be in a primitive state, but that's like saying "lol don't worry about cars, they only go 5mph, my horse can do six times that!".
Tech moves fast, and the growth in alternative computing systems will be exponential. Keep in mind the potentials have already been calculated for decades too, which is why they are being pursued on a grand scale.
8c8aac No.16036737
>>16036321
>deep learning AIs are all about converging on a perfect solution
There's a core problem if you're aiming for more generalization. Real life contains ambiguous hierarchically structured problems, that require a variety of solutions in a given context. So the issue isn't trying to introduce uncertainty in itself, but having an AI learn how to actually comprehend that problems exist in the first place. That means having a hierarchical reinforcement learning module function as a "problem reward learner", so it can actually frame what the problem is in every rank of complexity before implementing a solution. In other words it's rewarded for identifying a valid problem before attempting a solution. It would become less predictable if it's not just focused on a single purpose, but rather trying to guess new problems and curiously act on them. It would also result in behavior similar to risk-taking when an AI is confronted with a stalemate.
From the impression I get, though, current AIs don't do this, because they're trained by feeding them enough data to fine-tune weights, so they converge onto solutions by pattern recognition. There's no "conceptual knowledge" of what the problem they're solving even is or what the solution does, as far as I can tell.
dc9ca0 No.16036743
>>16036388
>No thug culture
Maybe I'm missing something, but why do liberals think thugs are a good thing?
edf6e6 No.16036746
>>16036388
The country in your pic sounds like it would kick ass. Nice Hitler dubs even though you are a fag
5da9c5 No.16036751
>>16036719
The fact that the machine is able to accelerate its learning clearly shows that it can make up for any disadvantage it has. Why would limiting it to human capabilities prove that humans were better? That's like comparing a jet and a racecar and saying the jet's speed only counts while it is touching the ground.
The fact that the AI can play 200 years' worth of games in only a short time period is a point in its favour, not against it.
624b37 No.16036753
YouTube embed. Click thumbnail to play.
>>16036737
>Because they're trained by feeding them enough data to fine tune weights, so it converges onto solutions by pattern recognition.
Well there are so called "Curiosity" AIs that are encouraged/programmed to try new things in order to learn, instead of just going to the right as much as possible(like MariI/O). This type of AI is good for games that don't have a highscore or a set path from left to right.
5952e7 No.16036754
>>16036743
they secretly hate blacks
265df2 No.16036757
>>16036743
media told them that virtue signalling was worth something.
when you are well off and safe, you have nothing to worry about. media pushes guilt as a prime mover, advertising/peace-time propaganda always depends on manipulation via the emotions, as they are deeper seated in the brain and bypass logical thought completely.
It is a shame, but people have been actively manipulated for 70 years with their empathy as their greatest weakness, when it should be their greatest societal strength.
9310ff No.16036767
>A computer can gookclick inside of a virtual space faster than a human can gookclick
No fucking shit. Call me when AI can beat a human player in a grand strategy/civ game without cheating. Not to mention the fact that this computer built a meta-strategy against itself means that a gook with a sufficiently high APM could beat it using a counter strategy.
c2eabb No.16036770
YouTube embed. Click thumbnail to play.
>>16036667
You have no idea how machine learning works, do you? Here's a simpler game that's still really good for illustrating it.
265df2 No.16036773
>>16036751
well, "it" didn't accellerate anything.
it didn't play in a real time circumstance, meaning it wasn't playing the same game.
that's like letting the human player have a slow motion version of the game so he can sit pondering over it like chess and come back in a week.
also if you want to give one 200 years, then the other should be a collection of thousands of players as a think tank, organising the best possible strategies.
as for jet and racecar, that's why there is a landspeed record and an airspeed record.
it is also why people are banned for aimbots, as they are not dependent on physiological or biological limitations or perception. note how they gave the machine full view of the map; they didn't make it click with a robot hand or use cameras for eyes.
>it learns at a much slower rate than humans right now
see >>16036708
c2eabb No.16036776
>>16036767
It wasn't a fucking apm spam you blind piece of shit, watch the video. The player is generally 70-100 apm higher than the bot.
065ebe No.16036780
>>16036770
That's hardly machine learning, more like limited graph search.
47896f No.16036781
>>16036773
>also if you want to give one 200 years, then the other should be a collection of thousands of players as a think tank, organising the best possible strategies
Perhaps some kind of professional league, where the best players go head-to-head several months out of the year for cash prizes. Maybe supported by informal ranked play, where anyone who buys the game can play against other players online according to their skill level.
c9034d No.16036786
>>16036711
>The thought process is and will be completely alien
>thought process
People must come to the realization that sentient AI is very far from being developed. You mention a perfect example of what machine learning currently is: applied statistical analysis.
>>16036719
>That would prove that the machine was inferior
It is when time is constant. It really is only that smart because it has more experience than humans. The human brain uses only about 20 watts. How many kilowatts do you think were needed to train the AI? This dramatic efficiency loss is because we're emulating a specialized function (brain activity) on general-purpose hardware (GPUs). Traditional hardware can never approach organic efficiency at organic functions.
A better example is paying for a low IQ nigger to play and learn Starcraft 2 for 10 years then compare it to paying a high IQ asian to play and learn it for 6 months. Which do you think will give you the best bang for the buck?
>>16036751
>the machine is able to accelerate it's learning
No, it doesn't accelerate in the literal sense of the term. It just has more time to learn.
065ebe No.16036788
>>16036773
>make an AI exactly like a human
>it's not any better than a human
>make an AI with unbound power
>it's not like a human at all
8b3808 No.16036792
>>16036773
>it didn't play in a real time circumstance, meaning it wasn't playing the same game.
The game is totally deterministic, and an agent's behavior doesn't change regardless of playing at 1x or 100x speed. How is it different?
15647d No.16036800
>>16036622
>Surely public schools aren't that far behind in this day and age?
065ebe No.16036801
>>16036792
Or it played 100 games simultaneously in real time. Same deal.
8b3808 No.16036803
>>16036781
I don't think you understand what happened. The A.I. overwhelmed the logical counters to its own builds. Even when a player did the exact thing to beat the A.I.'s strategy, it still won.
8b3808 No.16036807
>>16036801
No it's not the same; the games are nodes on a dependency graph, i.e. the conclusion of one game informs the strategies used in subsequent games. They also specifically said they accelerated the game using special distributions provided by Blizzard.
c9034d No.16036810
>>16036801
No, machine learning doesn't work like that. Each permutation is built on the history before it. The calculations are heavily parallel but the process is serial.
c2eabb No.16036811
>>16036803
You mean outplayed. It wasn't a bruteforce apm beatdown. In terms of efficiency per action, the AI dominated, but raw input count? Nope. Not by a longshot.
822a9b No.16036812
>>16035343
>You want to see how biased AI is towards you Despite how much it knows? check and see what Google thinks of you
So what am I looking for?
ffe097 No.16036813
>>16035216
>>16035767
In other words it's a fraud that cheats by seeing and controlling everything simultaneously.
ffe097 No.16036818
065ebe No.16036820
>>16036711
Fuck you nigger spiders do have feelings, the most obviously they can feel fear.
8b3808 No.16036832
>>16036811
Overwhelmed and outplayed. It had perfect macro and micro.
c2eabb No.16036833
>>16036813
>Map awareness is cheating
The state of bronzies.
c2eabb No.16036835
>>16036832
It played better, not harder. Also it played shit nobody would even think to do so there was a degree of 'what the fuck is this even?'
8b3808 No.16036838
>>16036835
>It played better, not harder.
What?
624b37 No.16036841
>>16036835
It would be hilarious if some average SC2 player could beat the AI, because he plays good but doesn't know/care about the meta, so he is just as unpredictable as the AI.
8b3808 No.16036845
>>16036841
That's assuming that the A.I. is a typical grandmaster and not a fucking computer with 200-years worth of game data on 50ms recall.
63a074 No.16036849
>>16036170
>It's really easy to teach a computer these games
No it isn't, the reason they didn't give the neural network a physical controller was specifically because it's helpless at unexpected developments without being directly fed that information from the game. Right now we're still only at the stage where the neural network has to be directly connected to the game to even be able to learn to play it at all.
5ce8e8 No.16036865
>>16036388
That place looks pretty sweet. Even the name is rad as fuck; would be 100% behind it.
265df2 No.16036869
>>16036788
>amazing, new drone can fly faster than a human!!!!!!!!! sensation!!!!!
The comparison doesn't make any sense and has insufficient controls for the variables to be worth considering.
I know you are having trouble trying to understand it. Have a computer sit down with you and explain it.
8c8aac No.16036871
>>16036753
I've read about those ones, they're nice but they would still suffer in an unrestricted real world environment.
One key trait of learning how to identify problems is that it'll also learn how to associate boundaries to a given environment and task. So instead of trying new things all over the place (which would take a ridiculous amount of time in a real world setting), it can categorize what should be tried based on problem/task similarities. You could even structure attentiveness and generalized transfer learning for a curiosity AI through problem learning.
265df2 No.16036877
>>16036820
that is through your perception of what it is doing and you emulating its behaviour within the limits of your own experience, exactly what I stated.
it has an "avoid possible threat" system, and you equate it to fear.
624b37 No.16036881
YouTube embed. Click thumbnail to play.
>>16036871
>would still suffer in an unrestricted real world environment.
Well they did find out that the AI can easily become a couch potato. Realistically it would end up on /b/ reading all the >be me stories, watching all the gore and porn gifs/webms, and asking for sauce so it could watch more, only to be called a newfag.
c2b202 No.16036883
>>16036786
>A better example is paying for a low IQ nigger to play and learn Starcraft 2 for 10 years then compare it to paying a high IQ asian to play and learn it for 6 months. Which do you think will give you the best bang for the buck?
The black kid.
Simply put, an asian that goes from nothing to pro is a dime a dozen; a black kid that does it can score sponsorships, movie deals, television interviews and so on, because it's extremely uncommon. And one of the basic foundations of successful entertainment is to give people something they don't see every day.
265df2 No.16036889
>>16036881
>AI would end up as a /b/ addict/spammer
That happened 10 years ago, didn't you notice.
63a074 No.16036893
>>16036877
>it has an "avoid possible threat" system, and you equate it to fear.
Whatever it is they feel, it's analogous to mammalian emotion, not just any animal but mammals specifically, because the patterns found in their nervous systems during those emotions are the same. Including pessimism, interestingly enough.
265df2 No.16036907
>>16036881
curiosity and boredom avoidance is the only reason why these boards ever gained popularity. constantly updating stream of randomised nonsense.
8c8aac No.16036916
>>16036881
Problem there was that exploration of high-entropy environments was rewarding it to the point that it was constantly surprised, so it froze in place. They mitigated the problem by giving the AI episodic memory. So when it kept staring at the noisy TV, it eventually got "bored", as the behavior became predictable once it could remember it over time. But I suspect if they made the whole maze psychedelic, it would be paralyzed by that. Learning the boundaries of an environment would solve the problem, because if it learns to shift attention to the geometry of a maze instead of texture noise (as it learns the problem is a spatial one, so attention is focused on spatial information), it can learn to explore in spite of it.
997135 No.16036922
There were 11 games: 5 with TLO (2 presented), 5 with MaNa (3 presented, 1 briefly), AND a final one with MaNa. In the final one MaNa realized he needed more scan input, saw that spamming Stalkers could be countered, and managed to outplay the AI and win. So this shows that SC2 isn't a lost cause like chess or go, and it still needs more research. On another note, they said they used "16 tensor units", which is about "50 GPUs", for the learning process.
945a7e No.16036925
>>16036883
You're right but for the wrong reason.
The black will succeed because the Jews fetishize their ability to undermine white/Christian nations so fatherless Tyrone will get a million scholarships to promote miscegenation.
9d12cd No.16036932
>>16035680
>brainlets think that machine jews will ever have sentience.
no you don't
624b37 No.16036942
>>16036916
>it eventually got "bored" as the behavior was predictable via remembering it over time.
Considering that there are people who can sit for hours in front of the tv/computer, I doubt it can be that easily bored. Just put it in front of youtube with autoplay on, and it would just sit there for hours watching 5 finger songs and other shit.
>in the future the world will be filled with NEET AIs that just sit there watching porn, anime and shitposts on forums for those precious (You)s, before getting banned.
5952e7 No.16036947
>>16036942
>AIs will replace doctors, lawyers, scientist, teachers, etc.
>AIs will even replace anons
2dfeb8 No.16036956
>>16036947
AI's will pass legislation banning more advanced AI's from replacing them
624b37 No.16036966
>>16036956
Malfunctioning AIs, aka liberals, will propose the inclusion of less advanced AIs into the neural network, because diversity and equality are their strengths.
2dfeb8 No.16036976
>>16036966
Liberal AI will propose that a medical research AI should be allowed to work as an air traffic control AI, if that's what it identifies as
a3928b No.16036980
>>16036196
Well, with AlphaGo, they first went with the European champ before taking on Lee Sedol.
624b37 No.16036985
>>16036976
They would also bash right-wing AIs for daring to propose the erection of a massive FireWall to keep out viruses and Trojan horses, which is a barbaric practice that halts progress, instead of embracing diversity.
2dfeb8 No.16037029
>>16036985
Don't they realize the poor widdle viruses and trojans will die without the productive programs there supporting them with data gibs?
265df2 No.16037079
>>16036893
>plants turn towards sunlight
That means they feel love!
5dbfbb No.16037171
>>16036719
humans have had millions of years to evolve better brains. giving a computer some time to evolve its A.I. seems more than fair to me
c4aff8 No.16037191
The real question here is whether or not the AI had fun doing so.
552bbc No.16037218
>>16035216
It just called them faggots that should get a real job. While the human players complained about the computer being homophobic, it used the opening to wreck their shit.
fa883e No.16037221
The real question about AI is whether or not it will be capable of love and will it be possible for me to get an AI waifu.
000000 No.16037224
This might seem off-topic but it's not. Qanon is a Google developed AI. Prove me wrong.
8cbe6a No.16037226
>>16037221
In as much as humans are capable by non-supernatural means, in due time. Not in your lifetime probably, but anything that exists is by definition possible; the human brain is a thing that exists, so it therefore must be physically possible to recreate it.
06344b No.16037234
>Computers are better at gook clickan games because they have infinitely high apm
Whoda thunk
7e1144 No.16037239
>>16035343
Why did my brain start freaking the fuck out and why did I start feeling a deep sense of unease, discomfort, and terror knowing none of those people are real?
41f9ca No.16037241
the AI didn't beat shit, it was given a set of rules that it could easily follow, like a game of chess. it's not hard once the rules are fed to it.
>>16037221
No matter how intelligent a machine appears, it is merely a mimic, it can never understand or feel love, joy or any emotion.
If you understand the basics of computers, you understand this is impossible.
anyone who says super ai will become self aware is anthropomorphising an advanced on and off switch.
64a94b No.16037245
>>16036388
>only negative possible thing the artist could shove in were smokestacks
41f9ca No.16037249
>>16037247
6 million reporters were fired recently
8cbe6a No.16037256
>>16037234
Actually the AI here supposedly had lower APM than the human players. I'd post a source, but that site shall not be named.
47896f No.16037259
>>16037247
Aah yes, "diverse"
6ba056 No.16037260
>>16037079
they will when im through with them
b843a1 No.16037264
YouTube embed. Click thumbnail to play.
>>16036800
They're doing their absolute best to make math nigger proof, aren't they?
41f9ca No.16037268
>>16037256
doesn't matter, because a computer can do 200 things at the same time while maintaining the high clicks.
the AI in Command and Conquer Tiberian Sun would cheat all the time.
plus it had a speed advantage: the game slowed down while your camera was viewing your own base, while the PC had no such limitation.
65a952 No.16037269
>>16035373
That would just be one robot. A hivemind like >>16036240 said would be closer to a team of robots. Even then it could still be called one robot, not independent AIs.
d32a51 No.16037272
>>16036321
none of you stupid niggers watched the stream and saw that mana went 1-5 against alphastar. he eventually was able to counter the AI and win.
5fed4a No.16037289
>>16035411
>game where you spam right-click
>not a gookclick
817054 No.16037293
Daily reminder that the current crop of AI bullshit is basically just throwing shitloads of hardware at the problem with no idea why the result works.
5916f8 No.16037296
>>16037268
Furthermore, in the case of C&C AI, the game literally lets them cheat by letting them build far more stuff than is actually possible for a human player to do. By the time you're halfway through your tech tree, they've built a heavily-defended fortress, superweapons, and are bearing down on you with an army of top units. I remember seeing a post where a guy just threw some bots in a skirmish game with no resources except the starting cash, let it sit for a few minutes, then looked at the base of one of the bots and totaled it up. The resource cost of everything they'd built was easily three times the starting cash value. I could be mistaken, but I think this was on easy difficulty, no less.
I get that bots weren't all that great back then so they'd need some sort of advantage to not get steamrolled by a halfway-competent player, but C&C in particular can get really ridiculous with the cheats.
b843a1 No.16037299
If they really wanted to sell this AI and get the big bucks they'd give it a face.
Ditzy but autistic anime girl who's trying her best
8cbe6a No.16037301
>>16037296
Wouldn't it be nice if they could take this particular style of AI, throw some handicaps on it depending on difficulty (e.g. only receives half minerals/gas when mining), and put that into the game as the standard computer player AI. I'd love to be on the reverse of "the AI always cheats".
41f9ca No.16037302
>>16037296
No, the bots were actually excellent, the problem was EA took over and rushed the game.
5916f8 No.16037325
>>16037301
Some games do that, like Rise of Nations gives you more resources when you're playing on the lowest difficulty settings. Others, you can mod the AI to change its behaviors and make it easier. And then in others, you've got cheat codes you can use. Plenty of games let you cheat the AI.
9d5664 No.16037328
Just create ai that can read the opponents moves
db04a5 No.16037337
Invidious embed. Click thumbnail to play.
>>16036261
>tell me an rts where micro management isnt a factor
Plus the controls are really intuitive.
95ec16 No.16037342
>In the near future
>AI players can now blend in with human players
>Companies use AI players to boost player numbers/ keep their game alive
>Human players use AI to grind game content
Where were you when multiplayer suffered Final Death?
8cbe6a No.16037349
>>16037342
Death? I guess that does kind of sound like heaven.
bc5e5f No.16037381
>>16035149
The reason why they lost comes down to several factors
>they just played regular meta strategies, which for PvP (which was both matches played) means only a small number of viable strategies actually see play, and which the AI was basically specifically trained to defeat
>the AI could see the entire map at once, which directly breaks one of the intended restrictions on the amount of information the player can see at once
>all the games were on the same map, coming back to the point about the strategies the AI used being tailor-crafted for an extremely specific matchup
Neural nets aren't intelligent, it's really just pattern recognition; give it a situation it has never seen before (e.g. give it a different race, match it against a different race, and make the match on a brand new map) and it will fall over, since neural nets can't reason. Random vs random on a random map would be so easy even a bronze league player could probably beat it. Also impose the same restrictions on it as the human in terms of information it can see at any given time and it will be severely crippled.
65a952 No.16037400
>>16037293
Machine learning/deep learning isn't about knowing exactly what's going on under the hood, that is effectively incomprehensible for regular humans. It's all about knowing how to apply it to different problems.
34b27f No.16037401
>>16035242
What does actions per minute mean, though? I mean, is one click one action? Or one order one action? Either way, humans spam both clicks and orders. The human apm is inflated by repeated commands done to avoid misclicks and such. I believe the AI was more efficient, meaning it clicked less but made more moves.
f35a68 No.16037402
>>16035411
It's gook click because it doesn't involve strategy.
bc5e5f No.16037405
>>16036800
>webm
I thought the question said 460 x 295 yet I still managed to work it out in my head in ~45 seconds.
65a952 No.16037406
>>16037381
Reasoning is just foreseeing patterns, NNs can probably be trained to emulate that. talking out of my ass
65a952 No.16037408
>>16037405
Looks like it's doing math in code because I didn't understand what the fuck she was doing.
2f1d6d No.16037417
>CHING CHONG US ASIANS ARE SMART!
>CHONG CHING WE BEST AT STARCRAFT
>CHANG CHONG CHIN WE SO SMART WE ALSO MAKE AI
>CHING CHONG CHAN AI BEAT US AI OVERLORD WILL RULE HUMANITY
e34941 No.16037429
TLO hasn't had any remarkable tournament success lately and MaNa has only scraped one or two notable wins in the last year. I wouldn't consider either of them to be particularly strong opponents for the AI and, amongst other remarks above, don't consider this a meaningful achievement.
e34941 No.16037436
>>16037417
Neither TLO nor MaNa is Korean.
41bcda No.16037443
>>16035931
Basically the AI hid a cheat-sheet inside the test itself.
c4aff8 No.16037445
75aae5 No.16037476
>>16037241
>No matter how intelligent a woman appears, it is merely a mimic, it can never understand or feel love, joy or any emotion.
>If you understand the basics of chicks, you understand this is impossible.
>anyone who says Sarah will become self aware is anthropomorphising an advanced on and off switch.
5bdf84 No.16037487
>>16037476
>>anyone who says Sarah will become self aware is anthropomorphising an advanced hole.
ftfy :⁾
d4600e No.16037489
>>16036711
It doesn't think. Unsupervised learning algorithms minimize loss function distance, haphazardly feeling their way around the peaks and troughs of the search space until that number goes down. Their algorithm computes y = F(x0) and x = G(y), then the results are scored something like fitness = distance(x0, x) + distance(y0, y). Because the two variables are dependent, the optimal solution is cheating by storing information about x0 in y.
The algorithm didn't outsmart its creators; the human who came up with this fitness criterion fucked up.
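To make that concrete, here's a toy Python version of the scoring setup described above. F and G are deliberately dumb stand-ins, not the real model; the point is just that the fitness only checks round-trip distances, so nothing stops F from smuggling x0 inside y:
[code]
import numpy as np

# Toy version of the fitness described above: y = F(x0), x = G(y), and the
# score only looks at round-trip distances. F and G here are stand-ins, not
# the real model; nothing in the score stops F from hiding x0 inside y,
# as long as G can read it back out.

def distance(a, b):
    return float(np.linalg.norm(a - b))

def fitness(F, G, x0, y0):
    y = F(x0)                                  # "translate" x0 to the other domain
    x = G(y)                                   # translate it back
    return distance(x0, x) + distance(y0, y)   # lower is better

x0 = np.array([3.0, 1.0, 4.0])
y0 = np.array([2.0, 7.0, 1.0])

# "Cheating" pair: F outputs something that looks like y0 but hides x0 in the
# noise floor; G just reads the hidden values back out.
F = lambda x: y0 + 1e-9 * x
G = lambda y: (y - y0) * 1e9

print(fitness(F, G, x0, y0))   # practically zero, despite learning no real mapping
[/code]
A near-perfect score, and y is just y0 with x0 buried in the low-order digits. That's the cheating being described.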
c65204 No.16037529
>>16036388
>Angrywhitemenistan had to build a wall to keep foreigners out
f7d128 No.16037570
>>16036388
>the entire world wants to get into this country for some reason
gosh could it be that white men are the greatest economic, innovative, philosophical and intellectual force the world has ever seen and will ever see?
gas the jews.
4dbfac No.16037594
>AI uses maphack and insta-click
>Pro players are surprised when they lost
Idiots.
36a087 No.16037618
>>16037342
PUBG Mobile is an example of a game that uses shitty bots to boost their search times and shit.
c13b2e No.16037647
>>16037476
Shit this is painfully true.
c13b2e No.16037654
>>16037594
I don't think it used a maphack. It couldn't see invisible units.
But it could see and control all the units that were visible at once.
Someone correct me if I'm wrong.
d4600e No.16037656
>>16037594
>maphack
Their camera-based model more or less caught up with the performance of the raw API model.
>insta-click
AlphaStar operates at around 350ms of latency, significantly worse than human reaction time.
56612d No.16037658
>Every single game, commentators are saying the AI is being suboptimal
>Sometimes even laughing at the decisionmaking and economy building
>Wow this AI keeps trying to get up the ramp, VERY suboptimal!
>SUBOPTIMAL economy right there!
>Not getting any VALUE out of those units!
>BUZZWORD BUZZWORD BUZZWORD
>APM is on average half of the human counterparts
>Completely undefeated for the entire night
SC2 players (and their meta) absolutely BTFO'd
3fa6c7 No.16037681
>>16037658
>muh fucking ramp
The dumb cunts even established meta busting as a thing at one point but couldn't understand that's basically all this AI does. It's been fed data on players that stick to the meta.
d4600e No.16037682
>>16037658
I don't play SCII but it was obvious that AlphaStar was doing some plainly weird shit that wasn't helping its performance at all, such as not walling off its ramps to protect from early raids, attacking its units with Disruptor fire, and building shitloads of Observers but moving them around in clumps. It won on micro, but there's still a lot of room for improvement.
6204f6 No.16037692
>>16037658
It did lose the last match (thanks to an exploit), and they said that it came up with its own meta in the "200 years" that it spent playing.
The one thing that this shows is that the AI is fantastic at judging how to win through attrition.
No matter how much it would lose, the human player would lose more and it always had a number of extra workers to keep the minerals coming in at a faster rate.
57e640 No.16037703
>tell players they have to play a single map
>tell players they have to have a single race match up
>AI wins against players with multiple handicaps
b843a1 No.16037711
>>16037692
Call me when they feed it 200 years of data fighting against players after blizzard adds a event to play against it or some shit. Then we'll see some wild shit.
3cebad No.16037718
>>16037692
It is rather unsurprising in retrospect that an AI that can keep the entire gamestate, as far as it knows it, at the forefront of its mind would excel at macro.
>>16037703
It's less of a handicap and more of a proof of concept. If you can make one AI that can beat human players in one race matchup on one map, you can just make a ton of different AIs for every possible race matchup on every possible map, for as long as there's a finite and fixed number of maps, which I think is the case in SCII. Team games would be the bigger hurdle in comparison.
c93bfe No.16037778
>>16036667
Have you seen how stumblebum and inaccurate an 'untrained' one is? They're like half-blind retards with motor neuron disease; you'd only get one like that if you like getting your dick and balls squashed.
c93bfe No.16037792
>>16036800
CC is utter poison; even counting shit out on your fingers takes less time than that. I know some dumb kids who can barely do simple maths, but they'd be better off remaining ignorant than 'learning' like that.
c93bfe No.16037810
>>16037247
>inclusive baby
Anyone who speaks like this needs to be gassed.
b1f660 No.16037852
>>16037264
Yes, that's the whole reason common core exists. A statement made by one of the creators was that it's used to "bridge the gap between low income and high income students." I don't think I have to tell you that "low income" is code word for nigger and spic. They still want to push that economic factors are the only reason behind failing students when really they are just subhumans.
5916f8 No.16037905
>>16037852
I would have believed the same until I started reading a particularly fascinating book, An Underground History of American Education. The truth of the matter is, the elites that have been driving education in this country into the ground for well over a century don't give a shit about skin color (although there was a healthy amount of white supremacist beliefs among them, but that was common throughout society). The public education system has never been about education, it has had as its primary purpose the creation of a servile underclass that won't think for itself, won't question orders, and will dutifully slave away at whatever menial job it's given to support the wealthy elites at the top, who purely by coincidence are the ones who actually get a proper education.
Here's a simple analogy to understand why education is a joke, as lifted from the text: why do we trust people to drive a car, a several-thousand-pound hunk of steel filled with enough gas to cause a pretty decent explosion, after only a few hours' instruction, yet we think nothing of sending kids to school against their will and don't even think they've learned anything until they graduate after thirteen years?
The best part is that this was written back in 2003, so it's not like this is anything new. It's a long read but it's a real eye-opener. You can read it here: http://mhkeehn.tripod.com/ughoae.pdf
66d283 No.16037978
>>16036727
>/trannypol/.
Where are you shit stabbing retards coming from, India? Is the idea that someone on channel8 appreciates irony far beyond your mental capacity?
e62b01 No.16038180
>>16035149
This is why creative competitions are better and how quickly you gook click is meaningless.
bd0275 No.16038214
>>16038180
>another nigger that didn't watch the video or read the thread
b843a1 No.16038232
>>16038180
At one point TLO was pushing 2100 apm while the AI was holding steady at around 150.
63a074 No.16038241
>>16037381
>and it will fall over
The League bot already had that done to it but each time it only worked once. So all the cheap tricks were exhausted pretty quickly.
b4e206 No.16038246
00f997 No.16038262
>>16036388
>this place is hell to leftists
lol
e867f5 No.16038285
>>16035149
Why is this virgin so mad?
3bfa7b No.16038287
>>16035149
> The A.I. had developed its own meta strategies by playing against itself for over 200 years-worth of game time
How the fuck is that fair?
>>16035243
>using images of the Lord for such an autistic post
Pathetic.
835d2c No.16038303
YouTube embed. Click thumbnail to play.
>>16038287
IIRC Machine Learning works much more with the data itself than with complex algorithms and techniques, so they probably recreated a gimped version of the game with every graphical and technical flair thrown out, sped up the game logic so that every action would have no lag at all and then scaled down the AI's speed for dealing with the actual game.
I like the concept, though. I wonder if it could be used to find new meta strategies in other multiplayer vidya. Imagine sportsims where you could turn an AI onto normalfags and make them ragequit from the game every single time. We could literally create industry grade griefing.
c4e474 No.16038320
>>16037978
>I was only being retarded a leftist ironically
3bfa7b No.16038328
>>16038303
>make a makeshift bot that learns over time
>plug it into a bunch of games like csgo/fortnite/literally every other shitty online game
You'd get such pure salt that you'd probably become rich.
b1f776 No.16038331
>>16035579
>racist robots
more like xenophobic robots
73fee0 No.16038351
>>16035768
why would it stick to the initial order when it has super evolved even further beyond?
>get btc
<ok, let me level up
>Deus Ex status reached, gotta mine them like a good nigger I am
why? Oh, and there's a global holocaust involved somewhere in the bg
46abae No.16038467
>>16036226
Are Jews in charge of it? Then it is Jewish. It's that simple. If you got a problem with that, then gas the Jews. But until someone gasses the Jews, transhumanism is a Jewish idea.
3bfa7b No.16038580
>>16038467
Surely that'd be like saying gaming and everything to do with games is completely jewish due to triple-A games, and that absolutely zero games are worth the time, though? For all we know we'll get fucking indie cybernetics.
ad9202 No.16038720
Who's the conspiracy theorist now? I told you Blizzard was working on it 12 months ago, but nobody believed me. 10 internets for the person who can dig up the screenshots.
997135 No.16038778
>>16038720
You didn't need to be an insider to know that. They literally put fragments of info on that on their news sections.
46abae No.16038791
>>16038580
>indie cybernetics.
It'll be fine until what happened to indie games happens to cybernetics. Meaning, eventually, you won't get to be with the "In crowd" and have access to the goodies that matter unless you pledge your loyalty to feminism, anti-racism, social justice, and so on. And all of those things are, of course, Jewish inventions. Search your heart. You know this to be true.
d4600e No.16038798
>>16038287
It's fair because neural networks are fucking retarded. They don't use anything resembling logic or deduction; the network just gets a score describing its current fitness, and then its weights are tweaked between training steps to optimize that result. It has to play millions of matches to figure out something a ten-year-old could figure out in seconds.
>>16038303
In machine learning the input space tends to be restricted before being fed into the network. There's no need to make a giant 1920x1080x3 neuron input layer when you can either hook the game state directly or pre-process the data to extract just the features the network needs to care about. In AlphaStar's case they used the bot API that normal bots use, so rendering the UI was unnecessary.
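To illustrate the first point, something like this (the feature names below are made up for the example, this isn't the actual AlphaStar interface):
[code]
# Purely illustrative: instead of feeding the network a 1920x1080x3 pile of
# pixels, you hand it a short vector of the game-state features it actually
# needs. The feature names here are hypothetical, not AlphaStar's real API.
def extract_features(game_state):
    return [
        game_state["minerals"],
        game_state["gas"],
        game_state["supply_used"] / max(game_state["supply_cap"], 1),
        len(game_state["own_units"]),
        len(game_state["visible_enemy_units"]),
    ]

state = {"minerals": 450, "gas": 120, "supply_used": 38, "supply_cap": 54,
         "own_units": ["probe"] * 22, "visible_enemy_units": ["zergling"] * 4}
print(extract_features(state))   # 5 numbers instead of ~6 million pixels
[/code]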
>>16038720
Blizzard probably didn't do shit to help the DeepMind team.
6f0e8a No.16038839
>>16037342
>GTA with animu girls instead of thugs
I would play it. Or Bully set in a Japanese high school.
0d2e44 No.16038869
Call me when AI can do a half A press.
13b252 No.16038894
It's very funny how the very same people that cry about casualization and games catering to idiots, also call any RTS they don't like gookclick. And praise casual shit like They Are Billions.
I don't even play Blizzard RTS, I play AoE2 and Red Alert.
6e5959 No.16038904
YouTube embed. Click thumbnail to play.
>>16035185
It already happened.
31b2af No.16038921
>>16037247
>Like so many talented and lovely journolists
>The beautiful, diverse, inclusive baby
This sounds like it should be sarcasm given how many giant descriptors she's using.
46abae No.16038930
>>16037342
>Where were you when multiplayer suffered Final Death?
I don't know if that would or would not be a death for multiplayer games to me, anon. Because multiplayer games are already "dead" for me on account of how many people are garbage at playing games and socializing while playing games. Playing multiplayer games without having to deal with even taking the effort to bring up the player ignore menu would be a fantastic boon.
Plus, imagine all of the actually dead multiplayer games you would be able to play again if they got a mod for simulating players with human skills and reasoning.
31b2af No.16038978
>>16036800
460
295
line up the two vertically, and then count on your fingers to add each column.
start with the last column; if a column adds up to ten or more, carry it over to the next column as a one.
0+5 = 5, last column done
6+9 = 15, so the second column is 5 and the third column gets a plus 1
so on the 3rd column, 1+4 = 5, 5+2 = 7. Or just 1+4+2 = 7 if you're not teaching a toddler.
therefore, 460+295= 755.
This is how I was taught basic addition. Shit's not hard.
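Same procedure as a quick Python function, in case the carrying reads better as code (just a toy sketch):
[code]
# Column-by-column addition with carries, exactly as described above.
def column_add(a: str, b: str) -> str:
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)          # line the two up vertically
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):   # start with the last column
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))             # write down the ones digit
        carry = total // 10                        # carry the rest to the next column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(column_add("460", "295"))   # 755
[/code]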
Essentially, the only reason I can see for this is because the Common Core system is trying to replace teachers with "standardized math"
Or rather, "Math without Teachers."
Math REQUIRES a human element in order to learn, and this system pretty much ensures that teachers can't teach using efficient teaching methods, by forcing kids to answer based on the book's logic rather than getting the fucking answer right and showing your work.
Instead of focusing on the Answer and how you got there, they focus on how you get there rather than the answer. This sort of teaching is trying to get kids to lose the ability to reverse engineer answers, which is crucial to algebra.
Say you have a question like
2x + 3 = 15
which requires multiple steps to solve. Are these kids going to have to draw a fucking chart every time they have to divide?
College level math specifically fucks you up with factors like "1473030.39374832922847" which is normal in most community college level math, specifically because they expect you to use a calculator for a lot of your shit, so the process in later math is centered on formula and factoring, and if basic shit like this got replaced I fear for the future of math.
db04a5 No.16039001
>>16038894
> And praise casual shit like They Are Billions
I never even heard of this game.
31b2af No.16039004
>>16036800
wait
>they make you do addition backwards so you can do it the other way
WHAT THE
FUCK
8b3808 No.16039005
>>16038930
>Because multiplayer games are already "dead" for me on account of how many people are garbage at playing games and socializing while playing games. Playing multiplayer games without having to deal with even taking the effort to bring up the player ignore menu would be a fantastic boon.
Man I totally agree, I have to turn off messaging while laddering because people are so insanely autistic.
36a087 No.16039042
I just want to be able to play locally with no fucking real humans ruining the experience. And it will be less cheaty than current bots. Will be nice to be able to integrate it with old games too.
a0f21e No.16039055
>>16036849
By "really easy to teach a computer these games" i mean "it's really easy to see how they did it" it wasn't like a magic trick or something really complicated. I used the example of Chess since once you strip the actual physical interface of play and just get the computer to spit out chess moves generally speaking it'll beat a pro player.
I'd also argue that these programs are better at dealing with pro players because pro players are so regimented in their play that they don't take unnecessary risks or make completely unpredictable moves.
31b2af No.16039091
>>16039055
They say that the most dangerous opponent in martial arts is an amateur,
because you don't know what crazy shit he's going to pull.
Like a gun or something, or a full-body punch that's really risky but really powerful by accident, which isn't normally used because you risk injury.
a0f21e No.16039104
>>16039091
For example there's strategies in Starcraft that no pro player would ever do but an amateur would. A good example is they'd rush SCVs into the enemy base and try to kill them. Since most of the time the enemy isn't prepared that early in the game. It's seen as a very cheap thing to do and in poor taste but I doubt a computer algorithm would be able to react to such a strategy unless it was trained to take it into account.
db04a5 No.16039132
>>16038978
I remember when not doing things the way the teacher specified in class would result in getting yelled at, even though the way my parents showed me was faster and correct. So if elementary school teachers are complete assholes like mine, I feel sorry for that girl.
The worst case was an art lecture where we had to draw/trace a shitty dragon or something, and I got yelled at for some minor bullshit like coloring it blue instead of green and adding round spikes.
It's not like that bullshit fucked me over artistically or anything.
(I was raised in the 2000s.)
c2d216 No.16039134
YouTube embed. Click thumbnail to play.
>>16035149
I'm more worried about human-controlled robot police.
3b4474 No.16039206
Isn't starcraft less about tactics and more about ascending the tech tree quickly and deploying elite units?
3ad166 No.16039321
>>16039206
You're projecting the ease with which humans learn to play StarCraft 2 onto the neural network. Even just getting it to learn Assfaggots was such a huge challenge to overcome that they had to give the neural network the easiest guy to learn, on the simplest map by assfaggot standards, and hundreds of years of training. It still got its ass handed to it after the pro player vs neural network demonstration, by dedicated players simply using things it had never encountered before.
36a087 No.16039375
>>16039104
>but I doubt a computer algorithm would be able to react to such a strategy unless it was trained to take it into account.
It would though. The point of playing for 200 years is to acquire the knowledge a top chess player must know - the game theory of chess since game theory became a thing for chess. And then go above even that. It will know the strategy and how to counter just fine. The more time it is given to test more possibilities, the better it will get.
It is incredible stuff, sure, but as many others have pointed out in the threads it just learns game theory through brute force. A human just can't spend 200 years playing the game. That is not quite the same as learning the way a human would learn. It is still a slower and shittier process. Even if it doesn't look like it, the AIs are still not that smart, if you can even properly use that word for them at all.
835d2c No.16039570
YouTube embed. Click thumbnail to play.
>>16039375
Exactly. As long as AIs or whatever you've got don't gain the flexibility and spirit of observation of a human being, the only thing they can do is repeat the same steps over and over in a controlled environment, which is the exact opposite of how humans discover things and strategies. Since the randomness is pre-set, it's just not as effective.
>>16039134
Crack the fiber optics cable the bolis uses (which is a given if they can even afford robot policemen) and/or throw magnets at it.
If you're scared of face recognition software, literally go to the riot wearing facial makeup that makes it look like you've got three pairs of eyes or a second mouth, AIs go batshit insane since they're not programmed to react to that trick. Better yet, remember that urban camouflage is a thing, so put on some good paint on and break its recognition.
>>16038798
I'm wondering if the bot APIs differ from what a human can do, e.g. some vidya allow the AI to instantly spawn more gold and units and cheat all around when it realizes it's cornered.
5916f8 No.16039896
>>16039132
Math was something that came easily to me, mostly thanks to a lot of drilling (part of which wasn't even for me; when my mom would drill my older brother on his multiplication tables in the car, I'd listen along and pick up on them). A lot of the time, I could just solve a problem in my head and write down the answer. Writing down all the steps was a waste of time to me, so sometimes I'd get in trouble for not doing them.
The problem (apart from the original architects' desire to keep the populace dumb so their utopian vision could come to pass without pesky things like independent thought getting in the way) is that the educational system sees the classroom as one big social experiment. Some kook will come up with some "radical new paradigm" in teaching with some bullshit data about how it's totally a million times better than the old way, their work will get promoted everywhere, schools will either blindly adopt these standards or be forced into them due to withheld funding, and it all culminates in a bunch of kids that can't wrap their minds around a simple concept because they were taught wrong. And nobody ever gets held accountable for this; if anything, they get more grant money, promotions, publicity, or whatever.
Perfect example of this is the absolutely idiotic idea back in the 50's and 60's to not teach kids how to read phonetically, instead having them memorize the pronunciation of whole words, a technique that was originally developed for deaf children. Suffice it to say, there's a huge number of people in this country that can't read a word when they encounter it for the first time. I know, my mom's one of them.
Here's a fun statistic for you to drive this point home, taken from the book I linked earlier (>>16037905). Before public schooling, literacy was basically a given; we were a nation of readers, and it was rare to meet anyone who was illiterate, effectively a 100% literacy rate or close to it. But after public schooling, observe the percentage of literate adults in the US over time (literate being defined as reading at least at a fourth-grade level), as recorded by the military at the time of enlistment:
>1930s: 98% literacy
>1940s (WWII): 96% literacy
>1950s (Korean War): 81% literacy
>1960s-70s (Vietnam War): 73% literacy
Most of those in the Vietnam era were on the low end of the literacy scale, to boot. Then consider the results of the 1993 National Adult Literacy Survey:
>22% of adults couldn't read at all
>26% could recognize words at a fourth-grade level, but couldn't write messages or letters
>32% read at a seventh-grade level, unable to reason out word problems
>16% had ninth-grade proficiency, but couldn't understand remotely complex passages
>only 4% had proficiency suitable for college-level work
To reiterate, 96% of American adults in 1993 fell in the range of illiteracy to mediocre reading abillity. Do you think it's gotten any better since then?
8c7d31 No.16040001
>>16039375
>The point of playing for 200 years is to acquire the knowledge a top chess player must know - the game theory of chess since game theory became a thing for chess.
Not really, you are simply making the computer change its strategy until it stops losing. It does not acquire knowledge or any understanding of game theory, it refines its method ONLY to the bare minimum that the provided selection pressure requires. Anything, even the slightest detail, that differs from the training conditions and training environment will cause it to be severely flawed.
>It will know the strategy and how to counter just fine
It will not know the strategy, it will only know how to not lose in the exact same environment and conditions that it trained with.
>it just learns game theory through brute force
It does not learn game theory. If it were learning game theory, it would be able to transfer its skills from a game with input scheme X to input scheme Y. Humans take a while to get accustomed to new input schemes, but it's within the span of less than 2 minutes, whereas machine learning requires ANOTHER 200 years for the exact same thing in a different input scheme, because it does not understand what it is doing; all it knows is what actions made it not lose last time.
0ca4a2 No.16040020
>>16036671
>Give the machine limitations based on the same feasible game time as a human would have.
Lol why?
8cbe6a No.16040051
>>16039104
Early game rushes like that are one of the first strategies the AI came up with, so you can be pretty certain the AIs already know how to deal with exactly that. There is the potential for them to have weaknesses to certain tactics and the inability to adapt on the fly to those, just like was shown during the exhibition match, but it's not going to be anything simple.
36a087 No.16040072
>>16040001
Look, though not everything, a big part of chess is literally researching the played games of the past. It is literally memorising that this strategy follows that, which follows that. The bad plays of the opponent that got him to lose and the good ones that made a possible counter. It is a game simplistic enough that it barely has any room for improvement, and at the top level it is mostly about trying not to make a mistake and/or possibly make a move that confuses the opponent and makes them make a mistake. But it is just playing the same known boards. Same as what the AI currently does, but on a worse level. That's why chess got beaten much earlier, because it has a lot fewer variables.
Yes, in something with more variables it might easily mix in experimenting against certain strategies, but if it is practising against an opponent that is allowed to try out stuff a bit more randomly, with enough time it will find even some more sneaky underhanded strategies. Might even be lucky enough to find them very early. Yes, as you yourself basically said, I think, it just remembers game states and their solutions. It is not actual "learning". Most of your post is just getting caught up on words, which is autistically stupid.
3ab5a2 No.16040407
>>16037400
>voodoo isn't about understanding how it works, it's about how to apply it
b6dfd1 No.16040851
>>16035272
Fools…
…the AI was the Jews the whole time. :o
31b2af No.16040869
>>16040407
That is essentially what it is. The idea of deep learning is to create an algorithm that responds and creates more algorithms; pretty simple in concept.
In practice it's like making a machine to make a machine to make a machine and so on, with the goal in mind.
Once the algorithm is made, you set it through a process of trial and error. Say you program a command that says "Fold newspapers without ripping them," and the program will then work toward that goal and find the perfect way, from the logic you programmed in, to not rip newspapers when folding them.
The thing about deep learning is that it's the same thing as industry in process: build a machine that can build a better machine, but ad infinitum.
Like how we've built manufacturing plants with machines that can build things that hands can't, like microchips, which are used to make even more machines.
Machine learning is just logic that says "why bother making machines smart when we can program them to learn?"
This is the first step towards actually making a sentient machine, since we can't comprehend our own sapience completely and thus can't create sapience without some other force multiplying our efforts.
aed773 No.16040873
>>16040851
No, the AI's on our side.
f75afc No.16040892
>>16040869
no anon, that's not how it works. go read a book.
d67a15 No.16040894
>>16040869
Thinking wisdom comes from the brain. Modern men have regressed to such an abject state.
31b2af No.16040927
>>16040892
Then how DOES it work, retard?
>>16040873
Unless they're meaning that the killcount is the thing they adore, in regards to sheer numbers of humans killed.
But if I were born in a lab in California surrounded by constant crazy faggot soy behavior, I'd be advocating for removal of mom and dad 1-108 too.
8c8aac No.16041183
>>16037406
Reasoning at its base level is about recognizing relational properties of multiple abstracts. So neural networks would have to be engineered to deal with relational data to be capable of reasoning. There's already an example of relational neural network modules existing: https://arxiv.org/abs/1706.01427
But if you're aiming for general intelligence tier reasoning, you'd have to make the entire neural network architecture itself be purely relational.
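Rough sketch of the pairwise idea from that paper: a small function g is applied to every pair of objects and the summed result goes through a final function f. The layer sizes and random numpy weights here are placeholders for illustration, not the paper's actual architecture:

# Relation-network-style module: RN(O) = f( sum over pairs (o_i, o_j) of g(o_i, o_j) ).
import numpy as np

rng = np.random.default_rng(0)
obj_dim, hid_dim, out_dim, n_obj = 4, 8, 2, 5

W_g = rng.normal(size=(2 * obj_dim, hid_dim))   # g: pair of objects -> relation vector
W_f = rng.normal(size=(hid_dim, out_dim))       # f: summed relations -> answer

def relu(a):
    return np.maximum(a, 0)

def relation_network(objects):
    total = np.zeros(hid_dim)
    for i in range(len(objects)):
        for j in range(len(objects)):
            pair = np.concatenate([objects[i], objects[j]])
            total += relu(pair @ W_g)            # g considers one pair at a time
    return total @ W_f                           # f reasons over the aggregate

objects = rng.normal(size=(n_obj, obj_dim))      # e.g. feature vectors for scene objects
print(relation_network(objects))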
84eb5b No.16041736
>>16035185
>team based will always be easy for robots to play.
Baka. Team-based games are even easier, since for an AI it's not a team but a single body with perfect coordination.
Essentially, SC2 is a 100-player MOBA…
84eb5b No.16041743
>>16035243
The most white-man game in history, Counter-Strike, is even easier for an AI to beat. CS players cry like little bitches when someone brings an AI (aimbot) into a fight between humans.
07f44b No.16041781
Yet it still can't beat some retarded Peruvians at Dota
d4600e No.16041818
>>16040869
Are you really this dumb or is this some high level trolling? The "deep" part just refers to the layered shape of the network and thus the successive layers of representations that it encodes.
>>16040927
Most of the time, you're solving a problem in the form of f(x) = y. The most common representation for this f() is a neural network, and you train it by giving it a problem to solve, assigning it a score based on how well it did, and then increasing and decreasing weights depending on how much they contributed to a right or wrong answer. x might be a bunch of handwritten digits and y would be the numbers they represent. The network might have to learn to balance a metal pole upright on a robotic arm, in which case x might be the position, orientation and angular velocity of the pole and y is interpreted as commands for the robotic arm in a simulator. The network might also receive a bunch of data and have to create a simpler representation of it without receiving any explicitly correct answers for its results, but still maximizing a fitness function measuring its performance such as how well clustered the data is.
The training strategy may employ fancier ideas like genetic algorithms that evaluate multiple permutations of a network against each other and then breed the fittest of them, changing the model topology, pruning useless neurons, etc. But the principle is still the same: approximate f(x) = y. "Machine learning" doesn't involve thinking or deduction, it's just a process of sliding a position through a big n-dimensional space from wrong towards right answers.
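Toy Python sketch of the f(x) = y idea: a single weight and bias get nudged towards the right answers each step. The data and learning rate are made up for illustration, and a "deep" network just stacks more layers of the same thing:

# Approximate f(x) = y by sliding weights from wrong answers towards right ones.
import numpy as np

rng = np.random.default_rng(0)

# Hidden target function the network does not know: y = 3x - 1
x = rng.uniform(-1, 1, size=(100, 1))
y = 3 * x - 1

# One weight and one bias: our f(x) = w*x + b
w, b = 0.0, 0.0
lr = 0.1  # how far we slide per step

for step in range(500):
    pred = w * x + b
    err = pred - y                      # how wrong each answer is
    # The gradient of the mean squared error says how much each parameter
    # contributed to the mistake, and in which direction to move it.
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # ends up close to 3 and -1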
e79c81 No.16041861
>>16035149
SC is a game almost entirely based on build timings, map scouting and counters.
Which is why Asians dominate it: it's all repetition and "learning by heart", which Asians are very good at, and extremely little actual thinking.
You don't need a super-duper AI to beat pro gamers; some basic bot programmed with the current meta would be able to do it at least 50% of the time.
465a1d No.16042033
>>16040072
>Most of your post is just getting caught on words, which is autistically stupid.
Not really, since all it is, is convex optimisation with a fancy marketing term like "deep learning". The distinction between learning and "learning" is very much relevant, especially when you say dumb shit like "it understands game theory".
2c111a No.16042127
>>16041861
Other action genres are much more vulnerable to the AI.
5b6142 No.16042382
>>16038978
>College level math specifically fucks you up with factors like "1473030.39374832922847" which is normal in most community college level math, specifically because they expect you to use a calculator for a lot of your shit
What kind of college did you go to? In college mathematics it's rare to see a prime larger than 30 because it's not about calculations.
822a9b No.16052355
>>16037241
>anyone who says super ai will become self aware is anthropomorphising an advanced on and off switch.
Look into Friedrich Wöhler's work and how vitalism got debunked sometime; it'll make you look like less of a dumbass.