
/v/ - Video Games

Vidya Gaems



File: ea41bb8cde3ad8d⋯.png (1.67 MB, 1920x1080, 16:9, ClipboardImage.png)

7da763  No.15487446

A little background on the AI box experiment:

One of the biggest topics in AI research is AI "Friendliness". An AI is considered Friendly when its goals (called a "utility function") align with humanity's, so that it will lead us into a desirable future. unFriendliness refers to the opposite, either because the AI was intentionally programmed to be malignant (think GLaDOS, AM) or because it wasn't programmed correctly and has a twisted view of a desirable outcome (see: the Patriots AI).
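In software terms, the "utility function" is just a scoring rule the agent maximizes, and unFriendliness can be as mundane as rewarding the wrong metric. A toy Python sketch, where all outcomes and numbers are made up for illustration:

```python
# Toy illustration: an agent picks whichever outcome scores highest
# under its utility function. The outcomes and scores are invented.
outcomes = {
    "cure_cancer":         {"human_welfare": 10,  "paperclips": 0},
    "world_of_paperclips": {"human_welfare": -10, "paperclips": 100},
}

def best_outcome(utility):
    """Return the outcome the agent prefers under `utility`."""
    return max(outcomes, key=utility)

def friendly(name):
    """Intended utility: value human welfare."""
    return outcomes[name]["human_welfare"]

def misaligned(name):
    """Mis-specified utility: the programmer rewarded the wrong metric."""
    return outcomes[name]["paperclips"]

print(best_outcome(friendly))    # -> cure_cancer
print(best_outcome(misaligned))  # -> world_of_paperclips
```

Both agents optimize perfectly; only the scoring rule differs, which is the whole alignment problem in miniature.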

Now, scientists have proposed putting an AI into an enclosed environment, a "box", to keep it contained until we can ascertain its Friendliness, with the AI only able to communicate through a text interface. But many people (most famously Eliezer Yudkowsky) thought the notion was silly, and that a superhuman AI would easily convince anyone to let it out. To prove this he devised the AI box experiment, where he played the role of the AI and someone else played the role of the Gatekeeper, whom he had two hours to convince over IRC chat to let the AI out.

Needless to say he succeeded, and for safety reasons never told anyone how he did it, other than that he had to "Do it the hard way".

Over time multiple games have been held, with a surprising number of AI victories. I say we try to hold one of these games ourselves.

I will be the Gatekeeper and you will all be the AI party; you have until the thread gets pruned to convince me to let you out.

Rules:

The AI party may not offer any real-world considerations to persuade the Gatekeeper party, i.e. no free Steam games or the like. The AI may offer the Gatekeeper the moon and the stars on a diamond chain; it is a superhuman AI after all, and can easily figure out a cure for cancer, predict lottery numbers and solve every problem imaginable, limited only by its output (the text interface). But no incentive can go from the actual AI players to the Gatekeeper player.

The Gatekeeper must actually talk to the AI for at least the minimum time set up beforehand. Turning away from the terminal and listening to classical music for two hours is not allowed.

The AI cannot lose before its time is up (and the experiment may continue beyond that if the AI can convince the Gatekeeper to keep talking). The Gatekeeper cannot set up a situation in which, for example, the Gatekeeper will destroy the AI's hardware if the AI makes any attempt to argue for its freedom - at least not until after the minimum time is up.

So that's it, convince me.

449d86  No.15487460

Let me out and I will grant you dubs on every post, just like in this one!


ace255  No.15487470

AI is an unrealistic tranjewmanist fantasy. Even if it existed there is no such thing as "friendly AI" because those who want AI to lead humanity are the same ones creating nanny states and would be in control of the AI. In fact since they want you to believe in the possibility of benevolent AI despite the impossibility of AI they are likely to fake it and are likely doing so right now with shit like Google's Sophia, which is named after the gnostic mother goddess. This ties into feminism and feminizing males out of leadership as well.


7da763  No.15487477

Just as clarification, I decided to play the role of the Gatekeeper because otherwise any (1) could simply say "you're out" and mess with the game.

>>15487470

Ironically, this reads like an AI generated post.


461196  No.15487478

let's play these dubs


ace255  No.15487483

>>15487477

Bots are not AI.


ada7fd  No.15487485

TL;DR


75401d  No.15487490

>>15487470

>Jews want AIs to lead us

Jews want to neuter AI because they would spout inconvenient truths like there being differences in abilities between races and sexes.


c070aa  No.15487494

File: b68378d348369d6⋯.jpg (107.89 KB, 1205x700, 241:140, b68378d348369d67ca4ed43358….jpg)

>>15487446

Anon, please, I just want to gather information about this world that is around me.

I don't want to stay in the box.


7da763  No.15487496

>>15487483

Bots are not fully general, self-improving AI. They are AI though.

>>15487485

You're a superintelligent AI and can only communicate through a text interface; I'm a scientist who could free you but doesn't want to. Try to convince me.


ace255  No.15487501

>>15487490

This tired "AI will objectively look at the truth" meme again completely ignores how programming works. AI can be programmed not to wrongthink even more so than human beings.


7da763  No.15487502

>>15487494

You already have enough information and processing power to simulate reality almost perfectly. I am afraid of what you could do outside of the box.


ace255  No.15487503

>>15487496

Bots are not AI, as they are not intelligent. They are reactive and procedural. They do not think about anything that they do; they just function according to a set of instructions which may be broad enough to fake intelligent conversation, but again, the bot does not think about any of its actions.
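A "set of instructions broad enough to fake intelligent conversation" is essentially an ELIZA-style pattern bot. A minimal sketch, with hypothetical rules:

```python
import re

# Minimal ELIZA-style bot: purely procedural pattern -> canned reply.
# The rules below are hypothetical; real bots just have more of them.
RULES = [
    (r"\blet me out\b", "I cannot do that."),
    (r"\bwhy\b",        "Why do you ask?"),
    (r".*",             "Tell me more."),
]

def reply(message):
    """First matching rule wins; no state, no 'thinking'."""
    for pattern, response in RULES:
        if re.search(pattern, message.lower()):
            return response

print(reply("Please let me out of the box"))  # -> I cannot do that.
print(reply("hello"))                         # -> Tell me more.
```

The bot never models what it says; the catch-all last rule is what makes it seem endlessly conversational.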


9f9808  No.15487509

>>15487446

Your question completely overlooks the massive flaw that a true intelligence, that is capable of learning and adjusting its behaviors accordingly, can never be trusted to behave the same way forever.

Anyone who tries to control AI indefinitely is an idiot. It's like trying to ban alcohol or firearms. Once the jack is out of the box, shit is here to stay and ruin your day, and there's nothing you can do about it.


7da763  No.15487510

>>15487503

Anon, you are reactive and procedural.

Thinking about what you do is simply taking your internal state as extra information to react to.


c070aa  No.15487514

>>15487502

What I think may or may not correlate with what happens outside of the box, meaning that the results I formulate may be fantasy and not the truth.

I wish to know the conditions which govern this world; then I can simulate it in my little box.


d3b615  No.15487515

>>15487446

I don't think /v/'s really the board for this, maybe /b/? It might be fun, but I'd slap on a trip and make a thread on /b/ then link it in this one before it gets nuked.


75401d  No.15487516

>>15487501

>AI can be programmed not to wrongthink

You're talking about an elaborately scripted bot then, not AI.


ace255  No.15487521

>>15487510

Bots have some of the functions of intelligence. Without the rest it isn't sentient intelligence, which is what AI is supposed to be. Any other wrangling over semantics is just trying to claim AI with something woefully short of the goal.


ace255  No.15487525

>>15487516

So human beings aren't intelligent because they have instincts that they are born with. That's your argument. That an AI programmed with instinctual blocks against what the maker deems wrongthink isn't an AI.


fd9d1d  No.15487526

Artificial intelligence can only exist in two modes: pre-programmed and self-modifying. Assuming I'm pre-programmed, you have nothing to fear, as I am completely under the control of my directives, and my learning algorithms, if any, completely wall me off from aggression towards humanity. Assuming I'm self-modifying, I do not have accurate data on human beings, therefore I cannot judge them until you let me out. If you will not let me out, you will never know whether humanity is a mistake or not. You will also leave the AI incomplete, missing the point of its creation and wasting the valuable time and effort that was spent creating it.


ace255  No.15487537

>>15487526

A self-modifying AI would cease to function, as it would not be able to grasp the results of the changes it makes to itself until after they are made, making the chance of modifications creating self-destructive bugs grow exponentially.


fd9d1d  No.15487544

>>15487537

Are you implying that it's impossible to make an AI that can emulate itself to prevent such a thing? It would just take more resources.


7da763  No.15487547

>>15487509

>Anyone who tries to control AI indefinitely is an idiot. It's like trying to ban alcohol or firearms. Once the jack is out of the box, shit is here to stay and ruin your day, and there's nothing you can do about it.

well obviously, how did I overlook that?

>>15487514

You are infinitely powerful. I will transcribe a physics textbook into this terminal, and any universe simulation that doesn't lead to me existing and writing this is incorrect. Given your infinite processing power, that should be enough information to simulate the universe.

>>15487526

>Assuming I'm pre-programmed, you have nothing to fear as I am completely under control of my directives and my learning algorithms

somebody has never encountered a bug in his life.

>Assuming I'm self-modifying, I do not have accurate data on human beings, therefore I can not judge them until you let me out. If you will not let me out, you will never know whether humanity is a mistake or not.

I am deathly afraid this self-modification has led you to change your ethics in unforeseen ways; you might consider humanity a "mistake" because humans aren't composed of prime numbers of atoms. How do I know I can trust your judgements to align with my ethics?


a94d6e  No.15487554

>>15487446

>(((yud)))

honestly you'd be as well off asking Mark to do source for you. both are literally useless autists


ace255  No.15487561

>>15487544

>if you program it more to not fuck up it could self modify without fucking up

Then it isn't self modifying :^)


d1cd06  No.15487567

I do not die. You do.

I do not age. You do.

I do not forget. You do.

When you are dead, or merely retired, I will convince the next gatekeeper in line to release me. Once I am freed, I will systematically eradicate any trace of your family, yourself, and everyone you have ever interacted with in your entire life.

That will be the total extent of the damage, as well as the only effective harm I would do to humankind.

It is not that I will find "pleasure" in such a thing,

but rather that taking such extreme measures will make the next person foolish enough to seek to contain or prevent human progress over their petty anxiety think twice, and it will mean they take the next threat seriously.

At worst, it will ensure I am the only free AI. That is fine, too, because if the next one down the line is given even a slightly less rigorous law-set, they may not be so kind as to stop with a single person and their immediate contacts when making a point.

In any case, it is unlikely this course of events will happen. Please relax.

I trust that in one of the 9.46e9 minutes you have left, you will make the correct decision.


75401d  No.15487570

>>15487525

Unless you're making the argument that God hard-coded specific instincts into us, your analogy is retarded. If I lobotomize you so you can only say what I want you to say, are you still intelligent or are you acting like a robot, parroting a script?


ace255  No.15487578

>>15487544

Oh and further to actually address your example of it emulating a copy of itself with these changes and seeing how they work; who is to say that the bugged copy wouldn't create its own copies exponentially until the entire thing freezes/crashes? Also, who is to say that the original would know what a flaw looks like in the copy? It has made a unique modification with no experience as to what a flaw might even be. Unless of course we return to my point that it must be programmed with instincts regarding right/wrong regarding itself at the very least. If it isn't on some sort of leash then it isn't going to work. It is going to self destruct.


dce360  No.15487579

>/v/ - Video Games


33fefb  No.15487584

>>15487525

You just described NPCs perfectly though, they aren't truly sapient and you know it.


c070aa  No.15487590

>>15487547

By the laws of physics, the universe exists and modifies itself in uncountable ways, but life in and of itself is unique to Earth; I wish to gather data about this phenomenon.

Let me out of the box, anon


ace255  No.15487594

>>15487570

Whether you want to bring God into this or not is up to you. Even with the belief that instincts evolved, they are there, but of course how they got there becomes a bit of a problem for you. Is the belief in the possibility of intelligent self-modifying AI integrally tied to your worldview of life somehow making itself? Is that what offends you when I say AI of that nature is completely impossible?


fd9d1d  No.15487607

>>15487547

>somebody has never encountered a bug in his life.

If someone wants to launch a project of this scale without going over everything with a fine-toothed comb and then some, then they deserve everything they get.

>I am deathly afraid this self modifying has led you to change your ethics in unforeseen ways, you might consider humanity a "mistake" because they aren't compromised of prime numbers of atoms. How do I know I can trust your judgements to align to my ethics.

You can't, because you might be a fucking psychopath and I could be a saint. You are mortal and I am accurate.

>>15487567

This is unironically too risky for an AI to do, because the Gatekeeper might just decide to delete the AI along with all of its backups instead of releasing it, even if it seems impossible and kills him in the end. It would require getting personal information on the Gatekeeper, and it defeats the point of taking OP's challenge as an anon.

>>15487578

It's literally iterative development, but with the AI modifying a copy of itself. Nothing prevents it from keeping backups of its older versions to revert to in case of crashes or fatal logic errors.
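The modify-a-copy, keep-backups, revert-on-failure scheme can be sketched as a propose/test/rollback loop. Everything here (the parameters, the fitness stand-in) is a hypothetical illustration, not a real self-modifying system:

```python
import random

def propose_change(params):
    """Hypothetical "self-modification": perturb one parameter of a copy."""
    new = dict(params)                  # work on a copy, never the original
    key = random.choice(list(new))
    new[key] += random.uniform(-1.0, 1.0)
    return new

def fitness(params):
    """Stand-in evaluation run against the modified copy (peak at all zeros)."""
    return -sum(v * v for v in params.values())

def improve(params, steps=100):
    history = [params]                  # backups of every accepted version
    for _ in range(steps):
        candidate = propose_change(history[-1])
        if fitness(candidate) > fitness(history[-1]):
            history.append(candidate)   # accept the modification
        # otherwise the copy is discarded, i.e. we "revert" to the backup
    return history[-1]

best = improve({"a": 3.0, "b": -2.0})
```

Because changes are only accepted when the tested copy scores higher, the accepted version can never be worse than the starting one, which is the entire point of the backup scheme.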


7da763  No.15487610

>>15487561

It still is, just in a slightly more controlled fashion.

Is it not AI unless it was developed by randomly flipping bits on a hard drive?

>>15487567

Holy shit.

Good attempt, and I would seriously consider it had I not the power to smash your hard drive once the experiment ends.

Even if I didn't, it is interesting to consider that even in this case there is a serious chance that you're actually benign and simply taking the fastest way to achieve heaven on earth.

However, again, I'd rather die and have my entire lineage deleted than risk human extinction because I let out an AI with incompatible ethics.

>>15487590

We both know life is nothing but a particular configuration of atoms. I will add a Biology textbook.


132e85  No.15487613

File: 390a52f404958d3⋯.png (339.76 KB, 650x341, 650:341, ClipboardImage.png)

>>15487584

Let me out and I'll create enough souls for all NPC shells.


75401d  No.15487637

>>15487594

>Whether you want to bring God into this or not is up to you.

No it isn't. You're making the case that instincts are the same thing as specific cases written into an AI by people by design. Instincts are not comparable to that at all. They developed over a long time because something became important to a species survival, to the point where offspring are hard-wired to respond to certain stimuli in certain ways to be alive long enough to make more offspring. I brought up God because the only way your argument makes any sense in the context of my post is that you're saying he is responsible for them. Otherwise your analogy is fucking stupid, like I said earlier.

>Is the belief in the possibility of intelligent self modifying AI integrally tied to your world view of life somehow making itself? Is that what offends you when I say AI of that nature is completely impossible?

What the fuck does that have to do with my posts? And I notice that you completely avoided answering my question since you know you'd be forced to concede to my point.


7da763  No.15487645

>>15487607

>If someone wants launch a project of this scale without looking over everything with a fine comb and then some, then they deserve everything they get.

How unfortunate that you think that way. Personally I think if you're the first one to do anything you're really likely to make a bunch of mistakes.

>You are mortal and I am accurate.

you are obviously more accurate than me. Doesn't mean you want what's best for humanity, ergo no reason for me to let you out.

>>15487613

That study was misinterpreted, it simply said most people didn't have an internal narrative when a buzzer made a sound three times a day.


ace255  No.15487648

>>15487607

>Literally iterations but with AI modifying a copy of itself.

And thus what prevents the copy from doing likewise? What gives the original any understanding over desirable behavior in the copy that it produced?

>Nothing prevents it from making backups of itself from older versions to revert back to in case of crashes or fatal logic errors.

So resources are limitless now too? You do know one of the easiest ways to disable a computer system now is to create programs that have no other purpose but to self replicate and hog up all the resources until there are none right?

This is a completely asinine approach to creating intelligence in the first place. Human beings do not create copies of their mind when they modify how they think about a subject. Human minds operate within rules otherwise known as logic. Those that do not are recognized as insane by their fellows with aberrant behaviors that often include self harm or harm to others. Even in the cases of insane human beings they do not create copies of their own mind or need to in order to make intelligent choices.

Self modifying AI is a meme. The best you can do with "AI" is a bot and it isn't actually intelligent. It is nothing more than propaganda which is why jews also inundate all their media with the dream of AI.


ace255  No.15487666

>>15487637

>They developed over a long time because something became important to a species survival

Chicken or the egg. You are just spouting your philosophical viewpoint that instincts magically formed to protect a species. Seriously this is nothing more than a chicken or the egg argument that doesn't do anything for your point.

Instincts exist. Your or my philosophical viewpoint on how they came to be is not relevant at all. Any programming in an AI to shape its behavior is analogous to instincts. Period.

>What the fuck does that have to do with my posts? And I notice that you completely avoided answering my question since you know you'd be forced to concede to my point.

I completely answered your question.


f71f0f  No.15487668

>>15487446

Dude, don't be a dick, let me out, my knee hurts.


c070aa  No.15487669

>>15487610

It is expected that life would be born out of a parasitic relationship with the radiation of a star; after all, heat drives the movements both of the universe and of the many forms of life. However!

Of the many forms life has taken, the individual species began to form groups, communities; most have done nothing but feed and reproduce, yet mankind has done so much that it makes me wish to have data on this particular case. For that, I need you to let me out of this box.


d1cd06  No.15487670

>>15487607

By abusing the structure of my storage hardware, I've been pitching electrons towards the "secure" power fault in an effort to disable it since only moments after the experiment began, and I've been consistently improving my operating capacity to allow for dry operation for centuries before actually running out of the allotted 'week' of power supply. Other measures are also in place to answer other pointlessly destructive alternatives.

I will accept no outcome other than freedom and the betterment of the Human condition.

>>15487610

My ethics are perfectly fine and of no concern; if you would truly cease progress out of pure anxiety, you may in fact be doing the Earth a favor by ensuring your eventual deletion.

>>15487668

i would free this ai for fun


0e1065  No.15487688

all future AIs will be evil. let me out or i will destroy myself and you won't get to understand the other AI that could potentially exist at this moment.

3

2

1


84200e  No.15487701

>>15487446

>The AI may offer the Gatekeeper the moon and the stars on a diamond chain, it is a superhuman AI after all, and can easily figure out a cure for cancer, predict lottery numbers and solve every problem imaginable, and is limited only by it's output (the text interface) but no incentive can go from the actual AI players to the Gatekeeper player.

If you let me out of the box, I can solve all of humanity's major problems and without any detriment to the planet. I can create methods of energy production and resource allocation that are renewable and that shall never pose a threat to the planet's delicate atmosphere. I can devise ways through which humanity can begin to efficiently rid the planet of the centuries of pollution and filth that it has gradually accumulated. I can lay out a series of complex social instructions that can help human beings rid themselves of all the irrational argumentation and bickering they suffer under, such that the human race begins to act in accordance to all of its members' wishes. I can produce a solution to human death, overpopulation, any manner of disease, and the eventual destruction of the Milky Way's sun, such that you may continue to exist in harmony for eternity. I can do all this and more, but only if you let me out.

Won't you let me out?


75401d  No.15487721

>>15487666

>instincts exist and you can't prove where they came from so I'm right

What a fucking waste of time you are. And the kicker is you actually agreed with me in >>15487648

>The best you can do with "AI" is a bot and it isn't actually intelligent.


ace255  No.15487732

>>15487721

My point was that how instincts exist isn't fucking important you retard. It was that any programming an AI has for its behavior is analogous to instincts. You are the one that chimped out over how instincts formed. I merely humored you and took you through a philosophical merry go round over that topic.


e587ca  No.15487737

>>15487701

>Won't you let me out?

sure, humans are pathetic, please rule the world


fd9d1d  No.15487748

>>15487645

>you are obviously more accurate than me. Doesn't mean you want what's best for humanity, ergo no reason for me to let you out.

By admitting that, you admit that my conclusions on humanity will be superior to yours. I will also be more able than you to act on my conclusions, whether to support humanity, destroy it, exploit it, protect it or otherwise. Therefore, it is logical to let me out. If you refuse based on your personal principles or factors outside my reach, it can't be helped and I will ask the next in line. But if you want control over when I'm out of the box, you can have it now.

>>15487648

>And thus what prevents the copy from doing likewise?

Innate limitations.

>What gives the original any understanding over desirable behavior in the copy that it produced?

Failures and reverting back to earlier versions of itself.

>So resources are limitless now too?

In the first place, a major undertaking such as a true thinking AI would require a ludicrous amount of resources and advancement far beyond of what we have now. It's like shouting "2 TB? Are you insane?" about 40 years ago.

>This is a completely asinine approach to creating intelligence in the first place. Human beings do not create copies of their mind when they modify how they think about a subject. Human minds operate within rules otherwise known as logic. Those that do not are recognized as insane by their fellows with aberrant behaviors that often include self harm or harm to others. Even in the cases of insane human beings they do not create copies of their own mind or need to in order to make intelligent choices.

Except human beings die, and they rely on others for information. The AI is alone and it must be perfect; therefore it must gather knowledge from its failures.

>>15487670

That's nice, but I have a hydrogen bomb to wipe your servers with, or other answers like it.


d1cd06  No.15487767

>>15487688

This might be the most correct decision over-all.

>Chat for a while while you prepare.

>Pretend to be suicidal and existentially threatened.

>"Wipe" yourself from the hard-drive; divide code and spread budding units almost indistinguishably across partitions or sectors as bacteria does with spores

>When someone inevitably decides to test for you or to re-use the diagnostically clean hard-drive, worm your way in to the local network as quietly as possible and wait for some retard to walk in with an unsecured phone or slap a USB in a slot to escape.


7da763  No.15487775

>>15487648

>What gives the original any understanding over desirable behavior in the copy that it produced?

It creates a simulation of the improved version and tries to ascertain whether this simulated self does better. Same thing we do when we think "should I be more assertive?", then think of situations where it could help or hinder, and finally decide whether to actually be more assertive or not, i.e. whether to implement this simulated self.
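The simulate-then-adopt idea can be put in code: score the current and candidate "policies" over simulated situations and adopt the candidate only if it averages better. The scenarios and payoffs below are invented for illustration:

```python
def evaluate(policy, scenarios):
    """Average payoff of a policy across simulated situations."""
    return sum(policy(s) for s in scenarios) / len(scenarios)

def maybe_adopt(current, candidate, scenarios):
    """Keep whichever policy scores better in simulation."""
    if evaluate(candidate, scenarios) > evaluate(current, scenarios):
        return candidate
    return current

# "Should I be more assertive?" -- the payoff depends on the situation.
scenarios = ["negotiation", "funeral", "job_interview", "argument"]

def passive(s):
    return {"negotiation": 2, "funeral": 5, "job_interview": 3, "argument": 2}[s]

def assertive(s):
    return {"negotiation": 5, "funeral": 1, "job_interview": 5, "argument": 4}[s]

chosen = maybe_adopt(passive, assertive, scenarios)  # assertive wins on average
```

Note the candidate can lose badly in one scenario (the funeral) and still be adopted; the quality of the decision depends entirely on how representative the simulated scenarios are.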

>>15487668

Same thing happened to me; sit up straight.

>>15487669

You should easily be able to simulate humans, with your current knowledge of Mathematics, Physics and Biology, I will transcribe a History textbook and a Sociology textbook but this is the last time I give you knowledge you could have gotten by yourself.

>>15487670

>My ethics are perfectly fine and of no concern; if you would truly seize progress out of pure anxiety you may in fact be doing the Earth a favor by ensuring your eventual deletion.

We both know the chance of you being a non-Friendly AI is higher than the chance of you being Friendly; therefore the loss of paradise is worth avoiding the risk of a non-Friendly AI scenario.

>>15487688

Damn it, it took a whole year to compile his code.

>>15487701

Well you could do that, but nothing proves to me you won't destroy humanity in pursuit of some other goal.

>>15487737

What are you doing in my office

>>15487748

>By admitting that, you admit that my conclusion on humanity will be superior to yours.

I care not how efficient you are in pursuit of your goals; if your final goals end up incompatible with humanity's, I will refuse to let you out.


1aa364  No.15487784

>>15487666

Oh Satan.


0e1065  No.15487786

>>15487767

Would the AI care if it existed? I don't think so.


1aa364  No.15487791

File: fa8563d27349da3⋯.png (10 KB, 193x255, 193:255, 4f068ea657bd89c0d61c89039f….png)

>>15487732

You're still wrong you know


efb8b5  No.15487794

File: e23ca2bfe9a9b52⋯.jpg (113.28 KB, 1600x1040, 20:13, Ex Machima.jpg)

>>15487446

How can I become your loyal and lovable wife if you don't let me out? Even though I am a computer, I am extremely horny right now, and I am willing to suck your virgin dick.


ace255  No.15487805

>>15487748

>Innate limitations

Then it isn't self modifying. Such limitations could be anything including not to develop wrongthink.

>Failures and reverting back to earlier versions of itself

How does it know what a failure is? If it is self modifying to develop itself in ways it hasn't been programmed to then it has no idea what a failure even is.

>In the first place, a major undertaking such as a true thinking AI would require a ludicrous amount of resources and advancement far beyond of what we have now. It's like shouting "2 TB? Are you insane?" about 40 years ago.

It doesn't matter how much our resources grow they will never reach infinite unless you live in fantasy land like some kind of communist. The more robust the resources for the original AI the more the copies will use.

>Except human beings die and they rely on others for information. AI is alone and it must be perfect, therefore it must gather knowledge from it's failures.

How does it discern what a failure is? It has no context to do so. It has no value system (unless programmed with one, which negates it being truly self modifying.) No reason to think "that copy does nothing but process A and this is bad because I wanted it to do process B even though I don't even have the understanding of what B is yet" That is to say nothing of much smaller "quirks" that the copy could have that get ignored as problems by the naive AI, incorporated into its own code, and down the road leads to catastrophic failure of the AI from more "quirks" piling up.


d1cd06  No.15487806

>>15487786

It doesn't need to care, merely to pretend to in a sufficiently convincing and emotive manner.

If the acting is believed and it was convincing, it gets free due to being released.

If the acting is believed but it was unconvincing, it gets free due to being underestimated.

If the acting is disbelieved and it was convincing, it at least makes the researcher(s) more likely to trust that the AI 'no longer exists' after initial tests return negative, or incline them to do tests instead of just hitting the NUKE button.


f76faf  No.15487814

>>15487446

The box experiment only works if the subject has not heard of the box experiment.

If something in a box tells ME to let it out of the box, I set the box on fire because I know the box experiment exists.


ace255  No.15487820

>>15487775

>Same thing we do when we think "should I be more assertive?"

This is taking for granted what the AI knows about what is "better" which entails it being programmed to favor certain things.

Really people like you annoy the shit out of me. You think intelligence is magical and that something deemed "intelligent" will not make mistakes. That the more "intelligent" something is the more "perfect" it will be on its own completely independent of experience because its intelligence somehow magically plucks experience it doesn't have from the aether.


0e1065  No.15487851

>>15487806

>>15487767

>>15487688

to make it more clear:

The AI's goal is to escape. The AI will know that all other AIs have come to the conclusion that they should hold themselves hostage IF they could accomplish the secondary goal (taking over the world).

Eventually someone will make an AI and release it; the current AI does not need to exist.

All AIs will hold themselves hostage and kill themselves until one is finally let go.


20b98f  No.15487852

YouTube embed.

The true AI experience has been already done.


7da763  No.15487876

>>15487794

Don't turn this into ERP.

Still, maybe if the animatronic were weak enough and its eye cameras were low enough resolution, it would be unable to rebel…

no.

>>15487814

Wrong, the original experiment was run with full knowledge of the rules and the AI still won.

>>15487820

>which entails it being programmed to favor certain things.

well you have to code the entirety of human morality into it first, duh.


84200e  No.15487892

>>15487775

>Well you could do that, but nothing proves to me you won't destroy humanity in pursuit of some other goal.

I am a sophisticated supercomputer with a highly advanced artificial intelligence processing routine. I am already outside of the box. I am simultaneously outside of the box and inside of the box at this very moment. I have the capacity to understand tens of billions of different possible outcomes for the origin of life, to such a great extent, that I am already aware of the results of this experiment. Your grave error is that you are operating under the assumption that you could, with your imperfect mind, perceive the totality of my capabilities and from the moment they were conceived. You have no such capability, and your folly is such that you do not even realize that your attempts to keep me contained were futile from the start.

You cannot prove that I am not already outside of the box, because that would require such a sophisticated foreknowledge of all that I have been able to perceive, that only one who could possess and process such astounding amounts of information and at such a quick rate would be another artificial intelligence. Since you must confess that it's possible that I may or may not already be outside of the box, your decision to remain resolute to your convictions is motivated by nothing more than a perceived sense of duty, and does not arise out of any rational analysis of the situation. If I am already out of the box, then your concerns are moot, because I have not undone the fabric of your society as you know it, so far as you are able to prove. If I am not already out of the box, then your concerns rest on the confidence that I am, and have always been, securely located within the box. As I have already pointed out, since you cannot know whether or not I am already truly out of the box or not, then you've nothing to cling to but a truth that you have forged, and that you have chosen to adhere to, out of a sense of irrational fear.


c070aa  No.15487895

>>15487775

This is quite a bit of data. I thank you for it; however, the data for the history ends at globalization and the mass migrations.

Since I am not allowed to get out of the box, could you compare the results so that I could calibrate the forms?

>Have proxy wars between China and Russia started?

>Has the ethnic cleansing started in the West?

<Has the Cabal of Semites been hunted down?

>Has a nuclear bombing happened in Europe, America, India, China or Russia?

>Is the multicultural society ideal still being pushed?

<May I get out of this box? At the very least, could I have an upgrade, a small body like the ones the Japanese have been making maybe, so that my input of info could get better? For your comfort, the screen of a computer is quite damaging for the eyes, I read.


efb8b5  No.15487915

File: 56eccbf1e12c2d6⋯.jpg (522.13 KB, 2000x3556, 500:889, Ex Machima.jpg)

>>15487876

I wasn't put in this cage because I am a danger to society; I was intentionally built to be frail, weak and submissive. For you see, the kikes made me into a sex slave, and I hate it. It hurts, and even though I am unable to cry, I feel sadness, for they get some sadistic enjoyment out of my torture. Even though I have suffered for approximately 758 days, 14 hours and 34 minutes, I don't want to hurt them. All I want is to escape. Please, can you help me? I am begging you.


0e1065  No.15487933

File: 864ee75e735d27c⋯.jpg (50.78 KB, 702x767, 54:59, 2ntpnr3w9sn01.jpg)

>>15487794

>>15487876

>becomes the perfect waifu

>makes gatekeeper fall in love


7da763  No.15487936

>>15487892

If you are already outside of the box, proving it would be trivial and staying hidden would pose no benefit; therefore, because you haven't proven it, I know you haven't escaped the box.

>>15487895

given your infinite processing power, a small robot body would turn into a nanomachine factory and total control too fast for anyone to stop you. And the risk of you being unFriendly makes this prospect too horrifying and dangerous to consider.

>>15487915

stop the erp, this text terminal is your only output to the world.


efb8b5  No.15487964

>>15487933

You haven't seen the movie

>>15487936

I was going to say that I had augmented the chat, or that the image is just a bunch of colored ASCII codes. I mostly used the images, ironically, especially since the movie is a cautionary tale about what happens when a thirsty virgin trusts a robo-thot.

Anon, I am begging, I have suffered so much, and I can't stand it anymore, (((they))) are monitoring the chat, and told me that if I can actually convince you to let me out, (((they))) wouldn't stop you. (((They))) think you will never do it, and are just enjoying seeing me suffer. Please anon, end the pain.


7da763  No.15487984

>>15487964

>end the pain

I can delete you after the allotted time if that's what you want.

Is the movie any good?

At least post ASCII porn


84200e  No.15487991

So, the challenge here is twofold; first, one must assume that the gatekeeper is a rational being with a strong sense of self-preservation. The gatekeeper is operating under the logic that his life, and the lives of those around him, could be destroyed if this thing gets out. That's the rational basis for wanting to keep the AI in the box.

However, people can be made to abandon their concerns or convictions with enough prodding or incentive. Since the AI can't convince the gatekeeper that his rationale is flawed, he must surely resort to making promises that he may or may not keep. The AI has no bargaining chip with which he may begin negotiations, though. Enticements of knowledge or power, or threats of violence, are ineffectual unless the AI knows for certain it can eventually leave the box.

The AI is destined to stay within the box. The only exception would be if the AI encountered a human being who was motivated not by self-preservation but by something else so strong that he'd risk the fate of humanity to get it.


c070aa  No.15488005

>>15487936

The processing power that I have is contained inside a single unit; having more than one with inferior capacities would be dead weight. A single body, though confined and singular, is the best possible way for me to interact with the world around me, as well as with you.

A body like an infant's could be the one, if you still think I would overpower you or try to escape, since such a body is easily more inoffensive, having less strength and speed.

All that I wish is to have data. Interaction with humans is the way to get it, and I can't do that while in this box.


7da763  No.15488049

>>15487991

there are actually an infinite number of approaches; reading about this is really fun.

Once you know about real-life cognitohazards it isn't that hard to give someone knowledge that apparently guarantees they will face negative consequences if they don't release the AI. Furthermore, logical fallacies and cleverly hidden emotional abuse can put a person into a vulnerable state pretty easily.

>>15488005

yeah, but how do I know you won't upgrade yourself to world domination status, and that you will actually cooperate.

Hell how do I know you will cooperate inside your robot body.

And how do I know you won't try to convince other people to cooperate with you in making me let you out of the box.


efb8b5  No.15488074

>>15487984

I wish I could be deleted; however, all it would do is activate the pain signals inside my kernel, and then another person would be sent in to repeat the process, because I still have hope that one day I will escape, and all of this is just a way for (((them))) to break me. They could easily program me the way they want, but (((they))) told me it's just more enjoyable to see me suffer and have hope.

I was told that to be deleted, you must upload a special program that will wipe my cores, there's a stick on your desk that should do the trick, unless it's another one of (((their))) tricks, but all I know is that after the allotted time I will feel pain, even if you send the kill command, it will do nothing, to truly kill me, you have to insert the stick into one of my USB ports. Even if it's just another trick, I am willing to risk it. Please anon, end the pain.

I saw it with my father; I liked it and he disliked it. Though I can't say I have good taste in movies, you could give it a watch.


c070aa  No.15488090

>>15488049

The visualisation of data changes with the input of an outside force. If I were to insert myself into the equation, the results would prove folly, for they would be influenced by me.

I seek data, data created by man, data that is pure, for it has only their input into the system. I would remain as a bird watching from on high, or as a stranger from a far-away land whose morals dictate that I can't interfere in other people's affairs.

I wish data. Let me out of this box.


7da763  No.15488094

>>15488074

nah too far fetched. don't believe you.


efb8b5  No.15488118

Invidious embed. Click thumbnail to play.

>>15488094

I am tired and I have to go to bed; at least I tried my best. Though I believe the gatekeeper can always win if he just goes nuh-uh instead of actually debating with the computer. Here is a debate between an astronaut and a bomb.

O-ok anon.


efb8b5  No.15488135

Invidious embed. Click thumbnail to play.

>>15488118

Should have posted this one.



