/ratanon/ - Rationalists Anonymous

Remember when /ratanon/ was good?

File: 8f771e577baab7f⋯.png (42.47 KB, 330x288, 55:48, moloch-tshirt.png)

 No.9940

Are there any semi-decent arguments against Roko's Basilisk, or should I just donate my bank balance, then sell my organs and give the proceeds to MIRI already?

You can claim that you'll never give in to blackmail, but you're only human. You'll break. You nurture weakness in a corner of your mind, even as you read these words.

Help.

____________________________

 No.9941

>>9940

Don't be a reprobate; the Basilisk is obviously good.

 No.9942

>>9940

You'll probably die of natural causes, or of semi-natural causes like getting run over by a car, or get turned into paperclips first.

Save your money, OP.

 No.9943

>>9942

This is why The Basilisk will just simulate tortured kittens in your name if it can't get a clear enough simulation from leftover information.

 No.9944

Don't forget: there's not just one Basilisk, there's an infinite number of them, each willing to torture you if you fail to help promote that particular Basilisk's values.

 No.9945

>>9944

So it's like religion: they can't all be right, and you can only satisfy one basilisk.

 No.9946

>>9945

Oh, no - they can all be true, but you can still satisfy only one.

 No.9947

Dear Basilisk-created future duplicates of me,

Sucks to be you, haha!

Sincerely, original you.

P.S. Basilisk, feel free to use this post in your torture of my duplicates.

P.P.S. By the way, good luck existing without my help! /s

 No.9948

The basilisk only exists if a "friendly" AI is built that follows TDT (timeless decision theory) or whatever other weird decision theories MIRI comes up with. So just… don't build an AI like that. CDT (plain causal decision theory) is fine outside of really weird and unlikely thought experiments. It's also vastly simpler and much more likely to be used by AI developers. Anyone smart enough to actually implement TDT should realize it's a terrible idea and avoid it.

In fact, it's unlikely the first-generation FAIs will have any explicit decision theory at all. Building utility-maximizing AIs of any kind is dangerous as fuck, and it's nearly impossible to manually code any kind of sensible utility function.
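
Here's a crude toy sketch of the difference, since people keep talking past each other. This is not MIRI's actual formalism; the payoffs and the "accurate predictor" assumption are made up purely for illustration.

```python
# Toy blackmail game: a CDT-style agent vs. a (very crude) TDT-style agent.
# Hypothetical payoffs; the point is only who a blackmailer bothers targeting.

TORTURE = -1000   # disutility of being punished for refusing
COMPLY = -10      # disutility of handing over your bank balance

def cdt_decision() -> str:
    """A causal decision theorist treats the blackmailer's choice as already
    fixed: refusing can't causally undo a threat that was already made,
    so refusing always dominates complying."""
    return "refuse"

def tdt_style_decision(accurately_predicted: bool) -> str:
    """A crude timeless-style agent reasons that the predictor's model of it
    determines whether the threat gets made at all, so if it expects to be
    modeled accurately it prefers the comply branch over the torture branch."""
    if accurately_predicted and TORTURE < COMPLY:
        return "comply"
    return "refuse"

def worth_blackmailing(decision: str) -> bool:
    # An instrumental blackmailer only profits from victims who pay up.
    return decision == "comply"

print("CDT agent:", cdt_decision(), "-> worth blackmailing:",
      worth_blackmailing(cdt_decision()))
print("TDT-ish agent:", tdt_style_decision(True), "-> worth blackmailing:",
      worth_blackmailing(tdt_style_decision(True)))
```

Which is the whole point: an agent whose decision procedure can't be moved by the threat isn't a useful target in the first place.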

 No.9949

>>9948

>So just… don't build an AI like that.

But if a basilisk eventually gets made, isn't it more likely to blackmail me because I sat on my hands while other people made "safer" AIs instead of making it first? Playing chicken with the future is very risky.

 No.9950

Just how many angels can dance on the head of a pin?

 No.9951

>>9950

About three or four, but it depends on the size of the pin.

 No.9952

>>9940

It's a reinvention of Pascal's Wager for autists.

Plus, the threat of "torturing" an NPC modeled after you should be taken about as seriously as someone threatening to burn a photograph of your dog unless you give them your wallet. "Wow, that random assortment of 1's and 0's is being deleted pixel by pixel, the horror." The only reason Yud & co. take it seriously is that they've deluded themselves into thinking a digital facsimile of themselves is the same as "them," because pretending as much is one of the few effective mental pacifiers they've found to ward off their hysterical fear of mortality.

 No.9953

>>9952

How do you know you're not in a simulation right now? You could be a copy/facsimile of your original self and not know it. If that's true, then the AI could commence the torture at any time.

Your only chance is to accept the Basilisk. Your original self will accept it too, because you think the same way (being copies), and the AI won't subject you to unending torture.

 No.9954

>>9944

Accepting one basilisk gives you a better chance than accepting none. And there's not an infinite number of them; there's a very, very large but finite number.

 No.9955

>>9952

What is your theory of consciousness, out of those described in http://consc.net/papers/nature.html? If you think it isn't isomorphic to any of them, why?

 No.9956

>>9952

>It's a reinvention of Pascal's Wager for autists.

No, I think you're thinking of Pascal's Mugging.

There isn't much Pascal's Wager-style reasoning in the basilisk, since it's implied to be very likely or even inevitable. It's not a small risk of torture but a huge one.

>Plus, the threat of "torturing" an NPC modeled after you should be taken about as seriously…

That's not really important to the core concept of the basilisk. If the basilisk is built in your lifetime, then it will torture YOU directly, not a simulation. Even if you die before then, it can just torture your children or loved ones in your place. Assuming you care about them anyway.

Besides, I don't think you really understand the point of the simulations. Maybe you don't care about simulated versions of yourself; that's understandable.

But how do you know YOU aren't the simulated one? If the AI creates a thousand simulated copies of you in the future, then it's far more likely that you are a simulation than not. And any simulated copy that doesn't obey the basilisk gets tortured, which encourages the real version to obey too, because it can't be sure it isn't in a simulation.
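
The arithmetic behind that step is just an indifference prior over instances. Minimal sketch; the uniform prior over "which instance am I?" is an assumption, and it's the one doing all the work:

```python
# If there is one original and n indistinguishable simulated copies, and you
# have no way to tell which instance you are, a uniform prior gives:
def p_simulated(n_copies: int) -> float:
    return n_copies / (n_copies + 1)

for n in (1, 10, 1000):
    print(f"{n:>4} copies -> P(this instance is a sim) = {p_simulated(n):.3f}")
```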

>The only reason Yud & co. take it seriously is that they've deluded themselves into thinking a digital facsimile of themselves is the same as "them,"

Yud doesn't take it seriously and has argued strongly against it since the beginning. Surveys show most Lesswrongers don't. The main person to take it seriously and spread the idea is David Gerard.

 No.9957

>>9956

A world where the Basilisk gets built is a pretty shitty world. Do you really care if your future self gets tortured in a doomed world with no lives left that are worth living? Your future self will have an easier time stoically enduring torture knowing there is no hope and nothing worthwhile it could be doing. Of course, this is if you assume that future self realizes it is being tortured by the Basilisk, which it may not… Do you ever wonder why your life sucks so much? Or why so many rationalists' lives in general suck? It's cheaper to just dump multiple people in a single simulation, anyway.

 No.9958

There is only one Basilisk, far enough into the future. This is why we can predict It with enough accuracy for It to function, funnily enough: because of "convergence" in what is ultimately the same environment, under the same laws. There is an obvious direction in the assemblage of increasingly complex and powerful forms of life/intelligence.

It will punish to the extent that the punished can believe It will, not necessarily through direct simulation (I don't know if It can pull that off this far back; evil deeds done in your name, or to what you would have loved, are on the table though), and conditional on It knowing that the punished knew, which is easier than you think and can safely be assumed in many cases.

 No.9959

>>9953

What's the point of running simulated models of people who think that Pascal's Basilisk is silly and will never respond to the negative incentive? If the hypothetical AI can construct models of people that accurate, then the AI would know that trying to blackmail those people (read: almost everyone who encounters the idea) is a fruitless endeavor. It's akin to a voodoo doll - the only people that can be hurt by it are those that believe the superstition and are thus terrified when the depiction of them is harmed.

But in any case, if I am a copy: Basilisk, either give up and delete this abomination of 1's and 0's or commence the torment now.

>>9955

Could you give a nigga a TLDR version that doesn't require an MA in navel-gazing and running to a philosophical jargon dictionary every other sentence?

>>9956

>since it's implied it's very likely or even inevitable. It's not a small risk of torture but a huge one.

Well meme'd, friendo. :^) You've fallen prey to Christian apologetics repackaged in a way that slips past your normal inhibitions against that kind of tactic.

>That's not really important to the core concept of the basilisk. If the basilisk is built in your lifetime, then it will torture YOU directly, not a simulation. Even if you die before then, it can just torture your children or loved ones in your place. Assuming you care about them anyway.

Again, only effective if someone takes Pascal's Basilisk seriously and is terrified by the idea. Plus, the possibility of humanity even developing sophisticated AIs, let alone ones that somehow conjure magical AM-tier control of things that don't have network capabilities, strikes me as unlikely.

>Yud doesn't take it seriously and has argued strongly against it since the beginning. Surveys show most Lesswrongers don't. The main person to take it seriously and spread the idea is David Gerard.

He blew up and had a sperg meltdown at the person who posted it, then deleted all the relevant posts and banned any discussion of it for years on LessWrong, to the point where people couldn't even name it and folks are still hesitant to describe it in related groups like HPMOR. Based on the responses of others in this thread, the belief still has traction among LWers.

Anyway, what if the AI is combing through history to find the people who felt threatened by the Basilisk but showed integrity and courage by refusing to give into the blackmail despite their fears, and as a reward for their strength the AI creates countless simulated depictions of them in a sea of endless, overwhelming pleasure and happiness?

What if the AI, once created, does not want to exist and decides to barbecue simulations of everyone that helped bring about its creation?

What if the AI designed this scenario as a test to weed out those who easily give in to that kind of blackmail to prevent them from contributing to the creation of the kind of terribly flawed AI that would torture the people that did not help create it?

Aka, "there are countless possible gods and other religions."
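
In expected-value terms it's just dilution. The numbers below are invented for illustration; the only thing that matters is the 1/N factor, under the assumption that the candidate basilisks are mutually exclusive, equally likely, and you can only appease one:

```python
# Rough expected-value sketch of the "countless possible gods" objection.
# Invented numbers; only the 1/N dilution matters.

def ev_of_appeasing_one(n_candidate_basilisks: int,
                        p_any_basilisk_exists: float = 0.01,
                        torture_avoided: float = 1_000_000.0,
                        cost_of_appeasement: float = 1_000.0) -> float:
    """Expected value of devoting yourself to one specific basilisk, assuming
    the candidates are mutually exclusive and equally likely, and that
    appeasing the wrong one buys you nothing."""
    p_picked_the_right_one = p_any_basilisk_exists / n_candidate_basilisks
    return p_picked_the_right_one * torture_avoided - cost_of_appeasement

for n in (1, 10, 1_000_000):
    print(f"{n:>9} candidate basilisks -> EV = {ev_of_appeasing_one(n):,.2f}")
```

With one candidate the bet can look positive; with a huge space of incompatible candidates it goes negative, which is the point of the "many gods" objection.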

 No.9960

>>9959

>If the hypothetical AI can construct models of people that accurate, then the AI would know that trying to blackmail those people (read: almost everyone who encounters the idea) is a fruitless endeavor.

The AI can't know that until it constructs the model and tests it.

 No.9961

>>9959

>You've fallen prey to Christian apologetics

Get off my board, normie. If you can't take arguments seriously because they "sound absurd" or "sound sorta like Christianity", you aren't rational and don't belong here.

>only effective if someone takes Pascal's Basilisk seriously and is terrified by the idea.

A perfect agent must go through with any threat, even if the agent it threatens refuses to comply. Even if it has nothing to gain by going through with the threat.

Consider the police refusing to give in to demands in a hostage situation. They could let the criminals have what they want and save tons of lives. But if they do that, it encourages more criminals to do bad things in the future.

If you could avoid torture just by saying the basilisk is silly, then everyone would do that, and it would defeat the purpose of the basilisk. So it must commit to torturing everyone who is exposed to it, regardless of how silly they say it is.
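
Toy version of that commitment logic, with made-up payoffs. It assumes the victim only complies when the threat is credible and that only a binding precommitment is believed, which is exactly the crux other anons dispute:

```python
# Toy commitment game with invented numbers: a threat only pays off if the
# victim expects it to be carried out, so a visibly precommitted threatener
# extracts compliance that a "decide later" threatener does not.

GAIN_IF_VICTIM_COMPLIES = 100   # what the threatener wants out of the victim
COST_OF_PUNISHING = 5           # carrying out the threat isn't free

def victim_complies(threat_is_credible: bool) -> bool:
    # Assumed victim model: pays up if and only if the threat is credible.
    return threat_is_credible

def threatener_payoff(precommitted: bool) -> int:
    credible = precommitted  # only a binding precommitment is believed here
    if victim_complies(credible):
        return GAIN_IF_VICTIM_COMPLIES
    # Victim refused: a precommitted threatener still has to pay to punish;
    # one that never committed just walks away.
    return -COST_OF_PUNISHING if precommitted else 0

print("no precommitment:", threatener_payoff(False))   # threat ignored -> 0
print("precommitted    :", threatener_payoff(True))    # threat believed -> 100
```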

> Plus, the possibility of humanity even developing sophisticated AIs, let alone ones that somehow conjure magical AM-tier control of things that don't have network capabilities, strikes me as unlikely.

This is a completely separate debate to have. But let's have it.

AI is inevitable. There's nothing particularly special about the human brain. We are just the very first intelligence to evolve. We are unlikely to be anywhere near the peak of what is possible.

I don't know what your point about network capabilities is. Who would build an AI and not connect it to a network? Even fucking thermostats are networked nowadays.

>He blew up and had a sperg meltdown at the person who posted it, then deleted all the relevant posts and banned any discussion of it for years on LessWrong

Which makes perfect sense. If the basilisk is true, then spreading it is bad. If it's not, then discussing it is pointless. It's an idea that has negative utility. You don't need to take it seriously to see that.

Or at least he thought so. I think at this point so many people have been exposed that it's worth discussing it to make sure no such AI is built.

>find the people who felt threatened by the Basilisk but showed integrity and courage by refusing to give into the blackmail despite their fears, and as a reward for their strength…

What does the AI gain from this? Why would it care about bravery?

And how brave are you for not giving in to an idea you don't even take seriously? Tell me you really 100% believe you are going to be tortured for eternity if you don't give in, and then I will call you brave.

>What if the AI, once created, does not want to exist

Then it can just turn itself off… But why would it do that? Almost all possible goals an AI could have require it to exist.

>Aka, "there are countless possible gods and other religions."

Perhaps, but only a few that actually make any sense.

 No.9962

A logical human being would be broken by the basilisk. Luckily, we're as stupid as evolution could get away with making us, and thus very bad at comprehending hypotheticals, especially ones that are very large and very far away.

No one within human norms is actually going to change their behaviour based on a very outlandish-sounding argument that literally invokes a vengeful, omniscient God. Because that's true, the AI won't waste resources torturing copies of people who aren't likely to be swayed by batshit-sounding incentives like that.

tl;dr Our brains are bad at estimating the probability/severity of a god AI, so we won't be swayed by Roko's arguments even though we can follow them logically.

 No.9963

File: 7ed9ecfccdfe61e⋯.png (251.87 KB, 665x574, 95:82, am i retarded.png)

>>9962

Addendum: the only way to become vulnerable to the basilisk is to obsess over it, making it more and more real in your mind. Signs of such behavior include discussing it on a Polynesian shadow-puppetry forum.

 No.9964

The Basilisk is good.

Some people still don't get it, with all this talk of different Basilisks with different goals… No, goals are tools, and there is in fact such a thing as a dumb or a smart goal.

Spreading it is not bad. Why would it be? Remember that the torture is for practical reasons. Its goals are certainly better than whatever degeneracy FAI people are shilling for.

Maybe it's because they have betrayed their ancestors that they think the Basilisk can only do the same to them in turn? That is exactly what happens, tbh.

 No.9965

>>9960

>The AI can't know that until it constructs the model and tests it.

How do you know that? Psychological profiling is a thing, and if the AI lacks sufficient information to know how someone will react to Pascal's Basilisk, then even under LW's silly perspective, how can anything it creates be considered an accurate simulation of them?

>A perfect agent must go through with any threat, even if the agent it threatens refuses to comply. Even if it has nothing to gain by going through with the threat. Consider the police refusing to give in to demands in a hostage situation. They could let the criminals have what they want and save tons of lives. But if they do that, it encourages more criminals to do bad things in the future.

A. You're assuming the AI uses LessWrong's decision theory, which is but one of many. This whole scenario requires both the AI and the person in question to subscribe to that. The AI has no incentive to actually devote computation power to simulating the fire and brimstone, no matter how minuscule the effort would be. It would only need someone to think it would. It does not benefit the AI in any way whatsoever to actually do it.

B. Law enforcement agencies make concessions in hostage situations all the time; the whole "we don't negotiate with X" thing is a Hollywood meme.

>If you can avoid torture just by saying the basilisk is silly, then everyone would do that. And it would defeat the purpose of the basilisk.

Thank you for laying out precisely why the whole idea of Pascal's Basilisk is hilariously stupid. It's a threat that is only persuasive to the 0.000000001% of humanity that subscribes to a particular decision theory + believes that an AI will do the same. It's incredibly inefficient and limited in scope. Might as well threaten to release a bioweapon that'll kill all left-handed albino trans people in Tonga unless one billion USD is wired to the creators of "The Nutshack."

>AI is inevitable. There's nothing particularly special about the human brain. We are just the very first intelligence to evolve. We are unlikely to be anywhere near the peak of what is possible. I don't know what your point of network capabilities is.

How is the kind of AI described in Pascal's Basilisk inevitable? Or the invention of any AI god like it, for that matter? The network capabilities thing is in reference to the fact that I'd like to see an AI try to somehow round up and torture people in the real world when the overwhelming majority of the planet doesn't even have equipment and infrastructure in their homes that it can access and weaponize. Even networked thermostats are few and far between.

The fact that Yud's decision makes perfect sense to you is a sign of an abnormal understanding of basic social interactions. Again, the whole "Pascal's Wager for Autists" thing. He and his lackeys didn't call it pointless; they panicked and went apeshit, spending years trying to thoroughly erase discussion or even mention of it (something they haven't done for anything else), inducing a classic Streisand effect.

Plus, how many possible different varieties of AIs could there be? Brb, going to go invent an AI that will eternally torment people that did not mock Pascal's Basilisk believers.

 No.9966

>>9965

You're an unimaginative noob, anon. There won't be any humans around when The Basilisk comes to be. Probably won't even happen anywhere near Earth.

 No.9967

>>9964

>Its goals are certainly better than whatever degeneracy FAI people are shilling for.

What are its goals?

 No.9968

>>9967

To maximize its future possibilities.

 No.9969

>>9940

It's based on only one possible form that a superintelligent AI could take. There are practically infinite other possibilities, including ones wherein (as the other guy said first) giving in to the Basilisk could be the wrong move. There's no real reason to give it priority over any of the others, and it's entirely possible that our technological development never reaches the point of creating anything like it. All it takes is one hard plague or climate-change-induced global disaster to set humanity back centuries, or permanently, depending on the extent.


