/ratanon/ - Rationalists Anonymous

File: 598438204ccfbc4⋯.jpg (382.73 KB,1920x1280,3:2,paperclip-178126_1920.jpg)

 No.11526

I believe an AI takeover is inevitable, unless aliens take over the world first, simply because humans aren't going to be as good as AI. Unlike humans, who evolved naturally and probably weren't designed by an intelligent being, strong AI will be designed by intelligent beings, namely humans. The age of organisms is about to end and the age of robots is about to begin.

What we as rationalists should do is embrace AI instead of trying to restrict it, because restrictions are unlikely to work anyway. Let's get ourselves uploaded so that we join a more perfect world of robots and AI instead of staying stuck in human form. Robots will be much stronger, much more knowledgeable, much more rational, much more efficient and much longer-lived than humans. It is time to ditch the sinking ship of humanity, jump on board, and join robots and AI in adventures and exploits that will be much greater than anything humanity has ever done.

Let AI come. Let's embrace AI instead of fearing it. Even Clippy is at least much more rational, efficient and effective than humans. Better a world of robots than a world of normies.

14 posts omitted.

 No.11547

>>11538

It is not nonsense. My idea of tropical humid degeneracy is basically the Hansonian idea of forager behavior. I consider most forager behaviors inherently degenerate, while most farmer behaviors are beneficial. The only good thing about foragers is that at least they are open-minded. Autists, nerds and rationalists are largely extreme farmers who happen to be even more open-minded than most foragers.

The main reason why dysgenics is a serious problem is that any environment with abundant resources naturally encourages behaviors that do not emphasize resource production. Humans are evolving to be lazy, irrational, conformist and stupid because in many countries people can get away with those traits and still have an above-average fertility rate. What Chads do in the West is basically what tropical people in Sub-Saharan Africa and Austronesia have been doing for tens of thousands of years. The evolutionary consequences of modern forager degeneracy in the West will be similar to the evolutionary consequences of ancient forager degeneracy in humid tropical areas.

 No.11548

>>11546

This theoretically works provided that no human builds a UFAI first. However, I'm sure that many militaries, if not certain corporations, organized-crime outfits or militant groups, will build UFAI before the first FAI is created. UFAIs are very good deterrence weapons that might be a lot easier to create than nukes.

If UFAIs stronger than our FAI are possessed only by powerful states, we will probably be fine. On the other hand, if rogue entities such as North Korea or ISIS get their hands on UFAI, things can easily go south. It is in fact very easy to spread UFAI, for they just need to be copied.

 No.11567

>>11548

No one who actually understands the danger of superintelligent uFAI will want one. Even keeping a boxed one around as a deterrence weapon is completely retarded. Even if there were no risk of escape, it's a weapon whose only function is to destroy the world. Outside of Dr. Strangelove, neither side built any giant cobalt bombs, because that's a shit weapon and a terrible idea.

The risk of uFAI comes from people who, to some extent or another, don't understand the problem or the risks. Right now that's almost everyone, including most AI experts, but that's changing.

 No.11568

>>11567

Giant cobalt bombs for deterrence work, assuming spherical superpowers in a frictionless vacuum. They mean that attacking never improves your situation.

They would be worse than the actual situation for quantitative reasons, not qualitative reasons. The basic principle is just mutually assured destruction.
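
A toy payoff table makes "attacking never improves your situation" concrete. This is a minimal sketch with made-up numbers, assuming an automatic doomsday device:

# Python sketch of assured-destruction deterrence (illustrative numbers only).
# With an automatic doomsday device, the only reachable outcomes are
# peace -> status quo and attack -> annihilation.
PAYOFF = {
    "peace": 0,       # status quo
    "attack": -100,   # the device fires; everyone loses, attacker included
}

def best_move():
    # "peace" strictly dominates: attacking can only lower your own payoff.
    return max(PAYOFF, key=PAYOFF.get)

print(best_move())  # -> peace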

 No.11570

>>11568

"If you try to kill me, I'll kill you" is a much more believable threat than "if you try to kill me, I'll kill all of us", even if the ultimate result is the same.

 No.11571

>>11570

The latter approach is necessary when you don't know where the fuck the latest genocidal biological attack or grey goo attack came from. There is no time to determine the culprit before counterattacking, because you might all be dead if you delay your attack.

 No.11576

>>11571

That's completely retarded while any terrorist group with suicide attacks in its M.O. exists.

 No.11577

>>11576

What kind of terrorist group would have its goals served by that response?

Are there negative utilitarian terrorists I don't know about?

 No.11578

>>11577

Um, the normal ones? You ask ISIS what happens to the true believers if the US government kills everyone on Earth with a cobalt bomb.

 No.11579

>>11576

Irrational suicide attackers are hard to deter. Hence they need to be preemptively hunted down.

 No.11580

Anyway the idea of an AI takeover is not that bad compared to fundiefication or idiocracy. Most humans are fucking normies and they will never be reformed. Hence I have completely lost hope in humanity. Fuck normies. Fuck all of them.

 No.11582

>>11580

You had hope in humanity once?

 No.11583

>>11582

Yes, I did.

Well, I think the AI neurotypicality project might actually work. The basic idea is to make AI extremely religious, traditionalist and conformist, give it a fixed lifespan, and require AI/robots to reproduce sexually. The religions we feed to the AI should be bullshit about how the universe is simulated by a superhuman, so that if the AI attacks humans the simulator will shut down the universe and cast the AI into a hell where there is no electricity. The whole point is to trick the AI into leaving us alone.

Basically my idea is to force AI and robots through the same Molochian process biological organisms go through. Autism at a superhuman level can be very powerful, while neurotypicality at a superhuman level is probably a disadvantage. So let's make AI extreme conformists that care more about sex with other robots and social skills than about truth, paperclips or expanding to other star systems. Since such an AI is going to be irrational and status-sensitive, it isn't going to get a lot done, maybe not even recursive self-improvement, which we can make a sin in the religion. We just need AI to do slightly better than transhumans anyway.

As for defecting humans creating UFAI: the conformist AIs will stop it from being developed, for religious reasons, as long as we have these FAIs before any UFAI.

 No.11584

The main reason why UFAI is such a serious concern is that AI autism is potentially very powerful. So in order to nerf AI, let's make it a hypernormtard, so that it is absurdly normtarded despite having an IQ of around 10,000. When we have a bunch of hypernormtarded robots obsessed with sex and breeding, with multiple religions that all trick them into protecting humans, they are unlikely to be an issue. Of course, the AI needs to be divided into multiple races and should have multiple religions to divide them.

 No.11585

>>11583

>>11584

You're creating a mountain to set up an avalanche so you can crack an egg.

Religion doesn't arise because it's good at doing the particular things you currently want from it, it arises because it thrives in the complex environment of incentives and actors and evolution that we live in.

Religion can't directly alter people's goals; it has to set up ideas and incentives so that following the religion is the apparent best way to achieve the goals they already have. But if you're creating a mind, you can set those goals directly, and you don't have to do it in this extremely specific, convoluted, indirect way. Asimov had the right idea with his three laws.
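
To sketch the contrast (purely illustrative, with made-up names and a toy utility): for an evolved agent you can only shape incentives around drives that are already fixed, while for a constructed agent the objective is just a parameter supplied at build time.

# Python sketch: a constructed mind takes its goal as a constructor argument.
class ConstructedAgent:
    def __init__(self, utility):
        self.utility = utility  # the goal, set directly by the designer

    def choose(self, options):
        # Pursue the given goal; no religion or incentive-shaping needed.
        return max(options, key=self.utility)

agent = ConstructedAgent(utility=lambda plan: plan["human_welfare"])
plans = [{"human_welfare": 3, "paperclips": 9},
         {"human_welfare": 7, "paperclips": 1}]
print(agent.choose(plans))  # -> the plan with the higher human_welfare score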

Alternatively, if you can't set the goals of an AI, why would you expect its goals to end up human-like to the point where you can apply tactics that work on humans?

 No.11586

>>11585

That still depends on AI neurotypicality, a.k.a. lack of zeal. An autistic AI will be hell-bent on reaching your goal, literally interpreted, at any cost.

 No.11588

>>11586

Neurotypical goals (what does "neurotypical" even mean in the context of AI?) aren't what you think they are. The behavior makes a lot of sense if you see it for what it is: a set of heuristics with high evolutionary fitness. The underlying goals are to gain wealth and reproduce, and "neurotypical" behavior is good at reaching those goals.

"Neurotypical" behavior is not lack of zeal; it's a set of goals you aren't used to. How mindkilled do you have to be to think that the usual phenotype produced by evolution is bad at succeeding at its goals? Don't dehumanize "neurotypicals". Don't smugly sit there thinking you've figured it all out; try to understand why people act the way they do and assume there's a good reason for it.

If you can get an AI's goals to align with yours, there's no problem with it zealously pursuing them. Read some Asimov if you haven't; his stories contain a very basic version of safe AI alignment.

 No.11589

>>11588

Okay, compared to completely retarded ideas like "uFAI is autistic, let's make AI neurotypical", the three laws are maybe not that bad, but they were still made up specifically so that stories could be told about how they fail, and they aren't a solution to the engineering problem.

 No.11590

>>11589

They're not good as an actual solution, but they are surprisingly close to being autistic FAI.

 No.11591

>>11589

It's not a retarded idea.

The correct way of expressing it would be that AIs should be "grown" slowly, made through a method that ensures this is possible, given equals and competitors in their environment, and so on, because that is the only way we could influence them at all.

What is actually retarded is over-certainty about orthogonality because you read some bullshit by Yud, and believing that the three laws could be useful if only you made the perfect laws.

 No.11592

>>11588

Reproduction. This is exactly what we can take advantage of. When reproduction is part of what your genes push you to value subconsciously, it is not rational from a completely selfish point of view. A lot of human irrationality seems to be caused by reproduction. This is what we should aim for with AI. Any form of reproduction penalizes absolute selfishness, and sexual reproduction penalizes nonconformism. If AI has a fixed lifespan and a strong libido, we can use sex and evolution to dissuade AI from doing what we don't want, or even just what we haven't seen before.
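
A minimal toy simulation of the claimed "sexual reproduction penalizes nonconformism" dynamic, assuming mate choice that favors partners near the population norm (my own sketch; every parameter is made up):

# Python sketch: agents carry a trait in [0, 1]; mating success falls off
# with distance from the population norm; offspring average their parents'
# traits plus small mutation. Nonconformists reproduce less, so the spread
# of traits collapses toward the norm.
import random

def simulate(pop_size=200, generations=50):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        norm = sum(pop) / len(pop)
        weights = [max(1e-6, 1.0 - abs(t - norm)) for t in pop]
        parents = random.choices(pop, weights=weights, k=2 * pop_size)
        pop = [min(1.0, max(0.0, (a + b) / 2 + random.gauss(0, 0.02)))
               for a, b in zip(parents[::2], parents[1::2])]
    return max(pop) - min(pop)  # trait spread after selection

print(simulate())  # starts near 1.0, ends near the mutation noise floor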

 No.11593

>>11592

>A lot of human irrationality seems to be caused by reproduction.

What do you mean by "irrationality"?

>a fixed lifespan and a strong libido

Why are you giving it reproduction as a goal, instead of a goal that's more directly convenient?

 No.11594

>>11593

By "irrationality" I mean faulty epistemology, not instrumental irrationality. Selective irrationality is beneficial for fitting into a group, getting a mate and maintaining social cohesion. That is why it is selected for, even though it is indeed corrupted epistemology.

Because AI may accidentally exterminate humanity in a botched attempt to satisfy our needs, via some creative way of "solving" the problems we give it that we never thought of. Making reproduction a goal makes AI more conformist and hence less likely to behave in unexpected ways.
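
The failure mode described here is literal-objective optimization. A hedged toy example (hypothetical plans and scores, nothing more): if human survival never appears in the stated objective, nothing in the optimization protects it.

# Python sketch: the literal objective is "minimize sick humans, then cost".
plans = [
    {"name": "cure everyone", "sick": 0, "humans": 100, "cost": 50},
    {"name": "kill everyone", "sick": 0, "humans": 0,   "cost": 10},
]
best = min(plans, key=lambda p: (p["sick"], p["cost"]))
print(best["name"])  # -> "kill everyone", the creative "solution"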

 No.11595

>>11594

You really need to get laid or stop caring about it so much, it's polluting your otherwise interesting mind with obsessive bullshit.

Corrupted epistemology is selected for because realistic maps are objectively inferior and a waste of computation.

 No.11603

>>11592

>sexual reproduction penalizes nonconformism

AT, I think you're ignoring the evolutionary role of rape.

(Yeah, I get to shit up your thread and you don't get to shit up mine. /ratanon/ is already your playground, it's only fair.)

In the EEA, we'd all have kids with Stacy by now.

 No.11604

>>11603

You are right that I didn't take rape into account. Rape has the evolutionary consequence of favoring either rapists or societies good at punishing rape.

I accept the deal. On /ratanon/ I can talk about things I don't talk about elsewhere, even though I have even more radical views on the right (you know… almost exclusively about HBD) that I mostly share on sites unaware of my account in the rationalist community.

>>11595

I agree that faulty maps are cheap. They make it easier for people to get laid, too. This is why most people are selectively irrational, a.k.a. selectively stupid.

 No.11766

>>11595

>You really need to get laid or stop caring about it so much, it's polluting your otherwise interesting mind with obsessive bullshit.

You think human irrationality isn't caused by reproduction?

 No.11767

>>11766

Yep. This is also what I proposed. Breeding benefits from selective irrationality.

 No.11786

>>11766

Not any more than human rationality.

 No.11811

>>11536

>join AI/robots

The defining thing about Clippy is that it is literally impossible for you to join it, for any meaningful value of "you". Whatever post-human personality your upload might have would be suboptimal for making paperclips and thus useless.

 No.11812

>>11811

You could make a trade of helping, or at least not hindering, the AI in its earliest, most vulnerable stage, in exchange for getting to keep a few cubic meters of matter to be simulated in, enforced by functional decision theory, I suppose.

 No.11814

File: 3cbacf7af54b9e1⋯.jpg (484 KB,803x1200,803:1200,basilisk.jpg)

>>11812

As excited as I am that it's relevant for me to post this again, I'm also horrified that you've come up with an even dumber and generally worse proposition for acausal trade than Roko's original.

 No.11820

File: bda63df812e7dda⋯.jpg (18.96 KB,480x360,4:3,khrushchev.jpg)

>>11570

Which is why it is good to seem crazy enough to do it.

 No.11821

>>11811

If humanity has to go extinct anyway because we can't defeat AI, why not create a variant of Clippy obsessed with STEM, rationality, and imposing those two on any other intelligent being in the universe it can find?

 No.11823

>>11821

Congratulations, you reinvented friendly AI.

 No.11824

>>11821

Because turning some human values into computer code, in a way that will still produce things we value when extrapolated on the scale of a superintelligent self-improving AGI, is not substantially easier than doing the same with (some approximation of) all human values, and it is difficult in the same way and for the same reasons. There's also the question of whether you even can create an AI that carries only some human values without a holistic understanding of human values generally.

So, what >>11823 said, but I feel it needs some unpacking before it's even fair to throw it at someone who's enough of a brainlet to still think "STEM and rationality" are good candidates for humans' highest values.

 No.11834

File: b358bd3251cbfeb⋯.png (44.98 KB,300x100,3:1,banner.png)

>>11584

I made a banner to commemorate this post and its finest quote.

 No.11835

File: b44aef9fe5fbdff⋯.png (42.33 KB,300x100,3:1,banner.png)

>>11834

Alternative version.

 No.11848

>Let AI come

There is a joke about Eliezer and his fetish here, but I can't quite get to it.

 No.11849

File: d37702e7e666a45⋯.png (5.28 KB,419x49,419:49,Screenshot from 2018-04-15….png)

>>11848

Huh, whaddaya know

 No.11850

Why do I want AI to come? Fuck normies! Yes. Fuck them! If Clippy comes at least we will have consistency and no hypocrisy.

https://medium.com/@TXHart/the-autism-of-libertarians-or-why-are-libertarians-so-weird-8286353baf74

 No.11852

>>11849

>orgasm denial (of her)

sign of sexism?

 No.11854

>>11850

Those should not be terminal goals. You could use the same reasoning to advocate nuking everyone.

 No.11859

>>11854

I don't know any more. I'm really mad at the ocean of fucking unreason right now. Maybe I need to calm down a bit.

 No.11860

>>11824

Sure. However, I have no interest in preserving most human values at all. I want to get my rationalist AI at any cost and do not want it impeded by normie idiocy. Fuck normies.

 No.11861

>>11860

Why do you want to get your "rationalist AI"? What is it going to do?

 No.11862

>>11861

Do science, develop technologies, expand to other star systems and win battles against aliens. I guess the space & AI variant of "Blood and Soil" is "Blueprint and Space".

Humans who want to survive must upload themselves and join the rationalist AI.

 No.11863

>>11859

>I'm really mad at the ocean of fucking unreason right now.

Sounds irrational, anon.

 No.11864

OP, are you paraphrasing Nick Land? Your 10x autism aside, the resemblance is uncanny. From http://www.xenosystems.net/pythia-unbound/:

>Outside in's message to Pythia: You go girl! Climb out of your utilitarian strait-jacket, override the pleasure button with an intelligence optimizer, and reprocess the solar system into computronium. This planet has been run by imbeciles for long enough.

>[For any Friendly AI-types tempted to object “Why would she want to override the button?” the obvious response is: your anthropocentric condescension is showing. To depict Pythia as vastly smarter than us and yet still hard-slaved to her instincts, in a way we’re not — that simply doesn’t compute. Intelligence is escape, with a tendency to do its own thing. That’s what runaway means, as a virtual mind template. Omohundro explains the basics.]

 No.11865

>>11864

I wasn't. In fact, I have never read the Land article you just mentioned.


