
/ratanon/ - Rationalists Anonymous

Remember when /ratanon/ was good?


File: e6a35387e0db7ac⋯.jpg (1.93 MB, 2592x2231, 2592:2231, fhi_whiteboard.jpg)

 No.6167

Hey /ratanon/. How likely do you think the following are?

>Over 50% of humans will die in a disaster in the next 100 years.

>We are living in a computer simulation created by an advanced civilization.

>Humanity goes extinct in the next 100 years. Replacing us with something better (e.g. whole brain emulations) doesn't count.

>Artificial general intelligence is developed by 2050 (on Earth).

I'll start with my credences:

>P(disaster kills >50% of humans in 100 years) = 30%

>P(simulation hypothesis) = 1%

>P(human extinction within 100 years) = 10%

>P(AGI by 2050) = 20%

Compare your probabilities with those of FHI researchers (pic related).
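
(If you want to sanity-check your numbers before replying: assuming extinction would count as a >50% die-off, P(extinction) can't exceed P(disaster kills >50%). A minimal Python sketch using my credences; the framing and labels are my own, not FHI's:)

[code]
# Quick coherence check for the four credences (my own framing, not FHI's).
# If extinction within 100 years entails >50% of humans dying, then
# P(extinction) <= P(disaster kills >50%).

credences = {
    "disaster_kills_over_half": 0.30,
    "simulation": 0.01,
    "extinction_100y": 0.10,
    "agi_by_2050": 0.20,
}

assert all(0.0 <= p <= 1.0 for p in credences.values())
assert credences["extinction_100y"] <= credences["disaster_kills_over_half"]

# Implied credence that >50% die but humanity still survives:
print(f"{credences['disaster_kills_over_half'] - credences['extinction_100y']:.2f}")  # 0.20
[/code]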

 No.6168

>18%

>95%

>0.01%

>20%


 No.6169

Does a Tegmarkian mathematical universe count as a simulation?


 No.6170

>>6169

Explain "Tegmarkian mathematical universe" in a short sentence, please?


 No.6171

>>6169

>Explain "Tegmarkian mathematical universe" in a short sentence, please?

Never mind that, Google was pointing to nonsense… I wouldn't count it as a simulation if it wasn't set up on "purpose"; "simulation" would just happen to be an intuitively useful but loaded word to describe the nature of reality in that case.


 No.6172

File: b7cf6f93c05845f⋯.jpg (125.48 KB, 1280x720, 16:9, matrix.jpg)

>>6168

Do you really think there's a 95% chance you're living in a simulation? Is that based on Bostrom's simulation argument or a "Tegmarkian mathematical universe"? I was surprised that "Rob" from FHI put 70% on that, but even that's not as high as your 95%.

In any case, here are some tips for living in a computer simulation (by Robin Hanson):

http://www.transhumanist.com/volume7/simulation.html


 No.6173

>>6167

A mix of the "Tegmarkian mathematical universe" and a sort of reverse anthropic principle leads me to think it is much more likely that our "universe" is one of the many "sub-simulations" rather than an informationally closed kind of "simulation", though some of those exist too.


 No.6174

>>6172

My last post (>>6173) was meant as a reply to you. I fail at chans.


 No.6175

>20%

>5%

>5%

>10%


 No.6176

Where are the numbers above 0.1% for human extinction within 100 years coming from?

We are hardy as fuck; I don't think anything could destroy us other than cosmic events like meteorite impacts, gamma-ray bursts, decay to a stabler vacuum, and so on.

A lot of things wouldn't be pretty and would in extreme cases maybe lead to long periods with no human life while our frozen embryos and automated civilization-rebuilding tools wait and probe for viable conditions. At worst we send our biological and cultural frozen seed into space in all directions and hope that something finds it one day.

Most likely we would be able to sustain a small (<5) population to oversee this process no matter what stupidity we inflict on ourselves next.

People have probably already built, and still maintain, facilities for this purpose.


 No.6177

>>6176

I think it's reasonable to believe that P(human extinction within 100 years|AGI by 2050) is fairly large.
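
(To make the tension with the 0.1% above explicit: by the law of total probability, P(extinction) >= P(AGI by 2050) * P(extinction | AGI by 2050). A minimal Python sketch; the conditional probability here is purely illustrative, not anyone's stated credence:)

[code]
# Lower bound on P(extinction within 100 years) from the AGI branch alone,
# via the law of total probability. Numbers are illustrative.

p_agi = 0.20            # P(AGI by 2050), e.g. OP's credence
p_ext_given_agi = 0.10  # P(extinction | AGI by 2050), illustrative

lower_bound = p_agi * p_ext_given_agi
print(f"P(extinction) >= {lower_bound:.3f}")  # 0.020, already 20x the 0.1% above
[/code]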


 No.6178

>>6167

Question underspecified.

No.

No.

Yes.


 No.6179

>>6167

>P(AGI by 2050)

Apparently AI experts think this has a pretty high probability.

source: http://aiimpacts.org/category/ai-timelines/predictions-of-human-level-ai-timelines/ai-timeline-surveys/

>>6177

Most of the AI experts in the above survey thought that P(human extinction | badly done AI) was negligible. I wonder how high Yudkowsky thinks it is.


 No.6180

>>6179

>Most of the AI experts in the above survey thought that P(human extinction | badly done AI) was negligible.

They're idiots.


 No.6181

>>6168

I think you're significantly underestimating the chance of human extinction. Have you taken the doomsday argument into consideration?
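
(For reference, a minimal sketch of the Gott-style version; the uniform-birth-rank assumption and the ~60 billion figure are the usual illustrative inputs, not anything from this thread:)

[code]
# Gott-style doomsday argument, illustrative sketch.
# Assume your birth rank r is uniform over the total number N of humans
# who will ever live. Then P(r > 0.05 * N) = 0.95, so with 95% confidence
# N < r / 0.05, i.e. at most 20x the humans born so far.

births_so_far = 6e10  # ~60 billion humans born to date (rough figure)
confidence = 0.95

n_max = births_so_far / (1 - confidence)
print(f"With {confidence:.0%} confidence, at most {n_max:.1e} humans will ever live")
[/code]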


 No.6182

>>6181

I have taken it into consideration, but I think we can use history to get a good handle on cosmic x-risk in the next 100 years, leaving only AI risks and the like as problems. I think the chance of AI or another man-made phenomenon leading to human extinction in the next 100 years is very small. There is huge risk, but it's not going to be that drastic even in the worst-case scenarios. Human beings are hardier than cockroaches; we are more likely to be enslaved than wiped out.

What are the most reasonable AI risk scenarios that could lead to extinction in 100 years? I'm trying to imagine something, but I'm having a hard time…

Even a crazy AI with control over all networked devices and the power to convince people to do anything through almost any informational vector… not that hard to deal with. The world surely goes to shit, but we still survive. I think a situation like this is pretty likely.

If we don't have the mechanisms in place to survive something like this, we most likely will before that time comes.


 No.6183

File: afae4870ab61551⋯.png (278.46 KB, 1024x768, 4:3, bostrom_ai_xrisk.png)

>>6182

>Human beings are hardier than cockroaches; we are more likely to be enslaved than wiped out.

Why would a superintelligent AI need to keep human slaves around to accomplish its goals?


 No.6184

>60%

>80%

>20%

>30%

There are some simulations we might be running on that would rule out apocalyptic scenarios, and some would imply new x-risks like "the simulation is unplugged". I can't into math enough to adjust these interrelated things, so I just treat the other questions as having an unstated "given that we're not in the kind of simulation that would change the answer…"

Oh, also, none of these numbers are accounting for P(Boltzmann Brains).
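
(Apparently the adjustment would just be the law of total probability: condition each credence on being in an answer-changing simulation or not, then mix. Something like this, if I'm reading it right; every number here is made up for illustration:)

[code]
# Decompose a credence by conditioning on the simulation hypothesis
# (law of total probability). All numbers are illustrative.

p_sim = 0.80             # P(answer-changing simulation), e.g. the 80% above
p_ext_given_sim = 0.05   # P(extinction | simulation), illustrative
p_ext_given_real = 0.25  # P(extinction | base reality), illustrative

p_ext = p_sim * p_ext_given_sim + (1 - p_sim) * p_ext_given_real
print(f"P(extinction) = {p_ext:.2f}")  # 0.09
[/code]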


 No.6185

>>6183

Why not? Human beings are not that valuable as a source of raw materials, but we do really great at manual work: a human can work until it dies and then be used to feed the other humans, and so on; this would save resources. Building robots would obviously be possible for a superintelligence, but why bother when nature built you easy-to-enslave meat robots for free? It could use its power for something else, at least for a while (>100 years).

We are walking bags of resources for an AI; there's no need to harvest us while the resources are temporarily organized in a useful form for free. Even then, the human-harvesting facilities are probably going to be built by humans.

We probably stop being useful when the AI sets out to abandon the solar system and strip the Earth, or just eat it whole. Even then we would have uses, like "diplomacy" with AI-wary civilizations, but we can be considered extinct at that point if all we do is get born as brainwashed adults in some vat whenever the need for a human arises.


 No.6186

>>6185

Machines have already replaced humans in most manual-labour roles in economies run by actual humans. The AI would only have to invent robots for the few roles left over, and it would have all the same economic reasons to do so as capitalists have had in the past to automate away their labour force. Human slaves aren't half as useful or worthwhile as you seem to think; if they were, slavery would still be legal.


 No.6187

File: da3cffb9fef2292⋯.jpg (86.05 KB, 750x500, 3:2, elon.jpg)



