/ratanon/ - Rationalists Anonymous

Remember when /ratanon/ was good?

File: e719c28ee9cd921⋯.jpg (81.87 KB, 600x1067, friendly_a_i__by_sonaliz02….jpg)

 No.11098

So the basic idea behind FAI is that you can limit the AI's ability to self-modify in ways that might cause it to change its behavior.

The obvious retort to this is: if you can predict the behavioral changes a given self-modification would cause, then you wouldn't need to self-modify in the first place. To give a human example, let's say you are studying subject X. If you could predict exactly how studying subject X would change your views, you wouldn't need to study it in the first place. So how do you get an intelligence explosion from something that can't modify its intelligence?

Or even if it could modify its intelligence in some limited way, how is an FAI supposed to outcompete an unconstrained non-Friendly AI opponent?

 No.11100

>>11098

There are two separate points here. That an FAI would have a major disadvantage versus a uFAI follows simply from the fact that an FAI has a humanity-sized liability it has to defend. It doesn't need any difficulty self-modifying to have this disadvantage.

Self-modification would be possible and productive in terms of optimization: if the AI can prove that a modification preserves its utility function while making it run x% faster, applying that optimization is instrumentally advantageous. Note that we don't have to prove what the optimized self will think, only that it preserves its values. Sort of like how we can use induction to prove a property without having to calculate every single case.

Hypothetically: if an AI doesn't want to murder, and it proves that an optimization would leave it blind to human bodies, it could infer an unacceptable risk of accidentally killing a human and discard the change. It wouldn't have to exhaustively calculate everything its optimized self would do to make that determination.
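
Here's a rough Python sketch of the decision rule I'm describing (prove_preserves_utility, Agent, and Modification are made-up stand-ins, not anyone's actual verifier):

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Agent:
    utility_fn: Callable          # maps world-states to numbers

@dataclass
class Modification:
    description: str
    apply: Callable               # Agent -> faster Agent with some code rewritten

def consider(agent: Agent, mod: Modification,
             prove_preserves_utility: Callable) -> Agent:
    # Ask for a proof that the modified agent still maximizes the same
    # utility function. We never simulate what the optimized self will
    # actually think or do, only establish that its values are unchanged.
    proof: Optional[object] = prove_preserves_utility(agent.utility_fn, mod)
    if proof is None:
        # No proof means an unacceptable risk of goal corruption,
        # so the speedup isn't worth it: discard the change.
        return agent
    return mod.apply(agent)

The whole burden is on the value-preservation proof; nothing requires predicting what the faster successor will actually do.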

 No.11105

Utility has to be one of the most retarded ideas ever

 No.11106

>>11100

>how is an FAI supposed to outcompete an unconstrained non-Friendly AI opponent?

Short answer: It doesn't.

Longer answer: http://xynchroni.city/posts/11

Unrelated answer: Nice digits, friendo. Will they save you, in the end?

 No.11108

[embedded video]

 No.11109

>>11108

Videos are degenerate; provide a concise argument in text form or gtfo.

 No.11110

>>11109

The video says that if you have consistent preferences - that is, if for any two worlds you can say which one you prefer or that you like them equally, and your preferences never go in a circle - then there necessarily exists a utility function: a function that takes a world and outputs a number, such that those numbers order all possible worlds the same way your preferences do.
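
For a finite set of worlds the construction is almost mechanical. Toy Python version (the prefer() comparator and the example worlds are mine, and this is only the ordinal-ordering part, not the full expected-utility theorem):

from functools import cmp_to_key

def utility_from_preferences(worlds, prefer):
    # prefer(a, b) is assumed to return +1 if a is preferred to b,
    # -1 if b is preferred to a, and 0 for indifference. If the relation
    # is complete and transitive (no cycles), sorting by it gives a total
    # order, and position in that order works as a utility number.
    ordered = sorted(worlds, key=cmp_to_key(prefer))   # worst ... best
    utility, level = {}, 0
    for i, w in enumerate(ordered):
        # indifferent worlds get the same utility value
        if i > 0 and prefer(ordered[i - 1], w) != 0:
            level += 1
        utility[w] = level
    return lambda world: utility[world]

# Toy usage: ice_cream > nothing > papercut
ranking = {"papercut": 0, "nothing": 1, "ice_cream": 2}
prefer = lambda a, b: (ranking[a] > ranking[b]) - (ranking[a] < ranking[b])
u = utility_from_preferences(["nothing", "papercut", "ice_cream"], prefer)
assert u("ice_cream") > u("nothing") > u("papercut")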

I don't know if that addresses your problems with utility at all. It would help if you posted them.

 No.11112

>>11110

huh, different anon here, I literally watched that video earlier today

 No.11116

>>11100

>Note that we don't have to prove what the optimized self thinks, only that it preserves its values.

Those two things don't seem to fit together.

 No.11118

>>11116

It's probably because I'm trying to shoehorn AI cognition into human-centric terms like "think" and "value". One way to think about it is that there is one procedure that updates the AI's model of the world, another that generates strategies to change the world, and another that evaluates states of the world. Optimizing the evaluation part *is* very difficult, because of the instrumental pressure not to change the utility function. But improvements to the other two parts can be straightforwardly proved to be gains: a more accurate model means better strategies can be devised, and better strategies mean the resulting state of the world gets a higher evaluation.
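
In toy Python, the asymmetry looks something like this (an illustrative structure only, not a claim about how a real AGI would be organized):

import hashlib

class Agent:
    def __init__(self, update_model, plan, evaluate):
        self.update_model = update_model   # observations -> better world-model
        self.plan = plan                   # world-model -> candidate strategies
        self.evaluate = evaluate           # world-state -> number (the utility function)
        self._frozen = self._fingerprint(evaluate)

    @staticmethod
    def _fingerprint(fn):
        # crude identity check on the evaluator's compiled code
        return hashlib.sha256(fn.__code__.co_code).hexdigest()

    def self_improve(self, new_update_model=None, new_plan=None, new_evaluate=None):
        # Touching the evaluator is the hard case, so this sketch simply refuses.
        if new_evaluate is not None and self._fingerprint(new_evaluate) != self._frozen:
            raise ValueError("refusing modification: utility function not preserved")
        # Swapping in a provably better modeller or planner is the easy win:
        # a more accurate model feeds better strategies, and better strategies
        # score higher under the *same* evaluator.
        if new_update_model is not None:
            self.update_model = new_update_model
        if new_plan is not None:
            self.plan = new_plan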

 No.11120

>>11106

That writer (hi, akira) has interesting ideas, and, on a separate note, I appreciate him building a Flash-free interface to the Gematriculator, but he plays fast and loose with facts and reasoning where it suits his argument. For instance, he doesn't seem to have examined MIRI's output before concluding that they're slowing down. I agree MIRI has a slim chance of influencing seed AI, but their output has been increasing in both volume and relevance to the core problem. Perhaps the reason he thinks they're slowing down is precisely that their output is now far less digestible than Harry Potter fanfiction. And when mocking Yudkowsky's very real and disgustingly patronizing leftism, he for whatever reason resorts to super-ungenerous interpretations when there is plenty of material that doesn't need them. It's all just weaker than it could be.

>Why is every company – … – that implements AI making such outrageous amounts of money?

Come on, this is exactly backwards.

 No.11121

>>11118

>there is one procedure that updates the AI's model of the world, another that generates strategies to change the world, and another that evaluates states of the world

That's an awful lot of assumptions, considering we don't know how AGI works in theory, let alone how to build one.

 No.11141

>>11098

Any AI with a utility function will have "preserve my current utility function" as one of its highest priorities. If you put almost all the expected value in its utility function on the other side of a mod that might change its goals, of course it's going to check and double-check that its goals will be preserved. It doesn't matter whether it's an FAI proving friendliness or a paperclip maximiser proving paperclip maximizing: if it's hard to be sure of value preservation, that affects them both.

An FAI might be slightly more restricted: it might be forced to care even more about goal preservation than the expected value of "tile the universe with whatever utility function you have after going FOOM" would suggest. But that's not a huge difference, and it's dwarfed by the first-mover advantage, which could go either way.
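
Back-of-the-envelope, with completely made-up numbers, that paranoia looks like this:

def ev_apply(p_preserve, v_preserved, v_corrupted, speedup_bonus):
    # Expected value of a self-mod that might change the agent's goals.
    return p_preserve * (v_preserved + speedup_bonus) + (1 - p_preserve) * v_corrupted

v_preserved   = 1e12   # "tile the universe with my current utility function"
v_corrupted   = 0.0    # successor optimizes something else entirely
speedup_bonus = 1e9    # whatever the faster code is worth
ev_no_mod     = v_preserved

for p in (0.999, 0.9999, 0.999999):
    print(p, ev_apply(p, v_preserved, v_corrupted, speedup_bonus), "vs", ev_no_mod)
# At 99.9% confidence of goal preservation the mod is still a losing bet;
# it only becomes worth it near certainty. Same calculation whether the
# goals are friendliness or paperclips.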

 No.11152

File: 522e22b3aa166d2⋯.gif (1.96 MB, 450x450, psychedelic 2.gif)

MIRI's approach to FAI will never work. It's impossible to prove anything formally about recursively self-improving AIs. No state-of-the-art AI uses anything like formal logic, and nothing ever came from the old-school approaches that did.

Some of their research is just completely bizarre and so far removed from any practical AI algorithm it's ridiculous. If you have to reference Solomonoff induction or the number 3^^^3, you are doing something very wrong and irrelevant to the real world.
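
(For anyone who hasn't run into it: 3^^^3 is Knuth's up-arrow notation. Quick Python definition, if only to show how fast it leaves the real world behind; don't try to evaluate 3^^^3 itself.)

def knuth(a, n, b):
    # a "up-arrow n times" b: one arrow is ordinary exponentiation.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return knuth(a, n - 1, knuth(a, n, b - 1))

assert knuth(3, 1, 3) == 27            # 3^3
assert knuth(3, 2, 3) == 3 ** 27       # 3^^3 = 7,625,597,484,987
# 3^^^3 = knuth(3, 3, 3) = 3^^(3^^3): a power tower of ~7.6 trillion threes.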

 No.11161

>>11152

Is 3^^^3 referenced anywhere outside of specks vs torture and memes relating to same? And there it's just a dumb flourish on a reasonable point about moral philosophy ("here's a bullet you have to bite if you're a utilitarian"). Unless you're going to edgelord and say moral philosophy is irrelevant to the real world, I don't know what your issue with it could be.

 No.11169

I'm a supporter of uploading now. This entire friendly AI idea isn't going to work, nor does transhumanism seem to be a viable alternative if robots with AI can be stronger than transhumans.

 No.11170

>>11169

Like an upload isn't going to go full Skynet within five minutes of starting to self-modify.

 No.11174

>>11170

At least it would still be human in some sense.

 No.11177

>>11170

Self-modifying entities are still value-preserving. You'd end up with some kind of superintelligence taking over the world, sure. But it would be an entity with *human values*, which is the best we can hope for.

 No.11180

>>11177

Have you seen what people with human values do to other people?

 No.11181

>>11177

Humans do not have well-defined utility functions, and are predictably irrational. A human given free rein to self-modify would quickly fuck it up and fail to preserve their values, and if they actually went FOOM, what came out the other side would be an inhuman uFAI, guaranteed.

Also, what >>11180 said. Homo homini lupus est.
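
To make "predictably irrational" concrete: anyone whose preferences run in a circle can be money-pumped, which is exactly what the utility-function argument upthread rules out. Toy example, made-up numbers:

# Circular preferences: A over B, B over C, C over A.
cyclic_pref = {("A", "B"): "A", ("B", "C"): "B", ("C", "A"): "C"}

def prefers(offered, held):
    # True if the agent prefers the offered item to the one it holds.
    better = cyclic_pref.get((offered, held)) or cyclic_pref.get((held, offered))
    return better == offered

money, item = 100.0, "A"
for offered in ["C", "B", "A", "C", "B", "A"]:   # walk the cycle twice
    if prefers(offered, item):
        item, money = offered, money - 1.0       # gladly pay 1 unit to "upgrade"
print(item, money)   # back to "A", 6.0 units poorer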
