No.12964
From a Rolf Degen tweet
"Machiavellists and psychopaths saw greater dangers in artificial intelligence, projecting their own malicious streaks onto the concept."
https://econtent.hogrefe.com/doi/abs/10.1024/1421-0185/a000214
____________________________
Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.
No.12966
>>12964
Rolf Degen should consider how he treats ants. Malice or even lack of sympathy are not necessary for human extermination to be convenient.
No.12973
>>12964
>Machiavellians think others aren't "basically good and kind", have pathologically low time preference and tend to pursue future flexibility maximization
They sound more capable of empathizing with Clippy than other humans. Is this supposed to reflect badly on their ability to estimate AI risk?
Someone please post the PDF here. I can't get it through Sci-Hub.
No.12974
>>12973
>>12966
How you perceive AI risk varies with dark triad traits; people basically imagine AI risk as an omnipotent version of themselves.
Nobody let big yud near a paperclip factory
No.12979
>>12964
I hate this naive trend of attributing systemic issues to a lack of personal virtue. "It's the greedy bankers!" - change their fucking incentives, then!
No.12989
Lmao
Well, I hate to say it but might is right. Humanity will either improve or be doomed.
No.12990
>>12979
Exactly. Incentives matter a lot.
No.12991
>>12979
I do hate that trend, but I don't really see it here.
No.12993
>>12991
Framing the issue of AI safety as "of course the Machiavellian STEM autists would project their own failings onto AI" frees critics from the burden of having to think about Omohundro drives, and instead plays to their strengths: armchair psychologizing and sneering at bad greys.
No.13023
>>12979
Greed, for lack of a better word, is good.
No.13024
>>13023
No, it isn't. It's amoral.
Not everything needs to be fundamentally evil or virtuous. Greed can lead to excellent outcomes and it can lead to disastrous outcomes.
No.13028
>>12964
That's a lot of words to say "most people are dumbass normies who anthropomorphise AI".