
/pol/ - Politically Incorrect

Politics, news, happenings, current events

This is a board of politics.

File: 838e8b079a12faf⋯.jpeg (15.17 KB, 547x365, 547:365, images_7_.jpeg)

9b241c No.13645685

In 1980, John Searle proposed the “Chinese Room” thought experiment, mainly to ask whether artificial intelligence can truly “understand.”

The scenario:

A person who knows no Chinese at all is locked in a room.

Inside the room is a very detailed rulebook: whenever he sees a certain Chinese symbol, it tells him which other Chinese symbols to pick out and how to combine them.

People outside the room write questions in Chinese and pass them in. The person then uses the rulebook to transform the symbols and pass the answers back out.

To the people outside, the answers look like they were written by someone who genuinely understands Chinese. But in fact, the person inside the room does not understand Chinese at all—he is simply following the rules mechanically to manipulate symbols.

Therefore, even if an AI can give correct answers in Chinese (appearing to “understand” Chinese), that does not prove it really understands Chinese.

In the “Chinese Room,” the core of the argument is the rulebook (the “program”):

The rulebook specifies formal operations on symbols (e.g., when a certain symbol comes in, follow the steps to select other symbols and string them together as output).

The person inside the room only executes the rules mechanically and does not understand what the symbols mean.

Thus, the room can produce seemingly reasonable answers not because the person understands Chinese, but because the rulebook is powerful enough.
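To make the mechanism concrete, here is a minimal sketch (my illustration, not Searle's; the rulebook entries are invented): the rulebook is just a table from input symbol strings to output symbol strings, and the person in the room is a function that looks entries up without attaching any meaning to them.

```python
# Illustrative sketch of the room: the "rulebook" is a plain lookup table and
# the operator applies it mechanically, without knowing what the symbols mean.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "The weather is nice today."
}

def person_in_room(symbols: str) -> str:
    """Mechanically look up a response; no understanding is involved."""
    return RULEBOOK.get(symbols, "对不起，请再说一遍。")  # fallback: "Sorry, please say that again."

# To someone outside, this answer looks like it came from a Chinese speaker.
print(person_in_room("你好吗？"))
```

The output looks competent only because the table happens to contain a good entry for that input; nothing in the function knows Chinese.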

On this account, what the room outputs depends entirely on how the rulebook is written.

If the rulebook says, “When a certain type of input appears, output a response friendly toward Jewish people,” then the output will be friendly toward Jewish people.

If the rulebook says, “When a certain type of input appears, output a negative response,” then the output will follow accordingly.

The rulebook itself has no “value judgment” or “understanding”; it only defines formal rules.

So:

The room itself never “chooses a stance”—it merely follows the rulebook.

Any stance or bias comes from the author of the rulebook, not from the person inside (or the program) truly understanding or making a choice.
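A sketch of that last point, with invented rulebooks: the executor is identical in both runs, and only the rulebook supplied by its author differs, so any stance in the output traces back to the author, not to the room.

```python
# Same mechanical executor, two different (hypothetical) rulebooks: the
# "stance" of the answer is decided entirely by whoever wrote the rulebook.

def run_room(rulebook: dict, prompt: str) -> str:
    # Identical procedure no matter which rulebook is plugged in.
    return rulebook.get(prompt, "…")

rulebook_a = {"X怎么样？": "X很好。"}  # author A's rule: answer favorably about X
rulebook_b = {"X怎么样？": "X不好。"}  # author B's rule: answer negatively about X

print(run_room(rulebook_a, "X怎么样？"))  # favorable answer
print(run_room(rulebook_b, "X怎么样？"))  # negative answer, same executor
```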


