John Searle proposed a thought experiment in 1980 to examine whether artificial intelligence can truly "understand."
The scenario:
A person who does not know any Chinese (in Searle's original version, an English speaker) is locked in a room.
Inside the room is a very detailed rulebook that tells him: when he sees a particular Chinese symbol, he should follow the rules to pick out and combine the corresponding Chinese symbols.
People outside the room write questions in Chinese and pass them in. The person then uses the rulebook to transform the symbols and pass the answers back out.
To the people outside, the answers look like they were written by someone who genuinely understands Chinese. But in fact, the person inside the room does not understand Chinese at all; he is simply following the rules mechanically to manipulate symbols.
Therefore, even if an AI can give correct answers in Chinese (appearing to "understand" Chinese), this does not prove that it really understands Chinese.
In the "Chinese Room," the core of the argument is the rulebook (the "program"):
The rulebook specifies purely formal operations on symbols (e.g., when a certain symbol comes in, follow the prescribed steps to select other symbols and string them together as output).
The person inside the room is only mechanically executing the rules and does not understand their meaning.
Thus, the room can produce seemingly reasonable answers not because the person understands Chinese, but because the rulebook is powerful enough.
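As a concrete illustration, here is a minimal sketch of the room as nothing more than table lookup. The rule entries, the sample symbols, and the chinese_room function are all hypothetical, invented for this example; the point is only that the procedure compares shapes and never consults meaning.

```python
# Hypothetical "rulebook": each rule maps an input symbol string to an output
# symbol string. To the rule-follower these are opaque shapes, not sentences.
RULEBOOK = {
    "你好吗": "我很好",            # gloss: "How are you?" -> "I am fine"
    "你叫什么名字": "我没有名字",   # gloss: "What is your name?" -> "I have no name"
}

def chinese_room(incoming: str, rulebook: dict[str, str]) -> str:
    """Mechanically match the incoming symbols and return the prescribed output."""
    # No parsing, no meaning, no judgment: just pattern matching against the book.
    return rulebook.get(incoming, "？")  # a default symbol when no rule matches

print(chinese_room("你好吗", RULEBOOK))  # prints "我很好" without understanding it
```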
On this view, what the room outputs depends entirely on how the rulebook is written.
If the rulebook says, "When a certain type of input appears, output a response friendly toward Jewish people," then the output will be friendly toward Jewish people.
If the rulebook says, "When a certain type of input appears, output a negative response," then the output will follow accordingly.
The rulebook itself has no "value judgment" or "understanding"; it only defines formal rules.
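A second, equally hypothetical sketch makes that point concrete: the lookup procedure below is identical in both calls, and only the rule table changes, so whatever tone the output carries comes entirely from the table.

```python
# Two hypothetical rulebooks for the same question; the lookup itself is neutral.
friendly_rulebook = {"你觉得他们怎么样": "他们很友善"}   # gloss: "They are very kind"
negative_rulebook = {"你觉得他们怎么样": "他们不可信"}   # gloss: "They cannot be trusted"

def lookup(incoming: str, rulebook: dict[str, str]) -> str:
    # Same mechanical matching regardless of which rulebook is supplied.
    return rulebook.get(incoming, "？")

question = "你觉得他们怎么样"  # gloss: "What do you think of them?"
print(lookup(question, friendly_rulebook))  # friendly answer
print(lookup(question, negative_rulebook))  # negative answer
```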