Searle's Chinese Room
Written by Jaime Gómez   
Tuesday, 24 January 2006

The Chinese Room is a thought experiment proposed by the American philosopher John Searle to highlight weaknesses of the classic artificial intelligence research program.

This thought experiment focuses on issues in language understanding, questioning whether the machines postulated by AI are capable of real understanding.

Searle first formulated this problem in his now classic paper "Minds, Brains, and Programs", published in 1980 in the journal Behavioral and Brain Sciences.

Ever since, it has been a mainstay of debate over the possibility of what Searle called strong artificial intelligence, i.e. the possibility of building machines that not only seem to think like us but actually do.

In the scenario, a person who knows no Chinese sits in a room and, by following a rulebook, manipulates Chinese symbols passed in from outside, producing answers indistinguishable from those of a native speaker while understanding nothing. The Chinese Room argument can be stated in four steps:

i. Programs are syntactical

ii. Minds have semantic contents

iii. Syntax is not sufficient for semantics

iv. Therefore, programs are not minds.

Searle's conclusion is that the intelligent machines postulated by AI are just mindless manipulators of symbols: like the man in the room, they do not understand what they are 'saying'.
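Premise (i) can be made concrete with a minimal sketch, in Python, of a purely syntactic symbol manipulator. The rulebook entries here are hypothetical stand-ins for Searle's rulebook; the point is that nothing in the program represents the meaning of any symbol.

```python
# A toy "Chinese Room": purely syntactic rule lookup.
# The rulebook maps input symbol strings to output symbol strings;
# no part of the program encodes what any symbol *means*.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会一点。",      # "Do you speak Chinese?" -> "A little."
}

def chinese_room(symbols: str) -> str:
    """Follow the rulebook mechanically; no understanding is involved."""
    # Default reply when no rule matches: "Sorry, I don't understand."
    return RULEBOOK.get(symbols, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # -> 我很好，谢谢。
```

In principle, the rulebook could be enlarged until the exchange is conversationally convincing, yet at no point would semantics enter the program; this is exactly the position of the man in the room.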

Seminar info

Searle's Chinese Room
Jaime Gómez
Feb. 23, 2006 12:30

Many people in AI, even in strong AI, concede that computers are not conscious, but they consider consciousness unimportant anyway. What matters is intentionality, and computers, they claim, can have intentionality. Searle holds that one advantage of his Chinese Room argument is that it does not depend on consciousness: it applies to intentionality as well. In the seminar, a definition of intentionality will be presented and the distinction between consciousness and intentionality clarified.

Last Updated ( Monday, 20 February 2006 )