In 2011, Bryan Caplan came up with an idea that has gained popularity in certain circles: the Ideological Turing Test. Caplan's original formulation, made in reference to an argument between him and Paul Krugman, went like this:
We don't have to idly speculate about how well adherents of various ideologies understand each other. We can measure the performance of anyone inclined to boast about his superior insight.
How? Here's just one approach. Put me and five random liberal social science Ph.D.s in a chat room. Let liberal readers ask questions for an hour, then vote on who isn't really a liberal. Then put Krugman and five random libertarian social science Ph.D.s in a chat room. Let libertarian readers ask questions for an hour, then vote on who isn't really a libertarian. Simple as that.

Actual ITTs are, of course, not very feasible. For one thing, they would have to be anonymous. For another, they would have to involve the participation of several "authentic" members of an ideology, and the certification of authenticity would have to be performed by someone other than the tester. For these reasons, actual ITTs are rarely, if ever, carried out.
But even if they were done, I don't think ITTs would be a good measure of how much someone understands an ideology. This is because ITTs seem relatively easy to pass using Chinese Room tactics. Unlike intelligence itself (which the original Turing Test tests for), ideology has a finite, circumscribed set of inputs and outputs.
For example, suppose I'm taking an Ideological Turing Test for Austrian economics. The questioner asks: "Why must we accept the axiom that Humans Act?" And I answer: "The action axiom is itself a self-referential proposition; the statement 'Humans act' constitutes an action. The goal of the action is the positive assertion of the action axiom; the means is the statement. The positive assertion of the action axiom can be read thus: 'This assertion of the action axiom is itself an action.' It is thus a self-referential statement. The attempt to deny the action axiom is also self-referential. It amounts to stating, 'Action does not exist; therefore, this statement is not an action.'"
Now I have no idea what the heck "Humans act" even means, and that answer, which was copied from the Mises Institute website, makes no sense to me either. I don't understand it. I'm not even sure there is something there to understand. But I could give the answer. I could fiddle with the sentence structure and word choice until it sounded more extemporaneous and less like a chapter-and-verse recitation. And I'm betting that there's a decent chance that an Austrian test administrator would pick me as the real Austrian over someone who really believes in Austrian stuff but hasn't memorized the Mises Institute website quite as carefully. And since ideologies are finite, there are only a finite number of such answers I'd have to memorize.
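The memorize-and-recite tactic above can be sketched in code. This is a minimal, purely illustrative lookup-table responder, assuming a hypothetical table of memorized question/answer pairs (the entries here are invented for the example, not a real catechism): it matches an incoming question to the closest memorized one and recites the canned answer, with zero understanding involved.

```python
# A sketch of the "Chinese Room" tactic: answer questions by fuzzy-matching
# them against a finite table of memorized answers. The table entries are
# hypothetical examples, not actual doctrine.
from difflib import SequenceMatcher

MEMORIZED_ANSWERS = {
    "why must we accept the axiom that humans act":
        "The action axiom is itself a self-referential proposition; "
        "even the attempt to deny it is itself an action.",
    "what role do prices play in the economy":
        "Market prices, arising from voluntary exchange, are what make "
        "rational economic calculation possible.",
}

def canned_reply(question: str) -> str:
    """Recite the memorized answer whose question best matches the input."""
    q = question.lower().strip(" ?!.")
    best = max(MEMORIZED_ANSWERS,
               key=lambda k: SequenceMatcher(None, q, k).ratio())
    return MEMORIZED_ANSWERS[best]

print(canned_reply("Why must we accept the axiom that humans act?"))
```

Since the table is finite, "passing" reduces to memorization plus a little surface variation, which is exactly why the test measures recall more than understanding.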
A good real-life example of someone passing a sort of ITT without understanding the ideology in question is the Sokal Hoax: physicist Alan Sokal got a gibberish paper published in a postmodern cultural studies journal without knowing the first thing about cultural studies. The reviewers, who undoubtedly rejected lots of other submissions from authors who did know quite a lot about postmodern cultural studies, were very close to Turing Test administrators, and postmodern studies is not too different from an ideology (*ducks*). In fact, I think it would be a lot easier for some version of the Postmodern Essay Generator to get a paper published than for it to pass a real Turing Test of general intelligence.
If you don't believe me, try it out yourself! Go into some political chat room that subscribes to some ideology you disagree with. Try to convince the room that one of the other chatters is a poser and not a true believer. I bet you can do it. Now ask yourself how well you really understand the ideology you just enforced.
To put it bluntly, ideologies are large parts bullshit to begin with, and so it's possible to bullshit your way through ideological tests.
But this is a bit academic. The fact is, for reasons mentioned above, actual Ideological Turing Tests are impractical. Instead, what you usually hear is people saying to an intellectual opponent: "I bet you couldn't pass an Ideological Turing Test." But since public arguments are very, very far away from an actual ITT, this is just bluster. It's a nerdy-sounding way to say "You're too dumb to understand my point."
In the real world, invoking the name of the ITT is usually just a way to insist that an opponent adopt one's preferred terminology, and/or accept certain of one's own premises, before proceeding with the argument. In other words, it is asking for a handicap in a debate. And if the challenge is accepted - if one party agrees to argue only on the other's terms, just to show how broad-minded they are - it tends to impoverish the debate as a whole. Debates are often more productive, and lead to more actual mutual understanding and learning, when each person argues on their own terms.
So I think the Ideological Turing Test should drop the "Turing" part and just call itself what it really is: an Ideological Test. It was an intriguing idea, but I think it's time to put it to rest.
There are other problems with ITTs.
For example: Suppose someone passes an ITT and then claims not to understand the ideology on which he was tested (thus claiming to have spoofed the test). Is his claim of non-understanding credible? Remember that real Turing Tests define intelligence as the ability to pass the test! Should we define "understanding" using the ITT itself?
Another problem: What does it mean to "understand" something that is logically incoherent? And aren't some parts of at least some ideologies logically incoherent?
Oh, and in case you're wondering, I DO think it's very good to try to understand the point of view, and the ideas, of one's opponents. If the desire to be able to pass a hypothetical ITT motivates you to try hard to understand your opponents' points of view, well then by all means, do it!
Adam Gurri defends the spirit of the ITT. I agree with all his points. I tend to think literally about this sort of thing, I guess. Also, I recently read Blindsight, by Peter Watts, a science fiction novel that deals with non-self-aware intelligence, and I was excited to apply that idea to a blogosphere debate. :-)