Wednesday, April 10, 2013
Nuthin' but a 'g' thang
I had a professor at Michigan who always thought in pictures. He would much rather draw a graph than write down a system of equations. As he drew the curves on the board, he would say "OK, here is the thing, and it touches this thing, and they go like this." And that would be it! And I would be sitting there scratching my head and thinking "OK, now are you going to tell us what you just did?" Because I've always been the exact opposite kind of guy; I would rather write down equations and define words for everything. Rarely do I think in pictures. On IQ tests, I do much worse on the "visual" sections.
Over the course of my academic existence, I've often observed this dichotomy. You have the Einstein-type people who seem to visualize everything, and then you have the Heisenberg-type people who would rather use the symbols. So I've always had the intuitive hypothesis that there are different types of intelligence; that different people tend to process information in different ways, whether due to habit or nature.
But then there are all those people who say that intelligence can be boiled down to a single factor, the mysterious "g" (which I assume stands for either "general intelligence" or "gangsta"). Since this went against years of casual observation, I was somewhat pleased to see the eminent Cosma Shalizi write an essay debunking the notion of "g". But then I saw this blog post defending the notion of "g", and claiming that Shalizi makes a bunch of errors. Basically, the disagreement revolves around the question of why most or all psychometric tests and tasks seem positively correlated with each other. Shalizi points out that this correlation structure will naturally lead to the emergence of a "g"-like factor, even if one doesn't really exist; his opponent points out that if no "g" exists, it should be possible to design uncorrelated psychometric tests, which so far has proven extremely difficult to do.
The latter post, by a pseudonymous blogger calling himself "Dalliard", contains a bunch of references to psychometric research that I don't know about and have neither the time nor the will to evaluate, so I'm a bit stumped. Normally I'd leave the matter at that, shrug, and go read something else, but I realized that my intuitive hypothesis about intelligence didn't really seem to be explicitly stated in either of the posts. So I thought I'd explain my conjecture about how intelligence works.
In a nutshell, it's this: What if there are multiple "g's"?
Suppose that simple mental tasks (of the kind apparently used in all psychometric tests) can be performed by a number of different but highly substitutable mental systems. In other words, suppose that any simple information-processing task can be solved using spatial modeling, or solved using symbolic modeling, or solved using some combination of the two. That would result in a positive correlation between all simple information-processing tasks, even though the two mental abilities themselves are completely unrelated.
Let's illustrate this with a simple mathematical example. Suppose the performances of subject i on tests m and n are given by:
P_mi = a + b_m * X_i + c_m * Y_i + e_mi
P_ni = a + b_n * X_i + c_n * Y_i + e_ni
Here, X and Y are two different cognitive abilities. b and c are positive constants. Assume X and Y are uncorrelated, and assume e, the error term, is uncorrelated across tests and across individuals.
In this case, computing the covariance of performance on tests m and n across a pooled sample of subjects, we get:
Cov(P_m, P_n) = b_m * b_n * Var(X) + c_m * c_n * Var(Y) > 0
So even though the two cognitive abilities are uncorrelated (i.e. there is no true, unique “g”), all tests are positively correlated, and thus a “g”-type factor can be extracted for any set of tests.
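This is easy to see in a quick simulation. Here's a minimal sketch (the number of tests, the loadings, and the noise scale are all made up for illustration): draw two uncorrelated abilities, build a battery of tests that each load positively on both, and check that every pairwise correlation comes out positive and that the first principal component, our spurious "g", loads positively on every test.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_tests = 5000, 8

# Two uncorrelated abilities, X and Y (say, "spatial" and "symbolic")
X = rng.normal(size=n_subjects)
Y = rng.normal(size=n_subjects)

# Positive loadings b_m, c_m for each test, plus independent noise e
b = rng.uniform(0.3, 1.0, size=n_tests)
c = rng.uniform(0.3, 1.0, size=n_tests)
E = rng.normal(scale=0.5, size=(n_subjects, n_tests))

# P_mi = b_m * X_i + c_m * Y_i + e_mi  (the intercept a drops out of covariances)
P = X[:, None] * b + Y[:, None] * c + E

R = np.corrcoef(P, rowvar=False)
off_diag = R[~np.eye(n_tests, dtype=bool)]
print("all pairwise correlations positive:", (off_diag > 0).all())

# First principal component of the correlation matrix: a "g"-like factor
eigvals, eigvecs = np.linalg.eigh(R)
pc1 = eigvecs[:, -1]
pc1 *= np.sign(pc1.sum())  # fix the arbitrary sign
print("first factor's share of variance:", eigvals[-1] / eigvals.sum())
print("PC1 loads positively on every test:", (pc1 > 0).all())
```

Even though the data were generated with two independent abilities, factor extraction happily hands back a single dominant factor with all-positive loadings.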
Now suppose that by luck, we did manage to find "pure" tests for X and Y. In other words:
P_xi = a + b_x * X_i + e_xi
P_yi = a + c_y * Y_i + e_yi
These tests would have no correlation with each other. But they would have positive correlations with every other test in our (large) battery of tests! So the "positive manifold" (psychometricians' name for the all-positive correlation structure between tests) would still hold, with the one zero-correlation pair attributed to statistical error. Only if we found a whole bunch of tests that each depended only on X or only on Y could we separate the "single g" model from the "two g" model. But doing that would be really hard, especially because in general test-makers try to make the various tests in a battery different from each other, not similar.
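To see the "pure tests" point concretely, here's the same kind of simulation with two hypothetical pure tests appended to a battery of mixed ones (again, all the numbers are invented): the two pure tests are essentially uncorrelated with each other, yet each one correlates positively with every mixed test, so the positive manifold is barely dented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Two uncorrelated abilities
X = rng.normal(size=n)
Y = rng.normal(size=n)

# Six "mixed" tests loading on both abilities...
b = rng.uniform(0.3, 1.0, size=6)
c = rng.uniform(0.3, 1.0, size=6)
mixed = X[:, None] * b + Y[:, None] * c + rng.normal(scale=0.5, size=(n, 6))

# ...plus two lucky "pure" tests, each loading on only one ability
pure_x = X + rng.normal(scale=0.5, size=n)
pure_y = Y + rng.normal(scale=0.5, size=n)

battery = np.column_stack([mixed, pure_x, pure_y])
R = np.corrcoef(battery, rowvar=False)

print("pure-pure correlation (should be near zero):", round(R[6, 7], 3))
print("pure_x vs. all mixed tests positive:", (R[6, :6] > 0).all())
print("pure_y vs. all mixed tests positive:", (R[7, :6] > 0).all())
```

One near-zero entry in an otherwise all-positive correlation matrix is exactly the kind of thing that gets written off as sampling noise.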
Notice that all I need for my "two-g" model to fit the data is that most of the b and c coefficients are nonzero and positive. It makes sense they'd all be positive; more of some mental ability should never hurt when trying to do some task. And the "nonzero" part comes from the conjecture that simple mental tasks can be performed by a number of different, substitutable systems. (Note: the functional form I chose has the two abilities be perfect substitutes, but that is not necessary for the result to hold, as you can easily check.)
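As a check on that last parenthetical, here's the same exercise with the two abilities as strong complements rather than perfect substitutes, using a Leontief-style min() instead of a weighted sum (my choice of functional form, purely for illustration). Because each test score is still increasing in both abilities, the positive manifold survives:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# Abilities kept positive, so "more is never worse"
X = rng.lognormal(size=n)
Y = rng.lognormal(size=n)

b = rng.uniform(0.5, 1.5, size=6)
c = rng.uniform(0.5, 1.5, size=6)

# Complements: performance is limited by the scarcer (weighted) ability
P = np.minimum(X[:, None] * b, Y[:, None] * c) + rng.normal(scale=0.2, size=(n, 6))

R = np.corrcoef(P, rowvar=False)
off = R[~np.eye(6, dtype=bool)]
print("positive manifold holds under complements too:", (off > 0).all())
```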
Update: A commenter reminds me of the time Richard Feynman discovered that while he counted numbers by internally reciting the numbers to himself, his friend counted by visualizing pictures of the numbers scrolling by. This is the kind of thing I'm thinking of.
So anyway, there's my proposed model of basic intelligence. For those of you who didn't follow it, just imagine several dozen hyperplanes, and project them all onto one hyperplane... ;-)
An addendum: Why do people care whether there is one "g" or several? According to "Dalliard", the notion of multiple types of intelligence is attractive because it suggests that "everybody could be intelligent in some way." Well, if that's what you want, then realize this: It's true! Remember that psychometric tests are simple mental tasks, but most of the mental tasks we do are complex, like computer programming or chess or writing. And for those tasks, learning and practice matter as much as innate skill, or more (for example, see this study about the neurology of chess players). Therefore, everyone can be "smart" in some way, if "smart" means "good at some complex mental task". Which, in adult American society, it typically does. So don't worry, America: We're all stupid in most of the ways that matter.
...Wait, that didn't come out right...
(Final note: Looking through "Dalliard's" blog, I see that most of it is an attempt to prove that black people are dumber than white people. Sigh. Depressing but hardly surprising. Needless to say, the fact that I addressed a "Dalliard" blog post is not intended as an endorsement of his views or his general interests...)
Update: A commenter points out that Dalliard's post does consider a theory similar to the one I outline here, which he calls the "sampling" theory. Dalliard cites some old arguments against the theory, but recognizes that those arguments are fairly weak. Dalliard also makes a good point, which is that for many applications - say, separating kids into classes based on test-taking skill - it doesn't matter whether there is one "g" or many.
Update 2: Here's some recent evidence supporting my conjecture.