
Factor Analysis of the Beta Test


Preliminary Comments, paraphrased from pp. 73, 96-97 of The g Factor, by Arthur Jensen:

The following introductory discussion of the g factor rests on the assumptions that the number of tests in an analyzed battery is sufficiently large to yield reliable factors and that the tests are sufficiently diverse in item types and information content to reflect more than a single narrow ability. These assumptions do not necessarily hold for the Beta Test. Keep these assumptions, and the caveats below, in mind when interpreting the subsequent analysis.

The first principal factor (PF1) in a principal factor analysis (the type of analysis performed below) is often interpreted as g. If there really is a g in the correlation matrix, the PF1 loadings will not be far off the mark as estimates of the variables' true g loadings. But if there really is no g in the matrix, or if g accounts for only a small part of the total variance, the PF1 can be misleading. This is unlikely in the case of mental ability tests, however, simply because it is extremely hard to make up a set of diverse mental tests that does not have a large g factor.
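
To make the extraction concrete, here is a minimal sketch in Python (with numpy) of pulling PF1 loadings out of a correlation matrix by one-step principal axis factoring. This is an illustration, not the analysis performed below: the 4 x 4 matrix R is invented, and real principal factor programs typically iterate the communality estimates.

    import numpy as np

    # Illustrative 4-test correlation matrix (invented; not Beta Test data).
    R = np.array([
        [1.00, 0.50, 0.45, 0.40],
        [0.50, 1.00, 0.55, 0.42],
        [0.45, 0.55, 1.00, 0.48],
        [0.40, 0.42, 0.48, 1.00],
    ])

    # Principal axis factoring: replace the unit diagonal with communality
    # estimates (squared multiple correlations), then read the first factor
    # off the leading eigenvector of the reduced matrix.
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, smc)

    eigvals, eigvecs = np.linalg.eigh(R_reduced)  # eigenvalues ascending
    pf1 = eigvecs[:, -1] * np.sqrt(eigvals[-1])   # PF1 loadings
    pf1 *= np.sign(pf1.sum())                     # eigenvector sign is arbitrary
    print(np.round(pf1, 2))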

Principal factor analysis can make a weak general factor look stronger than it really is. For example, it is possible for two or more uncorrelated variables, which actually have no factor in common, to deceptively show substantial loadings on the PF1, in which case the PF1 is not really a general factor. It is possible to ensure that such a "deceptive" g cannot appear by rotating the factor axes to meet "tandem criteria." No axis rotation has been performed in the analysis below.

Principal factor analysis is subject to "psychometric sampling error," i.e., having quite unequal numbers of tests that represent different factors in the test battery. A test battery composed of, say, ten memory tests, three verbal reasoning tests, and three spatial reasoning tests, would not yield a very good g if it were extracted by principal factor analysis. The overrepresentation of memory tests would contaminate the g factor with memory ability.
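
This contamination can be demonstrated directly in a toy population model. In the sketch below (all numbers invented for illustration), every test loads 0.5 on g and 0.6 on exactly one orthogonal group factor. Because the memory tests outnumber the rest ten to six, their PF1 loadings come out near 0.76, well above their true g loading of 0.5, while the verbal and spatial tests are pulled down to roughly 0.44.

    import numpy as np

    # Hypothetical battery: 10 memory, 3 verbal, 3 spatial tests.
    # True structure: every test loads 0.5 on g and 0.6 on one
    # orthogonal group factor. All values are invented.
    L = np.zeros((16, 4))
    L[:, 0] = 0.5        # column 0: g
    L[:10, 1] = 0.6      # column 1: memory group factor
    L[10:13, 2] = 0.6    # column 2: verbal group factor
    L[13:, 3] = 0.6      # column 3: spatial group factor

    # Model-implied reduced correlation matrix (communalities on the
    # diagonal); its leading eigenvector gives the PF1 loadings.
    R_reduced = L @ L.T
    eigvals, eigvecs = np.linalg.eigh(R_reduced)
    pf1 = eigvecs[:, -1] * np.sqrt(eigvals[-1])
    pf1 *= np.sign(pf1.sum())

    print(np.round(pf1[:10], 2))  # memory tests: ~0.76, inflated above 0.5
    print(np.round(pf1[10:], 2))  # verbal and spatial tests: ~0.44

This is exactly the situation the hierarchical model described next is designed to avoid: the shared memory variance is routed into a group factor rather than into g.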

A hierarchical factor model is therefore generally preferred for estimating the g factor and representing the other factors in the matrix. A two-stratum hierarchical analysis is not feasible, however, unless there are enough different kinds of tests to produce at least three group factors (with a minimum of three tests per factor).


Before Bill McGaugh performed a factor analysis of the Beta Test, "Scien" subjectively grouped the items into three categories, described below.

Thanks for letting me take a look at this test. I went through the answer sheet, looking at the construction of each item, and I have grouped them together into the sections they seem to come under:

1) Nonverbal sequence: fluid + spatial (a few of these might be as good as the Raven's; a couple of others may involve more of the spatial factor than the fluid factor).

1, 2, 4, 5, 6, 7, 8, 9, 11, 20, 35

2) Mathematical, spatial mathematical, rotational (spatial) mathematical

a) (Mathematical) 3, 23, 24, 26, 29, 30, 31, 32, 33
b) (Spatial mathematical) 27, 28, 34, 37, 38, 39, 40, 41, 42
c) (Rotational 'spatial' mathematical) 10, 19, 25, 36

3) Verbal

12, 13, 14, 15, 16, 17, 18, 21, 22

I think the mathematical and verbal sections would mirror the c factors in the SAT.


Bill McGaugh wrote:

I have been following some of this and spent some time last night factor analyzing the bmg data set...for the bmg data set, I have more than just some SAT scores...I also have reading comprehension, vocabulary, and math scores from a nationally normed test... the reading comprehension scores could "serve as a proxy for IQ" (Jensen)... there are over 100 subjects with both achievement test and beta test score pairs... I'll keep working with the data and report back here when I get the chance... by the way, last night I was using the demo of a factor analysis program for excel called winstat...see winstat.com for a 30 day trial...


Later, Bill wrote the following (the vocab factor loading of 0.01 is not a typo).

There are many ways to factor analyze data sets, and many decisions that can be made with respect to the organization of the data...I have been playing with some beta test data... I took the 41 subjects for whom I had beta test scores, sat-verbal, sat-math, vocabulary test scores, reading comprehension test scores, and math achievement test scores. I sorted them by reading comprehension scores (an arbitrary decision, but that is a good proxy for iq)... I first factor analyzed the entire data set...then the top 21...then the bottom 20... the factor loadings on the principal factor (the first column is the entire set, the second is the top 21, the third is the bottom 20):

                           Loading on principal factor
  Test                     entire 41    top 21*    bottom 20*
  beta test                  0.79        0.76        0.78
  sat-v                      0.83        0.62        0.79
  sat-m                      0.79        0.78        0.75
  vocab                      0.61        0.01        0.58
  reading comprehension      0.58        0.46        0.16
  math achievement           0.72        0.68        0.77

  * halves sorted by reading comprehension score

interpret it however you would like ;-) we need a larger n...I do have over 100 subjects with beta test scores and achievement test scores (but only 41 sat scores)...I have experimented with breaking the beta test into 3 parts, as described by scien...but nothing conclusive to report yet... if I sort into high and low groups using the beta test or any of the other tests, instead of reading comprehension, the results will vary...obviously..
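
For anyone who wants to replicate this split-half procedure, the following rough sketch shows one way to do it in Python with pandas; the file name beta_battery.csv and its column names are hypothetical stand-ins for Bill's data. Swapping the sort column (e.g., a fluid subscore or math achievement) and the split sizes reproduces the analyses reported further down.

    import numpy as np
    import pandas as pd

    def pf1_loadings(df):
        # One-step principal axis factoring, as in the earlier sketch.
        R = df.corr().to_numpy()
        smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
        np.fill_diagonal(R, smc)
        eigvals, eigvecs = np.linalg.eigh(R)
        pf1 = eigvecs[:, -1] * np.sqrt(eigvals[-1])
        return pd.Series(pf1 * np.sign(pf1.sum()), index=df.columns)

    # Hypothetical file and column names; one row per subject.
    scores = pd.read_csv("beta_battery.csv")
    ranked = scores.sort_values("reading_comp", ascending=False)
    top, bottom = ranked.iloc[:21], ranked.iloc[21:]

    print(pf1_loadings(scores))  # all 41 subjects
    print(pf1_loadings(top))     # top 21 by reading comprehension
    print(pf1_loadings(bottom))  # bottom 20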


Still later, Bill wrote:

I have 106 subjects with beta test scores and achievement test scores. I divided the beta test into three subtests (see scien's post): fluid, math, and verbal...the achievement tests were vocab, reading comprehension, and math...I factor analyzed the six tests... the loadings on the principal factor were:

  Test                 Loading on principal factor (106 scores)
  beta test, fluid       0.64
  beta test, math        0.81
  beta test, verbal      0.80
  vocab                  0.77
  reading                0.68
  math/ach               0.86


I sorted the entire group by fluid score and factor analyzed the top half and the bottom half... the first column is the top half, the second is the bottom half.

                         Loading on principal factor
  Test                 top 53 (by fluid)    bottom 53 (by fluid)
  beta test, fluid           0.21                 0.40
  beta test, math            0.67                 0.74
  beta test, verbal          0.59                 0.79
  vocab                      0.49                 0.82
  reading                    0.63                 0.72
  math/ach                   0.61                 0.88


Then I repeated the process, this time sorting by math achievement score, the test with the highest loading on the principal factor... the results:

                         Loading on principal factor
  Test                 top 53 (by math/ach)   bottom 53 (by math/ach)
  beta test, fluid           0.45                   0.45
  beta test, math            0.69                   0.64
  beta test, verbal          0.69                   0.73
  vocab                      0.31                   0.81
  reading                    0.28                   0.60
  math/ach                   0.51                   0.78

