Across 18 test batteries (427,596 people) and a targeted Project Talent reanalysis that matched subtests on reliability and length, verbal ability loaded more highly on general intelligence (g) than math, with spatial, memory, and processing-speed abilities loading lower still. A mixed-effects model controlled for test battery and year, and the within-Project-Talent comparison was restricted to 14–18-year-old white males to hold sample composition constant. This challenges the default assumption that math or spatial subtests are the purest single indicators of g.
— If verbal measures are the strongest single proxy for general intelligence, institutions may need to reconsider how they weight verbal vs math/spatial skills in admissions, hiring, and talent identification.
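The headline comparison rests on estimating each subtest's loading on a general factor. A minimal sketch of that idea, using simulated data (the loading values and subtest names below are illustrative assumptions, not estimates from the studies; the papers use factor models, for which a first principal component is a common rough proxy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # simulated test-takers

# Hypothetical true g-loadings, chosen only to mirror the claimed ordering.
true_loadings = {"verbal": 0.85, "math": 0.75, "spatial": 0.65,
                 "memory": 0.55, "speed": 0.45}

# Each subtest score = loading * g + uncorrelated specific variance.
g = rng.standard_normal(n)
scores = np.column_stack([
    lam * g + np.sqrt(1 - lam**2) * rng.standard_normal(n)
    for lam in true_loadings.values()
])

# Estimate g-loadings as correlations of each subtest with the first
# principal component of the subtest correlation matrix.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
pc1 = z @ eigvecs[:, -1]                  # scores on the largest component
est = {name: abs(np.corrcoef(z[:, i], pc1)[0, 1])
       for i, name in enumerate(true_loadings)}

for name, lam in est.items():
    print(f"{name:7s} estimated g-loading ~ {lam:.2f}")
```

With enough simulated test-takers, the recovered loadings reproduce the assumed ordering (verbal highest, speed lowest); the real analyses additionally control for battery and year with a mixed-effects model, which this sketch omits.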
Steve Stewart-Williams
2025.10.04
56% relevant
Both pieces interrogate the structure of intelligence beyond surface test scores: the existing idea argues verbal subtests best capture g, while this article highlights evidence that specific abilities (e.g., reading/writing, quantitative knowledge, processing speed) have heritable components not fully explained by g, complicating the question of how much of test performance reflects g versus domain-specific factors.
Davide Piffer
2025.09.29
80% relevant
The article contends that general knowledge (a verbal/crystallized measure) can better proxy underlying intelligence than a single reasoning test when epistemic opportunity is similar—echoing evidence that verbal measures load more strongly on g than math or speed in large batteries.
Sebastian Jensen
2025.08.08
100% relevant
A Project Talent comparison in which matched-reliability verbal subtests (punctuation, reading comprehension, word function) out-loaded math subtests on g after the other subtests were factored in.
Davide Piffer
2025.08.08
75% relevant
The article argues LLMs convert diverse problems into text and solve them via verbal reasoning, and cites philosophy majors’ strong GRE performance—both bolstering the claim that verbal ability best captures general intelligence.