According to a World IQ study, the US ranks 27th in the world with an average IQ of 98. Most people in the US have an IQ between 85 and 115.
IQ has long been the go-to way of measuring and estimating a person’s mental abilities. It has been used to understand why some kids underperform in the curriculum established for all children and it has been used to gauge aptitude and the appropriate career stream for budding teenagers.
Average IQ can also be used to gauge the intelligence of a particular region, city or country. Many studies have been conducted to do just this, the latest being in 2010. By combining and averaging the scores, we can get a clearer picture of how the countries of the world compare in terms of IQ.
What is the average IQ in the US?
The average IQ in the United States was found to be 98, giving the country an overall world rank of 27. Most people in the US have an IQ between 85 and 115.
The global list was topped by Singapore, with a 108 average IQ, while Equatorial Guinea was last on the list with an average IQ of 56.
However, what exactly does this data mean? Should governments of the world use this data as motivation to improve the education system? Or is solely basing the true intelligence of people on IQ a foolish mistake?
Let’s find out!
What is IQ? How is IQ measured?
Historically, many attempts have been made to quantify a person’s intelligence. The modern testing of intelligence began with Sir Francis Galton in 1882, when he measured the acuity of vision and hearing in his open laboratory. This work was advanced by James McKeen Cattell in 1890, who devised a mental test to examine the speed and accuracy of his subjects’ perception.
However, these methods proved to be inadequate for measuring academic achievements and subsequently intelligence.
In 1905, Alfred Binet devised the first modern IQ test, taking a very different route than Sir Francis Galton. His test was designed to determine why some children performed poorly in a curriculum taught equally to all children. It comprised knowledge-based questions and questions that required simple reasoning. To provide an external benchmark against which individual scores could be checked, he grouped the children by age; generally, he found that older children are cognitively more advanced than younger ones.
He systematically documented the age at which children were able to solve specific problems, and thus established the average performance for each level of the test. If a child could solve problems typically solved by children two years older, then that test-taker was two years ahead of the mean in mental development.
The last piece of what would come to be called the ‘intelligence quotient’ was provided by William Stern, who suggested that instead of subtracting a child’s chronological age from their mental age, the mental age should be divided by the chronological age, thus giving a quotient. Later, Lewis Terman revised the Binet test and multiplied the quotient by 100, giving us the scores with which we are familiar today (e.g., if a child’s mental age is 8 and their chronological age is 6, the calculation puts their IQ at 8/6 x 100 = 133).
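The ratio calculation above is simple enough to sketch in a few lines of Python. This is a minimal illustration of Stern and Terman's formula, not a real scoring tool; the function name is ours:

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ, as scaled by Terman: mental age divided by
    chronological age, multiplied by 100."""
    return mental_age / chronological_age * 100

# The worked example from the text: mental age 8, chronological age 6.
print(round(ratio_iq(8, 6)))   # 133
# A child performing exactly at their age level scores 100.
print(round(ratio_iq(6, 6)))   # 100
```

Note how the quotient makes 100 the natural anchor: any child whose mental age matches their chronological age scores exactly 100, regardless of age.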
The Average IQ
This test was primarily intended for children, as it was believed that mental age does not keep increasing throughout a person’s life. A 60-year-old is not measurably more cognitively advanced than a 50-year-old, so the mental-age method was unusable for measuring mental abilities in adults.
This roadblock was solved by David Wechsler, who designed a system that compared an individual’s performance against the distribution of scores for their age group. A score equal to the mean of one’s age group was assigned an IQ of 100, so the IQ of an average adult became 100, just as the average child scored 100 in Binet’s system.
The next leap came when these tests began to be used to gauge the intelligence of groups. Tests were scored in an algorithmic fashion with an answer key, removing the need for one-on-one administration by a psychologist. These tests were first carried out by the US army and subsequently spread to other areas, such as the workplace, and even as an anthropological tool for measuring the average intelligence of an entire country.
Criticism and Controversies
IQ has had its fair share of controversies and detractors. Although it is a good way to gauge the intelligence of an individual, it falls short of assessing their full range of intellect. For example, the test does not reflect an individual’s creativity or social intelligence.
IQ tests can also dramatically vary from region to region and country to country. This is largely due to the fact that IQ can be affected by external circumstances, such as a lack of nutrition in childhood, limited access to education, cultural norms in the region, the prevalence of infectious diseases, and many other factors that play a role in developing a person’s intelligence.
This line of reasoning is backed by solid research: one study found an inverse relationship between the average intelligence of a country’s population and the prevalence of infectious diseases there. In the US itself, a study found that places with a higher rate of childhood illness had a lower overall IQ.
Another problem is the difficulty of obtaining a representative sample of a country’s population, as poor sampling invariably produces misleading scores and inaccurate rankings. This was evident in a 2010 study that consistently gave African countries scores of 70 or less. Other studies have rejected the claim that African countries have a lower average IQ, attributing such an underperforming trend to flawed sampling of the population.
Although IQ is a useful way of assessing a person’s abilities, it falls short of capturing the other factors that shape a person’s intellect.
Apart from that, sampling and modeling a whole region, let alone an entire country, is an arduous task, and the resulting averages should be taken with a grain of salt, as they are only general representations extrapolated from a limited sample of data.