Every year, thousands of high school students take part in a harrowing, yet requisite process. Obligated to wake up early on what would typically be a relaxing Saturday morning, students are afforded the pleasure of spending three hours in a quiet room with nothing but a pencil and, occasionally, a calculator to keep them company. Each of these students is about to partake in a test that supposedly predicts their future success; this is the Scholastic Aptitude Test, perhaps better known by its acronym: SAT. Although this test has existed in some form for almost a century, considerable controversy has arisen over the past decade as to whether it has any true validity. Is sitting at a desk for three hours, answering multiple-choice questions with only a fifteen-minute break, truly going to evaluate college success? Likely not. The idea that a multiple-choice test can, or will, actually ascertain an individual's success in college is backed, at best, by only a modicum of evidence. Simply put: SATs do not provide an accurate or reliable indication of an individual's future college success.
The Scholastic Aptitude Test originated from the Army Alpha, a test designed to determine the IQ of new recruits. Carl Brigham, a professor at Princeton University, saw the potential benefits of such a test and began adapting it for use in college admissions. A few years later, Brigham's early version of the SAT was presented to the president of Harvard to be used to evaluate scholarship candidates. In 1938 the idea of using the SAT as a uniform exam was brought to the attention of the College Board. However, it was not until after World War II that the SAT became a test for all college applicants. Over the years the SAT has changed significantly from its humble beginnings: new sections were added and dropped, and vigorous attempts were made to address its shortcomings.