Graduate Record Examination
From Wikipedia, the free encyclopedia
The Graduate Record Examination (GRE) is a commercially run standardized test that is an admission requirement for many graduate schools, principally in the United States but also in other English-speaking countries. Created and administered by the Educational Testing Service (ETS) since 1949,[1] the exam primarily tests abstract thinking skills in the areas of mathematics, vocabulary, and analytical writing. The GRE is typically a computer-based exam administered at select qualified testing centers; however, paper-based exams are offered in areas of the world that lack the necessary technology.
In the graduate school admissions process, the level of emphasis that is placed upon GRE scores varies widely between schools and even departments within schools. The importance of a GRE score can range from being an important selection factor to being a mere admission formality.
Critics of the GRE have argued that the exam format is so rigid that it effectively tests only how well a student can conform to a standardized test-taking procedure.[2] ETS responded by announcing plans in 2006 to radically redesign the test structure starting in the fall of 2007; however, the company has since announced, "Plans for launching an entirely new test all at once were dropped, and ETS decided to introduce new question types and improvements gradually over time." The new questions have been gradually introduced since November 2007.[3]
In the United States and Canada, the general test costs $150 as of September 2009, although ETS will reduce the fee under certain circumstances; it promotes financial aid to GRE applicants who demonstrate economic hardship.[4] ETS erases all test records older than five years, although graduate programs' policies on accepting scores older than five years vary.
Structure
The exam consists of four sections: an analytical writing section and three multiple-choice sections. One multiple-choice section tests verbal skills, another tests quantitative skills, and the third is an experimental section that is not included in the reported score. Test takers do not know which of the three multiple-choice sections is the experimental one. The entire test takes about four hours.[5]
Analytical writing section
The analytical writing section consists of two different essays, an "issue task" and an "argument task". The writing section is graded on a scale of 0-6, in half-point increments. The essays are written on a computer using a word processing program specifically designed by ETS. The program allows only basic computer functions and does not contain a spell-checker or other advanced features. Each essay is scored by at least two readers on a six-point holistic scale. If the two scores are within one point, the average of the scores is taken. If the two scores differ by more than a point, a third reader examines the response.
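The two-reader rule described above is a small, well-defined procedure, sketched below in Python. The third-reader adjudication step (averaging the third score with the closer of the first two) is an assumption for illustration only; ETS does not publish its exact resolution rule.

```python
def essay_score(reader_a, reader_b, reader_c=None):
    """Combine holistic essay scores on the 0-6, half-point scale.

    If the first two readers agree to within one point, their average is
    the essay's score; otherwise a third reader is needed to adjudicate.
    """
    if abs(reader_a - reader_b) <= 1:
        return (reader_a + reader_b) / 2
    if reader_c is None:
        raise ValueError("scores differ by more than a point; third reader required")
    # Hypothetical adjudication: average the third score with whichever
    # of the first two scores it is closer to.
    closer = reader_a if abs(reader_a - reader_c) <= abs(reader_b - reader_c) else reader_b
    return (closer + reader_c) / 2

print(essay_score(4.5, 5.0))  # readers agree within a point -> 4.75
```

The within-one-point threshold is the only part of this rule the source states explicitly; everything past it is a plausible filling-in.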
Issue task
The test taker will be able to choose between two topics upon which to write an essay. The time allowed for this essay is 45 minutes.[6]
Argument task
The test taker will be given an "argument" and asked to write an essay critiquing it. Test takers are asked to consider the argument's logic and to suggest how the argument could be improved. The time allotted for this essay is 30 minutes.[7]
Verbal section
One graded multiple-choice section is always a verbal section, consisting of analogies, antonyms, sentence completions, and reading comprehension passages. Multiple-choice response sections are graded on a scale of 200-800, in 10-point increments. This section primarily tests vocabulary, and average scores in this section are substantially lower than those in the quantitative section.[8] In a typical examination, this section may consist of 30 questions, and 30 minutes may be allotted to complete the section.[9]
Quantitative section
The quantitative section, the other multiple-choice section, consists of problem solving and quantitative comparison questions that test high-school level mathematics. Multiple-choice response sections are graded on a scale of 200-800, in 10-point increments. In a typical examination, this section may consist of 28 questions, and test takers may be given 45 minutes to complete the section.[10]
Experimental section
The experimental section, which can be either a verbal, quantitative, or analytical writing task, contains new questions that ETS is considering for future use. Although the experimental section does not count toward the test-taker's score, it is unidentified and appears identical to the real (scored) part of the test. As test takers have no clear way of knowing which section is experimental, they are forced to complete this section, or risk seriously damaging their final scores.[11]
If the experimental section appears as an analytical writing task, an "issue" question will be presented without a choice between two topics. This, coupled with the fact that the scored analytical writing section is always administered first, can help the test taker deduce which section is experimental and accordingly give it less weight.[citation needed]
Scoring
Computerized adaptive testing
The common (Verbal and Quantitative) multiple-choice portions of the exam currently use computer-adaptive testing (CAT) methods that automatically change the difficulty of questions as the test taker proceeds with the exam, depending on the number of correct or incorrect answers that are given. The test taker is not allowed to go back and change the answers to previous questions, and some type of answer must be given before the next question is presented.
The first question that is given in a multiple-choice section is considered to be an "average level" question that half of the GRE test takers will answer correctly. If the question is answered correctly, then subsequent questions become more difficult. If the question is answered incorrectly, then subsequent questions become easier, until a question is answered correctly.[12] This approach to administration yields scores that are of similar accuracy while using approximately half as many items.[13] However, this effect is moderated with the GRE because it has a fixed length; true CATs are variable length, where the test will stop itself once it has zeroed in on a candidate's ability level.
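As a toy illustration of the fixed-length adaptive behaviour just described, the sketch below moves item difficulty up after each correct answer and down after each incorrect one. The step size, starting difficulty, and the noisy examinee model are all invented for illustration; the operational GRE item-selection algorithm is far more sophisticated.

```python
import random

def run_adaptive_section(ability, n_items=28, seed=0):
    """Return the sequence of item difficulties an examinee would see."""
    rng = random.Random(seed)
    difficulty = 0.0  # the first item is an "average level" question
    presented = []
    for _ in range(n_items):
        presented.append(difficulty)
        # Crude examinee model: correct when ability (plus noise) exceeds difficulty.
        correct = ability + rng.gauss(0, 0.5) > difficulty
        difficulty += 0.5 if correct else -0.5
    return presented
```

Run with a high `ability`, the difficulties climb toward that level and then oscillate around it, which is how a fixed-length adaptive test zeroes in on an examinee within a set number of items.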
The actual scoring of the test is done with item response theory (IRT). Although CAT is commonly associated with IRT, IRT is also used to score non-CAT exams: the GRE subject tests, which are administered in the traditional paper-and-pencil format, use the same IRT scoring algorithm. What CAT adds is that items are dynamically selected so that the test taker only sees items of appropriate difficulty. Besides the psychometric benefits, this avoids wasting the examinee's time on items that are far too hard or too easy, as can happen in fixed-form testing.
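The IRT scoring referred to above can be sketched with the common two-parameter logistic (2PL) model. The item parameters and the coarse grid-search estimator below are simplifications for illustration; ETS's operational calibration and scoring are not public in this form.

```python
import math

def p_correct(theta, a, b):
    """2PL model: probability of a correct answer for ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_theta(responses, grid=None):
    """Maximum-likelihood ability estimate over a coarse grid.

    responses: list of (answered_correctly, a, b) triples.
    """
    if grid is None:
        grid = [x / 10 for x in range(-40, 41)]  # theta from -4.0 to 4.0

    def log_likelihood(theta):
        total = 0.0
        for correct, a, b in responses:
            p = p_correct(theta, a, b)
            total += math.log(p if correct else 1.0 - p)
        return total

    return max(grid, key=log_likelihood)
```

Because the estimate depends only on which items were answered correctly and on those items' parameters, the same scoring machinery applies whether the items were selected adaptively (CAT) or fixed in advance, as on the paper-based subject tests.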
An examinee can miss one or more questions on a multiple-choice section and still receive a perfect score of 800. Likewise, even if no question is answered correctly, 200 is the lowest score possible.[14]
Scaled score percentiles
The percentiles of the current test are as follows:[15]
Scaled score | Verbal Reasoning % | Quantitative Reasoning %
---|---|---
800 | 99 | 94
780 | 99 | 90
760 | 99 | 86
740 | 99 | 82
720 | 98 | 77
700 | 97 | 72
680 | 96 | 68
660 | 94 | 63
640 | 91 | 58
620 | 89 | 53
600 | 85 | 49
580 | 81 | 44
560 | 76 | 40
540 | 71 | 35
520 | 65 | 31
500 | 60 | 28
480 | 55 | 24
460 | 49 | 21
440 | 43 | 18
420 | 37 | 15
400 | 31 | 13
380 | 26 | 11
360 | 21 | 9
340 | 15 | 7
320 | 10 | 5
300 | 6 | 4
280 | 3 | 3
260 | 1 | 2
240 | 1 | 1
220 | 0 | 1
200 | 0 | 0
mean | 465 | 584
Analytical Writing score | Writing %
---|---
6 | 96
5.5 | 88
5 | 77
4.5 | 54
4 | 33
3.5 | 18
3 | 7
2.5 | 2
2 | 1
1.5 | 0
1 | 0
0.5 | 0
mean | 4.1
Comparisons for "Intended Graduate Major" are "limited to those who earned their college degrees up to two years prior to the test date." ETS provides no score data for "non-traditional" students who have been out of school more than two years, although its own report "RR-99-16" indicated that 22% of all test takers in 1996 were over the age of 30.
Use in admissions
Many graduate schools in English-speaking countries (especially in the United States) require GRE results as part of the admissions process. The GRE is a standardized test intended to measure the abilities of all graduates in tasks of a general academic nature, regardless of their fields of specialization. It is supposed to measure the extent to which undergraduate education has developed an individual's verbal and quantitative skills in abstract thinking.
Unlike other standardized admissions tests (such as the SAT, LSAT, and MCAT), the use and weight of GRE scores vary considerably not only from school to school but from department to department and from program to program. Programs in the liberal arts may consider only the applicant's verbal score, while mathematics and science programs may consider only quantitative ability; however, since most applicants to mathematics, science, or engineering graduate programs have high quantitative scores, the verbal score can become a deciding factor even in these programs. Admission to graduate school depends on a complex mix of factors, including letters of recommendation, statement of purpose, GPA, and GRE scores.[16] Some schools use the GRE in admissions decisions but not in funding decisions; others use it to select scholarship and fellowship candidates but not for admissions. In some cases, the GRE may be a general requirement for graduate admissions imposed by the university, while particular departments may not consider the scores at all. Graduate schools typically provide information about how the GRE is weighed in admissions and funding decisions, along with the average scores of previously admitted students. The best way to find out how a particular program evaluates a GRE score is to contact the person in charge of graduate admissions for that specific program (rather than the graduate school in general).
Programs that involve significant expository writing often require the submission of a prepared writing sample, which is considered more useful than the analytical writing section in determining writing ability; however, the writing scores of foreign students are sometimes given more scrutiny and used as an indicator of overall comfort with and mastery of conversational English.
GRE Subject Tests
In addition to the General Test, there are eight GRE Subject Tests covering Biochemistry, Cell and Molecular Biology; Biology; Chemistry; Computer Science; Literature in English; Mathematics; Physics; and Psychology. In the past, subject tests were also offered in Economics, Revised Education, Engineering, Geology, History, Music, Political Science, and Sociology. The Revised Education and Political Science exams were discontinued in April 1998, the History and Sociology exams in April 2000, and the remaining four in April 2001.[4]
GRE and GMAT
The Graduate Management Admission Test (GMAT) is a computer-adaptive standardized test in mathematics and the English language that measures aptitude for academic success in graduate business studies. Business schools commonly use the test as one of many selection criteria for admission into an MBA program; however, many business schools also accept GRE scores.
The following are criteria of certain business schools:
- U Penn-Wharton School: Official test scores for the GMAT or GRE tests.
- Stanford: Finance - The GRE is preferred, although the GMAT will be accepted.
- NYU-Stern School: The GMAT is strongly preferred, but scores from the Graduate Record Examination (GRE) will also be accepted.
- U Chicago: For Economics - the GRE is required. For Finance - the GRE is preferred; GMAT is acceptable. For all other areas - the GRE or the GMAT are accepted.
- Berkeley-Haas: Without exception, all applicants to the Haas Ph.D. Program must submit official scores of either the Graduate Management Admission Test (GMAT) or the Graduate Record Examination (GRE).
Compared with the GMAT's emphasis on logic, the GRE measures test takers' ability more through vocabulary. This difference is reflected in each test's structure: apart from the Analytical Writing section that the two share, the GRE's verbal section has analogies, antonyms, sentence completions, and reading comprehension passages, while the GMAT has sentence correction, critical reasoning, and reading comprehension.
Also, the GMAT requires higher mathematical ability to obtain a good score. The GRE is more appealing to international MBA applicants and to applicants from non-traditional backgrounds.[17]
Preparation
A variety of resources are available for those wishing to prepare for the GRE. Upon registration, ETS provides preparation software called PowerPrep, which contains two practice tests of retired questions, as well as further practice questions and review material. Because the software replicates both the test format and questions that have actually been used, it can be useful for predicting actual GRE scores. ETS does not license its past questions to any other company, making it the only source for official retired material. ETS used to publish the "BIG BOOK," which contained a number of actual GRE questions, but this publication has been discontinued. Several companies provide courses, books, and other unofficial preparation materials.
ETS has claimed that the content of the GRE is "un-coachable"; however, many test preparation companies (such as Kaplan, Princeton Review, IMS Learning Resources, and VISU) claim that the test format is so rigid that familiarizing oneself with the test's organization, timing, and specific foci, and using process of elimination, is the best way to increase a GRE score.[18]
Testing locations
While the general and subject tests are held at many undergraduate institutions, the computer-based general test is only held at test centers with appropriate technological accommodations. Students in major cities in the United States, or those attending large U.S. universities, will usually find a nearby test center, while those in more isolated areas may have to travel a few hours to an urban or university location. Many industrialized countries also have test centers, but at times test-takers must cross country borders.
Validity
An analysis of the GRE's validity in predicting graduate school success found a correlation of .30 to .45 between the GRE and both first year and overall graduate GPA. The correlation between GRE score and graduate school completion rates ranged from .11 (for the now defunct analytical section) to .39 (for the GRE subject test). Correlations with faculty ratings ranged from .35 to .50.[19]
Criticism
Test takers complain about the strict test center rules. For instance, test takers may not use pens or bring their own scrap paper. Paper and pencils are provided at the testing center. Food and drink are prohibited in the test centers, as well as chewing gum. Personal items such as jackets and hats are subject to inspection. However, such rules are relevant to all high stakes tests, not just the GRE.
Bias
Critics have claimed that the computer-adaptive methodology may discourage some test takers, because the question difficulty changes with performance.[citation needed] For example, test takers presented with remarkably easy questions halfway into the exam may infer that they are performing poorly, which can affect their performance as the exam continues, even though perceived question difficulty is subjective. By contrast, standard testing methods may discourage students by giving them more difficult items earlier on.
Critics have also stated that the computer-adaptive method of placing more weight on the first several questions is biased against test takers who typically perform poorly at the beginning of a test due to stress or confusion before becoming more comfortable as the exam continues.[20] Of course, standard fixed-form tests could equally be said to be "biased" against students with less testing stamina, since they would need to be approximately twice the length of an equivalent computer-adaptive test to obtain a similar level of precision.[21]
The GRE has also been subjected to the same racial bias criticisms that have been lodged against other admissions tests. In 1998, the Journal of Blacks in Higher Education noted that the mean score for black test takers in 1996 was 389 on the verbal section, 409 on the quantitative section, and 423 on the analytic, while white test takers averaged 496, 538, and 564, respectively.[22] Note that simple mean score differences do not constitute evidence of bias unless the populations are known to be equal in ability, and insisting that group score differences are direct evidence of a bad test is an extreme position.[23] A more effective, accepted, and empirical approach is the analysis of differential test functioning, which examines differences in item response theory curves between subgroups; the best approach for this is the DFIT framework.[24]
There is also a bias toward students who have the financial resources to take private test-preparation classes. These classes typically result in better scores;[citation needed] however, many such companies and tutors focus solely on how to use the test's format to one's advantage, not on learning the material the exam covers.
Weak predictor of graduate school performance
The GRE is criticized for not being a true measure of whether a student will be successful in graduate school. Robert Sternberg of Tufts University claimed that the GRE general test was weakly predictive of success in graduate studies in psychology.[citation needed] The claim of weak predictability might be related to the mathematics portion of the GRE general test, because a good foundation in mathematics is important for understanding advanced statistics; in some branches of psychology, however, the application of statistics is minimal.
The ETS published a report ("What is the Value of the GRE?") that points out the predictive value of the GRE on a student's index of success at the graduate level.[25] As mentioned earlier, the validity coefficients range from .30 to .45 between the GRE and both first year and overall graduate GPA.[26]
Historical susceptibility to cheating
In May 1994, Kaplan, Inc. warned ETS, in hearings before a New York legislative committee, that the small question pool available to the computer-adaptive test made it vulnerable to cheating. ETS assured investigators that it was using multiple sets of questions and that the test was secure. This was later discovered to be incorrect.[27]
In December 1994, prompted by student reports of recycled questions, Jose Ferreira, then Director of GRE Programs for Kaplan, Inc. and later CEO of Knewton, led a team of 22 staff members deployed to nine U.S. cities to take the exam. Kaplan then presented ETS with 150 questions, representing 70-80% of the GRE.[28] According to early news releases, ETS appeared grateful to Kaplan for identifying the security problem. However, on December 31, ETS sued Kaplan for violation of federal electronic communications privacy law, copyright infringement, breach of contract, fraud, and breach of the confidentiality agreement signed by test takers on test day.[29] On January 2, 1995, an out-of-court agreement was reached.
Additionally, in 1994 the scoring algorithm for the computer-adaptive form of the GRE was discovered to be insecure. ETS acknowledged that Kaplan, Inc. employees, led by Jose Ferreira, reverse-engineered key features of the GRE scoring algorithm. The researchers found that a test taker's performance on the first few questions of the exam had a disproportionate effect on the final score. To preserve the integrity of scores, ETS revised its scoring and now uses a more sophisticated algorithm.
Plans for the revised GRE
In 2006, ETS announced plans to enact significant changes in the format of the GRE. Planned changes for the revised GRE included a longer testing time, a departure from computer-adaptive testing, a new grading scale, and an enhanced focus on reasoning skills and critical thinking for both the quantitative and verbal sections.[30]
On April 2, 2007, ETS announced the decision to cancel plans for revising the GRE.[31] The announcement cited concerns over the ability to provide clear and equal access to the new test as the reason for the cancellation. ETS did state, however, that it plans "to implement many of the planned test content improvements in the future," although exact details regarding those changes have not yet been announced.
Changes to the GRE took effect on November 1, 2007, as ETS started to include new types of questions in the exam. The changes mostly center on "fill in the blank" questions for both the mathematics and vocabulary sections that require the test taker to supply an answer directly rather than choose from a multiple-choice list. ETS currently plans to introduce two of these new question types in each quantitative or vocabulary section, while the majority of questions will be presented in the regular format.[32]
In January 2008, the reading comprehension passages within the verbal sections were reformatted: passages' "line numbers will be replaced with highlighting when necessary in order to focus the test taker on specific information in the passage" to "help students more easily find the pertinent information in reading passages."[33]
GRE prior to October 2002
Prior to October 2002, the GRE had a separate Analytical Ability section which tested candidates on logical and analytical reasoning abilities. This section has now been replaced by the Analytical Writing portion.
References
- ^ Alternative Admissions and Scholarship Selection Measures in Higher Education.
- ^ Princeton Review, Cracking the GRE, 2007 edition (2006), p. 19. ISBN 978-0375765513.
- ^ GRE General Test to Include New Questions
- ^ MBA Channel: "GRE:Wharton joins the club" 31 July 2009
- ^ GRE Test Content
- ^ GRE Test Content
- ^ GRE Test Content
- ^ PowerScore GRE Preparation. Retrieved February 4, 2007, from PowerScore GRE Preparation.
- ^ GRE Test Content
- ^ GRE Test Content
- ^ GRE Test Content
- ^ Princeton Review, Cracking the GRE, 2007 edition (2006), p. 19. ISBN 978-0375765513.
- ^ Weiss, D.J., & Kingsbury, G.G.(1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21, 361-375.
- ^ http://www.eduers.com/gre/exam.htm
- ^ GRE: Guide to the Use of Scores 2007-08. Retrieved October 25, 2007, from Guide to the Use of Scores 2007-08.
- ^ Mission GRE, which offers GRE tests and an admission prediction tool
- ^ MBA Channel: "GRE: Wharton joins the club" 31 July 2009
- ^ Princeton Review, Cracking the GRE, 2007 edition (2006), p. 19. ISBN 978-0375765513.
- ^ Kuncel, N. R., Hezlett, S. A. and Ones, D. S. (2001). A comprehensive meta-analysis of the predictive validity of the Graduate Record Examinations: Implications for graduate student selection and performance. Psychological Bulletin, 127 (1), 162-181. [1]
- ^ "Testing service cancels February GRE"
- ^ Weiss, D.J., & Kingsbury, G.G.(1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21, 361-375.
- ^ "Estimating the Effect a Ban on Racial Preferences Would Have on African- American Admissions to the Nation's Leading Graduate Schools." The Journal of Blacks in Higher Education, No. 19. (Spring, 1998), pp. 80–82.
- ^ The Achievement Gap: Test Bias or School Structures? National Association of Test Directors 2004 Symposia [2]
- ^ Oshima, T. C., & Morris, S. B. (2008). Raju's Differential Functioning of Items and Tests (DFIT). Educational Measurement: Issues and Practice, 27(3), 43-50.
- ^ http://www.ets.org/Media/Tests/GRE/pdf/grevalue.pdf
- ^ Kuncel, N. R., Hezlett, S. A. and Ones, D. S. (2001). A comprehensive meta-analysis of the predictive validity of the Graduate Record Examinations: Implications for graduate student selection and performance. Psychological Bulletin, 127 (1), 162-181. [3]
- ^ http://www.nytimes.com/1997/09/28/us/giant-of-exam-business-keeps-quiet-on-cheating.html?sec=&spon=&pagewanted=all
- ^ http://www.nytimes.com/1994/12/16/us/computer-admissions-test-found-to-be-ripe-for-abuse.html?scp=1&sq=Ripe%20for%20abuse&st=cse
- ^ http://articles.latimes.com/1995-01-01/news/mn-15369_1_educational-testing-service
- ^ Comparison Chart of GRE Changes
- ^ Plans for the Revised GRE Cancelled
- ^ GRE General Test to Include New Question Types in November
- ^ Revisions to the Computer-based GRE General Test in 2008
See also
- List of admissions tests
- Business School
- Graduate school
- Law School
- Medical School
- SAT
- ACT (test)
- LSAT
- MCAT
- GMAT
- TOEFL
- Licensure
- Master's degree
- Doctorate degree
- First professional degree
- Professional degree
- Terminal degree
External links
- Educational Testing Service
- What is GRE? Glasgow University Help Page
- Institution Code List – List of institutions (with code numbers) that receive GRE scores
- GRE Practice test