Psychometric Entrance Test

The Psychometric Entrance Test (PET, colloquially known in Hebrew as "the Psychometric"—ha-Psikhometri, הפסיכומטרי) is a standardized test in Israel, generally taken as a higher education entrance exam. The PET covers three areas: quantitative reasoning, verbal reasoning and the English language. It is administered by the Israeli National Institute for Testing and Evaluation (NITE) and is heavily weighted in university admissions.

The test may be taken in Hebrew, Arabic, Russian, French, Spanish, or combined Hebrew/English. There are generally five test dates each year, in February, April, July, October, and December. Hebrew may be taken on any date; Arabic on four dates; Russian and combined Hebrew/English on two dates; and French and Spanish on one date. Taking the test on two consecutive dates is not allowed and results in the test being disqualified. The results are valid for university admission for seven years.

Purpose

According to the National Institute for Testing and Evaluation,

The Psychometric Entrance Test (PET) is a tool for predicting academic performance, and is used by institutions of higher education to screen applicants for the various departments. The test ranks all applicants on a uniform scale and, compared to other admissions tools, is less affected by differences in applicants' backgrounds or other subjective factors.
A large body of research demonstrates the high predictive ability of the Psychometric Entrance Test. In general, students who received high Psychometric Entrance Test scores are more successful in their academic studies than students who received low scores. In addition, of all the screening tools available to institutions of higher education, the combination of the Psychometric Entrance Test and the matriculation exams has proven to have the best predictive ability.
The Psychometric Entrance Test is not a perfect tool. While it is generally able to predict academic success, there may be a small number of examinees who do not do well on the test but nonetheless succeed in their studies, and vice versa. Neither is the test a direct measure of such factors as motivation, creativity, and diligence, which are definitely related to academic success – although some of these elements are measured indirectly, by both the Psychometric Entrance Test and the matriculation exams.[1]

Structure

The test is divided into eight sections, each typically containing 20 to 23 multiple-choice questions of equal weight, with 20 minutes allotted per section (for a total time of 2 hours and 40 minutes). Of the eight sections, only six actually factor into the final test score: two quantitative reasoning sections, two verbal reasoning sections and two English sections. The other two sections (colloquially known as the "pilots"), which may be of any of the three types, do not affect the test score. They serve as a form of quality control: trying a question out on real examinees ensures that it is fair and measures its degree of difficulty before it is included as a score-affecting question on a later date. Knowing how challenging each question is lets the NITE reliably order the questions in a given section by increasing difficulty, and makes it possible to factor previous test-takers' performance on the same questions into the score of a current test-taker. This helps minimize unfair systematic differences between one test date and another, such as the overall difficulty of the questions or the overall aptitude of the people taking the test. The NITE strongly encourages test-takers to approach all test sections with equal seriousness.
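
The idea behind the pilot sections can be illustrated with a minimal sketch in Python. The code below is not NITE's actual procedure; it merely shows, under illustrative assumptions, how an item's difficulty can be estimated from pilot responses and how a simple linear equating can map raw scores from one test form onto the scale of another.

    def item_difficulty(pilot_responses):
        """Proportion of pilot examinees answering the item correctly
        (a higher value means an easier item). pilot_responses is a list of 0/1."""
        return sum(pilot_responses) / len(pilot_responses)

    def linear_equating(raw_score, ref_mean, ref_sd, new_mean, new_sd):
        """Map a raw score on a new form onto a reference form's scale by
        matching means and standard deviations (simple linear equating)."""
        z = (raw_score - new_mean) / new_sd
        return ref_mean + z * ref_sd

    # Illustrative data: answers (1 = correct) given by pilot examinees to one trial item.
    pilot_responses = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
    print(item_difficulty(pilot_responses))  # 0.7 -> a fairly easy item

    # Illustrative equating: a raw score of 95 on a slightly harder form.
    print(linear_equating(95, ref_mean=100, ref_sd=15, new_mean=97, new_sd=15))  # 98.0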

According to the NITE, the quantitative reasoning sections jointly determine 40% of the final score, the verbal reasoning sections another 40%, and the English sections the remaining 20%. Use of calculators, alarm clocks, cellular telephones, beepers, electronic instruments of any kind, dictionaries, books, papers or any other study aids is strictly prohibited.
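
As a worked illustration of this weighting, the following Python sketch combines hypothetical section scores, each on the 50–150 scale described below, according to the 40/40/20 split. The linear mapping from the weighted average onto the 200–800 range is an assumption made for illustration; NITE does not publish its score conversion in this form.

    def weighted_average(quantitative, verbal, english):
        """Combine the three domain scores (each on the 50-150 scale)
        using the 40% / 40% / 20% weighting described by the NITE."""
        return 0.4 * quantitative + 0.4 * verbal + 0.2 * english

    def illustrative_overall_score(quantitative, verbal, english):
        """Map the weighted 50-150 average linearly onto the 200-800 range.
        This mapping is an illustrative assumption, not NITE's published formula."""
        avg = weighted_average(quantitative, verbal, english)
        return 200 + (avg - 50) * 6

    print(illustrative_overall_score(120, 110, 100))  # weighted average 112.0 -> 572.0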

Quantitative reasoning

Quantitative reasoning sections typically contain 20 questions, leaving an average of one minute per question for those who wish to answer them all in the allotted time. They examine the ability to use numbers and mathematical concepts in solving quantitative problems, as well as the ability to analyze data presented in different ways, such as in table or graph form. Only basic mathematical knowledge (the material studied up to the 9th–10th grades in most Israeli high schools) is needed, though that still allows a diverse range of topics, including geometry, algebra, percentages, proportions and elementary combinatorics.

Though this is true of the other two section types as well, quantitative reasoning questions in particular can often be tackled by several different approaches. One approach that is useful across many types of questions takes advantage of the multiple-choice nature of the test: substituting the given answer choices into the question and checking whether the result "works out", or working by elimination. Higher-difficulty questions are sometimes phrased in such a way as to make this approach impractical. For example, a question may require the calculation of two independent values X and Y and then ask for their sum; any given sum could result from infinitely many pairs of X and Y, so without analyzing the constraints on X and Y it is impossible to tell whether a given sum could fit them.
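
The back-substitution strategy can be demonstrated on a made-up question of roughly the appropriate level. The question, the answer choices and the checking function in this Python snippet are all hypothetical; the point is the mechanics of testing each printed option instead of solving the problem algebraically.

    # Hypothetical question: "A price rose by 25% and is now 60 shekels.
    # What was the original price?"
    choices = [40, 44, 48, 52]

    def satisfies_question(original_price):
        """Check whether a candidate answer is consistent with the question."""
        return original_price * 1.25 == 60

    # Back-substitution: try each printed choice rather than solving for the answer.
    for choice in choices:
        if satisfies_question(choice):
            print(f"{choice} works")          # prints: 48 works
        else:
            print(f"{choice} is eliminated")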

Conversely, some types of questions can be tackled effectively with methods above the 9th–10th grade level, such as more advanced combinatorics, or with methods generally not taught in Israeli high schools at all, such as modular arithmetic. A study conducted by the NITE on groups of students in eight pre-academic preparatory institutes during the 1992–1993 academic year indicated that performance gains from practice and external coaching are greatest in the quantitative reasoning section, averaging about three tenths of a standard deviation.[2]

Verbal reasoning

Verbal reasoning sections typically contain 23 questions, leaving an average of approximately 52 seconds per question for those who wish to answer them all in the allotted time. They examine verbal abilities necessary for academic studies: vocabulary, logical thought processes, the ability to analyze and understand complex texts, and the ability to think clearly and methodically. The main reason less time is given per question than in the quantitative sections is that questions early in the verbal section often take far less time to answer than even the easiest quantitative questions. The first few questions typically amount to a vocabulary quiz, and even when failing to recognize a word it takes only seconds to realize the question is a lost cause and skip it, whereas the difficulties of a quantitative question are usually much less immediately apparent and often invite repeated attempts. Apart from vocabulary questions, the verbal reasoning section contains questions based on deductive and inductive reasoning, analogies, complex multipart sentence completion and reading comprehension.

A peculiarity of the Hebrew verbal reasoning section among similar tests (such as the SAT) is "Letter Substitution". The majority of nouns and verbs in Hebrew are constructed by combining a template with a root; for example, the template XaXeXet is frequently used to indicate some type of disease, and the root K.TS.R. (ק.צ.ר), meaning "short", substituted into it yields "katseret" (קצרת), the Hebrew term for asthma (literally, shortness of breath). A Letter Substitution question consists of four sentences, each with one word whose root has been replaced by the letters Pe, Tet, Lamed, in that order. (This fictitious root has led to these questions being informally known as "petel questions"; "petel" means 'raspberry'.) Three of these words share the same root, and the objective is to find the odd one out, usually by figuring out the original root common to the other three via associative thinking. This is often complicated by one or several "distractor" roots which happen to make sense in the context of two of the sentences, but not all three.

A study conducted by Tamar Kenet-Cohen and Shmuel Bruner on undergraduate students at six Israeli universities found that, among the section's item types, analogies make the highest marginal contribution to the verbal reasoning section's ability to predict academic performance. Reading comprehension and inductive/deductive reasoning items were found to make a reasonable contribution, with the latter, generally the most difficult item type, having a notably high predictive contribution for students in highly selective undergraduate programs. Results concerning the contribution of vocabulary and sentence completion items were inconclusive; letter substitution questions, generally the easiest item type, were found to make a negative overall contribution.[3]

English

English sections typically contain 22 questions, leaving an average of approximately 54 seconds per question for those who wish to answer them all in the allotted time. They test the applicant's proficiency in English as reflected, among other things, by their vocabulary and their ability to read and understand complex sentences and texts at an academic level. The English sections are fundamentally similar to the verbal reasoning sections insofar as they concern skills related directly to command of the language; question types include vocabulary, restatements, sentence completion and reading comprehension. The difficulty level of the questions themselves is considerably lower than that of the equivalent Hebrew questions in the verbal section, since English is a foreign language for most examinees. Unlike the verbal section, the English section does not contain questions involving reasoning, analogies and the like, focusing solely on language skills.

Scores

Applicants can usually view their online score report approximately five weeks after the test is administered, though longer delays are not unheard of (the official figure is 45 days). Such delays are characteristic of busier test dates, particularly the April date, which is the latest date most Israeli academic institutions accept for application to the fall semester. The scores are also sent to the applicants separately by mail and forwarded to several major universities: Ben-Gurion University of the Negev, Bar-Ilan University, the University of Haifa, the Hebrew University of Jerusalem, Tel Aviv University and the Israel Institute of Technology in Haifa (the Technion). Upon registering for the test, an applicant may specify additional higher-education institutions to which the score should be sent.

The distribution of PET scores is commonly believed to be a normal distribution with a mean of 500 and a standard deviation of 100. This appears to be a misconception, or at least not precisely true: percentile-rank statistics released by the NITE show that significantly fewer people receive scores below 500 than above it, and the mean is actually closer to 540. In fact, the scores, at least in part, do not appear to form a true normal distribution at all (see "Percentile ranks" below).
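
One quick way to see the discrepancy is to compare what a normal distribution with mean 500 and standard deviation 100 would predict against the published percentile ranks (see the table below). In the Python sketch that follows, the value 36 is taken from NITE's table, while the other figure follows purely from the assumed distribution.

    from statistics import NormalDist

    assumed = NormalDist(mu=500, sigma=100)

    # Under the commonly assumed N(500, 100), a score of 500 would sit at the
    # 50th percentile...
    print(round(assumed.cdf(500) * 100))   # 50

    # ...but NITE's published table places a score of 500 at roughly the 36th
    # percentile, i.e. about 64% of examinees score 500 or higher, consistent
    # with a mean closer to 540.
    published_percentile_at_500 = 36
    print(published_percentile_at_500)     # 36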

The scores 800 and 200 are absolute and are reserved for applicants who have answered all questions correctly and none of them correctly, respectively. The latter is exceedingly improbable: on an average test of 168 questions, 126 of which affect the score, even guessing every question yields an expected 42 correct answers. The probability of guessing all the questions incorrectly is (3/4)^168 ≈ 10^−21, almost as unlikely as winning the Israeli lottery draw three times in a row.
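
The arithmetic behind these figures can be checked directly, assuming 168 four-choice questions answered purely at random; the numbers in this Python snippet are those quoted above, not an independent measurement.

    # Assumption: 168 questions, each with 4 answer choices, answered at random.
    questions = 168
    p_correct = 1 / 4

    expected_correct = questions * p_correct
    print(expected_correct)                 # 42.0 correct answers expected

    # Probability of getting every single question wrong when guessing.
    p_all_wrong = (1 - p_correct) ** questions
    print(f"{p_all_wrong:.2e}")             # ~1.02e-21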

Each of the three areas of the test is given a separate score on a scale of 50–150 (with 50 and 150, again, as absolute scores). An institution focusing on the exact sciences will often assign additional weight to an applicant's quantitative reasoning score. The English score determines the extent to which an undergraduate may be required to take additional English courses during their studies: a score of 83 or below prevents acceptance into most Israeli universities regardless of the overall score, while a score of 134 or above exempts the applicant from any such courses.
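
The effect of the English subsection score can be summarized as a simple threshold rule. The Python sketch below encodes only the two thresholds mentioned above; the middle band is a single placeholder category, since the actual course levels between those thresholds vary by institution.

    def english_placement(english_score):
        """Rough placement by English subsection score (50-150 scale).
        Only the 83 and 134 thresholds are documented here; the middle band
        is a placeholder, as course requirements differ between institutions."""
        if english_score <= 83:
            return "below the minimum accepted by most Israeli universities"
        if english_score >= 134:
            return "exempt from further English courses"
        return "required to take some English courses (level varies by institution)"

    print(english_placement(140))  # exempt from further English courses
    print(english_placement(100))  # required to take some English courses (level varies by institution)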

Percentile ranks and their corresponding PET scores

Percentile   Overall score       Percentile   Subsection score
98th         ≥725                98th         ≥145
95th         ≥700                95th         ≥140
91st         ≥675                91st         ≥135
85th         ≥650                85th         ≥130
78th         ≥625                78th         ≥125
70th         ≥600                71st         ≥120
62nd         ≥575                63rd         ≥115
53rd         ≥550                55th         ≥110
45th         ≥525                46th         ≥105
36th         ≥500                38th         ≥100
28th         ≥475                31st         ≥95
21st         ≥450                24th         ≥90
15th         ≥425                17th         ≥85
10th         ≥400                12th         ≥80
6th          ≥375                7th          ≥75
3rd          ≥350                3rd          ≥70

Source:[4]

Curiously, the percentile ranks for scores between 350 and 525 seem to follow a quadratic, rather than normal, cumulative distribution. The formulas tying the scores (S) and percentile ranks (P) in that range appear to be

P = (S − 300)(S − 275) / 1250, or equivalently, S = 287.5 + 12.5·√(8P + 1),

which implies that the probability density function in that range is linear. Note that this trend does not continue upwards past the mean (~540); nor can it continue indefinitely downwards past the score of 350, as that would place the 0th percentile at a score of 300, and some applicants do receive scores between 200 and 300.
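
This conjectured fit can be verified directly against the published table values; the formulas in the following Python snippet are the ones given above, derived from the table itself rather than from any NITE documentation.

    import math

    def percentile_from_score(score):
        """Conjectured quadratic relation for overall scores between 350 and 525."""
        return (score - 300) * (score - 275) / 1250

    def score_from_percentile(percentile):
        """Inverse of the conjectured relation."""
        return 287.5 + 12.5 * math.sqrt(8 * percentile + 1)

    # Published (score, percentile) pairs from the table above.
    table = [(350, 3), (375, 6), (400, 10), (425, 15),
             (450, 21), (475, 28), (500, 36), (525, 45)]

    for score, percentile in table:
        assert percentile_from_score(score) == percentile
        assert round(score_from_percentile(percentile)) == score
    print("The quadratic formula reproduces every table value between 350 and 525")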

Claims of cultural bias

Nearly half of Arab students who passed their matriculation exams failed to win a place in higher education because they performed poorly in the psychometric test, compared to 20% of Jewish applicants. Khaled Arar, a professor at Beit Berl College, claimed that the psychometric test is culturally biased against Arab Israeli students. “The gap in psychometric scores between Jewish and Arab students has remained steady—at more than 100 points out of a total of 800—since 1982. That alone should have raised suspicions”, he said.[5]

However, a 1986 study found negligible differences in construct or predictive validity across cultural groups, and its findings appeared to be more consistent with the psychometric position than with the cultural-bias position.[6]

References

  1. "The Test". National Institute for Testing and Evaluation. Retrieved 7 December 2010.
  2. Allalouf, Avi (April 1996). The Effect of Coaching on the Predictive Validity of Scholastic Aptitude Tests (PDF). American Educational Research Association Annual Meeting. Jerusalem: National Institute for Testing and Evaluation (NITE). pp. 9–10. Retrieved 7 December 2010.
  3. Kenet-Cohen, Tamar; Bruner, Shmuel (2002). A Comparative Inspection of Item Types in the Psychometric Entrance Test (Verbal Reasoning Section) from a Predictive Validity Point of View (PDF) (in Hebrew). Jerusalem: National Institute for Testing and Evaluation.
  4. "Interpreting the Psychometric Test Scores" (PDF). National Institute for Testing and Evaluation.
  5. Cook, Jonathan (11 April 2009). "Israel's Arab students cross to Jordan – Academic hurdles block access to universities". Atlantic Free Press. Retrieved 7 December 2010.
  6. Zeidner, Moshe (September 1986). "Are scholastic aptitude tests in Israel biased towards Arab college student candidates?". Higher Education. 15 (5): 507–522. ISSN 0018-1560.
