Graduate Admissions: Clinical Psychology in the 1970s


Roger K. Blashfield, PhD

Pennsylvania State University

This unpublished paper was written while I was an assistant professor at Penn State. I was teaching a graduate course on clinical decision making (à la Goldberg, Wiggins, and Dawes). I wanted a data set to use as an example and decided to collect data on graduate admissions. I knew that my graduate students could easily understand the variables in this data set. The report that follows was written in 1974 and was distributed to all of the faculty in the psychology department at Penn State. The report was retyped in 2004 with editorial changes to meet the needs of modern readers.





 

This past year I had applied to numerous universities for graduate study in the area of clinical psychology. I was not accepted by any school. I had realized that the competition was great in psychology, but I was terribly confused as to the results of my applications. The most confusing part was not that I had been rejected, but that there were those who were accepted who I felt were considerably less qualified than I. My qualifications, I felt, were very good except for my GRE scores. As an independent student, for five years, I had to work half time to support myself which took time away from studies. In addition, I did volunteer work and also worked on an independent research project in the area of sleep and dreams. This also cut into my time. In spite of all of this extra work, I maintained good grades. Upon taking the GRE's, I had no time to practice, etc. All of these facts I feel were not taken into consideration. Many students, who live at home, having no need to work, and they have higher grades and more time to practice test taking. But do they show as much motivation as I? I feel not.

 

– Applicant –

 

 

 

 


I hope that you can do something about all this (the admissions process) to make it better. I don't think you can, but I wish you could.

 

– Faculty member –

 

 

 


The idea that a few people, sitting around a table drinking coffee and looking at pieces of paper, can make decisions that drastically change the lives of other people is extremely frustrating.

 

– Applicant –

 

  Graduate admissions is a frustrating process. In addition, graduate admissions is costly, time consuming, important to applicants and programs, possibly invalid, and even controversial. Being an applied clinical psychologist with interests in measurement and decision making, I thought that the study of graduate admissions could be useful. My aims in the project were something like the following: (1) to create a large data bank on applicants to our graduate programs, (2) to formulate a statistical model of the decisions made by the faculty admissions committee, and (3) to gather data that might comment on the “validity” of the admissions process. The sections of this report will reflect my aims in this study.

 

Description of the Applicants

 

  In 1974, there were 888 individuals who started applications to one of the seven graduate programs in psychology at Penn State. The seven graduate programs were in clinical, developmental, general experimental, industrial/organizational, perception, physiological, and social. Of the 888 individuals who started applications, only 537 completed all of the necessary forms. In order to obtain an entering class of 24 graduate students, 52 of these 537 applicants were sent offers. Thus, counting only the students who finally enrolled, the selection ratio for graduate admissions in 1974 at Penn State was 4.5% (24/537).

  I gathered data on all completed applications to Penn State's programs in 1973 and 1974. To select the variables to be used in coding the applications, I interviewed the faculty members of the admissions committee using a "twenty questions" format. I brought three folders of previous applicants to the program. I told each faculty member that he/she could ask me questions about the applicants associated with the folders, but could not read the folders per se. In particular, I would answer any question the faculty member asked as long as the answer could be assigned a numerical code (e.g., a question like "What did the writer of the first letter of reference say?" was an illegal question since the answer could not be assigned a numerical code). I recorded the questions that I was asked. When the faculty member felt that he/she had sufficient information to make an admissions decision, the process was stopped.

  Using this procedure, I formulated 34 variables that could be coded from an applicant's folder and could be relevant to an admissions decision. These 34 variables represented information from four sections of the application: the personal information sheet, the GRE scores, the applicant's undergraduate transcript, and letters of recommendation. Table 1 lists the 34 variables and the means on these variables for the 1973 and the 1974 applicants.

  In terms of general information, 34% of the applicants were female; their mean age was 22; 69% of the applicants came from the eastern United States (zip codes starting with 0, 1, or 2); and 66% of the applicants wished to specialize in clinical psychology. On the GREs, the mean scores for the applicants were 600 (verbal), 608 (quantitative), and 586 (advanced). The mean undergraduate GPA of the applicants was 3.44 (with a mean GPA of 3.61 for the last two years of college). The mean number of psychology courses taken by the applicants was 11. The letters of recommendation used at Penn State ended with a series of five-point rating scales on various dimensions (e.g., originality, familiarity with the research literature, etc.). When averaged across all dimensions, 68% of the applicants received the highest possible ratings (a mean across the dimensions of 4.5 or greater).

 

 My 1974 conclusion about the data reported in Table 1 is that the "average applicant" to one of Penn State's graduate psychology programs was quite good. The dilemma of the admissions committee in 1974 was that most of the applicants were quite good, yet over 90% of these good applicants would be sent rejection letters. Graduate education is a costly, labor-intensive process. The faculty could not accept every applicant who looked "good" without making graduate courses so large that the quality of training would suffer.

  

 


Table 1

 

Characteristics of the Applicants

 

                                                                        1973    1974

Personal Statement Information

  Age                                                                   22.4    22.6
  Sex (% female)                                                        33%     36%
  First digit of zip code (% with 0, 1, or 2)                           67%     70%
  Marital status (% single)                                             80%     81%
  Degrees beyond B.S. or B.A. (%)                                       12%     17%
  Graduate program to which applicant applied (%)
    Clinical                                                            64%     69%
    Developmental                                                        8%      4%
    General experimental                                                15%     13%
    Industrial/Organizational                                            4%      6%
    Perception                                                           3%      0%
    Physiological                                                        1%      4%
    Social                                                               5%      4%

Test Results

  GRE verbal                                                            607     592
  GRE quantitative                                                      617     596
  GRE advanced psychology                                               591     585

Transcript Information

  Total # of A's in all courses                                         16.6    18.1
  Total # of courses                                                    39.6    39.8
  # of A's in last two years                                             5.9     5.7
  # of courses in last two years                                         9.9     9.1
  # of A's in psychology courses                                         6.6     7.6
  # of courses in psychology                                            10.7    12.0
  # of A's in math and stat                                               .7      .9
  # of courses in math/stat                                              2.4     2.2
  # of courses in social sciences                                        4.7     4.0
  # of courses in hard sciences                                          3.9     4.1
  # of breaks in undergrad education (e.g., transferred universities)     .4      .3
  % of applicants from most selective undergrad universities            24.2%   33.3%
  % of applicants from universities with Top 20 psych depts             33.1%   28.3%

Letters of Reference Ratings (% with highest possible rating)

  Originality                                                           45%     46%
  Adequacy of scientific background                                     40%     38%
  Adequacy of ability for research                                      49%     47%
  Familiarity with research literature                                  29%     33%
  Ability to organize scientific data                                   46%     50%
  Motivation to obtain PhD                                              65%     62%
  Overall estimate of future performance                                26%     31%
  Research assistant                                                    50%     54%
  Teaching assistant                                                    48%     54%
  % of applicants in which both letter writers were psychologists       76%     82%
  Length of letter in inches                                            11.9    12.6

Statistical Model of Decisions by Admissions Committee

 

  In the 1960s and 1970s, an exciting new area of research was the use of statistical models to estimate the decisions made by clinical psychologists and other professionals. The literature (see Wiggins, 1975 for a superb description of this literature) generally supported three conclusions: (1) experts generally believed that their decision processes were complicated, involved looking at a large number of variables, and were predicated on an understanding of the context of the decisions; (2) however, empirical studies consistently showed that simple linear statistical models, such as regression equations using a small number of variables, could predict the decisions of experts quite accurately (e.g., statistical models developed to predict clinicians' diagnoses from MMPI profiles could predict the resulting decisions about as well as could be expected given the reliability of the clinicians); and (3) once the statistical models were created, the decisions estimated by these models were more valid than the actual decisions of the experts (i.e., the model of the man is more valid than the man).
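
  As a minimal illustration of this paradigm in modern terms, the sketch below fits a linear "model of the judge" by ordinary least squares. All of the data, cue names, and weights are simulated purely for illustration; none of this is the Penn State data.

    # Hypothetical sketch: recovering a judge's implicit weights by least
    # squares. Cues, weights, and judgments are all simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    n_applicants = 200

    # Three simulated cues per applicant (think: GRE-V, GRE-Q, GPA).
    cues = rng.normal(size=(n_applicants, 3))

    # A simulated judge who weights the cues but is somewhat unreliable.
    implicit_weights = np.array([0.6, 0.4, 0.3])
    judgments = cues @ implicit_weights + rng.normal(scale=0.5, size=n_applicants)

    # Least squares recovers the judge's implicit weighting scheme.
    recovered, *_ = np.linalg.lstsq(cues, judgments, rcond=None)
    print("Recovered weights:", np.round(recovered, 2))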

  The second part of my analysis of the admissions process at Penn State was to generate a statistical model of the decisions made by the admissions committee and then see how well that statistical model worked.

  Before describing the statistical model, some comments are needed regarding the procedures used by the admissions committee at Penn State. The application materials were requested by applicants. When completed, these materials were returned to the department. Three secretaries had the task of sorting the materials into manila folders and noting when an application was complete. An additional complicating step was that applicants had to apply to both the Department of Psychology and to the Graduate School – each of which had separate materials to fill out.

  Once a folder was complete, it was added to a pile that circulated to a subset of three members of the admissions committee. Five faculty members from various programs served on this departmental committee. Each faculty member decided whether the application folder should be held for further review or rejected immediately (what was locally known as the "Hold" vs. "Don't Hold" decision). A majority vote was decisive. The only exception to this process occurred when an applicant had any GRE score less than 500. All applicants with any GRE score < 500 were automatically rejected unless the chairperson of the committee decided that other considerations (e.g., the applicant had published two papers) made the folder worth circulating.

  The second step in the admissions procedure was that the complete committee convened and ranked the "Hold" folders for each doctoral area in the department (clinical, developmental, experimental, etc.). A list with the rank orderings per area was then given to the Chairperson of the department. The Chairperson reviewed these recommendations with faculty in the programs as well as with the recommendations made by the Student/Faculty Committee for Black Graduate Student Affairs. A major part of the decisions made by the Chairperson involved funding and the prospect of financial aid for the forthcoming year. The Chairperson made the final decisions regarding how many and which applicants were sent letters of acceptance. The letters were sent about March 15th, and applicants were expected to respond by April 15th.

  No interviews were used as part of the application procedure at Penn State in the mid-1970s. There were a number of reasons for avoiding interviews. Interviews were time consuming for faculty; the faculty were worried that applicants from good programs a considerable distance from Penn State (e.g., Stanford University) would not attend interviews; and faculty were skeptical about the validity of decisions made from interviews.

  One important point that I noted about the acceptance procedure outlined above is that it was not carefully adhered to. In 1974, only 64% of the completed folders had been reviewed by the time the full committee met to rank the "Hold" applications. Decisions about the remaining 36% of the applications were made haphazardly: usually one member of the admissions committee decided either to reject the application (which was almost always the decision) or to send it forward to the Chairperson of the department for possible admission.

  Because the number of accepted students was too small to permit the adequate development of a statistical model, I decided to focus my efforts on predicting the "Hold" vs. "Don't Hold" decision. The statistical procedure that I used was discriminant analysis. This multivariate statistical method forms a linear combination of the predictor variables (a discriminant vector) that attempts to separate the "Hold" from the "Don't Hold" applications as well as possible.

  I randomly split the 64% of the applications for which the “Hold” vs. “Don't Hold” decision was known into two groups. One group was used to create the linear equation to estimate this decision. The second group was a cross-validation group that was used to see how well the equation worked when estimating the actual decision.
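
  The sketch below illustrates this derivation/cross-validation procedure in modern statistical software. The data are simulated, and scikit-learn's linear discriminant analysis stands in for the program I used in 1974; the sketch shows the method, not a reproduction of the original analysis.

    # Sketch of the derivation/cross-validation split with a linear
    # discriminant function. All data are simulated.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 344  # roughly the 64% of 537 folders with a known decision

    # Five simulated predictors (GREs, letter ratings, A's, and so on).
    X = rng.normal(size=(n, 5))
    # Simulated Hold (1) vs. Don't Hold (0) decisions driven by the
    # predictors plus noise, mimicking an imperfectly reliable committee.
    signal = X @ np.array([0.6, 0.5, 0.4, 0.4, 0.3])
    y = (signal + rng.normal(size=n) > 0.7).astype(int)

    # Random split into a derivation sample and a cross-validation sample.
    X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5,
                                                  random_state=0)

    lda = LinearDiscriminantAnalysis().fit(X_dev, y_dev)
    # Hit rate: the proportion of committee decisions the model reproduces.
    print(f"Cross-validated hit rate: {lda.score(X_val, y_val):.1%}")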

  The resulting discriminant vector was composed of six variables: verbal GRE, quantitative GRE, sum of the ratings on the letters, number of A's in psychology courses, whether the applicant had an advanced degree, and the number of hard science courses that the applicant took as an undergraduate. These variables are listed by their order of importance in Table 2.

 

 

 


Table 2

 

Variables used in statistical model to estimate admissions decisions

 

GRE verbal                       .57
GRE quantitative                 .49
Sum of ratings on letters        .41
# of A's in psychology           .39
Advanced degree                  .38
# of hard science courses        .35

Multiple correlation of vector in predicting Hold vs. Don't Hold        =  .826

Hit rate in predicting Hold vs. Don't Hold (cross-validation sample)    =  77.2%

 

 

 


  A general conclusion from this discriminant vector model is that GRE scores are the best predictors of admission status. This finding has been noted elsewhere (Dawes, 1971). Even though psychology faculty members at a number of institutions often list complicated reasons for their decisions, most of these decisions are optimally estimated by knowing an applicant's GRE scores.

  The hit rate for the statistical model was 77%. That is, for slightly over 3 of 4 applications, the statistical model would have made exactly the same decision that the admissions committee actually did. For this sample of applicants, 32.7% of the applications were in the "Hold" status. Thus, the base rate for the "Hold" versus the "Don't Hold" decision was 67.3%. That is, if every applicant were put into the "Don't Hold" file, the decision would match the admissions committee decision two-thirds of the time. Compared to the base rate, the obtained hit rate did not seem very good; a 10% increase over chance is not that impressive. This comment about the hit rate needs to be tempered, of course. First of all, the base rate strategy is obviously untenable: if all applicants were automatically rejected, no graduate students would ever be admitted. Second, the upper limit on the obtainable hit rate of a statistical model is set by the reliability (inter-rater agreement) of the committee members. As noted earlier, there were two subcommittees which made up the overall committee. When kappa values were computed on the agreement within these subcommittees, the kappas were .427 and .320. As estimates of inter-rater reliability, these values are low. The diagnostic reliability of psychiatrists and clinical psychologists is generally interpreted as being low, yet even those clinicians typically have kappa values in the .50 to .60 range. The average hit rates for the three members of the two subcommittees were 71% and 68%. Also, for those applicants for whom the committee chairperson made a Hold/Don't Hold decision, the chairperson's hit rate was 73%.
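
  For readers unfamiliar with the statistic, the sketch below shows the arithmetic behind Cohen's kappa: observed agreement corrected for the agreement expected by chance from each rater's marginal rates. The agreement table in the sketch is hypothetical; the actual rater-by-rater tables from the committee are not reproduced here.

    # The arithmetic behind Cohen's kappa. The 2 x 2 agreement table
    # below is hypothetical, not actual committee data.
    import numpy as np

    def cohens_kappa(table):
        table = np.asarray(table, dtype=float)
        total = table.sum()
        observed = np.trace(table) / total              # raw agreement
        # Chance agreement from the row and column marginals.
        expected = (table.sum(axis=0) * table.sum(axis=1)).sum() / total ** 2
        return (observed - expected) / (1.0 - expected)

    # Rows: rater A's Hold / Don't Hold; columns: rater B's.
    hypothetical = [[40, 25],
                    [20, 115]]
    print(f"kappa = {cohens_kappa(hypothetical):.2f}")  # about .48 here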

  The obvious conclusion is that the inter-rater agreement of the admissions committee when making the Hold vs. Don't Hold decision was not very high. In fact, the discriminant vector model was a better predictor of what the committee would say than was any single member of the committee, including the chairperson.

  A final, important point to note about the discriminant vector model of the admissions committee is that the model did not misclassify any applicant who was sent an acceptance offer by the department chairperson. Thus, if the discriminant vector model had replaced the admissions committee, 60% of the applicants could have been rejected without rejecting any applicant who eventually would have been accepted. In terms of z scores on the discriminant vector equation, the mean z score for the "Don't Hold" applicants was z = -.71, the mean z score for the "Hold" applicants was z = +1.73, and the mean for the accepted applicants was z = +2.20.

  Overall, the findings in developing a statistical model of admissions decisions at Penn State were not surprising. Robyn Dawes, in a 1971 article in the American Psychologist, had performed a similar study at the University of Oregon. Dawes found that four variables entered into his linear model when predicting admissions committee decisions: GRE verbal, GRE quantitative, undergraduate GPA, and the selectivity of the undergraduate college. This model obtained a multiple correlation of .78 with the decisions of the University of Oregon admissions committee (the multiple correlation of the statistical model at Penn State was .83). Dawes estimated that 55% of the applicants at the University of Oregon could be eliminated by the model without rejecting any applicant who would eventually be accepted.

 

Data about Graduate Students at Penn State

 

  The next issue to be addressed is what happened to students who were accepted to Penn State's graduate programs and actually attended. I gathered data on all students who were admitted from 1965 to 1970. There were 150 students in this six-year window. The applications of these students were coded using the same variables described earlier. In addition, the folders of these students were analyzed for their current status (i.e., earned Ph.D., still a student, terminated without a Ph.D.) and their GPAs while at Penn State. I also asked faculty to rank order students whom they had in class or knew in some other way. The final variable was how many publications these students had; I estimated this variable by asking the mentors of the students, by searching Psychological Abstracts, and by using the Science Citation Index.

  From the application information, 26% of the students in this six-year cohort were women; the mean age at the time of admission was 23; 67% of the students came from the eastern United States; and 40% of this student group were in the clinical program. On the GREs, the mean scores were 646 (verbal), 639 (quantitative), and 610 (advanced). The mean undergraduate GPA was 3.36. The mean number of undergraduate psychology courses was 10. The mean GPA of these students while in graduate school was 3.71. Slightly over one in four (26.1%) of the students had at least one publication.

  Of the 150 individuals in this student cohort, 43 had completed their Ph.D. by the fall of 1974, 60 had left the program, and 47 were still considered active (18 of the 47 active students were taking courses on campus). If the active students were deleted from consideration, 42% of the students actually obtained their PhDs.

  The admissions variables that were most predictive of whether a student would drop out were age, quantitative GRE score, and the first digit of the zip code of the home address at the time of application. There were two other predictors, marital status and the presence of an advanced degree, but both of these were highly correlated with the age of the applicant. Regarding age, graduate students who applied to our program and who were 22 years old or less were distinctly less likely to complete the program: 73% of these younger students left the program without a Ph.D. In contrast, only 31% of the students who were 23 years old or older failed to obtain a Ph.D. Regarding zip code, students who came from states near Pennsylvania (i.e., zip codes starting with 0, 1, or 2) were less likely to finish, while students who came from further away to study at Penn State were more likely to complete the program. Finally, the relationship between quantitative GRE score and completing the program was curvilinear. Only one in three applicants whose quantitative GRE was less than 600, and only one in four whose quantitative GRE was above 700, actually completed the Ph.D. However, six of every ten students whose quantitative GRE was between 600 and 700 did finish their degree.

  When graduate school GPA was the variable to be predicted, none of the variables had a significant correlation (even undergraduate GPA had a correlation of less than +0.20 with graduate GPA).

  The only predictor of the number of publications by the 150 students in this cohort was sex. Slightly under 90% of the students in this group who had published an article by 1974 were male.

  The final criterion variable used in this study was the ranking of these 150 students by the faculty. Only two variables had significant relationships with this variable. The first of these was the students' undergraduate GPAs for their last two years in college. This variable had a correlation of r = +.463 with faculty ranking. The other variable was verbal GRE. This variable had a correlation of r = -.267 (more verbal students are disliked by the faculty??).

  Overall, there were four criterion variables in the 1965 to 1970 cohort that were examined to see how well they could be predicted from the admissions variables. Eight admissions variables proved to be related to at least one of these criterion variables: age, sex, zip code, marital status, presence of an advanced degree, verbal GRE, quantitative GRE, and undergraduate GPA for the last two years. However, none of these eight admissions variables predicted more than one criterion variable. In addition, the small sample size meant that it was not possible to replicate these findings. The reader should be skeptical that the relationships reported above, as interesting as they might be, will generalize to future groups of students or to other clinical programs.

  To examine the generalizability of these findings, I did a literature search on graduate student performance. I found one interesting study that was related to the data I had gathered at Penn State. This study was carried out by Clifford and Patricia Lunneborg in the psychology department at the University of Washington. They noted that 35% of their graduate students dropped out in the first four years. I found a 40% drop-out rate at Penn State. According to the Lunneborgs, Knox had found a 44% drop-out rate at the University of Georgia. Interestingly, the Lunneborgs found that the predictors of dropping out at Washington were age, presence of an advanced degree, and marital status. They did not look at zip code as a predictor (I don't understand how they missed such an obvious variable!).

  The Lunneborgs also examined which of the admissions variables predicted faculty ratings of graduate students at the end of their first year in the program. These predictors were undergraduate GPA for the last two years, whether the applicant was an undergraduate psychology major, the rating of the undergraduate school, presence of an advanced degree, and marital status. More importantly, the Lunneborgs suggested that the faculty ratings of first-year students were a useful measure because they correlated with the length of time students took to complete the Ph.D. (r = +.69) and with whether the student dropped out (r = +.48). The Lunneborgs suggested that these ratings could be used to eliminate graduate students who were "bad risks."

 

Questionnaire Responses by Applicants

 

  Having examined the admissions data submitted by applicants to the program, I decided to attempt to learn what eventually happened to these applicants. I designed a questionnaire and sent it to 170 of the individuals who completed applications during the spring of 1974. The purpose of the questionnaire was to elicit information on how the applicants perceived the admissions procedure and what suggestions they had for changing it.

  The response rate for the questionnaire was 42% (n = 71). Given that I was not a peer of the applicants, that I was not personally acquainted with any of them, and that Penn State had rejected the applications of almost all of them, this response rate was high. To me, it suggested that the graduate admissions process was an emotionally salient topic for these applicants and that they wanted to have input into this process.

  Table 3 contains the questions asked in this survey of the applicants and their responses.

 

 

 


Table 3

 

Questionnaire Responses by 1974 Applicants to Penn State

 

 

1.   Were you accepted by at least one graduate program to which you applied?

 

        Yes   =   44 (62%)

 

2.   How many graduate programs did you apply to during 1974?

 

        Median   =   9

 

3.   Of the four major sources of information that we had concerning you, which one do you believe most reflected your abilities?

  

    GRE scores       5
    Letters of reference     30
    Personal statement     14
    Transcript(s)       19

 

4.   Again considering these sources of information, which one do you think we weighted most heavily when considering your application?

 

    GRE scores       39
    Letters of reference     10
    Personal statement     5
    Transcript(s)       14

 

5.   If you had been asked to visit our campus at your own cost for a personal interview, how would you have responded?

 

    Attended       57
    Not attended       12

 

6.   What were the reasons for your application to our graduate program (Check as many as applicable)?

 

    APA Guide to Graduate Study   42
    Brochures and catalogues we sent   35
    Comments by your faculty   29
    Reputation of Penn State     38
    Interest in specific faculty here   20
    Location       30
    Other         5

 

7.   Of the various graduate programs in psychology to which you applied, how would you rate the program at Penn State?

 

    Most preferred       26
    Moderately preferred     43
    Least preferred       2

 

8.   Assume that you had been rejected by all other graduate programs. If we had accepted you to our program, but had not been able to give you financial assistance (i.e., fellowships or assistantships), would you have decided to attend here?

 

    Yes         51
    No         8
    Undecided       12

 

9.   Have you applied for jobs in which your undergraduate psychology training will be relevant?

 

    Yes         29
    No         33

 

 

 


  Prior to analyzing the data from this questionnaire, I expected that only about 20% of the applicants to our program would be accepted by another program and that this 20% would receive multiple acceptances. In short, I hypothesized that there would be two groups of applicants: the "Haves" and the "Have Nots". The "Haves" would have good test scores, come from strong colleges and universities, have strong research backgrounds, and be accepted by almost every program to which they applied. In contrast, I expected that most applicants would fit into the "Have Nots": individuals with less than stellar applications who would be uniformly rejected by other programs.

  Of the applicants who responded to the questionnaire, 62% (n = 44) were accepted by at least one graduate program. Only one of these 44 individuals was accepted by Penn State, and that individual declined our offer in order to attend Stanford University because we did not hire a professor with whom this student had worked as an undergraduate. Of the 44 individuals, 24 were accepted by more than one school. In fact, 8 of the 44 reported that they were accepted by over 50% of the programs to which they applied.

  This result contradicted my hypothesis of there being "Haves" and "Have Nots" among the applicants. There was a small set of applicants who fit my sense of the "Haves" (8 of 71 respondents) and, interestingly, none of these 8 were applicants that we accepted at Penn State. On the other hand, there did seem to be a set of "Have Nots": slightly over one-third of the applicants (27 of the 71 questionnaire respondents) failed to gain admission to any graduate program. There was an indication that acceptance to a graduate program was primarily determined by the number of programs to which applicants sent materials. The median number of applications by the respondents to the questionnaire was 9. For individuals applying to 8 or fewer psychology graduate programs, 70% failed to gain acceptance anywhere.

  In their prose comments, the respondents offered a number of criticisms of the admissions process. They did not like the tedious application forms that varied from one graduate program to another; the lack of feedback about why they were rejected; the cost of applying in terms of both money and time; and the over-emphasis on test scores in admissions decisions to the exclusion of experience, motivation, and creative ability. The one theme that pervaded their prose comments was their sense of how depersonalizing the admissions process was.

 

Some sources of my frustration are: 1. The inability to establish the credentials I believe are important - my sensitivity and perceptive abilities. 2. The competition with 1,000 other students for 10 openings – overwhelmed by the odds created by over population, shrinking funds for education – facilities, faculty, and student aide, and the increase in people seeking credentials, required by a market overglutted with a supply of college grads who escalate requirements arbitrarily as a screening device. 3. The dehumanizing and expensive application process - form letters, lost transcripts, GRE scores, etc. 4. The feeling that academe is just another business – I feel like I'm a steak in a meat market – cursorily examined and tossed aside.

    

    – Applicant –

 

 

Once an application is made, there is generally the feeling that one has dropped the application into a bottomless pit. The silence is deafening and has undoubtedly driven more than a few students insane.

 

    – Applicant –

 

 

Final Comments

 

  My goal in this study was to create a data set that I could use when teaching a graduate course about applied decision making. That goal was achieved. I did formulate a data set that allowed me to create a statistical model of how faculty members on an admissions committee made decisions about applicants to the program. The statistical model, as expected, was relatively simple, the model was linear, and the model performed about as well as could be expected with these data.

  In performing this study, however, there were findings that surprised me. The first surprising finding was how unreliable the admissions process was. The inter-rater agreement of faculty members when making these decisions was strikingly low. Moreover, when I watched this process, I realized how many decisions were made in a haphazard and non-systematic fashion. Further evidence of the unreliability of the process was the fact that an applicant's chances of gaining acceptance to a graduate program increased as a function of the number of applications. In effect, applying to graduate school is like betting on a specific number on a roulette wheel. If an individual only spins the wheel once, that person's odds are not good. But multiple applications to graduate school, like repeated spins of the roulette wheel, markedly increase the odds of at least one success. If success in applying to graduate school were a reliable and consistent process, the odds of being admitted should be solely a function of credentials and should not be sensitive to the number of applications.
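
  The roulette analogy can be made concrete. If each application were an independent gamble with the same probability of success p, the chance of at least one acceptance after n applications would be 1 - (1 - p)^n. The sketch below uses an assumed value of p = .10 purely for illustration.

    # Chance of at least one acceptance under independent, identical
    # gambles. The value of p is an assumption, not an estimate.
    p = 0.10
    for n in (1, 5, 9, 15):
        print(f"{n:2d} applications: P(at least one offer) = {1 - (1 - p) ** n:.0%}")

  With these assumed numbers, nine applications (the respondents' median) yield roughly a 61% chance of at least one offer, close to the 62% of respondents who reported an acceptance, although the assumption of independent, identical gambles is obviously an oversimplification.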

  The second finding that surprised me was the strikingly high attrition rate in Penn State's program (about 40%). Also surprising was learning that age at the time of application was a good predictor of dropping out of graduate school and that age had proven to be a good predictor in other studies (i.e., the research by the Lunneborgs at the University of Washington). Why is age of the applicant relevant? My guess is that age is related to motivation for graduate study. Applicants who have completed their undergraduate degrees and who have then worked or had some other type of life experience might be much more certain about the type of career they want. These older individuals might be relatively likely to have the determination to last through the difficult hurdles of graduate school. Students who apply to graduate school at the end of their undergraduate education are relatively likely to be unsure of what they want to do in life and may simply be applying to graduate school because it is an extension of "school" as a process they are acquainted with. (I can't find a job I like, so I guess I'll just apply to graduate school and see what happens.) Motivation also potentially explains why zip code was a predictor of attrition. Students who come a long way to attend Penn State must really want to be in Penn State's program; hence their motivation to finish is high. Among students who live nearby, there could be individuals whose motivation was to be near their girlfriends or who wanted to spend holidays and weekends at home having fun. Their motivation to survive the rigors of graduate school was less clear.

  In this regard, the comment by one faculty member of the admissions committee is relevant. This individual made a sarcastic suggestion about how to improve the admissions process.

 

I/O psychologists tell us that the best predictor of job performance is a behavioral sample from a task most similar to the demands of the job. I suggest that we dispense with reading application materials. Any person who wants to come to graduate school should be invited to Penn State for a weekend. On Saturday morning, we should take all of these applicants to the cow barns and let them shovel out the barns. The 20 individuals that we admit to next year's class are the last 20 people that are still shoveling at the end of the weekend.

 

 

References

 

Dawes, R.M. (1971). A case study of graduate admissions: Application of three principles of human decision making. American Psychologist, 26, 180-188.

 

Lunneborg, C.E. & Lunneborg, P.W. (1972). Doctoral study attrition in psychology. Unpublished report. Bureau of Testing Project 192, University of Washington, Seattle WA.

 

 
