In the world of law school admissions, the U.S. News and World Report’s (USNWR) annual law school rankings are always looming, in one way or another, in the background.
For better or worse, the USNWR rankings are the most widely cited, and many treat them as the gospel truth when it comes to law school rankings. They also draw heavy criticism along a number of lines. Many critics argue that the rankings place insufficient emphasis on employment outcomes and too much emphasis on inputs like library resources and expenditures per student, which favor schools with a lot to spend. Another common complaint is that the rankings are self-reinforcing.
The USNWR methodology (which can be found here for 2018: https://www.usnews.com/education/best-graduate-schools/articles/law-schools-methodology) places a lot of emphasis on median LSAT scores and GPAs. So if applicants largely look to the USNWR rankings to evaluate school quality, the top applicants will choose (and graduate from) the schools at the top of those rankings, lending their stellar LSAT scores and GPAs to those schools’ statistics and further solidifying those schools’ holds on the top spots. Other sources, such as Above the Law, have attempted to come up with more meaningful rankings but, for the time being, the USNWR rankings are king, and their release each year is met with much anticipation.
Another aspect of the USNWR methodology that generates a lot of chatter on law school message boards is the portion of the rankings that deals with a school’s acceptance rate. In a nutshell, a school can marginally increase its rank by accepting a lower percentage of the students who apply. The weight given to acceptance rate in the methodology is pretty low, but that doesn’t stop people from speculating that this factor may cause a school to engage in what is called “yield protection.” The theory goes like this: some schools may actually disadvantage students with high LSAT and GPA numbers because they believe those students will end up attending other, higher-ranked schools. Rather than accept these students outright, these allegedly yield-protecting schools instead waitlist them so they can eventually reject them once it’s clear they won’t attend, thereby decreasing the school’s acceptance rate (and increasing, if only slightly, its USNWR score).
This, obviously, is a controversial subject, since the implication is that some schools are more worried about their USNWR rankings than about admitting the most highly qualified students; instead, they target for acceptance the students they believe will actually attend, in order to game the rankings. Clearly, there are plenty of reasons unrelated to LSAT and GPA to deny acceptance to a student, regardless of his or her stellar numbers. That student may have an otherwise unimpressive resume, past misconduct issues, a demonstrated lack of interest in attending, or any number of other non-numbers factors.
That said, I think it might be interesting to employ numbers (both LSAT and GPA, and in the wider statistical sense) to see what statistical analysis tells us about whether yield protection might exist. So that’s just what we will do in this blog. As always, I am using applicant-reported data (and not data from the schools themselves).
In order to try to shed some light on this yield-protection question, we first have to identify the applicants we think would be “yield-protection candidates.” In other words, we have to identify the applicants schools might pass on, believing they will opt for a higher-ranked school. This, of course, is not an exact science, but I’ve tried to stay true to the theory underlying this analysis. In doing so, I developed two categories of yield-protect candidates, which I call “Harvard Candidates” and “Georgetown Candidates.” Harvard Candidates are those who have both an LSAT score of 173 or higher and a GPA of 3.87 or higher (which are the average medians over the past five years for Harvard). Georgetown Candidates are those who have both numbers at or above Georgetown’s average medians over the same period (168+ LSAT and 3.73+ GPA). I identified two groups in order to analyze the highest number of schools possible, and frankly, once you get below USC on the rankings, there simply aren’t enough Harvard Candidate applicants to support drawing any conclusions at all.
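As a concrete illustration, here is a minimal Python sketch of this two-category classification rule. The function name and the way applicants are represented are my own assumptions; the thresholds are the average medians quoted above.

```python
# Sketch of the two yield-protect candidate categories described above.
# Thresholds are the five-year average medians quoted in the text.

def classify(lsat: int, gpa: float) -> str:
    """Label an applicant per the two yield-protect categories."""
    if lsat >= 173 and gpa >= 3.87:
        return "Harvard Candidate"   # note: also satisfies the Georgetown cutoffs
    if lsat >= 168 and gpa >= 3.73:
        return "Georgetown Candidate"
    return "Neither"

print(classify(174, 3.90))  # Harvard Candidate
print(classify(170, 3.80))  # Georgetown Candidate
print(classify(165, 3.90))  # Neither (LSAT below both cutoffs)
```

Note that both thresholds must be met; a 180 LSAT paired with a 3.5 GPA falls into neither category under this definition.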
With that established, we will approach the yield-protection subject in two ways. First, we will look at the admissions outcomes for only Harvard Candidates for schools for USC and above, as well as the outcomes for Georgetown Candidates for schools from the University of Texas down through William & Mary (I’ve left out any schools for which we didn’t have at least 10 data points). Next, we will use regression analysis to identify whether, holding other factors equal, Harvard or Georgetown Candidates are disadvantaged (or, if you prefer, “yield-protection victims”).
First, the raw admissions outcomes data for Harvard Candidates at schools from Yale through USC:
Harvard Candidate Admissions Outcomes
| School | Reject | Waitlist | Accept | # of Applicants |
|---|---|---|---|---|
To begin with, nobody believes Yale yield protects. Yale has sat atop the USNWR rankings since they were first published, and nobody believes Yale would deny admission to a candidate for fear of losing that candidate to another school. The same goes for Harvard and Stanford. So there’s nothing particularly surprising about their results.
As for the other schools, the most interesting result here is that the University of Pennsylvania (Penn), the University of Virginia (UVA), and the University of Michigan (Michigan) all seem to waitlist an abnormally high percentage of Harvard Candidates, and Georgetown is not far behind. Perhaps not coincidentally, Penn, UVA, and Michigan are the schools most often accused of yield-protection, and you hear Georgetown tossed out there, too. It’s not hard to see why, based on this table. I think it would be unfair to draw any hard conclusions about these numbers since it’s impossible for us to know what goes on in the minds of admissions committees, but they are interesting, nonetheless.
Now, let’s have a look at how Georgetown Candidates fare:
Georgetown Candidate Admissions Outcomes
| School | Reject | Waitlist | Accept | # of Applicants |
|---|---|---|---|---|
There’s quite a bit less to be interested in here, I think, although the University of North Carolina (UNC), Notre Dame, and UCLA do seem to waitlist an outsized percentage of Georgetown Candidates. Georgetown does as well, but with Georgetown, there’s a much higher chance that those Georgetown Candidates also have Harvard Candidate numbers, and this is less true of the schools further down the list (there were fewer than 10 Harvard Candidate applicants to UNC and Notre Dame, for example).
With the same “we can’t draw any hard conclusions from these tables” caveat as before, it’s at least understandable, from a 10,000-foot perspective, why Penn, UVA, Michigan, and Georgetown may have reputations for yield protection.
Now, let’s take a look at the results of the regression analysis for both the Harvard Candidate and Georgetown Candidate groups. This analysis identifies the effect of simply being a Harvard Candidate or Georgetown Candidate on admissions outcomes, while controlling for LSAT, GPA, gender, URM status, binding early decision applications, the month in which the candidate applied, and whether or not the candidate is a nontraditional applicant. In other words: are there schools that seem to disadvantage applicants simply because their numbers are so high (which would, of course, be consistent with yield-protection practices)? In doing this analysis, I ignored rejections and focused only on acceptances versus waitlists, since the allegation is not that yield-protecting schools outright reject these candidates, but that they waitlist them rather than accept them outright.
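To make the setup concrete, here is a minimal sketch of this kind of waitlist-versus-accept logistic regression using statsmodels. The data frame below is synthetic stand-in data with only a few of the controls (the real analysis uses applicant-reported records and the full set of covariates listed above), so the fitted numbers are meaningless; the point is the model structure.

```python
# A minimal logistic-regression sketch of the waitlist-vs-accept analysis.
# Synthetic, illustrative data only; the real analysis uses applicant-
# reported records and more controls.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "lsat": rng.integers(160, 181, n),
    "gpa": rng.uniform(3.3, 4.0, n).round(2),
    "urm": rng.integers(0, 2, n),
    "early_decision": rng.integers(0, 2, n),
})
df["harvard_candidate"] = ((df.lsat >= 173) & (df.gpa >= 3.87)).astype(int)

# Outcome: 1 = waitlisted, 0 = accepted (rejections are excluded entirely).
# Here it's pure noise, standing in for real admissions outcomes.
df["waitlisted"] = (rng.normal(0, 1, n) > 0).astype(int)

model = smf.logit(
    "waitlisted ~ lsat + gpa + urm + early_decision + harvard_candidate",
    data=df,
).fit(disp=0)

# exp(coef) is an odds ratio; (odds_ratio - 1) * 100 is the "percent more
# likely to be waitlisted" style of figure quoted in the text.
odds_ratio = np.exp(model.params["harvard_candidate"])
print(f"Odds ratio for Harvard Candidates: {odds_ratio:.2f}")
```

The key design choice, mirroring the text, is that the candidate-category dummy sits alongside raw LSAT and GPA in the model, so its coefficient captures any penalty for being in the category over and above the (normally helpful) effect of the numbers themselves.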
First, a list of the schools that do demonstrate a statistical disadvantaging of Harvard Candidate applicants:
| Percent increase in chance of being waitlisted for Harvard Candidates |
|---|
One thing I’d like to say before digging in here is that all of the other factors I controlled for exhibited exactly the tendencies you would expect in my models. For example, LSAT and GPA were both negatively correlated with being waitlisted (in other words, the higher your numbers, the better your chances of being admitted). Even so, Harvard Candidates appeared to be disadvantaged at UVA, Michigan, Georgetown, and Vanderbilt. LSAT and GPA help you get admitted, but only up to a certain “cap”; go beyond that, and your chances of ending up on the waitlist increase instead. In other words, the numbers show that these schools like high GPAs and LSATs, but not too high.
At UVA, the effect of having Harvard Candidate numbers is a 52% greater likelihood of being waitlisted. At Michigan the figure is 91%, at Georgetown 267%, and at Vanderbilt 952%. Notably absent from this list is Penn, which, despite the results in the first table, demonstrated no statistically significant disadvantage for Harvard Candidate applicants.
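For readers wondering how figures like “52% more likely” map back onto a logit model’s output: assuming these percentages are increases in the odds of being waitlisted (the standard reading of exponentiated logit coefficients), the odds ratio is 1 + percent/100, and the underlying coefficient is its natural log. A quick sketch:

```python
# Mapping the quoted percentages back to odds ratios and logit coefficients,
# assuming they are percent increases in the odds of being waitlisted.
import math

for school, pct in [("UVA", 52), ("Michigan", 91),
                    ("Georgetown", 267), ("Vanderbilt", 952)]:
    odds_ratio = 1 + pct / 100       # e.g. +52% -> odds ratio 1.52
    beta = math.log(odds_ratio)      # the raw coefficient in the logit model
    print(f"{school}: odds ratio {odds_ratio:.2f}, coefficient {beta:.2f}")
```

So Vanderbilt’s 952% corresponds to an odds ratio above 10: by this estimate, a Harvard Candidate’s odds of being waitlisted there are more than ten times those of an otherwise-identical applicant just below the cutoffs.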
Now for a look at schools that demonstrate a statistically significant disadvantage for Georgetown Candidates:
| Percent increase in chance of being waitlisted for Georgetown Candidates |
|---|
The only two schools on this list are UNC (which is no surprise, given the earlier table), at which Georgetown Candidates are 1410% more likely to be waitlisted, and Boston University (BU) at which they are 411% more likely to be waitlisted. BU is a bit of a surprise, since the raw outcomes data in the earlier table seemed to be roughly in line with the average but, of course, those tables did not account for factors such as URM status, gender, etc.
As a final note, I’d like to emphasize again that this analysis does not (and cannot) control for other factors that schools routinely stress are important, such as the strength of your personal statement, resume, and recommendation letters. Schools have defended against accusations of yield protection by stressing that deficiencies in these “soft factors” can outweigh stellar LSAT and GPA numbers. I don’t present this data to conclusively prove that any particular school does or does not engage in yield protection (in fact, I was a Harvard Candidate and was offered very generous scholarships at both UVA and Michigan), but to add a little data-driven analysis to whatever debate exists.
Questions or comments? Please post them below!