In the world of law school admissions, the U.S. News and World Report’s (USNWR) annual law school rankings are always looming, in one way or another, in the background.
For better or worse, the USNWR rankings are the most widely cited; for many, they are the gospel truth when it comes to law school rankings. Yet they also draw heavy criticism along a number of lines. Many argue that they place insufficient emphasis on employment outcomes and too much on factors like library resources and expenditures per student, which favor schools with a lot to spend. Another common complaint is that the rankings are self-reinforcing.
The USNWR Methodology
The USNWR methodology places a lot of emphasis on median LSAT scores and GPAs. So if applicants are largely looking to the USNWR rankings to evaluate school quality, the top applicants will choose (and graduate from) the schools at the tops of those rankings, lending their stellar LSAT scores and GPAs to those schools’ statistics and further solidifying those schools’ hold on the top spots.
Other sources, such as Above the Law, have attempted to come up with more meaningful rankings. But, for the time being, it seems that the USNWR rankings are king. And their release each year is met with much anticipation.
Another aspect of the USNWR methodology that generates a lot of chatter on law school message boards is the portion of a school’s score tied to its acceptance rate. In a nutshell, a school can marginally increase its rank by accepting a lower percentage of the students who apply. The weight given to acceptance rate in the methodology is quite small, but that doesn’t stop people from speculating that this factor leads some schools to engage in “yield protection.”
The theory goes like this: some schools may actually disadvantage applicants with high LSAT and GPA numbers because they believe those applicants will end up attending other, higher-ranked schools. Rather than accept these students outright, these allegedly yield-protecting schools waitlist them, then reject them once it’s clear they won’t attend, thereby decreasing the school’s acceptance rate (and increasing, if only slightly, its USNWR score).
This, obviously, is a controversial subject. The implication is that some schools are more worried about their USNWR rankings than they are about admitting the most highly qualified students; instead, they target for acceptance the students they think will actually attend, in order to game the rankings. Clearly, there are plenty of reasons unrelated to LSAT and GPA to deny a student admission, regardless of his or her stellar numbers: an otherwise unimpressive resume, potential past misconduct issues, a clear lack of interest in attending, or any number of other factors.
That said, it might be interesting to employ numbers – both LSAT and GPA figures and statistics more broadly – to see what statistical analysis tells us about whether yield protection might exist. So that’s just what we will do in this blog. As always, we are using applicant-reported data (and not data from the schools themselves).
In order to try to shed some light on this yield-protection question, we first have to identify applicants that we think would be “yield-protection candidates.” In other words, we have to identify those applicants who schools might take a pass on, believing they will opt for a higher-ranked school. This, of course, is not an exact science but an attempt to stay true to the theory underlying this analysis. In doing so, we developed two categories of yield-protect candidates, “Harvard Candidates” and “Georgetown Candidates.”
Harvard Candidates are those who have both an LSAT score of 173 or higher and GPA of 3.87 or higher (which are the average medians over the past five years for Harvard). Georgetown Candidates are those who have both numbers at or above Georgetown’s average medians over the same period (168+ LSAT and 3.73+ GPA). We identified two groups in order to analyze the highest number of schools possible. Once you get below USC on the rankings, there simply aren’t enough Harvard Candidate applicants to support drawing any conclusions.
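For concreteness, the two cutoffs above translate into a pair of simple predicates. Here is a minimal sketch (the function names are ours, not part of the original analysis):

```python
def is_harvard_candidate(lsat: int, gpa: float) -> bool:
    """Both numbers at or above Harvard's five-year average medians."""
    return lsat >= 173 and gpa >= 3.87


def is_georgetown_candidate(lsat: int, gpa: float) -> bool:
    """Both numbers at or above Georgetown's five-year average medians."""
    return lsat >= 168 and gpa >= 3.73
```

Note that the Harvard thresholds sit above the Georgetown ones, so every Harvard Candidate is automatically a Georgetown Candidate as well – something to keep in mind when reading the Georgetown Candidate results.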
With that established, we will approach the yield-protection subject in two ways. First, let’s look at the admissions outcomes for Harvard Candidates at schools ranked USC and above, as well as the outcomes for Georgetown Candidates at schools from the University of Texas down through William & Mary. Left out are any schools for which we didn’t have at least 10 data points. Next, we use regression analysis to identify whether, holding other factors equal, Harvard or Georgetown Candidates are disadvantaged (or, if you prefer, are “yield-protection victims”).
First, the raw admissions outcomes data for Harvard Candidates at schools from Yale through USC:
Harvard Candidate Admissions Outcomes
| School | Reject | Waitlist | Accept | # of Applicants |
|---|---|---|---|---|
To begin with, nobody believes Yale yield protects. Yale has sat atop the USNWR rankings for as long as they’ve been published, and it isn’t going to deny a candidate admission out of fear of losing him or her to another school. The same goes for Harvard and Stanford, so there’s nothing particularly surprising about their results.
As for the other schools, the most interesting result involves the University of Pennsylvania (Penn), the University of Virginia (UVA), and the University of Michigan (Michigan): all three seem to waitlist an abnormally high percentage of Harvard Candidates, and Georgetown is not far behind. Perhaps not coincidentally, Penn, UVA, and Michigan are the schools most often accused of yield protection, and you hear Georgetown tossed out there, too. Based on this table, it’s not hard to see why. It would be unfair to draw any hard conclusions from these numbers, since it’s impossible for us to know what goes on in the minds of admissions committees. But they are interesting, nonetheless.
Now, let’s have a look at how Georgetown Candidates fare:
Georgetown Candidate Admissions Outcomes
| School | Reject | Waitlist | Accept | # of Applicants |
|---|---|---|---|---|
There’s quite a bit less of interest here, although UNC, Notre Dame, and UCLA do seem to waitlist an outsized percentage of Georgetown Candidates. Georgetown does as well, but with Georgetown there’s a much higher likelihood that those Georgetown Candidates also have Harvard Candidate numbers. That’s less true of the schools further down the list – there were fewer than 10 Harvard Candidate applicants to UNC and Notre Dame, for example.
With the same “we can’t draw any hard conclusions from these tables” caveat as before, it’s at least understandable – from a 10,000-foot perspective – why Penn, UVA, Michigan, and Georgetown may have reputations for yield protection.
Now, let’s take a look at the results of the regression analysis for both the Harvard Candidate and Georgetown Candidate groups. This analysis identifies the effect of simply being a Harvard Candidate or Georgetown Candidate on admissions outcomes, while controlling for LSAT, GPA, gender, URM status, binding early decision applications, the month in which the candidate applied, and whether or not the candidate is a nontraditional applicant.
In other words, are there schools that seem to disadvantage applicants simply because their numbers are so high? That would, of course, be consistent with yield-protection practices. In doing this analysis, we ignored rejections and focused only on acceptances versus waitlists, since the allegation is not that these candidates get outright rejected by yield-protecting schools, but that they get waitlisted rather than outright accepted.
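A waitlist-versus-accept setup like the one described above can be sketched roughly as follows. This is a toy illustration on made-up data, not the blog’s actual model or dataset (the real analysis uses applicant-reported records and additional controls such as URM status, gender, and application month):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data (ours, purely for illustration).
# Features: [LSAT, GPA, candidate_flag]; outcome: 1 = waitlisted, 0 = accepted.
rows, waitlisted = [], []
for lsat in range(165, 173):                 # non-candidates: waitlisted 1 in 3
    for outcome in (0, 0, 1):
        rows.append([lsat, 3.65 + 0.01 * (lsat - 165), 0])
        waitlisted.append(outcome)
for lsat in range(173, 181):                 # candidates: waitlisted 2 in 3
    for outcome in (1, 1, 0):
        rows.append([lsat, 3.88 + 0.01 * (lsat - 173), 1])
        waitlisted.append(outcome)

X = np.array(rows, dtype=float)
y = np.array(waitlisted)

# Logit of waitlist-vs-accept, with the candidate flag alongside the raw numbers.
model = LogisticRegression().fit(X, y)

# Compare predicted waitlist probabilities for two hypothetical profiles.
cand = model.predict_proba([[175, 3.90, 1]])[0, 1]       # candidate profile
non_cand = model.predict_proba([[168, 3.68, 0]])[0, 1]   # non-candidate profile
print(cand > non_cand)
```

In this contrived dataset, the candidate flag is associated with a higher waitlist rate by construction, so the fitted model assigns the candidate profile a higher predicted probability of being waitlisted.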
Schools that do demonstrate a statistical disadvantaging of Harvard Candidate applicants:
| School | Percent increase in chance of being waitlisted for Harvard Candidates |
|---|---|
| UVA | 52% |
| Michigan | 91% |
| Georgetown | 267% |
| Vanderbilt | 952% |
Note that all of the other factors we controlled for exhibited exactly the tendencies you would expect in the models. For example, LSAT and GPA were both negatively correlated with being waitlisted: the higher your GPA, the better your chances of being admitted. Even so, Harvard Candidates seemed to be disadvantaged at UVA, Michigan, Georgetown, and Vanderbilt. So LSAT and GPA help you get admitted, but only so long as you stay under a certain “cap”; go beyond it and your chances of ending up on the waitlist increase instead. In other words, the numbers suggest that these schools like high GPAs and LSAT scores – but not too high.
At UVA, the effect of having Harvard Candidate numbers is to be 52% more likely to be waitlisted. At Michigan it’s 91%, at Georgetown it’s 267% and at Vanderbilt, it is 952%. Notably absent from this list is Penn, which despite the information from the first table, demonstrated no statistically significant disadvantage for Harvard Candidate applicants.
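As a side note on interpreting those percentages: if the underlying model is a standard logit, a coefficient β on the candidate flag maps to a percent increase in the *odds* of being waitlisted via (e^β − 1) × 100. That mapping is our assumption about how such figures are typically computed, not something stated in the original analysis. A quick sketch:

```python
import math

def pct_increase_in_odds(beta: float) -> float:
    """Percent increase in odds implied by a logit coefficient beta."""
    return (math.exp(beta) - 1) * 100

# Working backwards from the reported percentages, the implied coefficients:
for school, pct in [("UVA", 52), ("Michigan", 91),
                    ("Georgetown", 267), ("Vanderbilt", 952)]:
    beta = math.log(1 + pct / 100)
    print(f"{school}: beta ~ {beta:.2f}")
```

For example, UVA’s reported 52% corresponds to a coefficient of about ln(1.52) ≈ 0.42, while Vanderbilt’s 952% implies a much larger coefficient of roughly ln(10.52) ≈ 2.35.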
Schools that demonstrate a statistically significant disadvantage for Georgetown Candidates:
| School | Percent increase in chance of being waitlisted for Georgetown Candidates |
|---|---|
| UNC | 1410% |
| BU | 411% |
The only two schools on this list are UNC (no surprise, given the earlier table), where Georgetown Candidates are 1410% more likely to be waitlisted, and Boston University (BU), where they are 411% more likely. BU is a bit of a surprise, since its raw outcomes data in the earlier table seemed roughly in line with the average. But, of course, those tables did not account for factors such as URM status, gender, and so on.
As a final note, this analysis does not (and cannot) control for other factors that schools routinely stress are important, such as the strength of your personal statement, resume, and recommendation letters. Schools have defended against accusations of yield protection by stressing that deficiencies in these kinds of “soft factors” can outweigh stellar LSAT and GPA numbers. We don’t present this data to conclusively prove that any school does or doesn’t engage in yield protection; we’re just aiming to add a little data-driven analysis to whatever debate exists.
Questions or comments? Please post them below!