ScoreItNow! is an online writing practice service from ETS, maker of the GRE. For US$20, you can write two responses to real Analytical Writing prompts and have them scored by e-rater, the automated essay scoring software used to check GRE essay scores assigned by human raters.
I signed up and wrote an Argument essay. The score e-rater returned fell within the range I expected—5 to 6—and the feedback I got covered grammar, mechanics, usage, and style, as well as development and organization. My essay’s content wasn’t evaluated, but sample essays were presented that gave me an idea of what constitutes a top-scoring analysis.
I picked an Argument topic that you can find in PowerPrep II as well as the official pool of GRE Argument topics. To pull up the prompt, search the pool’s webpage for “Super Screen” (using Ctrl-F or ⌘-F). Here’s what I wrote.
The advertising director recommends that Super Screen spend more of its budget on advertising next year. The director argues that the company released good movies this past year, but poor public awareness of the quality of its films depressed attendance. However, questionable evidence and unwarranted inferences make the director’s case unpersuasive. Various alternate causes for low attendance escape the director’s attention, and the director gives little reason to believe that more people should have attended Super Screen movies during the past year.
According to the memo, annual attendance at Super Screen movies hit an all-time low this past year. The director attributes this decline to a lack of advertising, contending that a dip in film quality, the only other explanation the director considers, was not the cause. But the director’s analysis is inadequate. Other plausible causes remain unexamined. For example, how many movies did the company release and in how many theaters? Releasing fewer movies or showing the same number of films in fewer theaters could explain having fewer attendees than in previous years. Also, what was the total number of moviegoers for the year? If movie attendance in general plummeted (due, for instance, to a recession), then attendance at the company’s movies could have sunk simply because the total number of moviegoers did.
The memo also states that, in the past year, the percentage of positive reviews for specific Super Screen movies rose. The director infers that the public did not know about these reviews, given that the number of attendees fell. This is a specious inference. Assume, as the director seemingly does, that the more positive reviews a movie receives, the more people will go to see it, provided the reviews are advertised sufficiently. Presumably, the percentage the memo cites is, for a given year, equal to the total number of favorable reviews for individual Super Screen movies divided by the total number of positive movie reviews across all films. What were these totals for the past year compared to earlier years? Suppose, for instance, that total positive movie reviews dropped from 1,000 to 500, while the corresponding Super Screen sum fell from 100 to 60. The percentage of positive reviews would have risen from 10% to 12%. Meanwhile, the 40% drop in favorable critiques could have reduced attendance, even if all 60 positive write-ups were well-known to prospective moviegoers.
On the other hand, imagine that both total and percentage positive reviews for specific Super Screen movies increased this past year. Could annual attendance have dropped to its lowest for reasons other than insufficient advertising? Yes, one of the alternate causes already named, such as a sharp decline in total moviegoers, could explain the drop. Another explanation could be that the reviews were well-advertised, but potential viewers did not find them compelling. What did the reviews say? Who were the reviewers? If many of the write-ups offered only generic plaudits, like “fun movie,” or weakly positive ratings, such as “3 out of 5 stars,” these middling reviews probably would not have been strong attendance drivers. Even if the reviews were often excellent, critiques from top movie critics may have been absent or even unfavorable, potentially depressing attendance.
Attracting and promoting enticing write-ups is important for a film company’s success. Whether Super Screen garnered more this past year than in prior years is unclear. Uncertainty likewise surrounds how well the company promoted whatever good reviews its films did receive. Indeed, contrary to what the director argues, too little advertising may not have caused record-low attendance at the company’s films this past year. Consequently, the director’s recommendation that the company should boost next year’s advertising budget remains in doubt.
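Before moving on to the score: the review arithmetic in my fourth paragraph is easy to sanity-check. This quick Python snippet (using only the hypothetical figures from the essay, nothing official) confirms that Super Screen's share of positive reviews can rise in percentage terms while falling 40% in absolute terms:

```python
# Hypothetical figures from the essay: total positive reviews across all
# films drop from 1,000 to 500; Super Screen's count drops from 100 to 60.
before_total, before_super = 1000, 100
after_total, after_super = 500, 60

pct_before = 100 * before_super / before_total               # 10.0 (%)
pct_after = 100 * after_super / after_total                  # 12.0 (%)
drop_in_favorable = 100 * (before_super - after_super) / before_super  # 40.0 (%)

print(pct_before, pct_after, drop_in_favorable)
```

So the percentage of positive reviews rises from 10% to 12% even as the number of favorable write-ups falls by 40%, exactly as the essay claims.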
E-rater gave my essay a 5. The highest score you can get is 6, and the lowest is 0. According to the official scoring guide, a score of 5 is awarded to a “Strong” essay that “presents a generally thoughtful, well-developed examination of the argument and conveys meaning clearly.”
I feel pretty good about a 5. Still, why didn’t I get a 6? What did I need to do to write an “Outstanding” essay that “presents a cogent, well-articulated examination of the argument and conveys meaning skillfully”?
The words “cogent” and “thoughtful” refer to an essay’s content. Since e-rater can’t grasp the meaning of the text being scored, it doesn’t actually look at content. Instead, e-rater measures writing quality. So maybe the writing feedback the software supplied will reveal why I got a 5 rather than a 6.
The Writing Feedback
Along with my score, e-rater reported some basic writing measures, such as word count, and presented an analysis of my essay’s grammar, usage, mechanics, style, and organization and development. Scored sample essays, written by real test takers, were provided for comparison.
| Counts & Stats | Word Count | Sentence Count | Words per Sentence | Unique Words |
|---|---|---|---|---|
| My 5 Essay | 609 | 34 | 18.1 | 190 |
| Sample 6 Essay | 550 | 22 | 25.0 | 209 |
In the Counts & Stats table, word and sentence counts correspond to development, while average sentence length and total unique words measure sentence and vocabulary variety, respectively. Based on these metrics, a less developed essay with longer sentences and more unique words appears to be a better essay. But don’t be misled. Style and vocabulary carry less weight than development, and the sample essay, despite having fewer total words, was a bit more developed than mine.
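To make those four metrics concrete, here is a small Python sketch of how they can be computed for any essay text. This is my own illustration, not ETS's tooling: the naive word and sentence tokenization rules are assumptions, so its numbers won't match e-rater's exactly.

```python
import re

def essay_stats(text: str) -> dict:
    # Naive tokenization: words are runs of letters/apostrophes, and
    # sentences end at ., !, or ? (e-rater's actual rules are unpublished).
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "words_per_sentence": round(len(words) / len(sentences), 1),
        "unique_words": len({w.lower() for w in words}),  # case-insensitive
    }

stats = essay_stats(
    "The plot was thin. The acting, however, was superb! "
    "Would I watch it again? Yes."
)
print(stats)
```

Even this toy version shows why the measures are rough proxies: "words per sentence" rewards long sentences regardless of whether they read well, and "unique words" counts variety, not aptness.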
| Org & Dev | Introductory Material | Thesis | Main Ideas | Supporting Ideas | Conclusion |
|---|---|---|---|---|---|
| My 5 Essay | ¶I | ✓ | 3 | | ¶V |
| Sample 6 Essay | ¶I | ✓ | 4 | ¶II:2, ¶III:2, ¶IV:3, ¶V:3 | ¶VI |
As the Org & Dev table shows, the sample had 4 main ideas to my 3. Since e-rater didn't complain much about my grammar and the like, the software probably would've given me a 6 if I'd just tacked on another paragraph.
But would a human have given me a 6? Humans evaluate ideas, not just count them, and mine may merit only a 5 (or lower). To see whether my analysis is cogent enough to earn a 6, I can again compare my essay to the 6-rated and, crucially, human-scored sample response. If my content is similar to the sample’s, then my score probably will be, too.
Our critiques were quite alike. In fact, our ideas, both main and supporting, mostly matched. However, in the sample essay, the main idea from my fourth paragraph (that is, whether positive reviews increase movie attendance) was split into two main ideas that occupied the sample’s fourth and fifth paragraphs. So, yes, I’d say having a fourth (relevant, well-supported) main idea likely would’ve earned my essay a 6 from a human rater.
If you’re looking to raise your Analytical Writing score, ScoreItNow! is a good, low-cost resource. The topics come from the test maker, and the ratings reliably approximate official scores. Although e-rater doesn’t evaluate the content of your writing, the software does give useful feedback on writing quality, and you can evaluate the content yourself by comparing your essay to real, human-scored responses.