“No Award” Part V: A Look at the Hugo Voting Numbers


By Chris Chan

 

This is the fifth in a series of articles.  For Part I, please see here. For Part II, please see here.  For Part III, please see here.  For Part IV, please see here.

 

In Part III, this series took an initial look at some of the statistics and voting numbers behind the Hugo Awards.  In this article, I will take a look at some of the data and attempt to explain what it tells us about the nomination and final voting trends, as well as the ultimate impact that the Sad and Rabid Puppies had on the results.

 

Part III of this series contained links to all of the officially released voting data on the Hugos from 2007 onwards.  Those links are repeated here, along with the official numbers from 2000 to 2006.  Note that in years without “(Nominees)” or “(Winners)” afterwards, all of the nomination and winner information is combined in a single file or page.  Information on the 2000 nominees and those candidates who did not make the final ballot was not released.  The original link to the 2004 information has been disabled, but the relevant information has been saved on the Wayback Machine.

 

2000 (Winners)

2001

2002 (Nominees)  2002 (Winners)

2003 (Nominees)  2003 (Winners)

2004 (Nominees)  2004 (Winners)

2005 (Nominees)  2005 (Winners)

2006 (Nominees)  2006 (Winners)

2007 (Nominees)  2007 (Winners)

2008 (Nominees)  2008 (Winners)

2009 (Nominees)  2009 (Winners)

2010

2011

2012

2013

2014

2015

2016

Preliminary 2017 information– Full statistics will be released in August at Worldcon 75

 

Unfortunately, the Hugo nomination and voting information is not always consistent– in a handful of cases, the total number of ballots in either the nominations or the final vote is not explicitly stated, and therefore the totals are incomplete.  Nevertheless, the released data provides a useful look at just how many people have been voting for what since the year 2000.

 

There is far, far too much information to analyze thoroughly here– complete coverage of the data would fill a book.  In the space available, it is at least possible to observe some general trends from the following numbers regarding the total nominating and final ballots for each year.

 

Number of valid ballots cast

2000: Final: 1,071

2001: Nominating: 498; Final: 1,050

2002: Nominating: Not explicitly stated but at least 486; Final: Not explicitly stated but at least 885

2003: Nominating: 738; Final: 805

2004: Nominating: Not explicitly stated but at least 462; Final: 1,093

2005: Nominating: 546; Final: 684

2006: Nominating: Not explicitly stated but at least 430; Final: Not explicitly stated but at least 660

2007: Nominating: Not explicitly stated but at least 102 and almost certainly a great deal more; Final: Not explicitly stated but at least 471

2008: Nominating: 483; Final: 895

2009: Nominating: Not explicitly stated but at least 639; Final: 1,074

2010: Nominating: 864; Final: 1,094

2011: Nominating: 1,006; Final: 2,100

2012: Nominating: 1,101; Final: 1,922

2013: Nominating: 1,343; Final: 1,848

2014: Nominating: 1,923; Final: 3,587

2015: Nominating: 2,122; Final: 5,950

2016: Nominating: 4,032; Final: 3,130

2017: Nominating: 2,464; Final: To be determined
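The swing in participation is easier to see when the totals above are tabulated.  A minimal sketch (using only the years where both the nominating and final figures are stated outright; the numbers are simply those listed above):

```python
# Hugo ballot totals, taken from the list above; only years where both
# the nominating and final counts were explicitly stated are included.
ballots = {
    2003: (738, 805),
    2005: (546, 684),
    2008: (483, 895),
    2010: (864, 1094),
    2011: (1006, 2100),
    2012: (1101, 1922),
    2013: (1343, 1848),
    2014: (1923, 3587),
    2015: (2122, 5950),
    2016: (4032, 3130),
}

for year, (nominating, final) in sorted(ballots.items()):
    # Ratio above 1.0 means more people voted in the final round
    # than nominated -- the usual pattern in most years.
    ratio = final / nominating
    print(f"{year}: {nominating:>5} nominating, {final:>5} final "
          f"(final/nominating = {ratio:.2f})")
```

In this set, 2016 is the only year where the ratio drops below 1.0, with far more nominating ballots than final ballots.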

 

Looking through all of this data (and there is a lot of data here), there are a couple of notable trends to follow.  As the full nomination statistics illustrate, bestseller status does not necessarily equate with nominations and wins.  For example, J.K. Rowling’s Harry Potter series has an enormous fan base, but despite a nomination for Prisoner of Azkaban and a Hugo win for Goblet of Fire, later books in the series missed out on nods, sometimes by as few as ten or twenty-five votes.  The Hugo voters do not represent all sci-fi and fantasy readers– their preferred authors and styles of work may not be mirrored by the broader public.  The average reader may read only a handful of bestselling works in the sci-fi/fantasy genre– perhaps only one or two.  Many Hugo voters are enthusiastic fans of the genre, and very likely read more deeply and widely than readers who do not focus primarily on sci-fi/fantasy work.  Judging by the frequent recurrence of certain names, it is evident that some authors have very strong support amongst the Hugo voters– Neil Gaiman, to cite one widely recognizable example, has many nominations and wins, and the only reason some of his works did not appear on the voting list is that Gaiman declined the nominations.

 

The number of ballots submitted varies substantially from year to year.  The totals have ebbed and flowed, but since 2011 there has been a sharp rise on average.  In most years the total number of final votes is significantly higher than the number of nominating votes, but 2016 was a big exception.  The explanation may lie in the fact that purchasing a Worldcon membership allows one to nominate in the following year’s Hugos: the influx of voters participating in the 2015 Hugos (when Sad Puppies III dominated) led to more nominations in 2016, but many of the new voters did not renew their memberships in order to vote in 2016.  I would be interested to hear more perspectives on why the voting numbers have risen so sharply since the start of the decade.

 

One point that needs to be considered is the location of the Worldcon.  Some of the Worldcons with the lowest definite numbers of valid final ballots (2003 and 2005) were held outside the U.S. (in Toronto, Canada, and Glasgow, Scotland, respectively), but other years with low numbers of final ballots were held in the U.S., and other overseas conventions have had high numbers of votes.  Since one can vote in the Hugos by buying a supporting membership, which allows one to cast a ballot without actually attending the convention, the location of the Worldcon does not seem like a critical factor in the outcome of the voting.  However, given that a relatively small number of voters might conceivably swing the final vote, it is in theory possible that the location of the convention affects who can attend and who cannot.  For example, the 2017 Worldcon in Helsinki may prove cost-prohibitive for some people, though that ought not stop them from buying a supporting membership.  More influential to the voting might be the entrance of Finnish fans (or others living nearby) who might attend despite not having been involved in Worldcon in the past, which could bring new opinions to the voting.  Again, these possible influences are theoretical– just because something could happen doesn’t mean it did or will.

 

There have been some impressive analyses of the Hugo voting numbers in the past, including the previously mentioned Some Sad Puppies Data Analysis by Nathaniel Givens (which provides a very in-depth look at how the Sad and Rabid Puppy recommendations met with nomination success, and supersedes the need to cover much of the same ground here), and George Flynn’s Hugo Voting: Let’s Look at the Record (Again).  Flynn’s analysis of the data was published long before the Sad Puppies were formed, but it provides a thorough glimpse at how nominees were chosen during the late twentieth century.

 

At one point, Flynn comments that, “It is depressing to see how few nominations it takes to get on the ballot.”  Throughout the late twentieth century, it was possible in many years to get a Hugo nomination with the support of twenty-five or fewer fans– in some years, a short story could be nominated with as few as nine votes.  Flynn also makes the prescient observation that “For any election with more than two candidates, it can be shown that no electoral system will give an optimal result (as defined by the satisfaction of the voters) all the time.”  Though Flynn meant it in a slightly different context, this is emblematic of the discontent that led to the Sad Puppies movement.

 

Notwithstanding the hotly contested allegations that a significant portion of Hugo voters have been motivated by promoting works that reflect certain political and philosophical worldviews, and that members of certain cliques are selected for nomination over “outsiders,” there is a main point at the heart of the Sad Puppies controversy that needs to be addressed.  A small group of sci-fi/fantasy fans concurred that they had not cared for most of the works nominated for Hugos over time, and that other works and authors they liked went unrecognized.  Sad Puppies became a rallying point for such people, and they banded together over suggested works, though as the statistics illustrate, it seems probable that not everybody who consulted the Puppy recommendation lists agreed on everything.

 

In the early years of Sad Puppies, the results were decidedly mixed:

 

Sad Puppies I Suggestions

 

Best Novel

Monster Hunter Legion, by Larry Correia.  Result: 101 nominations, fell short of the final ballot by 17 nominations.

 

Best Fanzine

Elitist Book Reviews.  Result: 50 nominations, the fourth-highest vote-getter amongst the nominated works.  Came in 5th in the final vote.

 

Best Graphic Story

Schlock Mercenary: Random Access Memorabilia.  Result: 98 nominations, the second-highest vote-getter amongst the nominated works.  Came in 3rd in the final vote.

 

Best Fancast

Writing Excuses Podcast.  Result: 13 nominations, fell short of the final ballot by 22 nominations.

 

Best Professional Artist

Vincent Chong.  Result: 53 nominations, the fifth-highest vote-getter amongst the nominated artists.  Came in 5th in the final vote.

 

Best Short Stories/Novellas

Gray Rinehart (no specific title mentioned).  Result: no listed nominations in any writing category.

 

Best Editor (length not specified)

Toni Weisskopf and Jim Minz.  Result: Toni Weisskopf received 50 nominations in the Long Form category, the fifth-highest vote-getter amongst the nominated editors.  Jim Minz received 30 nominations in the Long Form category, falling short of the final ballot by 20 nominations.  Weisskopf came in 2nd in the final vote.

 

Sad Puppies II Suggestions

 

Best Novel

Warbound, the Grimnoir Chronicles by Larry Correia.  Result: 184 nominations, the second-highest vote-getter amongst the nominated works.  (Note: this placement is due to Neil Gaiman declining the nomination for The Ocean at the End of the Lane, which received 218 nominations and would have been in second place had he accepted.)  Came in 5th in the final vote.

A Few Good Men by Sarah Hoyt.  Result: 91 nominations, fell short of the final ballot by 7 votes.

 

Novella

The Butcher of Khardov by Dan Wells.  Result: 106 nominations, the fourth-highest vote-getter amongst the nominated works.  Came in 5th in the final vote.

The Chaplain’s Legacy by Brad Torgersen.  Result: 111 nominations, the third-highest vote-getter amongst the nominated works.  Came in 4th in the final vote.

 

Novelette

The Exchange Officers by Brad Torgersen.  Result: 92 nominations, the second-highest vote-getter amongst the nominated works.  Came in 4th in the final vote.

Opera Vita Aeterna by Vox Day.  Result: 69 nominations, the fifth-highest vote-getter amongst the nominated works.  Came in 6th in the final vote after “No Award.”

 

Best Fanzine

Elitist Book Reviews by Steve Diamond.   Result: 107 nominations, the highest vote-getter amongst the nominated works.  Came in 5th in the final vote.

 

Graphic Story

Schlock Mercenary by Howard Tayler.  Result: 68 nominations, the second-highest vote-getter amongst the nominated works.  Disqualified because it was not eligible in 2013.

 

Best Editor Long Form

Toni Weisskopf.  Result: 169 nominations, the highest vote-getter amongst the nominated editors.  Came in 4th in the final vote.

 

Best Editor Short Form

Bryan Thomas Schmidt.  Result: 80 nominations, fell short of the final ballot by 6 votes.

 

Campbell Award

Marko Kloos.  Result: 88 nominations, the 3rd-highest vote-getter amongst the nominated writers.  The nomination was withdrawn due to prior eligible work.

Frank Chadwick.  Result: 61 nominations, fell short of the final ballot by 9 votes.  If Kloos’ nomination had been valid, Chadwick would have fallen short of the final ballot by 12 votes.

 

Best Related Work

Monster Hunter International Role Playing Game by Hero Games.  Result: 38 nominations, fell short of the final ballot by 14 votes.

 

Best Short Story

Failsafe by Karen Bovenmyer.  Result: 33 nominations.  There were only four nominees in this category that year because a story needs a certain percentage of the vote to qualify.  Fell short of the total nominations of the 4th-place nominee by 10 votes.

(Note: the “Best Related Work” and “Best Short Story” suggestions were not listed on the same blog post as the other nominees, and therefore may have been missed by some Sad Puppies supporters.)

 

As is by now well-known, the Sad Puppies III (2015) suggestions were far more successful, and the details behind their success are covered in depth in Nathaniel Givens’ Some Sad Puppies Data Analysis.  Once again, the recommendation list for Sad Puppies IV (2016) was so large that a significant number of the fan-recommended possibilities found their way onto the final ballot, and it was the more focused Rabid Puppies slate that had the dominant influence on the ballot.  Notably, both the Sad and Rabid Puppies recommended people who later declined their nominations, and some nominees were ineligible, which lowered the Puppies’ dominance over the final nomination list.

 

Given the wide-ranging vote totals of the nominations, the Puppies are not as well-organized as many of their critics allege.  As some Puppy critics insist, organized voting is highly discouraged.  Many Puppy-sympathetic voters may have relied on the recommendation lists (though again, it seems that some people swapped out choices on their nomination ballots and replaced them with their own preferred favorites), but come the final vote, the Puppies did not band together behind a single nominee in order to increase the odds of a win.

 

After looking at the Hugo voting statistics, it is also important to realize just what we cannot tell from the available data.  We cannot be certain just how many people voted a “straight-party” ticket on the Puppy recommendations.  While it’s possible that many of the nominators placed all of the recommended works on their ballots, the significant disparity in some numbers (particularly in the first two years of the Sad Puppies) indicates that a significant but nebulous percentage of people influenced by the Puppy recommendations did in fact view the lists as recommendations rather than a strict slate.  For motives that the numbers alone cannot reveal, many Puppy supporters embraced certain recommended works and rejected others.  Whether these voters disliked some works, or loved others and needed to make space on their ballots, cannot be determined by the statistics alone.  Based on the 2015 numbers, it seems that the number of combined Sad and Rabid Puppy voters was in the mid-to-high 200s (of course, not everybody who voted for certain works on the Puppy recommendation lists may have been aware of the lists, and as previously noted, numerous voters did not follow all of the recommendations).  By 2016, it appears that the Rabid Puppies had well over 400 followers, but the nature of Sad Puppies IV means that it is impossible to judge the impact of Sad Puppies based solely on the provided numbers.

 

Having noted that a relatively small number of Puppy-sympathizing voters shaped the Hugo nominations for two years (precipitating the E Pluribus Hugo vote restructuring), it is also necessary to look at the people who voted against the Puppy candidates.  In 2015, every work on the Sad and Rabid Puppies lists failed to receive a Hugo Award, save for Guardians of the Galaxy in the “Best Dramatic Presentation, Long Form” category.  The only other 2015 winners were nominees not on either the Sad or Rabid Puppy lists.  In some cases, Puppy nominees (like Jim Butcher’s Skin Game) were ranked above “No Award,” but in many cases, well over 3,000 voters placed “No Award” over all of the Puppy candidates.  Clearly, there was significant organization behind the “No Award” movement, and a great many voters joined as a result of the controversy, but as stated earlier, there was a very sharp drop-off the following year.  So who joined the ranks of the Hugo voters?  Did Worldcon attendees who previously neglected to vote suddenly decide to vote against the Puppies?  Did people who had previously not been involved in the Hugos hear about the controversy and decide to join as a result?  Or, as some Puppy supporters have alleged, were less-expensive supporting memberships purchased en masse in an effort to swing the vote?  Could a blend of these factors be at work?  Any of these are possible– the statistical data alone doesn’t tell us who voted and why.  All we have are the numbers– further details on who was involved in the “No Award” campaign remain elusive.

 

So what do all these numbers mean?  They provide a look at how many people were involved in the Hugo voting and the levels of support that works received, but mere numerical data cannot adequately describe the emotions and opinions that have made the Hugos so controversial recently and underscored the divisions in fandom.  In order to understand the human aspect of this issue, it’s crucial to look at the ideas, opinions, and beliefs that figured so heavily in this incident, and I will do so in the next installment in this series.

 

Coming up in Part Six of this series– Ideology, Ideas, and Speculative Fiction

