Online surveys known as Voting Advice Applications (VAAs) are popular voter information tools that aim to help citizens find the party or candidate closest to their own political preferences. In a typical VAA, users are asked to indicate their agreement with a battery of 30 statements using 5-point response scales. Based on the estimated positions of political parties/candidates on the same statements, VAAs communicate the level of match between the user and the parties/candidates using methods of varying sophistication. As VAAs have attracted millions of users across Europe, and have been shown to affect political knowledge and voter turnout, their quality as voter information tools has come under close scrutiny. The critical literature has detected, among other things, shortcomings in the selection and formulation of the statements found in VAAs. In this paper I examine the issues of statement selection and formulation both theoretically and empirically, focusing on the distribution of responses among users and parties alike. I argue that the less polarized the responses at the user and party level are, the more biased and redundant the statements will be, respectively. The empirical application of this argument involves a cross-national comparison of two sets of 30 questions used by two popular VAAs during the 2009 and 2014 elections to the European Parliament. The paper concludes with implications for the selection and wording of statements for VAA designers and applied researchers alike.
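The abstract does not specify which matching method the two VAAs use; as a minimal sketch of one widely used approach, the snippet below computes a city-block (Manhattan) agreement percentage between a user's and a party's answers on the same 5-point statements. The function name, the example answer vectors, and the 0-100 scaling are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch of a common VAA matching method (assumed, not
# the paper's specific approach): city-block agreement between a
# user's and a party's responses on 5-point scales (1 = disagree,
# 5 = agree).

def match_score(user, party):
    """Return a 0-100 match percentage between two equal-length
    lists of responses coded 1..5."""
    if len(user) != len(party):
        raise ValueError("answer lists must have equal length")
    # The maximum distance per statement on a 5-point scale is 4.
    max_dist = 4 * len(user)
    dist = sum(abs(u - p) for u, p in zip(user, party))
    return round(100 * (1 - dist / max_dist), 1)

# Hypothetical example: one user compared with two parties on
# five statements (a real VAA would use all 30).
user = [5, 4, 2, 1, 3]
party_a = [5, 5, 1, 2, 3]   # mostly similar answers
party_b = [1, 2, 4, 5, 3]   # mostly opposed answers
print(match_score(user, party_a))  # 85.0
print(match_score(user, party_b))  # 40.0
```

Note how the metric's spread depends on response polarization: if most users and parties cluster near the scale midpoint, distances shrink and match scores bunch together, which is one way the abstract's concern about weakly polarized statements becomes concrete.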
Published - 7 Sept 2016
10th ECPR General Conference 2016 - Charles University, Prague, Czech Republic
Duration: 7 Sept 2016 → 10 Sept 2016