2016 Pre-APSA Workshop:

'Expert indicators in the social sciences: Challenges of validity, reliability and legitimacy.'

Wednesday 31 August 2016

Philadelphia, PA

 

Details

Workshop Theme

Expert surveys have become increasingly common in comparative social science, in risk analysis by private sector organizations, in evaluation research, and among NGOs and policy makers (Meyer & Booker 1991). Expert surveys are often used to deepen understanding of, among other topics, the left-right positions of political parties and news media outlets, the perceived extent of corruption or bribe-paying, and the quality of democratic governance. Expert surveys increasingly supplement alternative sources of information, such as citizen mass surveys, event analysis of media reports, and official statistics.

This data collection technique has been applied to diverse research topics, including the series of studies on party and policy positioning (Laver and Hunt 1992; Huber and Inglehart 1995; Saiegh 2009; Laver, Benoit, and Sauger 2006; McElroy and Benoit 2007), the power of prime ministers (O’Malley 2007), evaluations of electoral systems (Bowler, Farrell, and Pettitt 2005), policy horizons (Warwick 2005), campaign communications (Lilleker, Stetka and Tenscher 2015), human rights and democracy (Landman and Carvalho 2010), and the quality of public administration (Teorell, Dahlström and Dahlberg 2011). Expert surveys have been widely used in research on corruption, notably the Corruption Perceptions Index (Transparency International 2013; Global Integrity); in measuring democracy since the 1900s, as in the Varieties of Democracy project (Coppedge et al. 2011); and in research on electoral integrity (Norris 2014; Norris 2015; Martinez i Coma and van Ham 2015). The World Bank Institute’s Good Governance indicators combine an extensive range of expert perceptual surveys drawn from the public and private sectors. Indeed, among the mainstream indicators of democracy, Freedom House’s estimates of political rights and civil liberties, Polity IV’s classification of autocracies and democracies, and the Economist Intelligence Unit’s estimates of democracy are all, in different ways, dependent upon expert judgments.

Expert surveys seem especially useful for measuring complex concepts that require expert knowledge and evaluative judgments, and for measuring phenomena for which alternative sources of information are scarce (Schedler 2012). Yet expert surveys are not risk-free, and scholars have pointed out their limitations (Budge 2000; Mair 2001; Steenbergen and Marks 2007). Moreover, in contrast to mass social surveys, we still lack a common methodology for constructing such surveys, as well as agreed technical standards and codes of good practice. There has been heated debate about the pros and cons of methods used to evaluate the spatial positions of party policies, and about the use of governance indicators more generally, but by contrast there has been remarkably little discussion of the challenges of validity, reliability, and legitimacy facing the construction of expert perceptual surveys. It is critical to consider these issues given the lack of a clear conceptualization and sampling universe of ‘experts’, contrasting selection procedures and reliance upon domestic and international experts, variations in the number of respondents and in the publication of confidence intervals, and the lack of consistent standards in levels of transparency and the provision of technical information. More research is also needed on how to evaluate the consequences of expert and context heterogeneity for the validity of expert judgments (Martinez i Coma and van Ham 2015), for example by using item response models to test and correct for expert heterogeneity (Pemstein et al. 2015), and by using techniques such as ‘anchoring vignettes’ (King & Wand 2007) or ‘bridge coders’ (V-Dem) to test and correct for context heterogeneity.
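The intuition behind anchoring vignettes can be illustrated with a minimal sketch. This is a simplified, non-parametric recoding using hypothetical data, not the full King & Wand (2007) procedure: each expert rates a common set of vignettes with a known ordering, and each country rating is then re-expressed relative to that expert's own vignette ratings, so that a "harsh" and a "lenient" scale user become comparable.

```python
import bisect

def vignette_adjust(rating, vignette_ratings):
    """Recode an expert's country rating relative to that expert's own
    ratings of common anchoring vignettes. Returns k if the rating falls
    above exactly k of the expert's sorted vignette ratings (a simplified,
    tie-ignoring recoding)."""
    thresholds = sorted(vignette_ratings)
    return bisect.bisect_right(thresholds, rating)

# Two hypothetical experts rate the same three vignettes differently
# (one harsh, one lenient) but agree on the country's relative placement:
harsh_expert = {"vignettes": [2, 4, 6], "country": 5}
lenient_expert = {"vignettes": [5, 7, 9], "country": 8}

adjusted = [
    vignette_adjust(e["country"], e["vignettes"])
    for e in (harsh_expert, lenient_expert)
]
print(adjusted)  # → [2, 2]: both place the country above 2 of 3 vignettes
```

On the raw scale the two experts disagree (5 versus 8); after anchoring on their own vignette ratings, both place the country in the same relative position, removing one source of context heterogeneity.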

This workshop features sessions and papers covering the following topics: 

  1. Use expert surveys in comparative politics and related social sciences to develop new ways of testing the validity of expert survey data;

  2. Use meta-analysis to compare expert perceptual survey standards, methods, and techniques in terms of validity, reliability and legitimacy to other sources of information; or

  3. Propose codes of conduct and good practice for conducting expert surveys.

References

Benoit, Kenneth and Michael Laver. 2005. Party Policy in Modern Democracies. London: Routledge.

Bowler, Shaun, David Farrell and Robin Pettitt. 2005. ‘Expert opinion on electoral systems: So which electoral system is best?’ Journal of Elections, Public Opinion and Parties 15(1): 3-19.

Budge, Ian. 2000. ‘Expert judgments of party policy positions: Uses and limitations in political research.’ European Journal of Political Research 37(1): 103-113.

Transparency International. 2013. Corruption Perceptions Index. http://www.transparency.org/whatwedo/publication/cpi_2013

Huber, John and Ronald Inglehart. 1995. ‘Expert Interpretations of Party Space and Party Locations in 42 Societies.’ Party Politics 1: 73-111.

King, Gary and Jonathan Wand. 2007. ‘Comparing incomparable survey responses: Evaluating and selecting anchoring vignettes.’ Political Analysis 15(1): 46-66.

Landman, Todd and Edzia Carvalho. 2010. Measuring Human Rights. London: Routledge.

Laver, Michael and Ben Hunt. 1992. Party and Policy Competition. London: Routledge.

Laver, Michael, Kenneth Benoit, and Nicolas Sauger. 2006. ‘Policy Competition in the 2002 French Legislative and Presidential Elections.’ European Journal of Political Research 45: 667-697.

Lilleker, Darren, V. Stetka and J. Tenscher. 2015. ‘Towards hypermedia campaigning? Perceptions of new media’s importance for campaigning by party strategists in comparative perspective.’ Information, Communication and Society 18(7): 747-765.

Mair, Peter. 2001. ‘Searching for the position of political actors: A review of approaches and a critical evaluation of expert surveys.’ In Michael Laver (ed.), Estimating the Policy Positions of Political Actors. London: Routledge.

Martinez i Coma, Ferran and Carolien van Ham. 2015. ‘Can experts judge elections? Testing the validity of expert judgments for measuring election integrity.’ European Journal of Political Research 54(2): 305-325.

McElroy, Gail and Kenneth Benoit. 2007. ‘Party groups and policy positions in the European Parliament.’ Party Politics 13: 5-28.

Meyer, M. and J. Booker. 1991. Eliciting and Analyzing Expert Judgment: A Practical Guide. London: Academic Press.

O’Malley, Eoin. 2007. ‘The Power of Prime Ministers: Results of an Expert Survey.’ International Political Science Review 28(1): 7-27.

Pemstein, Daniel, Eelco Tzelgov and Yi-ting Wang. 2015. ‘Evaluating and Improving Item Response Theory Models for Cross-National Expert Surveys.’ Varieties of Democracy Institute: Working Paper Series No. 1.

Saiegh, Sebastian. 2009. ‘Recovering a basic space from elite surveys: Evidence from Latin America.’ Legislative Studies Quarterly 34(1):117-145.

Schedler, Andreas. 2012. ‘Judgment and Measurement in Political Science.’ Perspectives on Politics 10(1): 21-36.

Steenbergen, Marco R. and Gary Marks. 2007. ‘Evaluating expert judgments.’ European Journal of Political Research 46: 347-366.

Teorell, Jan, Carl Dahlström and Stefan Dahlberg. 2011. The QoG Expert Survey Dataset. University of Gothenburg: The Quality of Government Institute. http://www.qog.pol.gu.se

Warwick, Paul. 2005. ‘Do Policy Horizons Structure the Formation of Parliamentary Governments? The Evidence from an Expert Survey.’ American Journal of Political Science 49(2): 373-387.

 

Full Schedule

8:30 – 9:00 Arrival, registration, and buffet continental breakfast (Foyer)

9:00 – 9:30  Welcome to the Workshop (Rittenhouse I&II)


Pippa Norris (Harvard and Sydney University)

Staffan Lindberg (V-DEM and University of Gothenburg)

9:30 – 11:00 Panel 1: (EIP) Assessing and improving electoral integrity (Rittenhouse I&II)


Chair: Alessandro Nai (University of Sydney)
Discussant: Jeffrey Karp (Exeter University)


1.1 Pippa Norris (Harvard and Sydney University)
The pragmatic case for electoral assistance: The impact of regional organizations on electoral reform

1.2 Josephine Boyle (Rider University), Michael Brogan (Rider University) and Frank Rusciano (Rider University)
Elite and Popular Perceptions: Developing a Global Definition of Democracy

1.3 David Carroll (The Carter Center) and Obehi Okojie (Georgetown University)
Assessing Electoral Integrity: Comparing Indicators and Developing an Overall Assessment Framework

1.4 Kirill Kalinin (University of Michigan)
Signaling Games of Election Fraud

*** EIP/International IDEA essay competition winner 2016 ***

9:30 – 11:00 (V-DEM) Closed Organizational Meeting (Executive Boardroom)

11:00 – 11:30 Coffee/Tea Break (Foyer)

11:30 – 1:00 Panel 2: (EIP) The rules of the game: electoral systems and integrity (Franklin 1&2)


Chair: Pippa Norris (Harvard and University of Sydney)
Discussant: Frank Rusciano (Rider University)

2.1 Masaaki Higashijima (Tohoku University) and Eric C. C. Chang (Michigan State University)
The Choice of Electoral Systems in Dictatorship

2.2 Eleanor Hill (University of Manchester)
Biraderi, Political Machines and Postal Voting on Demand in Great Britain

2.3 Shane Singh (University of Georgia)
Compulsory Voting and Electoral Integrity

2.4 Víctor A. Hernandez-Huerta (University of the Andes)
Rejecting election results now to be elected in the future: evidence from sub-national elections in Mexico.

11:30 – 1:00 Panel 3: Information, communication and legitimacy (Garden Room)


Chair: Holly-Ann Garnett (McGill University)
Discussant: Alessandro Nai (University of Sydney)

3.1 Mike Omilusi (Ekiti State University)
From Convenient Hibernation to Circumstantial Desperation: Hate Speech, Party Political Communication and Nigeria’s 2015 General Elections.


3.2 Sharon Lean (Wayne State University) and Matthew Lacouture (Wayne State University)
Resources, repertoire and context: what determines the quality of domestic election monitoring?
(Copies of the paper are available on request from the authors: sflean@wayne.edu)


3.3 Max Grömping (University of Sydney)
Explaining news attention to domestic election monitoring initiatives

3.4 Nicholas Kerr (University of Alabama) and Anna Lührmann (University of Gothenburg and V-DEM)

Public Trust in Manipulated Elections: The Role of Election Administration and Media

11:30 – 1:00 (V-DEM) Closed Organizational Meeting (Executive Boardroom)

1:00 – 2:00 Buffet Lunch and Workgroups: (Joint) Best practices for the use of expert indicators (Rittenhouse I&II)


Codes of standards and ethical conduct governing best practices in mass surveys have been agreed upon by many market research companies, survey research organizations, and professional bodies. Members endorse these standards and agree to abide by them in their conduct. Yet similar conventions have not yet been adopted and agreed upon to promote best ethical practices in the generation of expert-based indicators and surveys.

In this breakout session, led by a moderator, groups at each table will be asked to discuss two questions: (i) In your experience, what are the most important challenges to maintaining appropriate standards and practices when generating and using expert indicators? (ii) What codes of conduct would you suggest to address these challenges?

Each table group will select a rapporteur to report back the key points during the final 15-20 minutes.

Chair: Pippa Norris

Table discussion moderators

1. Sarah Lister (UNDP)

2. Gary Marks (UNC Chapel Hill)

3. Monty Marshall (Polity IV)

4. Annika Werner (Griffith University, MARPOR)

5. Sarah Repucci (Freedom House)

6. Sabine Donner (BTI Index)

7. Jennifer Dunham (Freedom House)

2:00 – 3:00 Panel 4: (Joint) Judging the judges: evaluating the quality and validity of expert assessments (Franklin 1&2)


Chair: Gary Marks (UNC Chapel Hill)
Discussant: Sarah Repucci (Freedom House)

4.1 Kyle Marquardt (University of Gothenburg and V-DEM Institute), Daniel Pemstein (North Dakota State University), Brigitte Seim (UNC Chapel Hill) and Yi-ting Wang (National Cheng Kung University)
What makes experts reliable?

4.2 Kelly McMann (Case Western Reserve), Brigitte Seim (UNC, Chapel Hill), Jan Teorell (Lund University), Daniel Pemstein (North Dakota State University) and Staffan Lindberg (University of Gothenburg)
Measurement Assessment Strategies: Assessing the Varieties of Democracy Corruption Measures

4.3 Jan Teorell (Lund University), Michael Coppedge (University of Notre Dame), Svend-Erik Skaaning (University of Gothenburg and V-DEM) and Staffan I. Lindberg (University of Gothenburg and V-Dem)
Measuring Electoral Democracy with V-Dem Data: Introducing a New Polyarchy Index

2:00 – 3:30 Panel 5: (Joint) Issues and techniques of expert assessments (Garden Room)

Chair: Svend-Erik Skaaning (V-DEM and Århus University)
Discussant: Annika Werner (Griffith University)

5.1 Daniel Pemstein (North Dakota State University), Brigitte Seim (UNC Chapel Hill) and Staffan Lindberg (University of Gothenburg)
Anchoring Vignettes and Item Response Theory in Cross-National Expert Surveys

5.2 Holly Ann Garnett (McGill University)
Measuring the Quality of Election Management: Perceptions, Design and Capacity


5.3 Carolien van Ham (University of New South Wales) and Ferran Martínez i Coma (University of Sydney)
Do experts judge elections differently in different contexts? The cross-national comparability of experts
To access a copy of the paper please email the authors.

3:30 – 4:00 Coffee/Tea Break (Foyer)

4:00 – 6:00 Plenary Roundtable: (Joint) Lessons for best practices in expert indicators (Rittenhouse I&II)


During this interactive Q&A session, we will return to and build upon the lunchtime discussion. Panelists will each be asked to discuss their experience of the challenges of developing standards and codes governing best practices in generating expert indicators designed to measure social phenomena.

The discussion will open with two questions, which each panelist is asked to address in 4-5 minutes of remarks: (i) In your experience, what are the most important two or three challenges to standards and ethics when undertaking expert surveys? (ii) What best practices and codes of conduct would you suggest to address these challenges?

The Q&A, moderated by the chair, will pick up these points interactively. The panel aims to stimulate discussion and explore whether there is a consensus around some minimal best practices for generating expert-based indicators. We will conclude by considering how best to build agreement around these issues.

Chair: Pippa Norris (Harvard and University of Sydney)

§ Sarah Lister (UNDP Oslo Governance Center)

§ Gary Marks (University of North Carolina-Chapel Hill, Chapel-Hill Expert Survey)

§ Monty Marshall (Societal-Systems Research, Polity IV)

§ Annika Werner (Griffith University, Manifesto Project)

§ Sarah Repucci (Freedom House, Freedom in the World, Freedom of the Press)

§ Sabine Donner (BTI Transformation Index)

§ Staffan I. Lindberg (V-Dem)