Afrobarometer Research Speaks for Itself

– By Bob Mattes –
The Afrobarometer is an award-winning international research project that collects data on Africans’ attitudes and opinions about democracy and economics.  Now conducted in 20 countries, and with an unprecedented four waves of data collected in 12 countries, no social science project has ever approached its scope on this continent.  The project has been supported since 1999 by a wide range of multilateral and bilateral donors who use the data to understand the political atmosphere in Africa, and the results are increasingly seen and discussed at the highest levels of government across the continent.  Afrobarometer data has supported articles and books in leading social science journals and publishing houses. 

Thus, it was indeed surprising to come across an article last week written by the Democratic Alliance’s Gareth Van Onselen (“Is Afrobarometer’s Latest Poll Reliable?”).  Certainly, no research project is above criticism.  However, one can’t escape the conclusion that the motivation for the piece was not academic debate but the DA’s pique at the results of one single question in the most recent Afrobarometer South Africa survey (on party support, to which I will come below).  That pique evidently led him to download the summary of the questionnaire and top-line results from the Idasa website (Idasa is one of the Afrobarometer’s core partners) and embark on a “slash and burn” exercise, pulling out three or four questions in an attempt to undermine the reliability of the entire project.

But because all of us should be prepared to learn from scholarly debate, however vituperative, let me address each of Van Onselen’s points, one by one.

For starters, Van Onselen attacks the verbal, ordinal response scale we present to respondents answering a question about how often they were a victim of violent crime.  The verbal rankings (“never,” “once or twice,” “several times”), Van Onselen suggests, are relative concepts and might mean different things to different people.  He argues, rather, that we should have asked people to give us an actual quantitative count. 

However, there are at least two reasons why this is an appropriate way of posing the responses.  First, in large-scale surveys like the Afrobarometer, it is incumbent on questionnaire designers to group together sets of similar question items that can be answered with the same sets of responses.  Had he bothered to look at the questionnaire more closely, Van Onselen would have seen that this item was part of a larger battery of question items on human security, looking at crime but also at the frequency with which people experience shortages of basic necessities.

Second, it might seem on the face of it to make more sense to ask people for a precise count of how many times they were attacked, or felt insecure, or went without enough to eat in the past year.  But research shows that such responses provide a false sense of exactitude.  Many people, especially those regularly subjected to violence and destitution, simply can’t keep track to any meaningful degree of how many times they went hungry last year.  Thus, it is safer – and more reliable – to ask for a rougher estimate: never, once or twice, several times, and so on.  Indeed, we find that the aggregated responses reveal very reliable trends over time, with the overall percentages of those who are frequent victims of crime, for example, moving upward and downward in ways that generate important insights into national experiences of poverty and security and provide invaluable alternatives to official data.

Trolling further through the summary of results made available by Idasa on its website, Van Onselen comes to another item, Question 18, which he claims violates basic rules of survey research by combining two ideas in a single statement: “People are like children, the government should take care of them like a parent.  Since leaders represent everyone, they should not favour their family or group.” 

This was a typographical error, not a cardinal error of survey research! 

While there is certainly no excuse for sloppiness, Van Onselen could have easily figured this one out if he’d simply bothered to look at the preceding Question 17 which taps attitudes toward patronage-client relationships by asking respondents whether they agreed with the statement “Since leaders represent everyone, they should not favour their own family or group”.

He then complains about a question that asks “How much do you trust opposition political parties,” characterizing it as “ambiguous” and “ultimately worthless” because it does not ask about specific opposition parties.  But the Afrobarometer is not designed primarily as a voter survey, but a tool to help understand democratic and economic trends.  Indeed, a more careful look at the questionnaire would have revealed that the question directly follows a similar item that asks about trust in the governing party.  The intentional juxtaposition of these two items provides us with an illuminating exploration of the degree to which political trust in Africa turns on issues of incumbency or other potential lines of social and political cleavage.

Indeed, we find that while “the opposition” enjoys lower levels of trust than the governing party in virtually all African countries, the relative ratio of governing to opposition party trust is highly predictive of the actual competitiveness of party systems from country to country.  And South Africa, importantly, has one of the larger gaps of the 20 countries where we conduct our research, something that a representative of the “official” opposition should take to heart. 

Van Onselen also criticizes Afrobarometer for asking a voting intention question of unregistered voters.  But to suggest that we don’t need to know the opinions of unregistered voters is a bizarre statement from a representative of a party that expressly calls itself Democratic.  Again, very little of the Afrobarometer is devoted to issues of partisan politics; it is not intended to predict election results.  But we are very interested in the voting intentions of all citizens, whether or not they are registered and whether or not they intend to vote.  Indeed, it is only by asking all respondents that we have been able, over the years, to understand the differences in opinion between South Africa’s shrinking electorate, on one hand, and the growing number of non-voters, on the other.  If South Africa’s opposition paid more attention to offering this group of “turned off” voters a legitimate alternative, the country might not face such a dire threat of one-party domination.

Lastly, Van Onselen raises a question about how the Afrobarometer asks about voting intention.  In short, Afrobarometer poses the question the way hundreds of polling organizations around the world do, verbally asking respondents: “If an election were held tomorrow, which party would you vote for?”

Now, it might be true that in a charged atmosphere like South Africa’s, other questions and methods would be more useful.  Besides the substantive answers, 8 percent of respondents said they did not know, 10 percent said they would not vote, and fully 21 percent refused to give an answer. 

In contrast, the commercial firm Markinor hands respondents a paper ballot and asks them to mark it and drop it into a box.  While it tends to find the same levels of undecided voters, it also finds far fewer outright refusals.  However, university-based researchers would encounter real problems with ethics review boards if they explicitly, or even implicitly, indicated to people that they were participating in some form of secret ballot and then in fact re-linked the vote back to the questionnaire after the interview.  Indeed, the very act of handing out mock ballots would create a multitude of problems in many African countries that might scuttle the entire survey.

The “secret ballot” approach also suffers from another limitation during times of fast-moving political change because respondents are limited to the parties listed on the mock ballot paper.  Thus, Markinor’s most recent poll in October registered absolutely no support for COPE – simply because it was not yet a party and thus not listed on the ballot.  The Afrobarometer question, put to respondents just a few weeks later, registered 4 percent for COPE simply because people were able to volunteer that party on their own. 

But–again–the Afrobarometer is a social science project devoted to understanding democracy in Africa, not predicting elections.  And what is more revealing about the state of our democracy than the fact that one out of five South Africans don’t feel sufficiently comfortable to reveal their vote choice to professional, independent fieldworkers?

Robert Mattes, Professor, Political Studies, Director, Centre for Social Science Research (http://www.cssr.uct.ac.za/), University of Cape Town
