I spent most of my recent time on the project reviewing data, managing finances, and handling other day-to-day tasks, from learning the average profit on goat sales to preparing a list of households for my staff to survey the next day. Accordingly, despite the fact that I was managing a survey, I had little time to actually observe it take place. On a few occasions, however, I did get the chance to see my team in action.
An important part of a successful development evaluation is figuring out not just whether a program works, but also why. To do so, it's essential to have the relevant information. For a very simple intervention, like, say, giving students math textbooks, the theory of change is not so intricate. Perhaps the students are studying more effectively, perhaps they are studying more, or perhaps the teacher's instruction time has become more effective. Whatever the case, it doesn't take a very comprehensive survey (and perhaps a test) to develop a theory of why the program is working.
In our case, however, the program we are evaluating has several components. People receive goats/cows/bees/stock for a business, regular site visits, and technical training from an expert in their new form of income generation; they also participate in a food-for-work program and are told they need to save. It is therefore not immediately clear what is bringing about any change we observe. Our survey is accordingly fairly exhaustive, asking about sources of income, consumption, household decision-making, use of time, savings, time and risk preferences, and more. In total, it takes about 3-4 hours to complete.
This means we cannot simply knock on doors and ask people if they have a few minutes. Further complicating matters, 1) there are no addresses where we work, and 2) we often don't have people's phone numbers. One staff member we hire, then, is a "mobilizer," whose job is to go to households ahead of time and make sure people will be around.
The start of a surveying day therefore makes for an interesting spectacle. Our survey supervisors meet with the mobilizer while our surveyors stand around, and some of the soon-to-be respondents wait around as well. The supervisors then pair people up. It reminded me of a teacher picking partners for a dance, only instead of boys and girls, it was surveyors and respondents. Once a pair has been chosen, they wander off to find a place to conduct the interview. Other surveyors are instead given a set of directions: go down this dirt road, turn left when you come to this tree, ask for this person. And off they go.
At its core, this is what research institutions like Innovations for Poverty Action do. It's interesting for me to consider: we frame ourselves as an evidence-based, innovative research organization, and I'd like to think that's accurate. Yet while our final outputs are policy briefs and peer-reviewed papers in economics journals, using econometric analysis to assess which programs work and why, one of our fundamental building blocks is simply a surveyor walking over hills to find a farmer and ask him how his goat sales have been this year.