How accurate is our misinformation? A randomized comparison of four survey interview methods to measure risk behavior among young adults in the Dominican Republic

Publication type: Peer-reviewed journal article
Date: 2016
Author(s): Sigrid Vivo, Sandra I. McCoy, Paula López-Peña, Rodrigo Muñoz, Monica I. Larrieu, Pablo Celhay
Source: Development Engineering, Volume 2, 2017, Pages 53–67

Objective

To identify the most effective survey interview method for measuring risk behavior among young adults in the Dominican Republic.

Methods

1200 young adults were randomized to one of four survey interview methods: two interviewer-assisted methods [face-to-face interview (FTFI) and computer-assisted telephone interview (CATI)] and two self-administered methods [self-administered interview (SAI) and audio computer-assisted self-administered interview (ACASI)]. Participants were asked about a wide range of youth-specific risk behaviors, including violence, substance use, and sexual and reproductive health. The quality of the data collected was assessed by examining how the survey was administered and by identifying two sources of error that typically threaten data quality: (i) errors at the individual level, related to survey method performance and cognitive difficulty [measured with the Response Consistency Index (RCI)]; and (ii) errors at the aggregate level (how social desirability bias, interviewer gender, and interview privacy settings affect responses).

Results

No statistically significant differences in participant-level non-response rates were found across the four survey interview methods. At the individual question level, however, self-administered methods generated higher non-response and error rates than interviewer-assisted methods. The SAI method performed worst of the four in terms of both question non-response rate (1.6%) and RCI (83.0%).

At the aggregate level, the prevalence of several key risk indicators differed significantly between methods. Using covariate-adjusted means (μ), sexual and reproductive health indicators showed statistically significant differences among methods, with interviewer-assisted methods eliciting higher reported prevalence than self-administered methods. For example, FTFI yielded higher reported prevalence than ACASI for indicators such as Ever Had Sex (both genders), Safe Casual Sex (males), and Transactional Sex (males). Conversely, the self-administered SAI method yielded higher reported prevalence of concurrency among women (i.e., overlapping sexual partnerships).

The study also found differences between methods in drug and alcohol use indicators, with the direction of bias differing between the two. Respondents surveyed via ACASI reported higher Drug Use prevalence, whereas FTFI elicited a higher reported prevalence of Binge Drinking among female participants than ACASI.

Additionally, the research team found a strong interaction with interviewer gender. In FTFI, affirmative responses to Sex with Same Gender questions were higher among men interviewed by women than among men interviewed by men. Interviewer gender also appeared to generate socially desirable responses in violence-related indicators: for Victimization among males, for instance, use of SAI increased reported prevalence.

Furthermore, this study suggests that when privacy during an interview was recorded as insufficient, women reported sexual initiation at an age almost a year younger than when responding in private.

Conclusions

The study's findings suggest that the degree to which a specific method improves the reporting of a particular risk behavior, or set of behaviors, is likely population specific. The results support the researchers' initial hypothesis that, in the Dominican Republic context, higher prevalence estimates may be less valid than lower estimates, especially where some risk behaviors, such as sexual activity, are normative and socially acceptable. In particular, interviewer-assisted methods appear to significantly increase the reporting of socially accepted behaviors, with both male and female respondents over-reporting on sensitive questions. Likewise, the level of privacy during the interview and the interviewer's gender may have an important effect on responses, and this effect depends on the chosen survey interview method.

Keywords: Social desirability bias; Cognitive difficulty; Survey interview methods; Young adults; Effectiveness; Measurement; Data quality; Risk behaviors
Link: https://doi.org/10.1016/j.deveng.2016.06.002