Examining the Validity of MTurk Workers' Responses Based on Monetary Reward: A Qualitative Data Analysis

Presenter(s): Maggie Murphy—Psychology

Faculty Mentor(s): David Condon

Session: Prerecorded Poster Presentation

Amazon’s Mechanical Turk is an online crowdsourcing marketplace (OCM) that has become widely used for data collection in scientific research, especially in the social sciences. In psychology research, a common use of the platform is to pay MTurk workers (“MTurkers”) to complete surveys and online behavioral tasks. MTurkers are then paid for their contributions; however, little research has considered the effect of payment on data quality (Chmielewski & Kucker, 2019). We hypothesize that the accuracy of responses depends in part on how much MTurkers are paid for them. In this study, we sought to evaluate the effect of compensation on the care MTurkers displayed in their survey responses. We explore the validity of MTurk responses using an SPI norming survey created by Professor Condon, administered under three compensation conditions: one paying workers a rate equal to the US federal minimum wage, one paying minimum wage plus 25%, and a third paying 25% less than minimum wage with an unannounced bonus (up to minimum wage) after the work was completed. We compare responses across conditions based on the time spent completing the survey, inter-item correlations, and evidence of “patterned responding” (e.g., choosing the same response option for several questions in a row). The findings from our research will be beneficial to researchers using MTurk and other OCMs for data collection.
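One of the screening metrics mentioned above, “patterned responding,” is often operationalized as the longest run of identical consecutive answers (sometimes called a longstring index). A minimal sketch of that idea, not the authors’ actual analysis code, might look like this:

```python
def longstring(responses):
    """Return the length of the longest run of identical consecutive responses."""
    if not responses:
        return 0
    longest = run = 1
    for prev, curr in zip(responses, responses[1:]):
        # Extend the current run if the answer repeats; otherwise reset it.
        run = run + 1 if curr == prev else 1
        longest = max(longest, run)
    return longest

# A respondent who picks the same option for many items in a row scores high:
careless = [3, 3, 3, 3, 3, 3, 2, 3, 3]
attentive = [4, 2, 5, 1, 3, 4, 2, 5, 1]
print(longstring(careless))   # 6
print(longstring(attentive))  # 1
```

Responses with an unusually high longstring value (relative to the number of items) would be flagged as likely careless, which could then be compared across the three compensation conditions.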
