
Running Experiments with Amazon Mechanical Turk

I'll start by saying that I think Amazon Mechanical Turk (MTurk) and online markets offer no less than a revolution in experimental psychology. By now, I've already conducted over a hundred experiments on MTurk and have come to consider it one of the most important tools available to me. Together with Qualtrics (see previous posts with tips – 1, 2, 3), MTurk is a very powerful tool for very quick and inexpensive data collection. You don't have to take my word for it; take it from those who know something. There are lots of high-profile articles popping up in various journals across all domains that have come to the same conclusion as I have – MTurk is an important tool. The following examples were chosen from psychology, management, economics, and even biology:

Social Psychology

From Buhrmester, Kwang, & Gosling (2011, Perspectives on Psychological Science), Amazon's Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data?:

Findings indicate that: (a) MTurk participants are slightly more representative of the U.S. population than are standard Internet samples and are significantly more diverse than typical American college samples; (b) participation is affected by compensation rate and task length, but participants can still be recruited rapidly and inexpensively; (c) realistic compensation rates do not affect data quality; and (d) the data obtained are at least as reliable as those obtained via traditional methods.

From Paolacci and Chandler (2014, Current Directions in Psychological Science), Inside the Turk: Understanding Mechanical Turk as a Participant Pool:

Mechanical Turk (MTurk), an online labor market created by Amazon, has recently become popular among social scientists as a source of survey and experimental data. The workers who populate this market have been assessed on dimensions that are universally relevant to understanding whether, why, and when they should be recruited as research participants. We discuss the characteristics of MTurk as a participant pool for psychology and other social sciences, highlighting the traits of the MTurk samples, why people become MTurk workers and research participants, and how data quality on MTurk compares to that from other pools and depends on controllable and uncontrollable factors.

Clinical Psychology

From Shapiro, Chandler, & Mueller (2013, Clinical Psychological Science), Using Mechanical Turk to Study Clinical Populations:

Although participants with psychiatric symptoms, specific risk factors, or rare demographic characteristics can be difficult to identify and recruit for participation in research, participants with these characteristics are crucial for research in the social, behavioral, and clinical sciences. Online research in general and crowdsourcing software in particular may offer a solution. […] Findings suggest that crowdsourcing software offers several advantages for clinical research while providing insight into potential problems, such as misrepresentation, that researchers should address when collecting data online.

Economics

From Horton, Rand & Zeckhauser (2010, Experimental Economics) – The Online Laboratory: Conducting Experiments in a Real Labor Market:

We argue that online experiments can be just as valid—both internally and externally—as laboratory and field experiments, while requiring far less money and time to design and to conduct. In this paper, we first describe the benefits of conducting experiments in online labor markets; we then use one such market to replicate three classic experiments and confirm their results. We confirm that subjects (1) reverse decisions in response to how a decision-problem is framed, (2) have pro-social preferences (value payoffs to others positively), and (3) respond to priming by altering their choices.

Management/Cognition

From Paolacci, Chandler & Ipeirotis (2010, Judgment and Decision Making) – Running experiments on Amazon Mechanical Turk:

Although Mechanical Turk has recently become popular among social scientists as a source of experimental data, doubts may linger about the quality of data provided by subjects recruited from online labor markets. We address these potential concerns by presenting new demographic data about the Mechanical Turk subject population, reviewing the strengths of Mechanical Turk relative to other online and offline methods of recruiting subjects, and comparing the magnitude of effects obtained using Mechanical Turk and traditional subject pools. We further discuss some additional benefits such as the possibility of longitudinal, cross-cultural and prescreening designs, and offer some advice on how to best manage a common subject pool.

Biology

From Rand (2011, Journal of Theoretical Biology) – The promise of Mechanical Turk: How online labor markets can help theorists run behavioral experiments:

I review numerous replication studies indicating that AMT data is reliable. I also present two new experiments on the reliability of self-reported demographics. In the first, I use IP address logging to verify AMT subjects' self-reported country of residence, and find that 97% of responses are accurate. In the second, I compare the consistency of a range of demographic variables reported by the same subjects across two different studies, and find between 81% and 98% agreement, depending on the variable. Finally, I discuss limitations of AMT and point out potential pitfalls.

[Update March 1st, 2016: The APS Observer has a great summary article on MTurk: Under the Hood of Mechanical Turk]

Watch this great overview lecture video about using Amazon Mechanical Turk for academic research – Gabriele Paolacci: The challenges of crowdsourcing data collection in the social sciences.

Other articles

  • Separate but equal? A comparison of participants and data gathered via Amazon's MTurk, social media, and face-to-face behavioral testing (Computers in Human Behavior, Nov 2013).
  • The relationship between motivation, monetary compensation, and data quality among US- and India-based workers on Mechanical Turk (Litman, Robinson, & Rosenzweig, 2014, BRM)
  • Attentive Turkers: MTurk participants perform better on online attention checks than subject pool participants (Hauser & Schwarz, 2015, BRM) | Summary
  • Comparing the Similarity of Responses Received from Studies in Amazon's Mechanical Turk to Studies Conducted Online and with Direct Recruitment (Bartneck, Duenser, Moltchanova, & Zawieska, 2015, PLOS ONE)
  • Notes from a Day on the Forums: Recommendations for Maintaining a Good Reputation as an Amazon Mechanical Turk Requester (Yale David Rand's lab, draft recommendations)
  • Graduating from Undergrads: Are Mechanical Turk Workers More Attentive than Undergraduate Participants? (OSF)
  • The Average Laboratory Samples a Population of 7,300 Amazon Mechanical Turk Workers (JDM, 2015) (Summary post on Experimental Turk)
  • MTurk 'Unscrubbed': Exploring the Good, the 'Super', and the Unreliable on Amazon's Mechanical Turk
  • Are samples drawn from Mechanical Turk valid for research on political ideology? (Research and Politics)
  • The Generalizability of Survey Experiments (Journal of Experimental Political Science, 2015)
  • Conducting Clinical Research Using Crowdsourced Convenience Samples (Annual Review of Clinical Psychology, 2016)
  • Psychological research in the internet age: The quality of web-based data (Computers in Human Behavior, 2016) | reviewed on BPS
  • Tosti-Kharas, J., & Conley, C. (2016). Coding Psychological Constructs in Text Using Mechanical Turk: A Reliable, Accurate, and Efficient Alternative. Frontiers in Psychology, 7, 741.
  • Fifty Percent of Mechanical Turk Workers Have College Degrees, Study Finds (MotherBoard, 2016)
  • Pew Research – Research in the Crowdsourcing Age, a Case Study (July 2016)
  • "Cargo Cult" science in traditional organization and information systems survey research: A case for using nontraditional methods of data collection, including Mechanical Turk and online panels (The Journal of Strategic Information Systems, 2016)
  • Turking Overtime: How Participant Characteristics and Behavior Vary Over Time and Day on Amazon Mechanical Turk (Journal of the Economic Science Association, 2017)
  • A Glimpse Far into the Future: Understanding Long-term Crowd Worker Accuracy (CSCW 2017)
  • Replications with MTurkers who are naïve versus experienced with academic studies (2015) (JESP, 2016)
  • Are all "research fields" equal? Rethinking practice for the use of data from crowdsourcing market places (BRM, 2016)
  • Beyond the Turk: An Empirical Comparison of Alternative Platforms for Crowdsourcing Online Behavioral Research (preprint, 2016)
  • Amazon Mechanical Turk in Organizational Psychology: An Evaluation and Practical Recommendations (JBP, 2016)
  • Crowdsourcing Consumer Research (JCR, 2017)
  • Lie for a Dime: When Most Prescreening Responses Are Honest but Most Study Participants Are Impostors (SPPS, 2017)
  • Crowdsourcing Samples in Cognitive Science (Trends in Cognitive Sciences, 2017)
  • MTurk Character Misrepresentation: Assessment and Solutions (JCR, 2017)
  • Validity and Mechanical Turk: An assessment of exclusion methods and interactive experiments (Computers in Human Behavior, 2017)
  • Conducting interactive experiments online (Experimental Economics, 2018)
  • Turkers and Canadian students did not differ in ability to label clip art and photographic images (BRM, 2018)
  • Common Concerns with MTurk as a Participant Pool: Evidence and Solutions (preprint)
  • How to Maintain Data Quality When You Can't See Your Participants (Observer, 2019)
  • Tapped Out or Barely Tapped? Recommendations for How to Harness the Vast and Largely Unused Potential of the Mechanical Turk Participant Pool (preprint)
  • An MTurk Crisis? Shifts in Data Quality and the Impact on Study Results (SPPS, 2019)
  • Berinsky, A. J., Margolis, M. F., & Sances, M. W. (2014). Separating the shirkers from the workers? Making sure respondents pay attention on self-administered surveys. American Journal of Political Science, 58(3), 739-753.
  • Anson, I. G. (2018). Taking the time? Explaining effortful participation among low-cost online survey participants. Research & Politics, 5(3), 2053168018785483.
  • Hauser, D. J., & Schwarz, N. (2016). Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants. Behavior Research Methods, 48(1), 400-407.
  • Snowberg, E., & Yariv, L. (2018). Testing the waters: Behavior across participant pools (No. w24781). National Bureau of Economic Research.
  • Gupta, N., Rigotti, L., & Wilson, A. (2021). The Experimenters' Dilemma: Inferential Preferences over Populations. arXiv preprint arXiv:2107.05064.
  • Eyal, P., David, R., Andrew, M., Zak, E., & Ekaterina, D. (2021). Data quality of platforms and panels for online behavioral research. Behavior Research Methods, 1-20.

Before we begin, I think this article is a MUST read for anyone thinking of using MTurk for academic research: The Internet's hidden science factory

From the article, I strongly recommend you watch the following video of the life of one MTurker:

Lessons learned (some of these are rather old; I would strongly advise you to revisit these):

  1. You need to verify that participants read and understand your survey, and that they don't randomly click their answers. For that I do the following:
    1. After each scenario, I run a quiz to test their understanding.
    2. Obviously, every part includes a check. A manipulation should always be tested, better with more than a single manipulation check.
    3. Add a timer for each page and include a check in your stat syntax to test whether they answered too fast (see the sketch after this list).
    4. Include a funneling section: ask them what the survey was about and set a minimum answer length in characters. Go over the answers to see who puts in noise. Of course, if you included a manipulation, also test for suspicion and ask them what they thought the purpose was or whether they can see any connection between the manipulation and your tested DV.
  2. It goes without saying that you should test your survey before setting it off into the wild. But a very important point is to set up email triggers and see that the answers you get are what they should be. It has happened a few times that I discovered something wrong within the first ten participants, so I stopped the batch, corrected the mistake, and restarted everything.
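To make the timing and quiz checks above concrete, here is a minimal sketch in Python/pandas. The file name and column names ("page1_sec"-style timer columns, "quiz_score", "ResponseId") are hypothetical placeholders for whatever your own survey export contains:

    # Flag respondents who sped through pages or failed the comprehension quiz.
    # All file and column names are assumed placeholders, not a real export format.
    import pandas as pd

    MIN_PAGE_SECONDS = 5   # tune against your own pilot timings
    MIN_QUIZ_SCORE = 2     # e.g., 2 of 3 comprehension questions correct

    df = pd.read_csv("survey_export.csv")            # hypothetical file name
    timer_cols = [c for c in df.columns if c.endswith("_sec")]

    # Too fast on any single page, or below the quiz threshold, gets a flag.
    too_fast = (df[timer_cols] < MIN_PAGE_SECONDS).any(axis=1)
    failed_quiz = df["quiz_score"] < MIN_QUIZ_SCORE

    df["flagged"] = too_fast | failed_quiz
    print(df.loc[df["flagged"], "ResponseId"])       # review these by hand

Treat flagged rows as candidates for exclusion, not automatic deletions; the funneling answers still need to be read by hand.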

[UPDATE 2013/02/05: my answer to a discussion about this]

  1. The questionnaire should show participants you're a serious researcher. Meaning:
    1. Two or three comprehension quiz questions about the scenarios that they have to get right to proceed, to make sure they understood the scenario or what they need to do in a task.
    2. Decoy questions that go in opposite directions and are randomized into the scales (ones I use often – "the color of the grass is blue", "in the same week, Tuesday comes after Monday", "rich people have less money than poor people", etc.).
    3. Randomizing question and choice sequence for each section.
    4. Adding a funneling section.
    5. Adding a timer to all questions to check how much time they spent on each page and when they clicked on things.
  2. Between-subject manipulations are better than a simple survey, since different participants see different conditions, which reduces the chances of them simply sharing answers.
  3. There's no escape from going over the answers in detail, checking the response timing, checking for duplicates, and reading the funneling section (see the sketch after this list).
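Here is a minimal sketch of the decoy and duplicate checks, under assumed column names ("WorkerId", "IPAddress", and one column per decoy item on a 1-7 agreement scale); adapt it to your own export:

    # Score decoy items against their expected answers and flag duplicates.
    # Column names and scale anchors are assumptions for illustration only.
    import pandas as pd

    df = pd.read_csv("survey_export.csv")  # hypothetical file name

    # Expected answers on a 1-7 scale (1 = strongly disagree, 7 = strongly agree).
    decoys = {"grass_is_blue": 1, "tuesday_after_monday": 7, "rich_less_money": 1}

    # Count, per respondent, how many decoys were missed by more than one point.
    df["decoys_missed"] = sum(
        (df[item] - expected).abs() > 1 for item, expected in decoys.items()
    )

    # The same worker or the same IP appearing more than once suggests a duplicate.
    df["duplicate"] = (
        df.duplicated("WorkerId", keep=False) | df.duplicated("IPAddress", keep=False)
    )

    suspects = df[(df["decoys_missed"] > 0) | df["duplicate"]]
    print(suspects[["WorkerId", "decoys_missed", "duplicate"]])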

[end of UPDATE]

For issues with running MTurkers, read:

  • Let's keep discussing MTurk sample validity
  • What's a "valid" sample? Problems with Mechanical Turk study samples, part 1
  • Fooled twice, shame on who? Issues with Mechanical Turk study samples, part 2
  • My Experience as an Amazon Mechanical Turk (MTurk) Worker (Utpal Dholakia)

For the technical details on how to set things up, read the following:

  • Experiments using Mechanical Turk. Part 1
  • Experiments using Mechanical Turk. Part 2
  • THE TECHNICAL DETAILS, TUTORIALS, WALK-THROUGHS
  • How to connect Qualtrics and MTurk, Part II
  • The right way to prevent duplicate workers – How to Block Past Workers from Doing Surveys (see the sketch after this list)
  • MTurk + Qualtrics
  • Guide to running Mturk experiments [2019]
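The usual mechanism behind the "block past workers" posts above is to grant previous participants a custom qualification and then require its absence on new HITs. A minimal sketch using boto3 (AWS's Python SDK); the qualification name and worker IDs below are placeholders:

    # Exclude past participants from a new HIT via a custom qualification.
    # The name, description, and worker IDs are placeholders, not real values.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # One-time setup: a qualification that marks past participants.
    qual = mturk.create_qualification_type(
        Name="Participated-Study-X",  # hypothetical label
        Description="Completed one of our earlier studies",
        QualificationTypeStatus="Active",
    )
    qual_id = qual["QualificationType"]["QualificationTypeId"]

    # Grant it to everyone who completed the earlier study.
    for worker_id in ["A1EXAMPLE", "A2EXAMPLE"]:  # placeholder worker IDs
        mturk.associate_qualification_with_worker(
            QualificationTypeId=qual_id,
            WorkerId=worker_id,
            IntegerValue=1,
            SendNotification=False,
        )

    # When calling create_hit for the new study, include this entry in
    # QualificationRequirements so holders of the qualification are excluded:
    requirement = {"QualificationTypeId": qual_id, "Comparator": "DoesNotExist"}

TurkPrime and Turk Check (listed under Tools below) wrap this same idea in a friendlier interface.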

There's also a very helpful blog I strongly recommend that you visit – Experimental Turk, which describes itself as "A blog on social science experiments on Amazon Mechanical Turk". It hasn't been updated for a while, but there's still some valuable info in there.

Tools:

  • If you're using MTurk for academic data collection, you absolutely must use TurkPrime (read my review)
  • Preventing MTurkers who participated in one study from participating in certain other studies – Turk Check.
  • Various tools; I especially find the "Show URL after accept" JavaScript trick useful.
  • psiTurk (see presentation here)
  • How to set up notifications for HITs
  • TaskMaster: A Tool for Determining When Subjects Are on Task (AMPPS, 2019)
  • MTurk Sample Calculator: Sample Calculator (see the sketch after this list)
  • OpenMTurk: An Open-Source Administration Tool for Designing Robust MTurk Studies (preprint)
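The arithmetic behind a sample calculator like the one above is an ordinary power analysis. A minimal sketch with statsmodels, under assumed inputs (a two-condition between-subject design, a medium effect of Cohen's d = 0.5, alpha = .05, 80% power):

    # Required n per condition for a two-sample t-test under assumed inputs.
    from statsmodels.stats.power import TTestIndPower

    n_per_condition = TTestIndPower().solve_power(
        effect_size=0.5, alpha=0.05, power=0.8, alternative="two-sided"
    )
    print(round(n_per_condition))  # ~64 per condition

Plan to collect above the computed minimum, since the attention and duplicate checks described earlier will usually trim the sample.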

Survey collection:

  • Qualtrics surveys, of course.

Multiplayer games:

  • Software Platform for Human Interaction Experiments (SoPHIE) (e.g., gossip games)
  • "Breadboard is a software platform for developing and conducting human interaction experiments on networks. It allows researchers to quickly design experiments using a flexible domain-specific language and provides researchers with immediate access to a diverse pool of online participants."
  • oTree offers integration with Amazon Mechanical Turk
  • A great article about this – Conducting interactive experiments online (Experimental Economics, 2017) from the developers of LIONESS: Live Online Experimental Server Software

Further reading:

  • Identifying Careless Responses in Survey Data (Meade & Craig, 2012, Psychological Methods) – an excellent article on careless responses with online and student samples. A worthy read. Another article is Detecting and Deterring Insufficient Effort Responding to Surveys (Huang et al., 2012, JBP)
  • Deneme – a blog of experiments on Amazon Mechanical Turk (creator of Iterative Tasks on Mechanical Turk)
  • Is Mechanical Turk the future of cognitive science research?
  • Looking for Subjects? Amazon's Mechanical Turk
  • The Pros & Cons of Amazon Mechanical Turk for Scientific Surveys
  • Experimenting on Mechanical Turk: 5 How Tos
  • Slides from ACR 2012 (good tips)
  • Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research (published at PLOS ONE, with a related blog post)
  • Mechanical Turk and Experiments in the Social Sciences
  • How naïve are MTurk workers? and the follow-up response – mTurk: Method, Not Panacea – and the follow-up post – Consequences of Worker Nonnaïveté: The Cognitive Reflection Test
  • Reputation as a sufficient condition for data quality on Amazon Mechanical Turk, Behavior Research Methods, December 2013

  • High quality MTurk data
  • Graduating from undergrads: Are MTurk workers less attentive than undergraduate students? (Poster from ManyLabs)
  • Recent studies on MTurk validity (MTurk for academics, 2016)
  • What's a fair payment on #MTurk?

Alternatives to MTurk:

  • StudyResponse
  • For Australia – Microworkers (explained in this article – Crowdsourcing participants for psychological research in Australia: A test of Microworkers)
  • Prolific Academic (& CrowdFlower, see Beyond the Turk: An Empirical Comparison of Alternative Platforms for Crowdsourcing Online Behavioral Research)
  • Call for participants
  • Find participants
  • Reddit (see the academic paper about this option)
  • Findingfive
  • Cosmos – a community science project

Got any other MTurk tips? Have you had any experience running experiments on MTurk? Do share.
