ChannelLife UK - Industry insider news for technology resellers
Prolific aims to establish industry standards for human data use in AI
Tue, 9th Apr 2024

Prolific, the online survey platform, has launched a new Participant Wellbeing report, designed to set industry standards for the ethical use of human data in AI fine-tuning and academic research. The report aims to highlight the importance of 'AI tasker' wellbeing in the face of controversies such as prisoners being paid only $1.67 an hour to train AI and workers being exposed to harmful content.

The report uses the Short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWBS) to measure the mental health of adults in the UK, evaluating positive aspects such as feeling good and functioning well, as well as negative emotional responses. The results of the inaugural Prolific data collection demonstrated little detriment to participant wellbeing from participation in research studies or AI training on its platform.

Prolific is demonstrating its commitment to 'AI taskers' by pledging to conduct regular data collection on wellbeing and requiring opt-in consent for surveys containing sensitive material. Prolific has also partnered with the Partnership on AI, a body devoted to encouraging the ethical development and application of AI for societal benefit.

The report touches on the EU AI Act, recently passed by the European Parliament, and the forthcoming introduction of AI regulations globally. There is a worldwide call for AI models to be accountable, safe and accurate, and Prolific is calling for these new regulations to include ethical guidelines for the use of AI taskers.

The report discusses the chilling revelation that many AI models have been trained using low-wage labour from developing countries, free labour from prison populations, or AI taskers required to annotate overtly violent, sexist, and racist content. In such scenarios, there is often little to no consideration for the wellbeing of workers or for ethical labour standards.

Prolific CEO Phelim Bradley says the company's greatest resource is its participant pool. "High-quality human data starts with well looked after participants, which is why Prolific prioritises participant wellbeing," says Bradley. He further asserts that treating participants with respect, fairness, and transparency throughout their journey with Prolific engenders trust and loyalty, as does connecting them with meaningful, important and fairly paid work.

Aside from safeguarding participant wellbeing, Prolific has made strides in pushing for the establishment of ethical standards for AI taskers across the industry. The company's recent activities include aiding the Meaning Alignment Institute in generating wiser AI responses to moral questions and constructing a Reinforcement Learning from Human Feedback (RLHF) dataset for social reasoning.

Since its establishment in 2014, Prolific has gathered over 100 million responses from more than 150,000 participants for over 100,000 researchers across 200 countries. In 2023, more than 30,000 researchers and 10,000 organisations were active on the Prolific platform, with a new study launched every three minutes.