Since the early days of social media, there has been excitement about how the data traces users leave behind can be exploited to study human behaviour. Researchers who were once restricted to surveys or laboratory experiments now have access to huge amounts of “real-world” data from social media.
The research opportunities enabled by social media data are undeniable. However, researchers often analyse these data with tools that were not designed for the large, noisy observational datasets social media produces.
We explored problems that researchers might encounter due to this mismatch between data and methods.
What we found is that the methods and statistics commonly used to provide evidence for seemingly significant scientific findings can also seem to support nonsensical claims.
The motivation for our paper comes from a series of research studies that deliberately present absurd scientific results.
One brain imaging study appeared to show the neural activity of a dead salmon tasked with identifying emotions in photos. An analysis of longitudinal statistics from public health records suggested that acne, height, and headaches are contagious. And an analysis of human decision-making seemingly indicated people can accurately judge the population size of different cities by ranking them in alphabetical order.
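The common thread in these absurd findings is that testing enough hypotheses against noisy data will eventually produce “statistically significant” results by chance alone. The sketch below (our illustration, not code from any of the studies mentioned) runs permutation tests on hundreds of comparisons of pure random noise; roughly 5% come out “significant” at the conventional 0.05 threshold, even though no real effect exists anywhere.

```python
import random
import statistics

random.seed(42)

def perm_pvalue(a, b, n_perm=200):
    """Permutation p-value for the difference in means between two samples."""
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Compare two groups of pure noise, many times over.
n_tests = 300
false_positives = sum(
    perm_pvalue(
        [random.gauss(0, 1) for _ in range(30)],
        [random.gauss(0, 1) for _ in range(30)],
    ) < 0.05
    for _ in range(n_tests)
)

print(f"{false_positives} of {n_tests} pure-noise comparisons look 'significant'")
```

With enough comparisons, some spurious hits are essentially guaranteed; large social media datasets make it easy to run thousands of such comparisons without realising it.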
Read the full article at The Conversation.