Wikiversity:Fellow-Programm Freies Wissen/Einreichungen/Giving the Results of Crowdsourced Research Back to the Crowd
Title

Giving the Results of Crowdsourced Research Back to the Crowd. A Proposal to Make Data from 'The Crowdsourced Replication Initiative' Reliable, Transparent and Interactive.

Project description

The larger research question: What impact does immigration have on societies that experience mass influxes of immigrants? Debates over populist parties, economic growth, terrorism, solutions to aging, xenophobia, solidarity, gender and social movements carry a sea of unclear arguments pointing in all directions. Comparative studies of this question are similarly divergent in their results. Replications are rare, and original studies lack transparency. Most studies use different methods, and without open research practices their research designs are difficult, if not impossible, to compare. To address both the immigration question and these reproducibility issues, two colleagues and I developed the Crowdsourced Replication Initiative (CRI) (Breznau, Rinke, and Wuttke 2019). The CRI is an ongoing project harnessing the power of 180 researchers from different countries and social science disciplines, organized into 88 research teams working on one large-scale collaborative research project. Thus far, each team has conducted an independent replication of an internationally acclaimed study on the immigration question.

The proposed project of this Fellowship: The goal of the CRI is to systematically compare the research designs and results of these 88 teams. In this process a question arises: how can these results be turned into a public good that motivates and benefits future science? Sharing the data and code is important, but of limited value on its own, because there is too much of it. Currently, no single research design can be understood, much less compared with the others, without reading and understanding hundreds of pages of software code. Each team investigated data from up to 10 different survey questions, further complicating the process.
Extensive qualitative coding of the teams' software code is necessary first to make these designs comparable. This coding requires special skills in diverse statistical software and in multilevel modeling. Next, the development of an interactive online tool that visualizes this rich data in response to user queries would make the results understandable, reproducible, and valuable for future research and teaching. Through this tool, users could download the research designs and data of their choosing, and they could systematically investigate the research itself to determine what diverse practices a sample of 88 research teams uses. To achieve these goals, I need support both in the qualitative coding and in the development of this open science interactive tool.

Interim Report for the Project ('Zwischenbericht')

Output
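The core operation of the proposed interactive tool, letting a user filter the teams' coded research designs by criteria of their choosing, could be sketched roughly as below. This is a minimal illustration only: the column names, coding categories, and example values are hypothetical assumptions for the sketch, not the CRI's actual coding scheme or data.

```python
# Hypothetical sketch of the query logic behind the proposed interactive
# tool: filtering the coded design features of the research teams by
# user-chosen criteria. All columns and values here are illustrative.
import pandas as pd

# Toy stand-in for the qualitatively coded design features of each team.
designs = pd.DataFrame([
    {"team": 1, "software": "Stata", "model": "multilevel", "dv_items": 6},
    {"team": 2, "software": "R", "model": "multilevel", "dv_items": 10},
    {"team": 3, "software": "R", "model": "OLS", "dv_items": 4},
])

def query_designs(df, software=None, model=None, min_items=0):
    """Return the subset of team designs matching the user's query."""
    mask = df["dv_items"] >= min_items
    if software is not None:
        mask &= df["software"] == software
    if model is not None:
        mask &= df["model"] == model
    return df[mask]

# Example query: all R-based multilevel designs.
subset = query_designs(designs, software="R", model="multilevel")
print(subset["team"].tolist())  # → [2]
```

A matching subset like this could then be offered for download together with the corresponding replication code and data, which is the "give back to the crowd" step the proposal describes.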
Author