Peer Production

These guidelines are a summary and quick reference of Chapter 2 of my thesis (currently under review; a link will be posted soon). For more details, feel free to check out the thesis.

Peer Production in Citizen Science: What? Why? How?

Peer production, short for "commons-based peer production" (CBPP), designates a mode of production in which online communities self-organize their work (Benkler, 2002). The resources are created by the community, mostly by volunteers who decide for themselves what they want to work on and who are not paid for it. While this may not sound like a sustainable production model, successful projects prove otherwise: the best-known examples are Wikipedia and open source software such as the Linux operating system.

In my thesis I aimed to explore whether and how peer production can also be used in citizen science, to support volunteer engagement in more phases of the research cycle. To do so, I first created a theory-based working model of peer production and used it to analyze three citizen science case studies. The results are summarized on this page.

Working model of peer production

This model summarizes seven characteristics of peer production platforms for citizen science contexts. It describes online platform features that enable the work of self-organizing communities.

Note: Apart from platform features that support self-organized production, other factors are also important in commons-based peer production. For a work that goes deeper into aspects of governance, licensing, and impact, see Fuster Morell et al. (2021).

Figure 1. Characteristics of peer production platforms, with examples from Wikipedia.

The characteristics are further explained in the guidelines and illustrated in the case studies below.

Case Studies

The following case studies apply the model to active online citizen science projects, to see how peer production elements are used today:

Guidelines

The guidelines below walk you through each peer production characteristic. They can be used both for analyzing existing citizen science projects and for designing new ones. For each characteristic, there are guiding questions, a heuristic (or "rule of thumb"), and a short overview of motivations and trade-offs to consider.

Note: These guidelines are based on the three case studies referenced above. They might not generalize to other contexts. If you use the guidelines for your own projects, I would love to hear about your experiences.

Peer Production in Citizen Science Design Guide
Common research object
Guiding questions
  • What object(s) are collaboratively produced? How do they relate to the goal of the project?
  • Are the objects published with an open license? Why?
CBPP heuristic guideline
  • Common research objects are published under an open license, so they can be reused and adapted by the community
Motivations and trade-offs
  • Open common research objects can be useful if reuse and remixing are essential to the process, e.g. when users create their own research projects by reusing the methods of others, or if a goal is to help users familiarize themselves with a domain through free exploration, reuse, and contribution to a database
  • Reasons against an open common research object might include the project organizers’ need for exclusive proprietary rights to use the data
Range of tasks
Guiding questions
  • What tasks can contributors engage in? 
  • How do these tasks relate to different steps in the research process (Ideation/design, data collection, data processing, analysis, interpretation, action)?
CBPP heuristic guideline
  • The range of tasks allows contributors to engage in several or all steps of the research process
Motivations and trade-offs
  • Covering several or all steps of the research process can attract contributors with a wide range of motivations and skills, and facilitate bottom-up research projects
  • On the other hand, crowdsourcing contributions to neatly specified tasks leaves full control in the hands of project organizers and can help projects scale up significantly
Granularity and modularity
Guiding questions
  • Are the research objects and tasks split into modules of varying size and complexity? 
  • How are individual modules integrated into the whole system?
CBPP heuristic guideline
  • Modules of different sizes are available, from simple routine tasks to complex creative tasks
Motivations and trade-offs
  • Modules of various sizes can cater to different motivations, skills, and time capacities, and can cover simple rule-based as well as creative tasks
  • Rule-based microtasks allow for easy large-scale aggregation and brief, occasional engagement, while integrating larger, less standardized modules might require more manual work
Equipotential self-selection
Guiding questions
  • Are any formal credentials (e.g. a university degree) required to participate in certain tasks? Why?
  • Can contributors self-select tasks? How can they discover or determine these tasks?
CBPP heuristic guideline
  • No formal entry credentials determine a priori whether a contributor is capable of doing a task
  • Contributors can self-select tasks based on their own skills and motivations
Motivations and trade-offs
  • Self-selection can be crucial if specific expertise is needed for certain tasks
  • It moves the focus away from formal credentials and towards actual skills
  • If contributors can choose freely which data points to contribute to a database, there is a risk of introducing sampling biases; on the other hand, unexpected but valuable data points might be added this way
Quality control
Guiding questions
  • What quality control mechanisms are in place? 
  • What roles does the community play in these?
CBPP heuristic guideline
  • Communal validation processes for quality control are in place
  • Community members can take on maintenance roles, e.g. to protect the system from vandalism
Motivations and trade-offs
  • “Given enough eyeballs, all bugs are shallow” (Linus’ law); see the sketch after this list for a minimal illustration of communal validation
  • Contributors can learn from the feedback and answers of others
  • If answers/data annotations from other contributors are visible, this might compromise the independence of individual contributions
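
As a rough illustration of communal validation, here is a minimal sketch that aggregates independent annotations of the same data point by majority vote. The function, thresholds, and labels are hypothetical and not taken from any of the case-study platforms.

```python
from collections import Counter

def communal_validation(annotations, min_votes=3):
    """Aggregate independent annotations of one data point by majority vote.

    `annotations` is a list of labels submitted by different contributors.
    The minimum of 3 votes and the two-thirds agreement cut-off are
    arbitrary, illustrative choices.
    """
    if len(annotations) < min_votes:
        return None, "needs more annotations"    # keep the task open for others
    label, votes = Counter(annotations).most_common(1)[0]
    if votes / len(annotations) >= 2 / 3:
        return label, "accepted"                 # community agrees on this label
    return None, "flagged for community review"  # disagreement -> escalate to maintainers

# Example: three volunteers classify the same wildlife photo
print(communal_validation(["fox", "fox", "dog"]))  # ('fox', 'accepted')
print(communal_validation(["fox", "dog"]))         # (None, 'needs more annotations')
```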
Learning trajectories
Guiding questions
  • How do newcomers learn to contribute to (basic) tasks? 
  • Are there ways for contributors to learn to do more advanced tasks and to take more advanced roles?
CBPP heuristic guideline
  • The range of tasks, learning materials, and the visible traces of others’ work allow newcomers to learn to contribute to easy tasks, integrate into the community, and gradually take on more advanced roles
Motivations and trade-offs
  • Community members can be empowered by learning new skills, and have the potential to gradually contribute more complex and creative work
Direct and indirect coordination
Guiding questions
  • How can contributors discover open work? 
  • Is the current status of the research objects visible, so contributors can discover open work by themselves? Are there other interface features that allow signaling of open work?
  • Are there direct, text-based communication features to give feedback and communicate beyond stigmergic collaboration?
CBPP heuristic guideline
  • Direct communication features like forums are available, as well as indirect coordination features that signal open tasks, so they can be discovered and self-assigned by community members
Motivations and trade-offs
  • Stigmergy, i.e. coordination through the traces that contributors leave in the shared work itself, can replace direct communication in guiding contributors to open work (see the sketch after this list)
  • Direct communication features help to gather further feedback and to get in touch beyond the knowledge-production workflows on the platform
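
To make indirect coordination more concrete, here is a minimal sketch in which the current state of the shared research objects itself signals open work, so contributors can discover and self-assign tasks. The record structure and field names are hypothetical, not taken from a specific platform.

```python
# Hypothetical observation records; `None` marks work that is still open.
REQUIRED_FIELDS = ["species", "location", "date", "photo_review"]

observations = [
    {"id": 1, "species": "red fox", "location": "park", "date": "2021-05-02", "photo_review": None},
    {"id": 2, "species": None, "location": "river bank", "date": "2021-05-03", "photo_review": None},
]

def open_tasks(records):
    """List open work by checking each record for missing pieces (stigmergic signaling)."""
    tasks = []
    for record in records:
        for field in REQUIRED_FIELDS:
            if record.get(field) is None:
                tasks.append(f"Observation {record['id']}: '{field}' still needed")
    return tasks

for task in open_tasks(observations):
    print(task)  # e.g. "Observation 1: 'photo_review' still needed"
```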

Further reading

After the case studies, I ran a design thinking process with the personal science community to develop a peer production approach for them. Read more here:

References

  • Benkler, Y. (2002). Coase’s penguin, or, Linux and “The Nature of the Firm.” Yale Law Journal, 369–446. https://doi.org/10.2307/1562247
  • Benkler, Y. (2016). Peer production and cooperation. Handbook on the Economics of the Internet, 91–119.
  • Kostakis, V., & Bauwens, M. (2020). Grammar of peer production. The Handbook of Peer Production, 19–32. https://doi.org/10.1002/9781119537151.ch2
  • Fuster Morell, M., Cigarini, A., & Senabre Hidalgo, E. (2021, May 26). A framework for assessing the commons qualities of citizen science: Comparative analysis of five digital platforms. https://doi.org/10.31235/osf.io/pv78g