Mimosa and Hypergraph are both projects trying to change how researchers communicate their work. By changing the unit of publication from an entire research project to individual steps, they both aim to improve reproducibility and access, and to address other issues in research.
In this post, the founders of both projects, Lana Sinapayen (LS) and Chris Hartgerink (CHJH), talk about what makes their projects similar and what sets them apart, in order to better understand future work that can be done cooperatively.
CHJH: Mimosa is such a lovely name! How did you come up with it and how does it tie in to what your project is trying to accomplish?
LS: Thank you! Mimosa comes from a very interesting plant I got a few years ago. “Mimosa pudica” is known for closing its leaves when you touch it, but it also has a lot of undocumented behaviours, such as shivering at night, turning away very fast from bright lights, and vibrating. I thought I would really like to find a plant researcher to collaborate with, but there was nowhere for me to find such a person, or even publicly document the plant’s behavior as a non-specialist.
Ideally, Mimosa (the platform!) would allow this kind of collaboration to happen organically.
Of course, open collaboration comes with a lot of potential issues… Is that why you chose to have a different approach with Hypergraph?
CHJH: This organic kind of collaboration is something that definitely resonates with me. As a researcher, team science sounds more appealing than the relatively individualistic culture I previously experienced.
My approach with Hypergraph is different in the sense that I ultimately don’t want Hypergraph per se to succeed, but to make the underlying infrastructure succeed, the peer-to-peer commons (p2pcommons). Organic collaboration is also an important part, because when researchers share their research steps more frequently, it also creates more opportunities to connect over the work. I try to imagine the spontaneous connections that happen on Twitter, but then on the content instead of on thought snippets. Could you elaborate a bit on how Mimosa fulfills the five functions of a scholarly communication system?
| Function | The system should... |
| --- | --- |
| Registration | ...create a record of the works |
| Certification | ...have a way to judge the work’s quality |
| Awareness | ...make people aware of the work |
| Archiving | ...store the works for a long period of time |
| Rewarding | ...incentivize the production of scholarly work |
LS: It’s the first time I’ve heard of the peer-to-peer commons, interesting. Actually, the extent of my research on research sharing is limited to “why did we end up with paper leaflets bound in expensive journals?”, which is basically the history of how Elsevier came to dominate for-profit research communication, but I digress.
Mimosa allows users to publish and track the updates made to their own or other people’s work, which answers the need for Registration. All six types of contributions (Question, Hypothesis, Experimental protocol, Data, Analysis, and Comment) can be rated on predefined rating scales (for example, reproducibility, falsifiability, design): that’s for Certification. Mimosa is a public online platform and is free of charge; it includes search and notification functions (Awareness).
Archiving and Rewarding are thornier questions. I don’t envision Mimosa as storing images or datasets online, as there are better tools for that, for example Figshare. But the text and relational data stored on Mimosa can only be available as long as Mimosa exists; beyond that, if the platform dies, I have to hope that by then a properly established Open Science organisation will have taken control of the platform and funded proper backups, that clone platforms using Mimosa’s API will have been created, and that the Internet Archive’s Wayback Machine will still work. Rewarding is even trickier. External rewards tend to create perverse incentives (cf. “Publish or Perish”, or runaway capitalism), while even thankless work attracts quality contributions (Wikipedia, Open Source platforms). On the other hand, removing direct and indirect financial incentives means skewing towards very niche demographics and excluding the majority of people in the world from contributing, which is the opposite of what I am trying to do.
How does Hypergraph solve each of the 5 functions?
CHJH: Interesting that you mention reproducibility, falsifiability, and design; I wonder how good a fit those are for researchers, given the amount of debate going on about these constructs (e.g., high reproducibility of low-quality work). In some way, I like the more ephemeral nature of what you describe, because sometimes I wonder whether everything ought to be archived.
For the ideas behind Hypergraph, we aim to promote registration by embedding the chronological nature of research into how we communicate, which immediately helps certify by order of events and by the subjective assessment of those events (articles nowadays do only the latter). Making people aware primarily happens through network effects of connecting with other researchers, instead of following journals (e.g., comparable to Google Scholar). For archival, we build on the LOCKSS system, by making copies easier and more cost-effective than before. At the moment, you could get our software up and running in five minutes for $5/month and have copies of everything on the p2pcommons. For the incentive system, we have a whole plan prepared, but many steps remain to be taken. At a high level, I’d say the current incentive system has become too monolithic by bean-counting publications, citations, and so on. Rewards depend on what you’re trying to evaluate; hence, we want to move towards a question-driven evaluation system, making sure that the incentives are based on what people value. That will require a substantial amount of research down the line to figure out what that actually means. The groundwork for this is actually a big part of the original proposal we made back in 2018.
What would you like people interested in Mimosa to do after reading our short conversation? I am excited to learn more about Mimosa as it develops, so I’m also asking for myself ☺️
LS: I agree that there is no single measure that, alone, can tell us whether something is good or bad science; and any kind of measure tends to create perverse incentives. Funnily enough, that is a big topic in ALife (my field of research), for example when simulating evolution in virtual environments. Simulated agents tend to randomly find and exploit any weakness in the system, finding outlandish, physics-defying solutions to simple tasks. There are several approaches to deal with that: you can add new measures until the unwanted behavior is contained, which is what we tend to do in real life too. Or you can change the environment to make it foolproof, so that any solution that is found is valid by default; this is similar to our environment here on Earth, where there is no wrong way to evolve.
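The dynamic Lana describes can be illustrated with a toy sketch (not code from either project; all names here are hypothetical): a hill-climber maximizing a gameable proxy measure finds "physics-defying" genomes with huge values, while adding a second, penalizing measure contains the exploit.

```python
import random

random.seed(0)

def proxy_fitness(genome):
    # Gameable measure: nothing stops arbitrarily large values,
    # so the optimizer inflates them without bound.
    return sum(genome)

def patched_fitness(genome):
    # Added measure: heavily penalize values beyond a plausible limit (1.0),
    # analogous to adding metrics until unwanted behavior is contained.
    penalty = sum(max(0.0, abs(g) - 1.0) * 10 for g in genome)
    return sum(genome) - penalty

def hill_climb(fitness, steps=5000):
    # Simple evolutionary search: mutate one gene, keep the change if it helps.
    genome = [0.0] * 5
    for _ in range(steps):
        candidate = list(genome)
        i = random.randrange(len(candidate))
        candidate[i] += random.gauss(0, 0.5)
        if fitness(candidate) > fitness(genome):
            genome = candidate
    return genome

exploit = hill_climb(proxy_fitness)      # genes grow far beyond any "physical" limit
contained = hill_climb(patched_fitness)  # genes settle near the 1.0 limit
```

The exploit only exists because the measure, not the environment, defines success; the second approach Lana mentions (a foolproof environment) would make `proxy_fitness` impossible to game in the first place.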
I do think everything in science should be archived. I also think that measures are useful. But I don’t think particular values of measures should be enforced for this or that purpose: there will always be valid exceptions. In the end, what matters is to make it clear to yourself and others why the value is what it is, and whether that weakens your argument or not. Just like discussing results is mandatory in experimental papers.
In terms of measures, I like these two initiatives: Impact Story, which tries to put value on all the work scientists do that is not “number of papers published”; and the Automated Screening Working Group’s work, which scans papers looking for indications that a paper has open data, a section acknowledging limitations, etc.
After reading this post, I would like people to tell me what they find most cumbersome in the process of finding a research team, doing and then sharing their science, and whether they think there could be a function in Mimosa (intro here) to help alleviate the issue. I’m very much still in the phase where anything is possible design-wise. And I’m looking for funding, so if you are interested in funding the project, please contact me: lana.sinapayen at gmail.com.
Thank you for inviting me to chat, Chris!
CHJH: Thanks Lana! It is fantastic to see projects come up with such interesting backstories and ideas. I encourage all of Liberate Science’s community to take a look!