Below are two cases that represent current challenges the scientific community faces in relation to open and reproducible science. Reproducibility is understood here as the ability of a study or project to be independently recreated using the same data and code as the original team, a sine qua non of the scientific method.
Case 1
First, imagine you’re at a university cafeteria, and someone brings up whether data and code should be made available when publishing a paper. The conversation might go something like this:
—One researcher advocating strongly for open science,
—a second taking a more skeptical view of full openness,
—and a third playing the middle ground, promoting a more balanced approach.
In favor: [So, I heard you’re about to publish a paper related to the idea that another science is possible. Are you planning to make the data and code public?
In contrast: Well, I’d rather not. I think it’s risky. I’ve put several years into this work. Why just hand it over for anyone to reuse or, even worse, misinterpret?
In favor: Risky? I’d say it’s responsible. Sharing data and code helps others reproduce the results and strengthens your credibility. That’s science at its best.
In contrast: Well, maybe in theory. But in practice? The code probably has some messy parts, and the data might not be ready for publishing. Also, what if someone publishes a follow-up before I get the chance to?
In between: Both of you have a point. Openness is valuable, but the concerns aren’t unreasonable either. We need to strike a balance that takes both of your arguments into account.
In favor: But the benefits outweigh the risks. Openness boosts visibility and can lead to more citations, collaborations, even funding. Plus, many journals and funders now expect it.
In contrast: But that doesn’t mean it’s always wise. Some datasets can be sensitive. And what about privacy regulations? Or institutional policies? You can’t just upload everything to GitHub and then walk away.
In between: Exactly. But you can anonymize data, share the minimum data needed to reproduce the analysis, share only processed versions, or use a repository with access controls. It’s not all or nothing.
In favor: Sure, if necessary. But too often, “in between” becomes an excuse for opacity. Science needs transparency to move forward.
In contrast: But science also needs trust in the process. And I think that just dumping raw code and data into the wild doesn’t build that.
In between: That’s why it’s about preparation. Document the code well, explain your methods, choose a proper license, and make your data and code citable by adding DOIs. Make it useful, not just visible.
In favor: Fair enough. But the default should be open, unless there’s a good reason not to.
In contrast: Well, I don’t know, I’d argue the default should be closed, unless there’s a good reason to open it.]
What would you have said?
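The middle-ground suggestions above, such as anonymizing data before sharing only what is needed to reproduce an analysis, can be done in many ways. Here is one minimal sketch in Python, using a salted hash to pseudonymize an identifier column; the field names and values are invented for illustration, not taken from any real study:

```python
import hashlib

def pseudonymize(rows, id_field, salt):
    """Replace a direct identifier with a salted hash so records stay
    linkable across shared files without exposing the original IDs."""
    out = []
    for row in rows:
        row = dict(row)  # copy, so the private originals stay untouched
        digest = hashlib.sha256((salt + row[id_field]).encode()).hexdigest()
        row[id_field] = digest[:12]  # short, stable pseudonym
        out.append(row)
    return out

# Hypothetical field observations of one tagged individual
raw = [
    {"ring_id": "A-1047", "species": "Parus major", "mass_g": 17.8},
    {"ring_id": "A-1047", "species": "Parus major", "mass_g": 18.1},
]
shared = pseudonymize(raw, "ring_id", salt="keep-this-salt-private")
```

Because the same salt is used throughout, repeated records of one individual keep matching pseudonyms in the shared data, so the analysis remains reproducible, while the salt itself stays with the original team.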
Case 2
Now, let’s move to a second situation. Imagine you’re a researcher sitting alone in your office, staring at the screen with the “Submit to Repository” button glowing, ready to submit your scripts so that anyone can reproduce your analysis. In this case, just like in the animated movie Inside Out, where different characters represent the emotions of one person, three inner voices speak up:
Fear — worried about consequences and vulnerability.
Responsibility — committed to scientific integrity and transparency.
and Awareness — mindful of context and practical concerns.
[Fear: I don’t know what to do, what if someone finds a bug in my code or data? Or what if the journal retracts the paper? This could ruin my career.
Responsibility: Or… what if someone finds a bug, helps you fix it, and the science gets better? Isn’t that the point of openness — improving knowledge, not punishing honesty?
Fear: But that’s not always how it works! Journals retract papers… People talk about it… Things go viral on social media. You try to do the right thing and then get punished for it!
Awareness: You’re right… That’s a real concern. Science does have a harsh side. But also… isn’t hiding a flaw worse if it gets discovered later?
Fear: This isn’t even fair. My supervisor has never shared data! Why is he asking me to do it now?
Responsibility: Maybe because it’s time. Maybe because norms are changing. Maybe you are part of that change. Isn’t that worth something?
Awareness: Still, it’s okay to feel conflicted. You’re not a machine. You’ve worked hard. You want recognition, not risk.
Fear: Exactly. I didn’t sign up to be a sacrificial example for open science.
Responsibility: But this isn’t sacrifice — it’s stewardship. If there’s a bug, better to find it now. It’s not about shame. It’s about doing better.
Awareness: I think that’s why it’s super important to have code review sessions within groups and reproducible workflows to minimise bugs. It’s also really helpful to document as much as you can, write clear README files, and even preface your code with a note saying, “This was used in a peer-reviewed study, but may contain errors — please reach out if you find issues.”
Fear: But what if that’s not enough?
Responsibility: Then you’ll learn, fix it, and grow. You’ll be part of a culture that values correction, not perfection.
Awareness: And maybe, just maybe, you’ll inspire others to be brave, too.]
How does it end?
The cursor hovers.
A deep breath.
Click: “Submit to Repository.”
Would you have done the same?
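The cautionary note Awareness proposes can even be generated rather than typed by hand, so that every shared script carries the same caveat plus a record of when and where it was last run. A minimal sketch follows; the helper name and the placeholder DOI are illustrative, not from any real study:

```python
import datetime
import platform

def provenance_note(study_doi=None):
    """Build a short header to prepend to shared analysis scripts:
    the honesty caveat plus basic run-environment details."""
    lines = [
        "# This code was used in a peer-reviewed study, but may contain",
        "# errors -- please reach out if you find issues.",
        f"# Last run: {datetime.date.today().isoformat()}",
        f"# Python {platform.python_version()} on {platform.system()}",
    ]
    if study_doi:
        lines.append(f"# Study DOI: {study_doi}")
    return "\n".join(lines)

print(provenance_note(study_doi="10.xxxx/placeholder"))
```

Writing the header programmatically keeps the disclaimer consistent across a project and quietly documents the software environment, which is often the first thing a would-be reproducer needs to know.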
So, let’s have those conversations, both with others and within ourselves, talk openly about our different positions, and keep moving toward a better science in ecology.
Authors:
Verónica Cruz-Alonso, Elena Quintero, Guillermo Fandos and Julen Astigarraga
Ecoinformatics Working Group from the Spanish Association of Terrestrial Ecology (AEET)
