This is a follow-up to my post two days ago about openness in astronomy, with some observations and comments from the corresponding workshop yesterday in Berlin, organised by the Stifterverband. The workshop was attended by about 30 people from all parts of society — researchers, library managers, civil servants from national and foreign science ministries, as well as people from industry (software, automotive, airport services, among others). In the spirit of openness the workshop adopted the “Chatham House Rule”, which essentially says that you may report freely on what was said, but only in a way that does not identify individual participants.
The workshop was organised into discussions within the respective sectors (science, administration and industry) and round-table discussions with all participants. As always in a workshop with such a diverse audience and broad aim (“to explore potentials and challenges from open research and innovation processes”), it is hard to give a one-paragraph summary of the many topics discussed, but I will try nevertheless.
An “innovation culture” needs to admit errors
Most participants agreed that openness is a mindset that helps with innovative thinking, but it was also agreed that there are limits to openness, e.g. due to privacy concerns (patient data in medical applications) or security concerns (critical infrastructure). Being able to openly discuss not just final results, but also your way there (e.g. your methods), requires some tolerance for failure. If you cannot afford to fail, you cannot be open either, as you will only talk about your project after you have achieved some major success. Openness also requires some trust or self-confidence that you will have another good idea in case the one you publish today is picked up by someone else.
The measure becomes the target
In the discussion group on openness in science, the discussion focused on the question “how can we fix science?”. Science is increasingly becoming an industrial machinery, optimised for maximum impact and citation numbers, opting for safe results (that are often boring) rather than trying out radical new experiments (that often fail). Science managers and politicians try to increase the “output” for a given level of (public) funding and need to be able to measure that output in order to report on changes. Generally the output is now taken to be the number of papers and the number of citations each paper receives. However, focusing too intensively on these narrow indicators means that many scientists now try to maximise their impact as measured by these numbers, rather than work on something bigger that does not (immediately) result in a large number of papers or citations. Think of the gravitational-wave experiments, which produced null results for decades – before receiving the Nobel Prize this year. Other metrics, such as Altmetric, which measures the impact of your research in society, may help give a wider view of the relevance of research projects.
Typically people only publish their studies if they find a result. If nothing could be measured, or the result was deemed not of interest, it is not published. A participant called the knowledge accumulated this way “dark knowledge” and cited an Austrian funding agency which estimated (by looking at allocated budgets) that this “dark knowledge” grows 2–3 times as fast as published knowledge. It was agreed that failures should also be published, but also that publishing null results is not rewarded in our current research system, or as one participant put it: “How many unsuccessful scientists do you know?”
Science as a stroke of genius or regular work?
Underlying many discussions about how to measure the success of scientists and how to evaluate scientific work (and scientists themselves!) is the question of how scientific progress is perceived. Unfortunately many people still believe science progresses when some genius has a fantastic idea. This can occasionally be the case, but usually even the genius bases his or her insight on published literature, which for the most part consists of the hard work of hard-working people, trying out new methods and slowly progressing in understanding a topic. It was felt that this work is often not properly appreciated. We concentrate too much on people who have done some fantastic new thing, rather than on the many “smaller” scientists who contributed to the success of the “genius”. This is also reflected in the current job situation, especially in Germany, where there is little room for normal working academics: you are either a (perceived) genius and can advance to become a professor, or you are continuously on short-term “postdoc” contracts without stability or job security.
There was some uncertainty as to how best to open up the scientific process to the general public. Some believed that the next big step, after simple press releases and more interactive talks / blogs / social media, is full participation of the general public via citizen-science projects. Others were more cautious and thought this is just “hype” that is only applicable to a small set of projects. Indeed, in astronomy the GalaxyZoo project mentioned in my previous post was highly successful, but it is unclear whether citizens’ help in classifying galaxies will still be needed in the future, given recent advances in machine learning.
What can science contribute to society?
Finally, the question of what science can contribute to the wider society was discussed. Here I’d like to describe two of the most widely discussed points:
- Science contributes skeptical thinking. Skepticism is one of the basic traits of a good scientist and encourages everyone not to take claims for granted, but to critically ponder whether they can be true, and to ask for references, proof and replication. In times of “fake news”, “climate deniers” and vaccination hoaxes, the importance of this trait cannot be overestimated. It was also stated that science needs to be healthy to encourage skepticism: if we only try to reach the maximum number of papers or citations, this does not necessarily help us question existing paradigms and make real progress.
- Science may also contribute tools and best practices for openness, such as the distributed version-control system git (and hosting platforms like GitHub built on it), open-access publication platforms, and other tools for openly sharing information. Note that the World Wide Web, too, originated in a research environment (CERN), created in an effort to make information accessible. Nowadays, this would perhaps also be called open science…