## Abstract

The advent of the Open Science paradigm has led to new interdependencies between the funding of research and the practice of Open Science. On the one hand, traditional revenue models in science publishing are being questioned by Open Science methods, and new revenue models in and around Open Science need to be established. This only works if researchers make large parts of their data and results available under Open Access principles. If research funding wants to have an impact within this new paradigm, it must require scientists and scientific projects to make more than just text publications available according to Open Access principles. On the other hand, it is still to be discussed how research funding itself could be more open. Is it possible to generate a new understanding of financing science shaped by transparency, interaction, participation, and stakeholder governance—in other words, to reach the next level as Research Funding 2.0? This article focuses on both aspects: firstly, how research funding promotes Open Science; secondly, what an innovative and open research funding might look like.

## Research Funding Policies: Pushing forward Open Science

In the past decades, with new technology allowing interactive communication between content producers, content consumers, and intermediaries such as the publishing industry, there has been tremendous pressure on journalists, politicians, and entrepreneurs to become more accessible. Web 2.0, a buzzword comprising phenomena such as blogs and social networks, is shaping not just communication, but also decision-making in communities ranging from parliamentarians to CEOs, and from TV staff to newspaper editors.

The scientific community has not felt the same amount of pressure. Nevertheless, a few scientists have taken the lead: science blogs are increasingly popular, and scientists tweet and interact with a larger audience. Universities have opened pages on Facebook and other social networks, or have created internal networks for communication. We have even witnessed the rise of social networks that exclusively address scientists (see chapter 07, Nentwich et al: Academia Goes Facebook? The Potential of Social Network Sites in the Scholarly Realm). A silent revolution is underway, with more and more students entering the ivory tower of academia with communication expectations based on real-time exchange across borders, cultures, and scientific clubs. The ivory tower has opened up some windows, which are now being showcased as Science 2.0.

Meanwhile, a second silent revolution has been taking place. In 2002, the Soros-funded Budapest Open Access Initiative called for more publications following its principles. Ten years later, Open Access has spread to a plethora of scientific communities, publishers, and journals (Björk et al. 2010; Laakso et al. 2011; Laakso & Björk 2012). The more common Open Access has become, the more people have started to think about what other sorts of scientific information might be made openly available. As is the case with Open Access, for many initiatives striving for Openness in the context of scientific information it is quite unclear what kind of Openness they are claiming or how they define Openness. Some Open Access advocates are satisfied if scientific publications are available online and free of charge (so-called Gratis Open Access), while others want these publications to be made available according to the principles of the Open Definition, which applies the Open Source paradigm to any sort of knowledge or information, including the rights to access, reuse, and redistribute it.

Considering the Open Science concepts as an umbrella, there are several initiatives (see Herb 2012) that want to open up nearly every component or single item within research and scientific workflows, e.g.:

• Open Review, which includes both the review of funding proposals and of articles submitted for publication, the latter traditionally conducted as peer review. Open Review does not so much aim for Openness according to the Open Definition or the Open Source principles; rather, it is meant to make review processes more transparent, impeding cliquishness between submitting scientists and reviewers who are colleagues (Pöschl 2004).
• Open Metrics as a tool for establishing metrics of the scientific relevance of publications and data that are independent from proprietary databases such as the Web of Science or the SCOPUS database, which not only charge fees but also disallow unrestricted access to their raw data.

As we can see, some of these initiatives focus on free or open access to science related information (in the shape of scientific data, scientific publications, or bibliographic data), while others promote a more transparent handling of the assessment processes of scientific ideas and concepts (such as Open Review and Open Metrics).

Many prominent funding agencies have already adopted policies that embrace single elements of Open Science. Among others, the National Institutes of Health NIH1, the Wellcome Trust2, the European Research Council3, and the upcoming European Commission framework programme Horizon 20204 require funded projects to make project-related research data and publications freely available.

The unfolding science of the future will be characterized not only by seamless and easy access, but also by interaction, networked and integrated research information workflows and production cycles, openness, and transparency. Science (at least in the Western hemisphere) was an open process in antiquity, debated in the agoras at the centre of Greek cities. Even in the Roman Empire, the sharing of ideas across the Mediterranean Sea had a profound impact on civilisation—the French, Swedish, English, Italian, and German languages attest to the common linguistic principles that developed in this era. Only with the ideological dominance of the Catholic doctrines following the collapse of the Roman Empire did science retreat to monasteries, and scholarly debate to universities and peer-reviewed science communities. The Enlightenment ensured that the educated citizenry became involved in science, but only the Internet has opened the possibility of a complete citizen science, not unlike the open debate the Greek science community would have known.

Even though the online media and the initiatives mentioned above have brought Openness back to scientific communication, one might ask what research funding compliant with the paradigms of Openness and Science 2.0 might look like. As we have seen, many funding agencies require fundees to make their project-related research results (as data or text publications) more or less open, or at least freely available; until now, however, the processes of research funding themselves have hardly been considered relevant to Open Science scenarios and appear closed, hidden, and opaque (in contradiction to any idea of Openness).

## Research Funding at Present: Limitations and Closed Discourses

Usually, applications for research funding are reviewed and assessed in closed procedures similar to the review and assessment of articles submitted for publication in scientific journals. This entails that the reviewers are unknown to the applicant, while the applicant, on the other hand, is known to the reviewers (so-called single-blind review). Horrobin describes the process of evaluating a funding application as follows:

“A grant application is submitted. The administrators send it to reviewers (usually two) who are specialists in the field and therefore competitors of the applicant. A committee (usually between ten and 30 members) assesses the application and the reviewers’ reports, perhaps with a commentary from the administration.” (Horrobin 1996, p. 1293).

Not only the sequence of events involved in the funding process, but also the results achieved through the funding as well as the publication of the results show similarities: in both contexts, the so-called Matthew Effect (Merton 1968) is evident. This effect describes the fact that authors with an already high citation count are more likely to keep receiving a high number of citations in the future, and that, in the same vein, institutions already attracting vast amounts of funding can expect to pull in more funds than other institutions (see chapter 18, Fries: The Social Factor in Open Science). A recent study by the Sunlight Foundation reveals this effect, for example, in the funding patterns of the National Science Foundation NSF: “Twenty percent of top research universities got 61.6% of the NSF funding going to top research universities between 2008 and 2011.” (Drutman 2012).

Even the handing-over of the final funding decision from the peers to so-called selection committees, whose verdicts are supposed to incorporate the judgments of the peers, has led to similar results (van den Besselaar 2012). Furthermore, peer decisions on research funding do not appear to be objective: Cole, Cole & Simon presented reviewers with a series of accepted as well as declined funding applications and examined the consistency of the (second) judgment. The result: no significant connection between the first and the second decision on the eligibility of a funding proposal could be established. The results indicate “that getting a research grant depends to a significant extent on chance. The degree of disagreement within the population of eligible reviewers is such that whether or not a proposal is funded depends in a large proportion of cases upon which reviewers happen to be selected for it” (Cole et al. 1981, p. 881). A study by Mayo et al. reaches similar conclusions: it “found that there is a considerable amount of chance associated with funding decisions under the traditional method of assigning the grant to two main reviewers” (Mayo et al. 2006, p. 842). Horrobin even diagnosed, in 2001, “an alarming lack of correlation between reviewers’ recommendations” (Horrobin 2001, p. 51).

Although the review process for publications as well as for funding proposals is similar, the consequences of distortions in the reviewing of funding applications are far more dramatic. Whereas even a mediocre article, after a series of failed submissions, can hope to be eventually accepted by some lesser journal, an application for research funding is stymied from the beginning by the paucity of funding organizations: “There might often be only two or three realistic sources of funding for a project, and the networks of reviewers for these sources are often interacting and interlocking. Failure to pass the peer-review process might well mean that a project is never funded.” (Horrobin 2001, p. 51).

Horrobin suggests that the review process for research funding is inherently conservative, as evidenced by the preference for established methods, theories, and research models, and that reviewers are furthermore “broadly supportive of the existing organization of scientific enterprise”. He summarizes: “it would not be surprising if the likelihood of support for truly innovative research was considerably less than that provided by chance.” (2001, p. 51). Consequently, the funding bodies fund “research that is fundamentally pedestrian, fashionable, uniform, and second-league—the sort of research which will not stand the test of time but creates an illusion that we are spending money wisely. The system eliminates the best as well as the worst and fails to deliver what those providing the funds expect.” (Horrobin 1996, p. 1293). The preference for mainstream research is thus an impediment to innovation: “The projects funded will not be risky, brilliant, and highly innovative since such applications would inevitably arouse broad opposition from the administrators, the reviewers, or some committee members.” (Horrobin 1996, p. 1293). In addition, traditional research funding promotes uniformity: “Diversity—which is essential, since experts cannot know the source of the next major discovery—is not encouraged.” (Horrobin 1996, p. 1294). In a meta-study on the effect of peer reviewing in the funding process, Demicheli and Di Pietrantonj came to the sobering conclusion that “No studies assessing the impact of peer review on the quality of funded research are presently available.” (Demicheli & Di Pietrantonj 2007, p. 2).

Critics of classic research funding are therefore demanding, among other alternatives, the allocation of a part of the available funds by lot through an innovation lottery (Fröhlich 2003, p. 38), or the assignment of funds in equal parts to all researchers: “funds [should be] distributed equally among researchers with academic (…) positions” (Horrobin 1996, p. 1294). Additionally, applying the Open Review model described above would ensure greater transparency of the review process and could prevent or counteract distortions; in practice, however, Open Review is hardly ever used in funding processes5.

In the following, we examine to what extent crowdfunding and other innovative funding procedures and channels may serve as an alternative to traditional research funding.

## Open Research Funding

Open Science and Open Research Funding share mutual spheres of interest. Both want to advance science through the involvement of citizens, and both want to make content available that was previously hidden behind the paywalls of traditional science publishers, the informal boundaries of scientific peer communities, or the formal boundaries established by private or public funding bodies. It is instructive to compare how two other areas of content creation in a similar situation have addressed this demand for open content: journalism and the arts. In both realms, the suppliers of content vastly outnumber the financiers of content.

There are many more journalists, writers, and photographers willing to provide content than there are people willing to invest in large news corporations, which before the digital era were the only institutions capable of funding large-scale news publishing. Notwithstanding the bromide that the Internet has allowed everybody to publish, it has also allowed everyone to find a financier for publishing: from self-publishing on the eBook market through micropayments to crowdfunding sites like Spot.us or Emphas.is, we have seen the gradual development of democratized news funding.

It is similar in the arts. While true skill in the arts still requires perseverance, endurance, and not least talent, the Internet has allowed artists to broaden their audience and reach out to fans, thus converting them into patrons of the arts. Artists now enjoy avenues outside of the traditional mechanism in which content is produced, sold, and licensed.

### Social Payments in Science

Related to crowdfunding, but not entirely the same, are new tools known as social payments. Typically, these are micropayments given for content that already exists on the net. They share with crowdfunding the notion of many small payments accumulating into a large income. Flattr and Kachingle are two tools commonly associated with social payments. They differ slightly from each other, but share the same idea: a content creator embeds a small button on their webpage, and a content financier, by pushing that button, sends a small payment to the content creator.

When the New York Times put their blogs behind a flexible paywall in 2010, Kachingle rose to the occasion and allowed readers to “kachingle” the New York Times blogs, in other words to transfer a little bit of money to the writers behind the blogs every time they visited the site. The New York Times was not amused and sued Kachingle for using their trademarks—which, in the eyes of most commentators, was a reaction to new forms of financing typical of a news monolith.

Flattr, another social payment provider, has deep connections with the Creative Commons ecosphere. The Creative Commons website employs a Flattr button to earn micropayments9, and many bloggers put their content both under a CC license and behind a Flattr button. However, there is one mishap: Creative Commons works are typically shared under a Non-Commercial clause, which would prohibit the use of Flattr on any blog licensing content into the public domain10.

How can social payments be applied to science? Scienceblogs are already using social payment systems—not only for monetary gains, but also for sharing content, engaging in conversation with readers, and measuring relevance11 (translated by the authors):

“It is easy to find out how many people access a certain Internet site – but those numbers can be deceiving. Knowing that X number of people have clicked on my article on Y is no doubt a good start. But I have no real insight on how many had a genuine interest in precisely this article and have read my article and appreciated it and how many only found my site after a Google search and left after 5 seconds. There may be tools allowing me to find answers to these questions, but they will most likely require a lot of work and analysis. But if I have a Flattr-button under each of my articles, I can assume that only people who really appreciated reading them will click on it—after all, this click costs them real money” says Florian Freistetter, author of a blog on Astronomy.

The real potential of social payments lies in their combination with Open Access journals, archives, and publications. Imagine, for instance, databases of publicly available data which allow the users of the content to flattr or kachingle the site whenever they visit it. This would allow money to be made from scientific content beyond the traditional licensing systems of universities and libraries. Imagine a university with a Flattr account filled with 100,000 euros per year. Every time a university member accesses a scientific journal, a click is counted, and the 100,000 euros are divided among the clicks. This could generate a demand-based but fully transparent way of funding science.
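The division imagined above is simple pro-rata arithmetic. The following Python sketch illustrates it; the journal names, click counts, and the function name are invented for illustration and not taken from any existing system:

```python
def divide_pool(pool_euros, clicks_per_journal):
    """Split a fixed annual pool among journals in proportion to clicks."""
    total_clicks = sum(clicks_per_journal.values())
    if total_clicks == 0:
        return {journal: 0.0 for journal in clicks_per_journal}
    return {journal: pool_euros * clicks / total_clicks
            for journal, clicks in clicks_per_journal.items()}

# Hypothetical example: a pool of 100,000 euros divided among
# three journals according to their access counts.
shares = divide_pool(100_000, {"Journal A": 600, "Journal B": 300, "Journal C": 100})
# Journal A receives 60,000 euros, Journal B 30,000, Journal C 10,000.
```

The appeal of the mechanism is that the allocation is entirely determined by demand yet fully auditable: every euro can be traced back to a click.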

Social payments could also be integrated into direct funding: for instance, scientists could receive a certain amount of public money, or money from funding institutions, which cannot be used for their own projects but must be donated to other projects in their discipline. Funds as yet unassigned would remain in a payment pool until the end of the year and then be divided up equally among all projects.
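A minimal sketch of this year-end settlement, under assumed (hypothetical) project names and amounts: actively donated sums go to the chosen projects, and the unassigned remainder of the pool is split equally among all projects.

```python
def settle_pool(donations, projects, total_budget):
    """donations: {project: amount actively donated during the year}.
    The unassigned remainder of the total budget is divided equally
    among all projects at the end of the year."""
    assigned = sum(donations.values())
    equal_share = (total_budget - assigned) / len(projects)
    return {p: donations.get(p, 0.0) + equal_share for p in projects}

# Hypothetical: a 9,000-euro pool, of which 6,000 euros was actively
# donated; the leftover 3,000 euros is split equally three ways.
payout = settle_pool({"P1": 4000.0, "P2": 2000.0}, ["P1", "P2", "P3"], 9000.0)
# P1 ends with 5,000 euros, P2 with 3,000, P3 with 1,000.
```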

There seems to be some evidence12 that the distribution of social payments roughly follows the sharing and distribution behavior of content. In other words, content which is often liked, shared, and tweeted is more likely to receive funds through Flattr.

Social payments are thus likely to generate an uneven distribution of science funding—controversial, popular articles and data might generate more income than scientific publications in smaller fields.

Groundbreaking research might profit from such a distribution mechanism, especially if a new idea applies to a variety of disciplines. However, the established citation networks of scholars and the Matthew Effect mentioned above might even be reinforced.

Social payments in combination with measuring social media impact could provide an innovative means of measuring relevance in science. Such indices would not replace traditional impact scores, such as appearances in journals, invitations to congresses, and third-party funding, but would allow assessment of the influence of scientific publications within the public sphere.

### Virtual Currencies in Science

All of the tools described above relate in one way or another to real cash flows in the overall economy. However, these mechanisms might also work with virtual currencies, which may be linked to existing currencies, but not necessarily in a 1-to-1 relationship.

In Flattr, it is customary to use the earned income within the system to flattr new content, without having to withdraw cash. The Flattr ecosystem thus generates its own measure of worth. Similarly, on crowdfunding sites such as Sellaband or Sonicangel, fans can use some of the rewards they receive to fund new artists; the money stays inside the ecosystem of the platform. Virtual currencies are often used in games, whereby gamers can turn real money into virtual money such as Linden Dollars on Second Life or Farmdollars on Farmville; the virtual money buys goods and services inside the game, both from other players and from the game provider, and earned income can be withdrawn at any time.

It is conceivable that a scientific community creates its own virtual currency, which could be used to trade and evaluate access to data, publications, or other scientific resources. Let us imagine, for instance, that a scientist receives a certain amount of ‘Aristotle-Dollars’ for every publication placed in the public domain. Based on the amount of ‘Aristotle-Dollars’ they earn, they receive earlier access to public data.
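As a toy illustration of such a community currency, consider the following sketch. The class name, the reward per publication, and the embargo period are all invented assumptions, not features of any existing system:

```python
class AristotleLedger:
    """Toy ledger for a hypothetical 'Aristotle-Dollar' currency:
    credits are earned per public-domain publication, and a higher
    balance shortens the wait for newly released public data."""

    REWARD_PER_PUBLICATION = 10  # assumed reward, chosen arbitrarily

    def __init__(self):
        self.balances = {}

    def record_publication(self, scientist):
        # Credit the scientist for a publication placed in the public domain.
        self.balances[scientist] = (
            self.balances.get(scientist, 0) + self.REWARD_PER_PUBLICATION
        )

    def embargo_days(self, scientist, base_embargo=90):
        # The larger the balance, the earlier the access to new data.
        return max(0, base_embargo - self.balances.get(scientist, 0))

ledger = AristotleLedger()
ledger.record_publication("Dr. A")
ledger.record_publication("Dr. A")
# Dr. A now waits 70 days instead of 90; a scientist without
# public-domain publications still waits the full base embargo.
```

The design point is that the currency never leaves the community: it is earned by contributing open content and spent as priority of access, mirroring how Flattr income circulates inside the Flattr ecosystem.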

## Some Critical Reflections

### Quality Assurance and Sustainability

One advantage of the peer review system is seen in the provision of quality assurance, although the procedure, as stated before, has been criticized. Some of the crowdfunding platforms hosting scientific project ideas also use peer review (for further details, see Giles 2012); for instance, SciFlies and OSSP only publish project ideas after an expert review. Moreover, only members of scientific institutions are allowed to present project ideas via OSSP. In one possible scenario, researchers could identify themselves in crowdfunding environments by means of an author identifier such as the Open Researcher and Contributor ID ORCID and document their expert status in this way (see Chapter 21, Fenner: Unique Identity for a Researcher). Project proposals from the #SciFund Challenge, on the other hand, were not subject to proper peer review but were scrutinized only in passing. Since the crowdfunding model, however, demands that each submission reveals the research idea and the project realization, a mechanism of internal self-selection can be posited: It can be assumed that scientists will only go public in crowdfunding environments with projects that are carefully conceived.

The same applies to plagiarism and idea or data theft – these types of academic misbehavior would almost certainly be revealed through the public nature of the procedure. The same arguments have also been offered in support of Open Review. Ulrich Pöschl, editor of the journal Atmospheric Chemistry and Physics (ACP), stresses the fact that the transparency of the submission process in ACP increases the quality of submissions because authors are discouraged from proposing articles of mediocre or questionable quality (Pöschl 2004, p. 107): in the interest of self-protection and self-preservation, scientists can be expected to refrain from exposing and potentially embarrassing themselves in their community with premature or ill-conceived publications. Furthermore, crowdfunding relies upon self-regulation through the expertise of donors, who in most cases are able to judge the merits of a proposal themselves, so that weak proposals, if they are made public at all, will have very poor prospects. Some crowdfunding platforms also use forums as additional mechanisms of quality assurance; in FundaGeek’s “Geek Lounge”, for instance, potential donors can exchange their thoughts on the weaknesses or strong points of a project idea. Thanks to an expert discussion in the Kickstarter forum, a questionable project could be stopped without financial loss for the donors (Giles 2012, p. 253).

In spite of the positive outlook outlined above, scientific crowdfunding has yet to prove the advantages claimed for it. To conclude with Jim Giles: “For crowd-funding to make a real difference, advocates will have to prove that the process—which sometimes sidesteps conventional peer review—channels money to good projects, not just marketable ones.” (Giles 2012, p. 252). Also somewhat questionable is the long-term perspective of such projects: unlike classic research funders, crowdfunding donors can hardly require fundees to, for example, develop sustainable service infrastructures. Conversely, crowdfunding, social payments, and virtual currencies may create new funding avenues that facilitate the funding of specific individual researchers rather than abstract projects with fluctuating staff. Small projects with a funding volume below the threshold of classic funders could also be financed with these instruments.

### Plagiarism

As already mentioned, the public character of proposals for crowdfunding is more likely to expose plagiarism in projects than closed review procedures. For the same reason, researchers submitting their project ideas for crowdfunding demonstrate their claim to a scientific idea or method in a manner that can hardly be ignored, thus discouraging the plagiarizing of ‘their’ project. To put things into perspective, it must be mentioned that plagiarism or idea theft has also been reported for closed review procedures (Fröhlich 2003, p. 54).

### Pop Science, Self-Marketing, and Verifiability

On account of its proximity to citizens and its public character, crowdfunding also bears the inherent danger of unduly popularizing research, especially if any individual may donate to a project. Even if internal crowdfunding platforms, in which only researchers can donate to a project proposal, were to develop, it is conceivable—as with peer review in traditional funding—that mainstream research would be favored. Some also suspect that crowdfunding, and social payments as well, could establish a disproportionate preference for applied research over basic research (Giles 2012, p. 253). The same could be suspected for popular science, or science that lends itself easily to media portrayal.

Crowdfunding, social payments, and virtual currencies place new demands on researchers’ self-marketing (Ledford 2012). These demands need not be a bad thing: a clear, succinct, and understandable presentation of a project proposal can only enhance the verifiability and testability of scientific concepts by eliminating the dense prose and difficult wording found in many funding applications (language that is often mandated by funders’ requirements), thus promoting the intersubjective verifiability of scientific concepts called for by the theory and philosophy of science.

A more solid grounding in the scientific community might be achieved if crowdfunding, social payments and virtual currencies were not applied in entirely open contexts, but rather only within scientific communities (if necessary under the umbrella of discipline-specific associations or learned societies). In such a scenario, however, the aspect of involvement of civic society would be lost.

Although crowdfunding, social payments, and virtual currencies appear more transparent than traditional avenues of research financing, the question of the ‘openness’ or accessibility of the research results nevertheless arises. Whereas traditional financing institutions may require fundees to make project results publicly available, it is as yet unclear how projects funded through the innovative procedures detailed above might be mandated to make their project results accessible to the public. Of equal importance may be the question of who owns the rights to a project’s results. In the interest of transparency, it might be desirable to make the names of donors who contributed to a project public, so as to identify and prevent potential conflicts of interest. Conversely, the risk of sponsorship bias must not be neglected: the term refers to the production of results—consciously or unconsciously—that are consistent with the presumed expectations or desires of the financiers. To minimize the risks posed by conflicts of interest as well as sponsorship bias, it may be advisable to keep the identity of the financiers hidden from the fundees until the project’s end.

## References

Besselaar, van den, P., 2012. Selection committee membership: Service or self-service. Journal of Informetrics, 6(4), pp.580–585. doi:10.1016/j.joi.2012.05.003.

Björk, B.-C. et al., 2010. Open Access to the Scientific Journal Literature: Situation 2009 E. Scalas, ed. PLoS ONE, 5(6), p.e11273. doi:10.1371/journal.pone.0011273.

Cole, S., Cole, J.R. & Simon, G.A., 1981. Chance and consensus in peer review. Science, 214(4523), pp.881–886. doi:10.1126/science.7302566.

Demicheli, V. & Di Pietrantonj, C., 2007. Peer review for improving the quality of grant applications. In The Cochrane Collaboration & V. Demicheli, eds. Cochrane Database of Systematic Reviews. Chichester, UK: John Wiley & Sons, Ltd. Available at: http://doi.wiley.com/10.1002/14651858.MR000003.pub2.

Drutman, L., 2012. How the NSF allocates billions of federal dollars to top universities. Sunlight Foundation Blog. Available at: http://sunlightfoundation.com/blog/2012/09/13/nsf-funding/.

Fröhlich, G., 2003. Anonyme Kritik: Peer Review auf dem Prüfstand der Wissenschaftsforschung. medizin - bibliothek - information, 3(2), pp.33–39.

Giles, J., 2012. Finding philanthropy: Like it? Pay for it. Nature, 481(7381), pp.252–253. doi:10.1038/481252a.

Herb, U., 2012. Offenheit und wissenschaftliche Werke: Open Access, Open Review, Open Metrics, Open Science & Open Knowledge. In U. Herb, ed. Open Initiatives: Offenheit in der digitalen Welt und Wissenschaft. Saarbrücken: Universaar, pp. 11–44.

Horrobin, D.F., 1996. Peer review of grant applications: a harbinger for mediocrity in clinical research? The Lancet, 348(9037), pp.1293–1295. doi:10.1016/S0140-6736(96)08029-4.

Horrobin, D.F., 2001. Something rotten at the core of science? Trends in Pharmacological Sciences, 22(2), pp.51–52. doi:10.1016/S0165-6147(00)01618-7.

Laakso, M. et al., 2011. The Development of Open Access Journal Publishing from 1993 to 2009 M. Hermes-Lima, ed. PLoS ONE, 6(6), p.e20961. doi:10.1371/journal.pone.0020961.

Laakso, M. & Björk, B.-C., 2012. Anatomy of open access publishing: a study of longitudinal development and internal structure. BMC Medicine, 10(1), p.124. doi:10.1186/1741-7015-10-124.

Ledford, H., 2012. Alternative funding: Sponsor my science. Nature, 481(7381), pp.254–255. doi:10.1038/481254a.

Mayo, N. et al., 2006. Peering at peer review revealed high degree of chance associated with funding of grant applications. Journal of Clinical Epidemiology, 59(8), pp.842–848. doi:10.1016/j.jclinepi.2005.12.007.

Merton, R.K., 1968. The Matthew Effect in Science: The reward and communication systems of science are considered. Science, 159(3810), pp.56–63. doi:10.1126/science.159.3810.56.

Pöschl, U., 2004. Interactive journal concept for improved scientific publishing and quality assurance. Learned Publishing, 17(2), pp.105–113. doi:10.1087/095315104322958481.

White, E., 2012. On making my grant proposals open access. Jabberwocky Ecology. Available at: http://jabberwocky.weecology.org/2012/08/08/on-making-my-grant-proposals-open-access/.

1. SHERPA/JULIET. NIH: http://www.sherpa.ac.uk/juliet/index.php?fPersistentID=9

2. SHERPA/JULIET. Wellcome Trust: http://www.sherpa.ac.uk/juliet/index.php?fPersistentID=12

3. SHERPA/JULIET. European Research Council: http://www.sherpa.ac.uk/juliet/index.php?fPersistentID=31

4. European Commission: http://ec.europa.eu/research/horizon2020/index_en.cfm

5. At least some scientists make their funding proposals available after the review is finished (on their motivation see White 2012)

6. Unreturned Love: http://blog.sellyourrights.org/?p=252

7. Cancer Research UK: http://supportus.cancerresearchuk.org/donate
