The economics of open science describes the economic aspects of making a wide range of scientific outputs (publications, data, software) freely available to all levels of society.
Open science involves a plurality of economic models and goods. Journals and other academic institutions (like learned societies) have historically favored a knowledge club or toll-access model: publications are managed as a community service for the select benefit of academic readers and authors. During the second half of the 20th century, the "big 5" publishers (Elsevier, Springer, Wiley, Taylor & Francis and the American Chemical Society) partly absorbed or outcompeted non-profit structures and applied an industrial approach to scholarly publishing.
The development of the web shifted the focus of scholarly communication from publications to a large variety of outputs (data, software, metrics). It also challenged the values and organization of existing actors through the development of international initiatives in favor of open access and open science. While initially outpaced by new competitors, the main commercial publishers started to flip to author-pays models after 2000, funded through article processing charges and the negotiation of transformative deals. Actors like Elsevier or Wiley have diversified their activities from journal ownership to data analytics by developing a vertical integration of tools, databases and metrics monitoring academic activities. The structuration of a global open science movement, the enlargement of scientific readership beyond professional researchers and increasing concerns for the sustainability of key infrastructures have enabled the development of open science commons. Journals, platforms, infrastructures and repositories have been increasingly structured around a shared ecosystem of services and self-governance principles.
The costs and benefits of open science are difficult to assess due to the coexistence of several economic models and the untraceability of open diffusion. Open publishing is less costly overall than subscription models, on account of reduced externalities and economies of scale. Yet the conversion of leading publishers to open science has entailed a significant increase in article processing charges, as the prestige of well-known journals makes it possible to extract a high willingness to pay. Open science brings significant efficiency gains to academic research, especially regarding bibliographic and data search, the identification of previous findings and text and data mining projects. These benefits extend to non-academic research, as open access to data and publications eases the development of new commercial services and products. Although the overall economic and social impact of open science could be high, it has hardly been estimated.
The development of open science has created new forms of economic regulation of scientific publishing, as funders and institutions have come to acknowledge that this sector no longer operates under normal market conditions. International coordination efforts like cOAlition S attempt to set up global rules and norms to manage the transition to open science.
Debates on the economic theory of open science have been largely influenced by the classic typology distinguishing private goods, public goods, club goods and common-pool resources. [1] According to a common definition matrix gradually developed by Paul Samuelson, Richard Musgrave and Elinor Ostrom, private goods and club goods are excludable (they cannot be freely shared and are exclusively used by owners or members), while private goods and common goods are rivalrous (they cannot be consumed simultaneously). [2]
In theory, the outputs of open science could be defined as public goods: they are not excludable (free-licensed publications, data or software can be shared without restriction) and they are not subtractive (they can be indefinitely copied). In 2017 an OECD report underlined that research data "exhibit public good characteristics" as "it is not exhausted in consumption (i.e. it can be consumed many times without being diminished), and it may be inefficient to exclude potential users". [3] For Elinor Ostrom and Charlotte Hess this approach does not fit the actual uses and constraints of knowledge online. Like shared natural resources, the outputs of open science can be polluted, exhausted or enclosed: "The parallel, yet contradictory trends, where, on the one hand, there is unprecedented access to information through the Internet but where, on the other, there are ever-greater restrictions on access (...) indicate the deep and perplexing characteristics of this resource". [4] Additionally, in contrast to other forms of knowledge commons, open science actors continue to enforce exclusion rules for the creation, curation and administration of resources: "the scientific and scholarly commons furnishes information input into a scientific discovery but the Mertonian norms of priority award the property rights in the claim to whoever is first to publish." [5]
The leading definitions of open access and open science are sufficiently ambiguous to allow for a plurality of allocation systems: "open access is a boundary object that does not refer to a common set of practices, assumptions or principles." [6] Consequently, uses and models of open science can span the entire typology of economic goods:
| | Excludable | Non-excludable |
|---|---|---|
| Rivalrous | Private goods: closed infrastructures, commercial databases | Common-pool resources: open archives, open science infrastructures, open data portals, diamond open access journals |
| Non-rivalrous | Club goods: diamond open access journals, scientific societies | Public goods: bibliographic records, scientific language, open-source software |
The coexistence of the differing economic models of open science remains an evolving process. Competing narratives of the future of open access involve all the potential categories of open science goods: they include the disruption of legacy scientific publishers by new competitors, the transformation of private scientific goods into public goods and the rehabilitation of community-led governance. [7] Nikos Koutras has argued for a structural inflection of the role of commercial publishers, which would act more as an editorial service than as a gatekeeper, as "it is feasible for authors to not rely on [them]". [8]
Models of open science are embedded in wider socio-economic structures. North-South inequalities remain a major structural factor that affects not only access to and use of open science outputs, but also the discourses and representations surrounding open science. [9]
The economic theory of club goods was originally developed in 1965 by James Buchanan to complement the distinction between private and public goods. [10] While clubs are private organizations, they also manage the allocation of resources between individual members, in a similar manner to a public service. Membership criteria are a fundamental feature of clubs and affect their efficiency: "The central question in a theory of clubs is that of determining the membership margin, so to speak, the size of the most desirable cost and consumption sharing arrangement." [10]
Before the Second World War, academic publishing was mostly characterized by a wide range of community-driven scholarly structures with little concern for profitability. [11] They relied on informal community norms rather than commercial regulations. [12] These structures have been described as knowledge clubs: [13] "until the second part of the twentieth century, most journals could be assimilated to a club model". [14]
While managed by a community and publicly available, knowledge clubs are demic and mostly used for the benefit of their members. [15] As a defining feature of the club model, scientific authors are not paid for their publications: "ever since the first scientific journals were founded in 1665 in London and Paris, journals have not paid authors for articles." [16] Acknowledgment and recognition by the relevant community of peers is the main incentive: "intangible rewards (made nearly tangible in tenure and promotion) compensate scholars for relinquishing royalties on their journal articles". [16]
Producers and consumers of club goods are largely the same population, as the core readers of scientific journals are also their core contributors: "the set of potential producers and the set of incumbent consumers are the same set". [13] The determination of relevant membership and exclusion criteria plays a fundamental role in the management of the club. In contrast with other forms of clubs (such as health clubs), the membership criteria of knowledge clubs are not enforced strictly but stem from widespread conventions: it "happens quite naturally (i.e. culturally) in scholarly knowledge clubs by simple cost of access in time and language." [17] As there is no formal admission process, knowledge clubs can be joined by unvetted members, so long as they are willing to devote the necessary time to demonstrate that they adhere to common cultural values and customs: "Hostile pranks, such as the Sokal hoax/fraud, demonstrate that clubs may be hoodwinked by outsiders who apparently ‘speak their language’ but are in fact using it to challenge their knowledge." [18]
The concept of the knowledge club has highlighted the continuities between scientific publications and other forms of restrictive associations. Journals are strongly embedded in wider institutional networks and communities and cannot be dissociated from them: "More specialized journals appeared in the 18th and 19th centuries, most of which were published by learned societies. Only at the end of the 19th century did university presses gain importance as publishers of scholarly journals". [19] While in their daily management knowledge clubs are not strictly separated from other economic actors, the interests of the community take precedence over any other economic incentive: "we see a journal as a club in which access to these services is internalised as a membership benefit. While the services might still be outsourced, in practice it can be seen that such a shift potentially has substantial political and economic consequences as to how we see the relations among players." [20] As club membership is not exclusive, researchers are usually part of a complex network of clubs: "Membership of an academic institution with the relevant benefits, including access to subscription content, is another parallel club (...) Further work will be needed to define those situations which are better analysed as complex clubs, with differential membership contributions, and those situations where multiple clubs are interacting." [21]
Community-led journals were progressively acquired or outcompeted by large international publishers after the Second World War: [19] "The small society presses, struggling to cope with growing scale, were supported and then largely supplanted by the ‘Big 5’ commercial presses". [13] While the knowledge club has receded, some of its conventions have persisted: "academic journals have retained their club-like qualities through blind peer review (even more through open review), and via editorial boards that are carefully constructed to ‘send the right signals’ in order to build prestige and quality assurance". [22] The evaluation of scientific publications remained largely performed as a community service, with researchers submitting peer reviews for free. Journals continued to be officially managed by editorial committees, although in a context of ownership by large industrial structures their authority and their ability to set the policy of the publication are limited. Both the authors and the audience of academic publications have primarily non-commercial incentives: "When publishing articles in academic journals, most scholars are predominantly motivated by curiosity, priority and the expected gain in reputation, and much less so by any monetary rewards for the actual publications." [23]
The OA Diamond Study states that non-commercial journals in open access "maintain a secular tradition of “club” journals, set up for the uses and interests of a specific closed community of knowledge." [24] While access to read is no longer a material condition for membership, non-commercial journals are still largely managed for the benefit of a community. They are "still strongly embedded in institutional environments (from a legal and governance perspective)." [24]
Club journals have gained new relevance with the development of electronic publishing and played a fundamental role in the early development of online open science in the early 1990s. Pioneers of open access electronic publishing were non-commercial and community-driven initiatives that built on a trend of grassroots publishing innovation in the social sciences and humanities: "In the late ‘80s and early ‘90s, a host of new journal titles launched on listservs and (later) the Web. Journals such as Postmodern Cultures, Surfaces, the Bryn Mawr Classical Review and the Public-Access Computer Systems Review were all managed by scholars and library workers rather than publishing professionals." [25]
The development of the web and free editing tools for scientific publications like Open Journal Systems made it possible to run non-commercial journals and apply automated editorial routines at limited cost: the "model made economic sense as outsourced specialisation, but technological change has upended that logic by dramatically lowering the cost of in-house production." [26] The wide availability of personal computers has additionally reinforced the free exchange of services among club members, as new contributions could be made beyond peer review evaluation: "The wide availability of desktop hardware and software enabled new capabilities among authors, and an expectation from publishers that authors would self-manage much of the layout and editing of articles." [13] Unless they had largely transformed their publications into a commercial activity, knowledge clubs and historical scientific societies have embraced open access: since 2014, the Royal Society has published the Royal Society Open Science journal.
New forms of research infrastructure developed in the context of open science have also retained some features of a "club model". While they create and manage a common or a public good, research data repositories are frequently developed for the select benefit and prestige of a few institutions: "In the case of research data repositories, such a “privileged” situation may arise for research funders, research centres, universities, or a disciplinary group or society that may gain recognition and further funding from supporting or hosting an open data repository." [27] Yet a full membership or club model, which would also include restrictions on access, remains rare among research data repositories as "it is difficult to develop and maintain a group large enough to cover costs, affecting both scale and sustainability." [27]
Despite these continuities, the articulation between the historical values of the club (exclusion, internal management, centrality) and the new values of open science remains challenging. [28] Scientific clubs have long maintained an ambiguous position on the value of openness. While openness was held as a fundamental scientific principle that made the free exchange of ideas possible, knowledge clubs have also relied on structural mechanisms of exclusion: "A claim of openness, and a narrative that this openness sits at the core of the value system, that is not quite realized in practice. The building of institutions that seek to enhance openness – the Royal Society holding formalized meetings, open to members, in the place of private demonstrations – that are nonetheless exclusive (...) Yet what is passed down to us today, is less that exclusive gentleman's club and more the core values that it sought to express." [28]
Recent developments of open science and citizen science create a new source of tension that is still not resolved: "There are profound challenges to adapting our institutions to interact productively with differing knowledge systems, but we are perhaps for the first time well placed to do so." [29] According to Samuel Moore, the main discourses on open science and scientific commons continue to conceal exclusionary practices reminiscent of historical knowledge clubs: "many uses of the term commons in scholarly communications are themselves ill- or un-defined and intend to evoke a kind of participatory, inclusive or freely accessible resource." [30]
In Western Europe and North America, direct ownership of journals by academic communities and institutions started to wane in the 1950s. The historical model of scientific periodicals seemed unable to keep up with the quickly increasing volume of publication in the context of big science. [31] In 1959, Robert Maxwell created one of the first giants of scientific publishing, Pergamon, and over the following decades acquired hundreds of journals from small university presses and scientific societies. [32] While these journals were not very profitable individually, such a high concentration made Pergamon "too big to fail" and able to impose its own conditions on academic libraries and other potential customers. This approach was applied as well by Springer and Elsevier. Scientific publishing in the second half of the 20th century has been described as a two-sided market with "significant network externalities" since "authors prefer to publish in academic journals with the largest readership, and readers prefer the journals with the best authors". [33]
Due to high market concentration, scholarly journals are not easily substitutable: they cannot be replaced by an equivalent product on the market, which hinders competition. [33] The development of bibliometric indexes has reinforced this lock-in process, as highly cited journals receive more submissions. [34]
By the 1980s, the CEO of Elsevier, Pierre Vinken, aimed for an annual growth rate of 20%, [35] mostly through an uncontrolled rise of subscription prices. From 1985 to 2010, the budget allocated by North American research libraries to periodicals increased five-fold. [36]
The small society presses, struggling to cope with growing scale, were supported and then largely supplanted by the ‘Big 5’ commercial presses: Elsevier (which acquired Pergamon in 1991), Wiley, Springer, Taylor & Francis and Sage. These newly-empowered players brought an industrial approach to the publication and dissemination process, for the first time realising the benefits that these specialised capital and skills could provide by operating at a scale that was unprecedented to that date. [13]
As it became a private good, the scientific journal was also transformed into an industrial product, with an increased standardization of publishing norms, peer-review processes and copyrights. [37] In contrast, smaller scientific publishers participate in a more typical publishing market, with persistent competition. Four out of the five most costly journals "are published by big five publishers". [34]
With the conversion to electronic journals, scientific publishing became a hybrid market: along with individual subscriptions, leading publishers introduced big deals, or multi-year licenses to large bundles of journal titles. Big deals were typically negotiated with national networks of research libraries and academic institutions. Big deals also proved advantageous to publishers as they limited the "administrative costs" of managing a large number of contracts between journals and buyers. [38] The bundling of thousands of journal titles made it possible to ensure the commercial viability of journals that would otherwise have had limited success. [39] In 2011, David Colquhoun showed that 60% of the journals included in the Elsevier licenses granted to University College London were accessed less than 300 times per year and 251 journals were not accessed even once. [40] Even though the inflation of individual subscription costs has slowed, [41] the total amount allocated to scientific periodicals has continued to rise: "While the North-American research libraries spent about a third more on journals than on monographs in 1987, this ratio had risen to about four to one by 2011." [19]
Following the generalization of the big deal model, the main transactions between large publishers and scientific institutions no longer operated under normal market conditions with fixed public prices: "An optimal pricing strategy when bundling electronic information still does not exist (...) Prices will be determined in bilateral negotiations and every library pays a different price according to its institutional willingness to pay." [42] Opting out of a big deal is a nearly impossible choice for major scientific institutions as "big deals of different publishers are complementary and not substitutes". [43] Big deal licenses are usually covered by non-disclosure agreements, so that prices can be determined on the basis of the financial capacity of the buyer: "these practices give publishers pricing flexibility that allows them to try to charge the highest price that each institution is willing to pay and make it hard for new publishers to compete." [44] For Jason Potts et al., this deviation from market norms shows that the market model is fundamentally less efficient than the knowledge club in the context of scientific publishing. It creates more ecosystemic costs than the direct management of journals and other scientific outputs by the community: "If our argument is that clubs and communities are capable of acting together to solve collective action provisioning problems in ways that are more efficient than either markets or the state, then the dissolution of clubs, or their inability to coordinate, will lead to inefficient or non-existent provisioning." [45] Due to increased concentration, large publishers also became a powerful lobby: in the United States, Elsevier has long influenced key policy issues relating to the economics of publishing and was able to significantly slow down the transition to open access. [46]
The scientific market is structured by large-scale inequalities. Overpriced subscriptions and paywalls have been "a major barrier to progress in developing countries". [47] While leading publishers have initiated programs at reduced costs for developing countries, their impact has been limited by the complexity of the subscription procedures: "the library or consortia have to go through a procedure to request the discount, assuming they know it exists and can navigate the bureaucracy involved." [48]
The early development of open science and the open sharing of scientific publications on the web was at first a challenge for leading commercial publishers: the executive board of Elsevier "had failed to grasp the significance of electronic publishing altogether, and therefore the deadly danger that it posed—the danger, namely, that scientists would be able to manage without the journal". [49] Initial experiments with open electronic journals were mostly led by non-commercial initiatives.
Commercially viable open access journals appeared in the late 1990s, under the leadership of new competitors such as the Public Library of Science (PLOS), MDPI or Hindawi. [50] They generally rely on the author-pays model: the journal provides an editorial service to the author of a publication and charges an article processing charge to cover the editing costs. This practice predates open access, as subscription-based journals held by scientific societies occasionally charged for additional services (such as color photographs). For Peter Suber, this economic model is similar to broadcast television: "If advertisers can pay all the costs of production, then a TV studio can broadcast a show without charging viewers. In the case of scholarly research articles, the model works because authors are willing to relinquish royalties to get their message across and a growing number of institutions that employ researchers or fund research are willing to consider the cost of dissemination". [51]
After 2000, large commercial publishers started to adopt a hybrid business model also termed open choice: authors can either submit an article for free, in which case it remains paywalled, or pay to make it freely available. This practice has been increasingly criticized as double dipping: due to the complexity of the publishing market, largely structured around big deal licenses, scientific publishers were in a position of "collecting money twice". [52] Supervisors of open access policies have become more skeptical of the transitory nature of hybrid models. According to the lead coordinator of Plan S, Robert-Jan Smits, "when I then asked [the large publishers] when this transition would be completed, they were silent. The reason for this was clear: they saw hybrids as a way to continue the status quo." [53] To overcome this risk, Plan S stated that transformative journals would no longer be compliant after a transition period ending in 2023. [54]
While they were originally devised for a subscription-based model, big deals have been repurposed as large-scale agreements to generalize commercial open access. Since subscription costs were already bundled at a national level, they could be repurposed as publication licenses or article processing charge licenses as part of journal flipping. In 2015, the Max Planck Society issued a white paper on the economic cost of the transformation to open access: "All the indications are that the money already invested in the research publishing system is sufficient to enable a transformation that will be sustainable for the future." [55] In this context, APCs become the default business model for all journals: "Our own data analysis shows that there is enough money already circulating in the global market – money that is currently spent on scientific journals in the subscription system and that could be redirected and re-invested into open access business models to pay for APCs." [56] The economic debate over journal flipping has largely set aside the issue of potential savings: negotiations aim rather to ensure a global conversion to open science at the same cost as existing subscription licenses. [56] Several national negotiations with big publishers have attempted to implement the journal flipping approach with limited success.
Commercial open access models based on article processing charges create new structural inequalities, no longer in terms of access to read but of access to publish. High APC prices mean that in practice Global South authors are bound to be cut off from major journals: "APCs also present a problem for researchers in the Global South, who typically have much smaller budgets to work with than their northern counterparts." [57]
In parallel with the open science movement, leading scientific publishers have diversified their activities beyond publishing and moved "from a content-provision to a data analytics business". [59] The ubiquitous use of digital technologies in research activities and in the institutional management of science and higher education has been "creating new income streams". [60] Large publishers have been well positioned in this new market, as they already have the know-how, the infrastructures and intellectual property rights over a large range of scientific outputs. [61] Additionally, they have the necessary resources for long-term investments thanks to the high margins accumulated from journal subscriptions. The negotiation of big deals additionally created a favorable framework, as access to subscriptions or APCs could easily be tied to exclusive contracts on other databases and tools.
In the 2010s, leading publishers developed or acquired new key infrastructures for the management of scientific and pedagogic activities: "Elsevier has acquired and launched products that extend its influence and its ownership of the infrastructure to all stages of the academic knowledge production process". [62] In the past two decades, there have been 340 mergers and acquisitions by Elsevier, 240 by Informa (Taylor & Francis) and 80 by Wiley. [63] While most of these transactions were linked to academic content until the 2010s, acquisitions have quickly expanded to academic "services" and other data analytics tools. [64] Although all leading publishers attempt a vertical integration of existing and new services, it can take different shapes. For instance, there is "a clear attempt by Wiley to enhance its control over the university decision-making process in education, as Elsevier has for academic knowledge production." [65] By 2019, Elsevier had either acquired or built a large portfolio of platforms, tools, databases and indicators covering all aspects and stages of scientific research:
the largest supplier of academic journals is also in charge of evaluating and validating research quality and impact (e.g., Pure, Plum Analytics, Sci Val), identifying academic experts for potential employers (e.g., Expert Lookup), managing the research networking platforms through which to collaborate (e.g., SSRN, Hivebench, Mendeley), managing the tools through which to find funding (e.g., Plum X, Mendeley, Sci Val), and controlling the platforms through which to analyze and store researchers’ data (e.g., Hivebench, Mendeley). [66]
Since it has expanded beyond publishing, the vertical integration of privately owned infrastructures has become extensively integrated into daily research activities: "the privatised control of scholarly infrastructures is especially noticeable in the context of ‘vertical integration’ that publishers such as Elsevier and SpringerNature are seeking by controlling all aspects of the research lifecycle, from submission to publication and beyond". [67] In contrast with publication, which was an outsourced business separated from institutional and community activities, the new services developed by large publishers are embedded in the infrastructure of universities and create potentially stronger dependency links: "Pure embeds Elsevier within the university workflow process through its abilities to manage research at the university level, including the provision of a dashboard to facilitate decision making by university research administrators (Elsevier, “Features”)." [68]
Metrics and indicators are key components of vertical integration: "Elsevier's further move to offering metrics-based decision making is simultaneously a move to gain further influence in the entirety of the knowledge production process, as well as to further monetize its disproportionate ownership of content". [69] While dependency on subscription journals has been weakened by the open science movement, metrics can create a new lock-in situation for scientific institutions: "For universities keen on raising or maintaining their rankings, publishing in Elsevier high impact journals may help them gain the advantage (...) vertical integration and the promotion of citation metrics and algorithmic recommendations may, in fact, constitute rent-seeking behavior designed to increase." [70] Consequently, the shift of leading publishers to data analytics is not incompatible with the parallel development of a large APC market for open science publishing. For Samuel Moore, it is even "incentivised by the governmental policies for OA through APCs, repository services" as it creates a new "need to track compliance". [71]
The emerging open science market has been compared with the business models of social networks, search engines and other forms of platform capitalism. [67] [58] [72] While content access is free, it is indirectly paid for through data extraction and surveillance: "If the primary negative manifestation of market power in the publishing sector is high paywall price (lack of access, therefore), the result of monopolistic competition in academic data analytics will be the combination of dependence and surveillance that we might associate with, e.g., Facebook." [61] Increasing similarities with other digital platforms may have contributed to increased regulation of the academic publishing market in Europe in the 2010s: "It's why Facebook, Apple and Google are now dominant: once they are controlling X per cent of the market, it's almost impossible for a competitor to come up." [73]
The concept of commons was originally developed to describe the management of "a resource shared by a group of people" and the establishment of common governance rules to ensure that the resource is not overused or polluted (which would result in a tragedy of the commons). [74] Similarly to club goods, common goods are used and maintained by a community. Yet the membership is no longer exclusive: "toll goods (also called club goods) share with private goods the relative ease of exclusion". [75] Typical forms of commons include shared natural resources like timber, berries or fish. They are managed by informal local associations. Governance rules are neither rigid nor pre-existing but have to be adapted to the specific requirements of the resource and the local environment: "One of the central findings was that an extremely rich variety of specific rules were used in systems sustainable over a long time period. No single set of specific rules, on the other hand, had a clear association with success." [76] Common goods are also differentiated from public goods (such as air or radio waves) due to their subtractability: natural resources can be depleted and rules have to be put in place to ensure they will not be overused. [77]
Until the 1990s, open knowledge was not considered a commons in economic theory but a "classic example of a pure public good, a good available to all and where one person's use does not subtract from another's use". For Elinor Ostrom and Charlotte Hess, this framework is no longer viable as the principle of non-excludability has been significantly weakened: "new technologies can enable the capture of what were once free and open public goods (...) Knowledge, which can seem so ubiquitous in digital form, is, in reality, more vulnerable than ever before". [78] In a scientific context, examples of new enclosures of public goods may include the surveillance data systems put in place by Elsevier, Springer or academic social networks that capture activities such as social interactions or reference collections. The uncontrolled development of the early web highlighted the need for common management of knowledge resources: "People started to notice behaviors and conditions on the web — congestion, free riding, conflict, overuse, and pollution — that had long been identified with other types of commons." [74]
The open access movement of the 1990s and early 2000s aimed to ensure that science would be a public good, freely usable by all. The unlimited potential circulation of online content has transformed historical forms of non-commercial open access into a commons, at least from a reader's point of view: "By definition, OA literature excludes no one, or at least no one with an Internet connection. By contrast, non-OA electronic journals try very hard to exclude nonsubscribers from reading the articles". [79]
While "knowledge commons is not synonymous with open access", the process of making open access a reality has also incidentally created a global "community network of the open-access movement": [80] decisions had to be made regarding a commonly accepted definition of open access, free licenses and potential exclusions of non-open access initiatives that are embodied in the Budapest Open Access Initiative.
The early open science infrastructures aimed to ensure the circulation of scientific publications as a common good. Archives and institutional repositories were conceived as local or global community services. In August 1991, Paul Ginsparg created the first inception of the arXiv project at the Los Alamos National Laboratory, in answer to recurring storage issues in academic mailboxes caused by the increasing sharing of scientific articles. [81] Repositories embody numerous characteristics of common resources under the definition of Elinor Ostrom: they maintain and protect a scientific resource, they implement weak requirements for membership (submissions are not peer-reviewed) and they prioritize coordination and shared management over competition. By the early 2000s, numerous repositories strove to "comply with the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) of 1999, which ensures the interoperability of different repositories for the purpose of locating their contents". [82]
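As an illustration of how this interoperability works, the sketch below issues a standard OAI-PMH ListRecords request and extracts the Dublin Core titles from the XML response. The arXiv endpoint URL and the "cs" set are assumptions chosen for the example; any repository compliant with the protocol answers the same request with the same response structure, which is what allows a single harvester to aggregate many repositories.

```python
# Minimal sketch of an OAI-PMH harvest (endpoint and set are illustrative assumptions).
from urllib.request import urlopen
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace used by oai_dc records

def list_titles(base_url, **extra_params):
    """Fetch one page of records via the ListRecords verb and return their titles."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc", **extra_params}
    with urlopen(f"{base_url}?{urlencode(params)}") as response:
        tree = ET.parse(response)
    # Every compliant repository returns the same OAI-PMH XML envelope,
    # so the same parsing code works regardless of the repository.
    return [title.text for title in tree.iter(f"{DC}title")]

if __name__ == "__main__":
    # Hypothetical usage: harvest a page of computer science records from arXiv.
    for title in list_titles("http://export.arxiv.org/oai2", set="cs")[:5]:
        print(title)
```

A production harvester would additionally follow the resumptionToken returned by the repository in order to page through the full set of records.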
Archive repositories and other forms of open science infrastructure were originally individual initiatives. As such, their status as a scientific commons was not always institutionalized and they were hardly protected against potential privatizations: "one recent strategy of traditional commercial journal publishers to avoid negative externalities from green OA is to acquire successful OA repositories". [19] The acquisition of Digital Commons and SSRN by Elsevier has highlighted the lack of reliability of critical scientific infrastructure for open science, which creates the conditions of a tragedy of the commons. [83] [84] [85] The SPARC report on European infrastructures underlines that there are "a number of important infrastructures at risk and as a consequence, the products and services that comprise open infrastructure are increasingly being tempted by buyout offers from large commercial enterprises. This threat affects both not-for-profit open infrastructure as well as closed, and is evidenced by the buyout in recent years of commonly relied on tools and platforms such as SSRN, bepress, Mendeley, and GitHub." [86] Weak definitions of scientific commons, and of the requirements and expectations of commons governance, may have facilitated this takeover: "many self-described ‘commons’ projects for open access publishing simply restate the values of commercial publishing (...) while relying on the language of a more progressive politics." [87]
In contrast with the consolidation of privately owned infrastructure, the open science movement "has tended to overlook the importance of social structures and systemic constraints in the design of new forms of knowledge infrastructures." [88] It remained mostly focused on the content of scientific research, with little integration of technical tools and few large community initiatives: the "common pool of resources is not governed or managed by the current scholarly commons initiative. There is no dedicated hard infrastructure and though there may be a nascent community, there is no formal membership." [89] In 2015, the Principles for open science infrastructures underlined the discrepancy between the increasing openness of scientific publications and datasets and the closedness of the infrastructures that control their circulation.
Over the past decade, we have made real progress to further ensure the availability of data that supports research claims. This work is far from complete. We believe that data about the research process itself deserves exactly the same level of respect and care. The scholarly community does not own or control most of this information. For example, we could have built or taken on the infrastructure to collect bibliographic data and citations but that task was left to private enterprise. [90]
The fragility of open science commons until the 2010s contrasts with the dynamics of contributive projects beyond the scope of research and scientific activities. Wikipedia, OpenStreetMap and Wikidata are open communities with a low threshold of admission and membership that have come to typify the online knowledge commons. Their management is analogous to natural common-pool resource systems, where local uses and participation are rarely discriminated a priori, although repeated abuses can lead to exclusion.
Since 2015, open science infrastructures, platforms and journals have converged toward the creation of digital academic commons. While they were initially conceived either as a public good (a "grid" that ensures the distribution of a non-excludable resource) or as a club good (with limited reach beyond a specific community), open science infrastructures have been increasingly structured around a shared ecosystem of services and standards that has emerged through the network of dependencies from one infrastructure to another. Open science infrastructures face issues similar to those met by other open institutions such as open data repositories or large-scale collaborative projects such as Wikipedia: "When we study contemporary knowledge infrastructures we find values of openness often embedded there, but translating the values of openness into the design of infrastructures and the practices of infrastructuring is a complex and contingent process". [91]
The conceptual definition of open science infrastructures has been largely influenced by the analysis of Elinor Ostrom on the commons and more specifically on the knowledge commons. In accordance with Ostrom, Cameron Neylon underlines that open infrastructures are not only characterized by the management of a pool of common resources but also by the elaboration of common governance and norms. [92] The economic theory of the commons makes it possible to expand beyond the limited scope of scholarly associations toward large-scale community-led initiatives: "Ostrom's work (...) provide a template (...) to make the transition from a local club to a community-wide infrastructure." [93] Open science infrastructures tend to favor a not-for-profit, publicly funded model with strong involvement from scientific communities, which dissociates them from privately owned closed infrastructures: "open infrastructures are often scholar-led and run by non-profit organisations, making them mission-driven instead of profit-driven." [94] This status aims to ensure the autonomy of the infrastructures and prevent their incorporation into commercial infrastructure. [95] It has wide-ranging implications for the way the organization is managed: "the differences between commercial services and non-profit services permeated almost every aspect of their responses to their environment". [96]
As of 2022, major actors that have formally adopted the core principles of open science infrastructures (POSI) include Crossref, CORE, OpenAIRE, and OpenCitations. [97]
The consolidation of the commons ecosystem has also been visible in non-commercial journals, which have moved from a knowledge club paradigm toward a more global commons initiative. While the daily management of non-commercial journals fits better with the definition of a knowledge club, [24] more innovative models of governance "tend to bridge the secular heritage of scientific societies with the new wave of digitized knowledge commons such as Wikipedia or OpenStreetMap". [24] New forms of commons-based regulations and distributed decision-making processes have been gradually introduced: "The ascending role of the editorial committee and volunteers brings OA diamond journals closer to community-run projects where contributors are constantly self-learning and appropriating tasks". [24] Integration of the specific perspectives of the Global South has redefined common "understandings of the commons" beyond the perspectives of "more powerful stakeholders, wealthy disciplines and countries in the Global North". [9]
The future of scientific commons remains a debated issue. The OA Diamond Study identifies the Open Access Commons as a potential road of development for non-commercial open access journals and beyond: "The OA Commons will be a new more integrated international OA publishing system and ecosystem that serves the research community." [98] Fragmentation has hindered the development of non-commercial structures. Reliance on small local communities results in low visibility to potential readers and funders: most of the estimated 17,000 to 29,000 non-commercial journals are currently off the charts of scientific publishing indicators. The creation of common services and infrastructures as well as inter-disciplinary and inter-community coordination may help overcome built-in limitations of the knowledge club model: "The OA Commons will be community-driven and will bring communities together who already are or want to work together to become more effective." [98] Alternative visions of scientific commons include more decentralized models of "small, semi-autonomous projects that are loosely affiliated but mutually reliant", as large platforms and infrastructures could be "unable to account for nuanced relational practices of commoning in local communities and a variety of contexts." [99]
Due to the coexistence of several economic models, there can be no single estimate of the cost of open science. Cost estimates frequently rely on different "scenarios" that match the different models of open science. In 2021, Grossmann and Brembs retained seven different scenarios, including outsourcing to a leading commercial publisher, small-scale non-commercial journals supported by free software and volunteer contributions, and a hypothetical "decentralized, federated platform solution where all scholarly articles are published without being divided into journals". [100] Economies of scale are also a significant factor, as large platforms and infrastructures can benefit from bundled expenses in numerous areas.
According to Grossmann and Brembs, the total costs of scholarly publication range from $194.89 to $723.16 per article. Regardless of the wide variation across models and potential economies of scale, even the highest estimates of open science publication costs are low: "publication costs only cover 15% of the subscription price (...) assuming a conservative profit margin of 30% (i.e., US$1,200 per article) for one of the large publishers there remains a sizeable gap of about US$2,200 in non-publication costs, or 55% of the price of a scholarly subscription article". [100]
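A hedged reading of these percentages, assuming the roughly US$4,000 average revenue per subscription article that the quoted figures imply (the US$4,000 baseline is an inference for illustration, not a number stated above):

```latex
% Implied decomposition of an average subscription article priced at about US$4,000
\begin{align*}
\text{profit margin (30\%)} &\approx \$1{,}200\\
\text{publication costs (15\%)} &\approx \$600\\
\text{non-publication costs (55\%)} &\approx \$4{,}000 - \$1{,}200 - \$600 = \$2{,}200
\end{align*}
```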
Editorial work and the management of peer review remain the core activity of academic journals and their most identified contribution and service rendered to scientific communities. They are the main expenses of the non-commercial journals surveyed by the OA Diamond Study: "the five main expenses/payables of the journal are editing (531), copy-editing (463), technical and software support (393), typesetting (384), and design (336)". [101] Translation is a frequently cited additional cost that may have been incentivized by open science, as the potential audience of local non-academic readers creates incentives to maintain a multilingual website. Even before their conversion to electronic publishing, not-for-profit journals maintained affordable prices, due to a nearly exclusive focus on editorial services: "in 2013, the mean price per article in for-profit journals was 3.2 times higher than in non-for-profit journals, and that the corresponding ratio for the median price per article was even 4.33:1". [102] With Internet publication, costs can be significantly lower, due to additional cuts in transaction and labor costs, the use of shared platforms and infrastructure or reliance on voluntary work: "over 60% of journals reported annual costs in the previous year under $/€10,000, including in-kind contributions." [103]
In contrast with the more technical aspects of scientific publication, editorial work cannot easily be scaled. Unless it relies on volunteer work, its cost in time and expertise is roughly similar across publishers: "For certain tasks, for example copyediting or typesetting, there are hundreds of individual companies worldwide providing those services (...) having compared the pricing of those service providers with others, we found only a very small variation of cost for such tasks". [104] The only potential saving is to allocate this work to the scientific authors themselves: "The wide availability of desktop hardware and software [has created] an expectation from publishers that authors would self-manage much of the layout and editing of articles." [26]
For Peter Suber, among the expenses of open science journals, "peer review is the most significant." [51] As an inheritance of the historical model of the knowledge club, the main costs of peer review are not directly supported by journals: the evaluation is performed freely by researchers. Following the expansion of commercial journals since the late 20th century, the free services of peer review have increasingly become an over-exploited resource. The conversion of subscription journals to author-pays models of open access has added new pressure to a strained practice: as authors are the main customers of what has essentially become an editorial service, fast-track evaluations are in high demand. While the development of integrated editorial systems has streamlined the process of receiving and managing reviews, locating competent reviewers is a major issue and creates added costs and work for journal editors: "finding, recruiting and retaining reviewers" are a major concern of non-commercial journal editors. [105]
The development of new open science platforms and infrastructures makes it possible to unbundle the academic editorial workflow: "costs are reduced by eliminating the need for type-setting and copy-editing, with web-hosting costing only $15/year, and a total operating cost of between $6.50–$10.50 per article." [106] Through these mechanisms, "open access has the opportunity to become a cost-reducing mechanism for scholarly publishing." [107]
Conversion to electronic publishing has created significant economies of scale. Large publishers have been among the first beneficiaries of reduced editorial and technical costs, through the concentration and standardization of the publishing infrastructure: "These newly empowered players brought an industrial approach to the publication and dissemination process, for the first time realising the benefits that these specialised capital and skills could provide by operating at a scale that was unprecedented to that date." [26] This process started before the development of electronic publishing, with the creation of internal databases to manage peer review and other key aspects of editorial management. Large academic search engines finalized this process: "As the dominant publishers build databases, discovery systems, and online platforms to house large and integrated collections of journals, it is more difficult for small publishers to compete with them. Building an effective platform for publishing e-journals is expensive (...) After a platform has been created, it is much cheaper and easier to add new journals to it than to build new and redundant platforms." [108]
After 2000, non-commercial publishers and infrastructures have gradually benefited from the same economies of scale, due to the development of open software tools dedicated to academic production such as Open Journal Systems, which facilitated the creation and administration of journal websites and the digital conversion of existing journals. [109] Among the non-commercial journals registered in the Directory of Open Access Journals, the number of annual journal creations went from 100 at the end of the 1990s to 800 around 2010. [110] By 2021, Open Journal Systems had become "a widespread solution in the peer review management of journals". [111]
Many open science infrastructures run "at a relatively low cost", as small infrastructures are an important part of the open science ecosystem. [112] Sixteen research data repositories surveyed by the OECD in 2017 cited technical infrastructure and shared services among the costs "most likely to be susceptible to cost optimisation". [113] In 2020, 21 out of 53 surveyed European infrastructures "report spending less than €50,000". [112] Overall, European infrastructures were financially sustainable in 2020, [114] which contrasts with the situation ten years prior: in 2010, European infrastructures had much less visibility, usually lacked "a long-term perspective" and struggled "with securing the funding for more than 5 years". [115]
Beyond the economies of scale, technical infrastructure also creates fixed costs for standard publishing services such as article identification (DOI), plagiarism checks, long-term digital preservation and standardized XML. [116] While most of these services are covered by flat fees at limited expense, they can still significantly affect the tight budgets of small non-commercial journals. [117]
The overall price per article of commercial publishers is consistently higher than in the non-commercial sector. This discrepancy has been partly accounted for by the maintenance of several services necessary to run a business activity (pricing, transaction management, marketing...), which non-commercial structures can drop entirely with little impact. [118] Leading commercial journals claim to be more selective in regard to article submissions, which has the effect of creating a more complex and time-consuming acceptance workflow: "The more effort a publisher invests in each paper, and the more articles a journal rejects after peer review, the more costly is each accepted article to publish [although] the key question is whether the extra effort adds useful value". [118] Besides, "costs vary widely in this sector". [118] Without any transformation of the editorial workflow or efficiency gain, a standard well-established subscription journal could "charge about $3,700 per paper to cover costs". Yet, for a publisher like Nature, costs would be "at £20,000–30,000 ($30,000–40,000) per paper". [118] At the higher end of the commercial spectrum, costs per article are more likely to embed services that are not directly related to publishing: "One possible reason for such variation between journals and publishers is that it is generally unclear whether proposed costs relate to those directly involved in article processing or those required in order for a publisher to ‘break even’ if they receive zero subscription income for an article made OA." [119]
Early projections suggested that commercial models of open science would result in lower editorial costs than subscription journals. On the basis of the APCs commonly practiced by leading open access publishers like PLOS, Houghton & Oppenheim identified a potential saving of £800 per article (£1,525 instead of £2,335 for subscription publishing). [120] Taken globally, this would result in "savings of around £500 million per annum nationally in the UK in a worldwide open access system". [121] Critics at the time focused on the unrealistic assumption of a global conversion to open science: "many of the savings hypothesised would depend on the rest of the world adopting author-pays or self-archiving models." [122] In 2012, David Lewis characterized commercial open access based on article processing charges as a "disruptive innovation" that would radically shift the nature of scholarly journal publishing. New commercial publishers seemed able to significantly lower editorial expenses: by 2013, "some emerging players (...) say that their real internal costs are extremely low", with Hindawi publishing "22,000 articles at a cost of $290 per article". [118]
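As a simple check of the per-article arithmetic behind the Houghton & Oppenheim figures quoted above (the rounding to £800 is theirs):

```latex
% Per-article saving implied by the estimates above
\pounds 2{,}335 \;(\text{subscription}) - \pounds 1{,}525 \;(\text{author-pays OA}) = \pounds 810 \approx \pounds 800 \text{ per article}
```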
Prestige continues to be a significant driver of price-making in the commercial open access market: "In the academic environment, prestige and reputation have a lot of staying power (...) So far, leading firms in the academic industry have been remarkably resistant to disruptive innovators." [123] The evolution of article processing charges and the concentration of the commercial open access market have challenged the early projections of lower costs. Due to the prestige of some publishers and the integration of new editorial services (like fast-track peer review), the mean price of open access articles has consistently risen: "there is no standard price, and largely no regulation of APCs, which results in some publishers demanding very large amounts of money from authors for the privilege of publishing OA." [124] In France, the mean price of APCs in open access journals rose significantly between 2013 (€1,395) and 2020 (€1,745). [125] A range of scenarios sees the total cost of APCs becoming nearly comparable to the cost of subscriptions by 2030 (€68.7M vs. €97.5M), while a full journal flip from subscriptions to APCs would be much costlier (€168.7M). [126]
High APC prices are less related to the measurable quality of the journal or of the editorial service than to the capacity of well-known actors to impose elevated prices: "several studies reported only very weak or no correlation between quality of journals (measured in journal impact factors) and the level of APC. Contrarily, the level of APCs for publishing an article is more related to the market power of specific academic publishing companies". [127] The risk of uncontrolled growth of APCs had been clearly identified at the start of the Plan S initiative: its coordinator, Robert-Jan Smits, was "determined to introduce a limit on APCs of €2,000", but in the end the cap was rejected, "on account of too many members of the Plan S Coalition being against its enforcement." [128]
The economic contribution of open science to scientific publishing, to non-academic economic sectors or to society at large remains little documented. In 2019, the economist Michael J. Fell underlined that while open science policies usually advance the claim that opening research can bring "significant social and economic benefits", "no systematic attempt has yet been made to identify and synthesize evidence relating to this claim and present a clear picture of the economic impacts that open science might have, how these come about, and how benefits might be maximized." [129] In his assessment of the state of the art, Fell identified 21 empirical studies that aimed to evaluate the "direct economic impacts in which open science has been a contributory factor", [130] with a focus on Anglo-American countries (United Kingdom, United States and Canada) and Nordic countries (Denmark and Finland). [131] Estimates are complicated by the fact that open science is both a scientific and a social movement: the specific scope of academic publishing is too limiting, and yet it is more challenging to develop global macroeconomic indicators, as has been done for open data. [132]
In a market economy, the diffusion of private goods usually comes with transaction costs. These costs cover all the services required to manage the commoditization of a product, on the producer's side as well as on the consumer's side. In a wider sense, they encompass all the work and time that the stakeholders of a transaction allocate to performing it, such as benchmarking, negotiation or contractualization. [133]
Savings on transaction costs are frequently cited as a significant advantage of non-excludable resource systems (common goods, public goods) over private markets. Since access to the resource is weakly restrained and conditioned by informal rules, its allocation is less costly overall: "a community could (...) produce better quality outcomes at lower economic costs, because of lower transaction costs, than alternative institutional systems involving private property". [134] According to a 2017 OECD report, market allocation of open knowledge outputs is not sufficiently efficient: as the price of a public good converges to zero, any attempt at setting a price will result in excessive exclusion and reduced collective benefits.
The market may not be the best mechanism for the allocation of a public good, as any price above the marginal cost (of copying and distribution) will reduce net welfare – by locking out users and uses that do not have the capacity to pay or are not willing to pay. For digital information made available online the marginal cost is very low – close to zero. However, a price set at zero or close to zero will not be sufficient to cover full costs (…) To be sustainable, data repositories need to generate sufficient revenue to cover their costs, but setting a price above the marginal cost of copying and distribution will reduce net welfare. [135]
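The welfare argument quoted above can be illustrated with a standard textbook sketch; the linear demand curve and the near-zero marginal cost below are simplifying assumptions, not figures from the report:

```latex
% Textbook illustration of the welfare argument above (a simplification,
% not taken from the OECD report [135]). Assume a linear demand curve and
% a marginal cost of copying and distribution c close to zero.
q(p) = q_0\left(1 - \frac{p}{p_{\max}}\right), \qquad c \approx 0
% Pricing at p > c excludes the readers whose willingness to pay lies
% between c and p; the net welfare they would have generated is lost:
\mathrm{DWL}(p) = \tfrac{1}{2}\,\bigl(q_0 - q(p)\bigr)\,p = \frac{q_0\,p^2}{2\,p_{\max}}
% The loss grows quadratically with the price, while a price of zero
% maximizes net welfare but leaves the full cost of the service uncovered.
```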
All the models of open science have a direct impact on one subset of transaction costs: exclusion costs. There is no need to maintain systems or enforce rules to ensure that a publication will not be used by an unauthorized reader: "This exclusion costs the excluder money. One cost is digital-rights management or DRM, the software lock that opens for authorized users and blocks access to the unauthorized. A second cost is writing and enforcing the licensing agreement that binds subscribers." [79] In addition to DRM systems, large commercial publishers have also developed intrusive methods to track subsequent usages of a publication. [136]
Non-commercial open science journals and open infrastructures can avoid a larger range of such services and costs. Author-paid journals still have to maintain transactional activities, as the management of article processing charges becomes a core business activity. Moreover, the real cost of large commercial agreements with leading publishers is not well documented, as the terms of big deals are not public. Even in the context of journal flipping, the negotiation of complex licenses may represent a significant time investment for libraries and research institutions.
Access to pre-existing research outputs such as publications, datasets or code is a major precondition for research. The subscription model used to rely on the assumption that researchers only need to access a few specialized publications. In practice, research is more unpredictable and frequently relies on crossing methods and observations from different fields or even different disciplines. In 2011, a JISC survey showed that 68% of UK researchers felt they had sufficiently wide access to journal and conference papers. [137] Non-academic professional audiences experience difficulties in accessing directly relevant research: "a quarter of those in industry/commerce described their current level of access as fairly/very difficult", as "the main barrier was unwillingness to pay". [138] A survey of business use of research in Denmark highlighted a large range of strategies to avoid the high cost of pay-per-view, especially collaboration with academics who have institutional access. [139] The impact of open access is amplified by the high degree of internationalization of research: "83% of the economic return on cancer research is drawn from research from non-UK sources", which is less likely to be accessible in a subscription-based model. [107]
Beyond extended coverage, open science can enhance the efficiency of bibliographic search: "It can take longer for people to access closed research outputs than when access is open." [140] Access to the full text is also a common bibliographic search strategy, since the full text will often reference other relevant publications. A case study on knowledge workers shows that access restrictions translate into significant costs in working time: "knowledge-based SME employees spent on average 51 min to access the last research article they had difficulty accessing, and this rose to 63 min for university researchers." [140]
The open science movement has also entailed the diffusion of new resources: open research data and software. In these cases, access was not merely limited or constrained by high prices but generally non-existent, as such outputs were at most shared across research teams or institutions. Economic estimates of the impact of opening new research outputs are consequently more difficult, as there is no prior market. [138] Several studies by Houghton and Beagrie on the commercial use of major open data portals (Economic and Social Data Service, Archaeology Data Service, British Atmospheric Data Service and the European Bioinformatics Institute) attempted to circumvent the issue by estimating the "willingness to pay" as a proxy for the positive economic impact: how much a company would agree to pay if the service were only accessible through subscription. In all cases, this "consumer surplus" was much higher than the cost needed to run the service (for instance, £21m per year for the Economic and Social Data Service against an operating cost of £3m, or £322m per year for the European Bioinformatics Institute against an operating cost of £47m). [141] For large repositories of data or publications, the consumer surplus may be even more significant on a long-term basis, as the value of the infrastructure and the potential benefits it brings grow as the range of hosted outputs expands: "data archives are appreciating rather than depreciating assets. Most of the economic impact is cumulative and it grows in value over time, whereas most infrastructure (such as ships or buildings) has a declining value as it ages. Like libraries, data collections become more valuable as they grow and the longer one invests in them, provided that the data remain accessible, usable, and used." [142]
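The reported figures imply that each of these services generated roughly seven pounds of estimated user value for every pound of operating cost; a minimal sketch of that ratio, using only the numbers cited above:

```python
# Illustrative benefit/cost ratios computed from the figures reported by
# Houghton and Beagrie [141]; the "consumer surplus" is the estimated
# willingness to pay of users, in million pounds per year.
services = {
    # name: (estimated consumer surplus, annual operating cost)
    "Economic and Social Data Service": (21, 3),
    "European Bioinformatics Institute": (322, 47),
}

for name, (surplus, cost) in services.items():
    print(f"{name}: benefit/cost ratio of about {surplus / cost:.1f}")
# Economic and Social Data Service: about 7.0
# European Bioinformatics Institute: about 6.9
```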
Impacts of open science on research efficiency stem from the benefits of enhanced access to previous work. Due to the complexity of bibliographic search, closed subscription systems "can lead to high levels of duplication—that is, where separate teams work on the same thing unbeknownst to each other." [140] The issue is not limited to academic research, but affects industrial R&D as well: "an analysis of pharmaceutical patents by 18 large companies showed that 86% of target compounds were investigated by two or more companies". [143] Non-publication of data or intermediary results can also have cascading effects on the overall quality of research. Meta-analyses rely on the reproducibility of pre-existing observations and experiments to identify the scientific consensus on a specific topic or field of research. They can be affected by statistical errors and biases, as well as by the pre-selection of statistically significant results. Extensive opening of final and intermediary data sources makes it easier to spot potential mistakes. [143]
Text and data mining projects have more recently become a major focus of studies on potential gains in research efficiency. [141] In contrast with standard state-of-the-art review procedures, text mining projects process very large corpora and are bound to be limited by the collections available in academic libraries. Additionally, special authorization has to be obtained from the publishers unless the corpus is published under a free license, as the proper use of automated analyses requires making copies accessible to project members. Access procedures may represent a significant investment for text mining projects: "As well as the costs and time required to reach such agreements, it also introduces significant uncertainty into such projects as it is possible that some agreements may not be reached". [141] In 2021, a quantitative analysis of text and data mining research showed that "there is strong evidence that the share of DM research in total research output increases, where researchers do not need to acquire specific consent by rights holders". [144] The restrictive effect of the lack of open access or of a text and data mining exception is sufficiently noticeable to highlight "an adverse net effect of IP on innovation, in the sense that there is strong evidence for stricter copyright hindering the wide adoption of novel ways to build on copyright works and generate derivative works." [145] In 2012, a JISC report estimated that a facilitated use of text and data mining tools, notably in the context of bibliographic search, could generate significant productivity gains: "if text mining enabled just a 2% increase in productivity – corresponding to only 45 minutes per academic per working week (...) this would imply over 4.7 million working hours and additional productivity worth between £123.5m and £156.8m in working time per year." [146]
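The order of magnitude of the JISC estimate can be reproduced with a back-of-envelope calculation; the staff headcount, number of working weeks and hourly rates below are illustrative assumptions chosen to match the reported totals, not figures taken from the report:

```python
# Back-of-envelope reconstruction of the JISC productivity estimate [146].
# Headcount and working weeks are illustrative assumptions picked so that
# the total lands near the reported 4.7 million hours.
academics = 140_000            # assumed UK academic staff
working_weeks_per_year = 45    # assumed working weeks per year
minutes_saved_per_week = 45    # the "2% increase in productivity" quoted above

hours_per_year = academics * working_weeks_per_year * minutes_saved_per_week / 60
print(f"{hours_per_year / 1e6:.1f} million working hours per year")  # ~4.7 million

# Valuing that time at roughly £26-£33 per hour reproduces the reported
# £123.5m-£156.8m range of additional productivity.
for hourly_rate in (26, 33):
    print(f"about £{hours_per_year * hourly_rate / 1e6:.0f}m at £{hourly_rate}/hour")
```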
The potential economic impact of open science research outputs is significant. Innovation commons are a major, although overlooked, source of economic growth: "The innovation commons is the true origin of innovation. It is the source from which the subsequent markers of innovation emerge — the entrepreneurial actors, the innovating firms, the new markets, and so on." [147] Recent developments like the growth of data analytics services across a large variety of economic sectors have created further needs for research data: "There are many other values (...) that are promoted through the longterm stewardship and open availability of research data. The rapidly expanding area of artificial intelligence (AI) relies to a great extent on saved data." [142] In 2019, the combined data market of the 27 countries of the European Union and the United Kingdom was estimated at 400 billion euros, with a sustained growth of 7.6% per year. [148] Although no estimate was given of the specific value of research data, research institutions were identified as important stakeholders in the emerging ecosystem of "data commons". [149]
In 2011, a JISC report estimated that there were 1.8 million knowledge workers in the United Kingdom working in R&D, IT and engineering services, most of whom were "unaffiliated, without corporate library or information center support." [150] Among a representative set of English knowledge workers, 25% stated that access to the literature was fairly or very difficult, and 17% had experienced a recent access problem that was never resolved. [151] A 2011 survey of Danish businesses highlighted a significant dependence of R&D on academic research: "Forty-eight per cent rated research articles as very or extremely important". [152] Consequently, lack or difficulty of access affects the development of commercial services and products: "It would have taken an average of 2.2 years longer to develop or introduce the new products or processes in the absence of contributing academic research. For new products, a 2.2 years delay would cost around DKK 36 million per firm in lost sales, and for new processes it would cost around DKK 211 000 per firm in lost savings." [152] Research data repositories have also experimented with efficient data management workflows that can become a valuable inspiration for commercial structures: "properly designed data commons can serve to R&D processes as an active and accessible repository for research data". [149]
Estimates of the global business impact of open science are complicated by another of its positive economic features: weak or even non-existent transaction costs. Commercial use of open publications, data or software occurs informally and is hardly identifiable: "use of open science outputs (e.g., by firms) often leaves no obvious trace, so most evidence of impacts is based on interviews, surveys, inference based on existing costs, and modelling approaches." [153] The concrete impact of open science on commercial products and activities has been measured at the scale of a few major projects. From 1990 to 2003, the Human Genome Project progressively made the results of human genome sequencing available within 24 hours of discovery. A retrospective assessment showed a very high return on investment: "a $3.8 billion project drove $796 billion in economic impact [and] created 310,000 jobs". [154] Another case study focused on the effect of opening data on a pharmaceutical compound, JQ1: 105 patents were filed in the following years, compared with fewer than 30 for similar compounds. [155]
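Taken at face value, the Human Genome Project figures correspond to roughly $200 of economic impact per dollar invested; a trivial check:

```python
# Ratio implied by the retrospective assessment of the Human Genome Project [154].
project_cost = 3.8e9        # US dollars
economic_impact = 796e9     # US dollars
print(f"about ${economic_impact / project_cost:.0f} of impact per dollar invested")  # ~209
```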
Social impact has become an important focus of open science infrastructures since the late 2010s. Access for non-academic audiences has created a new potential justification for their funding and maintenance. Potential groups that may benefit from open access "include citizen scientists, medical patients and their supporting networks, health advocates, NGOs, and those who benefit from translation and transformation (e.g., sight-impaired people)." [107]
Economic regulation of scientific publishing has long been stuck in a "collective action dilemma", due to the lack of coordination among stakeholders: "To truly reduce their costs, librarians would have to build a shared online collection of scholarly resources jointly managed by the academic community as a whole, but individual academic institutions lack the private incentives necessary to invest in a shared collection." [156]
Although some economists initially expected the academic publishing market to be structurally disrupted by new open access competitors, [157] change was mostly driven by scientific communities, scientific institutions and, more recently, coordinated groups of funders. In the 2000s, forms of regulation appeared at a local scale to address obvious market failures in the management of open science outputs. The development of research data repositories has been sustained by the implementation of "government or funder mandates for open data that require the producers of data to make them openly accessible". [27] Mandates have been less easily expanded to other scientific outputs such as publications, which could not be covered by open data programs and were already occupied by large commercial structures. During the Great Recession, scientific institutions and libraries had to balance significantly reduced budgets, which entailed a first wave of big deal cancellations and "promoted the search for alternatives to this model". [158] This specific context created a precedent for a second wave of big deal cancellations, no longer motivated solely by funding cuts but also by "the advance of open science". [159]
In the early 2010s, leading publishers came under heightened pressure to convert to open access. Along with the mobilization of researchers, the realization that academic publishing no longer operated in normal market conditions redefined the position of research funders and policy-makers. For Robert-Jan Smits, "if we really want OA to become a reality, we just have to make it obligatory, I thought: no more friendly requests, but rules and implications." [160] Freedom of information requests have unveiled the real cost of big deals in several countries. [119] On July 17, 2012 the European Union issued a recommendation on Access to and Preservation of Scientific Information that called for "clear policies" on open access to be defined. [161] This approach "was a major shift compared with the previous EU 7th Framework Programme (2007–13), which had defined OA merely as a pilot action in select areas." [161] It initiated a new cycle of regulatory policies targeting large academic publishers. The Horizon 2020 research program made open access a requirement for funding. [161]
Plan S was originally "a simple plan" mostly addressed to funding agencies: "any researcher who receives a grant from one of them must only publish in an OA journal under a CC BY licence". [162] The early draft included a mechanism to cap the price of article processing charges, which was not retained in the final version. The first official version, released in September 2018, stated that "transformative agreements, where subscription costs are offset by publication costs, can help to accelerate the transition to open access." [163] While criticized for its bias in favor of commercial open access and for perpetuating high publishing costs, [164] Plan S has facilitated the creation of a global coordination in negotiations with large publishers. [163]
In academic publishing, a scientific journal is a periodical publication designed to further the progress of science by disseminating new research findings to the scientific community. These journals serve as a platform for researchers, scholars, and scientists to share their latest discoveries, insights, and methodologies across a multitude of scientific disciplines. Unlike professional or trade magazines, scientific journals are characterized by their rigorous peer review process, which aims to ensure the validity, reliability, and quality of the published content. With origins dating back to the 17th century, the publication of scientific journals has evolved significantly, playing a pivotal role in the advancement of scientific knowledge, fostering academic discourse, and facilitating collaboration within the scientific community.
Academic publishing is the subfield of publishing which distributes academic research and scholarship. Most academic work is published in academic journal articles, books or theses. The part of academic written output that is not formally published but merely printed up or posted on the Internet is often called "grey literature". Most scientific and scholarly journals, and many academic and scholarly books, though not all, are based on some form of peer review or editorial refereeing to qualify texts for publication. Peer review quality and selectivity standards vary greatly from journal to journal, publisher to publisher, and field to field.
An academic journal or scholarly journal is a periodical publication in which scholarship relating to a particular academic discipline is published. They serve as permanent and transparent forums for the presentation, scrutiny, and discussion of research. They nearly universally require peer review for research articles or other scrutiny from contemporaries competent and established in their respective fields.
Open access (OA) is a set of principles and a range of practices through which nominally copyrightable publications are delivered to readers free of access charges or other barriers. With open access strictly defined, or libre open access, barriers to copying or reuse are also reduced or removed by applying an open license for copyright, which regulates post-publication uses of the work.
Elsevier is a Dutch academic publishing company specializing in scientific, technical, and medical content. Its products include journals such as The Lancet, Cell, the ScienceDirect collection of electronic journals, Trends, the Current Opinion series, the online citation database Scopus, the SciVal tool for measuring research performance, the ClinicalKey search engine for clinicians, and the ClinicalPath evidence-based cancer care service. Elsevier's products and services include digital tools for data management, instruction, research analytics, and assessment. Elsevier is part of the RELX Group, known until 2015 as Reed Elsevier, a publicly traded company. According to RELX reports, in 2022 Elsevier published more than 600,000 articles annually in over 2,800 journals; as of 2018 its archives contained over 17 million documents and 40,000 e-books, with over one billion annual downloads.
The term serials crisis describes the problem of rising subscription costs of serial publications, especially scholarly journals, outpacing academic institutions' library budgets and limiting their ability to meet researchers' needs. The prices of these institutional or library subscriptions have been rising much faster than inflation for several decades, while the funds available to the libraries have remained static or have declined in real terms. As a result, academic and research libraries have regularly canceled serial subscriptions to accommodate price increases of the remaining subscriptions. The increased prices have also led to the increased popularity of shadow libraries.
A hybrid open-access journal is a subscription journal in which some of the articles are open access. This status typically requires the payment of a publication fee to the publisher in order to publish an article open access, in addition to the continued payment of subscriptions to access all other content. Strictly speaking, the term "hybrid open-access journal" is incorrect, possibly misleading, as using the same logic such journals could also be called "hybrid subscription journals". Simply using the term "hybrid access journal" is accurate.
Scholarly communication involves the creation, publication, dissemination and discovery of academic research, primarily in peer-reviewed journals and books. It is “the system through which research and other scholarly writings are created, evaluated for quality, disseminated to the scholarly community, and preserved for future use." This primarily involves the publication of peer-reviewed academic journals, books and conference papers.
Open scientific data or open research data is a type of open data focused on publishing observations and results of scientific activities available for anyone to analyze and reuse. A major purpose of the drive for open data is to allow the verification of scientific claims, by allowing others to look at the reproducibility of results, and to allow data from many sources to be integrated to give new knowledge.
A copyright transfer agreement or copyright assignment agreement is an agreement that transfers the copyright for a work from the copyright owner to another party. This is one legal option for publishers and authors of books, magazines, movies, television shows, video games, and other commercial artistic works who want to include and use a work of a second creator: for example, a video game developer who wants to pay an artist to draw a boss to include in a game. Another option is to license the right to include and use the work, rather than transferring the copyright.
Academic journal publishing reform is the advocacy for changes in the way academic journals are created and distributed in the age of the Internet and the advent of electronic publishing. Since the rise of the Internet, people have organized campaigns to change the relationships among and between academic authors, their traditional distributors and their readership. Most of the discussion has centered on taking advantage of benefits offered by the Internet's capacity for widespread distribution of reading material.
PeerJ is an open access peer-reviewed scientific mega journal covering research in the biological and medical sciences. It officially launched in June 2012, started accepting submissions on December 3, 2012, and published its first articles on February 12, 2013.
Predatory publishing, also write-only publishing or deceptive publishing, is an exploitative academic publishing business model, where the journal or publisher prioritizes self-interest at the expense of scholarship. It is characterized by misleading information, deviates from the standard peer review process, is highly non-transparent, and often utilizes aggressive solicitation practices.
An article processing charge (APC), also known as a publication fee, is a fee which is sometimes charged to authors. Most commonly, it is involved in making an academic work available as open access (OA), in either a full OA journal or in a hybrid journal. This fee may be paid by the author, the author's institution, or their research funder. Sometimes, publication fees are also involved in traditional journals or for paywalled content. Some publishers waive the fee in cases of hardship or geographic location, but this is not a widespread practice. An article processing charge does not guarantee that the author retains copyright to the work, or that it will be made available under a Creative Commons license.
Open access to scholarly communication in Germany has evolved rapidly since the early 2000s. Publishers Beilstein-Institut, Copernicus Publications, De Gruyter, Knowledge Unlatched, Leibniz Institute for Psychology Information, ScienceOpen, Springer Nature, and Universitätsverlag Göttingen belong to the international Open Access Scholarly Publishers Association.
Open access scholarly communication of Norway can be searched via the Norwegian Open Research Archive (NORA). "A national repository consortium, BIBSYS Brage, operates shared electronic publishing system on behalf of 56 institutions." Cappelen Damm Akademisk, Nordic Open Access Scholarly Publishing, University of Tromsø, and Universitetsforlaget belong to the Open Access Scholarly Publishers Association. Norwegian signatories to the international "Open Access 2020" campaign, launched in 2016, include CRIStin, Norsk institutt for bioøkonomi, Norwegian Institute of Palaeography and Historical Philology, Norwegian University of Science and Technology, Oslo and Akershus University College of Applied Sciences, University of Tromsø, University of Bergen, University of Oslo, and Wikimedia Norge.
Diamond open access refers to academic texts published/distributed/preserved with no fees to either reader or author. Alternative labels include platinum open access, non-commercial open access, cooperative open access or, more recently, open access commons. While these terms were first coined in the 2000s and the 2010s, they have been retroactively applied to a variety of structures and forms of publishing, from subsidized university publishers to volunteer-run cooperatives that existed in prior decades.
Open Science Infrastructure is an information infrastructure that supports the open sharing of scientific productions such as publications, datasets, metadata or code. The November 2021 UNESCO Recommendation on Open Science describes such infrastructures as "shared research infrastructures that are needed to support open science and serve the needs of different communities".
Subscribe to Open (S2O) is an economic model used by peer-reviewed scholarly journals to provide readers with open access (OA) to the journal’s content, without charging costs to authors. S2O converts journals that have a traditional subscription model to open access.
An Open Science Monitor or Open Access Monitor is a scientific infrastructure that aims to assess the spread of open practices in a scientific context.