Open Science Infrastructure (or open scholarly infrastructure) is an information infrastructure that supports the open sharing of scientific productions such as publications, datasets, metadata or code. In November 2021, the UNESCO Recommendation on Open Science described it as "shared research infrastructures that are needed to support open science and serve the needs of different communities". [1]
Open science infrastructures are a form of scientific infrastructure (also called cyberinfrastructure, e-Science or e-infrastructure) that supports the production of open knowledge. Beyond the management of common resources, they are frequently structured as community-led initiatives with a set of collective norms and governance regulations, which makes them also a form of knowledge commons. The definition of open science infrastructures usually excludes privately-owned scientific infrastructures run by leading commercial publishers. Conversely, it may include actors not always characterized as scientific infrastructures but which play a critical role in the ecosystem of open science, such as open access publishing platforms (open scholarly communication services).
Computing infrastructures and online services have played a key role in the production and diffusion of scientific knowledge since the 1960s. While these early scientific infrastructures were initially envisioned as community initiatives, they could not be openly used due to the lack of interconnectivity and the cost of network connections. The creation of the World Wide Web made it possible to share data and publications on a large scale. The sustainability of online research projects and services became a critical policy issue and entailed the development of major infrastructures in the 2000s.
The concept of open science infrastructure emerged after 2015, following a scientific policy debate over the expansion of commercial and privately-owned infrastructures into numerous research activities and the publication of the Principles for Open Scholarly Infrastructures. Since the 2010s, large ecosystems of interconnected scientific infrastructures have emerged in Europe, South America and North America through the development of new open science projects and the conversion of legacy infrastructures to open science principles.
Open science infrastructure is a form of knowledge infrastructure that makes it possible to create, publish and maintain open scientific outputs such as publications, data or software.
The UNESCO Recommendation on Open Science, approved in November 2021, defines open science infrastructures as "shared research infrastructures that are needed to support open science and serve the needs of different communities". [1] The SPARC report on European Open Science Infrastructure includes the following activities within the range of open science infrastructures: "We define Open Access & Open Science Infrastructure as sets of services, protocols, standards and software contributing to the research lifecycle – from collaboration and experimentation through data collection and storage, data organization, data analysis and computation, authorship, submission, review and annotation, copyediting, publishing, archiving, citation, discovery and more" [2]
The use of the term "infrastructure" is an explicit reference to the physical infrastructures and networks such as power grids, road networks or telecommunications that made it possible to run complex economic and social system after the industrial revolution: "The term infrastructure has been used since the 1920s to refer collectively to the roads, power grids, telephone systems, bridges, rail lines, and similar public works that are required for an industrial economy to function (…) If infrastructure is required for an industrial economy, then we could say that cyberinfrastructure is required for a knowledge economy". [3] The concept of infrastructure was notably extended in 1996 to forms of computer-mediated knowledge production by Susan Leigh Star and Karen Ruhleder, through an empirical observation of an early form of open science infrastructure, the Worm Community System. [4] This definition has remained influential through the next two decades in science and technology studies [5] and has affected the policy debate over the building of scientific infrastructure since the early 2000s [3]
Open science infrastructures have specific properties that distinguish them from other forms of open science projects or initiatives:
Open science infrastructures are open, which differentiates them from other scientific and knowledge infrastructures and, more specifically, from subscription-based commercial infrastructures. Openness is both a core value and a directing principle that affects the aims, the governance and the management of the infrastructure. Open science infrastructures face issues similar to those met by other open institutions such as open data repositories or large-scale collaborative projects such as Wikipedia: "When we study contemporary knowledge infrastructures we find values of openness often embedded there, but translating the values of openness into the design of infrastructures and the practices of infrastructuring is a complex and contingent process". [14]
The conceptual definition of open science infrastructures has been largely influenced by the analysis of Elinor Ostrom on the commons and more specifically on the knowledge commons. In accordance with Ostrom, Cameron Neylon underlines that open infrastructures are not only characterized by the management of a pool of common resources but also by the elaboration of common governance and norms. [15] The economic theory of the commons makes it possible to expand beyond the limited scope of scholarly associations toward large-scale community-led initiatives: "Ostrom's work (…) provides a template (…) to make the transition from a local club to a community-wide infrastructure." [16] Open science infrastructures tend to favor a not-for-profit, publicly-funded model with strong involvement from scientific communities, which dissociates them from privately-owned closed infrastructures: "open infrastructures are often scholar-led and run by non-profit organisations, making them mission-driven instead of profit-driven." [17] This status aims to ensure the autonomy of the infrastructures and prevent their incorporation into commercial infrastructure. [18] It has wide-ranging implications for the way the organization is managed: "the differences between commercial services and non-profit services permeated almost every aspect of their responses to their environment". [19]
Open science infrastructures are not only a more specific subset of scientific infrastructures and cyberinfrastructures but may also include actors that would not fall into this definition. "Open access publication platforms" such as Scielo, OpenEdition or the Open Library of Humanities are considered an integral part of open science infrastructures in the UNESCO definition [1] and in several literature reviews [20] and policy reports, [21] whereas they were usually considered separate entities in the policy debate on cyberinfrastructures and e-infrastructures. [22] In the 2010 report of the European Commission on e-infrastructure, scientific publishing platforms are "not e-Infrastructures but closely related to it". [23]
Open science infrastructures may also incorporate additional values and ethical principles. Samuel Moore has theorized a form of care-full scholarly commons that does not exist yet but would incorporate latent forms of open science infrastructure and communities: "In addition to sharing resources with other projects, commoning also requires commoners to adopt an outwardly-focused, generous attitude to other commons projects, redirecting their labour away from proprietary." [24] In 2018, Okune et al. introduced a similar concept of "inclusive knowledge infrastructures" that "deliberately allow for multiple forms of participation amongst a diverse set of actors (…) and seek to redress power relations within a given context." [9]
In 2015, the Principles for Open Scholarly Infrastructure laid out an influential prescriptive definition of open science infrastructures. Subsequent definitions and terminologies of open science infrastructures have been largely elaborated on this basis. [2] [25] [26] The text has also influenced the definition of open science infrastructure retained by UNESCO in November 2021. [27]
The Principles attempt to hybridize the framework of infrastructure studies with the analysis of the commons initiated by Elinor Ostrom. They develop a series of recommendations in three areas critical to the success of open infrastructures: governance, sustainability and insurance.
The text ends by mentioning several potential consequences of the principles. The authors advocate for a responsible centralization that embodies a different model than large commercial web platforms like Google and Facebook, while still maintaining the important benefits of centralized infrastructures: "we will be able to build accountable and trusted organisations that manage this centralization responsibly". [12] Existing examples of large open infrastructures include ORCID, the Wikimedia Foundation or CERN.
A more critical reception has focused on the underlying political philosophy of the Principles. [28] [29] While the scientific community is a key part of the governance of open science infrastructures, Samuel Moore underlines that it is never precisely defined, which raises potential issues of under-representation of minority groups:
[this] raises questions over who is the community that gets to govern and exclude, and what gives them the right to decide the conditions. These questions are especially relevant for understandings of the commons that are all-encompassing or operate on a large scale, which tend to favour more powerful stakeholders, wealthy disciplines and countries in the Global North. Such commons treat subjects in a political vacuum rather than embedded in a particular situation and entangled in a number of different relationships and projects with asymmetrical power structures. [30]
Scientific projects have been among the earliest use cases for digital infrastructure. The theorization of scientific knowledge infrastructures even predates the development of computing technologies. The knowledge networks envisioned by Paul Otlet or Vannevar Bush already incorporated numerous features of online scientific infrastructures. [31]
After the Second World War, the United States faced a "periodical crisis": existing journals could not keep up with the rapidly increasing scientific output. [32] The issue became politically relevant after the successful launch of Sputnik: "The Sputnik crisis turned the librarians’ problem of bibliographic control into a national information crisis." [33] The emerging computing technologies were immediately considered as a potential solution to make a larger amount of scientific output readable and searchable. Access to foreign-language publications was also a key issue that was expected to be solved by machine translation: in the 1950s, a significant share of scientific publications was not available in English, especially those coming from the Soviet bloc.
Influential members of the National Science Foundation like Joshua Lederberg advocated for the creation of a "centralized information system", SCITEL, that would at first coexist with printed journals and gradually replace them altogether on account of its efficiency. [34] In the plan laid out by Lederberg to Eugene Garfield in November 1961, the deposit would index as many as 1,000,000 scientific articles per year. Beyond full-text searching, the infrastructure would also ensure the indexation of citations and other metadata, as well as the automated translation of foreign-language articles. [35]
Although it anticipated key features of online scientific platforms, the SCITEL plan was technically unrealistic at the time. The first working prototype of an online retrieval system, developed in 1963 by Doug Engelbart and Charles Bourne at the Stanford Research Institute, was heavily constrained by memory issues: no more than 10,000 words from a few documents could be indexed. [36]
Instead of a general-purpose publishing platform, the early scientific computing infrastructures focused on specific research areas, such as MEDLINE for medicine, NASA/RECON for space engineering or OCLC Worldcat for library search: "most of the earliest online retrieval system provided access to a bibliographic database and the rest used a file containing another sort of information—encyclopedia articles, inventory data, or chemical compounds." [37] This early development of scientific computing affected a large variety of disciplines and communities, including the social sciences: "The 1960s and 1970s saw the establishment of over a dozen services and professional associations to coordinate quantitative data collection". [38] Yet these infrastructures were mostly invisible to researchers, as most of the search work was done by professional librarians. Not only were the search operating systems complicated to use, but searches had to be performed very efficiently given the prohibitive cost of long-distance telecommunication. [39] To remain technically feasible, scientific infrastructures could not be open and remained fundamentally hidden from their end users:
The designers of the first online systems had presumed that searching would be done by end users; that assumption undergirded system design. MEDLINE was intended to be used by medical researchers and clinicians, NASA/RECON was designed for aerospace engineers and scientists. For many reasons, however, most users through the seventies were librarians and trained intermediaries working on behalf of end users. In fact, some professional searchers worried that even allowing eager end users to get at the terminals was a bad idea. [40]
The development of digital infrastructure for scientific publication was largely undertaken by private companies. In 1963, Eugene Garfield created the Institute for Scientific Information, which aimed to transform the projects initially envisioned with Lederberg into a profitable business. The Science Citation Index relied on the computational processing of citation data. It had a massive and lasting influence on the structure of global scientific publication in the last decades of the 20th century, as its most important metric, the Journal Impact Factor, "ultimately came to provide the metric tool needed to structure a competitive market among journals". [41] Garfield also successfully launched Current Contents, a periodic compilation of scientific abstracts that acted as a simplified commercial version of the central deposit envisioned within SCITEL. Rather than being replaced by a centralized information system, leading scientific publishers were able to develop their own information infrastructures that ultimately reinforced their business position. By the end of the 1960s, the Dutch publisher Elsevier and the German publisher Springer had started to computerize their internal data, as well as the management of journal reviews. [42]
Until the advent of the web, the landscape of scientific infrastructures remained fragmented. [43] Projects and communities relied on their own unconnected networks at a national or institutional level: "the Internet was nearly invisible in Europe because people there were pursuing a separate set of network protocols". [44] CERN, the birthplace of the World Wide Web, had its own version of the Internet, CERN-Net, and also supported its own protocol for e-mail exchange. [45] The European Space Agency used its own iteration of the RECON system also used by NASA engineers (ESRO/RECON). [46] These insulated scientific infrastructures could hardly be connected before the advent of the web. Communication between scientific infrastructures was not only challenging across space, but also across time. Whenever a communication protocol was no longer maintained, the data and knowledge it disseminated were likely to disappear as well: "the relationship between historical research and computing has been durably affected by aborted projects, data loss and unrecoverable formats". [22]
The World Wide Web was originally framed as an open scientific infrastructure. The project was inspired by ENQUIRE, an information management software commissioned to Tim Berners-Lee by CERN for the specific needs of high energy physics. The structure of ENQUIRE was closer to an internal web of data: it connected "nodes" that "could refer to a person, a software module, etc. and that could be interlinked with various relations such as made, include, describes and so forth". [47] While it "facilitated some random linkage between information", ENQUIRE was not able to "facilitate the collaboration that was desired for in the international high-energy physics research community". [48] Like any significant scientific computing infrastructure before the 1990s, the development of ENQUIRE was ultimately impeded by the lack of interoperability and the complexity of managing network communications: "although Enquire provided a way to link documents and databases, and hypertext provided a common format in which to display them, there was still the problem of getting different computers with different operating systems to communicate with each other". [44]
Sharing of data and data documentation was a major focus in the initial communication of the World Wide Web when the project was first unveiled in August 1991: "The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data". [49]
The web rapidly superseded pre-existing online infrastructures, even when they included more advanced computing features. From 1991 to 1994, users of the Worm Community System, a major biology database on worms, switched to the Web and Gopher. While the Web did not include many advanced functions for data retrieval and collaboration, it was easily accessible. Conversely, the Worm Community System could only be browsed on specific terminals shared across scientific institutions: "To take on board the custom-designed, powerful WCS (with its convenient interface) is to suffer inconvenience at the intersection of work habits, computer use, and lab resources (…) The World-Wide Web, on the other hand, can be accessed from a broad variety of terminals and connections, and Internet computer support is readily available at most academic institutions and through relatively inexpensive commercial services." [50]
The Web and similar protocols developed at the time had a comparable impact on scientific publications. Early forms of open access publishing were developed not by large-scale institutional infrastructures but through small initiatives. Universal access, regardless of the operating system, made it possible to maintain and share community-driven electronic journals years before commercial online scientific publishing became viable:
In the late ‘80s and early ‘90s, a host of new journal titles launched on listservs and (later) the Web. Journals such as Postmodern Culture, Surfaces, the Bryn Mawr Classical Review and the Public-Access Computer Systems Review were all managed by scholars and library workers rather than publishing professionals. [51]
The first open-access repositories were individual or community initiatives as well. In August 1991, Paul Ginsparg created the first iteration of the arXiv project at the Los Alamos National Laboratory, in response to the recurring storage issues of academic mailboxes caused by the increased sharing of scientific articles. [52]
The development of the World Wide Web rendered numerous pre-existing scientific infrastructures obsolete. It also lifted numerous restrictions and obstacles to online contribution and network management, which made it possible to attempt more ambitious projects. By the end of the 1990s, the creation of public scientific computing infrastructures became a major policy issue. [53] The first wave of web-based scientific projects in the 1990s and early 2000s revealed critical issues of sustainability. As funding was allocated for a specific time period, critical databases, online tools or publishing platforms could hardly be maintained; [22] and project managers were faced with a valley of death "between grant funding and ongoing operational funding". [54]
Several competing terms appeared to fill this need. In the United States, the term cyberinfrastructure was used in a scientific context by a US National Science Foundation (NSF) blue-ribbon committee in 2003: "The newer term cyberinfrastructure refers to infrastructure based upon distributed computer, information and communication technology. If infrastructure is required for an industrial economy, then we could say that cyberinfrastructure is required for a knowledge economy." [3] E-infrastructure and e-science were used with a similar meaning in the United Kingdom and other European countries.
Thanks to "sizable investments", [55] major national and international infrastructures have been incepted from the initial policy discussion in the early 2000s to the economic crisis of 2007-2008, such as the Open Science Grid, BioGRID, the JISC, DARIAH or the Project Bamboo. [22] [56] Specialized free software for scientific publishing like Open Journal Systems became available after 2000. This development entailed a significant expansion of non-commercial open access journals by facilitating the creation and the administration of journal website and the digital conversion of existing journals. [57] Among the non-commercial journals registered to the Directory of Open Access Journals, the number of annual creation has gone from 100 by the end of the 1990s to 800 around 2010, and not evolved significantly since then. [58]
By 2010, infrastructures were "no longer in infancy" and yet "they are also not yet fully mature". [55] While the development of the web solved a large range of technical issues regarding network management, building scientific infrastructures remained challenging. Governance, communication across all involved stakeholders, and strategic divergences were major factors of success or failure. One of the first major infrastructures for the humanities and the social sciences, Project Bamboo, was ultimately unable to achieve its ambitious aims: "From the early planning workshops to the Mellon Foundation’s rejection of the project’s final proposal attempt, Bamboo was dogged by its reluctance and/or inability to concretely define itself". [59] This lack of clarity was further aggravated by recurring communication missteps between the project initiators and the community it aimed to serve: "The community had spoken and made it clear that continuing to emphasize Service-oriented architecture would alienate the very members of the community Bamboo was intended to benefit most: the scholars themselves". [60] Budget cuts following the economic crisis of 2007-2008 underlined the fragility of ambitious infrastructure plans relying on significant recurring funds. [61]
Leading commercial publishers were initially outdistanced by the unexpected rise of the Web for academic publication: the executive board of Elsevier "had failed to grasp the significance of electronic publishing altogether, and therefore the deadly danger that it posed—the danger, namely, that scientists would be able to manage without the journal". [62] The persistence of high revenues from subscriptions and the consolidation of the sector made it possible to fund the conversion of pre-existing online services to the web as well as the digitization of past collections. By the 2010s, leading publishers had been "moving from a content-provision to a data analytics business" [63] and developed or acquired new key infrastructures for the management of scientific and pedagogic activities: "Elsevier has acquired and launched products that extend its influence and its ownership of the infrastructure to all stages of the academic knowledge production process". [64] Having expanded beyond publishing, privately-owned infrastructures have become vertically integrated into daily research activities.
The privatised control of scholarly infrastructures is especially noticeable in the context of ‘vertical integration’ that publishers such as Elsevier and SpringerNature are seeking by controlling all aspects of the research life cycle, from submission to publication and beyond. For example, this vertical integration is represented in a number of Elsevier’s business acquisitions, such as Mendeley (a reference manager), SSRN (a pre-print repository) and Bepress (a provider of repository and publishing software for universities). [65]
The consolidation and expansion of commercial scientific infrastructures has entailed renewed calls to secure "community-controlled infrastructure". [66] The acquisition of the open repositories Digital Commons and SSRN by Elsevier has highlighted the lack of reliability of critical scientific infrastructures for open science. [67] [68] [69] The SPARC report on European Infrastructures underlines that this puts "a number of important infrastructures at risk and as a consequence, the products and services that comprise open infrastructure are increasingly being tempted by buyout offers from large commercial enterprises. This threat affects both not-for-profit open infrastructure as well as closed, and is evidenced by the buyout in recent years of commonly relied on tools and platforms such as SSRN, bepress, Mendeley, and Github." [2]
In contrast with the consolidation of privately-owned infrastructures, the open science movement "has tended to overlook the importance of social structures and systemic constraints in the design of new forms of knowledge infrastructures". [70] It remained mostly focused on the content of scientific research, with little integration of technical tools and few large community initiatives: "Common pool of resources is not governed or managed by the current scholarly commons initiative. There is no dedicated hard infrastructure and though there may be a nascent community, there is no formal membership." [71]
More precise concepts were needed to embed ethical principles of openness, community service and autonomous governance in the building of infrastructures and to ensure the transformation of small localized scholarly networks into large, "community-wide" structures. [15] In 2013, Cameron Neylon underlined that the lack of common infrastructure was one of the main weaknesses of the open science ecosystem: "in a world where it can be cheaper to re-do an analysis than to store the data, we need to consider seriously the social, physical, and material infrastructure that might support the sharing of the material outputs of research". [72] Two years later, Neylon, Geoffrey Bilder and Jennifer Lin defined a series of Principles for Open Scholarly Infrastructure that reacted primarily to the discrepancy between the increasing openness of scientific publications and datasets and the closed nature of the infrastructures that control their circulation. [12]
Over the past decade, we have made real progress to further ensure the availability of data that supports research claims. This work is far from complete. We believe that data about the research process itself deserves exactly the same level of respect and care. The scholarly community does not own or control most of this information. For example, we could have built or taken on the infrastructure to collect bibliographic data and citations but that task was left to private enterprise. [12]
Since 2015, these principles have become the most influential definition of open science infrastructures; they have been endorsed by leading infrastructures such as Crossref, [73] OpenCitations [74] or Data Dryad [75] and have become a common basis for the institutional evaluation of existing open infrastructures. [76] The main focus of the Principles is to build "trustworthy institutions" with significant commitments in terms of governance, financial sustainability and technical efficiency, so that they can be durably relied upon by scientific communities. [15]
By 2021, public services and infrastructures for research have largely endorsed open science as an integral part of their activity and identity: "open science is the dominant discourse to which new online services for research refer." [19] According to the 2021 Roadmap of the European Strategy Forum on Research Infrastructures (ESFRI), major legacy infrastructures in Europe have embraced open science principles: "Most of the Research Infrastructures on the ESFRI Roadmap are at the forefront of Open Science movement and make important contributions to the digital transformation by transforming the whole research process according to the Open Science paradigm." [77] Examples of extensive data sharing programs include the European Social Survey (in social science), ECRIN ERIC (for clinical data) or the Cherenkov Telescope Array (in astronomy). [77]
In agreement with the original intent of the Principles, open science infrastructures are "seen as an antidote to the increased market concentration observed in the scholarly communication space." [17] In November 2021, the UNESCO Recommendation on Open Science acknowledged open science infrastructures as one of the four pillars of open science, along with open scientific knowledge, open engagement of societal actors and open dialogue with other knowledge systems, and called for sustained investment and funding: "open science infrastructures are often the result of community-building efforts, which are crucial for their longterm sustainability and therefore should be not-for-profit and guarantee permanent and unrestricted access to all public to the largest extent possible." [1]
The development of open scientific infrastructures has become a debated topic regarding the future of online scientific research. In January 2021, a collective of researchers called for a Plan I, or Plan Infrastructure, in reaction to perceived shortcomings of Plan S, the international initiative for open science of the cOAlition S. [69] In contrast with the focus of Plan S on scientific publications, Plan I aims to integrate all research outputs on large interoperable infrastructures: "research and scholarship are crucially dependent on an information infrastructure that treats all scholarly output, text, data and code, equally and that is based on open standards and open markets." [78]
Most of the landscape reports on open infrastructures have been undertaken in Europe and, to a lesser extent, in Latin America. For Europe, the main sources include the SPARC report from 2020, [79] the OPERAS report on social science and humanities infrastructures, [80] as well as the 2019 report of Katherine Skinner (which also extends to a few North American infrastructures). [81] International studies include the European Commission's 2010 report on The Role of E-Infrastructure, which mostly received input from Europe, South America and North America. [82]
These reports underline that important open science infrastructures may already exist and yet remain invisible to funders and scientific policies: "alternative practices and projects exist inside and outside Europe, but these projects are almost invisible to the eyes of the public authorities". [83]
Open access repositories are the most frequent form of open science infrastructure, [84] with 5,791 repositories in existence in December 2021 according to OpenDOAR. [85]
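Most repositories of this kind expose their metadata through OAI-PMH (Open Archives Initiative Protocol for Metadata Harvesting), the standard protocol that lets aggregators and search services index their contents. The following Python snippet is a minimal illustrative sketch, not a production harvester: it requests one batch of Dublin Core records from arXiv's public OAI-PMH endpoint. The endpoint and the oai_dc format are standard, the set name cs is just one example, and a real harvester would also follow resumption tokens and respect the repository's rate limits.

```python
# Minimal sketch: harvesting repository metadata over OAI-PMH,
# the standard interoperability protocol of open access repositories.
import urllib.request
import xml.etree.ElementTree as ET

# XML namespaces used by OAI-PMH responses and Dublin Core metadata.
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

# ListRecords returns one batch of records in the "oai_dc" format;
# further batches would require following the resumptionToken.
url = ("http://export.arxiv.org/oai2"
       "?verb=ListRecords&metadataPrefix=oai_dc&set=cs")

with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)

for record in tree.iter(OAI + "record"):
    header = record.find(OAI + "header")
    identifier = header.findtext(OAI + "identifier")
    # Each record carries standard Dublin Core fields (title, creator, ...).
    title = record.findtext(".//" + DC + "title")
    print(identifier, "-", (title or "").strip()[:80])
```

Because every OpenDOAR-listed repository speaks the same protocol, the same harvesting logic works across thousands of independent repositories, which is precisely what makes them function as a shared infrastructure rather than isolated silos.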
Yet, there is a significant diversification of the roles and activities of open science infrastructures, at least among the largest ones. In the survey of European infrastructures conducted by SPARC Europe, 95% of the respondents mention that they provide services in at least three different stages of research production out of six (Creation, Evaluation, Publishing, Hosting, Discovering and Archiving). [86] Aggregation, hosting and indexing are especially central activities, common to most open science infrastructures regardless of their focus.
Specialization does happen at a higher level: a network analysis identifies "two main clusters of activities".
Standardization is a major function of open science infrastructures, as they aim to ensure that the content they share and support is distributed consistently and can easily be reused.
Maintaining open standards is one of the main challenges identified by leading European open infrastructures, as it implies choosing among competing standards in some cases, as well as ensuring that the standards are correctly updated and accessible through APIs or other endpoints. [88] Two thirds of the respondents have undertaken an evaluation of their technological environment during the past year, to ensure that key components have not become obsolete. [89] As a consequence of these sustained efforts, most open infrastructures comply with the newly established standards of open science, such as FAIR data or Plan S. [89]
Open science infrastructures preferably integrate standards from other open science infrastructures. Among European infrastructures: "The most commonly cited systems – and thus essential infrastructure for many – are ORCID, Crossref, DOAJ, BASE, OpenAIRE, Altmetric, and Datacite, most of which are not-for-profit". [90] Google Scholar is the most frequently mentioned commercial service, while Scopus, the leading proprietary academic search engine developed by Elsevier, is one of the least cited leading services. [91] Open science infrastructures are thus part of an emerging "truly interoperable Open Science commons" that holds the promise of "researcher-centric, low-cost, innovative, and interoperable tools for research, superior to the present, largely closed system." [92]
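This interoperability rests largely on persistent identifiers and openly documented metadata APIs. As a minimal sketch of how one such service can be queried, the following Python snippet fetches the bibliographic record for a DOI from Crossref's public REST API; the helper name and User-Agent string are illustrative choices, and the DOI shown is Crossref's well-known test identifier rather than a real publication.

```python
# Minimal sketch: resolving a DOI through Crossref's open REST API.
# Crossref returns the bibliographic metadata as JSON under "message".
import json
import urllib.request

def fetch_crossref_metadata(doi: str) -> dict:
    """Return the Crossref metadata record for a given DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    # A descriptive User-Agent with contact details is recommended etiquette
    # for Crossref's public API (illustrative value here).
    request = urllib.request.Request(
        url, headers={"User-Agent": "osi-example/0.1 (mailto:user@example.org)"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["message"]

# Crossref's canonical test DOI, used here as a placeholder.
record = fetch_crossref_metadata("10.5555/12345678")
print(record["title"][0])       # titles are returned as a list of strings
print(record.get("publisher"))  # the registering publisher, when present
```

ORCID, DataCite and OpenAIRE expose comparable open endpoints, which is why a small set of shared identifiers (DOIs, ORCID iDs) is enough to link records consistently across otherwise independent infrastructures.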
Infrastructures are frequently dependent on choices made by external stakeholders, especially scientific publishers: they "do not themselves decide on the openness of content since they are dependent on the policies of content providers". [93] This affects not only the content but also the "user data policies [that] are set by publishers which limits what can be made available". [94]
Open science infrastructures have strong ties with the open source movement. 82% of the European infrastructures surveyed by SPARC report having partially built their services on open source software, and 53% run their entire technological infrastructure on open source. [89]
Governance has been self-identified as a potential weakness by the European infrastructures surveyed by SPARC. [95] Less than half of the respondents consider that they are at a "mature" stage in this regard, and "good governance" is quoted as the main challenge. [88] Interaction between the communities they aim to support and the other stakeholders and funders is especially complicated: "One specific challenge identified was the tension between serving the needs of the community of users versus prioritising the needs of clients that provide financial support to the OSI". [88]
The tension between centralization and diversity largely characterizes open science infrastructures. While historically defined as a "centralized [Open Access] project", Redalyc aims to become a "community-based sustainable infrastructure in Latin America" (Becerril). The leading European open infrastructures have reported "challenges around ensuring sufficient (and sufficiently diverse) representation" as well as involvement from some professional communities like researchers and librarians. [88]
Open Science Infrastructure "target and serve a wide range of stakeholders". [96] Researchers remain the primary target, but libraries, teachers and learners are among the expected audience of more than half of the infrastructure surveyed by Sparc Europe.
A majority of European infrastructures "operate at a global scale", with English being the primary language of 82% of the respondents. [97] These infrastructures are also frequently multilingual and integrate a specific national focus: they "provide access to a range of language content of local and international significance". [97]
Open science infrastructures benefit diverse disciplines and scientific communities. In 2020, 72% of the European infrastructures surveyed by SPARC Europe claimed to support all disciplines. The social sciences and the humanities are the most mentioned disciplines, which is partly attributed to the fact that the survey was "distributed widely by the OPERAS network". [98] In 2010, infrastructures supporting the social sciences and the humanities were much less prevalent and most of the use cases came from "biosciences, High Energy Physics and other fields of physics, earth and environmental sciences, computer science, astronomy and astrophysics". [99]
Many open science infrastructures run "at a relatively low cost", as small infrastructures are an important part of the open science ecosystem. [100] In 2020, 21 out of 53 surveyed European infrastructures "report spending less than €50,000". [100] Consequently, more than 75% of surveyed European infrastructures are run by small teams of 5 FTEs or less. [101] The size of an infrastructure and the extent of its funding are far from always proportional to the critical services it offers: "some of the most heavily used services make ends meet with a tiny core team of two to five people." [102] Volunteer contributions are significant as well, which is both "a strength and weakness to an OSI’s sustainability". [100] The landscape of open science infrastructures is therefore rather close to the ideal of a "decentralised network of small projects" envisioned by theorists of the scholarly commons. [103] A very large majority of open science infrastructures are non-commercial [104] and collaborations or financial support from the private sector remain very limited. [105]
Overall, European infrastructures were financially sustainable in 2020, [106] which contrasts with the situation ten years prior: in 2010, European infrastructures had much less visibility; they usually lacked "a long-term perspective" and struggled "with securing the funding for more than 5 years". [107] In 2020, European infrastructures frequently relied on grants from national funds and from the European Commission. [105] Without these grants, most of these actors "could only remain viable for less than a year". [104] Yet, one quarter of surveyed European infrastructures were not supported by any grants or subventions and relied either on alternative sources of income or on voluntary contributions. [100] As they can be "difficult to define adequately", open science infrastructures can be overlooked by funding bodies, which "contributes to the challenge of securing funding". [108]
Open access (OA) is a set of principles and a range of practices through which research outputs are distributed online, free of access charges or other barriers. With open access strictly defined, or libre open access, barriers to copying or reuse are also reduced or removed by applying an open license for copyright.
E-Science or eScience is computationally intensive science that is carried out in highly distributed network environments, or science that uses immense data sets that require grid computing; the term sometimes includes technologies that enable distributed collaboration, such as the Access Grid. The term was created by John Taylor, the Director General of the United Kingdom's Office of Science and Technology in 1999 and was used to describe a large funding initiative starting in November 2000. E-science has been more broadly interpreted since then, as "the application of computer technology to the undertaking of modern scientific investigation, including the preparation, experimentation, data collection, results dissemination, and long-term storage and accessibility of all materials generated through the scientific process. These may include data modeling and analysis, electronic/digitized laboratory notebooks, raw and fitted data sets, manuscript production and draft versions, pre-prints, and print and/or electronic publications." In 2014, IEEE eScience Conference Series condensed the definition to "eScience promotes innovation in collaborative, computationally- or data-intensive research across all disciplines, throughout the research lifecycle" in one of the working definitions used by the organizers. E-science encompasses "what is often referred to as big data [which] has revolutionized science... [such as] the Large Hadron Collider (LHC) at CERN... [that] generates around 780 terabytes per year... highly data intensive modern fields of science...that generate large amounts of E-science data include: computational biology, bioinformatics, genomics" and the human digital footprint for the social sciences.
Bibliometrics is the application of statistical methods to the study of bibliographic data, especially in scientific and library and information science contexts, and is closely associated with scientometrics to the point that both fields largely overlap.
Open science is the movement to make scientific research and its dissemination accessible to all levels of society, amateur or professional. Open science is transparent and accessible knowledge that is shared and developed through collaborative networks. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open-notebook science, broader dissemination and engagement in science and generally making it easier to publish, access and communicate scientific knowledge.
Open data is data that is openly accessible, exploitable, editable and shared by anyone for any purpose. Open data is licensed under an open license.
The Scholarly Publishing and Academic Resources Coalition (SPARC) is an international alliance of academic and research libraries developed by the Association of Research Libraries in 1998 which promotes open access to scholarship. The coalition currently includes some 800 institutions in North America, Europe, Japan, China and Australia.
Open scientific data or open research data is a type of open data focused on publishing observations and results of scientific activities available for anyone to analyze and reuse. A major purpose of the drive for open data is to allow the verification of scientific claims, by allowing others to look at the reproducibility of results, and to allow data from many sources to be integrated to give new knowledge.
The Panton Principles are a set of principles which were written to promote open science. They were first drafted in July 2009 at the Panton Arms pub in Cambridge.
The following is a timeline of the international movement for open access to scholarly communication.
Open access to scholarly communication in Germany has evolved rapidly since the early 2000s. Publishers Beilstein-Institut, Copernicus Publications, De Gruyter, Knowledge Unlatched, Leibniz Institute for Psychology Information, ScienceOpen, Springer Nature, and Universitätsverlag Göttingen belong to the international Open Access Scholarly Publishers Association.
In Belgium, open access to scholarly communication accelerated after 2007 when the University of Liège adopted its first open-access mandate. The "Brussels Declaration" for open access was signed by officials in 2012.
In France, open access to scholarly communication is relatively robust and has strong public support. Revues.org, a digital platform for social science and humanities publications, launched in 1999. Hyper Articles en Ligne (HAL) began in 2001. The French National Center for Scientific Research participated in 2003 in the creation of the influential Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities. Publishers EDP Sciences and OpenEdition belong to the international Open Access Scholarly Publishers Association.
Scholarly communication of the Netherlands published in open access form can be found by searching the National Academic Research and Collaboration Information System (NARCIS). The web portal was developed in 2004 by the Data Archiving and Networked Services of the Netherlands Organisation for Scientific Research and Royal Netherlands Academy of Arts and Sciences.
FAIR data are data which meet principles of findability, accessibility, interoperability, and reusability (FAIR). The acronym and principles were defined in a March 2016 paper in the journal Scientific Data by a consortium of scientists and organizations.
Diamond open access refers to academic texts published/distributed/preserved with no fees to either reader or author. Alternative labels include platinum open access, non-commercial open access, cooperative open access or, more recently, open access commons. While these terms were first coined in the 2000s and the 2010s, they have been retroactively applied to a variety of structures and forms of publishing, from subsidized university publishers to volunteer-run cooperatives that existed in prior decades.
FORCE11 is an international coalition of researchers, librarians, publishers and research funders working to reform or enhance the research publishing and communication system. Initiated in 2011 as a community of interest on scholarly communication, FORCE11 is a registered 501(c)(3) organization based in the United States but with members and partners around the world. Key activities include an annual conference, the Scholarly Communications Institute and a range of working groups.
Scientific languages are vehicular languages used by one or several scientific communities for international communication. According to Michael Gordin, they are "either specific forms of a given language that are used in conducting science, or they are the set of distinct languages in which science is done."
The economics of open science describe the economic aspects of making a wide range of scientific outputs available to all levels of society.
The open science movement has expanded the uses of scientific output beyond specialized academic circles.
An Open Science Monitor or Open Access Monitor is a scientific infrastructure that aims to assess the spread of open practices in a scientific context.