The Asset Description Metadata Schema (ADMS) is a common metadata vocabulary for describing standards, so-called interoperability assets, on the Web.
Used in concert with web syndication technology, ADMS helps people make sense of the complex multi-publisher environment around standards, in particular those that are semantic assets such as ontologies, data models, data dictionaries, code lists, and XML and RDF schemas. In spite of their importance, standards are not easily discoverable on the Web via search engines because metadata about them is seldom available, and navigating the websites of their different publishers is not efficient either.
A semantic asset is a specific type of standard that involves highly reusable metadata (e.g. XML schemas, generic data models) and/or reference data (e.g. code lists, taxonomies, dictionaries, vocabularies).
Organisations use semantic assets to share information and knowledge, both internally and with others. Semantic assets are usually very valuable and reusable elements for the development of information systems, in particular as part of machine-to-machine interfaces. As enablers of interoperable information exchange, semantic assets are usually created, published and maintained by standardisation bodies. Nonetheless, ICT projects and groups of experts also create such assets. There are therefore many publishers of semantic assets, with different degrees of formalism.
ADMS [1] is a standardised metadata vocabulary created by the EU's Interoperability Solutions for European Public Administrations (ISA) Programme [2] of the European Commission to help publishers of standards document what their standards are about (their name, their status, theme, version, etc.) and where they can be found on the Web. ADMS descriptions can then be published on different websites while the standard itself remains on the website of its publisher (i.e. syndication of content). ADMS embraces the multi-publisher environment and, at the same time, it provides the means for the creation of aggregated catalogues of standards and single points of access to them based on ADMS descriptions. The Commission will offer a single point of access to standards described using ADMS via its collaborative platform, Joinup. [3] The Federation [4] service will increase the visibility of standards described with ADMS on the web. This will also stimulate their reuse by Pan-European initiatives.
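As an illustration of the kind of record ADMS enables, the following sketch builds a minimal description of a hypothetical code list in Python with the rdflib library (assumed to be installed). The class and property names (adms:Asset, adms:status, dcat:accessURL) follow the ADMS and DCAT vocabularies, while the asset URI, titles and status value are invented for the example and would differ in a real description.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCAT, DCTERMS, OWL, RDF

ADMS = Namespace("http://www.w3.org/ns/adms#")

g = Graph()
g.bind("adms", ADMS)
g.bind("dcterms", DCTERMS)
g.bind("dcat", DCAT)

# Hypothetical semantic asset: a country code list maintained by its publisher.
asset = URIRef("http://example.org/assets/country-codes")
g.add((asset, RDF.type, ADMS.Asset))
g.add((asset, DCTERMS.title, Literal("Country Code List", lang="en")))
g.add((asset, DCTERMS.description, Literal("A reusable code list of countries.", lang="en")))
g.add((asset, OWL.versionInfo, Literal("1.0")))
g.add((asset, ADMS.status, URIRef("http://purl.org/adms/status/Completed")))

# The distribution points to where the standard itself is published.
dist = URIRef("http://example.org/assets/country-codes/dist/rdf")
g.add((dist, RDF.type, ADMS.AssetDistribution))
g.add((dist, DCAT.accessURL, URIRef("http://example.org/downloads/country-codes.rdf")))
g.add((asset, DCAT.distribution, dist))

print(g.serialize(format="turtle"))
```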
More than 43 people from 20 EU Member States, as well as from the US and Australia, have participated in the ADMS Working Group. Most of them were experts from standardisation bodies, research centres and the European Commission. The working group used a methodology based on W3C's processes and methods. [5]
ADMS version 1 was officially released in April 2012. [6] Version 1.00 of ADMS is available for download on Joinup: [3] https://web.archive.org/web/20120430065401/http://joinup.ec.europa.eu/asset/adms/release/100 [7]
ADMS is offered under ISA's Open Metadata Licence v1.1. [8]
The ADMS specification reuses existing metadata vocabularies and core vocabularies, including Dublin Core.
ADMS v1.00 will be contributed [12] to W3C's Government Linked Data (GLD) Working Group. [13] This means that ADMS will be published by the GLD Working Group as a First Public Working Draft for further consultation within the context of the typical W3C standardisation process. The desired outcome of that process will be the publication of ADMS as a W3C Recommendation available under W3C's Royalty-Free License.
The ADMS RDFS Vocabulary already has a w3.org namespace: http://www.w3.org/ns/adms#.
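Once descriptions from several publishers have been harvested, an aggregated catalogue can be queried through this namespace. The snippet below is a sketch of such a query in Python with rdflib; the input file names are placeholders for syndicated ADMS documents and the query itself is illustrative.

```python
from rdflib import Graph

# Merge ADMS descriptions harvested from several publishers into one graph
# (the file names are placeholders for syndicated ADMS/RDF documents).
catalogue = Graph()
for source in ["publisher_a.ttl", "publisher_b.ttl"]:
    catalogue.parse(source, format="turtle")

# List all described assets with their titles and, where present, their status.
query = """
PREFIX adms:    <http://www.w3.org/ns/adms#>
PREFIX dcterms: <http://purl.org/dc/terms/>

SELECT ?asset ?title ?status
WHERE {
    ?asset a adms:Asset ;
           dcterms:title ?title .
    OPTIONAL { ?asset adms:status ?status }
}
"""
for row in catalogue.query(query):
    print(row.asset, row.title, row.status)
```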
The Dublin Core™, also known as the Dublin Core™ Metadata Element Set, is a set of fifteen "core" elements (properties) for describing resources. This fifteen-element Dublin Core™ has been formally standardized as ISO 15836, ANSI/NISO Z39.85, and IETF RFC 5013. The core properties are part of a larger set of DCMI Metadata Terms. "Dublin Core™" is also used as an adjective for Dublin Core™ metadata, a style of metadata that draws on multiple RDF vocabularies, packaged and constrained in Dublin Core™ application profiles.
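A minimal illustration of the element set, as a Python/rdflib sketch: the resource URI and values are invented, and the dc: namespace used here is the legacy fifteen-element set, whereas richer descriptions typically draw on the wider DCMI Metadata Terms.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC  # http://purl.org/dc/elements/1.1/

g = Graph()
doc = URIRef("http://example.org/report-2012")  # hypothetical resource

# A handful of the fifteen core elements.
g.add((doc, DC.title, Literal("Annual Interoperability Report")))
g.add((doc, DC.creator, Literal("Example Agency")))
g.add((doc, DC.date, Literal("2012-04-30")))
g.add((doc, DC.language, Literal("en")))
g.add((doc, DC.format, Literal("application/pdf")))

print(g.serialize(format="turtle"))
```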
The Semantic Web is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable. To enable the encoding of semantics with the data, technologies such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL) are used. These technologies are used to formally represent metadata. For example, an ontology can describe concepts, relationships between entities, and categories of things. These embedded semantics offer significant advantages such as reasoning over data and operating with heterogeneous data sources.
The Resource Description Framework (RDF) is a family of World Wide Web Consortium (W3C) specifications originally designed as a metadata data model. It has come to be used as a general method for conceptual description or modeling of information that is implemented in web resources, using a variety of syntax notations and data serialization formats. It is also used in knowledge management applications.
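The subject-predicate-object model and its independence from any particular syntax can be sketched as follows (Python with rdflib assumed; the resource is invented): a single two-triple graph written out in three serialization formats.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
alice = URIRef("http://example.org/alice")       # subject
g.add((alice, RDF.type, FOAF.Person))            # predicate rdf:type, object foaf:Person
g.add((alice, FOAF.name, Literal("Alice")))      # predicate foaf:name, object a literal

# The same graph can be written in different serialization formats.
print(g.serialize(format="turtle"))
print(g.serialize(format="nt"))    # N-Triples
print(g.serialize(format="xml"))   # RDF/XML
```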
The Web Ontology Language (OWL) is a family of knowledge representation languages for authoring ontologies. Ontologies are a formal way to describe taxonomies and classification networks, essentially defining the structure of knowledge for various domains: the nouns representing classes of objects and the verbs representing relations between the objects. Ontologies resemble class hierarchies in object-oriented programming but there are several critical differences. Class hierarchies are meant to represent structures used in source code that evolve fairly slowly whereas ontologies are meant to represent information on the Internet and are expected to be evolving almost constantly. Similarly, ontologies are typically far more flexible as they are meant to represent information on the Internet coming from all sorts of heterogeneous data sources. Class hierarchies on the other hand are meant to be fairly static and rely on far less diverse and more structured sources of data such as corporate databases.
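As a sketch of how an ontology expresses such structure (class and property names invented, Python with rdflib assumed), the snippet below declares two OWL classes, a subclass relation between them, and an object property relating their instances.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/ontology#")  # hypothetical ontology namespace

g = Graph()
g.bind("ex", EX)

# "Nouns": classes of objects, arranged in a small hierarchy.
g.add((EX.Standard, RDF.type, OWL.Class))
g.add((EX.SemanticAsset, RDF.type, OWL.Class))
g.add((EX.SemanticAsset, RDFS.subClassOf, EX.Standard))
g.add((EX.Publisher, RDF.type, OWL.Class))

# A "verb": an object property relating publishers to the assets they maintain.
g.add((EX.publishes, RDF.type, OWL.ObjectProperty))
g.add((EX.publishes, RDFS.domain, EX.Publisher))
g.add((EX.publishes, RDFS.range, EX.SemanticAsset))

print(g.serialize(format="turtle"))
```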
RDFa is a W3C Recommendation that adds a set of attribute-level extensions to HTML, XHTML and various XML-based document types for embedding rich metadata within Web documents. The RDF data-model mapping enables its use for embedding RDF subject-predicate-object expressions within XHTML documents. It also enables the extraction of RDF model triples by compliant user agents.
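A minimal, hypothetical HTML fragment with RDFa attributes is shown below as a Python string; the comment spells out the triple an RDFa-aware processor would extract. The vocabulary prefix and values are illustrative.

```python
# A fragment of HTML annotated with RDFa attributes. A compliant processor
# would extract the triple:
#   <http://example.org/assets/country-codes> dcterms:title "Country Code List"
rdfa_fragment = """
<div xmlns:dcterms="http://purl.org/dc/terms/"
     about="http://example.org/assets/country-codes">
  <span property="dcterms:title">Country Code List</span>
</div>
"""
print(rdfa_fragment)
```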
Simple Knowledge Organization System (SKOS) is a W3C recommendation designed for representation of thesauri, classification schemes, taxonomies, subject-heading systems, or any other type of structured controlled vocabulary. SKOS is part of the Semantic Web family of standards built upon RDF and RDFS, and its main objective is to enable easy publication and use of such vocabularies as linked data.
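A small sketch of a controlled vocabulary expressed in SKOS (the concept scheme and concepts are invented; Python with rdflib assumed):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/themes/")  # hypothetical concept scheme namespace

g = Graph()
g.bind("skos", SKOS)

scheme = EX.ThemeScheme
g.add((scheme, RDF.type, SKOS.ConceptScheme))

# Two concepts, one narrower than the other.
g.add((EX.Government, RDF.type, SKOS.Concept))
g.add((EX.Government, SKOS.prefLabel, Literal("Government", lang="en")))
g.add((EX.Government, SKOS.inScheme, scheme))

g.add((EX.eGovernment, RDF.type, SKOS.Concept))
g.add((EX.eGovernment, SKOS.prefLabel, Literal("eGovernment", lang="en")))
g.add((EX.eGovernment, SKOS.broader, EX.Government))
g.add((EX.eGovernment, SKOS.inScheme, scheme))

print(g.serialize(format="turtle"))
```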
The AgMES initiative was developed by the Food and Agriculture Organization (FAO) of the United Nations and aims to encompass issues of semantic standards in the domain of agriculture with respect to description, resource discovery, interoperability and data exchange for different types of information resources.
Agricultural Information Management Standards, abbreviated to AIMS, is a space for accessing and discussing agricultural information management standards, tools and methodologies, connecting information workers worldwide to build a global community of practice. Information management standards, tools and good practices can be found on AIMS.
The Semantically-Interlinked Online Communities (SIOC) Project is a Semantic Web technology. SIOC provides methods for interconnecting discussion methods such as blogs, forums and mailing lists with each other. It consists of the SIOC ontology, an open-standard machine-readable format for expressing the information contained both explicitly and implicitly in Internet discussion methods; of SIOC metadata producers for a number of popular blogging platforms and content management systems; and of storage and browsing/searching systems for leveraging this SIOC data.
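To sketch what SIOC metadata for a single forum post could look like, the snippet below uses Python with rdflib. The sioc: term names are quoted from memory of the SIOC core ontology and should be checked against the specification; the URIs and content are invented.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

SIOC = Namespace("http://rdfs.org/sioc/ns#")

g = Graph()
g.bind("sioc", SIOC)
g.bind("foaf", FOAF)

post = URIRef("http://example.org/forum/post/42")
author = URIRef("http://example.org/user/alice")

g.add((post, RDF.type, SIOC.Post))
g.add((post, SIOC.content, Literal("Has anyone mapped this code list to SKOS?")))
g.add((post, SIOC.has_creator, author))
g.add((author, RDF.type, SIOC.UserAccount))
g.add((author, FOAF.name, Literal("Alice")))

print(g.serialize(format="turtle"))
```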
SIMILE was a joint research project run by the World Wide Web Consortium (W3C), Massachusetts Institute of Technology Libraries and MIT CSAIL and funded by the Andrew W. Mellon Foundation. The project ran from 2003 to August 2008. It focused on developing tools to increase the interoperability of disparate digital collections. Much of SIMILE's technical focus was oriented towards Semantic Web technology and standards such as the Resource Description Framework (RDF).
In computing, linked data is structured data which is interlinked with other data so it becomes more useful through semantic queries. It builds upon standard Web technologies such as HTTP, RDF and URIs, but rather than using them to serve web pages only for human readers, it extends them to share information in a way that can be read automatically by computers. Part of the vision of linked data is for the Internet to become a global database.
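A sketch of the "follow your nose" pattern in Python with rdflib, assuming network access and that the remote server performs content negotiation; the DBpedia URI is simply one well-known example of a dereferenceable linked-data identifier.

```python
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

# Dereference a linked-data URI; content negotiation should return RDF
# rather than the HTML page intended for human readers.
uri = "http://dbpedia.org/resource/Berlin"
g = Graph()
g.parse(uri)

# Print the labels asserted for the resource in the data that came back.
for label in g.objects(URIRef(uri), RDFS.label):
    print(label)
```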
The Semantic Interoperability Centre Europe (SEMIC.EU) was an eGovernment service initiated by the European Commission and managed by the Interoperable Delivery of European eGovernment Services to public Administrations, Businesses and Citizens (IDABC) Unit. As one of the 'horizontal measures' of the IDABC, it was established as a permanent implementation of the principles stipulated in the 'European Interoperability Framework' (EIF). It offered a service for the exchange of semantic interoperability solutions, with a focus on the demands of eGovernment in Europe. Through the establishment of a single sharing and collaboration point, the European Union wanted to resolve the problems of semantic interoperability amongst the EU member states. The main idea behind the service was to make visible specifications that already exist, so as to increase their reuse. In this way, governmental agencies and developers would benefit by not reinventing the wheel, reducing development costs, and increasing the interoperability of their systems.
The Publishing Requirements for Industry Standard Metadata (PRISM) specification defines a set of XML metadata vocabularies for syndicating, aggregating, post-processing and multi-purposing content. PRISM provides a framework for the interchange and preservation of content and metadata, a collection of elements to describe that content, and a set of controlled vocabularies listing the values for those elements. PRISM can be XML, RDF/XML, or XMP and incorporates Dublin Core elements. PRISM can be thought of as a set of XML tags used to contain the metadata of articles and even tag article content.
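To sketch what PRISM metadata for a magazine article could look like, the fragment below is a plain XML string combining Dublin Core and PRISM elements. The prism: element names and namespace URI are quoted from memory of the PRISM basic vocabulary and should be verified against the specification; all values are invented.

```python
# Hypothetical PRISM/Dublin Core metadata for a magazine article.
prism_example = """<?xml version="1.0" encoding="UTF-8"?>
<article xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:prism="http://prismstandard.org/namespaces/basic/2.0/">
  <dc:title>Describing Standards on the Web</dc:title>
  <dc:creator>Example Author</dc:creator>
  <prism:publicationName>Example Technology Review</prism:publicationName>
  <prism:coverDate>2012-04-01</prism:coverDate>
  <prism:volume>7</prism:volume>
</article>
"""
print(prism_example)
```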
Knowledge extraction is the creation of knowledge from structured and unstructured sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction (NLP) and ETL, the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge or the generation of a schema based on the source data.
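A minimal sketch of the idea in Python with rdflib: a row from a relational source is not merely restructured but mapped onto terms of an existing formal vocabulary (here FOAF), so that the result can participate in inferencing. The table row and the mapping are invented.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/data/")  # hypothetical target namespace

# A row as it might come from a relational table or CSV export.
row = {"id": "emp-17", "name": "Alice Example", "email": "alice@example.org"}

g = Graph()
g.bind("foaf", FOAF)

# Map the row onto an existing formal vocabulary rather than merely
# reproducing the relational structure.
person = EX[row["id"]]
g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal(row["name"])))
g.add((person, FOAF.mbox, URIRef("mailto:" + row["email"])))

print(g.serialize(format="turtle"))
```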
Data Catalog Vocabulary (DCAT) is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web. By using DCAT to describe datasets in catalogs, publishers increase discoverability and enable applications to consume metadata from multiple catalogs. It enables decentralized publishing of catalogs and facilitates federated dataset search across catalogs. Aggregated DCAT metadata can serve as a manifest file to facilitate digital preservation.
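A small sketch of a catalogue described with DCAT (Python with rdflib assumed; all URIs and titles are invented):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

EX = Namespace("http://example.org/catalog/")  # hypothetical catalogue namespace

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

catalog = EX.main
g.add((catalog, RDF.type, DCAT.Catalog))
g.add((catalog, DCTERMS.title, Literal("Example Open Data Catalogue", lang="en")))

dataset = EX.population2012
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Population statistics 2012", lang="en")))
g.add((catalog, DCAT.dataset, dataset))

distribution = EX.population2012_csv
g.add((distribution, RDF.type, DCAT.Distribution))
g.add((distribution, DCAT.downloadURL, URIRef("http://example.org/files/population2012.csv")))
g.add((dataset, DCAT.distribution, distribution))

print(g.serialize(format="turtle"))
```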
Joinup is a collaboration platform created by the European Commission. It is funded by the European Union via its Interoperability Solutions for Public Administrations Programme.
The Open Semantic Framework (OSF) is an integrated software stack using semantic technologies for knowledge management. It has a layered architecture that combines existing open source software with additional open source components developed specifically to provide a complete Web application framework. OSF is made available under the Apache 2 license.
In natural language processing, linguistics, and neighboring fields, Linguistic Linked Open Data (LLOD) describes a method and an interdisciplinary community concerned with creating, sharing, and (re-)using language resources in accordance with Linked Data principles. The Linguistic Linked Open Data Cloud was conceived and is being maintained by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation, but has been a point of focal activity for several W3C community groups, research projects, and infrastructure efforts since then.
OntoLex is the short name of a vocabulary for lexical resources in the web of data (OntoLex-Lemon) and the short name of the W3C community group that created it.
In linguistics and language technology, a language resource is a "[composition] of linguistic material used in the construction, improvement and/or evaluation of language processing applications, (...) in language and language-mediated research studies and applications".