Social Semantic Web


The concept of the Social Semantic Web subsumes developments in which social interactions on the Web lead to the creation of explicit and semantically rich knowledge representations. The Social Semantic Web can be seen as a Web of collective knowledge systems, which are able to provide useful information based on human contributions and which get better as more people participate.[1] It combines technologies, strategies and methodologies from the Semantic Web, social software and Web 2.0.[2]


Overview

The socio-semantic web (S2W) aims to complement the formal Semantic Web vision by adding a pragmatic approach relying on description languages for semantic browsing, using heuristic classification and semiotic ontologies. A socio-semantic system continuously elicits crucial domain knowledge through semi-formal ontologies, taxonomies or folksonomies. The S2W emphasizes loose, human-created semantics as a means to fulfil the vision of the Semantic Web: instead of relying entirely on automated semantics with formal ontology processing and inferencing, humans collaboratively build semantics aided by socio-semantic information systems. While the Semantic Web enables the integration of business processing with precise automatic logical inference across domains, the socio-semantic web opens up a more social interface to the semantics of businesses, allowing interoperability between business objects, actions and their users.

The term socio-semantic web was coined by Manuel Zacklad and Jean-Pierre Cahier in 2003[citation needed] and used in the field of Computer-Supported Cooperative Work (CSCW). It was then discussed in Peter Morville's 2005 book Ambient Findability.[3] In Chapter 6, he defines the socio-semantic web as relying on "the pace-layering of ontologies, taxonomies, and folksonomies to learn and adapt as well as teach and remember." Morville writes, "I'll take the ancient tree of knowledge over the transient leaves of popularity any day",[4] expressing a scepticism towards the adoption of folksonomies. The socio-semantic web may be seen as a middle way between top-down monolithic taxonomies such as the Yahoo! Directory and collaborative tagging (folksonomy) approaches.

The socio-semantic web differs from the Semantic Web in that the Semantic Web is often regarded as a system that will solve the epistemic interoperability issues we have today. While the Semantic Web will provide ways for businesses to interoperate across domains, the socio-semantic web will enable users to share knowledge.

There are various possible social approaches to the problem of user-driven ontology evolution for the Semantic Web. First, users could create a folksonomy (a flat taxonomy); with social network analysis (SNA) in conjunction with automated parsers, an ontology could then be extracted from the tags and entered into a Topic Maps/TMCL[5] or RDF/OWL ontology store. Second, a set of ontology engineers or ontologists could manually analyze the tags created by the users and use this data to create a sounder ontology. The third approach is to create a system of self-governance in which the users themselves grow the ontology organically over time. All of these approaches could start out with an empty ontology or be seeded manually or with an existing ontology, for example the WordNet ontology.[6] Ontologies that describe social networks are themselves a core building block of the social web.
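As an illustration of the first approach, the following minimal sketch in Python (with the rdflib library) derives a lightweight SKOS vocabulary from folksonomy tags via co-occurrence. The tagging data, the example.org namespace and the threshold are hypothetical; a real system would hand the result to ontologists or a self-governing community for refinement.

```python
# Minimal sketch of the first approach: deriving a lightweight SKOS
# vocabulary from folksonomy tags. The tagging data, namespace and
# co-occurrence threshold below are hypothetical illustrations.
from collections import Counter
from itertools import combinations

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/tags/")

tagged_items = [
    ("article1", {"semweb", "rdf", "ontology"}),
    ("article2", {"rdf", "skos"}),
    ("article3", {"semweb", "ontology", "skos"}),
]

g = Graph()
g.bind("skos", SKOS)

# Every tag becomes a skos:Concept with its label preserved.
for tag in set().union(*(tags for _, tags in tagged_items)):
    g.add((EX[tag], RDF.type, SKOS.Concept))
    g.add((EX[tag], SKOS.prefLabel, Literal(tag)))

# Tags that co-occur on enough items are linked with skos:related.
cooccur = Counter()
for _, tags in tagged_items:
    cooccur.update(combinations(sorted(tags), 2))

THRESHOLD = 2  # arbitrary cut-off for this sketch
for (a, b), n in cooccur.items():
    if n >= THRESHOLD:
        g.add((EX[a], SKOS.related, EX[b]))

print(g.serialize(format="turtle"))
```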


Related Research Articles

Semantic Web: Extension of the Web to facilitate data exchange

The Semantic Web, sometimes known as Web 3.0, is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable.

In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology.

The Resource Description Framework (RDF) is a World Wide Web Consortium (W3C) standard originally designed as a data model for metadata. It has come to be used as a general method for description and exchange of graph data. RDF provides a variety of syntax notations and data serialization formats, with Turtle currently being the most widely used notation.
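To make the data model concrete, the following is a minimal sketch using the Python rdflib library: a few triples about a hypothetical resource, serialized in the Turtle notation mentioned above. The example.org namespace and the data are illustrative only.

```python
# Minimal sketch of an RDF graph built with the Python rdflib library
# and serialized in Turtle. The example.org resource is hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")

g = Graph()
g.bind("foaf", FOAF)

alice = EX["alice"]
g.add((alice, RDF.type, FOAF.Person))         # subject, predicate, object
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, FOAF.knows, EX["bob"]))

# Turtle: currently the most widely used RDF notation.
print(g.serialize(format="turtle"))
```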

The Web Ontology Language (OWL) is a family of knowledge representation languages for authoring ontologies. Ontologies are a formal way to describe taxonomies and classification networks, essentially defining the structure of knowledge for various domains: the nouns representing classes of objects and the verbs representing relations between the objects.
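The sketch below, again with rdflib, shows such definitions as triples: two classes linked by rdfs:subClassOf and an object property relating documents to people. The example.org vocabulary is hypothetical.

```python
# Minimal sketch of an OWL fragment as triples: classes as "nouns",
# an object property as a "verb". The example.org vocabulary is
# hypothetical.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/ontology#")

g = Graph()
g.bind("owl", OWL)

for cls in (EX.Document, EX.BlogPost, EX.Person):
    g.add((cls, RDF.type, OWL.Class))
g.add((EX.BlogPost, RDFS.subClassOf, EX.Document))   # taxonomy link

g.add((EX.hasAuthor, RDF.type, OWL.ObjectProperty))  # relation
g.add((EX.hasAuthor, RDFS.domain, EX.Document))
g.add((EX.hasAuthor, RDFS.range, EX.Person))

print(g.serialize(format="turtle"))
```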

An annotation is extra information associated with a particular point in a document or other piece of information. It can be a note that includes a comment or explanation. Annotations are sometimes presented in the margin of book pages. For annotations of different digital media, see web annotation and text annotation.

Tag (metadata): Keyword assigned to information

In information systems, a tag is a keyword or term assigned to a piece of information. This kind of metadata helps describe an item and allows it to be found again by browsing or searching. Tags are generally chosen informally and personally by the item's creator or by its viewer, depending on the system, although they may also be chosen from a controlled vocabulary.
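A minimal sketch of the retrieval side of tagging in plain Python follows: an inverted index maps each tag to the items that carry it, so an item can be found again by searching for any of its tags. The items and tags are made up.

```python
# Minimal sketch of tag-based retrieval: an inverted index mapping
# each tag to the items that carry it. Items and tags are made up.
from collections import defaultdict

index = defaultdict(set)

def tag_item(item, *tags):
    """Record informally chosen tags for an item."""
    for tag in tags:
        index[tag.lower()].add(item)

tag_item("photo42", "sunset", "beach")
tag_item("photo43", "beach", "volleyball")

# The item is found again by searching for any of its tags.
print(index["beach"])  # {'photo42', 'photo43'}
```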

A semantic wiki is a wiki that has an underlying model of the knowledge described in its pages. Regular, or syntactic, wikis have structured text and untyped hyperlinks. Semantic wikis, on the other hand, provide the ability to capture or identify information about the data within pages, and the relationships between pages, in ways that can be queried or exported like a database through semantic queries.
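The difference can be made concrete with rdflib's SPARQL support: in the minimal sketch below, the typed link capitalOf is queried like a database, which an untyped hyperlink would not allow. The example.org wiki vocabulary is hypothetical.

```python
# Minimal sketch of a semantic query over typed wiki links, using
# rdflib's SPARQL support. The example.org wiki vocabulary is
# hypothetical.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

WIKI = Namespace("http://example.org/wiki/")

g = Graph()
g.add((WIKI["Berlin"], RDF.type, WIKI["City"]))
g.add((WIKI["Berlin"], WIKI["capitalOf"], WIKI["Germany"]))
g.add((WIKI["Paris"], RDF.type, WIKI["City"]))
g.add((WIKI["Paris"], WIKI["capitalOf"], WIKI["France"]))

# Unlike an untyped hyperlink, the typed link can be queried:
results = g.query("""
    SELECT ?city ?country WHERE {
        ?city a <http://example.org/wiki/City> ;
              <http://example.org/wiki/capitalOf> ?country .
    }
""")
for city, country in results:
    print(city, country)
```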

Simple Knowledge Organization System (SKOS) is a W3C recommendation designed for representation of thesauri, classification schemes, taxonomies, subject-heading systems, or any other type of structured controlled vocabulary. SKOS is part of the Semantic Web family of standards built upon RDF and RDFS, and its main objective is to enable easy publication and use of such vocabularies as linked data.
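Below is a minimal sketch of a two-concept SKOS scheme built with rdflib, showing the broader/narrower structure typical of a thesaurus; the concepts and labels are invented for illustration.

```python
# Minimal sketch of a SKOS concept scheme with a broader/narrower
# hierarchy, built with rdflib. Concepts and labels are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/thesaurus/")

g = Graph()
g.bind("skos", SKOS)

g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))
for concept, label in ((EX.animal, "animal"), (EX.bird, "bird")):
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
    g.add((concept, SKOS.inScheme, EX.scheme))

g.add((EX.bird, SKOS.broader, EX.animal))    # bird is the narrower term
g.add((EX.animal, SKOS.narrower, EX.bird))

print(g.serialize(format="turtle"))
```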

BioMOBY is a registry of web services used in bioinformatics. It allows interoperability between biological data hosts and analytical services by annotating services with terms taken from standard ontologies. BioMOBY is released under the Artistic License.

Semantic interoperability is the ability of computer systems to exchange data with unambiguous, shared meaning. Semantic interoperability is a requirement to enable machine computable logic, inferencing, knowledge discovery, and data federation between information systems.

Semantically Interlinked Online Communities

The Semantically Interlinked Online Communities (SIOC) Project is a Semantic Web technology. SIOC provides methods for interconnecting discussion channels such as blogs, forums and mailing lists. It consists of the SIOC ontology, an open-standard machine-readable format for expressing the information contained both explicitly and implicitly in Internet discussion methods; of SIOC metadata producers for a number of popular blogging platforms and content management systems; and of storage and browsing/searching systems for leveraging this SIOC data.
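A minimal sketch of SIOC-style metadata for a single forum post, built with rdflib, follows. The namespace http://rdfs.org/sioc/ns# is the one SIOC publishes, but the forum, user and post resources are hypothetical and the terms shown are only a small sample of the ontology.

```python
# Minimal sketch of SIOC metadata for one forum post, using rdflib.
# http://rdfs.org/sioc/ns# is SIOC's published namespace; the forum,
# user and post resources are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

SIOC = Namespace("http://rdfs.org/sioc/ns#")
EX = Namespace("http://example.org/forum/")

g = Graph()
g.bind("sioc", SIOC)

g.add((EX.general, RDF.type, SIOC.Forum))
g.add((EX.alice, RDF.type, SIOC.UserAccount))
g.add((EX.post1, RDF.type, SIOC.Post))
g.add((EX.post1, SIOC.has_container, EX.general))  # posted in the forum
g.add((EX.post1, SIOC.has_creator, EX.alice))
g.add((EX.post1, SIOC.content, Literal("Hello, Social Semantic Web!")))

print(g.serialize(format="turtle"))
```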

In computer science, the semantic desktop is a collective term for ideas related to changing a computer's user interface and data handling capabilities so that data are more easily shared between different applications or tasks, and so that data which once could not be automatically processed by a computer becomes machine-processable. It also encompasses ideas about sharing information automatically between different people. The concept is closely related to the Semantic Web but is distinct insofar as its main concern is the personal use of information.

Semantic HTML: HTML used to reinforce meaning of documents or webpages

Semantic HTML is the use of HTML markup to reinforce the semantics, or meaning, of the information in web pages and web applications rather than merely to define its presentation or look. Semantic HTML is processed by traditional web browsers as well as by many other user agents. CSS is used to suggest its presentation to human users.
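As a minimal sketch of how a non-browser user agent can exploit such markup, the standard-library Python below extracts only the <article> content of a page and ignores presentational wrappers; the sample markup is made up.

```python
# Minimal sketch of a non-browser user agent exploiting semantic HTML:
# only the <article> content is extracted, presentational wrappers are
# ignored. The sample markup is made up; uses only the standard library.
from html.parser import HTMLParser

SAMPLE = """
<div class="fancy-box"><nav>Home | About</nav>
<article><h1>Title</h1><p>The actual content.</p></article></div>
"""

class ArticleText(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_article = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.in_article = True

    def handle_endtag(self, tag):
        if tag == "article":
            self.in_article = False

    def handle_data(self, data):
        if self.in_article and data.strip():
            self.chunks.append(data.strip())

parser = ArticleText()
parser.feed(SAMPLE)
print(" ".join(parser.chunks))  # Title The actual content.
```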

Amit Sheth is a computer scientist at the University of South Carolina in Columbia, South Carolina. He is the founding director of the Artificial Intelligence Institute and a professor of computer science and engineering. From 2007 to June 2019, he was the LexisNexis Ohio Eminent Scholar, director of the Ohio Center of Excellence in Knowledge-enabled Computing (Kno.e.sis), and a professor of computer science at Wright State University. Sheth's work has been cited by over 48,800 publications. He has an h-index of 106, which puts him among the top 100 computer scientists with the highest h-index. Prior to founding the Kno.e.sis Center, he served as the director of the Large Scale Distributed Information Systems Lab at the University of Georgia in Athens, Georgia.

Freebase was a large collaborative knowledge base consisting of data composed mainly by its community members. It was an online collection of structured data harvested from many sources, including individual, user-submitted wiki contributions. Freebase aimed to create a global resource that allowed people to access common information more effectively. It was developed by the American software company Metaweb and run publicly beginning in March 2007. Metaweb was acquired by Google in a private sale announced on 16 July 2010. Google's Knowledge Graph is powered in part by Freebase.

<span class="mw-page-title-main">Ontology engineering</span> Field that studies the methods and methodologies for building ontologies

In computer science, information science and systems engineering, ontology engineering is a field which studies the methods and methodologies for building ontologies, which encompasses a representation, formal naming and definition of the categories, properties and relations between the concepts, data and entities of a given domain of interest. In a broader sense, this field also includes a knowledge construction of the domain using formal ontology representations such as OWL/RDF. A large-scale representation of abstract concepts such as actions, time, physical objects and beliefs would be an example of ontological engineering. Ontology engineering is one of the areas of applied ontology, and can be seen as an application of philosophical ontology. Core ideas and objectives of ontology engineering are also central in conceptual modeling.

Folksonomy is a classification system in which end users apply public tags to online items, typically to make those items easier for themselves or others to find later. Over time, this can give rise to a classification system based on those tags and how often they are applied or searched for, in contrast to a taxonomic classification designed by the owners of the content and specified when it is published. This practice is also known as collaborative tagging, social classification, social indexing, and social tagging. Folksonomy was originally "the result of personal free tagging of information [...] for one's own retrieval", but online sharing and interaction expanded it into collaborative forms. Social tagging is the application of tags in an open online environment where the tags of other users are available to others. Collaborative tagging is tagging performed by a group of users. This type of folksonomy is commonly used in cooperative and collaborative projects such as research, content repositories, and social bookmarking.

Knowledge extraction is the creation of knowledge from structured and unstructured sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodologically similar to information extraction (NLP) and ETL, the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge or the generation of a schema based on the source data.
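As a toy illustration, the sketch below lifts a naive "X is a Y" pattern from free text into RDF triples with rdflib; the pattern and the example.org vocabulary are purely illustrative, and real extractors rely on NLP pipelines and existing formal ontologies.

```python
# Minimal sketch of knowledge extraction: a naive "X is a Y" pattern
# is lifted from free text into RDF triples rather than a flat table.
# The pattern and example.org vocabulary are purely illustrative.
import re

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/kb/")

text = "Turtle is a notation. SKOS is a vocabulary."

g = Graph()
for subject, obj in re.findall(r"(\w+) is a (\w+)", text):
    cls = EX[obj.capitalize()]
    g.add((cls, RDF.type, RDFS.Class))       # generate a small schema
    g.add((EX[subject], RDF.type, cls))      # and instance assertions

print(g.serialize(format="turtle"))
```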

In natural language processing, linguistics, and neighboring fields, Linguistic Linked Open Data (LLOD) describes a method and an interdisciplinary community concerned with creating, sharing, and (re-)using language resources in accordance with Linked Data principles. The Linguistic Linked Open Data Cloud was conceived and is being maintained by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation, and has since become a focal point of activity for several W3C community groups, research projects, and infrastructure efforts.

References

  1. Tom Gruber (2006). "Where the Social Web Meets the Semantic Web". Keynote presentation at ISWC 2006, the 5th International Semantic Web Conference, November 7, 2006.
  2. Katrin Weller (2010). Knowledge Representation in the Social Semantic Web. Berlin: De Gruyter Saur.
  3. Morville, Peter (26 September 2005). Ambient Findability. O'Reilly Media. ISBN 978-0-596-00765-2.
  4. Morville 2005, p. 139.
  5. "The Topic Map Constraint Language".
  6. "WordNet in RDFS and OWL".
  7. Almeida JS, Deus HF, Maass W (2010). "S3DB core: a framework for RDF generation and management in bioinformatics infrastructures". BMC Bioinformatics. 11: 387. doi:10.1186/1471-2105-11-387. PMC 2918582. PMID 20646315.