| Sebastian Schaffert | |
|---|---|
| Born | March 18, 1976, Trostberg, Germany |
| Nationality | German and Swedish |
| Occupation | Site Reliability Engineering Manager |
| Employer | |
Sebastian Schaffert is a software engineer and researcher. He was born in Trostberg, Bavaria, Germany, on March 18, 1976, [1] and obtained his doctorate in 2004. [2]
Before moving out of research, he was very active in the Semantic Web, Linked Data and Multimedia Semantics fields; his work has received more than 1,800 citations. [3] He is a contributor to open source projects, among them Apache Marmotta, [4] and participated in several European FP6 and FP7 research projects such as REWERSE (Reasoning on the Web with Rules and Semantics), [5] [6] KiWi (Knowledge in a Wiki), [7] [8] IKS (Interactive Knowledge Stack) [9] and MICO (Media in Context). [10]
In April 2001, he graduated in Computer Science with a thesis on "Grouping Structures for Semistructured Data: Enhancing Data Modelling and Data Retrieval". [11] [12] [13]
In December 2004, he obtained his doctorate at the faculty of Mathematics, Computer Science and Statistics at the Ludwig-Maximilians-Universität of Munich with the thesis "Xcerpt: A Rule-Based Query and Transformation Language for the Web". [1] [2] [14]
In August 2005, he joined the Salzburg Research Institute as a senior researcher and project manager, where he later became head of the Knowledge and Media Technologies group. [2]
In 2006, he also became scientific director of the Salzburg NewMedia Lab. [2]
In 2009–2010, he taught at the Fachhochschule (University of Applied Sciences) of Salzburg. [15]
In 2013, he co-founded Redlink and held the CTO position. [4] [16] [17] [18] [19]
In December 2014, he joined Google in Zürich, where he currently works as a Site Reliability Engineering Manager.
His publications include his PhD thesis as well as numerous articles, research papers, and co-authored books.
The Semantic Web is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable. To enable the encoding of semantics with the data, technologies such as Resource Description Framework (RDF) and Web Ontology Language (OWL) are used. These technologies are used to formally represent metadata. For example, ontology can describe concepts, relationships between entities, and categories of things. These embedded semantics offer significant advantages such as reasoning over data and operating with heterogeneous data sources.
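RDF's data model mentioned above reduces every statement to a subject-predicate-object triple. A minimal sketch in Python illustrates the idea; the `ex:` URIs and the `objects` helper are illustrative, not part of any real vocabulary or library:

```python
# Every RDF statement is a (subject, predicate, object) triple.
# The "ex:" identifiers below are made-up example URIs.
RDF_TYPE = "rdf:type"

triples = [
    ("ex:SebastianSchaffert", RDF_TYPE, "ex:Person"),
    ("ex:SebastianSchaffert", "ex:worksOn", "ex:ApacheMarmotta"),
    ("ex:ApacheMarmotta", RDF_TYPE, "ex:SoftwareProject"),
]

def objects(subject, predicate, data):
    """Return all objects of triples matching the given subject and predicate."""
    return [o for s, p, o in data if s == subject and p == predicate]

print(objects("ex:SebastianSchaffert", RDF_TYPE, triples))  # ['ex:Person']
```

Because the model is uniform, heterogeneous sources can be merged by simply concatenating their triple lists, which is one of the advantages the paragraph above alludes to.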
A semantic wiki is a wiki that has an underlying model of the knowledge described in its pages. Regular, or syntactic, wikis have structured text and untyped hyperlinks. Semantic wikis, on the other hand, provide the ability to capture or identify information about the data within pages, and the relationships between pages, in ways that can be queried or exported like a database through semantic queries.
Semantic MediaWiki (SMW) is an extension to MediaWiki that allows for annotating semantic data within wiki pages, thus turning a wiki that incorporates the extension into a semantic wiki. Data that has been encoded can be used in semantic searches, used for aggregation of pages, displayed in formats like maps, calendars and graphs, and exported to the outside world via formats like RDF and CSV.
Ontology alignment, or ontology matching, is the process of determining correspondences between concepts in ontologies. A set of correspondences is also called an alignment. The phrase takes on slightly different meanings in computer science, cognitive science, and philosophy.
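A toy matcher can convey what a correspondence is. The sketch below proposes alignments purely from label similarity (Jaccard overlap of underscore-separated tokens); real alignment systems also exploit structure, instances, and background knowledge, and the threshold here is an arbitrary illustrative choice:

```python
def jaccard(a, b):
    """Jaccard similarity between the token sets of two concept labels."""
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def align(onto_a, onto_b, threshold=0.5):
    """Propose label correspondences whose similarity meets the threshold."""
    return [(x, y) for x in onto_a for y in onto_b if jaccard(x, y) >= threshold]

# {postal_address, address} share the token "address" (similarity 0.5).
print(align(["postal_address", "person"], ["address", "human"]))
# [('postal_address', 'address')]
```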
An RDF query language is a computer language, specifically a query language for databases, able to retrieve and manipulate data stored in Resource Description Framework (RDF) format.
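The core operation of an RDF query language such as SPARQL is matching triple patterns, where some positions are variables to be bound. A minimal sketch, with `?`-prefixed tokens standing in for query variables (the data and URIs are illustrative):

```python
def match(pattern, triple):
    """Bind pattern variables (tokens starting with '?') against one triple,
    returning a bindings dict on success or None on mismatch."""
    bindings = {}
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if bindings.get(p, t) != t:  # same variable must bind consistently
                return None
            bindings[p] = t
        elif p != t:
            return None
    return bindings

data = [
    ("ex:Bob", "foaf:knows", "ex:Fred"),
    ("ex:Bob", "foaf:age", "35"),
]

# "Whom does Bob know?" -- analogous to SPARQL:
#   SELECT ?who WHERE { ex:Bob foaf:knows ?who }
results = [b["?who"] for t in data if (b := match(("ex:Bob", "foaf:knows", "?who"), t))]
print(results)  # ['ex:Fred']
```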
The Semantically-Interlinked Online Communities (SIOC) Project is a Semantic Web technology. SIOC provides methods for interconnecting discussion methods such as blogs, forums and mailing lists to each other. It consists of the SIOC ontology, an open-standard machine-readable format for expressing the information contained both explicitly and implicitly in Internet discussion methods, of SIOC metadata producers for a number of popular blogging platforms and content management systems, and of storage and browsing/searching systems for leveraging this SIOC data.
Semantic search denotes search with meaning, as distinguished from lexical search where the search engine looks for literal matches of the query words or variants of them, without understanding the overall meaning of the query. Semantic search seeks to improve search accuracy by understanding the searcher's intent and the contextual meaning of terms as they appear in the searchable dataspace, whether on the Web or within a closed system, to generate more relevant results. Semantic search systems consider various points including context of search, location, intent, variation of words, synonyms, generalized and specialized queries, concept matching and natural language queries to provide relevant search results.
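The contrast between lexical and semantic matching can be sketched with a toy synonym expansion; the synonym table below is an illustrative stand-in for the thesauri, ontologies, or embeddings a real semantic search system would use:

```python
# Illustrative synonym table (a real system would use a thesaurus or embeddings).
SYNONYMS = {"car": {"car", "automobile", "vehicle"}}

def lexical_search(query, docs):
    """Match only documents containing the literal query word."""
    return [d for d in docs if query in d.lower().split()]

def semantic_search(query, docs):
    """Expand the query with synonyms before matching."""
    terms = SYNONYMS.get(query, {query})
    return [d for d in docs if terms & set(d.lower().split())]

docs = ["used automobile for sale", "car repair shop", "fresh apples"]
print(lexical_search("car", docs))   # ['car repair shop']
print(semantic_search("car", docs))  # ['used automobile for sale', 'car repair shop']
```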
Machine interpretation of documents and services in a Semantic Web environment is primarily enabled by (a) the capability to mark documents, document segments and services with semantic tags and (b) the ability to establish contextual relations between the tags with a domain model, which is formally represented as an ontology. Human beings use natural languages to communicate an abstract view of the world. Natural language constructs are symbolic representations of human experience and are close to the conceptual model that Semantic Web technologies deal with. Thus, natural language constructs have been naturally used to represent the ontology elements. This makes it convenient to apply Semantic Web technologies in the domain of textual information. In contrast, multimedia documents are perceptual recordings of human experience. An attempt to use a conceptual model to interpret the perceptual records is severely impaired by the semantic gap that exists between the perceptual media features and the conceptual world. Notably, concepts have their roots in the perceptual experience of human beings, and the apparent disconnect between the conceptual and the perceptual world is rather artificial. The key to semantic processing of multimedia data lies in harmonizing these seemingly isolated conceptual and perceptual worlds. The representation of domain knowledge needs to be extended to enable perceptual modeling, over and above the conceptual modeling that is already supported. The perceptual model of a domain primarily comprises observable media properties of the concepts. Such perceptual models are useful for semantic interpretation of media documents, just as the conceptual models help in the semantic interpretation of textual documents.
A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. There are also examples of probabilistic reasoners, including Pei Wang's non-axiomatic reasoning system, and probabilistic logic networks.
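Forward chaining, mentioned above, repeatedly fires rules against known facts until no new conclusions appear. A minimal sketch follows; the fact and rule names are illustrative, loosely in the spirit of RDFS subclass reasoning:

```python
def forward_chain(facts, rules):
    """Naively apply rules (premises -> conclusion) until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative "is-a" chain: Linked Data platform -> Semantic Web software -> software.
rules = [
    ({"marmotta_is_linked_data_platform"}, "marmotta_is_semantic_web_software"),
    ({"marmotta_is_semantic_web_software"}, "marmotta_is_software"),
]
derived = forward_chain({"marmotta_is_linked_data_platform"}, rules)
print("marmotta_is_software" in derived)  # True
```

Backward chaining would instead start from the goal (`marmotta_is_software`) and search for rules whose conclusion matches it.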
A triplestore or RDF store is a purpose-built database for the storage and retrieval of triples through semantic queries. A triple is a data entity composed of subject-predicate-object, like "Bob is 35" or "Bob knows Fred".
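A toy in-memory triplestore can show why such databases are purpose-built: keeping an index per triple position makes pattern lookups cheap. The class and method names below are illustrative, not a real triplestore API:

```python
from collections import defaultdict

class TripleStore:
    """A toy in-memory triplestore with one index per triple position."""
    def __init__(self):
        self.by_subject = defaultdict(set)
        self.by_predicate = defaultdict(set)
        self.by_object = defaultdict(set)

    def add(self, s, p, o):
        triple = (s, p, o)
        self.by_subject[s].add(triple)
        self.by_predicate[p].add(triple)
        self.by_object[o].add(triple)

    def subjects_of(self, p, o):
        """All subjects s such that the triple (s, p, o) is stored."""
        return {s for s, _, _ in self.by_predicate[p] & self.by_object[o]}

store = TripleStore()
store.add("Bob", "age", "35")      # "Bob is 35"
store.add("Bob", "knows", "Fred")  # "Bob knows Fred"
store.add("Alice", "knows", "Fred")
print(sorted(store.subjects_of("knows", "Fred")))  # ['Alice', 'Bob']
```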
Dr. Amit Sheth is a computer scientist at Wright State University in Dayton, Ohio. He is the Lexis Nexis Ohio Eminent Scholar for Advanced Data Management and Analysis. Up to October 2018, Sheth's work has been cited by over 41,000 publications. He has an h-index of 100, which puts him among the top 100 computer scientists with the highest h-index. Prior to founding the Kno.e.sis Center, he served as the director of the Large Scale Distributed Information Systems Lab at the University of Georgia in Athens, Georgia.
The Semantic Sensor Web (SSW) is a marriage of sensor and Semantic Web technologies. The encoding of sensor descriptions and sensor observation data with Semantic Web languages enables more expressive representation, advanced access, and formal analysis of sensor resources. The SSW annotates sensor data with spatial, temporal, and thematic semantic metadata. This technique builds on current standardization efforts within the Open Geospatial Consortium's Sensor Web Enablement (SWE) and extends them with Semantic Web technologies to provide enhanced descriptions and access to sensor data.
CubicWeb is a free and open-source semantic web application framework, licensed under the LGPL. It is written in Python.
Prof. (FH) Dr. Tassilo Pellegrini studied International Trade, Communication Science and Political Science at the University of Linz, University of Salzburg and University of Málaga. Since the end of 2007 he has been working as a lecturer at the University of Applied Sciences in St. Pölten. He obtained his master's degree in 1999 from the University of Salzburg on the topic of telecommunications policy in the European Union, which was followed by a PhD in 2010 on the topic of bounded policy-learning in the European Union with a focus on intellectual property policies. His current research encompasses economic effects of internet regulation with respect to market structure and outcome. He is a member of the International Network for Information Ethics (INIE), the African Network of Information Ethics (ANIE) and the Deutsche Gesellschaft für Publizistik und Kommunikationswissenschaft (DGPUK). Beside his specialisation in policy research and media economics Tassilo Pellegrini has worked on semantic technologies and the Semantic Web. He is a co-founder of the Semantic Web Company in Vienna, a co-editor of the first German textbook on Semantic Web and Conference Chair of the annual I-SEMANTICS conference series founded in 2005.
Apache Stanbol is an open source modular software stack and reusable set of components for semantic content management. Apache Stanbol components are meant to be accessed over RESTful interfaces to provide semantic services for content management. Thus, one application is to extend traditional content management systems with semantic services.
translatewiki.net is a web-based translation platform, powered by the Translate extension for MediaWiki, which makes MediaWiki a powerful tool for translating all kinds of text.
The Extended Semantic Web Conference, formerly known as the European Semantic Web Conference, is a yearly international academic conference on the topic of the Semantic Web. The event began in 2004 as the European Semantic Web Symposium. The goal of the event is "to bring together researchers and practitioners dealing with different aspects of semantics on the Web".
Apache Marmotta is a linked data platform that comprises several components. In its most basic configuration it is a Linked Data server. Marmotta is one of the early reference implementations of the Linked Data Platform recommendation developed by the W3C.
Semantic queries allow for queries and analytics of associative and contextual nature. Semantic queries enable the retrieval of both explicitly and implicitly derived information based on syntactic, semantic and structural information contained in data. They are designed to deliver precise results or to answer more fuzzy and wide open questions through pattern matching and digital reasoning.
Schema-agnostic databases, or vocabulary-independent databases, aim to abstract users from the representation of the data by supporting automatic semantic matching between queries and databases. Schema-agnosticism is the property of a database that accepts a query phrased in the user's own terminology and structure and automatically maps it to the dataset's vocabulary.
Wikimedia Commons has media related to Sebastian Schaffert.