Patent visualisation is an application of information visualisation. The number of patents has been increasing, [1] encouraging companies to consider intellectual property as a part of their strategy. [2] Patent visualisation, like patent mapping, is used to quickly view a patent portfolio.
Software dedicated to patent visualisation began to appear in 2000, for example Aureka from Aurigin (now owned by Thomson Reuters). [3] Many patent and portfolio analytics platforms, such as Questel, [4] PatSnap, Patentcloud, Relecura, and Patent iNSIGHT Pro, [5] offer options to visualise specific data within patent documents by creating topic maps, [6] priority maps, IP landscape reports, [7] etc. Such software converts patents into infographics or maps, allowing the analyst to "get insight into the data" and draw conclusions. [8] Also called patinformatics, [9] it is the "science of analysing patent information to discover relationships and trends that would be difficult to see when working with patent documents on a one-on-one basis".[ citation needed ]
Patents contain structured data (like publication numbers) and unstructured content (like the title, abstract, claims and drawings). Structured data are processed with data mining; unstructured text is processed with text mining. [10]
The main technique for processing structured information is data mining, [11] which emerged in the late 1980s and draws on statistics, artificial intelligence and machine learning. [12] Patent data mining extracts information from the structured data of the patent document. [13] These structured data are bibliographic fields such as location, date or legal status.
Structured data | Description | Business intelligence use |
---|---|---|
Date | Patents contain identifying dates, including the priority, publication and issue dates. | Crossing the date and location fields offers a global vision of a technology in time and space. |
Assignee | Patent assignees are the organisations or individuals that own the patent. | This field can provide a ranking of the principal actors in the environment, making it possible to visualise potential competitors or partners. |
Inventor | Inventors develop the invention covered by the patent. | Combined with the assignee field, the inventor field can reveal a social network and provide a way to follow field experts. |
Classification | The classification groups inventions with similar technologies. The most commonly used is the International Patent Classification (IPC); however, patent offices also maintain their own classifications: the European Patent Office, for instance, framed the ECLA. | Grouping patents by theme offers an overview of the corpus and of the potential applications of the studied technology. |
Status | The legal status indicates whether an application is filed, approved, or rejected. | Patent family and legal status searching is important for litigation and competitive intelligence. |
Data mining allows analysts to study competitors' filing patterns and to locate the main patent filers within a specific area of technology. This approach can help monitor competitors' environments, moves and innovation trends, and gives a macro view of a technology's status.[ citation needed ]
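The structured-field analysis above can be sketched in a few lines of Python; the records, field names and values below are invented for illustration, not taken from any real patent database schema:

```python
from collections import Counter

# Hypothetical structured patent records; the fields mirror the
# bibliographic data discussed above (assignee, date, classification).
records = [
    {"assignee": "Acme Corp", "year": 2019, "ipc": "G06F"},
    {"assignee": "Acme Corp", "year": 2020, "ipc": "G06F"},
    {"assignee": "Beta Ltd",  "year": 2020, "ipc": "H04L"},
    {"assignee": "Acme Corp", "year": 2021, "ipc": "H04L"},
    {"assignee": "Gamma Inc", "year": 2021, "ipc": "G06F"},
]

# Rank assignees by filing count: a "principal actors" view.
top_filers = Counter(r["assignee"] for r in records).most_common()
print(top_filers)  # [('Acme Corp', 3), ('Beta Ltd', 1), ('Gamma Inc', 1)]

# Cross the date and classification fields for a technology-over-time view.
by_year_ipc = Counter((r["year"], r["ipc"]) for r in records)
```

Real analyses run the same aggregations over thousands of records pulled from a patent database rather than a hand-written list.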
Text mining is used to search through unstructured text documents. [14] [15] The technique is widely used on the Internet; it has had success in bioinformatics and now in the intellectual property environment. [16]
Text mining is based on a statistical analysis of word recurrence in a corpus. [17] An algorithm extracts words and expressions from the title, abstract and claims, and groups their inflected forms together. Words such as "and" and "if" are labelled as non-information-bearing and are stored in a stopword list; stoplists can be specialised in order to produce a more accurate analysis. Next, the algorithm ranks the words by weight, according to their frequency in the patent corpus and the number of documents containing each word. [18] [19]
A word used frequently across many documents has less weight than a word used frequently in only a few patents. Words below a minimum weight are eliminated, leaving a list of pertinent words, or descriptors. Each patent is associated with the descriptors found in the selected document. During clustering, these descriptors are used either as subsets under which the patents are grouped, or as tags to place the patents in predetermined categories, for example keywords from the International Patent Classification.
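The weighting just described is essentially the classic TF-IDF scheme; a minimal sketch, with an invented toy corpus and stopword list:

```python
import math
from collections import Counter

# Illustrative stopword list and corpus; real stoplists are much larger
# and often specialised per domain, as noted above.
stopwords = {"a", "and", "if", "the", "of", "for"}

docs = [
    "a method for wireless charging of a battery",
    "a wireless antenna and a battery housing",
    "a method of molding a polymer housing",
]

tokenized = [[w for w in d.split() if w not in stopwords] for d in docs]
n_docs = len(tokenized)

# Document frequency: in how many documents each word appears.
df = Counter(w for doc in tokenized for w in set(doc))

def tf_idf(doc):
    tf = Counter(doc)
    # A word frequent in few documents scores high; a word spread
    # across many documents scores low.
    return {w: (tf[w] / len(doc)) * math.log(n_docs / df[w]) for w in tf}

weights = tf_idf(tokenized[0])
# "charging" occurs in only one document, so it outweighs "wireless",
# which occurs in two of the three documents.
```

Words whose weight falls below a chosen threshold would then be dropped, and the survivors kept as the document's descriptors.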
Four parts of the patent text can be processed with text mining: the title, abstract, claims and description. Software offers different combinations, but the title, abstract and claims are generally the most used, providing a good balance between noise and relevance.
Text mining can be used to narrow a search or to evaluate a patent corpus quickly. For instance, if a query returns irrelevant documents, a multi-level clustering hierarchy can identify them so that they can be deleted and the search refined. Text mining can also be used to create internal taxonomies specific to a corpus for possible mapping.[ citation needed ]
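A hedged sketch of the refinement idea, assuming each document's descriptors come from the text-mining step above (all patent numbers and descriptors are invented):

```python
from collections import defaultdict

# Search hits mapped to their extracted descriptors, most salient first.
hits = {
    "US1": ["battery", "charging"],
    "US2": ["battery", "housing"],
    "US3": ["polymer", "molding"],
    "US4": ["polymer", "housing"],
}

# Group hits by their dominant descriptor to form simple clusters.
clusters = defaultdict(list)
for patent, descriptors in hits.items():
    clusters[descriptors[0]].append(patent)

# The analyst inspects the clusters and discards the off-topic ones;
# here the "polymer" cluster is judged irrelevant to the query.
refined = {p for d, patents in clusters.items() if d != "polymer" for p in patents}
```

Real systems cluster at several levels of granularity, letting the analyst prune whole branches of the hierarchy at once.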
Combining patent analysis with informatics tools offers an overview of the environment through value-added visualisations. As patents contain both structured and unstructured information, visualisations fall into two categories: structured data can be rendered with data mining as macrothematic maps and statistical analyses, while unstructured information can be shown as tag clouds, cluster maps and 2D keyword maps.
Visualisation | Description | Business intelligence use |
---|---|---|
Matrix chart | Graphic organiser used to summarise a multidimensional data set in a grid. | Data comparison |
Location map | Map with data values overlaid on geographic regions. | |
Bar chart | Graph with rectangular bars proportional to the values they represent, useful for numerical comparisons. | Data evolution |
Line graph | Graph used to summarise how two parameters are related and how they vary. | Data evolution and relationships |
Pie chart | Circular chart divided into sections, to illustrate proportions. | Data comparison |
Bubble chart | 3-axis 2D chart that enables visualisation similar to a Magic Quadrant chart. | |
Visualisation | Description | Business intelligence use |
---|---|---|
Tree list | Hierarchy list. | |
Tag cloud | Full text of concepts; the size of each word is determined by its frequency in the corpus. | |
2D keyword map [20] | Tomographic map with a quantitative representation of relief, usually using contour lines and colours. Distance on the map is proportional to the difference between themes. [13] | |
Cluster map | 2D hierarchical cluster map with a quantitative and qualitative representation of document-set association to topics, usually using quantised cells and colours. The size of a topic cell may represent the patent count per topic relative to the overall document set; density and distribution inside a topic cell may be proportional to the document count and the strength of association with the topic, respectively. | |
| Text is decomposed into logical groupings and sub-groupings, then represented as a navigable hierarchy of those groupings by means of proportionate circle arcs. | |
Mapping visualisations can be used for both text-mining and data-mining results.
Visualisation | Description | Business intelligence use |
---|---|---|
Tree map | Visualisation of hierarchical structures: each data item, or row in the data set, is represented by a rectangle whose area is proportional to selected parameters. | |
Network map | In a network diagram, entities are connected to each other in the form of a node-and-link diagram. | |
Citation map | The date of citation is plotted on the x-axis and each individual citation takes an entry on the y-axis. A strong vertical line indicates the filing date, separating citations cited by the patent from those which cite the patent. | |
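The citation-map construction lends itself to a small example; the filing date, citation dates and patent numbers below are invented:

```python
from datetime import date

# The filing date splits backward citations (prior art cited by the
# patent) from forward citations (later patents citing it).
filing_date = date(2015, 6, 1)
citations = [
    ("US100", date(2010, 3, 2)),
    ("US200", date(2014, 11, 20)),
    ("US300", date(2018, 1, 5)),
]

backward = [pn for pn, d in citations if d < filing_date]
forward = [pn for pn, d in citations if d >= filing_date]
# Plotting dates on the x-axis with one row per citation, and a vertical
# line at filing_date, reproduces the citation map described above.
```

A long forward list relative to the backward list is one rough signal of a patent's influence.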
What patent visualisation can highlight: [21] [22]
Business intelligence (BI) comprises the strategies and technologies used by enterprises for the data analysis and management of business information. Common functions of business intelligence technologies include reporting, online analytical processing, analytics, dashboard development, data mining, process mining, complex event processing, business performance management, benchmarking, text mining, predictive analytics, and prescriptive analytics.
Text mining, text data mining (TDM) or text analytics is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources." Written resources may include websites, books, emails, reviews, and articles. High-quality information is typically obtained by devising patterns and trends by means such as statistical pattern learning. According to Hotho et al. (2005) we can distinguish between three different perspectives of text mining: information extraction, data mining, and a knowledge discovery in databases (KDD) process. Text mining usually involves the process of structuring the input text, deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling.
Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text. A matrix containing word counts per document is constructed from a large piece of text and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared by cosine similarity between any two columns. Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents.
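The decomposition described above can be written out explicitly; a sketch of the standard LSA formulation, with X denoting the term-document matrix:

```latex
% X: m x n term-document matrix (rows = terms, columns = documents).
% Rank-k truncated SVD keeps only the k largest singular values:
\[
  X \approx U_k \Sigma_k V_k^{\mathsf{T}}
\]
% Column j of \Sigma_k V_k^{\mathsf{T}} is the reduced representation
% d_j of document j; two documents are compared by cosine similarity:
\[
  \operatorname{sim}(d_j, d_l)
    = \frac{d_j \cdot d_l}{\lVert d_j \rVert \, \lVert d_l \rVert}
\]
```

Values of sim near 1 indicate very similar documents, matching the interpretation given above.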
A document-term matrix is a mathematical matrix that describes the frequency of terms that occur in a collection of documents. In a document-term matrix, rows correspond to documents in the collection and columns correspond to terms. This matrix is a specific instance of a document-feature matrix where "features" may refer to other properties of a document besides terms. It is also common to encounter the transpose, or term-document matrix where documents are the columns and terms are the rows. They are useful in the field of natural language processing and computational text analysis.
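A minimal sketch of building such a matrix from a toy corpus (the documents are invented):

```python
from collections import Counter

# Toy corpus: one string per document.
docs = ["wireless battery charging", "battery housing", "wireless antenna"]

# Columns are the sorted vocabulary; rows correspond to documents.
vocab = sorted({w for d in docs for w in d.split()})
dtm = [[Counter(d.split())[t] for t in vocab] for d in docs]

# vocab: ['antenna', 'battery', 'charging', 'housing', 'wireless']
# dtm[0]: [0, 1, 1, 0, 1]  (counts for the first document)
```

Transposing this list of lists yields the term-document matrix mentioned above, where terms are rows and documents are columns.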
Parallel coordinates are a common way of visualizing and analyzing high-dimensional datasets.
Unstructured data is information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy, but may contain data such as dates, numbers, and facts as well. This results in irregularities and ambiguities that make it difficult to understand using traditional programs as compared to data stored in fielded form in databases or annotated in documents.
Data and information visualization is the practice of designing and creating easy-to-communicate and easy-to-understand graphic or visual representations of a large amount of complex quantitative and qualitative data and information with the help of static, dynamic or interactive visual items. Typically based on data and information collected from a certain domain of expertise, these visualizations are intended for a broader audience to help them visually explore and discover, quickly understand, interpret and gain important insights into otherwise difficult-to-identify structures, relationships, correlations, local and global patterns, trends, variations, constancy, clusters, outliers and unusual groupings within data. When intended for the general public to convey a concise version of known, specific information in a clear and engaging manner, it is typically called information graphics.
Concept mining is an activity that results in the extraction of concepts from artifacts. Solutions to the task typically involve aspects of artificial intelligence and statistics, such as data mining and text mining. Because artifacts are typically a loosely structured sequence of words and other symbols, the problem is nontrivial, but it can provide powerful insights into the meaning, provenance and similarity of documents.
Search engine indexing is the collecting, parsing, and storing of data to facilitate fast and accurate information retrieval. Index design incorporates interdisciplinary concepts from linguistics, cognitive psychology, mathematics, informatics, and computer science. An alternate name for the process, in the context of search engines designed to find web pages on the Internet, is web indexing.
The National Centre for Text Mining (NaCTeM) is a publicly funded text mining (TM) centre. It was established to provide support, advice, and information on TM technologies and to disseminate information from the larger TM community, while also providing tailored services and tools in response to the requirements of the United Kingdom academic community.
Enterprise search is the practice of making content from multiple enterprise-type sources, such as databases and intranets, searchable to a defined audience.
Document clustering is the application of cluster analysis to textual documents. It has applications in automatic document organization, topic extraction and fast information retrieval or filtering.
Digital history is the use of digital media to further historical analysis, presentation, and research. It is a branch of the digital humanities and an extension of quantitative history, cliometrics, and computing. Digital history is commonly digital public history, concerned primarily with engaging online audiences with historical content, or digital research methods that further academic research. Digital history outputs include digital archives, online presentations, data visualizations, interactive maps, timelines, audio files, and virtual worlds that make history more accessible to the user. Recent digital history projects focus on creativity, collaboration, and technical innovation: text mining, corpus linguistics, network analysis, 3D modeling, and big data analysis. By utilizing these resources, the user can rapidly develop new analyses that can link to, extend, and bring to life existing histories.
The Information Retrieval Facility (IRF), founded 2006 and located in Vienna, Austria, was a research platform for networking and collaboration for professionals in the field of information retrieval. It ceased operations in 2012.
A concept search is an automated information retrieval method that is used to search electronically stored unstructured text for information that is conceptually similar to the information provided in a search query. In other words, the ideas expressed in the information retrieved in response to a concept search query are relevant to the ideas contained in the text of the query.
WordStat is a content analysis and text mining software. It was first released in 1998 after being developed by Normand Peladeau from Provalis Research. The latest version 9 was released in 2021.
Word2vec is a technique for natural language processing (NLP) published in 2013. The word2vec algorithm uses a neural network model to learn word associations from a large corpus of text. Once trained, such a model can detect synonymous words or suggest additional words for a partial sentence. As the name implies, word2vec represents each distinct word with a particular list of numbers called a vector. The vectors are chosen carefully such that they capture the semantic and syntactic qualities of words; as such, a simple mathematical function can indicate the level of semantic similarity between the words represented by those vectors.
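Word2vec itself trains a neural network, but the "simple mathematical function" for comparing its output vectors is cosine similarity; a sketch with made-up three-dimensional vectors (real word2vec vectors typically have hundreds of dimensions):

```python
import math

# Invented embedding vectors, purely for illustration.
vectors = {
    "king":   [0.9, 0.1, 0.4],
    "queen":  [0.85, 0.15, 0.45],
    "banana": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    # Dot product divided by the product of the vector norms.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Semantically related words should score higher than unrelated ones.
assert cosine(vectors["king"], vectors["queen"]) > cosine(vectors["king"], vectors["banana"])
```

The same comparison underlies the document-level cosine similarity used by LSA, described earlier.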
Patent analysis is the process of analyzing patent documents and other information from the patent lifecycle. The field of patent analytics uses patent analysis to obtain deeper insights into different technologies and innovation. Other terms are sometimes used as synonyms for patent analytics: patent landscaping, patent mapping, or cartography. However, there is no harmonized terminology in different languages, including in French and Spanish, while in some languages terms are borrowed from other languages. Patent analytics encompasses the analysis of patent data, analysis of the scientific literature, data cleaning, text mining, machine learning, geographic mapping, and data visualisation.
KH Coder is open source software for computer-assisted qualitative data analysis, particularly quantitative content analysis and text mining. It can also be used for computational linguistics. It supports the processing and lemmatisation of text in several languages, such as Japanese, English, French, German, Italian, Portuguese and Spanish. Specifically, it supports statistical analyses such as co-occurrence network analysis, automated mapping, multidimensional scaling and similar computations.