Document processing is a field of research and a set of production processes aimed at making an analog document digital. Document processing does not simply aim to photograph or scan a document to obtain a digital image, but also to make it digitally intelligible. This includes extracting the structure or layout of the document and then its content, which can take the form of text or images. The process can involve traditional computer vision algorithms, convolutional neural networks or manual labor. The problems addressed relate to semantic segmentation, object detection, optical character recognition (OCR), handwritten text recognition (HTR) and, more broadly, transcription, whether automatic or not. [1] The term can also cover the phase of digitizing the document using a scanner and the phase of interpreting the document, for example using natural language processing (NLP) or image classification technologies. Document processing is applied in many industrial and scientific fields for the optimization of administrative processes, mail processing and the digitization of analog archives and historical documents.
Document processing was initially, and to some extent still is, a kind of production-line work dealing with the treatment of documents such as letters and parcels, with the aim of sorting and extracting data, often on a massive scale. This work could be performed in-house or through business process outsourcing. [2] [3] Document processing can indeed involve some form of externalized manual labor, such as Mechanical Turk.
As an example of manual document processing, as recently as 2007, document processing for "millions of visa and citizenship applications" relied on "approximately 1,000 contract workers" to "manage mail room and data entry." [4]
While document processing involved data entry via keyboard well before the use of a computer mouse or a computer scanner, a 1990 article in The New York Times regarding what it called the "paperless office" stated that "document processing begins with the scanner". [5] In this context, a former Xerox vice-president, Paul Strassman, expressed a critical opinion, saying that computers add to rather than reduce the volume of paper in an office. [5] It was said that the engineering and maintenance documents for an airplane weigh "more than the airplane itself".[citation needed]
As the state of the art advanced, document processing transitioned to handling "document components ... as database entities." [6]
A technology called automatic document processing or sometimes intelligent document processing (IDP) emerged as a specific form of Intelligent Process Automation (IPA), combining artificial intelligence techniques such as Machine Learning (ML), Natural Language Processing (NLP) or Intelligent Character Recognition (ICR) to extract data from several types of documents. [7] [8] Advancements in automatic document processing improve the ability to process unstructured data with fewer exceptions and at greater speed. [9]
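For illustration, the following minimal Python sketch shows only the data-extraction step of such a pipeline, applied to text assumed to have already been produced by OCR; the invoice layout, field names and regular expressions are hypothetical, and production IDP systems rely on trained ML and NLP models rather than fixed rules.

import re

# Minimal sketch of the extraction step of intelligent document processing:
# given OCR'd text of an invoice, pull out a few illustrative fields with
# hand-written rules. Real IDP systems combine OCR with learned models
# rather than fixed regular expressions.
ocr_text = """
ACME Corp. Invoice
Invoice number: INV-2024-0042
Date: 2024-03-15
Total due: 1,234.56 EUR
"""

PATTERNS = {
    "invoice_number": r"Invoice number:\s*(\S+)",
    "date": r"Date:\s*(\d{4}-\d{2}-\d{2})",
    "total": r"Total due:\s*([\d.,]+)\s*([A-Z]{3})",
}

def extract_fields(text: str) -> dict:
    """Return a dictionary of the fields that could be matched."""
    fields = {}
    for name, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            fields[name] = match.group(1) if match.lastindex == 1 else match.groups()
    return fields

print(extract_fields(ocr_text))
# {'invoice_number': 'INV-2024-0042', 'date': '2024-03-15', 'total': ('1,234.56', 'EUR')}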
Automatic document processing applies to a whole range of documents, whether structured or not. For instance, in the world of business and finance, these technologies may be used to process paper-based invoices, forms, purchase orders, contracts, and currency bills. [10] Financial institutions use intelligent document processing to process high volumes of forms such as regulatory forms or loan documents. IDP uses AI to extract and classify data from documents, replacing manual data entry. [11]
In medicine, document processing methods have been developed to facilitate patient follow-up and streamline administrative procedures, in particular by digitizing medical or laboratory analysis reports. The goal is also to standardize medical databases. [12] Algorithms are also directly used to assist physicians in medical diagnosis, e.g. by analyzing magnetic resonance images, [13] [14] or microscopic images. [15]
Document processing is also widely used in the humanities and digital humanities, in order to extract historical big data from archives or heritage collections. Specific approaches have been developed for various sources, including textual documents such as newspaper archives, [16] as well as images [17] and maps. [18] [19]
While traditional computer vision algorithms were widely used to solve document processing problems from the 1980s onward, [20] [21] they were gradually replaced by neural network technologies in the 2010s. [22] However, traditional computer vision techniques are still used in some sectors, sometimes in conjunction with neural networks.
Many technologies support the development of document processing, in particular optical character recognition (OCR) and handwritten text recognition (HTR), which allow text to be transcribed automatically. Text segments themselves are identified using instance or object detection algorithms, which can sometimes also be used to detect the structure of the document; this latter problem is sometimes also addressed with semantic segmentation algorithms.
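As an illustration of the detection step, the following minimal Python sketch uses classical image-processing operations from the OpenCV library (binarization, morphological dilation and contour detection) to locate candidate text blocks on a scanned page; the file name, kernel size and area threshold are illustrative assumptions, not values from any particular system.

import cv2

# Minimal sketch of classical layout analysis: binarize a scanned page,
# merge nearby characters into blocks with morphological dilation, then
# report the bounding boxes of candidate text regions.
page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Otsu binarization, with text as white foreground on a black background.
_, binary = cv2.threshold(page, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Dilate so that characters belonging to the same block touch each other.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 10))
blocks = cv2.dilate(binary, kernel, iterations=1)

# Each external contour is a candidate text block (or figure).
contours, _ = cv2.findContours(blocks, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    if w * h > 500:  # ignore tiny specks of noise
        print(f"text block at x={x}, y={y}, width={w}, height={h}")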
These technologies often form the core of document processing. However, other algorithms may intervene before or after these processes. Document digitization technologies are also involved, whether in the form of classical or three-dimensional scanning. [23] The digitization of 3D documents can in particular rely on techniques derived from photogrammetry. Sometimes, specific 2D scanners must also be developed to adapt to the size of the documents or for reasons of scanning ergonomics. [17] Document processing also depends on encoding the digitized documents in a suitable file format. Furthermore, the processing of heterogeneous databases can rely on image classification technologies.
At the other end of the chain are various image completion, extrapolation or data cleanup algorithms. For textual documents, the interpretation can use natural language processing (NLP) technologies.
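For illustration, the following minimal sketch applies named-entity recognition to text assumed to have already been extracted from a document, using the spaCy library and its small English model; the input text and the choice of library are assumptions made for the example.

import spacy

# Minimal sketch of the interpretation step: run named-entity recognition
# over text previously extracted by OCR. Assumes spaCy and its small
# English model (en_core_web_sm) are installed.
nlp = spacy.load("en_core_web_sm")

extracted_text = (
    "The contract was signed in Paris on 12 March 1998 "
    "between Acme Corp. and the Ministry of Transport."
)

doc = nlp(extracted_text)
for entity in doc.ents:
    print(entity.text, entity.label_)
# e.g. "Paris" GPE, "12 March 1998" DATE, "Acme Corp." ORG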
Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and the extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. Understanding in this context means the transformation of visual images into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.
Natural language processing (NLP) is an interdisciplinary subfield of computer science and artificial intelligence. It is primarily concerned with providing computers with the ability to process data encoded in natural language and is thus closely related to information retrieval, knowledge representation and computational linguistics, a subfield of linguistics. Typically data is collected in text corpora, using either rule-based, statistical or neural-based approaches in machine learning and deep learning.
Optical character recognition or optical character reader (OCR) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene photo or from subtitle text superimposed on an image.
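As a minimal illustration, the following sketch converts a scanned page image into machine-encoded text, assuming the open-source Tesseract engine and its Python wrapper pytesseract are installed; the file name is a placeholder.

from PIL import Image
import pytesseract

# Minimal sketch of OCR: a scanned page image is converted to
# machine-encoded text in a single call to the Tesseract engine.
image = Image.open("scanned_page.png")  # placeholder file name
text = pytesseract.image_to_string(image, lang="eng")
print(text)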
A point cloud is a discrete set of data points in space. The points may represent a 3D shape or object. Each point position has its set of Cartesian coordinates. Points may contain data other than position such as RGB colors, normals, timestamps and others. Point clouds are generally produced by 3D scanners or by photogrammetry software, which measure many points on the external surfaces of objects around them. As the output of 3D scanning processes, point clouds are used for many purposes, including to create 3D computer-aided design (CAD) or geographic information systems (GIS) models for manufactured parts, for metrology and quality inspection, and for a multitude of visualizing, animating, rendering, and mass customization applications.
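For illustration, the following minimal sketch represents a tiny point cloud as a plain NumPy array in which each row holds Cartesian coordinates and an RGB color; the values are invented for the example, whereas real clouds from 3D scanners contain millions of points.

import numpy as np

# Minimal sketch of a point cloud as a plain array: each row is one point
# with Cartesian coordinates (x, y, z) and an RGB color.
points = np.array([
    # x,    y,    z,    r,   g,   b
    [0.00, 0.00, 0.00, 255, 255, 255],
    [0.01, 0.00, 0.02, 128, 128, 128],
    [0.00, 0.03, 0.01,  64,  64,  64],
])

xyz = points[:, :3]          # positions only
centroid = xyz.mean(axis=0)  # a simple geometric property of the cloud
print("number of points:", len(points))
print("centroid:", centroid)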
Handwriting recognition (HWR), also known as handwritten text recognition (HTR), is the ability of a computer to receive and interpret intelligible handwritten input from sources such as paper documents, photographs, touch-screens and other devices. The image of the written text may be sensed "off line" from a piece of paper by optical scanning or intelligent word recognition. Alternatively, the movements of the pen tip may be sensed "on line", for example by a pen-based computer screen surface, a generally easier task as there are more clues available. A handwriting recognition system handles formatting, performs correct segmentation into characters, and finds the most plausible words.
In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.
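As a minimal illustration, the following sketch assigns a label to every pixel of a small synthetic image by Otsu thresholding followed by connected-component labeling, using the scikit-image library; the synthetic image and the thresholding approach are simplifications chosen for the example, not a production document-segmentation method.

import numpy as np
from skimage import filters, measure

# Minimal sketch of image segmentation: threshold a synthetic grayscale
# image and group connected foreground pixels, so that every pixel ends up
# with a label (0 for background, 1..n for the segments).
image = np.zeros((100, 100))
image[10:40, 10:40] = 1.0   # a bright square "object"
image[60:90, 50:95] = 0.8   # a second object

threshold = filters.threshold_otsu(image)
mask = image > threshold            # foreground / background decision per pixel
labels = measure.label(mask)        # connected pixels share the same label

for region in measure.regionprops(labels):
    print(f"segment {region.label}: area={region.area}, bbox={region.bbox}")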
Automatic summarization is the process of shortening a set of data computationally, to create a subset that represents the most important or relevant information within the original content. Artificial intelligence algorithms are commonly developed and employed to achieve this, specialized for different types of data.
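For illustration, the following minimal sketch implements a naive extractive summarizer that scores each sentence by the frequency of its words in the whole text and keeps the highest-scoring sentences; it stands in for the far more sophisticated algorithms used in practice.

import re
from collections import Counter

# Minimal sketch of extractive summarization: select the sentences whose
# words occur most frequently in the full text, preserving original order.
def summarize(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    frequencies = Counter(re.findall(r"\w+", text.lower()))
    scores = [
        sum(frequencies[w] for w in re.findall(r"\w+", sentence.lower()))
        for sentence in sentences
    ]
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:max_sentences])
    return " ".join(sentences[i] for i in keep)

text = (
    "Document processing turns analog documents into structured data. "
    "OCR converts page images into machine-encoded text. "
    "The weather was pleasant on the day of the scan."
)
print(summarize(text, max_sentences=2))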
3D scanning is the process of analyzing a real-world object or environment to collect three dimensional data of its shape and possibly its appearance. The collected data can then be used to construct digital 3D models.
Optical music recognition (OMR) is a field of research that investigates how to computationally read musical notation in documents. The goal of OMR is to teach the computer to read and interpret sheet music and produce a machine-readable version of the written music score. Once captured digitally, the music can be saved in commonly used file formats, e.g. MIDI and MusicXML. In the past it has, misleadingly, also been called "music optical character recognition". Due to significant differences, this term should no longer be used.
Thomas Shi-Tao Huang was a Chinese-born American computer scientist, electrical engineer, and writer. He was a researcher and professor emeritus at the University of Illinois at Urbana-Champaign (UIUC). Huang was one of the leading figures in computer vision, pattern recognition and human-computer interaction.
Intelligent character recognition (ICR) is used to extract handwritten text from images. It is a more sophisticated type of OCR technology that recognizes different handwriting styles and fonts to intelligently interpret data on forms and physical documents.
Book scanning or book digitization is the process of converting physical books and magazines into digital media such as images, electronic text, or electronic books (e-books) by using an image scanner. Large scale book scanning projects have made many books available online.
Computer-aided detection (CADe), also called computer-aided diagnosis (CADx), refers to systems that assist doctors in the interpretation of medical images. Imaging techniques in X-ray, MRI, endoscopy, and ultrasound diagnostics yield a great deal of information that the radiologist or other medical professional has to analyze and evaluate comprehensively in a short time. CAD systems process digital images or videos for typical appearances and highlight conspicuous sections, such as possible diseases, in order to offer input to support a decision taken by the professional.
CuneiForm Cognitive OpenOCR is a freely distributed open-source OCR system developed by Russian software company Cognitive Technologies.
Digital pathology is a sub-field of pathology that focuses on data management based on information generated from digitized specimen slides. Through the use of computer-based technology, digital pathology utilizes virtual microscopy. Glass slides are converted into digital slides that can be viewed, managed, shared and analyzed on a computer monitor. With the practice of whole-slide imaging (WSI), which is another name for virtual microscopy, the field of digital pathology is growing and has applications in diagnostic medicine, with the goal of achieving more efficient and cheaper diagnosis, prognosis, and prediction of diseases, thanks to the success of machine learning and artificial intelligence in healthcare.
Forms processing is a process by which one can capture information entered into data fields and convert it into an electronic format. This can be done manually or automatically, but the general process is that hard copy data is filled out by humans and then "captured" from their respective fields and entered into a database or other electronic format.
Optical braille recognition is technology to capture and process images of braille characters into natural language characters. It is used to convert braille documents for people who cannot read them into text, and for preservation and reproduction of the documents.
Scene text is text that appears in an image captured by a camera in an outdoor environment.
The GigaMesh Software Framework is a free and open-source software for display, editing and visualization of 3D-data typically acquired with structured light or structure from motion.
Studierfenster or StudierFenster (SF) is a free, non-commercial open science client/server-based medical imaging processing online framework. It offers capabilities such as viewing medical data (computed tomography (CT), magnetic resonance imaging (MRI), etc.) in two- and three-dimensional space directly in standard web browsers such as Google Chrome, Mozilla Firefox, Safari, and Microsoft Edge. Other functionalities include the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images (segmentation), manual placing of (anatomical) landmarks in medical image data, viewing medical data in virtual reality, facial reconstruction and registration of medical data for augmented reality, one-click showcases for COVID-19 and veterinary scans, and a radiomics module.
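For illustration, the following minimal sketch computes the two segmentation metrics mentioned above, the Dice score and the symmetric Hausdorff distance, on small synthetic binary masks using NumPy and SciPy; the masks are invented for the example and the code is not taken from Studierfenster itself.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Minimal sketch of two common segmentation metrics on synthetic masks.
a = np.zeros((50, 50), dtype=bool)
b = np.zeros((50, 50), dtype=bool)
a[10:30, 10:30] = True   # "ground truth" structure
b[12:32, 12:32] = True   # slightly shifted "prediction"

# Dice score: 2 * |A intersection B| / (|A| + |B|)
dice = 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Symmetric Hausdorff distance between the two sets of foreground pixels.
points_a = np.argwhere(a)
points_b = np.argwhere(b)
hausdorff = max(
    directed_hausdorff(points_a, points_b)[0],
    directed_hausdorff(points_b, points_a)[0],
)

print(f"Dice score: {dice:.3f}")
print(f"Hausdorff distance: {hausdorff:.3f}")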