Signature recognition

[Figure: Example of signature shape.]
[Figure: Dynamic information of a signature. The pressure signal shows that the user lifted the pen three times in the middle of the signature (regions where the pressure is zero).]

Signature recognition is an example of behavioral biometrics that identifies a person based on their handwriting. It can be operated in two different ways:

Static: In this mode, users write their signature on paper, and after the writing is complete it is digitized with an optical scanner or a camera, turning the signature image into bits. [1] The biometric system then recognizes the signature by analyzing its shape. This group is also known as "off-line". [2]
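As a purely illustrative sketch of off-line, shape-based analysis (not taken from any of the cited systems), the following Python snippet binarizes a scanned grayscale signature and computes two simple global shape features. The toy image, threshold and feature choice are all assumptions; practical systems use much richer shape descriptors.

```python
# Illustrative off-line (static) shape analysis: binarize a grayscale scan
# and compute two crude global shape features of the ink.
# The image, threshold and features are hypothetical examples.
import numpy as np

def shape_features(gray: np.ndarray, threshold: int = 128):
    """Return (aspect_ratio, ink_density) of the dark 'ink' pixels."""
    ink = gray < threshold                      # dark pixels count as ink
    rows = np.any(ink, axis=1)
    cols = np.any(ink, axis=0)
    top = int(np.argmax(rows))
    bottom = len(rows) - 1 - int(np.argmax(rows[::-1]))
    left = int(np.argmax(cols))
    right = len(cols) - 1 - int(np.argmax(cols[::-1]))
    aspect_ratio = (right - left + 1) / (bottom - top + 1)
    ink_density = float(ink[top:bottom + 1, left:right + 1].mean())
    return aspect_ratio, ink_density

# Toy "scan": white background (255) with one dark horizontal stroke.
img = np.full((40, 120), 255, dtype=np.uint8)
img[18:22, 10:110] = 0
print(shape_features(img))   # -> (25.0, 1.0): a very wide, solid stroke
```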

Dynamic: In this mode, users write their signature on a digitizing tablet, which acquires the signature in real time. Another possibility is acquisition by means of a stylus-operated PDA. Some systems also operate on smartphones or tablets with a capacitive screen, where users can sign using a finger or a suitable stylus. Dynamic recognition is also known as "on-line". The dynamic information usually consists of the pen position coordinates x(t) and y(t), the pressure p(t), and the pen azimuth and inclination angles, all captured as functions of time. [2]
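To make the dynamic modality concrete, here is a minimal sketch, with made-up sample values, of how a dynamically acquired signature can be represented as parallel time series and how pen lifts can be detected from zero-pressure samples, as in the figure caption above.

```python
# Illustrative representation of an on-line (dynamic) signature:
# parallel time series sampled by the tablet. The values are made up.
import numpy as np

signature = {
    "x": np.array([10.0, 12.5, 15.0, 15.2, 18.0]),    # pen position x(t)
    "y": np.array([40.0, 41.0, 43.5, 43.6, 45.0]),    # pen position y(t)
    "pressure": np.array([0.6, 0.8, 0.0, 0.0, 0.7]),  # p(t); 0 = pen lifted
    "azimuth": np.array([1.2, 1.2, 1.1, 1.1, 1.3]),   # pen azimuth angle
    "inclination": np.array([0.9, 0.9, 0.8, 0.8, 1.0]),
}

def count_pen_lifts(pressure: np.ndarray) -> int:
    """Count pen-up events: transitions from positive pressure to zero."""
    is_up = pressure == 0
    return int(np.sum(is_up[1:] & ~is_up[:-1]))

print(count_pen_lifts(signature["pressure"]))  # -> 1 for this toy sample
```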

The state of the art in signature recognition is reflected in the results of the most recent major international competition, the BioSecure Signature Evaluation Campaign (BSEC'2009). [3]

The most popular pattern recognition techniques applied to signature recognition are dynamic time warping, hidden Markov models and vector quantization. Combinations of different techniques also exist. [4]
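Reference [4] describes a VQ-DTW combination; as a generic illustration only, the following is a textbook dynamic time warping sketch that scores the similarity of two one-dimensional signature signals (for example, pressure curves) of different lengths. It is not the specific algorithm of any cited system, and the sample signals are invented.

```python
# Generic dynamic time warping (DTW): align two 1-D sequences of different
# lengths and return the accumulated alignment cost (lower = more similar).
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])               # local distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return float(cost[n, m])

# Two pressure profiles of the "same" signature written at different speeds.
p1 = np.array([0.1, 0.5, 0.9, 0.5, 0.1])
p2 = np.array([0.1, 0.3, 0.5, 0.9, 0.9, 0.5, 0.1])
print(dtw_distance(p1, p2))  # small cost despite the different lengths
```

In a typical DTW-based verification setup, the alignment cost between a questioned signature and the user's enrollment signatures is compared against a threshold to accept or reject the claimed identity.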

Recently, a handwriting-based biometric approach has also been proposed. [5] In this case, the user is recognized by analyzing their handwritten text (see also Handwritten biometric recognition).

Databases

Several public databases exist; the most popular are SVC [6] and MCYT. [7]

References

  1. Ismail, M.A.; Gad, Samia (Oct 2000). "Off-line arabic signature recognition and verification". Pattern Recognition. 33 (10): 1727–1740. Bibcode:2000PatRe..33.1727I. doi:10.1016/s0031-3203(99)00047-3. ISSN   0031-3203.
  2. 1 2 "Explainer: Signature Recognition | Biometric Update". www.biometricupdate.com. 2016-01-11. Retrieved 2021-04-03.
  3. Houmani, Nesmaa; A. Mayoue; S. Garcia-Salicetti; B. Dorizzi; M.I. Khalil; M. Mostafa; H. Abbas; Z.T. Kardkovàcs; D. Muramatsu; B. Yanikoglu; A. Kholmatov; M. Martinez-Diaz; J. Fierrez; J. Ortega-Garcia; J. Roure Alcobé; J. Fabregas; M. Faundez-Zanuy; J. M. Pascual-Gaspar; V. Cardeñoso-Payo; C. Vivaracho-Pascual (March 2012). "BioSecure signature evaluation campaign (BSEC'2009): Evaluating online signature algorithms depending on the quality of signatures". Pattern Recognition. 45 (3): 993–1003. Bibcode:2012PatRe..45..993H. doi:10.1016/j.patcog.2011.08.008. S2CID   17863249.
  4. Faundez-Zanuy, Marcos (2007). "On-line signature recognition based on VQ-DTW". Pattern Recognition. 40 (3): 981–992. Bibcode:2007PatRe..40..981F. doi:10.1016/j.patcog.2006.06.007.
  5. Chapran, J. (2006). "Biometric Writer Identification: Feature Analysis and Classification". International Journal of Pattern Recognition & Artificial Intelligence. 20 (4): 483–503. doi:10.1142/s0218001406004831.
  6. Yeung, D. H.; Xiong, Y.; George, S.; Kashi, R.; Matsumoto, T.; Rigoll, G. (2004). "SVC2004: First International Signature Verification Competition". Biometric Authentication. Lecture Notes in Computer Science. Vol. 3072. pp. 16–22. doi:10.1007/978-3-540-25948-0_3. ISBN   978-3-540-22146-3.
  7. Ortega-Garcia, Javier; J. Fierrez; D. Simon; J. Gonzalez; M. Faúndez-Zanuy; V. Espinosa; A. Satue; I. Hernaez; J.-J. Igarza; C. Vivaracho; D. Escudero; Q.-I. Moro (2003). "MCYT baseline corpus: A bimodal biometric database". IEE Proceedings - Vision, Image, and Signal Processing. 150 (6): 395–401. doi:10.1049/ip-vis:20031078.