Nicolai Petkov


Nicolai Petkov (born 1956) is a Dutch computer scientist and professor of Intelligent Systems and Computer Science at the University of Groningen, known for his contributions to brain-inspired computing, pattern recognition, machine learning, and parallel computing.


Life and work

Petkov received his doctoral degree from the Dresden University of Technology in Germany. After graduating he worked at several universities, and in 1991 he was appointed Professor of Computer Science (chair of Intelligent Systems and Parallel Computing) at the University of Groningen. He was PhD thesis director (promoter) of Michael Wilkinson (1995), Henk Bekker (1996), Marc Lankhorst (1996), Frank Schnorrenberg (1998), Thomas A. Lippert (1998), Peter Kruizinga (1999), Michel Westenberg (2001), Simona E. Grigorescu (2003), Cosmin Grigorescu (2004), Anarta Ghosh (2007), Gisela Klette (2007), Lidia Sanchez Gonzalez (2007), Erik Urbach (2008), Easwar Subramanian (2008), Giuseppe Papari (2009), Georgeos Ouzounis (2009), Arie Witoelar (2010), Petra Schneider (2010), Florence Tushabe (2010), Kerstin Bunte (2011), Panchalee Sukjit (2011), George Azzopardi (2013), Ioannis E. Giotis (2013), Fred N. Kiwanuka (2013), Ando C. Emerencia (2014), Ugo Moschini (2016), Nicola Strisciuglio (2016), Laura Fernandez Robles (2016), Andreas Neocleous (2016), Jiapan Guo (2017), Eirini Schiza (2018). [1] At the University of Groningen he was scientific director of the Institute for Mathematics and Computer Science (now the Johann Bernoulli Institute) from 1998 to 2009, and since 2011 he has been a member of the University Council and chairman of the Science Faction.

Petkov is associate editor of several scientific journals (e.g. Image and Vision Computing). He co-organised and co-chaired the 10th International Conference on Computer Analysis of Images and Patterns (CAIP 2003) in Groningen, the 13th CAIP 2009 in Münster, Germany, the 16th CAIP 2015 in Valletta, Malta, and the Braincomp 2013 and 2015 workshops on Brain-Inspired Computing in Cetraro, Italy.

Petkov's initial research in the late 1980s was in the field of systolic parallel algorithms. His current research interests are in the development of pattern recognition and machine learning algorithms, which he applies to various types of big data: image, video, audio, text, genetic, phenotype, medical, sensor, financial, web, and heterogeneous data. [2] He develops methods for generating intelligent programs that are configured automatically using training examples of events and patterns of interest.

Selected publications

Petkov is author and editor of several books and more than 150 other scientific publications. [3]

Books:

Edited books:

Articles, a selection:

Related Research Articles

Image segmentation

In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.
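The pixel-labelling view of segmentation can be made concrete with the simplest possible rule, intensity thresholding. The following Python sketch (a hypothetical `threshold_segment` helper, not taken from any particular library) assigns each pixel label 1 or 0 depending on which side of a threshold its intensity falls; pixels with the same label then share that characteristic:

```python
def threshold_segment(image, threshold):
    """Label every pixel: 1 ("object") if its intensity reaches the
    threshold, 0 ("background") otherwise."""
    return [[1 if v >= threshold else 0 for v in row] for row in image]

img = [[12, 200, 201],
       [10, 198, 11],
       [13, 14, 199]]
labels = threshold_segment(img, 100)
# labels → [[0, 1, 1], [0, 1, 0], [0, 0, 1]]
```

Real segmentation methods replace this per-pixel rule with criteria that also account for neighbourhood structure, but the output format, one label per pixel, is the same.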

Gabor filter

In image processing, a Gabor filter, named after Dennis Gabor, is a linear filter used for texture analysis. Gabor first proposed it as a 1D filter; it was later generalized to 2D by Gösta Granlund, who added a reference direction. The filter essentially analyzes whether there is any specific frequency content in the image in specific directions in a localized region around the point or region of analysis. Frequency and orientation representations of Gabor filters are claimed by many contemporary vision scientists to be similar to those of the human visual system. They have been found to be particularly appropriate for texture representation and discrimination. In the spatial domain, a 2D Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave.
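The last sentence can be sketched directly in code. The Python function below (a hypothetical helper using the common parameterization: envelope scale σ, reference direction θ, wavelength λ, aspect ratio γ, phase ψ) samples such a kernel on an integer grid; the rotated coordinates implement the reference direction:

```python
import math

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Sample a (size x size) 2D Gabor kernel: a Gaussian envelope
    modulated by a cosine plane wave oriented at angle theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate the coordinates into the filter's reference direction.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            carrier = math.cos(2 * math.pi * xr / lambd + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, sigma=2.0, theta=0.0, lambd=4.0)
```

Convolving an image with a bank of such kernels at several orientations θ and wavelengths λ yields the localized frequency/orientation responses used for texture analysis.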

Scale-space theory is a framework for multi-scale signal representation developed by the computer vision, image processing and signal processing communities with complementary motivations from physics and biological vision. It is a formal theory for handling image structures at different scales, by representing an image as a one-parameter family of smoothed images, the scale-space representation, parametrized by the size of the smoothing kernel used for suppressing fine-scale structures. The parameter in this family is referred to as the scale parameter t, with the interpretation that image structures of spatial size smaller than about √t have largely been smoothed away in the scale-space level at scale t.
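A minimal 1D sketch of such a family, assuming the standard Gaussian smoothing kernel (helper names are illustrative, not from any library): each level is the signal convolved with a sampled Gaussian of variance t, so larger t suppresses finer structure:

```python
import math

def gaussian_kernel_1d(t, radius):
    """Sampled Gaussian with variance t (the scale parameter), normalized to sum 1."""
    vals = [math.exp(-x * x / (2.0 * t)) for x in range(-radius, radius + 1)]
    total = sum(vals)
    return [v / total for v in vals]

def scale_space_level(signal, t):
    """One level of the scale-space family: convolve with a Gaussian of scale t."""
    radius = max(1, int(3 * math.sqrt(t)))
    kernel = gaussian_kernel_1d(t, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # replicate the borders
            acc += w * signal[idx]
        out.append(acc)
    return out

impulse = [0.0] * 10 + [1.0] + [0.0] * 10
family = [scale_space_level(impulse, t) for t in (0.5, 2.0, 8.0)]
```

The impulse spreads out and its peak drops as t grows, which is exactly the progressive suppression of fine-scale structure the theory describes.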

Robert M. Haralick is Distinguished Professor in Computer Science at Graduate Center of the City University of New York (CUNY). Haralick is one of the leading figures in computer vision, pattern recognition, and image analysis. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and a Fellow and past president of the International Association for Pattern Recognition. Professor Haralick is the King-Sun Fu Prize winner of 2016, "for contributions in image analysis, including remote sensing, texture analysis, mathematical morphology, consistent labeling, and system performance evaluation".

Automatic image annotation

Automatic image annotation is the process by which a computer system automatically assigns metadata in the form of captioning or keywords to a digital image. This application of computer vision techniques is used in image retrieval systems to organize and locate images of interest from a database.

Scale-space segmentation

Scale-space segmentation or multi-scale segmentation is a general framework for signal and image segmentation, based on the computation of image descriptors at multiple scales of smoothing.

Simple cell

A simple cell in the primary visual cortex is a cell that responds primarily to oriented edges and gratings. These cells were discovered by Torsten Wiesel and David Hubel in the late 1950s.

Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio interpretation by machines. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, talks about these systems — "software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents."

As applied in the field of computer vision, graph cut optimization can be employed to efficiently solve a wide variety of low-level computer vision problems, such as image smoothing, the stereo correspondence problem, image segmentation, object co-segmentation, and many other computer vision problems that can be formulated in terms of energy minimization. Many of these energy minimization problems can be approximated by solving a maximum flow problem in a graph. Under most formulations of such problems in computer vision, the minimum energy solution corresponds to the maximum a posteriori estimate of a solution. Although many computer vision algorithms involve cutting a graph, the term "graph cuts" is applied specifically to those models which employ a max-flow/min-cut optimization.
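The max-flow computation at the core of graph cuts can be sketched with plain Edmonds-Karp on an adjacency matrix. The tiny graph below is illustrative only (not a full segmentation pipeline): two "pixel" nodes sit between a source terminal and a sink terminal, joined by a smoothness edge, and the returned flow value equals the minimum-cut capacity by the max-flow/min-cut theorem:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along a shortest augmenting
    path; the final flow value equals the min-cut capacity."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if residual[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:          # no augmenting path left: done
            return flow
        # Bottleneck capacity along the path found.
        bottleneck = float("inf")
        v = sink
        while v != source:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # Augment: subtract along the path, add reverse residual edges.
        v = sink
        while v != source:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Node 0 = source ("object" terminal), node 3 = sink ("background"
# terminal), nodes 1 and 2 are pixels. All capacities are illustrative.
cap = [[0, 3, 1, 0],
       [0, 0, 2, 1],
       [0, 2, 0, 4],
       [0, 0, 0, 0]]
cut_value = max_flow(cap, 0, 3)
```

Production graph-cut solvers use specialized max-flow algorithms (e.g. Boykov-Kolmogorov) that exploit the grid structure of image graphs, but the principle is the same.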

Mean shift is a non-parametric feature-space analysis technique for locating the maxima of a density function, a so-called mode-seeking algorithm. Application domains include cluster analysis in computer vision and image processing.
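The mode-seeking iteration is simple enough to show in a few lines. This 1D sketch uses a flat (uniform) kernel, which is an assumption for illustration; practical implementations usually use a Gaussian kernel and higher-dimensional feature spaces:

```python
def mean_shift(points, start, bandwidth, iters=50):
    """Shift a query point to the mean of the samples within
    `bandwidth` of it, repeating until it settles on a local
    density maximum (a mode)."""
    x = start
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-6:  # converged
            break
        x = new_x
    return x

data = [1.0, 1.2, 0.9, 1.1, 5.0, 5.1, 4.9]
mode = mean_shift(data, start=1.5, bandwidth=1.0)  # drifts to the cluster near 1
```

Starting the iteration from every sample and grouping points that converge to the same mode is exactly how mean shift is used for clustering.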

Pyramid (image processing)

Pyramid, or pyramid representation, is a type of multi-scale signal representation developed by the computer vision, image processing and signal processing communities, in which a signal or an image is subject to repeated smoothing and subsampling. Pyramid representation is a predecessor to scale-space representation and multiresolution analysis.
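The "repeated smoothing and subsampling" step can be sketched in 1D. This illustrative version (helper names are hypothetical) smooths with the small binomial kernel [1/4, 1/2, 1/4] and then keeps every second sample, halving the length at each level:

```python
def reduce_level(signal):
    """One pyramid step: smooth with a binomial kernel, then keep
    every second sample (subsampling by a factor of two)."""
    n = len(signal)
    smoothed = []
    for i in range(n):
        left = signal[max(i - 1, 0)]       # replicate the borders
        right = signal[min(i + 1, n - 1)]
        smoothed.append(0.25 * left + 0.5 * signal[i] + 0.25 * right)
    return smoothed[::2]

def pyramid(signal, levels):
    """Full pyramid: the original signal plus successively reduced copies."""
    out = [signal]
    for _ in range(levels):
        out.append(reduce_level(out[-1]))
    return out

p = pyramid(list(range(16)), levels=3)  # lengths 16, 8, 4, 2
```

For images the same step is applied separably along rows and columns, producing the familiar stack of images of shrinking resolution.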

Local binary patterns (LBP) is a type of visual descriptor used for classification in computer vision. LBP is a particular case of the Texture Spectrum model proposed in 1990, and was first described in 1994. It has since been found to be a powerful feature for texture classification; it has further been determined that when LBP is combined with the Histogram of Oriented Gradients (HOG) descriptor, it improves the detection performance considerably on some datasets. A comparison of several improvements of the original LBP in the field of background subtraction was made in 2015 by Silva et al. A full survey of the different versions of LBP can be found in Bouwmans et al.
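A minimal sketch of the basic 3x3, 8-neighbour LBP operator (the bit ordering below is one common convention; published variants differ in neighbourhood sampling and rotation invariance): each neighbour is thresholded against the centre pixel and the resulting bits are packed into one byte:

```python
def lbp_code(image, r, c):
    """8-neighbour local binary pattern of the pixel at (r, c):
    threshold each neighbour against the centre and pack the bits."""
    center = image[r][c]
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
code = lbp_code(img, 1, 1)  # one byte in 0..255
```

The texture descriptor for a region is then the histogram of these codes over all of its pixels.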

Image texture

An image texture is a set of metrics calculated in image processing designed to quantify the perceived texture of an image. Image texture gives information about the spatial arrangement of colors or intensities in an image or a selected region of an image.

COSFIRE stands for Combination Of Shifted FIlter REsponses. COSFIRE filters, introduced by Azzopardi and Petkov, are trainable filters for keypoint detection and pattern recognition whose selectivity is configured automatically from a given prototype pattern.

Matti Kalevi Pietikäinen is a computer scientist. He is currently Professor (emer.) in the Center for Machine Vision and Signal Analysis, University of Oulu, Finland. His research interests are in texture-based computer vision, face analysis, affective computing, biometrics, and vision-based perceptual interfaces. He was Director of the Center for Machine Vision Research, and Scientific Director of Infotech Oulu.

Surround suppression is the phenomenon whereby the relative firing rate of a neuron may, under certain conditions, decrease when a particular stimulus is enlarged. It has been observed in electrophysiology studies of the brain and has been noted in many sensory neurons, most notably in the early visual system. Surround suppression is defined as a reduction in the activity of a neuron in response to a stimulus outside its classical receptive field.

In signal processing it is useful to analyze the space and frequency characteristics of a signal simultaneously. While the Fourier transform gives the frequency information of the signal, it is not localized: we cannot determine which part of a signal produced a particular frequency. It is possible to use a short-time Fourier transform for this purpose; however, the short-time Fourier transform limits the basis functions to be sinusoidal. To provide a more flexible space-frequency signal decomposition several filters have been proposed. The Log-Gabor filter is one such filter that is an improvement upon the original Gabor filter. The advantage of this filter over the many alternatives is that it better fits the statistics of natural images compared with Gabor filters and other wavelet filters.

Amir Hussain (cognitive scientist)

Amir Hussain is a cognitive scientist and director of the Cognitive Big Data and Cybersecurity (CogBID) Research Lab at Edinburgh Napier University, where he is a professor of computing science. He is founding Editor-in-Chief of Springer Nature's internationally leading Cognitive Computation journal and the new Big Data Analytics journal. He is founding Editor-in-Chief of two Springer book series, Socio-Affective Computing and Cognitive Computation Trends, and also serves on the editorial boards of a number of other world-leading journals, including as Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems, the IEEE Transactions on Systems, Man, and Cybernetics: Systems, and the IEEE Computational Intelligence Magazine.

Dynamic texture is texture with motion, as found in videos of sea waves, fire, smoke, wavy trees, etc. Dynamic texture has a spatially repetitive pattern with a time-varying visual appearance. Modeling and analyzing dynamic texture is a topic of image processing and pattern recognition in computer vision.

Gradient vector flow

Gradient vector flow (GVF), a computer vision framework introduced by Chenyang Xu and Jerry L. Prince, is the vector field produced by a process that smooths and diffuses an input vector field. It is usually used to create a vector field from images that points to object edges from a distance. It is widely used in image analysis and computer vision applications for object tracking, shape recognition, segmentation, and edge detection. In particular, it is commonly used in conjunction with active contour models.

References

  1. Nicolai Petkov at the Mathematics Genealogy Project
  2. Nicolai Petkov research at rug.nl. Accessed 5 November 2013.
  3. Nicolai Petkov at DBLP Bibliography Server