Nicolai Petkov

Nicolai Petkov is a Dutch computer scientist and professor of Intelligent Systems and Computer Science at the University of Groningen, known for his contributions to brain-inspired computing, pattern recognition, machine learning, and parallel computing.

Life and work

Nicolai Petkov received his doctoral degree from the Dresden University of Technology in Germany. After graduation he worked at several universities and research institutes, and in 1991 he was appointed Professor of Computer Science (chair of Parallel Computing, later Intelligent Systems and Parallel Computing) at the University of Groningen. He was the PhD thesis supervisor (promotor in Dutch) of more than forty PhD graduates.

At the University of Groningen he was head of the High Performance Computing and Imaging division (1991-2000) and of the Intelligent Systems division (2000-2023), head of the Center for High Performance Computing (1993-1999), head of the Department of Computer Science (1997-1999), and scientific director of the Institute for Mathematics and Computer Science (now the Johann Bernoulli Institute) from 1998 to 2009. He was also a member of the University Council (2011-2023), in which he chaired the Science Faction (2013-2023). As of 2023 he is professor emeritus of the University of Groningen.

Nicolai Petkov has been an associate editor of several scientific journals, including Parallel Computing and Image and Vision Computing. He co-organized and co-chaired several editions of the International Conference on Computer Analysis of Images and Patterns (CAIP: 2003 in Groningen; 2009 in Münster, Germany; 2015 in Valletta, Malta; 2021 held online from Cyprus), the International Workshop on Brain-Inspired Computing (BrainComp) in Cetraro, Italy (2013, 2015, 2017, 2019), the International Conference on Applications of Intelligent Systems (APPIS, annually since 2018), and the International Workshop on Advances and Applications of Machine Learning and AI (AMALEA, annually since 2022).

Nicolai Petkov's initial research in the 1980s and early 1990s was in the field of systolic parallel algorithms. He subsequently worked mainly on brain-inspired computing. In this area he considers most valuable his and his students' work on the computational modeling of non-classical receptive field inhibition (also known as surround suppression) in neurons of the visual cortex. This pioneering work provided an explanation of various visual perception effects, such as the masking effect of texture on the perception of object contours and orientation-contrast pop-out, and it led to more effective computer vision algorithms for industrial, medical and other applications. His further work concerns the development of pattern recognition and machine learning algorithms for various types of data (image, video, audio, and time series), with applications in robotics, manufacturing, the agricultural industry, medicine, and finance.
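
For illustration, a minimal Python sketch of the surround-suppression idea: the response of an orientation-selective (Gabor) filter is reduced by an average of the same response taken over an annular surround, which weakens contours embedded in texture. The kernel shapes, sizes and the inhibition strength alpha below are illustrative assumptions, not the parameters of the published model.

    # Illustrative sketch of non-classical receptive field inhibition
    # (surround suppression); parameters are arbitrary choices, not the
    # published model. Expects a 2-D float image.
    import numpy as np
    from scipy import ndimage

    def gabor_energy(image, sigma=2.0, wavelength=8.0, theta=0.0):
        """Energy of a quadrature pair of Gabor filters at orientation theta."""
        half = int(3 * sigma)
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        even = ndimage.convolve(image, envelope * np.cos(2 * np.pi * xr / wavelength))
        odd = ndimage.convolve(image, envelope * np.sin(2 * np.pi * xr / wavelength))
        return np.hypot(even, odd)

    def surround_suppressed(image, alpha=1.0, sigma=2.0):
        """Subtract a surround average of the energy from the energy itself."""
        energy = gabor_energy(image, sigma=sigma)
        # Difference-of-Gaussians filtering of the energy map approximates
        # averaging over an annular surround; negative values are clipped.
        surround = np.maximum(ndimage.gaussian_filter(energy, 4 * sigma)
                              - ndimage.gaussian_filter(energy, sigma), 0)
        return np.maximum(energy - alpha * surround, 0)

In textured regions the surround term is large, so responses to texture are suppressed while isolated object contours survive, matching the perceptual effects described above.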

Selected publications

Nicolai Petkov is the author or editor of several books and many other scientific publications.[1]

Books (monographs):

Articles (a selection):

Edited books:

Related Research Articles

<span class="mw-page-title-main">Handwriting recognition</span> Ability of a computer to receive and interpret intelligible handwritten input

Handwriting recognition (HWR), also known as handwritten text recognition (HTR), is the ability of a computer to receive and interpret intelligible handwritten input from sources such as paper documents, photographs, touch-screens and other devices. The image of the written text may be sensed "off line" from a piece of paper by optical scanning or intelligent word recognition. Alternatively, the movements of the pen tip may be sensed "on line", for example by a pen-based computer screen surface, a generally easier task as there are more clues available. A handwriting recognition system handles formatting, performs correct segmentation into characters, and finds the most plausible words.

<span class="mw-page-title-main">Image segmentation</span> Partitioning a digital image into segments

In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.
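
A toy example of the labeling view of segmentation, using thresholding followed by connected-component labeling (one of the simplest segmentation schemes):

    # Toy segmentation: threshold a grayscale image, then label connected
    # regions so that every pixel receives a segment id (0 = background).
    import numpy as np
    from scipy import ndimage

    image = np.array([[0, 0, 9, 9],
                      [0, 0, 9, 0],
                      [7, 0, 0, 0],
                      [7, 7, 0, 8]], dtype=float)

    foreground = image > 5                 # pixels sharing a characteristic
    labels, n = ndimage.label(foreground)  # one label per connected region
    print(n)                               # 3 segments
    print(labels)                          # per-pixel label map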

Scale-space theory is a framework for multi-scale signal representation developed by the computer vision, image processing and signal processing communities with complementary motivations from physics and biological vision. It is a formal theory for handling image structures at different scales, by representing an image as a one-parameter family of smoothed images, the scale-space representation, parametrized by the size of the smoothing kernel used for suppressing fine-scale structures. The parameter t in this family is referred to as the scale parameter, with the interpretation that image structures of spatial size smaller than about √t have largely been smoothed away in the scale-space level at scale t.
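
A minimal sketch of such a family, assuming Gaussian smoothing with standard deviation √t for scale parameter t:

    # Gaussian scale-space: a one-parameter family of progressively smoothed
    # versions of one image; structures smaller than about sqrt(t) vanish
    # as the scale parameter t grows.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    image = rng.random((128, 128))

    scales = [1.0, 4.0, 16.0, 64.0]        # scale parameter t
    family = [ndimage.gaussian_filter(image, sigma=np.sqrt(t)) for t in scales]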

Robert M. Haralick is a Distinguished Professor in Computer Science at the Graduate Center of the City University of New York (CUNY). Haralick is one of the leading figures in computer vision, pattern recognition, and image analysis. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and a Fellow and past president of the International Association for Pattern Recognition. Haralick is the 2016 winner of the King-Sun Fu Prize, "for contributions in image analysis, including remote sensing, texture analysis, mathematical morphology, consistent labeling, and system performance evaluation".

The following outline is provided as an overview of and topical guide to computer vision.

<span class="mw-page-title-main">Scale-space segmentation</span>

Scale-space segmentation or multi-scale segmentation is a general framework for signal and image segmentation, based on the computation of image descriptors at multiple scales of smoothing.

<span class="mw-page-title-main">Computer-aided diagnosis</span> Type of diagnosis assisted by computers

Computer-aided detection (CADe), also called computer-aided diagnosis (CADx), refers to systems that assist doctors in the interpretation of medical images. Imaging techniques in X-ray, MRI, endoscopy, and ultrasound diagnostics yield a great deal of information that the radiologist or other medical professional has to analyze and evaluate comprehensively in a short time. CAD systems scan digital images or videos for typical appearances and highlight conspicuous sections, such as possible diseases, in order to support the professional's decision.

Mean shift is a non-parametric feature-space mathematical analysis technique for locating the maxima of a density function, a so-called mode-seeking algorithm. Application domains include cluster analysis in computer vision and image processing.
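
A self-contained sketch of the mode-seeking iteration with a Gaussian kernel (the bandwidth and the sample data are illustrative choices):

    # Mean shift: repeatedly move a point to the kernel-weighted mean of the
    # data around it until it converges to a mode of the estimated density.
    import numpy as np

    def mean_shift(points, start, bandwidth=1.0, tol=1e-5, max_iter=100):
        x = np.asarray(start, dtype=float)
        for _ in range(max_iter):
            d2 = np.sum((points - x) ** 2, axis=1)      # squared distances
            w = np.exp(-d2 / (2 * bandwidth ** 2))      # Gaussian weights
            x_new = (w[:, None] * points).sum(axis=0) / w.sum()
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x

    # Two clusters; each start converges to the mode of its own cluster.
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(0, 0.5, (100, 2)),
                      rng.normal(5, 0.5, (100, 2))])
    print(mean_shift(data, start=[1.0, 1.0]))   # near (0, 0)
    print(mean_shift(data, start=[4.0, 4.0]))   # near (5, 5)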

Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations on the agents' actions and the environmental conditions. Since the 1980s, this research field has captured the attention of several computer science communities due to its strength in providing personalized support for many different applications and its connection to many different fields of study such as medicine, human-computer interaction, or sociology.

<span class="mw-page-title-main">Pedestrian detection</span> Computer technology

Pedestrian detection is an essential and significant task in any intelligent video surveillance system, as it provides the fundamental information for semantic understanding of video footage. It has an obvious extension to automotive applications due to the potential for improving safety systems. As of 2017, many car manufacturers offered pedestrian detection as an advanced driver-assistance system (ADAS) option.

<span class="mw-page-title-main">Visual odometry</span> Determining the position and orientation of a robot by analyzing associated camera images

In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images. It has been used in a wide variety of robotic applications, such as on the Mars Exploration Rovers.
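
A minimal monocular sketch using OpenCV (a common but by no means the only toolchain): match features between two consecutive frames and recover the relative camera motion. The frames img1, img2 and the intrinsic matrix K are assumed inputs; real pipelines add keyframing, scale handling and bundle adjustment.

    # Minimal monocular visual-odometry step: match ORB features between two
    # consecutive grayscale frames and recover the relative camera rotation R
    # and (up-to-scale) translation t. K is the 3x3 camera intrinsic matrix.
    import numpy as np
    import cv2

    def relative_pose(img1, img2, K):
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t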

In computer science, an evolving intelligent system is a fuzzy logic system that improves its own performance by evolving its rules. The technique is known from machine learning, in which external patterns are learned by an algorithm. Fuzzy-logic-based machine learning works with neuro-fuzzy systems.
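
A heavily simplified sketch of the evolving idea, not any specific published method: the rule base grows when a sample is not covered by an existing rule, and otherwise the nearest rule is refined. The coverage radius and update rate are arbitrary illustrative choices.

    # Simplified "evolving rules" sketch: add a rule when a new sample is far
    # from every rule center, otherwise nudge the nearest rule's output.
    import numpy as np

    class EvolvingRules:
        def __init__(self, radius=1.0, rate=0.1):
            self.centers, self.outputs = [], []
            self.radius, self.rate = radius, rate

        def update(self, x, y):
            x = np.asarray(x, dtype=float)
            d = [np.linalg.norm(x - c) for c in self.centers]
            if not d or min(d) > self.radius:   # no rule covers x: add one
                self.centers.append(x)
                self.outputs.append(float(y))
            else:                               # refine the nearest rule
                i = int(np.argmin(d))
                self.outputs[i] += self.rate * (y - self.outputs[i])

        def predict(self, x):
            # Fuzzy-style output: membership-weighted average over all rules.
            x = np.asarray(x, dtype=float)
            w = np.array([np.exp(-np.linalg.norm(x - c) ** 2)
                          for c in self.centers])
            return float(w @ np.array(self.outputs) / w.sum())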

COSFIRE stands for Combination Of Shifted FIlter REsponses.
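
COSFIRE filters were developed in Petkov's group. A schematic sketch of the core operation follows: blurred responses of orientation-selective subunits, taken at polar offsets (rho, phi) around a center, are shifted to that center and combined with a geometric mean. The subunit filters and the hand-made tuple list below are illustrative assumptions; real COSFIRE filters are configured automatically from a prototype pattern.

    # Schematic COSFIRE-style filter: blur the responses of orientation-
    # selective subunits, shift each from its polar offset (rho, phi) to the
    # filter center, and combine them with a geometric mean.
    import numpy as np
    from scipy import ndimage

    def oriented_response(image, theta, sigma=2.0):
        """Rectified directional derivative of a Gaussian-smoothed image."""
        dy = ndimage.gaussian_filter(image, sigma, order=(1, 0))
        dx = ndimage.gaussian_filter(image, sigma, order=(0, 1))
        return np.maximum(np.cos(theta) * dx + np.sin(theta) * dy, 0)

    def cosfire(image, tuples, blur=2.0):
        shifted = []
        for rho, phi, theta in tuples:
            r = ndimage.gaussian_filter(oriented_response(image, theta), blur)
            dy, dx = rho * np.sin(phi), rho * np.cos(phi)
            shifted.append(ndimage.shift(r, (-dy, -dx)))  # subunit -> center
        return np.prod(np.stack(shifted), axis=0) ** (1.0 / len(tuples))

    # Example configuration: two subunits, e.g. the two edges of a corner.
    tuples = [(5.0, 0.0, np.pi / 2), (5.0, np.pi / 2, 0.0)]

The multiplicative (geometric-mean) combination makes the filter respond only where all configured subunits respond, giving selectivity to the whole arrangement rather than to its parts.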

A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns features by itself via filter optimization. This type of deep learning network has been applied to process and make predictions from many different types of data, including text, images and audio. Convolution-based networks are the de facto standard in deep learning approaches to computer vision and image processing, and have only recently been replaced, in some cases, by newer architectures such as the transformer. Vanishing and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, each neuron in a fully connected layer would require 10,000 weights to process an image of 100 × 100 pixels, whereas a convolutional layer needs only the 25 shared weights of a 5 × 5 kernel to process the image in 5 × 5 tiles. Higher-layer features are extracted from wider context windows, compared to lower-layer features.
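
The weight-count comparison above as a worked example:

    # Worked example of the weight counts mentioned above: one fully
    # connected neuron needs a weight per input pixel, while a convolutional
    # layer reuses one small kernel across all image positions.
    h, w = 100, 100
    dense_weights_per_neuron = h * w           # 10,000 weights
    conv_shared_weights = 5 * 5                # one shared 5x5 kernel
    print(dense_weights_per_neuron, conv_shared_weights)   # 10000 25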

<span class="mw-page-title-main">Amir Hussain (cognitive scientist)</span>

Amir Hussain is a cognitive scientist, director of the Cognitive Big Data and Cybersecurity (CogBID) Research Lab at Edinburgh Napier University, and a professor of computing science. He is the founding Editor-in-Chief of Springer Nature's Cognitive Computation journal and of the newer Big Data Analytics journal, as well as of two Springer book series, Socio-Affective Computing and Cognitive Computation Trends. He also serves on the editorial boards of a number of other leading journals, including as Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems, the IEEE Transactions on Systems, Man, and Cybernetics: Systems, and the IEEE Computational Intelligence Magazine.

<span class="mw-page-title-main">Event camera</span> Type of imaging sensor

An event camera, also known as a neuromorphic camera, silicon retina or dynamic vision sensor, is an imaging sensor that responds to local changes in brightness. Event cameras do not capture images using a shutter as conventional (frame) cameras do. Instead, each pixel inside an event camera operates independently and asynchronously, reporting changes in brightness as they occur, and staying silent otherwise.
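
For illustration, a frame-based imitation of event generation (a real sensor produces events asynchronously per pixel with microsecond timestamps; the contrast threshold here is an arbitrary choice):

    # Toy event generation: compare successive frames in log intensity and
    # emit an event wherever the change exceeds a contrast threshold.
    import numpy as np

    def events_from_frames(prev_frame, next_frame, threshold=0.15):
        d = (np.log1p(next_frame.astype(float))
             - np.log1p(prev_frame.astype(float)))
        ys, xs = np.nonzero(np.abs(d) > threshold)
        polarity = np.sign(d[ys, xs]).astype(int)   # +1 brighter, -1 darker
        return list(zip(xs, ys, polarity))          # (x, y, polarity) events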

Mark S. Nixon is an author, researcher, editor and academic. He is a former president of the IEEE Biometrics Council and former Vice-Chair of the IEEE PSPB. He retired from his position as Professor of Electronics and Computer Science at the University of Southampton in 2019.

References

  1. Nicolai Petkov at DBLP Bibliography Server