| Yasuo Matsuyama | |
|---|---|
| Born | March 23, 1947, Yokohama, Japan |
| Nationality | Japanese |
| Alma mater | Waseda University (Dr. Engineering, 1974); Stanford University (PhD, 1978) |
| Known for | Alpha-EM algorithm |
| Scientific career | |
| Fields | Machine learning and human-aware information processing |
| Institutions | Waseda University, Stanford University |
| Theses | Studies on Stochastic Modeling of Neurons (Dr. Engineering, Waseda University); Process Distortion Measures and Signal Processing (PhD, Stanford University) |
| Doctoral advisors | Jun'ichi Takagi, Kageo Akizuki, and Katsuhiko Shirai (Dr. Engineering, Waseda University); Robert M. Gray (PhD, Stanford University) |
| Website | http://www.f.waseda.jp/yasuo2/en/index.html |
Yasuo Matsuyama (born March 23, 1947) is a Japanese researcher in machine learning and human-aware information processing.
Matsuyama is a Professor Emeritus and an Honorary Researcher of the Research Institute of Science and Engineering of Waseda University.
Matsuyama received his bachelor's, master's, and doctoral degrees in electrical engineering from Waseda University in 1969, 1971, and 1974, respectively. His Doctor of Engineering dissertation is titled Studies on Stochastic Modeling of Neurons. [1] In it, he contributed to the modeling of spiking neurons with stochastic pulse-frequency modulation. His advisors were Jun'ichi Takagi, Kageo Akizuki, and Katsuhiko Shirai.
Upon completing his doctoral work at Waseda University, he was dispatched to the United States as a Japan-U.S. exchange fellow under the joint program of the Japan Society for the Promotion of Science, the Fulbright Program, and the Institute of International Education. Through this exchange program, he completed his Ph.D. at Stanford University in 1978 with a dissertation titled Process Distortion Measures and Signal Processing. [2] There, he contributed to the theory of probabilistic distortion measures and its applications to speech encoding with spectral clustering or vector quantization. His advisor was Robert M. Gray.
From 1977 to 1978, Matsuyama was a research assistant at the Information Systems Laboratory of Stanford University.
From 1979 to 1996, he was a faculty member at Ibaraki University, Japan; his final position there was professor and chairperson of the Information and System Sciences Major.
From 1996, he was a professor in the Department of Computer Science and Engineering at Waseda University. From 2011 to 2013, he was the director of the Media Network Center of Waseda University. During the Tōhoku earthquake and tsunami of March 11, 2011, he was in charge of the safety inquiry covering 65,000 students, staff, and faculty.
Since 2017, Matsuyama has been a Professor Emeritus and an Honorary Researcher of the Research Institute of Science and Engineering of Waseda University. Since 2018, he has served as acting president of the Waseda Electrical Engineering Society.
Matsuyama's work on machine learning and human-aware information processing rests on dual foundations. His Ph.D. studies on competitive learning (vector quantization) at Stanford University led to his subsequent contributions to machine learning, while his Dr. Engineering studies on stochastic spiking neurons [3] [4] at Waseda University set off applications of biological signals to machine learning. His works can thus be grouped according to these dual foundations.
Statistical machine learning algorithms: The use of the alpha-logarithmic likelihood ratio in learning cycles generated the alpha-EM algorithm (alpha-expectation-maximization algorithm). [5] Because the alpha-logarithm includes the usual logarithm as a special case, the alpha-EM algorithm contains the EM algorithm (more precisely, the log-EM algorithm). The speedup of the alpha-EM over the log-EM stems from its ability to utilize past information. Such use of messages from the past brought about the alpha-HMM estimation algorithm (alpha-hidden Markov model estimation algorithm), [6] a generalized and faster version of the hidden Markov model estimation algorithm (HMM estimation algorithm).
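For orientation, a sketch of the quantity involved, using one common parameterization of the alpha-logarithm (sign and scaling conventions vary across papers, so this is indicative rather than definitive):

```latex
% One common parameterization of the alpha-logarithm (conventions vary):
L^{(\alpha)}(x) = \frac{2}{1+\alpha}\left( x^{(1+\alpha)/2} - 1 \right), \qquad \alpha \neq -1,
% which recovers the ordinary logarithm in the limit:
\lim_{\alpha \to -1} L^{(\alpha)}(x) = \log x .
```

Substituting this generalized logarithm for the ordinary one in the likelihood ratio yields a family of surrogate objectives indexed by alpha, of which the log-EM iteration is the special case.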
Competitive learning on empirical data: Starting from his speech compression studies at Stanford, Matsuyama developed generalized competitive learning algorithms: harmonic competition [7] and multiple descent cost competition. [8] The former realizes multiple-object optimization; the latter admits deformable centroids. Both algorithms generalize batch-mode vector quantization (simply called vector quantization) and successive-mode vector quantization (also called learning vector quantization); a generic sketch of these two modes follows.
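As a generic illustration of these two modes (not an implementation of harmonic competition or multiple descent cost competition), a minimal sketch in Python, with illustrative codebook size and learning rate:

```python
import numpy as np

def batch_vq(data, codebook, n_iters=20):
    """Batch-mode VQ: k-means style recomputation of all centroids per pass."""
    for _ in range(n_iters):
        # Assign each vector to its nearest centroid.
        dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned vectors.
        for k in range(len(codebook)):
            members = data[labels == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook

def successive_vq(data, codebook, lr=0.05, n_passes=5):
    """Successive-mode VQ: online, one winning centroid moves per sample."""
    for _ in range(n_passes):
        for x in data:
            winner = np.linalg.norm(codebook - x, axis=1).argmin()
            codebook[winner] += lr * (x - codebook[winner])  # pull winner toward x
    return codebook

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))
print(batch_vq(data.copy(), data[:4].copy()))
print(successive_vq(data.copy(), data[:4].copy()))
```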
A hierarchy from the alpha-EM to vector quantization: Matsuyama contributed to generating and identifying the hierarchy of the above algorithms, including, within the class of vector quantization and competitive learning, a hierarchy of VQ methods.
Applications to human-aware information processing: These dual foundations of his work led to applications in human-aware information processing.
The above theories and applications serve as contributions to IoCT (the Internet of Collaborative Things) and IoXT (http://www.asc-events.org/ASC17/Workshop.php).
In machine learning, a neural network is a model inspired by the structure and function of biological neural networks in animal brains.
Speech processing is the study of speech signals and of methods for processing them. The signals are usually handled in a digital representation, so speech processing can be regarded as a special case of digital signal processing applied to speech signals. Aspects of speech processing include the acquisition, manipulation, storage, transfer, and output of speech signals. Speech processing tasks include speech recognition, speech synthesis, speaker diarization, speech enhancement, and speaker recognition.
Speech coding is an application of data compression to digital audio signals containing speech. It combines speech-specific parameter estimation, based on audio signal processing techniques, with generic data compression algorithms to represent the resulting modeled parameters in a compact bitstream.
Signal processing is an electrical engineering subfield that focuses on analyzing, modifying, and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry data, and scientific measurements. Signal processing techniques are used to optimize transmissions, improve digital storage efficiency, correct distorted signals, improve subjective video quality, and detect or pinpoint components of interest in a measured signal.
Vector quantization (VQ) is a classical quantization technique from signal processing that allows the modeling of probability density functions by the distribution of prototype vectors. Developed in the early 1980s by Robert M. Gray, it was originally used for data compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by its centroid point, as in k-means and some other clustering algorithms. In simpler terms, vector quantization chooses a set of points to represent a larger set of points.
A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional representation of a higher-dimensional data set while preserving the topological structure of the data. For example, a data set with many variables measured across many observations could be represented as clusters of observations with similar values for the variables. These clusters could then be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze.
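A minimal sketch of one SOM training step under common conventions, assuming a 2-D grid and a Gaussian neighborhood; grid size, learning rate, and neighborhood width are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 3            # 10x10 map of 3-D weight vectors
weights = rng.random((grid_h, grid_w, dim))
# Precompute grid coordinates for neighborhood distances.
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def som_step(x, weights, lr=0.5, sigma=2.0):
    """One SOM update: find the best-matching unit, pull its neighborhood toward x."""
    # Best-matching unit (BMU): grid cell whose weight vector is closest to x.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(dists.argmin(), dists.shape)
    # Gaussian neighborhood centered on the BMU, measured in grid space.
    grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))
    # Move every unit toward x, weighted by its neighborhood strength.
    weights += lr * h[..., None] * (x - weights)
    return weights

for x in rng.random((1000, dim)):          # in practice lr and sigma decay over time
    weights = som_step(x, weights)
```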
Unsupervised learning is a framework in machine learning where, in contrast to supervised learning, algorithms learn patterns exclusively from unlabeled data. Other frameworks in the spectrum of supervisions include weak- or semi-supervision, where a small portion of the data is tagged, and self-supervision. Some researchers consider self-supervised learning a form of unsupervised learning.
Stochastic gradient descent is an iterative method for optimizing an objective function with suitable smoothness properties. It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient by an estimate thereof. Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate.
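A minimal sketch for least-squares linear regression, replacing the full gradient with the gradient at one randomly drawn sample; step size and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(5)
lr = 0.01
for t in range(20000):
    i = rng.integers(len(X))              # draw one sample at random
    err = X[i] @ w - y[i]                 # residual on that single sample
    w -= lr * err * X[i]                  # gradient of (1/2)*err^2 w.r.t. w
print(w)  # approaches true_w in expectation
```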
Recurrent neural networks (RNNs) are a class of artificial neural networks for sequential data processing. Unlike feedforward neural networks, which process data in a single pass, RNNs process data across multiple time steps, making them well-adapted for modelling and processing text, speech, and time series.
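A minimal sketch of the vanilla RNN recurrence, showing how the hidden state carries information across time steps; dimensions and random (untrained) weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 4, 8
W_x = rng.normal(scale=0.1, size=(d_hid, d_in))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(d_hid, d_hid))  # hidden-to-hidden (recurrent) weights
b = np.zeros(d_hid)

def rnn_forward(xs):
    """Process a sequence one step at a time, reusing the hidden state."""
    h = np.zeros(d_hid)
    states = []
    for x in xs:                                  # unlike a feedforward net,
        h = np.tanh(W_x @ x + W_h @ h + b)        # h depends on all past inputs
        states.append(h)
    return np.array(states)

print(rnn_forward(rng.normal(size=(10, d_in))).shape)  # (10, 8)
```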
In wireless communications, channel state information (CSI) refers to the known channel properties of a communication link. This information describes how a signal propagates from the transmitter to the receiver and represents the combined effect of, for example, scattering, fading, and power decay with distance. The procedure of acquiring this information is called channel estimation. CSI makes it possible to adapt transmissions to current channel conditions, which is crucial for achieving reliable communication with high data rates in multi-antenna systems.
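As a sketch of what channel estimation computes, here is a standard pilot-based least-squares formulation in generic notation (not tied to any particular system):

```latex
% Received pilots Y, known pilot matrix X, noise N:
Y = H X + N, \qquad
% Least-squares channel estimate:
\hat{H} = Y X^{H} \left( X X^{H} \right)^{-1}.
```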
Bernard Widrow is a U.S. professor of electrical engineering at Stanford University. He is the co-inventor, with his then doctoral student Ted Hoff, of the Widrow–Hoff least mean squares (LMS) adaptive filter algorithm. The LMS algorithm led to the ADALINE and MADALINE artificial neural networks and to the backpropagation technique. He made other fundamental contributions to the development of signal processing in the fields of geophysics, adaptive antennas, and adaptive filtering.
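A minimal sketch of the Widrow–Hoff LMS update for an adaptive FIR filter; the filter length, step size, and the "unknown system" being identified are illustrative:

```python
import numpy as np

def lms(x, d, n_taps=8, mu=0.05):
    """Adapt FIR weights w so that the filter output tracks the desired signal d."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]  # x[n], x[n-1], ... most recent first
        y[n] = w @ window                       # filter output
        e = d[n] - y[n]                         # instantaneous error
        w += mu * e * window                    # Widrow-Hoff LMS update
    return w, y

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
d = np.convolve(x, [0.5, -0.3, 0.2], mode="full")[:len(x)]  # unknown system
w, y = lms(x, d)
print(w[:3])  # should approach [0.5, -0.3, 0.2]
```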
An echo state network (ESN) is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer. The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behavior is non-linear, the only weights modified during training are those of the synapses connecting the hidden neurons to the output neurons. Thus, the error function is quadratic with respect to the parameter vector, and training reduces to solving a linear system.
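A minimal echo state network sketch reflecting these points: the sparse reservoir weights are random and fixed, and only the linear readout is trained, here by ridge regression; sizes, spectral radius, and regularization are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.05)  # sparse
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # scale spectral radius below 1

def run_reservoir(u):
    """Drive the fixed random reservoir with input u; collect hidden states."""
    x = np.zeros(n_res)
    states = np.zeros((len(u), n_res))
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t] + W @ x)       # reservoir update, weights fixed
        states[t] = x
    return states

# Teach the readout to predict the next input value (one-step prediction).
u = np.sin(np.arange(2000) * 0.1).reshape(-1, 1)
S, target = run_reservoir(u)[:-1], u[1:, 0]
# Ridge regression: the only trained weights are the readout W_out.
reg = 1e-6
W_out = np.linalg.solve(S.T @ S + reg * np.eye(n_res), S.T @ target)
print(np.abs(S @ W_out - target).max())        # small one-step prediction error
```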
Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
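A minimal sketch of this threshold behavior using a leaky integrate-and-fire neuron, one of the simplest spiking neuron models; the time constant, threshold, and input here are illustrative:

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential v integrates input,
    leaks toward rest, and emits a spike whenever it crosses the threshold."""
    v = 0.0
    spikes = []
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-v + i_t)       # leaky integration
        if v >= v_thresh:                # threshold crossing -> fire
            spikes.append(t)
            v = v_reset                  # reset after the spike
    return spikes

rng = np.random.default_rng(0)
print(lif_neuron(rng.uniform(0, 2.5, size=200)))  # spike times
```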
In various science and engineering applications, such as independent component analysis, image analysis, genetic analysis, speech recognition, manifold learning, and time delay estimation, it is useful to estimate the differential entropy of a system or process from a set of observations.
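A minimal plug-in sketch: estimate the density with a histogram and evaluate the entropy on it (the bin count is illustrative; nearest-neighbor estimators are often preferred in practice):

```python
import numpy as np

def hist_entropy(samples, bins=50):
    """Histogram plug-in estimate of differential entropy, in nats."""
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)
    p = counts / counts.sum()            # bin probabilities
    nz = p > 0
    # h = -sum p_i * log(p_i / width_i), using the density value on each bin
    return -np.sum(p[nz] * np.log(p[nz] / widths[nz]))

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
print(hist_entropy(x))                   # ~0.5*log(2*pi*e) = 1.4189 for N(0,1)
```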
Deep learning is the subset of machine learning methods based on neural networks with representation learning. The adjective "deep" refers to the use of multiple layers in the network. Methods used can be either supervised, semi-supervised or unsupervised.
In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
In communications technology, the technique of compressed sensing (CS) may be applied to the processing of speech signals under certain conditions. In particular, CS can be used to reconstruct a sparse vector from a smaller number of measurements, provided the signal can be represented in a sparse domain, that is, a domain in which only a few coefficients have non-zero values.
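A minimal statement of the recovery problem in generic compressed-sensing notation (not specific to speech):

```latex
% Basis pursuit: recover sparse x from m < n linear measurements y = \Phi x
\hat{x} = \arg\min_{x} \; \|x\|_{1}
\quad \text{subject to} \quad y = \Phi x ,
% with exact recovery guaranteed under conditions on \Phi
% (e.g., the restricted isometry property).
```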
A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns features by itself via filter (kernel) optimization. Vanishing and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, for each neuron in a fully connected layer, 10,000 weights would be required to process an image sized 100 × 100 pixels. A cascaded 5 × 5 convolution kernel, by contrast, needs only 25 learnable weights, shared across all 5 × 5 tiles of the image. Higher-layer features are extracted from wider context windows, compared to lower-layer features.
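A sketch making the weight-sharing arithmetic concrete: one dense unit on a 100 × 100 image needs 10,000 weights, while a single 5 × 5 kernel reuses 25 weights at every tile (a naive implementation, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((100, 100))
kernel = rng.normal(size=(5, 5))      # 25 shared weights, reused at every tile

def conv2d_valid(img, k):
    """Naive 'valid' 2-D convolution (cross-correlation, as in CNN practice):
    the same 25 weights scan every 5x5 tile of the image."""
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

print(conv2d_valid(image, kernel).shape)  # (96, 96) feature map from 25 weights
print(100 * 100)                          # 10000 weights for one dense unit
```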