Michael Elad | |
---|---|
Born | December 10, 1963 |
Nationality | Israel |
Alma mater | Technion |
Known for | Sparse Representations, K-SVD, Image Super-Resolution, Diffusion Models |
Scientific career | |
Fields | Engineering, Computer Science, Mathematics, Statistics |
Institutions | Technion, Stanford University |
Doctoral advisor | Arie Feuer |
Doctoral students | Michal Aharon |
Michael Elad (born December 10, 1963) is an Israeli computer scientist who is a professor of Computer Science at the Technion - Israel Institute of Technology. His work includes contributions in the fields of sparse representations and generative AI, and the deployment of these ideas to algorithms and applications in signal processing, image processing and machine learning.
Elad holds a B.Sc. (1986), M.Sc. (1988) and D.Sc. (1997) in electrical engineering from the Technion - Israel Institute of Technology. His M.Sc., under the guidance of David Malah, focused on video compression algorithms; his D.Sc., on super-resolution algorithms for image sequences, was guided by Arie Feuer.
After several years (1997–2001) of industrial research at Hewlett-Packard Lab Israel and at Jigami, Elad took a research associate position at Stanford University from 2001 to 2003, working closely with Gene Golub (CS-Stanford), Peyman Milanfar (EE-UCSC) and David Donoho (Statistics-Stanford).
In 2003, Elad assumed a tenure-track faculty position in the Technion's computer science department. He was tenured and promoted to associate professor in 2007, and promoted to full professor in 2010. He has also held various editorial positions during his academic career.
Elad works in the fields of signal processing, image processing and machine learning, specializing in inverse problems, sparse representations and generative AI. Elad has authored hundreds of technical publications in these fields. Among these, he is a co-creator of the K-SVD algorithm, [1] together with Michal Aharon and Alfred Bruckstein, and he is the author of the 2010 book [2] "Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing".
In 2017, Elad and Yaniv Romano (his PhD student) created a specialized MOOC on sparse representation theory, offered on edX.
During the years 2015–2018, Elad headed the Rothschild-Technion Program for Excellence. This is an undergraduate program at the Technion, meant for exceptional students, with an emphasis on tailored and challenging study tracks.[citation needed]
Robert Gray Gallager is an American electrical engineer known for his work on information theory and communications networks.
Abraham Lempel was an Israeli computer scientist and one of the fathers of the LZ family of lossless data compression algorithms.
Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete dictionary D. The basic idea is to approximately represent a signal f from a Hilbert space as a weighted sum of finitely many functions (atoms) taken from D. An approximation with N atoms has the form f ≈ a_1 g_1 + a_2 g_2 + … + a_N g_N, where each g_n is an atom drawn from D and each a_n is a scalar weight.
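The greedy selection loop behind MP can be sketched as follows; this is a minimal NumPy illustration on a synthetic dictionary and signal, not any particular library's implementation:

```python
import numpy as np

def matching_pursuit(f, D, n_steps):
    """Greedy MP: at each step pick the dictionary atom most correlated
    with the current residual and subtract its projection.
    D has unit-norm columns (atoms); f is the signal to approximate."""
    residual = f.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_steps):
        correlations = D.T @ residual          # inner products <residual, g_k>
        k = np.argmax(np.abs(correlations))    # best-matching atom
        a = correlations[k]
        coeffs[k] += a                         # accumulate its weight
        residual -= a * D[:, k]                # remove that component
    return coeffs, residual

# Toy demo: an over-complete dictionary of 8 unit-norm atoms in R^4.
rng = np.random.default_rng(0)
D = rng.standard_normal((4, 8))
D /= np.linalg.norm(D, axis=0)
f = 2.0 * D[:, 3] - 1.5 * D[:, 6]              # a signal built from 2 atoms
coeffs, residual = matching_pursuit(f, D, n_steps=10)
```

The residual norm is non-increasing step by step, which is what makes the greedy scheme safe even on coherent dictionaries, though convergence can be slow.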
Compressed sensing is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible. The first one is sparsity, which requires the signal to be sparse in some domain. The second one is incoherence, which is typically enforced via the restricted isometry property, and which is sufficient for sparse signals. Compressed sensing has applications in, for example, MRI, where the incoherence condition is typically satisfied.
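A compact way to see the recovery principle is to sense a sparse signal with a random Gaussian matrix (which is incoherent with high probability) and reconstruct it greedily; the sketch below uses orthogonal matching pursuit (OMP), one standard recovery algorithm, with all sizes chosen only for illustration:

```python
import numpy as np

def omp(y, A, sparsity):
    """Orthogonal matching pursuit: greedily grow a support set, then
    re-fit the signal on that support by least squares each iteration."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(A.T @ residual)))  # atom best matching residual
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef                            # optimal fit on the support
        residual = y - A @ x
    return x

rng = np.random.default_rng(1)
n, m, s = 30, 100, 3                       # 30 measurements of a length-100 signal
A = rng.standard_normal((n, m)) / np.sqrt(n)   # random (incoherent) sensing matrix
x_true = np.zeros(m)
x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]     # 3-sparse signal
y = A @ x_true                              # far fewer samples than the ambient dimension
x_hat = omp(y, A, sparsity=s)
```

With 30 random measurements of a 3-sparse length-100 signal, OMP typically recovers the support exactly, which is the "fewer samples than Nyquist" phenomenon in miniature.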
Sparse approximation theory deals with sparse solutions for systems of linear equations. Techniques for finding these solutions and exploiting them in applications have found wide use in image processing, signal processing, machine learning, medical imaging, and more.
In mathematics, more specifically in linear algebra, the spark of a matrix A is the smallest integer k such that there exists a set of k columns in A which are linearly dependent. If all the columns are linearly independent, spark(A) is usually defined to be 1 more than the number of rows. The concept of matrix spark finds applications in error-correction codes, compressive sensing, and matroid theory, and provides a simple criterion for maximal sparsity of solutions to a system of linear equations.
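The definition translates directly into a brute-force computation; the sketch below checks column subsets in increasing size (exponential cost, so illustration only, with a hand-built example matrix):

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Brute-force spark: size of the smallest linearly dependent
    set of columns. Exponential in the number of columns."""
    m_rows, n_cols = A.shape
    for k in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), k):
            # k columns are dependent iff their submatrix has rank < k
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return m_rows + 1   # all columns independent: convention stated above

# The 3rd column equals the sum of the first two, so three columns
# are dependent while every pair of columns is independent.
A = np.array([[1., 0., 1., 0.],
              [0., 1., 1., 0.],
              [0., 0., 0., 1.]])
print(spark(A))   # 3
```

Contrast with rank: the rank of this matrix is 3 (the largest independent set), while the spark is also 3 (the smallest dependent set), and in general the two can differ widely.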
Ali H. Sayed is the dean of engineering at EPFL, where he teaches and conducts research on Adaptation, Learning, Statistical Signal Processing, and Signal Processing for Communications. He is the Director of the EPFL Adaptive Systems Laboratory. He has authored several books on estimation and filtering theories, including the textbook Adaptive Filters, published by Wiley & Sons in 2008. Professor Sayed received the degrees of Engineer and Master of Science in Electrical Engineering from the University of São Paulo, Brazil, in 1987 and 1989, respectively, and the Doctor of Philosophy degree in electrical engineering from Stanford University in 1992.
In linear algebra, the coherence or mutual coherence of a matrix A is defined as the maximum absolute value of the cross-correlations between the columns of A.
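This quantity is a one-liner to compute once the columns are normalized; a small NumPy sketch with synthetic examples:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    A = A / np.linalg.norm(A, axis=0)   # unit-norm columns
    G = np.abs(A.T @ A)                 # |cross-correlations| (Gram matrix)
    np.fill_diagonal(G, 0.0)            # ignore each column's self-correlation
    return G.max()

print(mutual_coherence(np.eye(3)))                    # orthonormal columns: 0.0
print(mutual_coherence(np.array([[1., 1.],
                                 [0., 1e-8]])))       # nearly parallel columns: ~1
```

Low coherence is the practical proxy for incoherence conditions in sparse recovery: the smaller the value, the more distinguishable the atoms, and the stronger the guarantees for greedy and l1-based algorithms.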
Lee Swindlehurst is an electrical engineer who has made contributions in sensor array signal processing for radar and wireless communications, detection and estimation theory, and system identification, and has received many awards in these areas. He is currently a Professor of Electrical Engineering and Computer Science at the University of California at Irvine.
In applied mathematics and statistics, basis pursuit denoising (BPDN) refers to a mathematical optimization problem of the form min_x (1/2)·‖y − Ax‖₂² + λ·‖x‖₁, where y is the observed signal, A is a dictionary (or measurement) matrix, and the parameter λ trades reconstruction error against the sparsity of x.
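One standard solver for this objective is iterative shrinkage-thresholding (ISTA): a gradient step on the quadratic term followed by soft-thresholding, the proximal operator of the l1 term. A minimal sketch on synthetic data (ISTA is one of several solvers, chosen here only for brevity):

```python
import numpy as np

def ista(y, A, lam, n_iters=500):
    """ISTA for BPDN: minimize 0.5*||y - Ax||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)               # gradient of the quadratic term
        z = x - grad / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50)) / np.sqrt(20)
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -1.0]
y = A @ x_true + 0.01 * rng.standard_normal(20)  # noisy measurements
x_hat = ista(y, A, lam=0.05)
```

With the step size 1/L, each ISTA iteration is guaranteed not to increase the objective, so the iterates steadily drive down the BPDN cost while the thresholding keeps the solution sparse.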
Professor Shlomo Shamai (Shitz) (Hebrew: שלמה שמאי (שיץ) ) is a distinguished professor at the Department of Electrical Engineering at the Technion − Israel Institute of Technology. Professor Shamai is an information theorist and winner of the 2011 Shannon Award.
In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
In applied mathematics, k-SVD is a dictionary learning algorithm for creating a dictionary for sparse representations, via a singular value decomposition approach. k-SVD is a generalization of the k-means clustering method, and it works by iteratively alternating between sparse coding the input data based on the current dictionary, and updating the atoms in the dictionary to better fit the data. It is structurally related to the expectation–maximization (EM) algorithm. k-SVD can be found widely in use in applications such as image processing, audio processing, biology, and document analysis.
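The alternation described above can be condensed into a toy implementation: greedy sparse coding, then a rank-1 SVD update of each atom against the error it must explain. This is a simplified sketch on synthetic data, not the published algorithm in full (the coding stage here is plain MP rather than OMP):

```python
import numpy as np

def sparse_code(Y, D, s):
    """Per-column greedy (MP-style) coding with s atoms per signal."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        r = Y[:, j].copy()
        for _ in range(s):
            k = int(np.argmax(np.abs(D.T @ r)))
            a = D[:, k] @ r
            X[k, j] += a
            r -= a * D[:, k]
    return X

def ksvd(Y, n_atoms, s, n_iters=10, seed=0):
    """Toy k-SVD: alternate sparse coding with per-atom SVD updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iters):
        X = sparse_code(Y, D, s)
        for k in range(n_atoms):
            users = np.nonzero(X[k])[0]          # signals that use atom k
            if users.size == 0:
                continue                         # atom unused this round
            X[k, users] = 0.0
            E = Y[:, users] - D @ X[:, users]    # error with atom k removed
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                    # best rank-1 fit -> new atom
            X[k, users] = S[0] * Vt[0]           # ...and its coefficients
    return D, X

# Learn 6 atoms from 200 signals that are sparse in a hidden basis.
rng = np.random.default_rng(3)
true_D = np.linalg.qr(rng.standard_normal((8, 8)))[0][:, :6]
Y = true_D @ (rng.standard_normal((6, 200)) * (rng.random((6, 200)) < 0.3))
D, X = ksvd(Y, n_atoms=6, s=2)
err = np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)
```

Updating the coefficients of an atom's users together with the atom itself is what distinguishes k-SVD from a plain alternating scheme, and each such update can only decrease the overall representation error.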
Sparse dictionary learning is a representation learning method which aims at finding a sparse representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves. These elements are called atoms and they compose a dictionary. Atoms in the dictionary are not required to be orthogonal, and they may be an over-complete spanning set. This problem setup also allows the dimensionality of the signals being represented to be higher than the one of the signals being observed. The above two properties lead to having seemingly redundant atoms that allow multiple representations of the same signal but also provide an improvement in sparsity and flexibility of the representation.
Ümit V. Çatalyürek is a professor of computer science at the Georgia Institute of Technology, and an adjunct professor in the Department of Biomedical Informatics at the Ohio State University. He is known for his work on graph analytics, parallel algorithms for scientific applications, data-intensive computing, and large-scale genomic and biomedical applications. He was the director of the High Performance Computing Lab at the Ohio State University. He was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2016 for contributions to combinatorial scientific computing and parallel computing.
Yonina C. Eldar is an Israeli professor of electrical engineering at the Weizmann Institute of Science, known for her pioneering work on sub-Nyquist sampling.
Mário A. T. Figueiredo is a Portuguese engineer, academic, and researcher. He is an IST Distinguished Professor and holds the Feedzai chair of machine learning at IST, University of Lisbon.
Michal Aharon is an Israeli computer scientist known for her research on sparse dictionary learning, image denoising, and the K-SVD algorithm in machine learning. She is a researcher on advertisement ranking for Yahoo! in Haifa.
Joseph Tabrikian is an Israeli professor in the School of Electrical and Computer Engineering at Ben-Gurion University of the Negev. He is the founder and former head of the School. He is a fellow of IEEE “For contributions to estimation theory and Multiple-Input Multiple-Output radars.”
Moshe Sidi is a professor emeritus in the Faculty of Electrical and Computer Engineering at the Technion - Israel Institute of Technology.