Yixin Chen | |
---|---|
Born | 16 June 1979 |
Awards | Fellow, Association for the Advancement of Artificial Intelligence; Fellow, Institute of Electrical and Electronics Engineers; Fellow, Asia-Pacific Artificial Intelligence Association |
Academic background | |
Education | B.Sc. computer science; M.Sc. computer science; Ph.D. computer science |
Alma mater | University of Science and Technology of China; University of Illinois at Urbana-Champaign |
Academic work | |
Discipline | Computer science, machine learning, deep learning |
Institutions | Washington University in St. Louis |
Yixin Chen is a professor of computer science and engineering at Washington University in St. Louis. [1] He is known for his contributions to deep learning systems.
Chen is an IEEE Fellow and an AAAI Fellow.
Chen earned a bachelor's degree in computer science from the University of Science and Technology of China in 1999 and a master's degree in computer science from the University of Illinois at Urbana-Champaign in 2001. He then completed his Ph.D. in computer science at the University of Illinois at Urbana-Champaign in 2005. [2]
Chen started his academic career at Washington University in St. Louis in 2005. As of 2025, he is a professor in the Department of Computer Science and Engineering. He is the director of the Center for Collaborative Human-AI Learning and Operation (HALO) at Washington University. [1]
Chen's research focuses on machine learning, deep learning, and data mining. He has made contributions to artificial intelligence in healthcare, optimization algorithms, data mining, and computational biomedicine.
Chen has conducted research on the compactness and applicability of deep neural networks (DNNs). He proposed the concept and architecture of lightweight DNNs. His group invented the HashedNets architecture, which compresses prohibitively large DNNs into much smaller networks using a weight-sharing scheme. [3]
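The following is a minimal NumPy sketch of the weight-sharing idea behind HashedNets: a small vector of real parameters is expanded into a full virtual weight matrix by mapping each weight position to a shared parameter. The mapping here uses a seeded random generator as a stand-in for the hash function of the published method, and all sizes and names are illustrative assumptions.

```python
import numpy as np


def hashed_weight_matrix(w, in_dim, out_dim, seed=0):
    """Expand a small vector of shared parameters `w` into a full
    (in_dim x out_dim) virtual weight matrix. Each position (i, j) maps
    to an index into `w`, so many connections share one real parameter,
    which is the core weight-sharing idea behind HashedNets.
    Illustrative sketch: a real implementation derives the indices from
    a hash function and trains `w` by back-propagating through them."""
    rng = np.random.default_rng(seed)                       # stand-in for a hash function
    idx = rng.integers(0, w.size, size=(in_dim, out_dim))   # shared-parameter index per weight
    sign = rng.choice([-1.0, 1.0], size=(in_dim, out_dim))  # sign hash reduces correlation
    return w[idx] * sign


# A 256 -> 128 layer (32,768 virtual weights) backed by only 1,024 real parameters.
w = np.random.randn(1024) * 0.01
V = hashed_weight_matrix(w, 256, 128)
x = np.random.randn(256)
y = x @ V                                                   # forward pass through the hashed layer
print(V.shape, y.shape)                                     # (256, 128) (128,)
```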
Chen also developed a compression framework for convolutional neural networks (CNNs). His lab invented a frequency-sensitive compression technique that preserves the more important model parameters with higher fidelity. [4]
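A rough sketch of the frequency-sensitive idea, under stated assumptions: a convolutional filter is moved to the frequency domain, its low-frequency coefficients (which carry most of the filter's energy) are preserved exactly, and the rest are stored coarsely. The published method hashes frequency components into shared buckets; the quantization below is only a stand-in for that preferential treatment, and all parameters are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn


def frequency_sensitive_compress(filt, keep_low=0.25, coarse_levels=8):
    """Compress a conv filter in the frequency domain: keep the
    low-frequency DCT coefficients at full precision and coarsely
    quantize the high-frequency ones. Sketch of the 'preserve the more
    important parameters better' idea only."""
    coeffs = dctn(filt, norm="ortho")
    k = max(1, int(filt.shape[0] * keep_low))
    low = coeffs[:k, :k].copy()                  # preserved exactly
    hi_max = np.abs(coeffs).max() + 1e-12
    # coarse quantization for the remaining (less important) coefficients
    quantized = np.round(coeffs / hi_max * coarse_levels) / coarse_levels * hi_max
    quantized[:k, :k] = low
    return idctn(quantized, norm="ortho")


filt = np.random.randn(7, 7)
approx = frequency_sensitive_compress(filt)
print(np.linalg.norm(filt - approx) / np.linalg.norm(filt))  # small relative error
```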
Chen and his students proposed DGCNN, one of the first graph convolution techniques that can learn a meaningful tensor representation from arbitrary graphs, and showed its deep connection to the Weisfeiler-Lehman algorithm. [5] They applied GNNs to link prediction (in the well-known SEAL algorithm) and matrix completion, achieving state-of-the-art results. [6]
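A minimal sketch of one DGCNN-style propagation step, Z = tanh(D⁻¹(A + I)XW): each node averages its own and its neighbors' features and applies a learned linear map. Stacking such layers and sorting the resulting node features (SortPooling) is what yields a fixed-size tensor for arbitrary graphs; the toy graph and dimensions below are illustrative.

```python
import numpy as np


def dgcnn_conv(A, X, W):
    """One graph-convolution step in the style of DGCNN:
    Z = tanh(D^-1 (A + I) X W). Minimal illustrative sketch."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # inverse degree matrix of A + I
    return np.tanh(D_inv @ A_hat @ X @ W)


# Toy 4-node path graph with 3-d node features and 2 output channels.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
Z = dgcnn_conv(A, X, W)
print(Z.shape)                                 # (4, 2)
```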
For time series classification, Chen advocated the multi-scale convolutional neural network (MCNN), citing its computational efficiency. He showed that MCNN extracts features at varying frequencies and scales by leveraging GPU computing, unlike other frameworks that can only extract features at a single time scale. [7]
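A small sketch of the multi-scale input transformations that an MCNN-style model feeds to its parallel convolutional branches: the raw series at several down-sampling rates (multi-scale) and a moving-average copy (multi-frequency). The choice of scales and smoothing window below is an illustrative assumption, not the published configuration.

```python
import numpy as np


def multi_scale_views(series, scales=(1, 2, 4), smooth=5):
    """Build multi-branch inputs for an MCNN-style model: down-sampled
    copies of the series at several rates plus a moving-average copy.
    Each view would then feed its own 1-D convolution before the
    branches are concatenated. Sketch of the input transformations only."""
    views = [series[::s] for s in scales]                     # coarser time scales
    kernel = np.ones(smooth) / smooth
    views.append(np.convolve(series, kernel, mode="valid"))   # low-pass (frequency) view
    return views


t = np.linspace(0, 4 * np.pi, 128)
series = np.sin(t) + 0.1 * np.random.randn(128)
for v in multi_scale_views(series):
    print(v.shape)                                            # (128,) (64,) (32,) (124,)
```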