Percy Liang is an American computer scientist whose research focuses on machine learning, natural language processing, and foundation models. He is an Associate Professor of Computer Science at Stanford University and is the Director of the Center for Research on Foundation Models (CRFM). [1] [2]
| Percy Liang | |
|---|---|
| Occupation | Associate Professor of Computer Science |
| Employer | Stanford University |
| Title | Director |
| Academic background | |
| Education | Massachusetts Institute of Technology (BS) Massachusetts Institute of Technology (MEng) University of California, Berkeley (PhD) |
| Doctoral advisor | Michael I. Jordan, Dan Klein |
| Academic work | |
| Discipline | Computer science |
| Sub-discipline | Machine learning, Natural language processing, Foundation models |
| Institutions | Stanford University |
Liang received a Bachelor of Science degree in 2004 and a Master of Engineering degree in 2005 from the Massachusetts Institute of Technology. He received bronze and silver medals at the International Olympiad in Informatics (IOI). [3] He earned his Ph.D. in Computer Science from the University of California, Berkeley in 2011, where his doctoral advisors were Michael I. Jordan and Dan Klein. [4]
After completing his doctorate, Liang held a postdoctoral position at Google.[citation needed] He later joined the faculty at Stanford University, where he conducts research and teaches courses in artificial intelligence, machine learning, statistical learning theory, and language modeling.
Liang is known for his work on semantic parsing, weak and indirect supervision, robustness and generalization in machine learning, and the study of large-scale foundation models. [5] [6] He has also been an advocate for efficient and reproducible research, and is one of the developers of CodaLab Worksheets, a platform for managing computational experiments. [5] [7] [8]
Liang is the founding Director of the Center for Research on Foundation Models (CRFM) at Stanford. The center focuses on the development, evaluation, and governance of foundation models, including technical, social, and policy considerations. CRFM operates as an interdisciplinary research initiative within Stanford HAI. [1] [9]
Through CRFM, Liang has supported the development of open-source large language models. [10]
Liang has authored peer-reviewed publications in artificial intelligence and machine learning venues, including ACL, EMNLP, ICML, and COLT. His work has influenced both theoretical and applied research in natural language understanding and machine learning systems. [11] [12]
Liang has received awards for his research contributions,including the National Science Foundation CAREER Award, [13] the Presidential Early Career Award for Scientists and Engineers, [6] the IJCAI Computers and Thought Award, [14] and the Sloan Research Fellowship. [2]