Wang's research has centered on connecting language, vision, and the empirical study of generative intelligence.[2] He is a recipient of the IEEE Signal Processing Society's Pierre-Simon Laplace Award[3] and the British Computer Society's Karen Spärck Jones Award.[4]
From 2013 to 2016, Wang was a research fellow at Carnegie Mellon University. In 2016, he joined the University of California, Santa Barbara as an assistant professor, becoming an associate professor in 2021 and a professor in 2023.[6] In 2019, he was appointed the Duncan and Suzanne Mellichamp Chair in Artificial Intelligence and Designs.[2] He was a visiting academic at Amazon Web Services in 2022.[7]
Wang is the founder and CEO of Alpha Design AI and serves as director of the UCSB Responsible Machine Learning Center,[8] the UCSB NLP Group,[9] and the UCSB Mind and Machine Intelligence Initiative.[10]
Research
Wang's research interests have focused on machine learning (ML), natural language processing (NLP), and artificial intelligence (AI), with particular attention to reasoning methods and generative models. In 2017, he presented LIAR, a benchmark dataset for fake news detection, and found that a CNN model combining statement text with meta-data outperformed deep learning models that used text alone.[11] Together with Xiong and Hoang, he developed DeepPath, a reinforcement learning approach that uses accuracy-aware reward functions for multi-hop reasoning over knowledge graphs.[12] In work on vision-language navigation, he introduced Reinforced Cross-Modal Matching (RCM).[13]
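The idea of combining statement text with meta-data can be illustrated with a minimal sketch. The PyTorch example below is not the published LIAR architecture; the class name, dimensions, and metadata handling are illustrative assumptions showing the general approach of pooling convolutional text features and concatenating them with embedded meta-data (e.g. speaker or party) before classifying a statement into one of LIAR's six truthfulness labels.

```python
import torch
import torch.nn as nn

class HybridTextMetaCNN(nn.Module):
    """Illustrative sketch (not Wang's exact model): a CNN text encoder whose
    pooled features are concatenated with embedded categorical meta-data
    before classification, in the spirit of the hybrid model described
    for the LIAR benchmark."""

    def __init__(self, vocab_size=5000, meta_vocab_size=50,
                 embed_dim=100, num_filters=128, num_classes=6):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=3, padding=1)
        self.meta_embed = nn.Embedding(meta_vocab_size, embed_dim, padding_idx=0)
        self.classifier = nn.Linear(num_filters + embed_dim, num_classes)

    def forward(self, tokens, meta_ids):
        # tokens: (batch, seq_len) word ids; meta_ids: (batch, n_meta) categorical ids
        x = self.text_embed(tokens).transpose(1, 2)      # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values   # max-pool over time
        m = self.meta_embed(meta_ids).mean(dim=1)        # average the meta-data embeddings
        return self.classifier(torch.cat([x, m], dim=1)) # logits over 6 truthfulness labels

# Toy usage with random ids (hypothetical shapes, for illustration only)
model = HybridTextMetaCNN()
tokens = torch.randint(1, 5000, (4, 30))
meta_ids = torch.randint(1, 50, (4, 5))
print(model(tokens, meta_ids).shape)  # torch.Size([4, 6])
```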
Wang also conducted a survey of NLP techniques for fake news detection and suggested ways for platforms to counter false information.[14]
Selected publications
Wang, William Yang (2017). ""Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection". arXiv:1705.00648 [cs.CL].
Xiong, Wenhan; Hoang, Thien; Wang, William Yang (2017). "DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning". arXiv:1707.06690 [cs.CL].
Wang, X.; Huang, Q.; Celikyilmaz, A.; Gao, J.; Shen, D.; Wang, Y. F.; Zhang, L. (2019). "Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation". 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 6629–6638. arXiv:1811.10092. doi:10.1109/CVPR.2019.00679. ISBN 978-1-7281-3293-8.
Sun, Tony; Gaut, Andrew; Tang, Shirlyn; Huang, Yuxin; ElSherief, Mai; Zhao, Jieyu; Mirza, Diba; Belding, Elizabeth; Chang, Kai-Wei; Wang, William Yang (2019). "Mitigating Gender Bias in Natural Language Processing: Literature Review". arXiv:1906.08976 [cs.CL].
Wang, X.; Wu, J.; Chen, J.; Lei, L.; Wang, Y. F.; Wang, W. Y. (2019). "VaTeX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research". 2019 IEEE/CVF International Conference on Computer Vision (ICCV). pp. 4581–4591. arXiv:1904.03493. doi:10.1109/ICCV.2019.00468. ISBN 978-1-7281-4803-8.
References
↑ "William Wang". University of California, Santa Barbara. Retrieved April 30, 2025.
1 2 "William Wang". University of California, Santa Barbara. Retrieved May 12, 2025.
↑ "Directory". UCSB Mind and Machine Intelligence Initiative. Retrieved April 30, 2025.
↑ Upadhayay, Bibek; Behzadan, Vahid (9 November 2020). "Sentimental LIAR: Extended Corpus and Deep Learning Models for Fake Claim Classification". 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). pp.1–6. arXiv:2009.01047. doi:10.1109/ISI49825.2020.9280528. ISBN978-1-7281-8800-3.
↑ Das, Rajarshi; Dhuliawala, Shehzaad; Zaheer, Manzil; Vilnis, Luke; Durugkar, Ishan; Krishnamurthy, Akshay; Smola, Alex; McCallum, Andrew (2018). "Go for a Walk and Arrive at the Answer: Reasoning Over Paths in Knowledge Bases using Reinforcement Learning". ICLR.
↑ Wu, Siying; Fu, Xueyang; Wu, Feng; Zha, Zheng-Jun (10 October 2022). "Cross-modal Semantic Alignment Pre-training for Vision-and-Language Navigation". Proceedings of the 30th ACM International Conference on Multimedia. pp.4233–4241. doi:10.1145/3503161.3548283. ISBN978-1-4503-9203-7.