Callison-Burch was a part-time visiting researcher at Google in 2019 and 2020, where he collaborated on applying Google's large language models to Dungeons & Dragons dialogue.[20] In 2023, he took a sabbatical at the Allen Institute for AI (AI2), where he contributed to vision-language models.[21][22][23]
Media Bias Detector (2025): Real-time tool analysing selection and framing bias in news coverage, using LLMs to detect persuasive language differences (e.g., Russian vs. English Wikipedia).[34]
Holodeck (2024): Language-guided system for generating 3D embodied AI environments, presented at CVPR 2024.[35][36]
BORDIRLINES (2024): Dataset for cross-lingual retrieval-augmented generation, focusing on culturally sensitive tasks.[37][38]
He has co-authored over 200 publications, presented at conferences such as ACL, EMNLP, and CVPR.[3][5][26][22]
Awards and recognition
Callison-Burch has received numerous awards:
Best Paper Honourable Mention at CVPR 2025 for "Molmo and PixMo".[27][29]
Best Paper Award at the Workshop on Cognitive Modelling and Computational Linguistics (CMCL) 2024 for "Evaluating Vision-Language Models on Bistable Images".[39]
Best Paper Award at STARSEM 2016 for "So-Called Non-Subsective Adjectives".[40]
Best Paper Award at the Workshop on Sense, Concept and Entity Representations 2017 for "Word Sense Filtering Improves Embedding-Based Lexical Substitution".[41]
Honourable Mention Award at CHI 2018 for "A Data-Driven Analysis of Workers’ Earnings on Amazon Mechanical Turk".[42]
Google Faculty Research Award (2013) for crowdsourcing in NLP.[2][43]
On May 17, 2023, Callison-Burch testified before the U.S. House Subcommittee on Courts, Intellectual Property, and the Internet on AI and copyright law.[2][7] His testimony emphasised generative AI's role in creative industries and the need for balanced copyright frameworks.[7] He has appeared on Fox News to discuss AI's societal impact and has also discussed the technology with print news outlets.[45][46]
He contributes to AI ethics discussions, including workshops on AI’s effects on writing and creative professions.[47]
Deitke, Matt; Clark, Christopher; Lee, Sangho; Tripathi, Rohun; Yang, Yue; Park, Jae Sung; Salehi, Mohammadreza; Muennighoff, Niklas; Lo, Kyle (2024-12-05). "Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Vision-Language Models". arXiv:2409.17146 [cs.CV].
Wang, Jenny S.; Haider, Samar; Tohidi, Amir; Gupta, Anushkaa; Zhang, Yuxuan; Callison-Burch, Chris; Rothschild, David; Watts, Duncan J. (2025-04-28). "Media Bias Detector: Designing and Implementing a Tool for Real-Time Selection and Framing Bias Analysis in News Coverage". Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. pp. 1–27. arXiv:2502.06009. doi:10.1145/3706598.3713716. ISBN 979-8-4007-1394-1.