Shai Ben-David

Shai Ben-David is an Israeli-Canadian computer scientist and professor at the University of Waterloo. He is known for his research in theoretical machine learning. [1]

Biography

Shai Ben-David grew up in Jerusalem, Israel, and received a Ph.D. in mathematics from the Hebrew University of Jerusalem, [2] where he was advised by Saharon Shelah. [3] [2] He held postdoctoral positions in mathematics and computer science at the University of Toronto. He was a professor of computer science at the Technion and held visiting positions at the Australian National University and Cornell University. [4]

He has been a professor of computer science at the University of Waterloo since 2004.

Selected publications and awards

Ben-David has written highly cited papers on learning theory and online algorithms. [5] [6] [7] [8] [9] He is a co-author, with Shai Shalev-Shwartz, of the book Understanding Machine Learning: From Theory to Algorithms (Cambridge University Press, 2014). [1]

He received the best paper award at NeurIPS 2018 for work on the sample complexity of distribution learning problems. [10] [11] He was President of the Association for Computational Learning from 2009 to 2011. [12]

Publications

Shalev-Shwartz, Shai; Ben-David, Shai (2014). Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press.

Ben-David, Shai; Blitzer, John; Crammer, Koby; Kulesza, Alex; Pereira, Fernando; Vaughan, Jennifer Wortman (2010). "A theory of learning from different domains". Machine Learning. 79: 151–175. Springer.

Ben-David, Shai; Blitzer, John; Crammer, Koby; Pereira, Fernando (2006). "Analysis of representations for domain adaptation". Advances in Neural Information Processing Systems. 19.

Kifer, Daniel; Ben-David, Shai; Gehrke, Johannes (2004). "Detecting change in data streams". Proceedings of the 30th International Conference on Very Large Data Bases (VLDB). 4.

References

  1. Shalev-Shwartz, Shai; Ben-David, Shai (2014). Understanding Machine Learning: From Theory to Algorithms. Cambridge: Cambridge University Press. ISBN 978-1-107-05713-5.
  2. "Shai Ben-David at the Mathematics Genealogy Project".
  3. "ACML 2018 Main/Speakers". www.acml-conf.org. Retrieved 2021-04-26.
  4. "Shai Ben-David | Simons Institute for the Theory of Computing". simons.berkeley.edu. Retrieved 2021-04-10.
  5. Ben-David, Shai; Blitzer, John; Crammer, Koby; Kulesza, Alex; Pereira, Fernando; Vaughan, Jennifer Wortman (2010-05-01). "A theory of learning from different domains". Machine Learning. 79 (1): 151–175. doi:10.1007/s10994-009-5152-4. ISSN 1573-0565.
  6. Schölkopf, Bernhard; Platt, John; Hofmann, Thomas (2007). Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference. MIT Press. ISBN 978-0-262-19568-3.
  7. VLDB (2004-10-08). Proceedings 2004 VLDB Conference: The 30th International Conference on Very Large Databases (VLDB). Elsevier. ISBN 978-0-08-053979-9.
  8. Ben-David, S.; Borodin, A.; Karp, R.; Tardos, G.; Wigderson, A. (1994-01-01). "On the power of randomization in on-line algorithms". Algorithmica. 11 (1): 2–14. doi:10.1007/BF01294260. ISSN 1432-0541. S2CID 26771869.
  9. Alon, Noga; Ben-David, Shai; Cesa-Bianchi, Nicolò; Haussler, David (1997-07-01). "Scale-sensitive dimensions, uniform convergence, and learnability". Journal of the ACM. 44 (4): 615–631. doi:10.1145/263867.263927. ISSN 0004-5411.
  10. "Professor Shai Ben-David and colleagues win best paper award at NeurIPS 2018". Cheriton School of Computer Science. 2018-12-03. Retrieved 2021-04-10.
  11. "Nearly Tight Sample Complexity Bounds for Learning Mixtures of Gaussians via Sample Compression Schemes" (PDF).
  12. "Shai Ben-David". CIFAR. Retrieved 2021-04-10.
  13. "Shai Ben-David". awards.acm.org. Retrieved 2024-01-26.