Vasant Dhar | |
---|---|
Nationality | Indian |
Alma mater | The Lawrence School, Sanawar; Indian Institute of Technology Delhi; University of Pittsburgh |
Scientific career | |
Fields | Data science; Information systems; Machine learning; Artificial intelligence; Big data; Finance |
Institutions | New York University |
Vasant Dhar is a professor at the Stern School of Business and the Center for Data Science at New York University,[1] former editor-in-chief of the journal Big Data,[2] and the founder of SCT Capital, one of the first machine-learning-based hedge funds in New York City in the 1990s. His research focuses on building scalable decision-making systems from large sources of data using techniques and principles from artificial intelligence and machine learning.
Dhar is a graduate of The Lawrence School, Sanawar, which he describes as one of the best gifts his parents gave him without realizing it. He graduated from the Indian Institute of Technology Delhi in 1978 with a B.Tech. in chemical engineering. He then attended the University of Pittsburgh, where he received an M.Phil. and, in 1984, a Ph.D. After earning his doctorate, he joined the faculty at New York University. Between 1994 and 1997 he worked at Morgan Stanley, where he created the Data Mining Group, which focused on predicting financial markets and customer behavior.
Dhar is an artificial intelligence researcher and data scientist whose research addresses the question of when we should trust AI systems with decision-making. The question is particularly relevant to current autonomous machine-learning-based systems that learn and adapt from ongoing data. His research has been motivated by building predictive models in a number of domains, most notably finance, as well as healthcare, sports, education, and business, asking why we are willing to trust machines in some areas and not others. His view is that there is a discontinuity when we give complete decision-making control to a machine that learns from ongoing data. This discontinuity introduces risks, specifically around the errors such systems make, which directly affect our degree of trust in them.
Dhar's research breaks down trust along two risk-based dimensions: predictability, or how frequently a system makes mistakes (X-axis), and the cost of those mistakes (Y-axis). The research demonstrates the existence of a "frontier" that expresses a trade-off between how often a system will be wrong and the consequences of its mistakes. Trust, and hence our willingness to cede decision-making control to the machine, increases with greater predictability and lower error costs. In other words, we are willing to trust machines if they do not make too many mistakes and the costs of those mistakes are tolerable. As mistakes become more frequent, we require that their consequences be less costly.
The automation frontier provides a natural way to think about the future of work. With more and better data and algorithms, parts of existing processes become predictable enough to cross the frontier into the "trust the machine" zone and are automated, whereas the parts with high error costs remain under human control. The model provides a way to think about the changing responsibilities of humans and machines as data and algorithms come to outperform humans at more decisions.
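A minimal sketch of this two-dimensional framework, assuming a simple expected-cost frontier and invented example numbers purely for illustration (Dhar's published frontier is not specified as a formula in the text):

```python
# Illustrative sketch of the automation frontier: a decision is ceded to the
# machine only if its position in the (error frequency, error cost) plane
# falls inside the "trust the machine" zone. The frontier shape (a constant
# expected-cost budget) and all numbers are assumptions for illustration.

def trust_machine(error_rate, error_cost, budget=0.05):
    """Return True if the expected cost of error is tolerable.

    error_rate: fraction of decisions the system gets wrong (X-axis)
    error_cost: cost of a single mistake, normalized to [0, 1] (Y-axis)
    budget:     maximum tolerable expected cost per decision (assumed)
    """
    return error_rate * error_cost <= budget

# hypothetical decision domains: (error_rate, error_cost)
decisions = {
    "movie recommendation": (0.30, 0.01),  # frequent but cheap mistakes
    "loan approval":        (0.10, 0.30),  # rarer, moderately costly
    "medical diagnosis":    (0.10, 0.90),  # equally rare, far more costly
}

for name, (rate, cost) in decisions.items():
    zone = "trust the machine" if trust_machine(rate, cost) else "keep human control"
    print(f"{name}: {zone}")
```

As the per-mistake cost rises, the tolerable error rate shrinks, which is exactly the trade-off the frontier expresses.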
Dhar also applies the framework to policy issues around the risks of AI-based social media platforms and the privacy, ethics, and governance of data. He writes regularly in the media on artificial intelligence, societal risks of AI platforms, data governance, privacy, ethics, and trust, and is a frequent speaker in academic and industry forums.
Dhar teaches courses on systematic investing, prediction, data science, and the foundations of FinTech. He has written over 100 research articles, supported by grants from industry and from government agencies such as the National Science Foundation.
Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. "AI" may also refer to the machines themselves.
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.
An artificial general intelligence (AGI) is a hypothetical type of intelligent agent. If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI, DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.
An AI takeover is a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are very popular throughout science fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
Predictive analytics is a form of business analytics applying machine learning to generate a predictive model for certain business applications. As such, it encompasses a variety of statistical techniques from predictive modeling and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events. It represents a major subset of machine learning applications; in some contexts, it is synonymous with machine learning.
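As a minimal sketch of the idea (the churn scenario, feature names, and choice of scikit-learn are illustrative assumptions, not drawn from the text), a model fit on historical records can score an otherwise unknown future case:

```python
# Fit a predictive model on historical facts, then predict a future event
# (customer churn). All data below is synthetic and purely illustrative.
from sklearn.linear_model import LogisticRegression

# historical records: [tenure_months, monthly_spend] -> churned within a year?
X_hist = [[2, 80], [30, 40], [5, 95], [48, 35], [3, 70], [36, 50]]
y_hist = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X_hist, y_hist)

# estimate the probability of churn for a current customer
print(model.predict_proba([[4, 85]])[0][1])
```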
The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics.
Machine ethics is the part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors in man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers, and it should be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.
The fields of marketing and artificial intelligence converge in systems which assist in areas such as market forecasting, and automation of processes and decision making, along with increased efficiency of tasks which would usually be performed by humans. The science behind these systems can be explained through neural networks and expert systems, computer programs that process input and provide valuable output for marketers.
Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.
Artificial empathy or computational empathy is the development of AI systems—such as companion robots or virtual agents—that can detect emotions and respond to them in an empathic way.
Artificial intelligence in healthcare is an overarching term used to describe the use of machine-learning algorithms and software, or artificial intelligence (AI), to mimic human cognition in the analysis, presentation, and comprehension of complex medical and health care data, or to exceed human capabilities by providing new ways to diagnose, treat, or prevent disease. Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data.
Industrial artificial intelligence, or industrial AI, usually refers to the application of artificial intelligence to industry. Unlike general artificial intelligence, a frontier research discipline aimed at building computerized systems that perform tasks requiring human intelligence, industrial AI is concerned with applying such technologies to address industrial pain points: customer value creation, productivity improvement, cost reduction, site optimization, predictive analysis, and insight discovery.
Explainable AI (XAI), also known as interpretable AI or explainable machine learning (XML), refers either to an AI system over which humans can retain intellectual oversight, or to the methods for achieving this. The main focus is on making the reasoning behind the AI's decisions or predictions more understandable and transparent. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.
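One simple route to such transparency, sketched below on invented loan data (the scenario and feature names are assumptions for illustration), is an inherently interpretable model whose per-feature weights expose its reasoning directly, rather than a post-hoc explainer:

```python
# Contrast with a "black box": a linear model's coefficients show how each
# input feature pushes the decision. Data and features are synthetic.
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "late_payments"]
X = [[55, 0.20, 0], [22, 0.60, 4], [40, 0.35, 1], [18, 0.70, 6]]
y = [1, 0, 1, 0]  # loan approved?

model = LogisticRegression().fit(X, y)

# each weight is a human-readable statement of how the model reasons
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: weight {weight:+.3f}")
```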
Marzyeh Ghassemi is a Canada-based researcher in the field of computational medicine, where her research focuses on developing machine-learning algorithms to inform health-care decisions. She is currently an assistant professor at the University of Toronto's Department of Computer Science and Faculty of Medicine, and is a Canada CIFAR Artificial Intelligence (AI) chair and Canada Research Chair in machine learning for health.
The Artificial Intelligence of Things (AIoT) is the combination of artificial intelligence (AI) technologies with Internet of Things (IoT) infrastructure to achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics.
Artificial intelligence agents sometimes misbehave due to faulty objective functions that fail to adequately encapsulate the programmers' intended goals. The misaligned objective function may look correct to the programmer, and may even perform well in a limited test environment, yet may still produce unanticipated and undesired results when deployed.
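A toy illustration of this failure mode (the recommender scenario and every number are hypothetical assumptions): an agent maximizes a proxy objective that agrees with the intended goal in a limited test environment but diverges once deployment offers an option the tests never contained:

```python
# The proxy objective (clicks) looks correct in testing, yet produces an
# unanticipated result when deployed. All options and scores are invented.

# (action, clicks, user_satisfaction) available in the limited test environment
test_env = [
    ("balanced_feed", 90, 0.80),
    ("news_feed",     70, 0.75),
]

# deployment adds an option the test environment never exercised
deployment_env = test_env + [
    ("outrage_feed", 250, 0.20),  # maximizes clicks, harms satisfaction
]

def proxy(option):     # what the agent is actually scored on
    return option[1]   # clicks

def intended(option):  # what the programmer really wanted
    return option[2]   # user satisfaction

for env_name, env in [("test", test_env), ("deployment", deployment_env)]:
    chosen = max(env, key=proxy)
    print(f"{env_name}: picks {chosen[0]!r} "
          f"(proxy={proxy(chosen)}, intended={intended(chosen)})")
```

In the test environment the proxy and the intended goal select the same action; in deployment the proxy selects the unanticipated one.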
The impact of artificial intelligence on workers includes both applications to improve worker safety and health, and potential hazards that must be controlled.
Automated decision-making (ADM) involves the use of data, machines, and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media, and entertainment, with varying degrees of human oversight or intervention. ADM draws on large-scale data from a range of sources, such as databases, text, social media, sensors, images, or speech, processed using technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence, and robotics. The increasing use of automated decision-making systems (ADMS) across these contexts presents many benefits and challenges to human society, requiring consideration of their technical, legal, ethical, societal, educational, economic, and health consequences.