| Bruce Martin McLaren | |
|---|---|
| Born | October 28, 1959, Pittsburgh, Pennsylvania, United States |
| Education | B.S., Computer Science; M.S., Computer Science; M.S., Intelligent Systems; Ph.D., Intelligent Systems |
| Alma mater | Millersville University of Pennsylvania; University of Pittsburgh |
| Occupation(s) | Researcher, scientist, and author |
| Children | 2 |
| Academic career | |
| Institutions | Carnegie Mellon University; German Research Center for Artificial Intelligence; Saarland University |
| Main interests | Artificial intelligence, educational technology, digital learning games, machine ethics |
| Doctoral advisor | Kevin D. Ashley |
| Website | http://www.cs.cmu.edu/~bmclaren/ |
Bruce Martin McLaren (born October 28, 1959, in Pittsburgh, Pennsylvania) is an American researcher, scientist, and author. He is a professor in the Human-Computer Interaction Institute at Carnegie Mellon University, [1] head of the McLearn Lab, [2] and a former president of the International Artificial Intelligence in Education Society (2017–2019). [3]
McLaren's research explores how students learn with digital learning games (also called educational games), intelligent tutoring systems, e-learning principles, and collaborative learning. With Vincent Aleven, he co-founded Mathtutor, [4] a free website offering intelligent tutoring systems for middle-school mathematics. He has written or co-written over 200 academic articles, [5] ranks in the top 0.5% of all scholars worldwide in artificial intelligence (by publication record and quality of scholarly contributions, according to ScholarGPS [6] ), and holds five patents. [7]
McLaren received a B.S. in Computer Science (cum laude) from Millersville University of Pennsylvania in 1981. He later attended the University of Pittsburgh, where he received an M.S. in Computer Science in 1984 and an M.S. in Intelligent Systems in 1994. In 1999, McLaren received a Ph.D. in Intelligent Systems from the University of Pittsburgh. [8] His Ph.D. thesis was entitled "Assessing the Relevance of Cases and Principles Using Operationalization Techniques". [9] His doctoral advisor was Kevin Ashley. McLaren published a paper based on his Ph.D. in the Artificial Intelligence Journal. [10]
McLaren began his career as a software engineer at General Electric. After completing his first M.S., he joined the Robotics Institute at Carnegie Mellon University as a research programmer and then project supervisor in the Intelligent Systems Laboratory. In 1986 he joined Carnegie Group, an AI and expert-systems company, as a senior consultant responsible for the company's expert-systems projects in Europe. He later worked as a senior engineer and project manager at Carnegie Group in the United States until 1998. After completing his Ph.D. in 1999, McLaren joined OpenWebs Corporation, first as Director of Research and Development and then as Director of eCommerce Technologies. In 2002, he left OpenWebs to join Carnegie Mellon University (CMU) as a systems scientist, and in 2015 he became an Associate Research Professor at CMU. [11]
From 2006 to 2010, he worked as a visiting senior researcher at the German Research Center for Artificial Intelligence in Saarbrücken, Germany, where he did research on collaborative learning, argumentation and technology for analyzing collaborative argumentation. On both the ARGUNAUT and LASAD projects, his research was focused on developing educational technology, using AI techniques, to help teachers moderate collaborative e-Discussions and arguments. [12] [13]
McLaren was elected to the Executive Committee of the International Artificial Intelligence in Education Society for a six-year term in 2011, and from 2017 to 2019 he served as the society's president. [14] During his tenure as president, he instituted annual (rather than biennial) society conferences, started the biennial Lifetime Achievement Awards, [15] and worked toward greater diversity in the society with respect to gender, race, and geography. As president, McLaren was quoted in a 2019 PBS article about AI in the classroom. [16] In 2021, he was again elected to the society's Executive Committee.
McLaren has given keynote talks at a variety of educational technology conferences, including the 11th International Conference on e-Learning and e-Teaching (ICeLeT 2024) in Isfahan, Iran, [17] the 2021 IEEE International Conference on Engineering, Technology, and Education (TALE 2021) in Wuhan, China, [18] the Australian Learning Analytics Summer Institute in 2019 (ALASI 2019), [19] e-Learning Korea 2018, [20] and the 24th International Conference on Computers in Education in 2016 in Mumbai, India. [21]
McLaren is a faculty member in Carnegie Mellon University’s METALS (Masters of Educational Technology and Applied Learning Sciences) [22] program and has taught the METALS capstone course since 2016. [23]
McLaren's research focuses on three areas of educational technology: learning with digital learning games; learning to argue and reason through computer-mediated collaborative learning; and learning with interactive worked and erroneous examples. He has also done fundamental research on how ethical reasoning can be implemented with artificial intelligence techniques, an area sometimes referred to as "machine ethics".
Collaborating with Professor Jodi Forlizzi, McLaren developed a digital learning game called Decimal Point to teach decimal fractions and decimal operations to middle-school students. [24] In 2017, they conducted a study involving 153 students from two middle schools: 70 students learned about decimals by playing Decimal Point, while 83 learned the same content through a more conventional, computer-based approach. The game led to significantly better learning gains on both an immediate and a delayed posttest, and the students rated it as significantly more enjoyable. [25] Several later replications of the study produced the same results and also revealed that the game is more effective in teaching female students than male students. [26]
More recently, McLaren and his team have explored a variety of issues related to digital learning games, including student agency, [27] [28] gender effects, [29] game-based educational data mining, [30] [31] and the impact of feedback and hints on student learning. [32] McLaren’s team has run studies in many middle schools in the local Pittsburgh area with these new research questions. A forthcoming book chapter describes the many studies run with the Decimal Point learning game between 2014 and 2023. [33]
In 2023, McLaren and his Ph.D. student Huy Nguyen authored a chapter for the Handbook on AI in Education on how artificial intelligence has been used in digital learning games. [34] McLaren's lab has also explored using a large language model (ChatGPT) to respond to prompted self-explanations in the context of Decimal Point. [35]
Since 2005, McLaren has done research on computer-supported collaborative learning (CSCL) and how technology can be leveraged to support constructivist learning. His initial work in collaborative learning involved the semi-automated development of intelligent tutors to support collaborative learning, [36] learning of algebra through scripted dyad collaboration with Cognitive Tutors, [37] and the learning of chemistry through scripted dyad collaboration with a virtual laboratory. This research supported the claim that collaborative learning can be improved with guidance, either explicit direction on steps to take or feedback on domain content, student actions, and/or collaboration. [38]
In collaboration with colleagues and his students, McLaren has developed software tools that combine AI and language-analysis techniques to analyze collaborative argumentation, or e-discussions, helping classroom teachers guide multiple discussions and, consequently, helping students learn argumentation skills. In a paper published in 2010, he and his students showed that software classifiers built with machine-learning techniques can identify key constructs in online collaborative arguments, which a teacher can then use to guide students in debating and learning with one another. [39]
McLaren and his team have focused on developing analysis and feedback techniques that leverage the structure, order, and textual contributions of arguments, giving the teacher information to guide and advise collaborating groups. McLaren and colleagues used graph matching, machine learning, and language-processing techniques to analyze e-discussions from high-school ethics and university education classrooms. His team developed an algorithm called DOCE (Detection Of Clusters by Example) that, given labeled example clusters, can identify similar clusters of student contributions in new discussions. [40] Both DOCE and the combined machine-learning/text-mining approach are used in the ARGUNAUT system to provide "alerts" so that a teacher can, at a glance, see and react to problems in the e-discussions. [41]
McLaren's web-based argumentation workspace and its analysis techniques were later made widely available to students and other researchers through another project, LASAD (Learning to Argue: Generalized Support Across Domains), [43] for which he was a principal investigator along with Niels Pinkwart. [42]
McLaren's research has also explored how worked examples, both correct and incorrect, can be used to help students learn. In three separate but similar studies, he and his colleagues investigated whether examples studied in conjunction with tutored problems can lead to better learning. They found that worked examples alternating with isomorphic tutored problems did not produce greater learning gains than tutored problems alone; however, across the three studies the examples group learned more efficiently, spending 21% less time to learn the same amount of material. [44]
McLaren is among the first educational technology researchers to extensively investigate the learning potential of interactive erroneous examples. [45] In the early 2010s, he participated in several research projects exploring their instructional benefits. His classroom studies with middle-school math students revealed that students who worked with erroneous examples to learn decimals performed better on a delayed posttest than those who worked with problems to solve. [46] He and his colleagues later showed that, in the domain of chemistry, correct worked examples can lead to as much learning as erroneous examples, intelligently tutored problems, and problems to solve, but in significantly less time. [47]
McLaren has also collaborated with Professor Ryan Baker, an expert in educational data mining, and other colleagues on analyzing the affective states of students as they learn from erroneous examples. [48]
As part of his dissertation research, McLaren built a computational model of ethical reasoning: a program that uses AI and case-based reasoning techniques to retrieve and analyze ethical dilemmas. McLaren is thus recognized as one of the first researchers to contribute to the field of machine ethics and, according to Google Scholar, is the second most cited researcher in this field. [49] The journal paper McLaren published about his Ph.D. work [50] is often cited within this research community. He also wrote a journal article describing both his dissertation research and his earlier work on an ethical reasoning system called TRUTH-TELLER. [51] [52] A 2016 CNN article, in which McLaren is quoted, discusses machine ethics and robotics. [53]
McLaren's parents are Thomas James McLaren, a Presbyterian minister who died in 2012, and Shirley Martin McLaren, a former high-school English teacher. McLaren was married to Gabriele McLaren (née Huber) from 1990 until their divorce in 2013. He is an avid outdoorsman and hiker; he hiked the entire Appalachian Trail in 1989. [54]