| Stephen Muggleton | |
|---|---|
| Born | 6 December 1959 |
| Alma mater | University of Edinburgh |
| **Scientific career** | |
| Thesis | *Inductive acquisition of expert knowledge* (1987) |
| Doctoral advisor | Donald Michie [3] |
| Website | www |
Stephen H. Muggleton FBCS, FIET, FAAAI, [4] FECCAI, FSB, FREng [5] (born 6 December 1959, son of Louis Muggleton) is Professor of Machine Learning and Head of the Computational Bioinformatics Laboratory at Imperial College London. [2] [6] [7] [8] [9] [10] [11]
Muggleton received his Bachelor of Science degree in computer science (1982) and his Doctor of Philosophy in artificial intelligence (1986), supervised by Donald Michie, at the University of Edinburgh. [12]
Following his PhD, Muggleton worked as a postdoctoral research associate at the Turing Institute in Glasgow (1987–1991) and later as an EPSRC Advanced Research Fellow at the Oxford University Computing Laboratory (OUCL) (1992–1997), where he founded the Machine Learning Group. [13] In 1997 he moved to the University of York and in 2001 to Imperial College London.
Muggleton's research interests [7] [14] are primarily in artificial intelligence. From 1997 to 2001 he held the Chair of Machine Learning at the University of York [15] and from 2001 to 2006 the EPSRC Chair of Computational Bioinformatics at Imperial College London. Since 2013 he has held the Syngenta/Royal Academy of Engineering Research Chair [16] as well as the post of Director of Modelling for the Imperial College Centre for Integrated Systems Biology. [16] He is known for founding the field of inductive logic programming, [17] [18] [19] [20] [21] to which he has made theoretical contributions including predicate invention, inverse entailment and stochastic logic programs. He has also played a leading role in systems development, being instrumental in the systems Duce, Cigol, Golem, [22] Progol and Metagol, [23] and in applications, especially biological prediction tasks.
Together with Ross D. King, [24] he worked on a Robot Scientist that combines inductive logic programming with active learning. [25] His present work concentrates on the development of meta-interpretive learning (MIL), [23] a new form of inductive logic programming which supports predicate invention and the learning of recursive programs.
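Meta-interpretive learning is built around a Prolog meta-interpreter. As a minimal, hedged sketch (the predicate name solve/1 and the metarule shown in the comments are illustrative, not Metagol's actual code), the vanilla meta-interpreter below proves a goal from the clauses of the loaded program; MIL systems such as Metagol modify this proof procedure so that a missing clause may be hypothesised as an instance of a higher-order metarule, which is where predicate invention and recursive definitions arise.

```prolog
% Vanilla Prolog meta-interpreter: proves Goal using the clauses of the
% loaded program.  (Some Prolog systems require the interpreted
% predicates to be declared dynamic for clause/2 to inspect them.)
solve(true) :- !.
solve((A, B)) :- !, solve(A), solve(B).
solve(Goal) :-
    clause(Goal, Body),   % pick a program clause  Goal :- Body
    solve(Body).          % and prove its body recursively

% MIL replaces the clause/2 lookup with a step that may also *construct*
% a new clause instantiating a metarule such as the chain rule
%   P(A, B) :- Q(A, C), R(C, B)
% binding P, Q and R to existing or newly invented predicate symbols.
```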
Logic programming is a programming, database and knowledge-representation and reasoning paradigm which is based on formal logic. A program, database or knowledge base in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, Answer Set Programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses `A :- B1, ..., Bn` and are read declaratively as logical implications: "A if B1 and ... and Bn".
Prolog is a logic programming language that has its origins in artificial intelligence and computational linguistics.
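For illustration (a minimal sketch with invented family facts), a small Prolog program consists of clauses of exactly this form; facts are clauses with an empty body, and rules may be recursive:

```prolog
% Facts: clauses with an empty body.
parent(ann, mary).
parent(mary, bob).

% Rules: ancestor/2 is defined recursively.
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

% Query:  ?- ancestor(ann, Who).   yields  Who = mary ; Who = bob.
```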
Inductive logic programming (ILP) is a subfield of symbolic artificial intelligence which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. The term "inductive" here refers to philosophical rather than mathematical induction. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples.
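As a hedged, minimal illustration (the family data are invented for this example and are not taken from any particular ILP system), the ILP setting can be written directly in Prolog: background knowledge B and labelled examples are given, and the system searches for a hypothesis H such that B together with H entails every positive and no negative example.

```prolog
% Background knowledge B.
parent(ann, mary).    parent(mary, bob).
parent(tom, eve).     parent(eve, sam).

% Positive examples E+ (to be entailed by B together with H):
%   grandparent(ann, bob).    grandparent(tom, sam).
% Negative examples E- (must not be entailed):
%   grandparent(mary, ann).   grandparent(bob, eve).

% A hypothesis H satisfying both conditions, of the kind an ILP system
% would be expected to induce from these inputs:
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
```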
Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that could correctly evaluate every statement in Peano arithmetic.
Solomonoff's theory of inductive inference is a mathematical theory of induction introduced by Ray Solomonoff, based on probability theory and theoretical computer science. In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.
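In one standard formulation (stated here as a sketch, where U is a fixed universal prefix machine, ℓ(p) is the length of program p in bits, and U(p) = x∗ means that p outputs a string beginning with x), the universal prior of a string and the resulting sequence prediction are:

```latex
M(x) \;=\; \sum_{p \,:\, U(p)=x\ast} 2^{-\ell(p)},
\qquad
M(x_{n+1} \mid x_1 \dots x_n) \;=\; \frac{M(x_1 \dots x_n\, x_{n+1})}{M(x_1 \dots x_n)} .
```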
Golem is an inductive logic programming algorithm developed by Stephen Muggleton and Cao Feng in 1990. It uses the technique of relative least general generalisation proposed by Gordon Plotkin, leading to a bottom-up search through the subsumption lattice. In 1992, shortly after its introduction, Golem was considered the only inductive logic programming system capable of scaling to tens of thousands of examples.
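As a minimal sketch of the underlying operation (the predicate names lgg/3, lgg/5 and lgg_args/5 are illustrative and not Golem's actual code), Plotkin's least general generalisation of two terms keeps the structure they share and replaces each pair of differing subterms by a variable, reusing the same variable whenever the same pair recurs; Golem applies this relative to the background knowledge and moves bottom-up through the resulting subsumption lattice.

```prolog
% Plotkin's least general generalisation (lgg) of two terms: a sketch,
% intended for ground terms.  Each pair of differing subterms is mapped
% to a variable, and the same pair is always mapped to the same variable
% (recorded in the accumulator list S).
lgg(T1, T2, G) :- lgg(T1, T2, G, [], _).

lgg(T1, T2, G, S0, S) :-
    (   T1 == T2
    ->  G = T1, S = S0
    ;   nonvar(T1), nonvar(T2),
        T1 =.. [F|Args1], T2 =.. [F|Args2],
        length(Args1, N), length(Args2, N)
    ->  lgg_args(Args1, Args2, Args, S0, S),
        G =.. [F|Args]
    ;   member(pair(T1, T2)-V, S0)      % pair seen before: reuse its variable
    ->  G = V, S = S0
    ;   S = [pair(T1, T2)-G | S0]       % new pair: G is left as a fresh variable
    ).

lgg_args([], [], [], S, S).
lgg_args([A|As], [B|Bs], [G|Gs], S0, S) :-
    lgg(A, B, G, S0, S1),
    lgg_args(As, Bs, Gs, S1, S).

% Examples:  ?- lgg(parent(ann, mary), parent(tom, bob), G).   G = parent(_A, _B).
%            ?- lgg(f(a, a), f(b, b), G).                      G = f(_A, _A).
```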
Machine Learning is a peer-reviewed scientific journal, published since 1986.
Alan Richard Bundy is a professor at the School of Informatics at the University of Edinburgh, known for his contributions to automated reasoning, especially to proof planning, the use of meta-level reasoning to guide proof search.
Ian Robert Horrocks is a professor of computer science at the University of Oxford in the UK and a Fellow of Oriel College, Oxford. His research focuses on knowledge representation and reasoning, particularly ontology languages, description logic and optimised tableaux decision procedures.
Progol is an implementation of inductive logic programming that combines inverse entailment with general-to-specific search through a refinement graph.
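The idea of inverse entailment (sketched here in its standard form) is that the requirement that background knowledge B together with a hypothesis H entails an example E can be turned around by contraposition:

```latex
B \wedge H \models E
\quad\Longleftrightarrow\quad
B \wedge \neg E \models \neg H .
```

From the ground literals entailed by B ∧ ¬E, Progol constructs a most specific "bottom clause" ⊥(E), and its general-to-specific refinement search is restricted to clauses that θ-subsume ⊥(E).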
Peter William O'Hearn, formerly a research scientist at Meta, is a Distinguished Engineer at Lacework and a Professor of Computer Science at University College London (UCL). He has made significant contributions to formal methods for program correctness. In recent years these advances have been employed in industrial software tools that perform automated analysis of large codebases.
Ross Donald King is a Professor of Machine Intelligence at Chalmers University of Technology.
The Turing Institute was an artificial intelligence laboratory in Glasgow, Scotland, between 1983 and 1994. The company undertook basic and applied research, working directly with large companies across Europe, the United States and Japan, developing software as well as providing training, consultancy and information services.
Inductive programming (IP) is a special area of automatic programming, covering research from artificial intelligence and programming, which addresses learning of typically declarative and often recursive programs from incomplete specifications, such as input/output examples or constraints.
Andrei Anatolievič Voronkov is a Professor of Formal methods in the Department of Computer Science at the University of Manchester.
Thomas G. Dietterich is an emeritus professor of computer science at Oregon State University. He is one of the pioneers of the field of machine learning. He served as executive editor of the journal Machine Learning (1992–98) and helped co-found the Journal of Machine Learning Research. In response to the media's attention on the dangers of artificial intelligence, Dietterich has been quoted, giving an academic perspective, by a broad range of media outlets including National Public Radio, Business Insider, Microsoft Research, CNET, and The Wall Street Journal.
Kristian Kersting is a German computer scientist. He is Professor of Artificial Intelligence and Machine Learning in the Department of Computer Science at the Technische Universität Darmstadt, Head of the Artificial Intelligence and Machine Learning Lab (AIML) and Co-Director of hessian.AI, the Hessian Center for Artificial Intelligence.
Javier Andreu-Perez is a British computer scientist and a Senior Lecturer and Chair in Smart Health Technologies at the University of Essex. He is also associate editor-in-chief of Neurocomputing for the areas of deep learning and machine learning. Andreu-Perez's research is mainly focused on human-centred artificial intelligence (HCAI). He also chairs an interdisciplinary lab in this area, HCAI-Essex.
Deepak Kapur is a Distinguished Professor in the Department of Computer Science at the University of New Mexico.
Alessandra Russo is a professor in Applied Computational Logic at the Department of Computing, Imperial College London.