Norman Fenton | |
---|---|
Born | 1956 (age 67–68) |
Nationality | British |
Thesis | Representation of matroids (1981) |
Doctoral advisor | Peter Vámos |
Website | www |
Norman E. Fenton (born 1956) is a British mathematician and computer scientist. He is the Professor of Risk Information Management in the School of Electronic Engineering and Computer Science at Queen Mary University of London. He is known for his work in software metrics and is the author of the textbook Software Metrics: A Rigorous Approach, which as of 2014 was in its third edition.
Fenton received his bachelor's degree in mathematics from the London School of Economics in 1978. He earned his Master of Science in 1978 and Doctor of Philosophy in 1981 at the University of Sheffield.[1] At Sheffield he was the second research student of Peter Vámos.[2] His doctoral thesis was "Representations of Matroids".[3]
Fenton was a postdoctoral fellow in the mathematics department at University College Dublin from 1981 to 1982 and at the Mathematics Institute of the University of Oxford from 1982 to 1984.[1][2] At the end of that period he changed fields[2] and began publishing papers on structured programming with Robin W. Whitty and Agnes A. Kaposi.[4][5] In 1984 he joined the department of Electrical and Electronic Engineering at South Bank Polytechnic in London, where he headed the Centre for Software and Systems Engineering research group.[1][2] He began to publish on software metrics as well as program structure.[6][7][8]
In 1989 Fenton moved to City University as a reader in software reliability, and became a professor of Computing Science in 1992.[1]
In 1998, Fenton, along with Martin Neil and Ed Tranham, set up the company Agena Ltd in Cambridge. Fenton was CEO between 1998 and 2015 and remains a director. In 2000, Fenton joined Queen Mary University of London (School of Electronic Engineering and Computer Science), where he works as a part-time professor. He is director of the Risk and Information Management Research Group.[9]