Learnability

Learnability is a quality of products and interfaces that allows users to quickly become familiar with them and to make effective use of their features and capabilities.

Software testing

In software testing learnability, according to ISO/IEC 9126, is the capability of a software product to enable the user to learn how to use it. Learnability may be considered as an aspect of usability, and is of major concern in the design of complex software applications.

Learnability is defined in the Standard glossary of terms used in software testing published by the International Software Testing Qualifications Board.

Computational learning theory

In computational learning theory, learnability is the mathematical analysis of machine learning. It is also employed in language acquisition in arguments within linguistics.

Frameworks include language identification in the limit, introduced by E. Mark Gold, [1] and probably approximately correct (PAC) learning, introduced by Leslie Valiant. [2]

Related Research Articles

Spaced repetition

Spaced repetition is an evidence-based learning technique that is usually performed with flashcards. Newly introduced and more difficult flashcards are shown more frequently, while older and less difficult flashcards are shown less frequently, in order to exploit the psychological spacing effect. The use of spaced repetition has been proven to increase the rate of learning.
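
A minimal sketch of a scheduler implementing this idea, assuming a simplified Leitner-style system (the boxes, intervals, and card fields are illustrative rather than taken from any particular implementation):

    from dataclasses import dataclass

    # Illustrative review intervals (in days) for each Leitner box; harder cards
    # stay in low boxes and are therefore shown more frequently.
    INTERVALS = [1, 2, 4, 8, 16]

    @dataclass
    class Card:
        prompt: str
        answer: str
        box: int = 0          # 0 = newest/hardest, reviewed most often
        due_in_days: int = 0  # days until the next scheduled review

    def review(card: Card, answered_correctly: bool) -> None:
        """Move the card between boxes and reschedule it based on the result."""
        if answered_correctly:
            card.box = min(card.box + 1, len(INTERVALS) - 1)  # promote: longer gap
        else:
            card.box = 0                                      # demote: review soon
        card.due_in_days = INTERVALS[card.box]

    card = Card("learnability", "ease of learning to use a product")
    review(card, answered_correctly=True)   # promoted to box 1, due in 2 days
    review(card, answered_correctly=False)  # demoted to box 0, due tomorrow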

Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs, and verifying that the software product is fit for use.
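
For instance, a unit test executes a function with concrete inputs and checks that the observed behavior matches expectations. Below is a minimal sketch using Python's built-in unittest module; the function under test is hypothetical:

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Hypothetical function under test: apply a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class DiscountTests(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(200.0, 120)

    if __name__ == "__main__":
        unittest.main()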

In machine learning, boosting is an ensemble meta-algorithm primarily for reducing bias, and also variance, in supervised learning, and a family of machine learning algorithms that convert weak learners to strong ones. Boosting is based on the question posed by Kearns and Valiant: "Can a set of weak learners create a single strong learner?" A weak learner is defined to be a classifier that is only slightly correlated with the true classification. In contrast, a strong learner is a classifier that is arbitrarily well-correlated with the true classification.
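
A minimal sketch of the idea behind AdaBoost, one of the best-known boosting algorithms: each round fits a decision stump to a weighted sample and then up-weights the examples it misclassified, so later weak learners concentrate on the hard cases. The data layout and parameters are illustrative:

    import numpy as np

    def train_adaboost(X, y, n_rounds=20):
        """AdaBoost with decision stumps as weak learners (labels y in {-1, +1})."""
        n, d = X.shape
        w = np.full(n, 1.0 / n)      # example weights, start uniform
        ensemble = []                # list of (alpha, feature, threshold, sign)
        for _ in range(n_rounds):
            best = None
            # Exhaustively pick the stump with the lowest weighted error.
            for j in range(d):
                for thr in np.unique(X[:, j]):
                    for sign in (1, -1):
                        pred = sign * np.where(X[:, j] >= thr, 1, -1)
                        err = w[pred != y].sum()
                        if best is None or err < best[0]:
                            best = (err, j, thr, sign, pred)
            err, j, thr, sign, pred = best
            err = max(err, 1e-12)                  # avoid division by zero
            alpha = 0.5 * np.log((1 - err) / err)  # weak learner's vote weight
            w *= np.exp(-alpha * y * pred)         # up-weight misclassified examples
            w /= w.sum()
            ensemble.append((alpha, j, thr, sign))
        return ensemble

    def predict(ensemble, X):
        """Weighted vote of the weak learners."""
        score = sum(a * s * np.where(X[:, j] >= t, 1, -1) for a, j, t, s in ensemble)
        return np.sign(score)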

Educational software is a term used for any computer software that is made for an educational purpose. It ranges from language learning software to classroom management software to reference software. The purpose of all such software is to make some part of education more effective and efficient.

A learning curve is a graphical representation of the relationship between how proficient someone is at a task and the amount of experience they have. Proficiency usually increases with increased experience, that is to say, the more someone performs a task, the better they get at it.
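
One widely used way to model this relationship is the power law of practice, in which the time needed per task falls off as a power of the number of trials. A minimal sketch with purely illustrative parameters:

    import numpy as np

    def time_per_task(trial, initial_time=100.0, learning_rate=0.3):
        """Power-law learning curve: T(n) = T(1) * n**(-b).

        initial_time and learning_rate are illustrative values, not taken
        from any particular study.
        """
        return initial_time * np.power(trial, -learning_rate)

    trials = np.arange(1, 11)
    print(np.round(time_per_task(trials), 1))
    # Time per task drops quickly at first, then flattens as experience grows.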

The technology acceptance model (TAM) is an information systems theory that models how users come to accept and use a technology.

In computational learning theory, probably approximately correct (PAC) learning is a framework for mathematical analysis of machine learning. It was proposed in 1984 by Leslie Valiant.
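
A standard toy instance of the framework is learning a threshold concept on the interval [0, 1] from i.i.d. labeled samples. In the sketch below (the uniform distribution and the specific threshold are illustrative assumptions), the learner outputs the smallest positive example it has seen, and its average error shrinks as the sample size grows, which is what PAC learnability requires:

    import random

    def learn_threshold(samples):
        """Return the smallest positive example as the learned threshold.

        The target concept labels x as positive iff x >= true_threshold.
        """
        positives = [x for x, label in samples if label]
        return min(positives) if positives else 1.0

    def average_error(true_threshold=0.42, m=100, trials=1000):
        errors = []
        for _ in range(trials):
            data = [(x, x >= true_threshold)
                    for x in (random.random() for _ in range(m))]
            h = learn_threshold(data)
            # Under the uniform distribution, the generalization error equals the
            # probability mass between the true and the learned thresholds.
            errors.append(h - true_threshold)
        return sum(errors) / trials

    print(average_error(m=10))    # relatively large average error
    print(average_error(m=1000))  # error shrinks as the sample grows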

Algorithmic learning theory is a mathematical framework for analyzing machine learning problems and algorithms. Synonyms include formal learning theory and algorithmic inductive inference. Algorithmic learning theory is different from statistical learning theory in that it does not make use of statistical assumptions and analysis. Both algorithmic and statistical learning theory are concerned with machine learning and can thus be viewed as branches of computational learning theory.

In computer science, computational learning theory is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms.

Solomonoff's theory of inductive inference is a mathematical theory of induction introduced by Ray Solomonoff, based on probability theory and theoretical computer science. In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.
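
The universal prior itself is incomputable, but the Bayesian update it drives can be illustrated on a small finite hypothesis class. In the toy sketch below, the hypotheses are assumed to be repeating bit patterns, and the weight 2**(-length) stands in for the universal prior's preference for short descriptions:

    # Toy illustration of Solomonoff-style induction on a finite hypothesis class.
    PATTERNS = ["0", "1", "01", "10", "001", "011", "0101"]

    def predicts(pattern: str, data: str) -> bool:
        """True if repeating the pattern reproduces the observed data exactly."""
        repeated = (pattern * (len(data) // len(pattern) + 1))[: len(data)]
        return repeated == data

    def posterior(data: str) -> dict:
        """Posterior over hypotheses given data, via Bayes' rule.

        The likelihood is 1 if the hypothesis reproduces the data, else 0;
        the prior 2**(-len(pattern)) favors shorter descriptions.
        """
        weights = {p: 2.0 ** -len(p) * predicts(p, data) for p in PATTERNS}
        total = sum(weights.values())
        return {p: w / total for p, w in weights.items() if w > 0}

    print(posterior("010101"))  # mass concentrates on "01" over longer patterns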

Language identification in the limit is a formal model for inductive inference of formal languages, mainly by computers. It was introduced by E. Mark Gold in a technical report and a journal article with the same title.
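
A toy version of Gold's setting, with an illustrative class of languages: the learner reads a growing stream of positive examples and, after each one, conjectures the first language in a fixed enumeration that is consistent with everything seen so far. Identification in the limit means these conjectures eventually stabilize on a correct language.

    # Illustrative class of languages over {a, b}: L_n = all strings of length <= n.
    # The learner conjectures the first enumerated language consistent with the
    # examples seen so far ("identification by enumeration").
    def language(n):
        return lambda s: len(s) <= n

    ENUMERATION = [language(n) for n in range(10)]

    def learner(stream):
        seen = []
        for example in stream:
            seen.append(example)
            guess = next(i for i, L in enumerate(ENUMERATION)
                         if all(L(s) for s in seen))
            yield guess

    # A presentation of L_3: the guesses converge to 3 and never change again.
    print(list(learner(["a", "abb", "ab", "aab", "b", "aba"])))  # [1, 3, 3, 3, 3, 3]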

Leslie Valiant

Leslie Gabriel Valiant is a British American computer scientist and computational theorist. He is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. Valiant was awarded the A.M. Turing Award in 2010, having been described by the ACM as a heroic figure in theoretical computer science and a role model for his courage and creativity in addressing some of the deepest unsolved problems in science; in particular for his "striking combination of depth and breadth".

Grammar induction is the process in machine learning of learning a formal grammar from a set of observations, thus constructing a model which accounts for the characteristics of the observed objects. More generally, grammatical inference is that branch of machine learning where the instance space consists of discrete combinatorial objects such as strings, trees and graphs.
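
A common first step when inferring a regular grammar from positive string examples is to build a prefix tree acceptor, a trie-shaped automaton that accepts exactly the sample, which state-merging algorithms such as RPNI then generalize. A minimal sketch of that construction:

    # Build a prefix tree acceptor (PTA) from positive examples: a deterministic
    # automaton shaped like a trie that accepts exactly the sample strings.
    def build_pta(samples):
        transitions = {}   # (state, symbol) -> state
        accepting = set()
        next_state = 1     # state 0 is the initial state
        for word in samples:
            state = 0
            for symbol in word:
                if (state, symbol) not in transitions:
                    transitions[(state, symbol)] = next_state
                    next_state += 1
                state = transitions[(state, symbol)]
            accepting.add(state)
        return transitions, accepting

    def accepts(pta, word):
        transitions, accepting = pta
        state = 0
        for symbol in word:
            if (state, symbol) not in transitions:
                return False
            state = transitions[(state, symbol)]
        return state in accepting

    pta = build_pta(["ab", "abb", "ba"])
    print(accepts(pta, "abb"), accepts(pta, "aa"))  # True False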

End-user development (EUD) or end-user programming (EUP) refers to activities and tools that allow end-users – people who are not professional software developers – to program computers. People who are not professional developers can use EUD tools to create or modify software artifacts and complex data objects without significant knowledge of a programming language. In 2005 it was estimated that by 2012 there would be more than 55 million end-user developers in the United States, compared with fewer than 3 million professional programmers. Various EUD approaches exist, and it is an active research topic within the field of computer science and human-computer interaction. Examples include natural language programming, spreadsheets, scripting languages, visual programming, trigger-action programming and programming by example.

In computer science, programming by demonstration (PbD) is an end-user development technique for teaching a computer or a robot new behaviors by directly demonstrating the task to be transferred, rather than programming it through machine commands.

Verification and validation are independent procedures that are used together for checking that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose. These are critical components of a quality management system such as ISO 9000. The words "verification" and "validation" are sometimes preceded by "independent", indicating that the verification and validation are to be performed by a disinterested third party. "Independent verification and validation" can be abbreviated as "IV&V".

Inductive programming (IP) is a special area of automatic programming, covering research from artificial intelligence and programming, which addresses learning of typically declarative and often recursive programs from incomplete specifications, such as input/output examples or constraints.
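
As a toy illustration, a synthesizer can enumerate candidate programs from a small, entirely hypothetical DSL and return the first one that is consistent with the given input/output examples:

    # Toy inductive programming: enumerate programs from a tiny, hypothetical DSL
    # of integer functions and return the first one matching all I/O examples.
    CANDIDATES = [
        ("x + 1", lambda x: x + 1),
        ("x * 2", lambda x: x * 2),
        ("x * x", lambda x: x * x),
        ("x * 2 + 1", lambda x: x * 2 + 1),
    ]

    def synthesize(examples):
        """examples: list of (input, expected_output) pairs."""
        for name, prog in CANDIDATES:
            if all(prog(x) == y for x, y in examples):
                return name
        return None

    # An incomplete specification given as input/output examples:
    print(synthesize([(1, 3), (4, 9)]))  # "x * 2 + 1" is the first consistent program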

E. Mark Gold is an American physicist, mathematician, and computer scientist. He became well known for his article Language identification in the limit which pioneered a formal model for inductive inference of formal languages, mainly by computers. Since 1999, an award of the conference on Algorithmic learning theory is named after him.

Learning engineering is the systematic application of evidence-based principles and methods from educational technology and the learning sciences to create engaging and effective learning experiences, to address the difficulties and challenges learners face as they learn, and to better understand learners and learning. It emphasizes the use of a human-centered design approach in conjunction with analyses of rich data sets to iteratively develop and improve those designs to address specific learning needs, opportunities, and problems, often with the help of technology. Working with subject-matter and other experts, the learning engineer combines knowledge, tools, and techniques from a variety of technical, pedagogical, empirical, and design-based disciplines to create effective and engaging learning experiences and environments and to evaluate the resulting outcomes. While doing so, the learning engineer strives to generate processes and theories that afford generalization of best practices, along with new tools and infrastructures that empower others to create their own learning designs based on those best practices.

References

  1. Gold, E. Mark (1967). "Language identification in the limit" (PDF). Information and Control. 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5.
  2. Valiant, Leslie (Nov 1984). "A theory of the learnable" (PDF). Communications of the ACM. 27 (11): 1134–1142. doi:10.1145/1968.1972.