Engelbart's law is the observation that the intrinsic rate of human performance is exponential. The law is named after Douglas Engelbart, whose work in augmenting human performance was explicitly based on the realization that, although we use technology, the ability to improve on improvements (bootstrapping, "getting better at getting better") resides entirely within the human sphere.
Engelbart's Bootstrapping concept fixes the general, and particular, meaning of rate and performance in this observation: a quantity, amount, or degree of something measured per unit of something else.[1] That is, Engelbart's law is not limited to an increase in the acquisition, use, or quantity of knowledge, nor to the extent or depth of participation among individuals or teams, nor to the period-to-period change. The law is independent of the domain of performance and of the quantity, amount, or degree on which one chooses to measure.
Humans have long performed at exponential levels, in widely varying contexts and domains.
As with other phenomena, when we observe similar results wherever a reagent or catalyst is applied across many contexts and domains, we associate the power to produce or induce those results with the reagent, in this case the human animal.
Stephen Jay Kline presented a visualization of this exponential phenomenon in his 1995 book (page 173, figure 14-1, "The Growth of Human Powers Over the Past 100,000 Years Plotted as Technoextension Factors (TEFs)").[2] The log-log chart of TEF against time illustrates exponential performance extending over many domains and over hundreds of years.
On this topic Kline's work drew heavily on the work of John H. Lienhard. Kline specifically references Lienhard's The Rate of Technological Improvement before and after the 1830s.[3] Lienhard explored this topic several times in Engines of Our Ingenuity;[4] see specifically Double in a Lifetime.[5] Other relevant episodes include Influence of War[6] and Influence of War, Updated.[7] In these latter two episodes Lienhard explores, and discards, urgent necessity as a necessary driver of such performance.
In discussing the exponential nature of Moore's law, [8] Gordon Moore locates the roots of his inspiration in Engelbart's observations on the propensity of humans to envision and achieve scale, and its non-linear effects.
Engelbart used the term Collective IQ for the measure of how well people can work collectively on important problems and opportunities. It is, ultimately, a measure of effectiveness.
It has long been fashionable to talk of human performance as if it depended on a particular socio-technical fabric. Yet Engelbart felt that what mattered was not the particulars of that fabric, but its nature. He called the nature of that fabric the Bootstrap Paradigm.[9][10]
Central to his realization was a Dynamic Knowledge Repository [11] (DKR) capable of enabling the concurrent development, integration and application of knowledge (CoDIAK). Such a DKR would itself be subject to the CoDIAK process.
This is a co-evolution of the human system and the tool system. To facilitate it, Engelbart observed that a particular structure of human activities is most useful and natural: the A-level ('Business as Usual'), B-level ('Improving how we do that'), and C-level ('Improving how we improve') Activities.[12]
In ABC Model, and particularly in Turbo Charge the C Activity and Extra Bootstrapping, Engelbart addresses the necessity of C-level activity in the shift from an incremental improvement curve to an exponential improvement curve.
Whereas B-level activities achieve mildly exponential results, Engelbart held that C-level activities are necessary to achieve bootstrapping (improving the improvement), a direct dependence on the intrinsically exponential nature of humans.
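As a rough numerical illustration (not drawn from Engelbart's writings; the improvement rates used are arbitrary assumptions), the following sketch contrasts a process improved only by a fixed-rate B-level Activity with one in which a C-level Activity also improves the improvement rate itself each period:

    # Illustrative sketch only: the rates 0.05 and 0.10 are arbitrary assumptions.
    def b_level_only(periods, a0=1.0, b=0.05):
        """Performance compounds at a fixed improvement rate b (B-level only)."""
        a = a0
        for _ in range(periods):
            a *= 1.0 + b          # B-level: improve how we do A
        return a

    def with_c_level(periods, a0=1.0, b=0.05, c=0.10):
        """The improvement rate itself is improved each period (C-level)."""
        a, rate = a0, b
        for _ in range(periods):
            a *= 1.0 + rate       # B-level: improve how we do A
            rate *= 1.0 + c       # C-level: improve how we improve
        return a

    for t in (10, 25, 50):
        print(t, round(b_level_only(t), 2), round(with_c_level(t), 2))

Because the improvement rate itself compounds in the second case, the gap between the two curves widens rapidly with time, which is the bootstrapping effect described above.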
Although Engelbart never published a metric for measuring Bootstrap effects, the Bootstrap Alliance considered the characteristics of candidate metrics in 1997.[13]
As derived from the above, the Bootstrap Alliance identified several characteristics that candidate metrics would necessarily exhibit.
In addition to the acronyms above, the performance of the A-level, B-level, and C-level Activities is denoted below by their respective letters: A, B, and C.
Candidate metric:
    CIQ ∝ B(A)        (1)
Although this metric would reflect the augmentation of CIQ, efficiency in the concurrent development, integration, and application of knowledge (CoDIAK) would depend solely on the application of B-level Activity to A-level Activity. There is insufficiently explicit accounting of improvements to CoDIAK dynamics, and no accounting of C-level Activities.
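As a minimal sketch (the functional form is an assumption, not given in the source), suppose the B-level Activity applies a fixed fractional improvement b to A in each period. Metric (1) then yields only simple compounding:

    A_{t+1} = B(A_t) = (1 + b) A_t,  so  A_t = A_0 (1 + b)^t

The rate b itself never improves; that missing improvement of the improvement is the C-level contribution left unaccounted for.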
Although powerful, Engelbart's Bootstrapping effects also go unaccounted for in simple exponential power formulations.
Candidate metric:
    CIQ ∝ A^B        (2)
Although this metric signifies an improved performance rate, it too provides insufficient accounting of the other activities.
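Assuming metric (2) takes the simple power form shown above, with a fixed exponent (again an assumption about the functional form):

    CIQ ∝ A^B,  with B constant over time

a larger B amplifies whatever gains occur in A, but nothing in the formulation improves B itself, which is the role Engelbart reserved for C-level Activity.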
Comparing these two insufficient candidates illustrates the same key aspect underpinning Engelbart's law: human organization and directed activity are the essential elements of our performance.
An implication of the insufficiency of the foregoing as Bootstrapping metrics is that simple exponential power relationships are insufficient to account for improving CoDIAK abilities. By definition, this result precludes a simple combination of such metrics or laws, such as a factor or power relationship between Moore's law and Metcalfe's law (or variations thereon).
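As one illustration, using the commonly cited forms of the two laws: Moore's law gives transistor counts doubling roughly every two years, N(t) ∝ 2^(t/2), and Metcalfe's law gives network value proportional to the square of the number of nodes, V ∝ n^2. Even combining them, for example by letting the network grow with the transistor count, yields

    V(t) ∝ (2^(t/2))^2 = 2^t,

a fixed-rate exponential: however fast the growth, nothing in such a combination improves the rate of improvement itself.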
Various complex power functions, incorporating the effect of efforts at each of the ABC-level Activities, were considered as candidates.
    Candidate metrics (3), (4), and (5): complex power functions combining A, B, and C.
The differences illustrate the interplay and interdependencies of A-level, B-level and C-level Activities, and the Bootstrapping effects (improved improvement). In all three, improvement to CIQ, CoDIAK, and Bootstrapping itself depends on a recursive application of B-level Activities and C-level Activities.
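Purely as an illustrative assumption consistent with the recursive structure just described (the specific forms considered are not reproduced in the source), such candidates might compose the activity performances, for example:

    CIQ ∝ C(B(A))   or   CIQ ∝ B(A)^(C(B))

In forms like these, C acts on B (improving the improvement), which in turn acts on A, so gains compound at an increasing rather than a fixed rate.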
Although exponential rates of performance, over many time periods, in many domains, and with respect to quantity, extent, or degree, are well documented by Lienhard and many others, modern, real-time measurements of ongoing Bootstrapping levels of performance remain difficult to find.
In explicitly placing the locus of ability for improving our improvement within the human sphere, Engelbart's law cautions us against choosing anemic measures of change in performance. Linear rates, or even simple compound rates, fall far short of our intrinsic abilities.
In addition to envisioning the Bootstrap Paradigm, which describes the nature of a suitable socio-technical fabric, Engelbart envisioned its particular characteristics, which, when put into use and subjected to improvement upon improvement, would meet human requirements.
In this way, to fully use A-, B-, and C-level Activities and achieve bootstrapping levels of performance, we may more easily and readily redefine our measures until we have a suitable basis for such performance.
In his writing and his work, Engelbart intended this method of working to apply to all domains of human endeavor, from the individual to the whole species, in private or public service.