Root Cause Analysis Solver Engine

Root Cause Analysis Solver Engine (RCASE)
Class: Information science
Data structure: Inaccurate, incomplete and erroneous data

Root Cause Analysis Solver Engine (informally RCASE) is a proprietary algorithm developed from research originally conducted at the Warwick Manufacturing Group (WMG) at the University of Warwick. [1] [2] Development of RCASE commenced in 2003 to provide an automated version of root cause analysis, the problem-solving method that tries to identify the root causes of faults or problems. [3] RCASE is now owned by the spin-out company Warwick Analytics, where it is being applied to automated predictive analytics software.


Algorithm

The algorithm was built from the ground up to be particularly suitable for inaccurate, incomplete and erroneous data.

RCASE is considered an innovation in the field of predictive analytics and falls within the category of classification algorithms. Because it was built to handle the data types above, it is reported to have advantages over other classification and machine-learning approaches such as decision trees, neural networks and regression techniques. It does not require hypotheses. [4] [5]
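For context, here is a minimal sketch of the conventional, hypothesis-driven alternative the article contrasts RCASE with: a decision-tree classifier trained on labelled pass/fail process records. This is not RCASE (whose internals are proprietary); the data, feature names and the choice of scikit-learn are illustrative assumptions only.

```python
# Conventional classification baseline (NOT RCASE): learn pass/fail rules
# from labelled process records. All data below is invented.
from sklearn.tree import DecisionTreeClassifier

# Each row is one production run: [temperature_C, pressure_bar, line_id]
X = [
    [180, 2.1, 0],
    [195, 2.3, 1],
    [210, 2.2, 0],
    [205, 2.4, 1],
]
y = [0, 0, 1, 1]  # 0 = pass, 1 = fail

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

print(clf.predict([[208, 2.2, 0]]))  # predicted label for a new run
```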

It has since been commercialised and made available on platforms such as SAP, [6] Teradata and Microsoft. [7] RCASE originated in manufacturing and is widely used in applications such as Six Sigma, quality control and engineering, product design and warranty issues. However, it is also used in other industries such as e-commerce, financial services and utilities where root cause analysis is required. [8]

Notable applications

Motorola, the home of Six Sigma, used the research technology behind RCASE to support its quality processes. It was used to eliminate "No Fault Found" quality issues for a particular mobile-phone model. [9]

Mechanism & architecture

RCASE is non-statistical and thus does not require any hypotheses. [10] Even if the key parameters causing the issue or fault in a process are not present in a dataset, it will still narrow the search space and advise where the root cause may lie. This differs from statistical approaches, which try to find a best fit. [11]
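As a rough illustration of hypothesis-free search-space narrowing (not the actual RCASE algorithm, which is not public), the following sketch eliminates process parameters whose values cannot distinguish failing records from passing ones. All field names and records are invented.

```python
# Minimal sketch of search-space narrowing over pass/fail records.
records = [
    {"oven": "A", "solder": "SnPb", "shift": "night", "fail": False},
    {"oven": "B", "solder": "SnAg", "shift": "day",   "fail": True},
    {"oven": "A", "solder": "SnAg", "shift": "day",   "fail": False},
    {"oven": "B", "solder": "SnAg", "shift": "night", "fail": True},
]

params = [k for k in records[0] if k != "fail"]
fails = [r for r in records if r["fail"]]
passes = [r for r in records if not r["fail"]]

# Keep a parameter as a candidate root cause only if a single value occurs
# in every failure and in no pass; otherwise eliminate it from the search.
candidates = {}
for p in params:
    fail_vals = {r[p] for r in fails}
    pass_vals = {r[p] for r in passes}
    exclusive = fail_vals - pass_vals
    if len(fail_vals) == 1 and exclusive:
        candidates[p] = exclusive.pop()

print(candidates)  # {'oven': 'B'} -> the narrowed region to investigate
```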

RCASE is based on optimised combinatorial theory and runs on either a grid cluster or a high-performance in-memory database. The software interfaces with MES and ERP systems. [12] The result is a system that monitors production and helps prevent defective products from being produced. The output from the analysis is a set of markers that identify either an exact root cause of failure or a parametric region with a high probability of failure (i.e. data-driven guidance on where to gather further data and resolve the root cause exactly). [13]
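The two kinds of marker described above might be reported along the following lines. This is a hypothetical sketch with invented data and a deliberately naive decision rule, not Warwick Analytics' implementation.

```python
# Sketch of the two output types: an exact marker when one value separates
# fails from passes, otherwise a parametric region to investigate further.
def marker_for(name, fail_vals, pass_vals):
    exclusive = set(fail_vals) - set(pass_vals)
    if len(exclusive) == 1:
        return f"exact root cause: {name} = {exclusive.pop()}"
    lo, hi = min(fail_vals), max(fail_vals)
    return f"region of high failure probability: {name} in [{lo}, {hi}]"

print(marker_for("fixture_id", fail_vals=[7, 7, 7], pass_vals=[2, 3, 5]))
print(marker_for("temperature_C", fail_vals=[208, 212, 215],
                 pass_vals=[180, 195, 212]))
```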

The software can be installed on Linux or Microsoft operating systems and deployed on-premises or as software-as-a-service ("SaaS", i.e. cloud). [14]


Related Research Articles

Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information from a data set and transforming the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.

In science and engineering, root cause analysis (RCA) is a method of problem solving used for identifying the root causes of faults or problems. It is widely used in IT operations, manufacturing, telecommunications, industrial process control, accident analysis, medicine, healthcare industry, etc. Root cause analysis is a form of inductive and deductive inference.

The following outline is provided as an overview of and topical guide to software engineering:

Design for Six Sigma (DFSS) is a collection of best-practices for the development of new products and processes. It is sometimes deployed as an engineering design process or business process management method. DFSS originated at General Electric to build on the success they had with traditional Six Sigma; but instead of process improvement, DFSS was made to target new product development. It is used in many industries, like finance, marketing, basic engineering, process industries, waste management, and electronics. It is based on the use of statistical tools like linear regression and enables empirical research similar to that performed in other fields, such as social science. While the tools and order used in Six Sigma require a process to be in place and functioning, DFSS has the objective of determining the needs of customers and the business, and driving those needs into the product solution so created. It is used for product or process design in contrast with process improvement. Measurement is the most important part of most Six Sigma or DFSS tools, but whereas in Six Sigma measurements are made from an existing process, DFSS focuses on gaining a deep insight into customer needs and using these to inform every design decision and trade-off.

PDCA or plan–do–check–act is an iterative design and management method used in business for the control and continual improvement of processes and products. It is also known as the Shewhart cycle, or the control circle/cycle. Another version of this PDCA cycle is OPDCA. The added "O" stands for observation or as some versions say: "Observe the current condition." This emphasis on observation and current condition has currency with the literature on lean manufacturing and the Toyota Production System. The PDCA cycle, with Ishikawa's changes, can be traced back to S. Mizuno of the Tokyo Institute of Technology in 1959.

Failure mode and effects analysis is the process of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes in a system and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. An FMEA can be a qualitative analysis, but may be put on a quantitative basis when mathematical failure rate models are combined with a statistical failure mode ratio database. It was one of the first highly structured, systematic techniques for failure analysis. It was developed by reliability engineers in the late 1950s to study problems that might arise from malfunctions of military systems. An FMEA is often the first step of a system reliability study.
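One common way to put an FMEA worksheet on a quantitative basis (a standard industry convention, not spelled out above) is the risk priority number, RPN = severity × occurrence × detection, with each factor rated on a 1-10 scale. A minimal sketch with invented rows:

```python
# FMEA worksheet rows ranked by risk priority number (RPN).
from dataclasses import dataclass

@dataclass
class FmeaRow:
    failure_mode: str
    severity: int    # 1 (no effect) .. 10 (hazardous)
    occurrence: int  # 1 (rare) .. 10 (almost inevitable)
    detection: int   # 1 (certain to detect) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

rows = [
    FmeaRow("solder bridge", severity=7, occurrence=4, detection=3),
    FmeaRow("cracked housing", severity=5, occurrence=2, detection=8),
]
for row in sorted(rows, key=lambda r: r.rpn, reverse=True):
    print(f"{row.failure_mode}: RPN={row.rpn}")
```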

Troubleshooting is a form of problem solving, often applied to repair failed products or processes on a machine or a system. It is a logical, systematic search for the source of a problem in order to solve it, and make the product or process operational again. Troubleshooting is needed to identify the symptoms. Determining the most likely cause is a process of elimination—eliminating potential causes of a problem. Finally, troubleshooting requires confirmation that the solution restores the product or process to its working state.

Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
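For the constant-failure-rate case, the standard textbook model (an assumption here, not taken from this article) relates reliability over a mission time t to the failure rate λ:

```latex
% Constant-failure-rate model: reliability over mission time t and the
% mean time between failures (MTBF) for failure rate \lambda.
R(t) = e^{-\lambda t}, \qquad \mathrm{MTBF} = \frac{1}{\lambda}
```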

Predictive modelling uses statistics to predict outcomes. Most often the event one wants to predict is in the future, but predictive modelling can be applied to any type of unknown event, regardless of when it occurred. For example, predictive models are often used to detect crimes and identify suspects, after the crime has taken place.
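A minimal predictive-modelling sketch, with invented data and an arbitrary choice of logistic regression via scikit-learn: fit a model on past outcomes, then score a new, unseen case.

```python
# Fit on historical outcomes, then estimate the probability of the event
# for a new case. Features and labels are invented for illustration.
from sklearn.linear_model import LogisticRegression

X = [[25, 0], [47, 1], [35, 0], [52, 1], [23, 0], [56, 1]]  # [age, prior_events]
y = [0, 1, 0, 1, 0, 1]  # past outcome to be predicted

model = LogisticRegression()
model.fit(X, y)
print(model.predict_proba([[40, 1]])[0, 1])  # estimated event probability
```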

<span class="mw-page-title-main">Predictive maintenance</span> Method to predict when equipment should be maintained

Predictive maintenance techniques are designed to help determine the condition of in-service equipment in order to estimate when maintenance should be performed. This approach promises cost savings over routine or time-based preventive maintenance, because tasks are performed only when warranted. Thus, it is regarded as condition-based maintenance carried out as suggested by estimations of the degradation state of an item.

Eight Disciplines Methodology (8D) is a method or model developed at Ford Motor Company used to approach and to resolve problems, typically employed by quality engineers or other professionals. Focused on product and process improvement, its purpose is to identify, correct, and eliminate recurring problems. It establishes a permanent corrective action based on statistical analysis of the problem and on the origin of the problem by determining the root causes. Although it originally comprised eight stages, or "disciplines", it was later augmented by an initial planning stage. 8D follows the logic of the PDCA cycle. The disciplines are: D0 (plan and prepare), D1 (form a team), D2 (describe the problem), D3 (implement interim containment), D4 (determine root causes), D5 (choose permanent corrective actions), D6 (implement and validate corrective actions), D7 (prevent recurrence) and D8 (congratulate the team).

OptiY is a design environment software that provides modern optimization strategies and state-of-the-art probabilistic algorithms for uncertainty, reliability, robustness, sensitivity analysis, data mining and meta-modeling.

<span class="mw-page-title-main">Data science</span> Interdisciplinary field of study on deriving knowledge and insights from data

Data science is an interdisciplinary academic field that uses statistics, scientific computing, scientific methods, processes, algorithms and systems to extract or extrapolate knowledge and insights from potentially noisy, structured, or unstructured data.

<span class="mw-page-title-main">SAP HANA</span> Database management system by SAP

SAP HANA is an in-memory, column-oriented, relational database management system developed and marketed by SAP SE. Its primary function as the software running a database server is to store and retrieve data as requested by the applications. In addition, it performs advanced analytics and includes extract, transform, load (ETL) capabilities as well as an application server.

In the fields of information technology (IT) and systems management, IT operations analytics (ITOA) is an approach or method to retrieve, analyze, and report data for IT operations. ITOA may apply big data analytics to large datasets to produce business insights. In 2014, Gartner predicted its use might increase revenue or reduce costs, and that by 2017, 15% of enterprises would use IT operations analytics technologies.

Continued process verification (CPV) is the collection and analysis of end-to-end production components and processes data to ensure product outputs are within predetermined quality limits. In 2011 the Food and Drug Administration published a report outlining best practices regarding business process validation in the pharmaceutical industry. Continued process verification is outlined in this report as the third stage in Process Validation.

pSeven – For designing software used in electronics and embedded systems

pSeven is a design space exploration (DSE) software platform developed by pSeven SAS that features design, simulation, and analysis capabilities and assists in design decisions. It provides integration with third-party CAD and CAE software tools; multi-objective and robust optimization algorithms; and data analysis and uncertainty quantification tools.

oneAPI Data Analytics Library is a library of optimized algorithmic building blocks for the data analysis stages most commonly associated with solving big data problems.

Industrial big data refers to a large amount of diversified time series generated at a high speed by industrial equipment, known as the Internet of things. The term emerged in 2012 along with the concept of "Industry 4.0", and differs from "big data", popular in information technology marketing, in that data created by industrial equipment might hold more potential business value. Industrial big data takes advantage of industrial Internet technology. It uses raw data to support management decision making, so as to reduce costs in maintenance and improve customer service. See intelligent maintenance system for more reference.

Artificial Intelligence for IT Operations (AIOps) is a term coined by Gartner in 2016 as an industry category for machine learning analytics technology that enhances IT operations analytics. Such operations tasks include automation, performance monitoring and event correlation, among others.

References

  1. "Overview of Warwick Analytical Software Limited". Business Week. Archived from the original on November 8, 2014. Retrieved 8 November 2014.
  2. "Manufacturing Global, Emerging Predictive Analytics for the Manufacturing Industries". Issuu . Retrieved 8 November 2014.
  3. "When Academia Meets The Real World, The Experience Can Be Life-altering: A First Person Perspective by Dan Sommers, Warwick Analytics". TechNet. Retrieved 8 November 2014.
  4. "Manufacturing 4.0 – From Industrialisation to Data-Driven Product Lifecycle". Citizen Tekk. Retrieved 8 November 2014.
  5. "Removing hypotheses for fault-finding in Six Sigma to revolutionise quality management". Supply Chain Digital. Retrieved 8 November 2014.
  6. "SAP to boost growth opportunities, deliver innovation with disruptive solutions from partner tie-ups". InformationWeek . Retrieved 8 November 2014.
  7. "SAP Spurs Innovation by Powering More Than 500 Startups Globally With SAP HANA". SAP SE . Retrieved 8 November 2014.
  8. "How German-based SAP is creating a startup ecosystem from Silicon Valley". Pando Daily . Retrieved 8 November 2014.
  9. "Advanced analytics solves 'No Fault Found' issues". Warwick Manufacturing Group . Retrieved 8 November 2014.
  10. "Analytical software could solve mass product recall problems". Engineering and Technology Magazine . Retrieved 8 November 2014.
  11. "Warwick Analytics pioneers manufacturing fault finder software". The Engineer . Retrieved 8 November 2014.
  12. "Press release: Midlands Company could solve mass vehicle recall problems". University of Warwick. 25 October 2010.
  13. "Using Big Data to Achieve Zero Defects". European Business Review. Retrieved 8 November 2014.
  14. "Warwick Analytics Revolutionises Manufacturing Processes at DEMO Fall 2013". Boston.com . Retrieved 8 November 2014.