Data transformation

In computing, data transformation is the process of converting data from one format or structure into another format or structure. It is a fundamental aspect of most data integration [1] and data management tasks, such as data wrangling, data warehousing, and application integration.

Data transformation can be simple or complex based on the required changes to the data between the source (initial) data and the target (final) data. Data transformation is typically performed via a mixture of manual and automated steps. [2] Tools and technologies used for data transformation can vary widely based on the format, structure, complexity, and volume of the data being transformed.

A master data recast is another form of data transformation, in which the entire database of data values is transformed or recast without extracting the data from the database. All data in a well-designed database is directly or indirectly related to a limited set of master database tables by a network of foreign key constraints. Each foreign key constraint is dependent upon a unique database index from the parent database table. Therefore, when the proper master database table is recast with a different unique index, the directly and indirectly related data are also recast or restated. The directly and indirectly related data may also still be viewed in the original form, since the original unique index still exists with the master data. The database recast must also be done in such a way as not to impact the application architecture software.

When the data mapping is indirect via a mediating data model, the process is also called data mediation.

Data transformation process

Data transformation can be divided into the following steps, each applicable as needed based on the complexity of the transformation required.

These steps are often the focus of developers or technical data analysts who may use multiple specialized tools to perform their tasks.

The steps can be described as follows:

Data discovery is the first step in the data transformation process. Typically the data is profiled using profiling tools, or sometimes manually written profiling scripts, to better understand its structure and characteristics and to decide how it needs to be transformed.
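
As an illustration only, a minimal profiling sketch in Python with pandas might look like the following; the file name and columns are hypothetical:

import pandas as pd

# Load a sample of the source data (file name is hypothetical).
df = pd.read_csv("customers.csv")

# Structural profile: column names, inferred types, row count.
print(df.dtypes)
print("rows:", len(df))

# Value-level characteristics that inform transformation decisions:
# missing values, distinct counts, and summary statistics.
print(df.isna().sum())
print(df.nunique())
print(df.describe(include="all"))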

Data mapping is the process of defining how individual fields are mapped, modified, joined, filtered, aggregated, etc. to produce the final desired output. Developers or technical data analysts traditionally perform data mapping, since they work in the specific technologies used to define the transformation rules (e.g. visual ETL tools, [3] transformation languages).
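
For illustration, a mapping can be captured declaratively before any code is written; the field names and rules in this sketch are hypothetical, and visual ETL tools capture the same information graphically rather than in code:

# Hypothetical mapping specification: each target field is defined by one or
# more source fields plus an optional transformation or aggregation rule.
field_mapping = {
    "customer_id":    {"source": "cust_no"},
    "full_name":      {"source": ["first_name", "last_name"],
                       "transform": "concatenate with a space"},
    "signup_date":    {"source": "created_at",
                       "transform": "parse as ISO 8601 date"},
    "lifetime_value": {"source": "order_total",
                       "aggregate": "sum per customer_id"},
}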

Code generation is the process of generating executable code (e.g. SQL, Python, R, or other executable instructions) that will transform the data based on the desired and defined data mapping rules. [4] Typically, the data transformation technologies generate this code [5] based on the definitions or metadata defined by the developers.
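
A toy sketch of metadata-driven code generation follows: a simple column mapping is rendered as a SQL statement, which the code-execution step would then run against the source system. The table and column names are hypothetical.

# Toy code generator: render simple column mappings as a SQL SELECT.
simple_mapping = {
    "customer_id": "cust_no",
    "full_name":   "first_name || ' ' || last_name",
    "signup_date": "CAST(created_at AS DATE)",
}

select_list = ",\n  ".join(f"{expr} AS {target}"
                           for target, expr in simple_mapping.items())
sql = f"SELECT\n  {select_list}\nFROM raw_customers;"
print(sql)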

Code execution is the step whereby the generated code is executed against the data to create the desired output. The executed code may be tightly integrated into the transformation tool, or it may require separate steps by the developer to manually execute the generated code.

Data review is the final step in the process, which focuses on ensuring the output data meets the transformation requirements. It is typically the business user or final end-user of the data that performs this step. Any anomalies or errors found in the data are communicated back to the developer or data analyst as new requirements to be implemented in the transformation process. [1]

Types of data transformation

Batch data transformation

Traditionally, data transformation has been a bulk or batch process, [6] whereby developers write code or implement transformation rules in a data integration tool, and then execute that code or those rules on large volumes of data. [7] This process can follow the linear set of steps as described in the data transformation process above.

Batch data transformation is the cornerstone of virtually all data integration technologies such as data warehousing, data migration and application integration. [1]

When data must be transformed and delivered with low latency, the term “microbatch” is often used. [6] This refers to small batches of data (e.g. a small number of rows or small set of data objects) that can be processed very quickly and delivered to the target system when needed.
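
A rough sketch of the idea, assuming a hypothetical row stream and delivery function, is:

from typing import Callable, Dict, Iterable, Iterator, List

def microbatches(rows: Iterable[Dict], batch_size: int = 100) -> Iterator[List[Dict]]:
    """Group an incoming row stream into small batches."""
    batch: List[Dict] = []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

def transform(row: Dict) -> Dict:
    # Hypothetical rule: normalize a country code field.
    row["country"] = row.get("country", "").upper()
    return row

def process(stream: Iterable[Dict], deliver: Callable[[List[Dict]], None]) -> None:
    # Each small batch is transformed and delivered as soon as it is ready,
    # keeping latency low compared with a nightly bulk run.
    for batch in microbatches(stream):
        deliver([transform(r) for r in batch])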

Benefits of batch data transformation

Traditional data transformation processes have served companies well for decades. The various tools and technologies (data profiling, data visualization, data cleansing, data integration etc.) have matured and most (if not all) enterprises transform enormous volumes of data that feed internal and external applications, data warehouses and other data stores. [8]

Limitations of traditional data transformation

This traditional process also has limitations that hamper its overall efficiency and effectiveness. [1] [2] [7]

The people who need to use the data (e.g. business users) do not play a direct role in the data transformation process. [9] Typically, users hand over the data transformation task to developers who have the necessary coding or technical skills to define the transformations and execute them on the data. [8]

This process leaves the bulk of the work of defining the required transformations to the developer. The developer interprets the business user requirements and implements the related code/logic. This has the potential of introducing errors into the process (through misinterpreted requirements), and also increases the time to arrive at a solution. [9] [10]

This problem has given rise to the need for agility and self-service in data integration (i.e. empowering the user of the data and enabling them to transform the data themselves interactively). [7] [10]

There are companies that provide self-service data transformation tools. They aim to efficiently analyze, map and transform large volumes of data without the technical and process complexity that currently exists. While these companies use traditional batch transformation, their tools enable more interactivity for users through visual platforms and easily repeated scripts. [11]

Interactive data transformation

Interactive data transformation (IDT) [12] is an emerging capability that allows business analysts and business users to interact directly with large datasets through a visual interface, [9] understand the characteristics of the data (via automated data profiling or visualization), and change or correct the data through simple interactions such as clicking or selecting certain elements of the data. [2]

Although IDT follows the same data integration process steps as batch data integration, the key difference is that the steps are not necessarily followed in a linear fashion and typically do not require significant technical skills to complete. [13]

A number of companies, primarily start-ups such as Trifacta, Alteryx and Paxata, provide interactive data transformation tools. They aim to efficiently analyze, map and transform large volumes of data without the technical and process complexity that currently exists.

IDT solutions provide an integrated visual interface that combines the previously disparate steps of data analysis, data mapping and code generation/execution and data inspection. [8] IDT interfaces incorporate visualization to show the user patterns and anomalies in the data so they can identify erroneous or outlying values. [9]

Once the user has finished transforming the data, the system can generate executable code/logic, which can be executed or applied to subsequent similar data sets.

By removing the developer from the process, IDT systems shorten the time needed to prepare and transform the data, eliminate costly errors in interpretation of user requirements and empower business users and analysts to control their data and interact with it as needed. [10]

Transformational languages

Numerous languages are available for performing data transformation, varying in their accessibility (cost) and general usefulness. [14] Many transformation languages require a grammar to be provided; in many cases, the grammar is structured using something closely resembling Backus–Naur Form (BNF). Examples of such languages include AWK, Perl, TXL, and XSLT.

Additionally, companies such as Trifacta and Paxata have developed domain-specific transformational languages (DSLs) for servicing and transforming datasets. The development of domain-specific languages has been linked to increased productivity and accessibility for non-technical users. [15] Trifacta's "Wrangle" is an example of such a domain-specific language. [16]

Another advantage of the recent DSL trend is that a DSL can abstract the underlying execution of the logic it defines, allowing the same logic to be reused across various processing engines, such as Spark, MapReduce, and Dataflow. With a DSL, the transformation language is not tied to the engine. [16]
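
As a minimal sketch of this engine independence (not Wrangle itself; the rule, table, and column names are hypothetical), the same declarative rule can be rendered for two different execution targets:

# One declarative rule, rendered for two hypothetical execution targets.
rule = {"target": "full_name", "op": "concat",
        "columns": ["first_name", "last_name"]}

def to_sql(rule) -> str:
    target = rule["target"]
    cols = " || ' ' || ".join(rule["columns"])
    return f"SELECT {cols} AS {target} FROM raw_customers;"

def to_pandas(rule) -> str:
    target = rule["target"]
    cols = " + ' ' + ".join(f"df['{c}']" for c in rule["columns"])
    return f"df['{target}'] = {cols}"

print(to_sql(rule))     # run on a SQL engine
print(to_pandas(rule))  # run in an in-memory pandas pipeline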

Although transformational languages are typically best suited for transformation, something as simple as regular expressions can be used to achieve useful transformation. A text editor like vim, emacs or TextPad supports the use of regular expressions with arguments, allowing all instances of a particular pattern to be replaced with another pattern using parts of the original pattern. For example:

foo ("some string", 42, gCommon); bar (someObj, anotherObj);  foo ("another string", 24, gCommon); bar (myObj, myOtherObj); 

could both be transformed into a more compact form like:

foobar("some string", 42, someObj, anotherObj); foobar("another string", 24, myObj, myOtherObj);

In other words, all instances of a function invocation of foo with three arguments, followed by a function invocation of bar with two arguments, would be replaced with a single function invocation of foobar using some or all of the original set of arguments.
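
A minimal sketch of that rewrite using Python's re module (rather than an editor) follows; the pattern keeps foo's first two arguments and both of bar's arguments, and drops gCommon:

import re

source = '''foo ("some string", 42, gCommon); bar (someObj, anotherObj);
foo ("another string", 24, gCommon); bar (myObj, myOtherObj);'''

# Capture foo's first two arguments and both of bar's arguments,
# then emit a single foobar call.
pattern = r'foo \(([^,]+), ([^,]+), [^)]+\); bar \(([^,]+), ([^)]+)\);'
print(re.sub(pattern, r'foobar(\1, \2, \3, \4);', source))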

Another advantage of using regular expressions is that they will not fail the null transform test. That is, using your transformational language of choice, run a sample program through a transformation that does not perform any transformations; the output should be identical to the input. Many transformational languages will fail this test.
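
A sketch of the test itself, assuming a hypothetical transform function, is simply an identity check:

def null_transform(program_text: str) -> str:
    # A transformation with no rules defined: it should change nothing.
    return program_text

def passes_null_transform_test(transform, sample: str) -> bool:
    # The tool passes if a do-nothing transformation leaves the input
    # byte-for-byte unchanged (no reformatting, no lost comments).
    return transform(sample) == sample

sample = 'foo ("some string", 42, gCommon); bar (someObj, anotherObj);'
print(passes_null_transform_test(null_transform, sample))  # True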

Related Research Articles

Extract, transform, load

In computing, extract, transform, load (ETL) is a three-phase process in which data is first extracted, then transformed, and finally loaded into an output data container. The data can be collated from one or more sources, and it can also be output to one or more destinations. ETL processing is typically executed using software applications, but it can also be done manually by system operators. ETL software typically automates the entire process and can be run manually or on recurring schedules, either as single jobs or aggregated into a batch of jobs.

A domain-specific language (DSL) is a computer language specialized to a particular application domain. This is in contrast to a general-purpose language (GPL), which is broadly applicable across domains. There are a wide variety of DSLs, ranging from widely used languages for common domains, such as HTML for web pages, down to languages used by only one or a few pieces of software, such as MUSH soft code. DSLs can be further subdivided by the kind of language, and include domain-specific markup languages, domain-specific modeling languages, and domain-specific programming languages. Special-purpose computer languages have always existed in the computer age, but the term "domain-specific language" has become more popular due to the rise of domain-specific modeling. Simpler DSLs, particularly ones used by a single application, are sometimes informally called mini-languages.

Apache Cocoon, usually abbreviated as Cocoon, is a web application framework built around the concepts of Pipeline, separation of concerns, and component-based web development. The framework focuses on XML and XSLT publishing and is built using the Java programming language. The flexibility afforded by relying heavily on XML allows rapid content publishing in a variety of formats including HTML, PDF, and WML. The content management systems Apache Lenya and Daisy have been created on top of the framework. Cocoon is also commonly used as a data warehousing ETL tool or as middleware for transporting data between systems.

Data Transformation Services

Data Transformation Services, or DTS, is a set of objects and utilities to allow the automation of extract, transform and load operations to or from a database. The objects are DTS packages and their components, and the utilities are called DTS tools. DTS was included with earlier versions of Microsoft SQL Server, and was almost always used with SQL Server databases, although it could be used independently with other databases.

In computing and data management, data mapping is the process of creating data element mappings between two distinct data models. Data mapping is used as a first step for a wide variety of data integration tasks.

Analysis of Functional NeuroImages

Analysis of Functional NeuroImages (AFNI) is an open-source environment for processing and displaying functional MRI data—a technique for mapping human brain activity.

SQL Server Integration Services (SSIS) is a component of the Microsoft SQL Server database software that can be used to perform a broad range of data migration tasks.

Data cleansing or data cleaning is the process of detecting and correcting corrupt or inaccurate records from a record set, table, or database and refers to identifying incomplete, incorrect, inaccurate or irrelevant parts of the data and then replacing, modifying, or deleting the dirty or coarse data. Data cleansing may be performed interactively with data wrangling tools, or as batch processing through scripting.

A semantic mapper is a tool or service that aids in the transformation of data elements from one namespace into another namespace. A semantic mapper is an essential component of a semantic broker and one of the tools enabled by Semantic Web technologies.

Sunopsis is a software company based near Lyon, France, with a United States headquarters in Burlington, Massachusetts. The company was bought by Oracle in October 2006, and Sunopsis is now known as Oracle Data Integrator (ODI).

Data integration involves combining data residing in different sources and providing users with a unified view of them. This process becomes significant in a variety of situations, both commercial and scientific. Data integration appears with increasing frequency as the volume of data and the need to share existing data explode. It has become the focus of extensive theoretical work, and numerous open problems remain unsolved. Data integration encourages collaboration between internal as well as external users. The data being integrated must be received from a heterogeneous database system and transformed into a single coherent data store that provides synchronous data across a network of files for clients. A common use of data integration is in data mining, when analyzing and extracting information from existing databases that can be useful for business information.

MagicDraw

MagicDraw is a visual UML, SysML, BPMN, and UPDM modeling tool with team collaboration support. Designed for business analysts, software analysts, programmers, and QA engineers, this dynamic and versatile development tool facilitates analysis and design of object-oriented (OO) systems and databases. It provides the code engineering mechanism, as well as database schema modeling, DDL generation and reverse engineering facilities.

Sandcastle is a documentation generator from Microsoft. It automatically produces MSDN-style code documentation out of reflection information of .NET assemblies and XML documentation comments found in the source code of these assemblies. It can also be used to produce user documentation from Microsoft Assistance Markup Language (MAML) with the same look and feel as reference documentation.

IBM App Connect Enterprise is IBM's integration broker from the WebSphere product family that allows business information to flow between disparate applications across multiple hardware and software platforms. Rules can be applied to the data flowing through the message broker to route and transform the information. The product is an Enterprise Service Bus supplying a communication channel between applications and services in a service-oriented architecture.

Data wrangling, sometimes referred to as data munging, is the process of transforming and mapping data from one "raw" data form into another format with the intent of making it more appropriate and valuable for a variety of downstream purposes such as analytics. The goal of data wrangling is to assure quality and useful data. Data analysts typically spend the majority of their time in the process of data wrangling compared to the actual analysis of the data.

Domain-specific multimodeling is a software development paradigm where each view is made explicit as a separate domain-specific language (DSL).

An integration platform is software which integrates different applications and services. It differentiates itself from enterprise application integration, which has a focus on supply chain management. It uses the idea of system integration to create an environment for engineers.

KNIME, the Konstanz Information Miner, is a free and open-source data analytics, reporting and integration platform. KNIME integrates various components for machine learning and data mining through its modular data pipelining "Building Blocks of Analytics" concept. A graphical user interface and use of JDBC allows assembly of nodes blending different data sources, including preprocessing, for modeling, data analysis and visualization without, or with only minimal, programming.

Guaraná DSL

Guaraná DSL is a domain-specific language (DSL) to design enterprise application integration (EAI) solutions at a high level of abstraction. The resulting models are platform-independent, so engineers do not need to have skills on a low-level integration technology when designing their solutions. Furthermore, this design can be re-used to automatically generate executable EAI solutions for different target technologies.

Oracle TopLink is a mapping and persistence framework for Java developers. TopLink is produced by Oracle and is a part of Oracle's OracleAS, WebLogic, and OC4J servers. It is an object-persistence and object-transformation framework. TopLink provides development tools and run-time functionalities that ease the development process and help increase functionality. Persistent object-oriented data is stored in relational databases which helps build high-performance applications. Storing data in either XML or relational databases is made possible by transforming it from object-oriented data.

References

  1. CIO.com. "Agile Comes to Data Integration." Retrieved from: https://www.cio.com/article/2378615/data-management/agile-comes-to-data-integration.html
  2. Morcos, Abedjan, Ilyas, Ouzzani, Papotti, Stonebraker. "DataXFormer: An Interactive Data Transformation Tool." Retrieved from: http://livinglab.mit.edu/wp-content/uploads/2015/12/DataXFormer-An-Interactive-Data-Transformation-Tool.pdf
  3. DWBIMASTER. "Top 10 ETL Tools." Retrieved from: http://dwbimaster.com/top-10-etl-tools/
  4. Petr Aubrecht, Zdenek Kouba. "Metadata Driven Data Transformation." Retrieved from: http://labe.felk.cvut.cz/~aubrech/bin/Sumatra.pdf
  5. LearnDataModeling.com. "Code Generators." Retrieved from: http://www.learndatamodeling.com/tm_code_generator.php
  6. TDWI. "10 Rules for Real-Time Data Integration." Retrieved from: https://tdwi.org/Articles/2012/12/11/10-Rules-Real-Time-Data-Integration.aspx?Page=1
  7. Tope Omitola, André Freitas, Edward Curry, Sean O'Riain, Nicholas Gibbins, and Nigel Shadbolt. "Capturing Interactive Data Transformation Operations Using Provenance Workflows." Retrieved from: http://andrefreitas.org/papers/preprint_capturing%20interactive_data_transformation_eswc_highlights.pdf
  8. "The Value of Data Transformation."
  9. Morton, Kristi. "Interactive Data Integration and Entity Resolution for Exploratory Visual Data Analytics." Retrieved from: https://digital.lib.washington.edu/researchworks/handle/1773/35165
  10. McKinsey.com. "Using Agile to Accelerate Data Transformation."
  11. "Why Self-Service Prep Is a Killer App for Big Data." Datanami. 2016-05-31. Retrieved 2017-09-20.
  12. Tope Omitola, André Freitas, Edward Curry, Sean O'Riain, Nicholas Gibbins, and Nigel Shadbolt. "Capturing Interactive Data Transformation Operations Using Provenance Workflows." Retrieved from: http://andrefreitas.org/papers/preprint_capturing%20interactive_data_transformation_eswc_highlights.pdf
  13. Peng Cong, Zhang Xiaoyi. "Research and Design of Interactive Data Transformation and Migration System for Heterogeneous Data Sources." Retrieved from: https://ieeexplore.ieee.org/document/5211525/
  14. DMOZ. "Extraction and Transformation." Retrieved from: https://dmoztools.net/Computers/Software/Databases/Data_Warehousing/Extraction_and_Transformation/
  15. "Wrangle Language - Trifacta Wrangler - Trifacta Documentation." docs.trifacta.com. Retrieved 2017-09-20.
  16. Kandel, Sean; Hellerstein, Joe. "Advantages of a Domain-Specific Language Approach to Data Transformation - Strata + Hadoop World in New York 2014." conferences.oreilly.com. Retrieved 2017-09-20.