Data vault modeling

Figure: Simple data vault model with two hubs (blue), one link (green) and four satellites (yellow)

Data vault or data vault modeling is a database modeling method designed to provide long-term historical storage of data coming in from multiple operational systems. It is also a method of looking at historical data that deals with issues such as auditing, tracing of data, loading speed and resilience to change, and it emphasizes the need to trace where all the data in the database came from. This means that every row in a data vault must be accompanied by record source and load date attributes, enabling an auditor to trace values back to the source. The concept was published in 2000 by Dan Linstedt.

Data vault modeling makes no distinction between good and bad data ("bad" meaning not conforming to business rules). [1] This is summarized in the statement that a data vault stores "a single version of the facts" (also expressed by Dan Linstedt as "all the data, all of the time") as opposed to the practice in other data warehouse methods of storing "a single version of the truth" [2] where data that does not conform to the definitions is removed or "cleansed". A data vault enterprise data warehouse provides both: a single version of the facts and a single source of truth. [3]

The modeling method is designed to be resilient to change in the business environment where the data being stored is coming from, by explicitly separating structural information from descriptive attributes. [4] Data vault is designed to enable parallel loading as much as possible, [5] so that very large implementations can scale out without the need for major redesign.

Unlike the star schema (dimensional modelling) and the classical relational model (3NF), data vault and anchor modeling are well-suited for capturing changes that occur when a source system is changed or added, but are considered advanced techniques which require experienced data architects. [6] Both data vaults and anchor models are entity-based models, [7] but anchor models have a more normalized approach.[citation needed]

History and philosophy

In its early days, Dan Linstedt referred to the modeling technique which was to become data vault as common foundational warehouse architecture [8] or common foundational modeling architecture. [9] In data warehouse modeling there are two well-known competing options for modeling the layer where the data are stored: either you model according to Ralph Kimball, with conformed dimensions and an enterprise data bus, or you model according to Bill Inmon, with the database normalized[citation needed]. Both techniques have issues when dealing with changes in the systems feeding the data warehouse[citation needed]. For conformed dimensions you also have to cleanse data (to conform it), and this is undesirable in a number of cases since it will inevitably lose information[citation needed]. Data vault is designed to avoid or minimize the impact of those issues, by moving them to areas of the data warehouse that are outside the historical storage area (cleansing is done in the data marts) and by separating the structural items (business keys and the associations between the business keys) from the descriptive attributes.

Dan Linstedt, the creator of the method, describes the resulting database as follows:

"The Data Vault Model is a detail oriented, historical tracking and uniquely linked set of normalized tables that support one or more functional areas of business. It is a hybrid approach encompassing the best of breed between 3rd normal form (3NF) and star schema. The design is flexible, scalable, consistent and adaptable to the needs of the enterprise" [10]

Data vault's philosophy is that all data is relevant data, even if it is not in line with established definitions and business rules. If data do not conform to these definitions and rules, then that is a problem for the business, not the data warehouse. The determination of data being "wrong" is an interpretation of the data that stems from a particular point of view that may not be valid for everyone, or at every point in time. Therefore the data vault must capture all data; the data are interpreted only when reporting or extracting from the data vault.

Another issue to which data vault is a response is the increasing need for complete auditability and traceability of all the data in the data warehouse. Due to Sarbanes-Oxley requirements in the USA and similar measures in Europe, this is a relevant topic for many business intelligence implementations, hence the focus of any data vault implementation is complete traceability and auditability of all information.

Data Vault 2.0 is the new specification. It is an open standard. [11] The new specification consists of three pillars: the methodology (SEI/CMMI, Six Sigma, SDLC, etc.), the architecture (among others an input layer (data stage, called persistent staging area in Data Vault 2.0), a presentation layer (data mart), and the handling of data quality services and master data services), and the model. Within the methodology, the implementation of best practices is defined. Data Vault 2.0 focuses on including new components such as big data and NoSQL, and also on the performance of the existing model. The old specification (documented here for the most part) is highly focused on data vault modeling. It is documented in the book Building a Scalable Data Warehouse with Data Vault 2.0.

The specification has to evolve to include these new components, along with the best practices, in order to keep the EDW and BI systems current with the needs of today's businesses.

History

Data vault modeling was originally conceived by Dan Linstedt in the 1990s and was released in 2000 as a public domain modeling method. In a series of five articles in The Data Administration Newsletter the basic rules of the Data Vault method are expanded and explained. These contain a general overview, [12] an overview of the components, [13] a discussion about end dates and joins, [14] link tables, [15] and an article on loading practices. [16]

An alternative (and seldom used) name for the method is "Common Foundational Integration Modelling Architecture." [17]

Data Vault 2.0 [18] [19] arrived on the scene in 2013 and brings seamless integration of big data, NoSQL, and unstructured and semi-structured data, along with methodology, architecture, and implementation best practices.

Alternative interpretations

According to Dan Linstedt, the Data Model is inspired by (or patterned off) a simplistic view of neurons, dendrites, and synapses – where neurons are associated with Hubs and Hub Satellites, Links are dendrites (vectors of information), and other Links are synapses (vectors in the opposite direction). By using a set of data mining algorithms, links can be scored with confidence and strength ratings. They can be created and dropped on the fly in accordance with learning about relationships that currently don't exist. The model can be automatically morphed, adapted, and adjusted as it is used and fed new structures. [20]

Another view is that a data vault model provides an ontology of the Enterprise in the sense that it describes the terms in the domain of the enterprise (Hubs) and the relationships among them (Links), adding descriptive attributes (Satellites) where necessary.

Another way to think of a data vault model is as a graph model: it provides a "graph based" model with hubs and relationships in a relational database world. In this manner, the developer can use SQL to query graph-based relationships with sub-second responses.

Basic notions

Data vault attempts to solve the problem of dealing with change in the environment by separating the business keys (that do not mutate as often, because they uniquely identify a business entity) and the associations between those business keys, from the descriptive attributes of those keys.

The business keys and their associations are structural attributes, forming the skeleton of the data model. The data vault method has as one of its main axioms that real business keys only change when the business changes and are therefore the most stable elements from which to derive the structure of a historical database. If you use these keys as the backbone of a data warehouse, you can organize the rest of the data around them. This means that choosing the correct keys for the hubs is of prime importance for the stability of your model. [21] The keys are stored in tables with a few constraints on the structure. These key-tables are called hubs.

Hubs

Hubs contain a list of unique business keys with low propensity to change. Hubs also contain a surrogate key for each Hub item and metadata describing the origin of the business key. The descriptive attributes for the information on the Hub (such as the description for the key, possibly in multiple languages) are stored in structures called Satellite tables which will be discussed below.

The hub contains at least the following fields: [22] a sequence ID or surrogate key for each hub item, the business key that drives the hub, the record source of the key, and optionally a reference to load audit metadata (see the hub example below).

A hub is not allowed to contain multiple business keys, except when two systems deliver the same business key but with collisions that have different meanings.

Hubs should normally have at least one satellite. [22]

Hub example

This is an example of a hub table containing cars, called "Car" (H_CAR). The driving key is the vehicle identification number.

Fieldname | Description | Mandatory? | Comment
H_CAR_ID | Sequence ID and surrogate key for the hub | No | Recommended but optional [23]
VEHICLE_ID_NR | The business key that drives this hub. Can be more than one field for a composite business key | Yes |
H_RSRC | The record source of this key when first loaded | Yes |
LOAD_AUDIT_ID | An ID into a table with audit information, such as load time, duration of load, number of lines, etc. | No |
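
The fields above map directly onto a physical table. Below is a minimal sketch in Python using the standard sqlite3 module; the column types, the in-memory database and the sample business key value are illustrative assumptions rather than part of the data vault specification.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # illustrative in-memory database

# Hub for cars: one row per unique business key (the vehicle identification number).
# Column names follow the example above; the types are illustrative SQLite choices.
conn.execute("""
CREATE TABLE H_CAR (
    H_CAR_ID      INTEGER PRIMARY KEY,   -- surrogate key (recommended but optional)
    VEHICLE_ID_NR TEXT NOT NULL UNIQUE,  -- the business key that drives this hub
    H_RSRC        TEXT NOT NULL,         -- record source of the key when first loaded
    LOAD_AUDIT_ID INTEGER                -- optional reference to a load-audit table
)
""")

# Loading a hub only inserts business keys that are not yet present;
# the UNIQUE constraint plus INSERT OR IGNORE keeps the load idempotent.
conn.execute(
    "INSERT OR IGNORE INTO H_CAR (VEHICLE_ID_NR, H_RSRC) VALUES (?, ?)",
    ("WAUZZZ8V5KA123456", "CRM_SYSTEM"),  # invented sample values
)
conn.commit()
```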

Links

Associations or transactions between business keys (relating for instance the hubs for customer and product with each other through the purchase transaction) are modeled using link tables. These tables are basically many-to-many join tables, with some metadata.

Links can link to other links, to deal with changes in granularity (for instance, adding a new key to a database table would change the grain of the database table). For instance, if you have an association between customer and address, you could add a reference to a link between the hubs for product and transport company. This could be a link called "Delivery". Referencing a link in another link is considered a bad practice, because it introduces dependencies between links that make parallel loading more difficult. Since a link to another link is the same as a new link with the hubs from the other link, in these cases creating the links without referencing other links is the preferred solution (see the section on loading practices for more information).

Links sometimes link hubs to information that is not by itself enough to construct a hub. This occurs when one of the business keys associated by the link is not a real business key. As an example, take an order form with "order number" as key, and order lines that are keyed with a semi-random number to make them unique; let's call it "unique number". The latter key is not a real business key, so it is not a hub. However, we do need to use it in order to guarantee the correct granularity for the link. In this case, we do not use a hub with a surrogate key, but add the business key "unique number" itself to the link. This is done only when there is no possibility of ever using the business key for another link or as the key for attributes in a satellite. This construct has been called a "peg-legged link" by Dan Linstedt on his (now defunct) forum.

Links contain the surrogate keys for the hubs that are linked, their own surrogate key for the link and metadata describing the origin of the association. The descriptive attributes for the information on the association (such as the time, price or amount) are stored in structures called satellite tables which are discussed below.

Link example

This is an example of a link table between two hubs for cars (H_CAR) and persons (H_PERSON). The link is called "Driver" (L_DRIVER).

Fieldname | Description | Mandatory? | Comment
L_DRIVER_ID | Sequence ID and surrogate key for the link | No | Recommended but optional [23]
H_CAR_ID | Surrogate key for the car hub, the first anchor of the link | Yes |
H_PERSON_ID | Surrogate key for the person hub, the second anchor of the link | Yes |
L_RSRC | The record source of this association when first loaded | Yes |
LOAD_AUDIT_ID | An ID into a table with audit information, such as load time, duration of load, number of lines, etc. | No |
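
Continuing the SQLite sketch from the hub example (same connection), the link can be expressed as a many-to-many table carrying the two hub surrogate keys plus metadata. The person hub and its business key PERSON_ID_NR are assumed here for illustration.

```python
# A second hub for persons, analogous to H_CAR (PERSON_ID_NR is an assumed business key).
conn.execute("""
CREATE TABLE H_PERSON (
    H_PERSON_ID   INTEGER PRIMARY KEY,
    PERSON_ID_NR  TEXT NOT NULL UNIQUE,
    H_RSRC        TEXT NOT NULL,
    LOAD_AUDIT_ID INTEGER
)
""")

# The link is essentially a many-to-many join table between the two hubs,
# with its own surrogate key and load metadata.
conn.execute("""
CREATE TABLE L_DRIVER (
    L_DRIVER_ID   INTEGER PRIMARY KEY,   -- surrogate key for the link (recommended but optional)
    H_CAR_ID      INTEGER NOT NULL REFERENCES H_CAR (H_CAR_ID),
    H_PERSON_ID   INTEGER NOT NULL REFERENCES H_PERSON (H_PERSON_ID),
    L_RSRC        TEXT NOT NULL,         -- record source of the association when first loaded
    LOAD_AUDIT_ID INTEGER,
    UNIQUE (H_CAR_ID, H_PERSON_ID)       -- one row per unique association
)
""")
```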

Satellites

The hubs and links form the structure of the model, but have no temporal attributes and hold no descriptive attributes. These are stored in separate tables called satellites. These consist of metadata linking them to their parent hub or link, metadata describing the origin of the association and attributes, and a timeline with start and end dates for the attribute. Where the hubs and links provide the structure of the model, the satellites provide the "meat" of the model, the context for the business processes that are captured in hubs and links. These attributes are stored both with regard to the details of the matter and to the timeline, and can range from quite complex (all of the fields describing a client's complete profile) to quite simple (a satellite on a link with only a valid-indicator and a timeline).

Usually the attributes are grouped in satellites by source system. However, descriptive attributes such as size, cost, speed, amount or color can change at different rates, so you can also split these attributes up into different satellites based on their rate of change.

All the tables contain metadata, minimally describing at least the source system and the date on which this entry became valid, giving a complete historical view of the data as it enters the data warehouse.

An effectivity satellite is a satellite built on a link, "and record[s] the time period when the corresponding link records start and end effectivity". [24]

Satellite example

This is an example of a satellite on the driver link between the hubs for cars and persons, called "Driver insurance" (S_DRIVER_INSURANCE). This satellite contains attributes that are specific to the insurance of the relationship between the car and the person driving it, for instance an indicator of whether this is the primary driver, the name of the insurance company for this car and person (which could also be a separate hub) and a summary of the number of accidents involving this combination of vehicle and driver. Also included is a reference to a lookup or reference table called R_RISK_CATEGORY containing the codes for the risk category in which this relationship is deemed to fall.

Fieldname | Description | Mandatory? | Comment
S_DRIVER_INSURANCE_ID | Sequence ID and surrogate key for the satellite on the link | No | Recommended but optional [23]
L_DRIVER_ID | (Surrogate) primary key for the driver link, the parent of the satellite | Yes |
S_SEQ_NR | Ordering or sequence number, to enforce uniqueness if there are several valid satellites for one parent key | No (**) | This can happen if, for instance, you have a hub COURSE and the name of the course is an attribute, but in several different languages
S_LDTS | Load date (start date) for the validity of this combination of attribute values for parent key L_DRIVER_ID | Yes |
S_LEDTS | Load end date (end date) for the validity of this combination of attribute values for parent key L_DRIVER_ID | No |
IND_PRIMARY_DRIVER | Indicator of whether the driver is the primary driver for this car | No (*) |
INSURANCE_COMPANY | The name of the insurance company for this vehicle and this driver | No (*) |
NR_OF_ACCIDENTS | The number of accidents by this driver in this vehicle | No (*) |
R_RISK_CATEGORY_CD | The risk category for the driver; a reference to R_RISK_CATEGORY | No (*) |
S_RSRC | The record source of the information in this satellite when first loaded | Yes |
LOAD_AUDIT_ID | An ID into a table with audit information, such as load time, duration of load, number of lines, etc. | No |

(*) At least one attribute is mandatory.
(**) The sequence number becomes mandatory if it is needed to enforce uniqueness for multiple valid satellites on the same hub or link.
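
Continuing the same SQLite sketch, the satellite below historizes the driver-insurance attributes with a load date and load end date; treating a NULL load end date as "currently valid" is a common convention assumed here, and the optional surrogate key S_DRIVER_INSURANCE_ID is omitted.

```python
# Satellite on the L_DRIVER link, historized per (parent key, sequence number, load date).
conn.execute("""
CREATE TABLE S_DRIVER_INSURANCE (
    L_DRIVER_ID        INTEGER NOT NULL REFERENCES L_DRIVER (L_DRIVER_ID),
    S_SEQ_NR           INTEGER NOT NULL DEFAULT 1,  -- only needed for multi-valued satellites
    S_LDTS             TEXT    NOT NULL,            -- load date (start of validity)
    S_LEDTS            TEXT,                        -- load end date (NULL = currently valid)
    IND_PRIMARY_DRIVER INTEGER,
    INSURANCE_COMPANY  TEXT,
    NR_OF_ACCIDENTS    INTEGER,
    R_RISK_CATEGORY_CD TEXT,                        -- reference code, no physical foreign key
    S_RSRC             TEXT    NOT NULL,            -- record source when first loaded
    PRIMARY KEY (L_DRIVER_ID, S_SEQ_NR, S_LDTS)
)
""")
```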

Reference tables

Reference tables are a normal part of a healthy data vault model. They are there to prevent redundant storage of simple reference data that is referenced a lot. More formally, Dan Linstedt defines reference data as follows:

Any information deemed necessary to resolve descriptions from codes, or to translate keys in to (sic) a consistent manner. Many of these fields are "descriptive" in nature and describe a specific state of the other more important information. As such, reference data lives in separate tables from the raw Data Vault tables. [25]

Reference tables are referenced from Satellites, but never bound with physical foreign keys. There is no prescribed structure for reference tables: use what works best in your specific case, ranging from simple lookup tables to small data vaults or even stars. They can be historical or have no history, but it is recommended that you stick to the natural keys and not create surrogate keys in that case. [26] Normally, data vaults have a lot of reference tables, just like any other Data Warehouse.

Reference example

This is an example of a reference table with risk categories for drivers of vehicles. It can be referenced from any satellite in the data vault. For now we reference it from satellite S_DRIVER_INSURANCE. The reference table is R_RISK_CATEGORY.

Fieldname | Description | Mandatory?
R_RISK_CATEGORY_CD | The code for the risk category | Yes
RISK_CATEGORY_DESC | A description of the risk category | No (*)

(*) At least one attribute is mandatory.
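
Continuing the SQLite sketch, a minimal version of the reference table keeps the natural code as its key and carries no surrogate key; note that the satellite above stores R_RISK_CATEGORY_CD without a declared foreign key, in line with the guidance that reference tables are not bound with physical foreign keys. The sample codes are invented for illustration.

```python
# Simple reference table keyed by the natural risk-category code.
conn.execute("""
CREATE TABLE R_RISK_CATEGORY (
    R_RISK_CATEGORY_CD TEXT PRIMARY KEY,  -- the code for the risk category (natural key)
    RISK_CATEGORY_DESC TEXT               -- description of the risk category
)
""")

conn.executemany(
    "INSERT INTO R_RISK_CATEGORY VALUES (?, ?)",
    [("LOW", "Low-risk driver"), ("HIGH", "High-risk driver")],  # invented sample rows
)
conn.commit()
```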

Loading practices

The ETL for updating a data vault model is fairly straightforward (see Data Vault Series 5 – Loading Practices). First you load all the hubs, creating surrogate IDs for any new business keys. Having done that, you can resolve all business keys to surrogate IDs by querying the hubs. The second step is to resolve the links between hubs and create surrogate IDs for any new associations. At the same time, you can also create all satellites that are attached to hubs, since you can resolve the key to a surrogate ID. Once you have created all the new links with their surrogate keys, you can add the satellites to all the links.

Since the hubs are not joined to each other except through links, you can load all the hubs in parallel. Since links are not attached directly to each other, you can load all the links in parallel as well. Since satellites can be attached only to hubs and links, you can also load these in parallel.
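
The load order and the parallelism described above can be sketched as follows. The loader function and the hub-satellite table names are hypothetical placeholders; a real implementation would typically be generated from the model's metadata.

```python
from concurrent.futures import ThreadPoolExecutor

def load_table(table_name: str) -> None:
    """Hypothetical loader: resolve business keys and insert new rows for one table."""
    print(f"loading {table_name}")

hubs            = ["H_CAR", "H_PERSON"]
links           = ["L_DRIVER"]
hub_satellites  = ["S_CAR_DETAILS", "S_PERSON_DETAILS"]  # assumed satellite names
link_satellites = ["S_DRIVER_INSURANCE"]

with ThreadPoolExecutor() as pool:
    # 1. All hubs can be loaded in parallel: they depend on nothing else.
    list(pool.map(load_table, hubs))
    # 2. Links and hub satellites only need resolved hub keys,
    #    so they can be loaded in parallel with each other.
    list(pool.map(load_table, links + hub_satellites))
    # 3. Link satellites need the link surrogate keys, so they are loaded last.
    list(pool.map(load_table, link_satellites))
```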

The ETL is quite straightforward and lends itself to easy automation or templating. Problems occur only with links relating to other links, because resolving the business keys in the link only leads to another link that has to be resolved as well. Due to the equivalence of this situation with a link to multiple hubs, this difficulty can be avoided by remodeling such cases and this is in fact the recommended practice. [16]

Data is never deleted from the data vault, unless you have a technical error while loading data.

Data vault and dimensional modelling

The data vault modelled layer is normally used to store data. It is not optimised for query performance, nor is it easy to query with the well-known query tools such as Cognos, Oracle Business Intelligence Suite Enterprise Edition, SAP Business Objects, Pentaho et al.[citation needed] Since these end-user computing tools expect or prefer their data to be contained in a dimensional model, a conversion is usually necessary.

For this purpose, the hubs and related satellites on those hubs can be considered as dimensions and the links and related satellites on those links can be viewed as fact tables in a dimensional model. This enables you to quickly prototype a dimensional model out of a data vault model using views.
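
Continuing the SQLite sketch, one way to prototype such a conversion is with plain views: a hub (together with its satellites, omitted here) becomes a dimension-like view, and a link joined to its satellite becomes a fact-like view. The view names and the filter on a NULL load end date are assumptions for illustration.

```python
# Prototype a dimensional layer on top of the data vault tables using views.
conn.executescript("""
CREATE VIEW DIM_CAR AS
SELECT H_CAR_ID, VEHICLE_ID_NR       -- descriptive attributes would come from car satellites
FROM   H_CAR;

CREATE VIEW FACT_DRIVER_INSURANCE AS
SELECT l.L_DRIVER_ID,
       l.H_CAR_ID,
       l.H_PERSON_ID,
       s.IND_PRIMARY_DRIVER,
       s.INSURANCE_COMPANY,
       s.NR_OF_ACCIDENTS,
       s.R_RISK_CATEGORY_CD
FROM   L_DRIVER l
JOIN   S_DRIVER_INSURANCE s ON s.L_DRIVER_ID = l.L_DRIVER_ID
WHERE  s.S_LEDTS IS NULL;            -- keep only the currently valid satellite rows
""")
```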

Note that while it is relatively straightforward to move data from a data vault model to a (cleansed) dimensional model, the reverse is not as easy, given the denormalized nature of the dimensional model's fact tables, which is fundamentally different from the third normal form of the data vault. [27]

Data vault methodology

The data vault methodology is based on SEI/CMMI Level 5 best practices. It includes multiple components of CMMI Level 5, and combines them with best practices from Six Sigma, TQM, and SDLC. In particular, it focuses on Scott Ambler's agile methodology for build-out and deployment. Data vault projects have a short, scope-controlled release cycle and should include a production release every 2 to 3 weeks.

Teams using the data vault methodology should readily adapt to the repeatable, consistent, and measurable projects that are expected at CMMI Level 5. Data that flow through the EDW data vault system will begin to follow the TQM (total quality management) life-cycle that has long been missing from BI (business intelligence) projects.

Tools

Some examples of tools are:[clarification needed]

See also


References

Citations

  1. Super Charge Your Data Warehouse, page 74
  2. The next generation EDW
  3. Building a Scalable Data Warehouse with Data Vault 2.0, p. 6
  4. Super Charge Your Data Warehouse, page 21
  5. Super Charge Your Data Warehouse, page 76
  6. Porsby, Johan. "Rålager istället för ett strukturerat datalager". www.agero.se (in Swedish). Retrieved 2023-02-22.
  7. Porsby, Johan. "Datamodeller för data warehouse". www.agero.se (in Swedish). Retrieved 2023-02-22.
  8. Building a Scalable Data Warehouse with Data Vault 2.0, p. 11
  9. Building a Scalable Data Warehouse with Data Vault 2.0, p. xv
  10. The New Business Supermodel, glossary, page 75
  11. A short intro to #datavault 2.0
  12. Data Vault Series 1 – Data Vault Overview
  13. Data Vault Series 2 – Data Vault Components
  14. Data Vault Series 3 – End Dates and Basic Joins
  15. Data Vault Series 4 – Link tables, paragraph 2.3
  16. Data Vault Series 5 – Loading Practices
  17. Data Warehousing for Dummies, page 83
  18. A short intro to #datavault 2.0
  19. Data Vault 2.0 Being Announced
  20. Super Charge Your Data Warehouse, paragraph 5.20, page 110
  21. Super Charge Your Data Warehouse, page 61, why are business keys important
  22. Data Vault Forum, Standards section, section 3.0 Hub Rules
  23. Data Vault Modeling Specification v1.0.9
  24. Effectivity Satellites – dbtvault
  25. Super Charge Your Data Warehouse, paragraph 8.0, page 146
  26. Super Charge Your Data Warehouse, paragraph 8.0, page 149
  27. Melbournevault, 16 May 2023

Sources

Dutch language sources
  • Ketelaars, M.W.A.M. (2005-11-25). "Datawarehouse-modelleren met Data Vault". Database Magazine (DB/M) (7). Array Publications B.V.: 36–40.
  • Verhagen, K.; Vrijkorte, B. (June 10, 2008). "Relationeel versus Data Vault". Database Magazine (DB/M) (4). Array Publications B.V.: 6–9.

Literature