Published Date: March 1st, 2022
Whether viewed as a concept for organizing data or as an architecture that puts that concept to use, the term data fabric describes an approach to integrating data across all storage and usage environments while applying a common set of protocols, procedures, organization and security controls. The data fabric concept is inextricably linked to other big data concepts, including data lakes, data warehouses, data meshes and even data lakehouses.
A core principle of data fabric architecture is that it is applied across all data structures and data sources in a hybrid multicloud environment, from on-premises to cloud to edge.
The end goal of a data fabric is to make an organization’s data useful to as many people as possible, data scientists and data engineers included, as quickly and as safely as possible. It does this by establishing standardized data management and data governance practices, making data visible and delivering insights to a range of business users, all while maintaining control over the data and ensuring its protection and security.
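To make the governance side of that goal more concrete, here is a minimal sketch of metadata-driven access control, the kind of standardized practice a data fabric is meant to apply wherever data lives. Everything in it (the Sensitivity tiers, the role clearances, the dataset and column names) is a hypothetical illustration, not the API of any particular data fabric product.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical sensitivity tiers a data fabric catalog might assign to columns.
class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    RESTRICTED = 3  # e.g., PII that must be masked for most roles

@dataclass
class ColumnPolicy:
    name: str
    sensitivity: Sensitivity

@dataclass
class DatasetPolicy:
    dataset: str
    columns: list[ColumnPolicy] = field(default_factory=list)

# Illustrative mapping of roles to the highest sensitivity they may read unmasked.
ROLE_CLEARANCE = {
    "analyst": Sensitivity.INTERNAL,
    "data_engineer": Sensitivity.RESTRICTED,
}

def visible_columns(policy: DatasetPolicy, role: str) -> dict[str, bool]:
    """Map each column name to True (readable unmasked) or False (masked)."""
    clearance = ROLE_CLEARANCE.get(role, Sensitivity.PUBLIC)
    return {
        col.name: col.sensitivity.value <= clearance.value
        for col in policy.columns
    }

if __name__ == "__main__":
    orders = DatasetPolicy(
        dataset="sales.orders",
        columns=[
            ColumnPolicy("order_id", Sensitivity.PUBLIC),
            ColumnPolicy("amount", Sensitivity.INTERNAL),
            ColumnPolicy("customer_email", Sensitivity.RESTRICTED),
        ],
    )
    print(visible_columns(orders, "analyst"))
    # {'order_id': True, 'amount': True, 'customer_email': False}
```

The point of the sketch is that the policy is described once, as metadata, and can then be applied consistently no matter which storage system actually holds the columns.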
Noel Yuhanna of Forrester Research is credited with being one of the first to define the idea of a data fabric architecture. Yuhanna refers to data fabric as a platform that helps organizations adopt new business processes faster. According to Yuhanna, data fabric “automates the ingestion, curation, transformation, governance and integration of data across disparate data in real time and near real time.”
The data fabric concept is a step toward moving enterprise data away from central, on-premises databases, decoupling data from physical servers and giving each user the access they need, in the format they need, regardless of where they or the data are located.
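As a rough illustration of that decoupling, the sketch below exposes a single client that resolves a logical dataset name to whichever physical system holds it. The connector classes, catalog contents and dataset names are assumptions made up for the example; in practice a data fabric would typically provide this through a data virtualization or federated query layer rather than hand-written application code.

```python
from abc import ABC, abstractmethod

# Hypothetical connectors; each knows how to fetch rows from one physical location.
class DataSource(ABC):
    @abstractmethod
    def read(self, dataset: str) -> list[dict]:
        ...

class OnPremWarehouse(DataSource):
    def read(self, dataset: str) -> list[dict]:
        # Placeholder: a real connector would query an on-premises database here.
        return [{"source": "on_prem", "dataset": dataset}]

class CloudObjectStore(DataSource):
    def read(self, dataset: str) -> list[dict]:
        # Placeholder: a real connector would read files from cloud object storage here.
        return [{"source": "cloud", "dataset": dataset}]

class DataFabricClient:
    """Single entry point: callers ask for a dataset by logical name only."""

    def __init__(self) -> None:
        # Logical dataset name -> physical source, maintained by the fabric's catalog.
        self._catalog: dict[str, DataSource] = {
            "sales.orders": OnPremWarehouse(),
            "web.clickstream": CloudObjectStore(),
        }

    def read(self, dataset: str) -> list[dict]:
        source = self._catalog.get(dataset)
        if source is None:
            raise KeyError(f"Unknown dataset: {dataset}")
        return source.read(dataset)

if __name__ == "__main__":
    fabric = DataFabricClient()
    # The caller never needs to know which system actually holds the data.
    print(fabric.read("sales.orders"))
    print(fabric.read("web.clickstream"))
```

Because callers work only with logical names such as "sales.orders", data can be moved between on-premises and cloud systems without changing the code that consumes it.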
In the following article, we’ll detail use cases for a data fabric framework, compare it to other data architectures and discuss the benefits it can bring to your organization.