Companies are increasingly relying on analytics to gain insight into their operations. Two of the more prominent conceptual models being discussed today, the "data fabric" and the "data mesh," offer new approaches to these analytics challenges.
Data Fabric is a combination of technologies, including AI and machine learning, that integrates data sources, types, and locations. Gartner describes it as an approach to analytics that utilizes existing, discoverable, and inferred metadata assets to support the design, deployment, and utilization of data across local, edge, and data center environments.1
Data Fabric identifies, connects, cleanses, and enriches real-time data from various applications to discover relationships between data points. It builds a graph that stores interlinked descriptions of data, such as objects, events, situations, and concepts. The best data-fabric-based solutions provide robust visualization tools that make their technical infrastructure easy to interpret. The data fabric affords many advantages to organizations, including minimizing disruptions from switching between cloud vendors and compute resources. It also allows enterprises and their teams to adapt their infrastructure to changing technology needs, connecting infrastructure endpoints regardless of where data resides.
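The interlinked graph described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: the class, node identifiers, and relation names below are all hypothetical, chosen only to show how typed nodes (objects, events, concepts) and edges between them might be recorded and queried.

```python
# Minimal sketch of the metadata graph a data fabric might maintain:
# nodes describe data assets (objects, events, concepts, situations)
# and typed edges record discovered relationships between them.
# All identifiers here are illustrative assumptions.

class MetadataGraph:
    def __init__(self):
        self.nodes = {}   # node id -> descriptive attributes
        self.edges = []   # (source, relation, target) triples

    def add_asset(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def relate(self, source, relation, target):
        self.edges.append((source, relation, target))

    def neighbors(self, node_id):
        """Return assets directly linked from node_id."""
        return [t for s, _r, t in self.edges if s == node_id]

# Link a customer object in one system to an order event in another,
# even though the two live in different environments.
g = MetadataGraph()
g.add_asset("crm.customer", kind="object", location="cloud")
g.add_asset("orders.purchase", kind="event", location="edge")
g.relate("crm.customer", "generates", "orders.purchase")
```

A real fabric would infer such edges from metadata automatically; the point of the sketch is only the shape of the structure: interlinked descriptions of data rather than the data itself.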
Data Mesh takes a slightly different approach, breaking large enterprise data architectures into subsystems usually managed by separate, dedicated teams. Unlike the data fabric, which relies on metadata to drive recommendations for things like data delivery, data meshes leverage the expertise of subject-matter experts who oversee "domains" within the mesh. Domains are independently deployable clusters of related microservices that communicate with users or other domains through different interfaces. A microservices architecture is composed of many loosely coupled, independently deployable services. Teams working within domains treat data as a product, with clean, fresh, and complete data delivered to any data consumer based on permissions and roles. "Data products" are created to serve a specific analytical or operational purpose.
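The "data as a product" idea can be made concrete with a small sketch. Everything below is a hypothetical illustration, not a standard interface: the product class, the role names, and the permission table are assumptions used to show how a domain team might deliver clean data to consumers according to their permissions and roles.

```python
# Hypothetical sketch of a domain-owned "data product": the owning
# team publishes cleaned records and serves them to each consumer
# according to that consumer's role. Roles and fields are assumptions.

SALES_PERMISSIONS = {
    "analyst": {"region", "revenue"},
    "auditor": {"region", "revenue", "customer_id"},
}

class SalesDataProduct:
    def __init__(self, records):
        # The domain team owns data quality: drop incomplete records
        # so consumers only ever see clean data.
        self.records = [r for r in records if r.get("revenue") is not None]

    def read(self, role):
        """Return records filtered to the fields the role may see."""
        allowed = SALES_PERMISSIONS.get(role, set())
        return [{k: v for k, v in r.items() if k in allowed}
                for r in self.records]

raw = [
    {"region": "EMEA", "revenue": 1200, "customer_id": 42},
    {"region": "APAC", "revenue": None, "customer_id": 7},  # incomplete
]
product = SalesDataProduct(raw)
```

Here an "analyst" sees only aggregable fields while an "auditor" also sees identifiers; the domain team, as the subject-matter expert, decides both the cleaning rule and the access rule.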
In the data mesh architecture, each domain is generally responsible for its own data and data products, including data quality and governance. This enables faster and more flexible development and deployment of new data products, as well as easier scaling and adaptation to changing business needs. The decentralized approach of the data mesh also helps to break down organizational silos, as domain teams work collaboratively with each other and with data consumers.
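Domain-level responsibility for quality can be pictured as a gate the owning team runs before publishing a data product. The function, field names, and rule below are illustrative assumptions; in practice each domain would define its own checks.

```python
# Sketch of a domain-owned quality gate: before a dataset is published
# as a data product, the owning team validates it against rules the
# team itself defines. Field names and the rule are assumptions.

def quality_report(rows, required_fields):
    """Summarize how many rows satisfy the domain's completeness rule."""
    complete = [
        r for r in rows
        if all(r.get(f) is not None for f in required_fields)
    ]
    return {
        "total": len(rows),
        "complete": len(complete),
        "passed": len(complete) == len(rows),  # publish only if True
    }

rows = [
    {"order_id": 1, "amount": 9.5},
    {"order_id": 2, "amount": None},  # fails the completeness rule
]
report = quality_report(rows, ["order_id", "amount"])
```

Because the check lives with the domain rather than a central data team, the rule can evolve as quickly as the domain's own understanding of its data.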
One of the key benefits of the data mesh approach is that it allows for greater agility and experimentation in data analytics. Instead of relying on a centralized data team to handle all data-related tasks, the domain teams can rapidly prototype and iterate on new data products, testing them with real users and gathering feedback to improve them. This can lead to faster time-to-market for new data-driven products and services, as well as more accurate and effective decision-making across the organization.
Another advantage of the data mesh architecture is that it can help to improve data quality and consistency across the organization. Because data mesh models encourage a culture of data ownership and responsibility, they can lead to better data governance practices, more accurate and reliable data, and improved trust in data-driven decision-making.
Overall, both data fabric and data mesh models have the potential to transform the way organizations approach data analytics and management. By enabling faster, more flexible, and more collaborative data-driven decision-making, these approaches can help organizations to unlock the full value of their data.