Presented at Data Analytics A/NZ Online Series: Part One
In AI/ML systems, applications or machines are set up to learn from new information and, over time, acquire the intelligence to support day-to-day business processes and decision making. The more data they receive, the more they learn and the more accurate their predictions become. However, the traditional approach of extracting data from multiple sources and replicating it to a central repository is inefficient, and often results in 80% of project time being spent on data acquisition and preparation tasks.
Data virtualization is a modern data integration technique that integrates data in real time, without physically replicating it. It can seamlessly combine views of data from a wide variety of sources and feed AI/ML engines from a common data services layer.
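As a rough illustration of the idea (a minimal sketch, not any particular vendor's API), the Python snippet below defines a virtual view that joins two hypothetical sources at query time rather than copying their data into a central store; the source names and fields are assumptions for the example only:

```python
# Minimal sketch of the data virtualization idea: instead of copying source
# tables into a central repository, a virtual view fetches and joins the
# underlying data only when it is queried. The two in-memory "sources" below
# stand in for separate systems (e.g. a CRM database and a web-analytics API).

from typing import Callable, Dict, List


def crm_source() -> List[Dict]:
    # Hypothetical source 1: customer records from an operational database.
    return [
        {"customer_id": 1, "name": "Acme Ltd", "segment": "enterprise"},
        {"customer_id": 2, "name": "Beta Pty", "segment": "smb"},
    ]


def clickstream_source() -> List[Dict]:
    # Hypothetical source 2: behavioural data from a web-analytics platform.
    return [
        {"customer_id": 1, "sessions_last_30d": 42},
        {"customer_id": 2, "sessions_last_30d": 7},
    ]


def virtual_customer_view(
    left: Callable[[], List[Dict]], right: Callable[[], List[Dict]]
) -> List[Dict]:
    """Join the two sources on customer_id at query time; nothing is replicated."""
    right_by_id = {row["customer_id"]: row for row in right()}
    return [{**row, **right_by_id.get(row["customer_id"], {})} for row in left()]


if __name__ == "__main__":
    # An AI/ML feature pipeline could consume this combined view directly,
    # always seeing the latest data from both underlying systems.
    for record in virtual_customer_view(crm_source, clickstream_source):
        print(record)
```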
Watch this session on-demand to learn:
- How data virtualization delivers data in real time, without replication.
- How data virtualization can be used for data ingestion and manipulation.