Preparing data is painful
Big Data relevant for analysis exists in many formats (structured, semi-structured and unstructured), across various systems (private and public), and refreshes at varying frequencies (static and streaming). It takes many person-weeks of effort just to get the data into a unified, analyzable format.
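To make the pain concrete, here is a minimal sketch of what "unifying" even two small sources involves: flattening semi-structured JSON, reconciling keys and types, and joining it with a structured extract. The sources and column names are hypothetical, simulated inline so the snippet runs on its own; real pipelines add many more sources, schema drift and streaming refreshes on top of this.

```python
import io
import json

import pandas as pd

# Simulated extracts; in practice these would come from separate systems.
csv_extract = io.StringIO(
    "user_id,order_date,amount\n101,2024-01-05,25.0\n102,2024-01-06,40.0\n"
)
json_extract = (
    '[{"user": {"id": "101"}, "event": "click", "ts": "2024-01-05T10:00:00"},'
    ' {"user": {"id": "103"}, "event": "view", "ts": "2024-01-06T11:30:00"}]'
)

# Structured source: tabular CSV from a relational system.
orders = pd.read_csv(csv_extract, parse_dates=["order_date"])

# Semi-structured source: nested JSON events; flatten before it is usable.
events = pd.json_normalize(json.loads(json_extract), sep="_")

# Reconcile key names and types before the two sources can be joined at all.
events["user_id"] = events["user_id"].astype("int64")

# One "unified, analyzable" table -- and this is the easy case.
unified = orders.merge(events, on="user_id", how="left")
print(unified)
```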
Searching for relevant data is expensive
With large and varied data sets, it is impossible to know comprehensively what data exists where. Searching for and selecting the right features for analysis is an iterative, time-consuming and clumsy exercise.
Analytical models are complex and expensive to build
Specialized statistical and Machine Learning models need to be scripted by specialists before they can be applied to data. The cost in time and talent is so high that many problems get only a BI treatment, and BI on Big Data is just not good enough.
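A rough illustration of the hand-scripting involved: every step below (choosing features, encoding, scaling, picking an algorithm, validating) is a decision a specialist makes and codes explicitly. The data and column names are synthetic placeholders, not from the original post; this is a sketch of the effort, not a recommended model.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Tiny synthetic stand-in for a unified table (hypothetical churn example).
df = pd.DataFrame({
    "tenure_months": [3, 24, 7, 36, 12, 48, 5, 30],
    "monthly_spend": [20.0, 75.5, 33.2, 90.1, 41.0, 120.3, 18.4, 66.0],
    "plan_type": ["basic", "pro", "basic", "pro", "basic", "pro", "basic", "pro"],
    "churned": [1, 0, 1, 0, 0, 0, 1, 0],
})
X, y = df.drop(columns="churned"), df["churned"]

# Feature handling is scripted by hand: scale numerics, one-hot encode categoricals.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["tenure_months", "monthly_spend"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan_type"]),
])
model = Pipeline([("prep", preprocess), ("clf", GradientBoostingClassifier())])

# Train/validate split and evaluation are also the specialist's responsibility.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```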
Analytical models are not effective for long
Conventional approaches do not self-learn when data is refreshed. The cost of maintaining and refreshing analytical models gets alarmingly high.
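In other words, a conventional model is a frozen artifact: when the data refreshes, someone has to notice, retrain and redeploy it by hand. The sketch below uses hypothetical names and synthetic data purely to show that manual loop; it is not the post's product or any particular library's refresh API.

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression


def load_refreshed_snapshot():
    """Stand-in for pulling the latest data refresh (synthetic here)."""
    return make_classification(n_samples=200, n_features=5, random_state=7)


def refresh_model(path="churn_model.joblib"):
    """Full manual retrain on the refreshed snapshot -- nothing learns by itself."""
    X_new, y_new = load_refreshed_snapshot()
    model = LogisticRegression(max_iter=1000).fit(X_new, y_new)
    joblib.dump(model, path)  # someone still has to schedule, validate and redeploy this
    return model


refresh_model()
```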
Result
The cost of discovering and solving use cases is exorbitant! It takes months of plodding before a use case is finally solved to meet the exacting requirements of the business.