Machine Learning to Power the Future of Streaming Analytics

14.02.22
~5min

The enormous volume of data that connected devices will generate in the next few years cannot be managed manually; machine learning algorithms will be essential to keeping up.

Our intelligent devices generate more data than ever before. Today's population of IoT devices numbers more than 10 billion worldwide, and by some estimates, there will be more than 25.4 billion devices by 2025, generating an unfathomable 73.1 ZB (zettabytes) of data. It is not humanly possible to track even a minuscule fraction of that incoming telemetry and analyze it to quickly extract needed business intelligence or spot issues and growing trends in real time.
Consider a nationwide fleet of long-haul trucks that needs to meet demanding schedules and can't afford unexpected breakdowns. With today's IoT technologies, fleet managers attempt to track thousands of trucks as they report engine and cargo status parameters and driving behavior to cloud-hosted telematics software every few seconds. Even with these tools, dispatchers and other personnel cannot possibly sift through the flood of incoming messages to identify emerging issues in the moment, make proactive adjustments across the fleet, and intervene to avoid costly downtime or delays.
The burden of tracking incoming telemetry data to immediately identify actionable issues must be automated using streaming analytics software. Although analytics code can be crafted by hand in popular programming languages such as Java and C#, creating algorithms that uncover emerging issues hidden within a telemetry stream is complex at best and daunting at worst. In many cases, the algorithm itself may be unknown because the underlying processes that lead to anomalies and, ultimately, device failures are not well understood.
In cases such as these, the fast-maturing science of machine learning (ML) can come to the rescue. Instead of trying to devise code to analyze complex, poorly understood fluctuations in telemetry, application developers can train an ML algorithm to recognize abnormal patterns by feeding it thousands of historic telemetry messages that have been classified as normal or abnormal. After training and testing, the ML algorithm can be put to work monitoring incoming telemetry and alerting personnel when it observes suspected abnormal behavior. No manual analytics coding is required.
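To make the training step concrete, here is a minimal sketch of supervised classification over labeled telemetry. The feature names, readings, and the nearest-centroid approach are illustrative assumptions, not the article's platform; a real deployment would use an ML library and far more training data.

```python
# Minimal sketch: learn per-class centroids from labeled telemetry,
# then classify new readings by the nearest centroid.

def train_centroids(samples):
    """Compute per-class mean feature vectors.

    samples: list of (features, label) pairs, label "normal" or "abnormal".
    """
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label of the nearest class centroid (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Hypothetical history: (engine_temp_C, oil_pressure_psi) readings.
history = [
    ((90.0, 40.0), "normal"),
    ((92.0, 42.0), "normal"),
    ((88.0, 38.0), "normal"),
    ((118.0, 18.0), "abnormal"),
    ((122.0, 15.0), "abnormal"),
]
model = train_centroids(history)
print(classify(model, (91.0, 41.0)))   # reading near normal history
print(classify(model, (120.0, 16.0)))  # reading near abnormal history
```

Once trained, `classify` stands in for the deployed algorithm: each incoming message is scored, and only abnormal results need human attention.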
Once deployed, the ML algorithm needs to run independently for each data source, examining incoming telemetry within milliseconds after it arrives and then logging abnormal events and/or alerting personnel when required. Building a streaming analytics platform that can do this at scale for thousands of data sources (such as trucks in a fleet) can be challenging. To ensure fast analysis, the telemetry from each data source needs to be routed to its corresponding ML algorithm, and these algorithms need to be mapped to a cluster of servers for simultaneous execution. What's needed is a fast, scalable execution platform that can use ML to track and analyze telemetry from thousands of data sources.
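One common way to route each source's telemetry to a consistent home on a cluster, as described above, is hash partitioning on the source ID. The server names and source IDs below are hypothetical, and production platforms also handle rebalancing and failover, which this sketch omits.

```python
import hashlib

def route(source_id, servers):
    """Map a data source to a server by hashing its ID.

    Every message from the same source lands on the same server, so that
    server can keep the source's ML algorithm and state in memory.
    """
    digest = hashlib.sha256(source_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(servers)
    return servers[index]

servers = ["node-0", "node-1", "node-2"]
# All telemetry from truck-1042 is sent to the same node.
print(route("truck-1042", servers))
```

Because the mapping is deterministic, no central lookup table is needed: any ingest node can compute where a message belongs.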
A software technique called "real-time digital twins" provides a powerful new way to run these ML algorithms in real time and at scale. This technique assigns each physical data source a unique real-time digital twin, a software component that runs on an in-memory computing platform and hosts an ML algorithm (or other analytics code) along with associated state information required to track the data source. A data source can be any IoT device, such as a truck within a fleet or a specific component from it. Thousands of real-time digital twins run together to track incoming telemetry data from their sources and enable highly granular, real-time analysis that assists in timely decision making. In addition, the system can continuously aggregate state information from all real-time digital twins to help personnel maintain situational awareness.
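The structure of a real-time digital twin can be sketched as one in-memory object per data source that holds state and analyzes each incoming message. The fields and the fixed-threshold check below are hypothetical stand-ins for a trained ML model, and a real platform would distribute these twins across a server cluster rather than keep them in one process.

```python
# Minimal sketch of real-time digital twins: per-source state plus
# per-message analysis, with fleet-wide aggregation across all twins.

class TruckTwin:
    def __init__(self, truck_id):
        self.truck_id = truck_id
        self.readings = 0
        self.alerts = 0

    def on_message(self, engine_temp_c):
        """Analyze one telemetry message using this twin's local state."""
        self.readings += 1
        if engine_temp_c > 110.0:  # stand-in for a trained ML classifier
            self.alerts += 1
            return f"ALERT {self.truck_id}: engine temp {engine_temp_c}"
        return None

twins = {}  # one twin per data source, keyed by source ID

def dispatch(truck_id, engine_temp_c):
    twin = twins.setdefault(truck_id, TruckTwin(truck_id))
    return twin.on_message(engine_temp_c)

dispatch("truck-7", 95.0)
print(dispatch("truck-7", 121.0))  # abnormal reading triggers an alert
# Continuous aggregation across all twins for situational awareness:
print(sum(t.alerts for t in twins.values()))
```

The final line illustrates the aggregation mentioned above: because every twin keeps its own counters, fleet-wide summaries can be computed continuously without replaying the telemetry stream.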
In this way, real-time digital twins can harness the power of ML to provide predictive analytics, automatically finding problems that are otherwise difficult for humans to detect. Once an ML algorithm has been trained on historic data classified as normal and abnormal, it can be deployed to run independently in each real-time digital twin, which examines incoming telemetry within milliseconds of arrival and can immediately log abnormal events and send alerts.
Incorporating machine learning into real-time digital twins represents a significant step forward in streaming analytics that unlocks new capabilities and enhances situational awareness for fast, informed decision making. It can also help uncover anomalies in telemetry that likely would otherwise remain undiscovered. This combination of technologies gives operational managers and data professionals better insights than ever before into the torrents of telemetry they must track every day.