Asset Health: Ball Bearing

Case Study


The papermaking process relies on spherical roller bearings. These bearings require proper lubrication and maintenance; neglecting either can result in millions of dollars in damage to assets.

In a nutshell, what Quasar delivered

  • Greatly improved accuracy of predictions and models by capturing the full waveform data and performing spectral analysis
  • A shorter model-to-production loop that empowers data analysts and reliability engineers
  • 1,000X faster queries than a data warehousing solution, while using fewer cloud resources
  • Storage footprint divided by 10
  • Future-proof solution thanks to built-in scalability


The Challenge

Traditionally, technicians must inspect a machine visually and check its behavior to determine whether a bearing needs to be replaced. When an asset breaks down, it can take the technician a long time to identify the root cause.

By collecting vibration data from the sensors, it is possible to identify telltale patterns before a problem occurs. These patterns can also help distinguish between possible root causes.
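One classic way to turn vibration patterns into root-cause indicators is to compute a bearing's characteristic defect frequencies and then look for spectral peaks at them: each frequency points to a different failing component. The sketch below uses the textbook formulas; the geometry values are hypothetical, not taken from this case study.

```python
import math

def bearing_fault_frequencies(shaft_hz, n_balls, ball_d, pitch_d, contact_deg=0.0):
    """Classic defect frequencies (Hz) for a rolling-element bearing.

    A vibration-spectrum peak at one of these frequencies implicates a
    specific component, which helps isolate the root cause of a failure.
    """
    ratio = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    return {
        # Ball Pass Frequency, Outer race: defect on the outer raceway
        "BPFO": (n_balls / 2) * shaft_hz * (1 - ratio),
        # Ball Pass Frequency, Inner race: defect on the inner raceway
        "BPFI": (n_balls / 2) * shaft_hz * (1 + ratio),
        # Ball Spin Frequency: defect on a rolling element itself
        "BSF": (pitch_d / (2 * ball_d)) * shaft_hz * (1 - ratio ** 2),
        # Fundamental Train Frequency: defect in the cage
        "FTF": (shaft_hz / 2) * (1 - ratio),
    }

# Hypothetical bearing: 16 rollers of 20 mm diameter on a 130 mm pitch circle,
# shaft turning at 25 Hz (1,500 RPM)
freqs = bearing_fault_frequencies(shaft_hz=25.0, n_balls=16, ball_d=20.0, pitch_d=130.0)
```

An inner-race defect always shows up at a higher frequency than an outer-race defect because the inner raceway moves with the shaft, so rolling elements pass over its defect more often per revolution.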

The goal is a dashboard giving precise information about the imminence of a problem and its type. This approach also makes it possible to anticipate wear and tear early enough to schedule downtime instead of dealing with a forced interruption.

However, this level of accuracy requires working on the entire waveform of the vibration data and performing a frequency analysis. Complete waveform data typically amounts to several thousand points per second per sensor, resulting in massive data sets.
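A back-of-the-envelope calculation shows why the data sets become massive. The sample rate, sample width, and sensor count below are illustrative assumptions, not figures from the case study:

```python
# Uncompressed data volume for full-waveform capture.
# Assumed values (illustrative only): 25,600 samples/s per sensor,
# 4 bytes per sample, 200 sensors across a mill.
SAMPLE_RATE_HZ = 25_600
BYTES_PER_SAMPLE = 4
SENSORS = 200
SECONDS_PER_DAY = 86_400

bytes_per_day = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SENSORS * SECONDS_PER_DAY
print(f"{bytes_per_day / 1e12:.1f} TB/day uncompressed")  # → 1.8 TB/day uncompressed
```

At roughly a terabyte and a half per mill per day, keeping any meaningful history quickly becomes a storage and query problem.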

The other challenge is that some problems are identified only minutes before they occur, which makes an end-to-end data latency below one minute imperative.

That update latency and write volume disqualify data warehousing technology immediately.

An approach that solves the write volume problem is to store the data in files and then have ad-hoc programs perform the feature extraction. This, however, means that each analysis may have a high CPU time cost, and it requires writing new programs if the feature extraction needs to change.

Finally, these programs must be highly efficient so they do not add to the data latency. If feature extraction takes 15 minutes, the information may arrive too late to act on.

The Solution

Using Quasar, firms can capture the complete waveform data using dedicated connectors. The capture can happen at the edge or via MQTT.

Quasar uses its unique algorithm to compress the data faster than it arrives, and its micro-indexes to reduce future lookup and feature extraction time. The result is significantly reduced disk usage without impacting the ability to query the data.

Each sensor's data is stored in a dedicated time series table. Tables can be tagged at will, enabling a flexible querying mechanism based on those tags: for example, retrieving all the sensors of an asset or of an entire mill.
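Conceptually, tag-based lookup is a mapping from tables to tag sets; a query first resolves a tag to the matching tables, then runs against all of them. The sketch below illustrates the idea only; the table names, tag format, and lookup function are hypothetical, not Quasar's actual API:

```python
# Conceptual sketch (not Quasar's API): one table per sensor, with tags
# grouping tables into assets, mills, or any other hierarchy.
tables = {
    "sensor.press_roll_1.vibration": {"asset=press_roll_1", "mill=helsinki"},
    "sensor.press_roll_2.vibration": {"asset=press_roll_2", "mill=helsinki"},
    "sensor.dryer_1.vibration": {"asset=dryer_1", "mill=tampere"},
}

def find_tables(tag):
    """Return all tables carrying the given tag, e.g. every sensor of a mill."""
    return sorted(name for name, tags in tables.items() if tag in tags)

print(find_tables("mill=helsinki"))
# → ['sensor.press_roll_1.vibration', 'sensor.press_roll_2.vibration']
```

Because tags are attached to tables rather than rows, adding a sensor to an asset or a mill is a metadata change, not a data migration.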

Quasar has built-in feature extraction capabilities that leverage the SIMD capabilities of the processor. In real time, it can perform complex statistical analysis, pivot data, compute FFTs, and much more.
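To make the spectral side of this concrete, the sketch below extracts one such feature, the dominant vibration frequency, from a synthetic signal. It uses a deliberately naive O(n²) DFT so it stays self-contained; it illustrates the kind of server-side feature Quasar computes, not Quasar's implementation:

```python
import cmath
import math

def dominant_frequency(samples, sample_rate_hz):
    """Return the frequency (Hz) of the largest spectral peak.

    Naive O(n^2) DFT, fine for a short analysis window; a production
    system would use an FFT for the same result in O(n log n).
    """
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip the DC bin, stop at Nyquist
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        if abs(s) > best_mag:
            best_bin, best_mag = k, abs(s)
    return best_bin * sample_rate_hz / n

# Synthetic vibration: a pure 120 Hz tone sampled at 1 kHz for half a second.
rate = 1000
signal = [math.sin(2 * math.pi * 120 * t / rate) for t in range(500)]
print(dominant_frequency(signal, rate))  # → 120.0
```

In practice the interesting comparison is between peaks like this one and the bearing's characteristic defect frequencies: a peak that drifts onto one of them is an early warning.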

This means that feature extraction does not require ad-hoc programs and can be tailored by writing a simple SQL query.
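The sketch below illustrates the idea of expressing feature extraction as plain SQL instead of a custom program. It uses SQLite as a stand-in engine, and the table layout is hypothetical, not Quasar's schema or dialect:

```python
import math
import sqlite3

# Stand-in database: one table of timestamped vibration samples
# (a pure 120 Hz tone sampled at 1 kHz for one second).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vibration (ts REAL, value REAL)")
conn.executemany(
    "INSERT INTO vibration VALUES (?, ?)",
    [(t / 1000.0, math.sin(2 * math.pi * 120 * t / 1000.0)) for t in range(1000)],
)

# Two features in one declarative query: peak amplitude and mean square.
# Changing the features means editing SQL, not rewriting an extraction program.
peak, mean_sq = conn.execute(
    "SELECT MAX(ABS(value)), AVG(value * value) FROM vibration"
).fetchone()
rms = math.sqrt(mean_sq)  # root mean square, a standard vibration severity metric
print(f"peak={peak:.3f} rms={rms:.3f}")
```

A rising RMS is a common first-line indicator of bearing wear, so even this two-column query is a useful dashboard input.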


The Benefits

  • Firms can perform advanced data analysis to protect asset health and schedule maintenance, saving millions of dollars per year
  • Capture the entire waveform with no limit on history depth, increasing accuracy and thus potential gains
  • Devising and deploying new strategies is fast because Quasar holds both historical and live data, increasing the productivity of scarce data science resources
  • Extracting features does not require writing custom programs, increasing flexibility
  • Outstanding TCO: feature extraction needs no custom programs and uses less CPU, storage requirements are up to 20X lower than a typical data warehousing solution, and feature extraction is up to 1,000X faster than on a data lake
