Predictronics Co-Founder and CTO Discuss Industrial AI and Its Challenges at PHM 2018 Workshop
OCTOBER 3, 2018
This year’s 10th Annual Conference of the Prognostics and Health Management Society, held September 24th to 27th in Philadelphia, Pennsylvania, saw definite growth: over 300 attendees and a broader range of industries, expanding from a mostly aerospace focus to include automotive and railway transportation, heavy industry, energy, manufacturing, medical technologies, and more.
Predictronics co-founder Professor Jay Lee, Director of the NSF I/UCRC for Intelligent Maintenance Systems at the University of Cincinnati, and CTO Dr. David Siegel hosted an Industrial AI workshop on the first day of the conference with colleagues from NIST, The Hess PHM Group, and Noodle.ai.
Numerous industry powerhouses were in attendance, including GM, Ford, GE, Siemens, Northrop Grumman, and Lockheed Martin.
The group discussed innovation in Industrial AI along with the many triumphs and trials within the field.
Here are the five major challenges that Industrial AI faces in the future, as explored by the panel during the workshop:
- Industrial AI needs better ways to integrate domain knowledge, acquired by individuals, with the data-driven models developed through machine learning. Relying solely on data science strategies and solutions can sometimes produce inaccurate results: historical data is usually unavailable at the beginning of a project, making data visualization and model development difficult, and advanced machine learning techniques usually require supervision.
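One way to picture the integration the panel calls for is a hybrid model: a physics-based baseline supplies the domain knowledge, and a data-driven correction is learned from measurements. The sketch below is purely illustrative; the function names, the 2 °C-per-unit-load rule, and the readings are assumptions, not anything presented at the workshop.

```python
# Hybrid modeling sketch: domain knowledge (a physics formula)
# combined with a data-driven residual correction.
# All numbers and names are illustrative.

def physics_temp(load):
    """Domain knowledge: temperature rises about 2 degC per unit load
    above a 25 degC ambient baseline (assumed for illustration)."""
    return 25.0 + 2.0 * load

def fit_residual(loads, measured):
    """Data-driven part: learn the average gap between the physics
    prediction and the actual sensor readings."""
    residuals = [m - physics_temp(l) for l, m in zip(loads, measured)]
    return sum(residuals) / len(residuals)

def hybrid_temp(load, bias):
    """Physics baseline plus the learned correction."""
    return physics_temp(load) + bias

# Measurements run consistently 1 degC hotter than the physics model,
# so the learned bias is 1.0 and predictions shift accordingly.
bias = fit_residual([1, 2, 3], [28.0, 30.0, 32.0])
```

Even this toy version shows the appeal: when little historical data exists, the physics model carries the prediction, and the data-driven term only has to learn the discrepancy.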
- A few broader technical problems remain, such as ensuring model accuracy and determining how to properly validate analysis models. One suggestion to combat this is to ensure the data used for analysis is clean, meaning the right data is collected from the start and noise is removed.
Another technical challenge is the interpretability problem within AI, whereby AI creators cannot explain how the artificial intelligence reached a particular decision or conclusion. Data-driven models often behave as black boxes, in that their behavior cannot easily be understood by humans.
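The data-cleaning suggestion above can be sketched with a simple robust-statistics filter: flag and drop readings that sit far from the median. The function name, sensor values, and the common 3.5 robust z-score cutoff are illustrative assumptions, not a method described by the panel.

```python
import statistics

def remove_noise(readings, threshold=3.5):
    """Drop readings whose robust z-score, based on the median
    absolute deviation (MAD), exceeds the threshold.

    Minimal data-cleaning sketch; values and cutoff are illustrative.
    """
    med = statistics.median(readings)
    mad = statistics.median(abs(x - med) for x in readings)
    if mad == 0:
        return list(readings)  # no spread to judge outliers against
    # 0.6745 rescales MAD to be comparable to a standard deviation.
    return [x for x in readings if 0.6745 * abs(x - med) / mad <= threshold]

# The 500.0 spike is treated as sensor noise; normal readings survive.
raw = [9.8, 10.1, 10.0, 9.9, 500.0, 10.2, 9.7]
clean = remove_noise(raw)
```

A median-based filter is used here rather than a plain mean/standard-deviation rule because a single large spike can inflate the standard deviation enough to mask itself.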
- Questions still arise about whether prediction models should be reset, since Industrial AI often expands beyond the parameters on which it was initially trained. Will allowing AI to operate without bounds lead to greater achievements and innovation, or to a real-life Terminator situation?
- Labeled data sets are often lacking, so anomaly detection is usually the default approach for model building in order to identify problems. But to achieve better growth and accuracy, labeled examples need to be collected over time to validate the patterns and trends within the model.
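The anomaly-detection default mentioned in this bullet can be illustrated with the simplest possible unlabeled approach: fit a baseline on presumed-healthy data and flag readings that deviate too far from it. The class name, the 3-sigma rule, and the readings below are illustrative assumptions, not the panel's method.

```python
import statistics

class AnomalyDetector:
    """Threshold detector fit on unlabeled baseline data.

    Illustrative sketch only: flags any reading more than
    k standard deviations from the baseline mean.
    """
    def __init__(self, k=3.0):
        self.k = k

    def fit(self, baseline):
        """Estimate normal behavior from unlabeled healthy data."""
        self.mean = statistics.fmean(baseline)
        self.stdev = statistics.stdev(baseline)
        return self

    def is_anomaly(self, x):
        return abs(x - self.mean) > self.k * self.stdev

# Fit on readings assumed healthy; a 14.5 reading stands out.
det = AnomalyDetector().fit([10.0, 10.2, 9.9, 10.1, 9.8, 10.0])
flags = [det.is_anomaly(v) for v in (10.1, 14.5)]
```

This captures the trade-off the panel raised: such a detector needs no labels to get started, but only labeled failure examples collected over time can confirm that what it flags are real problems.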
- Many times, businesses utilizing Industrial AI start a project by determining the specific signals and data needed to solve a problem, then add more sensors as needed to acquire that data. The best approach, however, is to collect as much data as possible first and then extract the value from it, in order to ensure the highest data quality.
For more Predictronics information and updates, visit and follow us: