Experts on Artificial Intelligence: avoid the pitfalls

Artificial Intelligence (AI) is used by an increasing number of industries, but DNV researchers warn against uncritical adoption of the technology: “There could be disastrous consequences if you don't understand the pitfalls and how to avoid them.”

“Today, there are many consulting firms and start-ups claiming they can solve all the world's problems by using AI. We try to take a careful and fact-based approach,” say Martin Høy and Justin Fackrell, data scientists at DNV and among the company's leading experts in AI and machine learning (ML). AI and ML are hot topics: both in business and in society at large, the expectation is that AI technology will be able to do most things faster, better and more efficiently.

Norwegian company DNV is a global leader in certification and the world's largest independent renewable energy consultancy. It has access to enormous amounts of data and is working hard to find new ways to use these data to create value for its customers. Central to this effort is the use of AI and ML. However, according to the two experts, there are many reasons to tread carefully in this landscape. One of them has to do with consequences.

“Artificial intelligence is widely used in marketing and consumer technology. But here the consequences of a bad prediction are often limited,” says Høy. “If an online store suggests an additional product which you are not really interested in, or a video streaming service suggests a movie you have already seen at the cinema, then the consequences are not serious.” “But let's say we use ML to make decisions about maintaining critical systems on an oil platform. If the prediction is wrong, the ultimate consequence may be a loss of value, or even of lives,” says Fackrell.

Connecting AI expertise and domain knowledge

ML and AI can unlock value in data in ways that were not possible before. Huge amounts of data can be analysed in a short time and patterns discovered. Done right, this can automate time-consuming processes and make them more consistent, which can increase corporate earnings and create new revenue streams while simultaneously improving safety. “At the same time, it is crucial that AI experts work closely with professional experts from industry when these new technologies are used to solve industry-specific challenges,” say Høy and Fackrell.

In this respect, DNV has a clear advantage. “DNV is unique because we have specialists both in ML and in specific industries. When these experts work together, we ensure that the models they create are relevant and predictable. That’s extremely important in safety-critical applications,” says Høy. This stands in clear contrast to some stand-alone companies that offer AI services.

“We often see that in cases where a pure AI vendor produces an ML model and sells it to an oil and gas company, two things happen: first, the oil company does not have the necessary know-how to understand the model and explain its decisions; second, the model’s developers do not have the domain expertise to assess whether the criteria used in the model are sensible from a domain perspective. That is where problems arise,” says Høy. “Many of our customers in the health, infrastructure and manufacturing sectors are eager to use AI to improve. They face many of these challenges, and they must understand that it is the combination of modelling know-how and domain expertise that is key,” he continues.

The point argued by the DNV experts is that, in order to use this technology, one must know where the data come from, which model is used, and what can go wrong.

Pitfalls within AI

According to the two experts, the potential benefits of ML are immense. “Machine learning can help increase knowledge within a subject area. If used properly, it can help us discover relationships we were not previously aware of and that are not described by existing theory,” says Høy. But this is not easy to achieve. For example, it was widely reported years ago that the first self-driving cars were ‘probably just around the corner’. Yet after several years of testing, it is still unclear when they will become part of everyday traffic. “This illustrates that, when trying to introduce AI technology in areas where prediction mistakes can have serious consequences, developments will be slower than in other areas,” says Fackrell. For anyone thinking of implementing ML in their business, the two DNV experts warn of two major pitfalls and give advice on how to avoid them:
  • Inadequate data quality
    It may sound banal that the quality of the data you feed into ML plays a pivotal role. But in this context it is extremely important to have sufficient, correct and representative data. Høy and Fackrell explain that almost all ML models are based on learning from historical data, and the quality of any model trained on historical data depends on the quality of the training data. To predict when a ship arrives at port, for example, an ML model must have been trained on relevant, high-quality data on positions, speeds, weather, winds, waves, ocean currents, etc. Good data quality is not just about making sure a sensor measures what you think it does and relays that information in a robust way. It is also very important that the data are representative of the environment where the model is to be used. “Driverless cars are first trained and tested in a controlled environment. Only then should they be allowed out into real traffic. We face exactly the same issue in our work on unmanned ships through the ReVolt autonomous vessel concept. For this to work, the training environment must be as representative of the real traffic situation as possible. In the automotive domain, a car that is only trained in a warm and sunny urban environment will probably not perform well in heavy snow in the mountains.” (A minimal sketch of such a representativeness check follows this list.)

  • Inadequate quality assurance
    ML methods are extremely good at finding patterns in data. But there is a risk that they over-interpret minor variations in the training data that are irrelevant to future data. This phenomenon is called overfitting. “If you one day break a mirror, and something unlucky then happens to you, you might think that it was the broken mirror that caused it. In other words, it’s a detail on which you place too much value,” says Fackrell. “The solution to this problem is to be very careful when testing and quality assuring ML models. By doing so, you can determine whether or not the model has been overfitted, i.e., whether it will actually be useful for making future predictions. For self-driving cars, this means that they must be tested in environments that are not completely identical to the training environment. Only then will it become clear whether the car can be expected to perform in the real world.” (A sketch of this train/test check is the second example below.)
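
To make the representativeness point concrete, here is a minimal sketch of one possible check, written in Python with scipy. The library choice, the ship-speed feature and the synthetic numbers are illustrative assumptions, not DNV's actual tooling: the idea is simply to compare a feature's distribution in the training data with its distribution in the environment where the model is deployed.

```python
# Minimal sketch: checking whether training data are representative of the
# environment where the model will be used. Feature and numbers are invented.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical historical training data vs. data from the target environment,
# e.g. vessel speed in knots. In practice these would come from real logs.
train_speed = rng.normal(loc=14.0, scale=2.0, size=1000)   # calm-weather voyages
deploy_speed = rng.normal(loc=10.0, scale=4.0, size=1000)  # heavy-weather voyages

# A two-sample Kolmogorov-Smirnov test flags features whose distribution in
# deployment differs markedly from what the model saw during training.
stat, p_value = ks_2samp(train_speed, deploy_speed)
if p_value < 0.01:
    print(f"Warning: 'speed' distribution shift detected (KS={stat:.2f}).")
    print("The model may be extrapolating outside its training conditions.")
```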
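
The train/test discipline Fackrell describes can be sketched just as briefly, here with scikit-learn on synthetic data (again an assumed example, not a DNV implementation). A model that scores far better on the data it was trained on than on data held back for testing has over-interpreted its training data:

```python
# Minimal sketch: detecting overfitting by evaluating on held-back data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# An unconstrained decision tree can memorise noise in the training data.
model = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

train_acc = model.score(X_train, y_train)  # typically ~1.0: memorised
test_acc = model.score(X_test, y_test)     # noticeably lower: overfitting
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")

# A large gap between training and test accuracy is the classic sign that
# the model has placed too much value on details of the training data.
```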

Basic concepts in artificial intelligence

  • Artificial Intelligence (AI) and Machine Learning (ML): a family of methods that can find patterns in observations (data).
  • Neural networks: an ML method with many parameters that must be determined during training.
  • Model: the result of applying an ML method to a given training dataset. The model can be used to make predictions.
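
To illustrate the chain these definitions describe (an ML method applied to a training dataset yields a model, which can then make predictions), here is a minimal Python sketch; the use of scikit-learn and the synthetic data are assumptions made purely for illustration:

```python
# Minimal sketch of the method -> model -> prediction chain defined above.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for a training dataset of observations.
X, y = make_regression(n_samples=100, n_features=3, noise=0.1, random_state=0)

# Applying an ML method (here linear regression) to training data yields a model.
model = LinearRegression().fit(X, y)

# The fitted model can then be used to make predictions on new observations.
predictions = model.predict(X[:5])
print(predictions)
```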
