ALKS – Difficulties of Independent Virtual Testing, Part 1

Tom Leggett is the Lead Research Engineer for Automated Driving at Thatcham Research. This is the third blog in a series providing behind-the-scenes insights into his work developing a world-first consumer rating for Automated Driving Systems.

Virtual testing is not new to the automotive industry. In fact, it has been used for many years in vehicle development, allowing rapid prototyping of emerging concepts and ideas. The key benefit is that virtual simulation lets the user change and analyse parameters at the “click of a button”, rather than spending many hours on physical configuration or construction.
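As a rough illustration of what that flexibility looks like in practice, the sketch below sweeps two scenario parameters across a grid of values. The simulator interface (`run_scenario`) and its parameters are hypothetical placeholders, not any real platform's API:

```python
from itertools import product

# Hypothetical simulator interface: run one virtual test and return a
# safety metric (here, the minimum remaining gap to a cut-in vehicle, in metres).
def run_scenario(ego_speed_kph: float, cut_in_gap_m: float) -> float:
    # A real platform would launch a full vehicle and sensor simulation;
    # this placeholder keeps the sketch self-contained.
    return max(0.0, cut_in_gap_m - ego_speed_kph * 0.05)

# Sweeping parameters "at the click of a button" -- each combination would
# otherwise require hours of physical track configuration.
speeds = [60, 80, 100, 120]   # ego vehicle speed, km/h
gaps = [5, 10, 15, 20]        # initial cut-in gap, metres

for speed, gap in product(speeds, gaps):
    result = run_scenario(speed, gap)
    print(f"speed={speed} km/h, gap={gap} m -> min distance {result:.1f} m")
```

Sixteen test configurations run in moments; on a physical track, each one would mean repositioning vehicles, resetting equipment and repeating the run.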

This flexible testing configuration is ideal for an independent assessment programme like the ALKS Consumer Safety Confidence Framework, which is funded by CCAV, co-ordinated by Zenzic and led by Thatcham Research in partnership with WMG, AESIN and CAM Testbed UK. It would allow many scenarios to be tested, in many different configurations, without the need for additional complex and expensive on-track testing.

So, what’s the catch?

Before an independent test facility can conduct these simulations, three things must be in place:

  1. A list of scenarios they wish to test
  2. A simulation platform to execute the tests
  3. A virtual model to accurately represent the vehicle to be tested

The first is fairly straightforward, as finding relevant tests to assess vehicle performance is already well-established practice. For example, accident data can be used to identify the most common and dangerous driving scenarios; tests can then be defined around them using an open-source language such as Scenario Description Language (SDL) and stored in a library like SafetyPool, hosted by WMG, as sketched below.
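SDL itself is defined by WMG, so purely to illustrate the idea, a cut-in scenario derived from accident data might be captured as a structured, parameterised record like this. The field names are hypothetical, not the real SDL schema:

```python
from dataclasses import dataclass, field

# Hypothetical structure for a scenario record -- an illustration of the
# concept only, not the actual SDL schema or SafetyPool format.
@dataclass
class Scenario:
    name: str
    description: str
    road_type: str
    ego_speed_range_kph: tuple        # a parameter range, not a single value
    parameters: dict = field(default_factory=dict)

# A motorway cut-in, one of the most common ALKS-relevant situations
# that accident data might identify.
cut_in = Scenario(
    name="motorway_cut_in",
    description="Adjacent vehicle cuts into the ego lane and decelerates",
    road_type="motorway",
    ego_speed_range_kph=(60, 130),
    parameters={"cut_in_gap_m": (5, 30), "cut_in_decel_ms2": (0, 6)},
)

# A shared library would hold many such records, so that any test lab
# can draw exactly the same scenarios.
scenario_library = {cut_in.name: cut_in}
```

Because each record describes parameter ranges rather than fixed values, a single library entry can generate the many test configurations mentioned above.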

The second sounds easy enough. Surely there are lots of different simulation platforms that could be used? Correct, there are, but that in itself is the problem. 

Vehicle manufacturers run many virtual tests of their own during development, but the specific software they use and how it is configured is a closely guarded secret. Many use custom software with performance indicators tailored to their own aims and objectives. To assess vehicles independently and fairly, we must use exactly the same software and configuration for every manufacturer.
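One way to make “exactly the same software and configuration” a checkable claim rather than a promise is to pin every version and setting in a single record and compare a fingerprint of it across test labs. The configuration fields below are hypothetical, chosen only to show the idea:

```python
import hashlib
import json

# Hypothetical, fully pinned simulation configuration. Any difference in
# platform version, solver step size, sensor model version, etc. would
# change the fingerprint.
config = {
    "platform": "example-sim",        # placeholder platform name
    "platform_version": "4.2.1",
    "physics_step_s": 0.001,
    "sensor_models": {"radar": "v2.3", "camera": "v1.8"},
    "random_seed": 42,
}

# Serialise deterministically (sorted keys) so the hash is reproducible.
fingerprint = hashlib.sha256(
    json.dumps(config, sort_keys=True).encode()
).hexdigest()

print(f"configuration fingerprint: {fingerprint[:16]}...")
# Two labs can only report the same fingerprint if their configurations match.
```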

The third point is the most difficult. As mentioned previously, traditional testing requires the procurement of an “off-the-shelf” vehicle. Competition, however, means that we do not have the luxury of simply acquiring the equivalent virtual model of a vehicle. The protection of intellectual property is an important concern for vehicle manufacturers, and most would not want to risk exposing their vehicle models.

The solution seems simple at the outset: we provide the list of scenarios, and the manufacturers run the simulations themselves. They have already invested heavily in accurate, verified and validated simulation platforms and vehicle models. This way the manufacturers’ precious intellectual property remains safe.
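Under that arrangement, only a scenario definition goes out and only results come back; the vehicle model itself never leaves the manufacturer's site. The records below are a hypothetical sketch of such an exchange, not any defined format:

```python
# Hypothetical exchange: the assessor sends a scenario ID and a parameter
# grid; the manufacturer returns one result per parameter combination,
# plus the metadata needed to interpret them.
request = {
    "scenario_id": "motorway_cut_in",
    "parameters": {"ego_speed_kph": [80, 100, 120], "cut_in_gap_m": [5, 10, 15]},
}

response = {
    "scenario_id": "motorway_cut_in",
    "platform_version": "4.2.1",   # declared by the manufacturer, not inspected
    "results": [
        {"ego_speed_kph": 80, "cut_in_gap_m": 5, "collision": False},
        # ... one record per parameter combination
    ],
}
```

Note that everything in the response is self-declared, which is precisely where the questions below begin.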

But how can we ensure that the results are comparable between manufacturers? How can we trust these results? And how can we ensure that these results are repeatable? Read my next post to dive a little deeper into these questions.

To find out more about the ALKS project, please click here.