Dr Siddartha Khastgir is the Head of Verification & Validation of Connected and Autonomous Vehicles (CAV) at WMG, University of Warwick, UK. His research areas in the CAV domain include test scenario generation, safety, simulation-based testing, and Safe AI, among many others. He has received numerous national and international awards for his research contributions, including the prestigious UKRI Future Leaders Fellowship, a seven-year fellowship focused on the safety evaluation of CAVs, and he was named to the Forbes 30 Under 30 Europe list. He is also the project leader for the ASAM standardisation project OpenODD and an active participant in ASAM, SAE, ISO and UNECE discussions.

In this episode we talked about the verification and validation of autonomous vehicles, including the advantages and challenges of simulation-based testing and how answering one research question raises several new ones. We also talked about low-speed automated driving and about the new standard ISO 22737 “Low-Speed Automated Driving (LSAD) systems”. He was the lead author of that standard, as well as of ISO 34503 “Taxonomy for ODD”, where ODD stands for operational design domain.

Further resources:

  • BSI PAS 1883 - The publicly available standard on how to define an ODD can be found here
  • ISO 22737:2021 - The new standard on low-speed autonomous vehicles can be found here
  • More on OpenODD can be found here
  • Check out Siddartha's website

Either listen here, on Spotify or on the platform of your choice!

Andreas Gerstinger is a system safety expert with experience in various safety-critical domains, primarily in the air traffic control and railway industries. He works at Frequentis and is also a lecturer at the UAS Campus Vienna and the UAS Technikum Vienna.

In this episode, we talked about the Boeing 737 MAX crashes of October 2018 and March 2019, which killed more than 300 people in total. A system that was supposed to stabilize the aircraft's pitch attitude, the Maneuvering Characteristics Augmentation System (MCAS), was identified as the cause. We discussed in detail the circumstances that led to the flawed design of this system and ultimately to a system safety failure.

Here are the documents and sources of additional information addressed in the podcast:

The papers mentioned in the episode can be found here:

Either listen here, on Spotify or on the platform of your choice!

Mathias Lechner is a PhD student and machine learning researcher at the Institute of Science and Technology Austria (IST Austria) in Maria Gugging near Klosterneuburg (north-west of Vienna). His research topics are machine learning, formal methods, and robotics. In this context, he has collaborated with researchers from IST Austria, the Vienna University of Technology, and MIT.

In this episode we talked about which aspects have to be considered so that systems containing neural networks can be used in safety-critical applications. Special focus was placed on a paper from last year: together with Ramin Hasani, he was the lead author of work showing that, in certain autonomous driving situations, a network with only a few neurons can achieve better results than complex neural networks. This small network was inspired by nature, specifically by the nervous system of the roundworm C. elegans.

The papers mentioned in the episode can be found here:

Either listen here, on Spotify or on the platform of your choice!

Michael Schmid is a Technology Architect and Loss Prevention Specialist in the field of autonomous systems. His research focuses on preventing losses related to the use of Artificial Intelligence (AI) and making AI safe for use in everyday technology.

Previously, Michael worked on automation features in cars and on self-driving software, and he developed a certification approach for automated vehicles. Michael has a Master's degree from the Massachusetts Institute of Technology (MIT) and is currently a PhD candidate in the Group for System Safety and Cybersecurity at MIT.

In this episode, Michael provided some insights into his research and explained why we need a systems approach to solve many of today's problems in technology. As an example, Michael and I discussed some of the challenges of autonomous cars, and he outlined a systems-based approach for their certification. Michael gave a quick overview of his current research on making AI-based technology safe and described some of his main ideas. STAMP, a new accident causality model developed by Nancy Leveson, Michael's supervisor at MIT, serves as the basis for his approach.

Additional sources of information:

  • To learn more about Michael, his projects and current work, or to download his Master's thesis on the certification of automated vehicles, visit his webpage: michael.systems
  • For info about STAMP and the next STAMP workshop go to: PSAS website

Either listen here, on Spotify or on the platform of your choice!

Ingo Houben is a Business Development and Account Manager at AdaCore, responsible for the German-speaking region. He has a background in microelectronics and software engineering, with many years of work experience in the EDA, embedded, and automotive industries.

In this podcast we talked about Ada and SPARK. Ada is a programming language that is well suited to safety-critical applications, not least because of the strict requirements placed on validated compilers. SPARK is a restricted variant of Ada with additional rules and contracts; these make it possible to check programs for correctness automatically, as sketched below.
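To make this concrete, here is a minimal SPARK sketch (a hypothetical example for illustration, not one discussed in the episode). The Pre and Post aspects form a contract, and a proof tool such as GNATprove can verify statically that the body can never violate it:

```ada
--  Hypothetical SPARK example: the Pre/Post aspects form a contract
--  that a tool like GNATprove can check without running the program.

package Saturation with SPARK_Mode is

   --  Clamp X into the range Min .. Max.
   function Clamp (X, Min, Max : Integer) return Integer with
     Pre  => Min <= Max,                   --  obligation on the caller
     Post => Clamp'Result in Min .. Max;   --  guarantee proven statically

end Saturation;

package body Saturation with SPARK_Mode is

   function Clamp (X, Min, Max : Integer) return Integer is
   begin
      if X < Min then
         return Min;
      elsif X > Max then
         return Max;
      else
         return X;
      end if;
   end Clamp;

end Saturation;
```

If the body simply returned X unchanged, the postcondition could be violated and the proof would fail; this is the kind of error SPARK catches at analysis time rather than at run time.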

The study mentioned by Ingo can be downloaded here: Controlling Costs with Software Language Choice

If you have any questions, you can contact Ingo via LinkedIn: Ingo Houben - LinkedIn

Either listen here, on Spotify or on the platform of your choice!