Mathias Lechner is a PhD student and machine learning researcher at the Institute of Science and Technology Austria (IST Austria) in Maria Gugging near Klosterneuburg, north-west of Vienna. His research focuses on machine learning, formal methods, and robotics. In this context, he has collaborated with researchers from IST Austria, the Vienna University of Technology, and MIT.
In this episode we talked about what has to be considered before neural networks can be used in safety-critical systems. Special focus was placed on a paper from last year: together with Ramin Hasani, he was lead author of a study showing that in certain autonomous driving scenarios a network with only a few neurons can achieve better results than complex deep neural networks. This small network was inspired by nature.
The papers mentioned in the episode can be found here:
- Neural network with few neurons (Lechner, Hasani et al.): Neural circuit policies enabling auditable autonomy
- Verification of quantized neural networks (Henzinger et al.): Scalable Verification of Quantized Neural Networks (Technical Report)
- Adversarial training – accuracy vs. robustness (Lechner et al.): Adversarial Training is Not Ready for Robot Learning
Listen here, on Spotify, or on the platform of your choice!
Saliency maps are used to visualize the regions of interest of the neural network.
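To make that idea concrete, here is a minimal sketch of a plain gradient-based saliency map in PyTorch. This illustrates only the simple "vanilla gradient" variant, not necessarily the method used in the papers above; the ResNet-18 model and the random input image are placeholder assumptions.

```python
import torch
import torchvision.models as models

# Placeholder model: any differentiable image classifier works here;
# an untrained ResNet-18 is used purely as a stand-in.
model = models.resnet18(weights=None).eval()

# Placeholder input: one random RGB image of size 224x224.
# requires_grad=True lets us backpropagate down to the pixels.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass; pick the top-scoring class.
logits = model(image)
top_class = logits.argmax(dim=1)

# Backpropagate the score of that class to the input pixels.
logits[0, top_class].backward()

# The saliency map is the per-pixel maximum absolute gradient across
# the color channels: large values mark regions to which the network's
# decision is most sensitive.
saliency = image.grad.abs().max(dim=1)[0].squeeze()
print(saliency.shape)  # torch.Size([224, 224])
```

Overlaying `saliency` on the input image then highlights which pixels influenced the prediction most, which is exactly what makes such maps useful for inspecting networks in safety-critical settings.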