The project LeaRNNify is situated at the interface of formal methods and artificial intelligence. Its aim is to bring together two different kinds of algorithmic learning: grammatical inference and the training of neural networks. More precisely, we promote the use of recurrent neural networks (RNNs) in the verification of reactive systems, a task that has so far been reserved for grammatical inference. Conversely, grammatical inference is finding its way into classical machine learning: our second goal is to use automata-learning techniques to enhance the verification, explainability, and interpretability of machine-learning models and, in particular, RNNs.
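To illustrate the automata-learning side, the following is a minimal sketch of an L*-style learner that treats a trained RNN as a black-box teacher answering membership queries, and extracts a deterministic finite automaton as a surrogate model. Everything here is illustrative: `rnn_stub` is a hypothetical stand-in for an RNN classifier (it accepts words with an even number of `a`'s), and equivalence queries are approximated by a bounded-exhaustive check, not the project's actual procedure. The counterexample handling follows the Maler–Pnueli variant (all suffixes of a counterexample become distinguishing experiments), which avoids a separate consistency check.

```python
import itertools

ALPHABET = "ab"

def rnn_stub(word: str) -> bool:
    """Hypothetical stand-in for a trained RNN binary classifier:
    accepts words over {a, b} with an even number of 'a's."""
    return word.count("a") % 2 == 0

def find_counterexample(member, accepts, alphabet, max_len=6):
    """Bounded-exhaustive equivalence check, a simple stand-in for the
    equivalence queries one would pose to an RNN in practice."""
    for n in range(max_len + 1):
        for w in map("".join, itertools.product(alphabet, repeat=n)):
            if member(w) != accepts(w):
                return w
    return None

def learn_dfa(member, alphabet):
    """Minimal L*-style learner: S holds access strings with pairwise
    distinct rows (the DFA states), E holds distinguishing suffixes."""
    S, E = [""], [""]
    row = lambda s: tuple(member(s + e) for e in E)
    while True:
        # Close the table: every one-letter extension must hit a known row.
        changed = True
        while changed:
            changed = False
            rows = {row(s) for s in S}
            for s in list(S):
                for a in alphabet:
                    r = row(s + a)
                    if r not in rows:
                        S.append(s + a)
                        rows.add(r)
                        changed = True
        reps = {row(s): s for s in S}  # one representative per state

        def accepts(w):
            s = ""
            for a in w:
                s = reps[row(s + a)]
            return row(s)[0]           # E[0] == "" encodes acceptance

        cex = find_counterexample(member, accepts, alphabet)
        if cex is None:
            return S, E, accepts
        # Add every suffix of the counterexample as a new experiment.
        for i in range(len(cex) + 1):
            if cex[i:] not in E:
                E.append(cex[i:])

S, E, accepts = learn_dfa(rnn_stub, ALPHABET)
# The surrogate DFA has two states and, e.g., accepts("ab") is False.
```

The extracted automaton can then serve as an interpretable surrogate on which standard model-checking techniques apply, which is the rough shape of the property-directed verification approach pursued in the publications below.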
The project is funded by the Procope programme of Campus France and the German Academic Exchange Service (DAAD).
Lina Ye, Igor Khmelnitsky, Serge Haddad, Benoît Barbot, Benedikt Bollig, Martin Leucker, Daniel Neider, Rajarshi Roy: Analyzing Robustness of Angluin's L* Algorithm in Presence of Noise. Log. Methods Comput. Sci. 20(1) (2024)
Igor Khmelnitsky, Daniel Neider, Rajarshi Roy, Xuan Xie, Benoît Barbot, Benedikt Bollig, Alain Finkel, Serge Haddad, Martin Leucker, Lina Ye: Analysis of recurrent neural networks via property-directed verification of surrogate models. Int. J. Softw. Tools Technol. Transf. 25(3): 341-354 (2023)
Igor Khmelnitsky, Serge Haddad, Lina Ye, Benoît Barbot, Benedikt Bollig, Martin Leucker, Daniel Neider, Rajarshi Roy: Analyzing Robustness of Angluin's L* Algorithm in Presence of Noise. GandALF 2022: 81-96
Benoît Barbot, Benedikt Bollig, Alain Finkel, Serge Haddad, Igor Khmelnitsky, Martin Leucker, Daniel Neider, Rajarshi Roy, Lina Ye: Extracting Context-Free Grammars from Recurrent Neural Networks using Tree-Automata Learning and A* Search. ICGI 2021: 113-129
Igor Khmelnitsky, Daniel Neider, Rajarshi Roy, Xuan Xie, Benoît Barbot, Benedikt Bollig, Alain Finkel, Serge Haddad, Martin Leucker, Lina Ye: Property-Directed Verification and Robustness Certification of Recurrent Neural Networks. ATVA 2021: 364-380