New Challenges for Recurrent Neural Networks and Grammatical Inference
The project LeaRNNify sits at the interface of formal methods and artificial intelligence. Its aim is to bring together two different kinds of algorithmic learning: grammatical inference and the learning of neural networks. More precisely, we promote the use of recurrent neural networks (RNNs) in the verification of reactive systems, a task that has so far been the preserve of grammatical inference. Conversely, grammatical inference is finding its way into classical machine learning: our second goal is to use automata-learning techniques to enhance the verification, explainability, and interpretability of machine-learning models and, in particular, RNNs.
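To illustrate the second direction, automata-learning algorithms such as Angluin's L* treat the RNN as a black box answering membership queries ("does the network accept this word?"). The sketch below is purely illustrative: the toy classifier `rnn_accepts` stands in for a trained network, and the names `MembershipOracle` and `member` are our own, not taken from the project's tools.

```python
# Minimal sketch, assuming a black-box sequence classifier in place of a
# trained RNN; all names here are illustrative, not from the project's code.

def rnn_accepts(word):
    """Toy recurrent classifier over the alphabet {'a', 'b'}: accepts words
    containing an even number of 'a's, updating a hidden state symbol by
    symbol the way an RNN consumes a sequence."""
    state = 0  # hidden state: parity of 'a's seen so far
    for symbol in word:
        if symbol == 'a':
            state ^= 1
    return state == 0

class MembershipOracle:
    """Wraps a black-box classifier as the membership oracle that active
    automata-learning algorithms (e.g. Angluin's L*) query."""
    def __init__(self, classifier):
        self.classifier = classifier
        self.queries = 0  # counting queries: a typical cost measure

    def member(self, word):
        self.queries += 1
        return self.classifier(word)

oracle = MembershipOracle(rnn_accepts)
# An L*-style learner would fill an observation table with such queries,
# e.g. the row for the prefix "a" under the suffixes "", "a", "b":
row = {suffix: oracle.member("a" + suffix) for suffix in ["", "a", "b"]}
```

From rows like `row`, the learner builds a hypothesis automaton that approximates the RNN's language, which can then be model-checked against the property of interest.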
The project is funded by the Procope programme of Campus France and the German Academic Exchange Service (DAAD).
Igor Khmelnitsky, Daniel Neider, Rajarshi Roy, Xuan Xie, Benoît Barbot, Benedikt Bollig, Alain Finkel, Serge Haddad, Martin Leucker, Lina Ye: Property-Directed Verification and Robustness Certification of Recurrent Neural Networks. 19th International Symposium on Automated Technology for Verification and Analysis (ATVA 2021), volume 12971 of LNCS, pages 364--380, 2021, Gold Coast (Online), Australia.
Benoît Barbot, Benedikt Bollig, Alain Finkel, Serge Haddad, Igor Khmelnitsky, Martin Leucker, Daniel Neider, Rajarshi Roy, Lina Ye: Extracting Context-Free Grammars from Recurrent Neural Networks using Tree-Automata Learning and A* Search. 15th International Conference on Grammatical Inference (ICGI 2021), PMLR 153, pages 113--129, 2021, New York City (Online), United States.
2--4 March 2020, ENS Paris-Saclay