Reviewer Nomination
If you’d like to become a reviewer for the workshop, or to recommend someone, please use this form.
Overview
This workshop explores the intersection of scale-driven generative artificial intelligence (AI) and the correctness-focused principles of verification.
Formal analysis tools such as theorem provers, satisfiability solvers, and execution monitors have proven successful at ensuring properties of interest across a range of tasks in software development and mathematics where precise reasoning is necessary. However, these methods face scaling challenges. Recently, generative AI, such as large language models (LLMs), has been explored as a scalable and adaptable way to create solutions in these settings. The effectiveness of AI in these settings increases with more compute and data, but unlike traditional formal methods, it is built on probabilistic techniques rather than correctness by construction.
In the VerifAI: AI Verification in the Wild workshop, we invite papers and discussions on how to bridge these two fields. See our call for papers.
Speakers & Panelists
Organizers
Related Venues
- Deep Learning for Code (DL4C)
- Large Language Models for Code (LLM4Code)
- Workshop on Mathematical Reasoning and AI (MATH-AI)
- Workshop on Formal Verification and Machine Learning (WFVML)
- Symposium on AI Verification (SAIV)
- Workshop on ML for Systems
Contact: verifai.workshop@gmail.com.
Website template from https://mathai2024.github.io/.