(Singapore, April 27/28, 2025)

Reviewer Nomination

If you’d like to become a reviewer for the workshop, or recommend someone, please use this form.

Overview

This workshop explores the intersection of scale-driven generative artificial intelligence (AI) and the correctness-focused principles of verification.

Formal analysis tools such as theorem provers, satisfiability solvers, and execution monitors have demonstrated success in ensuring properties of interest across a range of tasks in software development and mathematics where precise reasoning is necessary. However, these methods face scaling challenges. Recently, generative AI, such as large language models (LLMs), has been explored as a scalable and adaptable way to create solutions in these settings. The effectiveness of these models increases with more compute and data, but unlike traditional formalisms, they are built on probabilistic methods rather than correctness by construction.

In the VerifAI: AI Verification in the Wild workshop, we invite papers and discussions that explore how to bridge these two fields. See our call for papers.


Speakers & Panelists

Sida Wang
FAIR
Shan Lu
UChicago
Kevin Ellis
Cornell
Elizabeth Polgreen
University of Edinburgh
Koushik Sen
Berkeley
Pengcheng Yin
DeepMind

Organizers

Celine Lee
Cornell
Wenting Zhao
Cornell
Ameesh Shah
Berkeley

Contact: verifai.workshop@gmail.com.
Website template from https://mathai2024.github.io/.