L3 Lab projects (click a project for related publications)

Language models for mathematics and code

Our research uses mathematics as a testbed for reasoning and integrates language models with interactive proof assistants such as Lean. Moving forward, we are interested in enabling strong reasoning capabilities, verified code synthesis and verified mathematics, and furthering the science of scaling in these domains.
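As a small illustration (not code from the project itself), the Lean 4 snippet below shows the kind of statement a proof assistant checks mechanically: a proof produced by a language model is either accepted or rejected by the kernel, giving an automatic verification signal.

```lean
-- A small Lean 4 theorem: the proof is checked mechanically by the kernel,
-- so a model-generated proof can be verified automatically.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```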

Iterative improvement

We aim to develop machine learning systems that improve over time. In the context of language, we have viewed this problem as one of interleaving search and learning, which requires new inference-time algorithms and new algorithms for learning from feedback. Moving forward, we are particularly interested in iterative improvement for reasoning and code, along with core algorithmic development.
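To make the interleaving concrete, here is a minimal Python sketch under assumed interfaces: `generate`, `score`, and `update` are hypothetical stand-ins for a model's sampler, a feedback signal (a verifier, unit tests, or a reward model), and a learning step; none of these names come from the project itself.

```python
"""Minimal sketch of interleaving search and learning (hypothetical interfaces)."""
import random

def generate(prompt: str, n: int) -> list[str]:
    # Stand-in sampler: a real system would draw n candidates from a language model.
    return [f"{prompt} -> candidate {i} (quality {random.random():.3f})" for i in range(n)]

def score(candidate: str) -> float:
    # Stand-in feedback: a real system would use a verifier, unit tests, or a reward model.
    return float(candidate.rsplit(" ", 1)[-1].rstrip(")"))

def update(buffer: list[str], best: str) -> list[str]:
    # Stand-in learning step: collect the best candidates, e.g. for later fine-tuning.
    return buffer + [best]

def iterative_improvement(prompt: str, rounds: int = 3, n: int = 8) -> list[str]:
    buffer: list[str] = []
    for _ in range(rounds):
        candidates = generate(prompt, n)   # inference-time search
        best = max(candidates, key=score)  # select using feedback
        buffer = update(buffer, best)      # learn from the selected candidate
    return buffer

if __name__ == "__main__":
    for entry in iterative_improvement("prove: a + b = b + a"):
        print(entry)
```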

Science of neural language models

We analyze neural language models, including formalizing their behavior, studying their ability to generalize, and evaluating their learned distributions. Moving forward, we are broadly interested in basic research related to large language models, scalable evaluation, and efficient methods for generating sequences.
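As one standard way to evaluate a learned distribution (a common metric, not the lab's specific method), perplexity is the exponential of the average negative log-likelihood the model assigns to held-out tokens. A short self-contained sketch, with the per-token probabilities purely hypothetical:

```python
import math

def perplexity(token_log_probs: list[float]) -> float:
    """Perplexity = exp(average negative log-likelihood per token)."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Hypothetical per-token probabilities a model assigned to a short held-out sequence.
log_probs = [math.log(p) for p in (0.5, 0.25, 0.1, 0.4)]
print(f"perplexity = {perplexity(log_probs):.2f}")
```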