Advanced Natural Language Processing / Fall 2025
Advanced Natural Language Processing is an introductory graduate-level course on natural language processing aimed at students who are interested in doing cutting-edge research in the field. In it, we describe fundamental tasks in natural language processing as well as methods to solve these tasks. The course focuses on modern methods using neural networks, and covers the basic modeling, learning, and inference algorithms they require. The class culminates in a project in which students attempt to reimplement and improve upon a research paper on a topic of their choosing.
Course Details
Instructor: Sean Welleck

Teaching Assistants: Joel Mire, Chen Wu, Dareen Alharthi, Neel Bhandari
Logistics
- Class times: TR 2:00pm - 3:20pm
- Room: TEP 1403
- Course identifier: LTI 11-711
- Office hours:
Name | Location | Day | Time
Sean Welleck | GHC 6513 | TBD | TBD
Joel Mire | TBD | TBD | TBD
Chen Wu | TBD | TBD | TBD
Dareen Alharthi | TBD | TBD | TBD
Neel Bhandari | TBD | TBD | TBD
Akshita Gupta | TBD | TBD | TBD
Ashish Marisetty | TBD | TBD | TBD
Manan Sharma | TBD | TBD | TBD
Sanidhya Vijayvargiya | TBD | TBD | TBD
Grading
- The assignments will be given a grade of A+ (100), A (96), A- (92), B+ (88), B (85), B- (82), or below.
- The final grades will be determined based on the weighted average of the quizzes, assignments, and project. Cutoffs for final grades will be approximately 97+ A+, 93+ A, 90+ A-, 87+ B+, 83+ B, 80+ B-, etc., although we reserve some flexibility to change these thresholds slightly.
- Quizzes: Worth 20% of the grade. Your lowest 3 quiz grades will be dropped.
- Assignments: There will be 4 assignments (the final one being the project), worth 15%, 15%, 20%, and 30% of the grade, respectively (a sketch of how these weights combine appears after this list).
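For concreteness, here is a minimal sketch of how the weights above combine into a final score. The function name and example inputs are illustrative only; it simply applies the stated weights (quizzes 20% with the lowest 3 dropped; assignments 15%, 15%, 20%, and 30%).

```python
# Illustrative only: applies the stated weights (quizzes 20%, assignments
# 15% + 15% + 20% + 30%); letter-grade cutoffs are applied separately.
def final_score(quiz_scores, assignment_scores):
    """quiz_scores: all quiz grades (0-100); assignment_scores: [a1, a2, a3, project]."""
    kept = sorted(quiz_scores)[3:]                 # drop the 3 lowest quizzes
    quiz_avg = sum(kept) / len(kept)
    weights = [0.15, 0.15, 0.20, 0.30]             # assignments 1-3 and the final project
    return 0.20 * quiz_avg + sum(w * s for w, s in zip(weights, assignment_scores))

# Example: perfect quizzes, A-range assignments -> roughly an A (93+).
print(final_score([100] * 12, [96, 92, 96, 100]))  # 97.4
```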
Course description
The course covers key algorithmic foundations and applications of advanced natural language processing.
There are no hard pre-requisites for the course, but programming experience in Python and knowledge of probability and linear algebra are expected. It will be helpful if you have used neural networks previously.
Acknowledgements. This semester's course is based on Advanced NLP Spring 2025, which itself was adapted from Advanced NLP Fall 2024, designed and taught by Graham Neubig.
Class format
Lectures: For each class there will be:
- Reading: Most classes will have associated reading material that we recommend you read before the class to familiarize yourself with the topic.
- Lecture and Discussion: There will be a lecture and discussion regarding the class material. This will be recorded and posted online for those who cannot make the in-person class.
- Code/Data Walkthrough: Some classes will involve looking through code or data.
- Quiz: There will be a quiz covering the reading material and/or lecture material that you can fill out on Canvas. The quiz will be released by the end of the day of the class and will be due at the end of the following day.
Schedule
#1 | 08/26/2025 | Lecture | Fundamentals | Introduction & Fundamentals
Main readings: none listed

#2 | 08/28/2025 | Lecture | Fundamentals | Fundamentals: Learned Representations
Main readings: none listed
Additional references:
- (Video) Let's build the GPT Tokenizer (Karpathy 2024)

#3 | 09/02/2025 | Lecture | Fundamentals | Fundamentals: Autoregressive Language Modeling
Main readings: none listed
Additional references:
- A Neural Probabilistic Language Model (Bengio et al 2003)
- Understanding the difficulty of training deep feedforward neural networks (Glorot & Bengio 2010)
- Adam: A Method for Stochastic Optimization (Kingma & Ba 2015)

#4 | 09/04/2025 | Lecture | Architectures | Architectures I: Recurrent Neural Networks
Main readings:
- Natural Language Understanding with Distributed Representation (Ch. 4, Ch. 5.5-5.6, Ch. 6) (Cho 2015)
Additional references:
- Recurrent neural network based language model (Mikolov et al 2010)
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (Cho et al 2014)
- Why LSTMs Stop Your Gradients From Vanishing: A View from the Backwards Pass (Weber 2017)
- Neural Machine Translation by Jointly Learning to Align and Translate (Bahdanau et al 2015)

#5 | 09/09/2025 | Lecture | Architectures | Architectures II: Attention and Transformers
Main readings:
- Attention Is All You Need (Vaswani et al 2017)
- The Annotated Transformer (Rush et al 2018)
Additional references:
- Root Mean Square Layer Normalization (Zhang & Sennrich 2019)
- On Layer Normalization in the Transformer Architecture (Xiong et al 2020)
- RoFormer: Enhanced Transformer with Rotary Position Embedding (Su et al 2021)
- GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints (Ainslie et al 2023)
- (Helpful blog post) Why Are Sines and Cosines Used For Positional Encoding? (Muhammad 2023)

#5 | 09/09/2025 | Assignment Released | Assignment 1 Released

#6 | 09/11/2025 | Lecture | Learning & Inference | Learning I: Pretraining
Main readings:
- Language Models are Unsupervised Multitask Learners (Radford et al 2019)
- The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale (Penedo et al 2024)
Additional references:
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al 2018)
- LLaMA: Open and Efficient Foundation Language Models (Touvron et al 2023)
- OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text (Paster et al 2023)
- Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research (Soldaini et al 2024)
- Scaling Laws for Neural Language Models (Kaplan et al 2020)
- Training Compute-Optimal Large Language Models (Hoffmann et al 2022)
- DeepSeek LLM: Scaling Open-Source Language Models with Longtermism (DeepSeek AI 2024)

#7 | 09/16/2025 | Lecture | Learning & Inference | Learning II/Inference I: In-Context Learning
Main readings:
- Language Models are Few-Shot Learners (Brown et al 2020)
Additional references:
- Prompting Survey (Liu et al 2021)
- Many-Shot In-Context Learning (Agarwal et al 2024)
- Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design (Sclar et al 2023)
- Large Language Models as Optimizers (Yang et al 2023)
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Wei et al 2022)
- DSPy (Khattab et al 2023)

#8 | 09/18/2025 | Lecture | Learning & Inference | Learning III: Fine-tuning and Distillation
Main readings:
- LoRA: Low-Rank Adaptation of Large Language Models (Hu et al 2021)
- Sequence-Level Knowledge Distillation (Kim & Rush 2016)
Additional references:
- Universal Language Model Fine-tuning for Text Classification (Howard & Ruder 2018)
- Cross-Task Generalization via Natural Language Crowdsourcing Instructions (Mishra et al 2021)
- Finetuned Language Models Are Zero-Shot Learners (Wei et al 2021)
- Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks (Wang et al 2022)
- Self-Instruct: Aligning Language Models with Self-Generated Instructions (Wang et al 2023)
- Orca: Progressive Learning from Complex Explanation Traces of GPT-4 (Mukherjee et al 2023)
- Symbolic Knowledge Distillation: from General Language Models to Commonsense Models (West et al 2022)
- QLoRA: Efficient Finetuning of Quantized LLMs (Dettmers et al 2023)

#9 | 09/23/2025 | Lecture | Learning & Inference | Inference II: Decoding Algorithms
Main readings: none listed

#10 | 09/25/2025 | Lecture | Modeling | Modeling I: Retrieval and RAG
Main readings:
- Retrieval-based Language Models and Applications (ACL 2023 Tutorial)

#10 | 09/25/2025 | Assignment Due | Assignment 1 Due

#10 | 09/25/2025 | Assignment Released | Assignment 2 Released

#11 | 09/30/2025 | Lecture | Modeling | Modeling II: Multimodal I
Main readings:
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Dosovitskiy et al 2020)
- Learning Transferable Visual Models From Natural Language Supervision (Radford et al 2021)
Additional references:
- Visual Instruction Tuning (Liu et al 2023)
- Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Vision-Language Models (Deitke et al 2024)

#12 | 10/02/2025 | Lecture | Modeling | Modeling III: Multimodal II
Main readings:
- Neural Discrete Representation Learning (van den Oord et al 2017)
- Taming Transformers for High-Resolution Image Synthesis (Esser et al 2021)

#13 | 10/07/2025 | Lecture | Evaluation and Experimental Design | Evaluation Techniques
Main readings: none listed

#14 | 10/09/2025 | Lecture | Evaluation and Experimental Design | Research Skills and Experimental Design
Main readings: none listed

#14 | 10/09/2025 | Assignment Due | Assignment 2 Due

#14 | 10/09/2025 | Assignment Released | Assignments 3 and 4 Released

#15 | 10/14/2025 | No Class | Fall Break

#16 | 10/16/2025 | No Class | Fall Break

#17 | 10/21/2025 | Lecture | RL and Agents | Reinforcement Learning I: Fundamentals
Main readings:
- Deep Reinforcement Learning: Pong from Pixels (Karpathy 2016)
- Spinning Up in Deep RL (Part 1, Part 3, Vanilla PG, PPO) (OpenAI)
Additional references:
- Proximal Policy Optimization Algorithms (Schulman et al 2017)
- High-Dimensional Continuous Control Using Generalized Advantage Estimation (Schulman et al 2015)

#18 | 10/23/2025 | Lecture | RL and Agents | Reinforcement Learning II: Applications
Main readings:
- Training language models to follow instructions with human feedback (Ouyang et al 2022)
- DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning (DeepSeek-AI 2025)
Additional references:
- AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback (Dubois et al 2023)
- Deep reinforcement learning from human preferences (Christiano et al 2017)
- Fine-Tuning Language Models from Human Preferences (Ziegler et al 2019)
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Rafailov et al 2023)

#19 | 10/28/2025 | Lecture | RL and Agents | Agents
Main readings:
- World of Bits: An Open-Domain Platform for Web-Based Agents (Shi et al 2017)
- WebGPT: Browser-assisted question-answering with human feedback (Nakano et al 2022)
Additional references:
- WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents (Yao et al 2022)
- SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering (Yang et al 2024)
- VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks (Koh et al 2024)
- OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments (Xie et al 2024)
- Programming with Pixels: Computer-Use Meets Software Engineering (Aggarwal & Welleck 2025)

#20 | 10/30/2025 | Project Hours | Course Project | Project Hours / Assignment 3.1 Presentations

#20 | 10/30/2025 | Assignment Due | Assignment 3.1 Due

#21 | 11/04/2025 | No Class | Democracy Day

#22 | 11/06/2025 | Lecture | Advanced Topics | Efficiency: Quantization
Main readings:
- LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale (Dettmers et al 2022)
- QLoRA: Efficient Finetuning of Quantized LLMs (Dettmers et al 2023)
Additional references:
- 8-bit Optimizers via Block-wise Quantization (Dettmers et al 2021)
- The case for 4-bit precision: k-bit Inference Scaling Laws (Dettmers & Zettlemoyer 2022)

#23 | 11/11/2025 | Lecture | Advanced Topics | Scaling: Parallelism and Distributed Training
Main readings:
- The Ultra-Scale Playbook: Training LLMs on GPU Clusters (Tazi et al 2025)

#24 | 11/13/2025 | Lecture | Advanced Architectures | Advanced Architectures I: Long Sequence Models
Main readings:
- Self-attention Does Not Need O(n²) Memory (Rabe & Staats 2021)
- Mamba: Linear-Time Sequence Modeling with Selective State Spaces (Gu & Dao 2023)

#24 | 11/13/2025 | Assignment Due | Assignment 3.2 Due

#25 | 11/18/2025 | Lecture | Advanced Architectures | Advanced Architectures II: Mixture of Experts
Main readings: none listed

#26 | 11/20/2025 | Lecture | Advanced Inference | Advanced Inference: Strategies & Efficiency
Main readings:
- From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models (Sections 4-7) (Welleck et al 2024)
Additional references:
- NeurIPS 2024 LLM Inference Tutorial (Reading List)
- DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning (DeepSeek-AI 2025)
- s1: Simple test-time scaling (Muennighoff et al 2025)
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning (Aggarwal & Welleck 2025)

#27 | 11/25/2025 | Lecture | Advanced Modeling | Advanced Modeling: Diffusion
Main readings: none listed

#28 | 11/27/2025 | No Class | Thanksgiving

#29 | 12/02/2025 | Poster Session | Course Project | Project Posters I

#30 | 12/04/2025 | Poster Session | Course Project | Project Posters II

#30 | 12/09/2025 | Assignment Due | Assignment 4 Due
Assignments
The aim of the assignments is to build the basic understanding and advanced implementation skills needed to build cutting-edge systems or do cutting-edge research using neural networks for NLP, culminating in a final project that demonstrates these abilities.
Read all the instructions on this page carefully
You are responsible for reading these instructions and following them carefully. If you do not, you may be marked down as a result.
Assignment Policies
Working in Teams:
There are 4 assignments in the class. Assignment 1 must be done individually, while Assignments 2, 3, and 4 must be done in teams of 2-3 (individual submissions will not be accepted for these assignments). If you are having trouble finding a group, the instructor and TAs will help you find one after the initial survey.
Submission Information:
To submit your assignment, upload via Canvas a zip file containing the following (a packaging sketch follows this list):
- your code: This should be in a directory “code” in the top directory unless specified otherwise.
- system outputs (assignments 1 and 2): The format will be specified separately for each assignment.
- a report (assignments 2, 3 and 4, optional for assignment 1): This should be named “report.pdf” in the top directory. This is for assignments 2, 3 and 4, and can be up to 7 pages for assignments 2 and 3 and 9 pages for assignment 4. References are not included in the page count, and it is OK to submit appendices that include supplementary information such as hyperparameter settings or additional output examples, although there is no guarantee that the TAs will read them. Submissions that exceed the page count will be penalized one third grade for each page over (e.g., A to A- or A- to B+). You may also submit report.pdf for assignment 1 if you have any interesting information to convey to the TAs, for example, if you did anything interesting above and beyond the minimal requirements.
- a link to a GitHub repository containing your code (assignments 2, 3 and 4): This should be a single line file “github.txt” in the top directory. Your GitHub repository must be viewable to the TAs in charge of the assignment by the submission deadline. If your repository is private, make it accessible to the TAs by the submission deadline. If your repository is not visible to the TAs, your assignment will not be considered complete, so if you are worried, please submit well in advance of the deadline so we can confirm the submission is visible. We use this repository to check contributions of all team members.
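The sketch below shows one way to package a submission that follows the layout above. The zip file name and local directory layout are assumptions; only the in-archive names (the code directory, report.pdf, and github.txt) come from the instructions, and any assignment-specific system output files would need to be added as well.

```python
# A minimal packaging sketch; submission.zip and the local layout are assumptions.
# Only the in-archive names (code/, report.pdf, github.txt) follow the instructions above.
import os
import zipfile

def package_submission(zip_name="submission.zip"):
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        # All code goes under a top-level "code" directory.
        for root, _dirs, files in os.walk("code"):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, arcname=path)
        # Report and GitHub link sit in the top directory of the archive.
        zf.write("report.pdf", arcname="report.pdf")   # optional for assignment 1
        zf.write("github.txt", arcname="github.txt")   # single line with the repo URL
        # Assignment-specific system outputs (assignments 1 and 2) would be added here too.

if __name__ == "__main__":
    package_submission()
```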
Late Day Policy:
In case unforeseen circumstances prevent you from turning in your assignment on time, you are allowed 5 late days in total across assignments 2 and 3. Other than these late days, we will not make exceptions or extend deadlines except for health reasons, so please be frugal with your late days and use them only if necessary. Assignments that are late beyond the allowed late days will be graded down one third-grade per day late (e.g., A to A- for one day, and A to B+ for two days).
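The penalty can be read as one step down a grade ladder per late day. The sketch below is illustrative only; the ladder is taken from the grading scale above, and clamping at B- is an assumption.

```python
# Illustrative only: one third-grade step down per day beyond the allowed late days.
# Ladder taken from the grading scale above; clamping at B- is an assumption.
LADDER = ["A+", "A", "A-", "B+", "B", "B-"]

def late_penalty(grade, days_late):
    idx = LADDER.index(grade) + days_late   # one step per late day
    return LADDER[min(idx, len(LADDER) - 1)]

print(late_penalty("A", 1))  # A-
print(late_penalty("A", 2))  # B+
```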
Plagiarism/Code Reuse Policy:
All assignments are expected to be conducted under the CMU policy for academic integrity. All rules here apply and violations will be subject to penalty including zero credit on the assignment, failing the course, or other disciplinary measures. In particular, in your implementation:
- Code or pseudo-code provided by the TAs or instructor may be used freely without restriction.
- For assignment 2, you may not simply re-use an existing implementation written by someone else; the implementation should be substantially your own.
- Code written by other students in the class cannot be used (except, obviously, you can share code within your group for assignments 2, 3, and 4).
- If you are doing a similar project for a graded class at CMU (including independent studies or directed research), you must declare so on your report, and note which parts of the project are for 11-711, and which parts are for the other class. Consult with the TA mailing list if you are unsure.
Consulting w/ Instructors/TAs:
For assignments and projects, you are free to consult as much as you want, any time you want with the instructors and TAs. That is what we’re here for, and in no way is this considered cheating. In fact, if you don’t have much experience with NLP previously, it will be helpful to liberally consult with the instructors and TAs to learn about how to do the implementation and finish the assignments. So please do so.
Because this is a project-based course, we assume that many of the students taking the course will be interested in turning their assignments or project into research papers. In this case, if you have received useful advice from the instructor or TAs that made the project significantly better, consider inviting them to be co-authors on the paper. Of course, you do not need to do so just because the paper is a result of the class, only if you feel that their advice or help made a contribution.
Details of Each Assignment
- Assignment 1: Build Your Own LLaMa (Individual assignment)
  - Released: Sep 9
  - Due: Sep 25
- Assignment 2: End-to-end NLP System Building (Group assignment)
  - Released: Sep 25
  - Due: Oct 9
- Assignment 3: Project Proposal & State-of-the-art Reimplementation (Group assignment)
  - Assignment 3.1: Literature Review & Project Proposal
    - Released: Oct 9
    - Due: Oct 30
  - Assignment 3.2: Baseline Reproduction
    - Released: Oct 9
    - Due: Nov 13
- Assignment 4: Final Project (Group assignment)
  - Released: Oct 9
  - Due: Dec 9
Details to be provided later.
Poster Presentation
Time/Location
- Time: 2:00PM-3:20PM, December 2nd, 2025 and December 4th, 2025
- Location: Hallway below LTI (GHC4400)
Goals and Grading
The intention of the poster session is threefold:
- That you share your preliminary results with the TAs and instructor so we can give feedback to make any last adjustments to improve your final project report.
- That you can see the other projects in the class to learn from them and get any ideas that may improve your final project report.
- That you can practice explaining the work that you did.
What information should be included in a poster? It should be mostly:
- What is the problem you’re solving
- What is your method for solving that problem
- What are the results