This course is a graduate-level research course covering advanced topics in NLP, introducing state-of-the-art methods for computational understanding, analysis, and generation of natural language text. In recent years, advances in deep learning models for NLP have transformed the ability of computers to converse with humans using language, giving us multilingual, multimodal models that are capable of answering questions, composing messages, and translating and summarizing documents. Large language models (LLMs) are built on top of neural architectures such as transformers, which allow models to be scaled up and trained with large amounts of data. In this course, we will focus on current state-of-the-art methods in NLP, including parameter-efficient fine-tuning and techniques for scaling models to long sequences. We will also go beyond transformers to study alternative architectures such as state-space models.
Students are expected to have prior experience with deep learning concepts and frameworks (PyTorch, TensorFlow, etc.), and should also be familiar with basic natural language processing.
Each week, students will read papers in a particular area of natural language processing, and discuss the contributions, limitations and interconnections between the papers. Students will also work on a research project during the course, culminating in a final presentation and written report. The course aims to provide practical experience in comprehending, analyzing and synthesizing research in natural language processing and understanding.
Note: This course is NOT an introductory course to natural language processing. If you are interested in taking an introductory course on natural language processing, please take CMPT 413/713.
There are no formal prerequisites for this class. However, you are expected to be familiar with deep learning concepts and frameworks, as well as basic natural language processing (see above).
Lectures: Mondays and Wednesdays 3:30PM - 4:50PM (WMC3510).
Below is a tentative outline for the course.
R: Readings; BG: (Optional) background material / reading for deeper understanding, provided for reference.
| Date | Topic | Notes |
|---|---|---|
| Jan 5 | Introduction, NLP review & logistics | BG: Stanford CS324 intro to LLMs |
| Jan 7 | Transformers and LLMs | BG: Attention is all you need |
| Jan 12 | Training LLMs (pre-training) | R: LLaMA, R: InstructGPT |
| Jan 14 | Training LLMs (post-training) | BG: Instruction tuning, BG: Post-training, R: DeepSeek-R1 |
| Jan 19 | Prompting, decoding and inference | |
| Jan 21 | Efficient LLMs: Pruning, quantization, distillation | BG: Inference optimization (Lilian Weng), BG: Efficient Transformers, BG: UniMem, LolCats |
| Jan 26 | Retrieval | |
| Jan 28 | Reasoning | R: DeepSeekMath-V2, Agent0 |
| Feb 2 | Evaluation and benchmarking | |
| Feb 4 | Model analysis and interpretability | |
| Feb 9 | Project proposals | |
| Feb 11 | Project proposals | Due: Project proposal |
| Feb 16 | No class - Reading break | |
| Feb 18 | No class - Reading break | |
| Feb 23 | State space models | R: Mamba, Hyena |
| Feb 25 | Token-free models | |
| Mar 2 | Multi-modal models I | |
| Mar 4 | Multi-modal models II | |
| Mar 9 | Project milestone presentations | |
| Mar 11 | Project milestone presentations | Due: Project milestone |
| Mar 16 | LLM agents | |
| Mar 18 | VLA models | |
| Mar 23 | Guest lecturer | |
| Mar 25 | Guest lecturer | |
| Mar 30 | ||
| Apr 1 | ||
| Apr 6 | Project presentations | |
| Apr 8 | Project presentations + Conclusion | |
| Apr 9 | | Due: Project final report |
SFU’s Academic Integrity web site provides information on what is meant by academic dishonesty, where you can find resources to help with your studies, and the consequences of cheating. Check out the site for more information and videos that help explain the issues in plain English.
Each student is responsible for his or her conduct as it affects the University community. Academic dishonesty, in whatever form, is ultimately destructive of the values of the University. Furthermore, it is unfair and discouraging to the majority of students who pursue their studies honestly. Scholarly integrity is required of all members of the University. Please refer to this web site.