Skip to main content

Enrolment options

Course image
Course summary text:

Welcome to Robust Large Language Models: From Security to Factuality

Description: 

The course ...

Computer Science and Engineering (2026)
Introduction:

Welcome to Robust Large Language Models: From Security to Factuality

Description: 

The course will focus on two central topics related to robustness in LLMs: Security and Factuality

Security
The course will cover methods for detecting, preventing, and mitigating vulnerabilities in LLM-based systems. Although LLMs are trained with safety and helpfulness in mind, protecting them from adversarial manipulation and misuse remains a pressing challenge. Threats may range from straightforward prompt injections to more elaborate, multi-stage exploits targeting system instructions, fine-tuning data, or connected applications. Addressing security in LLMs requires multiple layers of defense, including careful safeguard design, thorough evaluation, and ongoing monitoring. Participants will learn core principles of LLM security, examine state-of-the-art defense and testing approaches, and work with structured evaluation protocols, including automated red-teaming for multi-turn dialogue scenarios. By the end, they will be able to design, assess, and strengthen LLM applications against real-world threats.

Factuality

The course will also focus on Uncertainty Quantification (UQ), a key approach for improving the reliability of LLM outputs. UQ is increasingly important in NLP for reducing hallucinations, identifying weak or erroneous responses, detecting out-of-distribution inputs, and optimising latency. While UQ is well studied in classification tasks, adapting it to LLMs is significantly more challenging due to the sequential and interdependent nature of generated text, where different tokens contribute unequally to meaning. Participants will gain an understanding of the main concepts in UQ for LLMs, survey current research and techniques, explore applications in diverse settings, and acquire practical skills for designing new UQ strategies to enhance factuality and trustworthiness in LLM-driven systems. 

The course will further touch upon topics such as memorization, evaluation, hallucination.

Prerequisites: Fundamental knowledge of and experience with Large Language Models, e.g. using frameworks such as Huggingface.

Learning objectives
Identify and mitigate vulnerabilities in LLM-based systems, including prompt injection, data poisoning, and integration exploits, using systematic evaluation and defense techniques.
Apply uncertainty quantification (UQ) methods to assess and improve the factuality and reliability of LLM outputs across different application scenarios.
Design and evaluate robust LLM applications by integrating security safeguards, factuality checks, and continuous monitoring into real-world deployment pipelines.

Organizer: Johannes Bjerva

Lecturers: 

ECTS: 2

Time: 8, 9, 10, 11, and 12 June, 2026

Place: Aalborg University

City: Copenhagen

Maximal number of participants: 50

Deadline: 18 May 2026


Important information concerning PhD courses: 

There is a no-show fee of DKK 3,000 for each course where the student does not show up. Cancellations are accepted no later than 2 weeks before the start of the course. Registered illness is of course an acceptable reason for not showing up on those days. Furthermore, all courses open for registration approximately four months before start of the course.

We cannot ensure any seats before the deadline for enrolment, all participants will be informed after the deadline, approximately 3 weeks before the start of the course.

For inquiries regarding registration, cancellation or waiting list, please contact the PhD administration at phdcourses@adm.aau.dk When contacting us please state the course title and course period. Thank you.


Year: 2026
ECTS points: 2
Open in new window