Registration will open in October and will be free for students; non-students will pay a small fee (at most $200).

Wednesday November 26th

Tutorial, 2pm - 5pm

Title: Alignment of Large Language Models with Human Preferences and Values

Location: PNR Learning Studio 310

Presenters: Usman Naseem, Gautam Siddharth Kashyap, Kaixuan Ren, Iran Zhang, Utsav Maskey, Juan (Ada) Ren (Macquarie University)

Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities, yet their reliability and alignment with human expectations remain unresolved challenges. This tutorial introduces the foundations of alignment and provides participants with a conceptual and practical understanding of the field. Core principles such as values, safety, reasoning, and pluralism will be presented through intuitive explanations, worked examples, and case studies. The aim is to equip attendees with the ability to reason about alignment goals, understand how existing methods operate in practice, and critically evaluate their strengths and limitations.

Prerequisite: A working knowledge of basic machine learning and NLP methods will help participants follow the technical material, but the tutorial is structured to remain accessible to those without prior alignment experience.

Content Outline:

  1. Foundations of Alignment
  2. Value and Preference Alignment for Helpfulness
  3. Safety and Honesty Alignment
  4. Pluralistic and Cross-Modal Alignment

Evening Event

Details coming soon!

Thursday November 27th

Main conference

Location: PNR Lecture Theatre (2) 302

Social event

Friday November 28th

Main conference

Location: PNR Lecture Theatre (2) 302