Barcelona, Spain

Relying on Hallucinations: The Linguistics Behind Human-AI Interactions

When: 22 July 2024 - 26 July 2024
Language: English
Duration: 1 week
Credits: 2 EC
Fee: EUR 150

ChatGPT frequently produces false information: output that appears plausible but is not factual. Such outputs are known as ‘hallucinations’. The reason is that large language models (LLMs) are trained to predict strings of words rather than to serve as repositories of ‘facts’. Crucially, an AI does not “know” whether its output is true. Nevertheless, AI tools are increasingly used to provide “information” in professional and private settings. Why are we inclined to rely on this unreliable source?
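To make "predicting strings of words" concrete, here is a minimal sketch (not part of the course materials) using a toy bigram model; the corpus and function names are invented for illustration. A true and a false continuation receive probabilities in exactly the same way, so nothing in the mechanism tracks factuality.

```python
# Toy illustration (assumption: a bigram model stands in for an LLM).
# The model only counts which word tends to follow which; it has no
# notion of whether a continuation is factually correct.
from collections import Counter, defaultdict

corpus = ("the capital of france is paris . "
          "the capital of france is lyon .").split()

# Count bigram frequencies: how often each word follows the previous one.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(prev):
    """Probability of each next word, based purely on frequency."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# The true continuation ("paris") and the false one ("lyon") are scored
# identically; the mechanism cannot distinguish fact from hallucination.
print(next_word_probs("is"))  # {'paris': 0.5, 'lyon': 0.5}
```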

In this course we explore this question from a linguistic angle. We compare the logic and architecture behind LLMs (which underlie AI tools) with the logic and architecture behind human cognition (including the capacity for language). At the root of our "trust" in AI tools is their apparently flawless language output: it invites anthropomorphization, which in turn leads users to expect that these tools follow the same conversational principles as humans do.

Course leader

Martina Wiltschko

Target group

Undergraduate students (sophomores, juniors, and seniors)

Fee info

EUR 150: Registration fee (non-refundable)
EUR 550: Tuition fee (non-refundable)

Scholarships

Non-UPF students from universities with an international exchange partnership with UPF will be exempt from paying the EUR 150 non-refundable registration fee.