26 July 2024
Relying on Hallucinations: The Linguistics Behind Human-AI Interactions
ChatGPT frequently produces false information: output that appears plausible but is not factual. Such outputs are known as ‘hallucinations’. They arise because large language models (LLMs) are trained to predict strings of words, rather than to serve as a repository of ‘facts’. Crucially, an AI does not “know” whether its output is true. Nevertheless, AI tools are increasingly used to provide “information” in professional and private settings. Why are we inclined to rely on this unreliable source?
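The idea that a language model predicts likely word strings without any notion of truth can be illustrated with a deliberately minimal sketch: a toy bigram predictor trained on a made-up three-sentence corpus (all names and data below are invented for illustration; real LLMs are vastly more complex, but the core objective is the same).

```python
# Toy illustration (not a real LLM): a bigram model that, like an LLM,
# picks the statistically most likely next word -- with no notion of truth.
from collections import Counter, defaultdict

# Invented training text: the model only ever sees word sequences.
corpus = ("the capital of france is paris . "
          "the capital of france is lyon . "
          "the capital of france is paris .").split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

# The model outputs 'paris' only because it was the most frequent
# continuation -- frequency, not factuality, drives the prediction.
print(predict("is"))
```

If the training text had contained "lyon" more often, the same code would confidently output that instead: plausibility here is purely statistical, which is exactly why fluent output can still be false.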
In this course we explore this question from a linguistic angle. We compare the logic and architecture of LLMs (which underlie AI tools) with the logic and architecture of human cognition (including the capacity for language). At the root of our "trust" in AI tools is their apparently flawless language output, which invites anthropomorphization, which in turn leads users to expect that these tools follow the same conversational principles humans do.
Undergraduate students (Sophomore, Junior and Senior)
EUR 150: Registration fee (non-refundable)
EUR 550: Tuition fee (non-refundable)
Non-UPF students from universities with an international exchange partnership with UPF are exempt from the non-refundable EUR 150 registration fee.