Nuremberg, Germany
AI-based Early Stage Planning for the AEC-Industry
When:
21 July - 01 August 2025
Credits:
0.0 EC
Artificial Intelligence
When:
22 July - 26 July 2024
School:
Barcelona International Summer School
Institution:
Universitat Pompeu Fabra
City:
Barcelona
Country:
Spain
Language:
English
Credits:
2.0 EC
Fee:
150 EUR
ChatGPT frequently produces false information: output that appears plausible but is not factual, a phenomenon known as ‘hallucination’. The reason is that large language models (LLMs) are trained to predict strings of words rather than to act as a repository of ‘facts’. Crucially, an AI does not “know” whether its output is true. Nevertheless, AI tools are increasingly used to provide “information” in professional and private settings. Why are we inclined to rely on such an unreliable source?
In this course we explore this question from a linguistic angle. We compare the logic and architecture behind LLMs (which underlie AI tools) with the logic and architecture behind human cognition, including the capacity for language. At the root of our "trust" in AI tools is their apparently flawless language output, which invites anthropomorphization and in turn leads users to expect that such tools follow the same conversational principles as humans do.
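The point that these models predict words rather than retrieve facts can be made concrete with a few lines of code. Below is a minimal sketch, assuming the Hugging Face transformers library and the publicly available gpt2 checkpoint (illustrative choices, not part of the course materials): it scores candidate next tokens for a prompt, and nothing in the procedure consults a source of truth.

```python
# Minimal sketch of next-token prediction, assuming the Hugging Face
# "transformers" library and the public "gpt2" checkpoint. The prompt is
# purely illustrative. The model only ranks likely continuations of the
# text; no step in this loop checks whether a continuation is factual.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]   # scores for the next token
    probs = torch.softmax(logits, dim=-1)     # scores -> probabilities
    top = torch.topk(probs, k=5)

# The ranking reflects how plausible each continuation was in the training
# data, not whether it is true; a frequent wrong answer can outrank a correct one.
for p, tok_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(tok_id)!r}: {p.item():.3f}")
```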
Martina Wiltschko
Undergraduate students (sophomore, junior, and senior)
Fee:
150 EUR, Registration fee (non-refundable)
Fee:
550 EUR, Tuition fee (non-refundable)
Oxford, United Kingdom
When:
30 June - 18 July 2025
Credits:
7.5 EC
Brussels, Belgium
When:
10 June - 14 June 2025
Credits:
0.0 EC