
How computers manage to handle language on their own

Powerful language models have become available to the general public worldwide, making the success of computational linguistics evident: computer architectures have been developed that are capable of handling natural human language intuitively. How these systems behave is hardly comprehensible in detail, but the results they deliver are all the more impressive: assistance systems equipped with these language models can be controlled and generate output as if they had actually mastered language themselves – and their potential applications go far beyond the sensational chatbots that dominate public debate. However, GPT and related systems have by no means emerged out of nowhere. They are the result of a persistent learning process: early attempts to capture language in algorithms failed in the mid-twentieth century not only because resources were lacking or because crucial machine-learning methods had not yet been developed, but also because the theories for abstractly grasping and systematising the meaning level of human language were insufficient. So what do the developers of modern systems do differently from the pioneers of computational linguistics?

Chris Biemann is Professor of Language Technology at the University of Hamburg, where he heads the Language Technology Group and the House of Computing and Data Science. In this episode of Digitalgespräch, he provides deep insights into how modern language models are built and how they work, and explains the linguistic theories that come into play. He makes it comprehensible why these systems deliver such impressive results and describes what they can be used for in scientific contexts. With hosts Marlene Görger and Petra Gehring, Biemann discusses what resources go into the development of such systems, what happens when language models are trained on nearly the entire internet – and what tasks computational linguists face now that they seem to have achieved their great goal.

Episode 36 of Digitalgespräch, feat. Chris Biemann of Universität Hamburg, 16 May 2023
Further information:

Link to the textbook “Wissensrohstoff Text – eine Einführung in das Text Mining” by Chris Biemann, Gerhard Heyer and Uwe Quasthoff: https://link.springer.com/book/10.1007/978-3-658-35969-0
Link to the website of the House of Computing and Data Science: https://www.hcds.uni-hamburg.de/hcds.html

All episodes of Digitalgespräch »

Creative Commons license agreement

The podcast is in German. At the moment there is no English version or transcript available.