[Katarína Hudcovicová]: Identifying the Limits of Transformers when Performing Model-Checking with Natural Language (11 April 2024)
Abstract
Previous work on natural language inference has examined how well transformer models can reason over text. What remained unaddressed, however, is whether they actually understand the logical semantics of natural language. The main difficulty is that the logical problems studied previously vary in computational complexity depending on their structure, so it is unclear whether lower performance stems from this difference in complexity or from an inability to comprehend the logical semantics of natural language. The authors therefore chose the model-checking problem, whose computational complexity is always in PTIME. The results suggest that the form and type of language used significantly affect how well transformer models perform: they can grasp some logical meanings in natural language but still fall short of learning the underlying algorithm for model checking.
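To make the setting concrete: in the propositional case, model checking asks whether a formula is true under a given truth assignment (the "model"), which can be decided in time linear in the formula size, hence the PTIME guarantee. Below is a minimal sketch of such a checker; the class and function names are illustrative, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Dict, Union

# Minimal propositional formulas: variables, negation, and binary connectives.
@dataclass
class Var:
    name: str

@dataclass
class Not:
    sub: "Formula"

@dataclass
class BinOp:
    op: str          # "and" or "or"
    left: "Formula"
    right: "Formula"

Formula = Union[Var, Not, BinOp]

def check(formula: Formula, model: Dict[str, bool]) -> bool:
    """Evaluate a formula against a model (a truth assignment).

    Each subformula is visited once, so the running time is linear
    in the size of the formula -- well within PTIME.
    """
    if isinstance(formula, Var):
        return model[formula.name]
    if isinstance(formula, Not):
        return not check(formula.sub, model)
    if formula.op == "and":
        return check(formula.left, model) and check(formula.right, model)
    return check(formula.left, model) or check(formula.right, model)

# Example: is (p and not q) true in the model {p: True, q: False}?
phi = BinOp("and", Var("p"), Not(Var("q")))
print(check(phi, {"p": True, "q": False}))  # True
```

The paper's experiments probe whether transformers can learn this kind of evaluation procedure when both the formula and the model are expressed in natural language rather than in symbolic form.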
Slides
Lecture Recordings
Readings
Catering
Surprise.