July 7 – August 8, 2025, Eugene, OR

The courses below will take place in the first half-term, July 7 – 22, the second half-term, July 24 – August 8, or the whole term, July 7 – August 8. Each class will meet for 80 minutes twice a week, either on Monday & Thursday or on Tuesday & Friday. Wednesdays will host tutorials, workshops, and round tables. Weekends will host co-located conferences.

We are excited to have preliminary commitments from the following faculty:

Carolina P. Amador-Moreno (Extremadura), Jenny Audring (Leiden), Harald Baayen (Tübingen), Matthew Baerman (Surrey), Danielle Barth (Australian National), Jóhanna Barðdal (Ghent), Gašper Beguš (UC Berkeley), Henrik Bergqvist (Gothenburg), Balthasar Bickel (Zurich), Idan Blank (UCLA), Alice Blumenthal-Dramé (Freiburg), Paul Boersma (Amsterdam), Laurel Brehm (UC Santa Barbara), Canaan Breiss (Southern California), Esther Brown (Colorado), Josh R. Brown (U Wisconsin-Eau Claire), Lucien Brown (Monash), Aoju Chen (Utrecht), Maria Copot (The Ohio State), Sonia Cristofaro (Sorbonne), Kathleen Currie Hall (British Columbia), Don Daniels (Oregon), Scott DeLancey (Oregon), Christian DiCanio (Buffalo), Dagmar Divjak (Birmingham), Robin Dodsworth (North Carolina State), Jordan Douglas-Tavani (UCSB), Jonathan Dunn (Illinois), Nick C. Ellis (Michigan), Mirjam Ernestus (Radboud), Sara Finley (Pacific Lutheran), Suzana Fong (Newfoundland), Elaine Francis (Purdue), Richard Futrell (UC Irvine), Thanasis Georgakopoulos (Aristotle U of Thessaloniki), Spike Gildea (Oregon), Adele Goldberg (Princeton), Simon Greenhill (Auckland), Stefan Th. Gries (UC Santa Barbara), Zenzi M. 
Griffin (Texas), Carolina Grzech (Pompeu Fabra), Zara Harmon (MPI for Psycholinguistics), Gaja Jarosz (UMass Amherst), Masoud Jasbi (UC Davis), Vsevolod Kapatsinski (Oregon), Seppo Kittilä (Helsinki), Linda Konnerth (Bern), Maria Koptjevskaja Tamm (Stockholm), Bernd Kortmann (Freiburg), Chigusa Kurumada (Rochester), Natalia Levshina (Radboud), Ryan Lepic (Gallaudet), Jiayi Lu (UPenn), Maryellen MacDonald (Wisconsin), Alec Marantz (NYU), Dimitrios Meletis (Vienna), Stephan Meylan (MIT), Laura Michaelis (Colorado), Jeff Mielke (North Carolina State), Petar Milin (Birmingham), Marianne Mithun (UC Santa Barbara), Emily Morgan (UC Davis), Fermín Moscoso del Prado Martin (Cambridge), Corrine Occhino (UT Austin), Jesus Olguin Martinez (Illinois State), Pavel Ozerov (Innsbruck), Thomas Payne (Oregon), Florent Perek (Birmingham), Marc Pierce (Texas), Janet B. Pierrehumbert (Oxford), Michael Ramscar (Tübingen), Terry Regier (UC Berkeley), Arnaud Rey (CNRS and Aix Marseille), Phillip Rogers (Pittsburgh), Caroline Rowland (MPI for Psycholinguistics), Mark Seidenberg (Wisconsin), Jason Shaw (Yale), Naomi Shin (New Mexico), Shahar Shirtz (Arizona State), Andrea Sims (The Ohio State), Kenny Smith (Edinburgh), Morgan Sonderegger (McGill), Michael Stern (Yale), Sabine Stoll (Zurich), Benedikt Szmrecsanyi (Leuven), Rachel Theodore (Connecticut / NSF), Malathi Thothathiri (George Washington), Catherine Travis (Australian National), Rory Turnbull (Newcastle), Rosa Vallejos Yopán (New Mexico), Charlotte Vaughn (Maryland), Abby Walker (Virginia Tech), Stephen Wechsler (UT Austin), Andrew Wedel (Arizona), Rachel Elizabeth Weissler (Oregon), Colin Wilson (Johns Hopkins), Bodo Winter (Birmingham), Xin Xie (UC Irvine), Roberto Zariquiey (PUCP), Georgia Zellou (UC Davis), Fernando Zúñiga (Bern)

Preliminary course titles and descriptions (to be updated):

Carolina P. Amador-Moreno (Universidad de Extremadura, Spain) and Josh R. Brown (University of Wisconsin-Eau Claire), Historical Sociolinguistics 

This course will offer a general introduction to the relatively new field of Historical Sociolinguistics, defined by the North American Research Network in Historical Sociolinguistics (NARNiHS) as ‘the application/development of sociolinguistic theories, models, and methods for the study of historical language variation and change over time, or more broadly, the study of the interaction of language and society in historical periods and from historical perspectives’ (https://narnihs.org/?page_id=226). Historical Sociolinguistics has grown into a productive area of investigation within the field of general linguistics, but how does one study historical states of language or society, and the historical interaction of language and society? Since Elspaß (2005) introduced the notion, a strong methodological orientation in historical sociolinguistics has been on “language history from below” and “ego documents”. In this course we will deal with ego-documents such as letters and diaries, and we will learn to identify and exploit some linguistic datasets available to Historical Sociolinguistics. By looking at corpora such as the Corpus of Early English Correspondence (CEEC), we will reflect on the advantages of quantitative analyses of historical data, discuss new types of data visualization, and consider the balance (and synergies) between macro- and micro-approaches. We will also discover how historical language data can be used qualitatively to understand language ideologies and language use patterns, which may be complicated by modern conceptualizations of multilingualism. And lastly, we will explore the ways that historical language data can be used in interdisciplinary contexts, like forensic linguistics and material culture.

Jenny Audring (Leiden University, The Netherlands), Morphology, the lexicon, and the mind: An introduction to Relational Morphology

Traditional, especially generative, theories of morphology aim to predict all and only the possible complex words of a language. This goal has proved elusive: morphology is riddled with idiosyncrasy, idiomaticity, irregularities, gaps, and unproductive patterns. Embracing this fact opens the door to more inclusive approaches, accommodating not only the possible but also the existing words of a language. This course introduces one such approach, known as Relational Morphology (Jackendoff & Audring 2020). The model belongs to the Construction Grammar family of theories. It differs from many other models by assuming a rich and highly structured lexicon, a usage-based accumulation of grammatical knowledge, and considerable inter-speaker variation. The aim of the course is to provide an informed view of the grammar of words, gracefully integrated with our understanding of the lexicon and the mind. Basic knowledge of morphological concepts and terms will be assumed. Jackendoff, Ray S. & Jenny Audring. 2020. The Texture of the Lexicon. Oxford: Oxford University Press.

Harald Baayen (University of Tübingen, Germany), The Discriminative Lexicon Theory, implementation in the Julia package JudiLing, and applications

This course will take participants through a hands-on book (Heitmeier, Chuang & Baayen, 2024, CUP) on the computational modeling of the mental lexicon. Or, more precisely, on the automatic, subliminal, everyday use of words. In this course, I will not be advocating a one-size-fits-all model; instead, I will introduce a modeling framework and call attention to the many choices that are involved in setting up a concrete model for a given dataset. How do we represent words’ forms? How do we represent words’ meanings? Do we set up linear mappings between forms and meanings, or do we want to use deep learning for these mappings? For implementing concrete models, we will use the JudiLing package, which is written in the Julia language. Julia is optimized for computational efficiency, and outperforms Python (and R) for numeric calculations. The worked examples illustrate how one can set up error-driven learning models for comprehension (visual or auditory) as well as for production. By evaluating how well a given implementation performs for held-out data, we can assess to what extent a model implementation is productive. Measures derived from such an implementation can be used to predict a range of dependent variables, including reaction times, spoken word duration, pitch contours, and articulatory trajectories. In this course, I will discuss the kinds of problems one runs into when modeling lexical processing. Models simplify, and therefore are wrong. For instance, the general framework that my co-authors and I have been developing concerns individual language users, not communities of speakers. But the data we have from corpora are community data, not individual data. Nevertheless, given a basic understanding of the limitations of our data, and the limitations that come with our model, I believe that a computational framework facilitating understanding of the isomorphies between word forms and their meanings is useful.
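The linear-mapping option described above can be made concrete in a few lines. The sketch below is in Python with invented toy data, not the JudiLing package itself (which is written in Julia): word forms are coded as letter-trigram vectors, meanings as sums of a lexeme vector and an inflection vector, and a single least-squares mapping from form to meaning is estimated and then checked by nearest-neighbor comprehension.

```python
# Toy comprehension mapping in the spirit of the Discriminative Lexicon.
# All data are invented for illustration; real models use corpus-derived
# forms and distributional semantic vectors.
import numpy as np

rng = np.random.default_rng(0)
paradigm = {  # word -> (lexeme, inflection)
    "walk": ("WALK", "BARE"), "walks": ("WALK", "3SG"), "walked": ("WALK", "PAST"),
    "talk": ("TALK", "BARE"), "talks": ("TALK", "3SG"), "talked": ("TALK", "PAST"),
}
words = list(paradigm)

def trigrams(w):
    w = f"#{w}#"  # mark word edges
    return {w[i:i + 3] for i in range(len(w) - 2)}

# Form (cue) matrix C: one row per word, one column per letter trigram.
cues = sorted(set().union(*map(trigrams, words)))
C = np.array([[1.0 if c in trigrams(w) else 0.0 for c in cues] for w in words])

# Semantic matrix S: meaning = lexeme vector + inflection vector.
vec = {m: rng.normal(size=10) for m in ["WALK", "TALK", "BARE", "3SG", "PAST"]}
S = np.array([vec[lex] + vec[infl] for lex, infl in paradigm.values()])

# Estimate the comprehension mapping F by least squares (C @ F ≈ S).
F, *_ = np.linalg.lstsq(C, S, rcond=None)
S_hat = C @ F

# A word is "understood" if its predicted meaning is closest to its own.
hits = [int(np.argmin(np.linalg.norm(S - S_hat[i], axis=1))) == i
        for i in range(len(words))]
accuracy = sum(hits) / len(words)
print(f"comprehension accuracy: {accuracy:.2f}")
```

The same logic, applied to words held out from the estimation step, is what lets one ask how productive a given mapping is.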

Matthew Baerman (University of Surrey, UK), Where does morphology come from?

In one sense, morphology is just another mode of packaging information which might otherwise be conveyed through syntax, as in Danish huset versus English the house. But morphology is more than just syntax on a smaller scale: inflection classes, morphomes, and irregularities of all sorts load morphological systems with extra complexity without adding any obvious functionality. Morphological theory has been largely successful in describing the attested patterns, but continues to struggle in motivating and constraining them. It’s long been understood that diachrony must play a key role in determining the shape of a morphological system, but elevating this observation to the level where one can make useful cross-linguistic generalizations has been hindered by the spottiness (or absence) of the historical record, and the specific peculiarities of individual lineages or their philological traditions. The course surveys the processes that lead to the formation of morphological systems: grammaticalization, the genesis of morphosyntactic features, paradigm formation, and analogical change. We look at techniques for investigating morphological evolution, drawing on computational modelling and typological comparison.

Danielle Barth (Australian National University), Corpus Linguistics for Field Linguists

Corpus linguistics is an approach to understanding large amounts of data. This course is hands-on and will lead students through building corpora, annotation, text-mining, quantitative analysis and visualisation techniques using text data from web sources as well as ELAN transcriptions of spoken and signed languages. This course will provide the theoretical foundation to understand how corpora can answer research questions, as well as guided practice with computational skills. We will use an ELAN to R data pipeline to think about the steps of a research project including ethics, videography, transcription, annotation, qualitative and quantitative analysis and language comparison. By the end of this course, students will have achieved basic computational proficiency to perform corpus-based analyses on their own data for their own research. No prior programming experience is required, only a willingness to learn.
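As a taste of what such a pipeline involves, here is a minimal Python sketch (the course itself works with ELAN transcriptions and an ELAN-to-R workflow; the tiny invented corpus below only illustrates two basic steps, frequency counting and keyword-in-context concordancing):

```python
# Minimal corpus-linguistics sketch: tokenize a tiny invented "corpus",
# count word frequencies, and pull keyword-in-context (KWIC) lines.
import re
from collections import Counter

corpus = [
    "the dog chased the cat",
    "the cat slept on the mat",
    "a dog barked at the cat",
]

tokens = [re.findall(r"[a-z']+", line.lower()) for line in corpus]
freq = Counter(t for line in tokens for t in line)

def kwic(keyword, window=2):
    """Return (left context, keyword, right context) for every hit."""
    hits = []
    for line in tokens:
        for i, t in enumerate(line):
            if t == keyword:
                hits.append((" ".join(line[max(0, i - window):i]),
                             t,
                             " ".join(line[i + 1:i + 1 + window])))
    return hits

print(freq.most_common(3))
for left, kw, right in kwic("cat"):
    print(f"{left:>15} | {kw} | {right}")
```

Real projects replace the string list with transcription tiers exported from ELAN, but the shape of the analysis, from text to tokens to counts and concordances, is the same.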

Gašper Beguš (University of California, Berkeley) and Stephan Meylan (Massachusetts Institute of Technology), Deep Language Learning: Modeling language from raw speech with fiwGAN

This class will introduce participants to the basics of how to develop and train realistic deep learning models of human spoken language learning (fiwGAN), as well as techniques to uncover linguistically meaningful representations in deep neural networks trained on raw speech. These deep generative neural networks can not only model speech with high fidelity from a fully unsupervised objective, but have also been shown to capture a wide range of linguistic phenomena in their latent and intermediate representations. The goal of this class will be to familiarize participants with the GAN framework and the linguistic relevance of these models’ representations. Participants will learn how to train and interpret these models, allowing them to pursue their own research interests on new datasets and languages of interest. We will learn how to model phonetic, phonological, morphological, and even basic syntactic and lexical semantic learning. We will then discuss techniques to test the causal relationship between learned representations and outputs of the models and test how these techniques can model rule-based symbolic computation. We will show that these networks do not just imitate training data; rather, in their internal representations, they capture the causal structure of a wide range of linguistic phenomena, including phonological rules, articulatory targets, allophonic alternations, and reduplication. We will further discuss how to apply traditional linguistic analyses to these representations and show striking similarities between these artificial neural representations and representations of speech in the human auditory system taken from neuroimaging data. Understanding how deep neural networks learn has consequences for linguistic theory, cognitive science, and neuroscience, as well as for building interpretable machine learning.

Balthasar Bickel (University of Zurich, Switzerland), Linguistic Typology in Evolutionary Perspective

This course introduces and reviews two major shifts in current typology, the shift to phylogenetic modeling for capturing current typological distributions and the shift to cross-linguistic experiments for probing the forces that shape typological distributions. These two shifts characterize an evolutionary perspective on language that picks up from where Darwin and Schleicher left it, but profiting from modern methods and technology and considerably richer data. The first part introduces the basic methodology in modern linguistic phylogenetics. We will review the advantages of the evolutionary framework of these methods over purely synchronic and static ways of capturing typological distributions. We will discuss current solutions to problems of application, such as splits in type, or ways of incorporating data from isolate languages, and we will showcase recent advances in moving beyond discrete types, modeling usage patterns instead. The second part addresses the fact that our models have access to only a very small proportion of the distributions that humanity produced since language emerged. This challenges any claim for universal (that is, species-wide) principles driving typological distributions. The problem can be solved to some extent by convergent evidence from experimentally testable mechanisms. Universality in language change, such as a preference for a certain pattern over another, is supported if it is grounded in a (neuro)biological mechanism that fully persists under maximally diverse conditions, even when usage frequencies in a language are at odds with it. We will review recent work along these lines, especially work that probes the presence of the same mechanism also in the prelinguistic systems of other animals.
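The evolutionary framing can be illustrated with a deliberately simplified Python sketch. The transition probabilities below are invented, and real phylogenetic models work in continuous time on language family trees, but the core dynamic is the same: when changes between two states of a feature are biased, the cross-linguistic distribution drifts toward a skewed equilibrium even from an even starting point.

```python
# Two-state Markov toy of a binary typological feature (say, OV vs VO
# order). Biased transition probabilities, iterated over many rounds of
# change, push the distribution of languages toward a skewed equilibrium.
import numpy as np

# P[i, j] = probability of moving from state i to state j per step.
P = np.array([[0.90, 0.10],   # OV -> OV, OV -> VO
              [0.05, 0.95]])  # VO -> OV, VO -> VO

dist = np.array([0.5, 0.5])   # start from an even split of languages
for _ in range(500):          # iterate the change process
    dist = dist @ P

print(f"long-run distribution (OV, VO): {np.round(dist, 3)}")
```

With these particular rates the equilibrium is one third OV to two thirds VO, regardless of the starting distribution; inferring such biases from attested families, rather than stipulating them, is what the phylogenetic methods in the course are for.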

Idan A. Blank (University of California, Los Angeles), The Relationship Between Language and Thought

What is the relationship between language and the rest of our mind? This is one of the most fundamental questions to understanding what it means to be human. Whereas it has intrigued philosophers for centuries, the last few decades have allowed us to finally collect empirical data and get some scientific answers. In this course, we will take a deep dive into some of the most fascinating questions in this field: does the language you speak shape the way you think? What kinds of thoughts could you have if you did not have language? Do domains like music and math have “grammars” in the same way that languages do? How do non-linguistic parts of the mind influence language processing? And do universal properties of language reflect general constraints on how minds process information? To address these questions, we will discuss studies using behavioral methods, neuroimaging, and investigations of special populations such as people with language disorders, users of recently emerging languages, and non-industrialized societies. Be prepared to challenge long-held views from traditional linguistics and rethink what you know about language!

Idan A. Blank (University of California, Los Angeles), Language in Intelligent Machines

How do systems of Artificial Intelligence (AI) process language, and what does that tell us about the human mind? In this course, we will critically evaluate the capabilities and limitations of contemporary Large Language Models. Through in-depth discussions of many empirical studies, we will consider methods for characterizing what these systems do “under the hood”, and discover what they know about syntax and semantics. We will also demonstrate that studying these systems is an important tool for usage-based approaches, because it makes insightful contributions to debates about the nature of meaning, representational formats, processing strategies, and learnability and innateness. Throughout the course, we will ask: in what sense are these systems similar to humans, and could they serve as computational models of our linguistic minds? No prior knowledge of AI systems is assumed.

Alice Blumenthal-Dramé (University of Freiburg, Germany), Constructions in the Mind

A primary aim of usage-based cognitive construction grammar (CxG) is to provide descriptive generalizations over usage data that are cognitively realistic. In this framework, the basic unit of description – the construction – is also considered a unit of mental processing, representation, and learning. Traditionally, identifying candidates for mental constructional status involves combining statistical generalizations from corpus data with insights from cognitive science, particularly memory research. However, despite the plausibility of many postulated constructions, limited experimental research has directly tested their mental status. The sparse existing studies have predominantly focused on lexically substantive strings (e.g., “How can I help you?”). In contrast, semi-abstract constructions (e.g., “have a X,” where X can stand for “drink,” “jog,” and other items), whose cognitive status is more debatable, have received less attention from neuro- and psycholinguists. This course is designed to: 1. Discuss how mental construction status can be operationalized in a falsifiable manner that captures the essence of all construction types; 2. Present psycholinguistic paradigms to explore the cognitive building blocks utilized in natural language use; 3. Apply this knowledge by programming small-scale psycholinguistic experiments to evaluate construction status. Overall, this course aims to equip participants with the theoretical and practical tools needed to critically assess the cognitive realism of usage-based claims.
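One routine ingredient of such experiment scripts is counterbalancing items across conditions. The Python sketch below uses hypothetical materials and shows one common scheme, a Latin-square rotation in which every item occurs in each condition across stimulus lists but only once per list; the course does not prescribe this particular scheme.

```python
# Latin-square counterbalancing for a two-condition design.
# Materials are hypothetical examples, not actual course stimuli.
def latin_square_lists(items, conditions):
    """Return one stimulus list per condition offset."""
    k = len(conditions)
    lists = []
    for offset in range(k):
        lists.append([(item, conditions[(i + offset) % k])
                      for i, item in enumerate(items)])
    return lists

items = ["have a drink", "have a jog", "have a rest", "have a look"]
conditions = ["substantive", "semi-abstract"]
for n, lst in enumerate(latin_square_lists(items, conditions), start=1):
    print(f"List {n}: {lst}")
```

Each participant is then assigned one list, so condition effects can be estimated without any participant seeing the same item twice.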

Paul Boersma (University of Amsterdam, The Netherlands), Bidirectional phonology and phonetics

Once we can explicitly model how the production and comprehension of phonology and phonetics work, we can account for many facets of linguistic behavior that are either observed in languages (typology, sound change) or elicited in the lab (psycholinguistics). BiPhon (Bidirectional Phonology and Phonetics) is a framework that aims at precisely that. This course presents existing, new and future findings and directions of the BiPhon model. The course is suitable for people with both computational and non-computational backgrounds. BiPhon employs four (or more) levels of representation: phonological underlying form, phonological surface structure, auditory-phonetic form and articulatory-phonetic form. Both speakers and listeners travel these levels in parallel, using the exact same grammar that consists of ranked constraints or weighted neural connections. The first part of the course explains how several phenomena hitherto ascribed to separate specific mechanisms turn out to emerge instead directly from modeling language acquisition (sometimes over generations) under the single assumption of parallel multi-level bidirectionality: prototype effects (without stored prototypes), auditory dispersion (without teleological mechanisms), licensing by cue, loanword adaptation (with L1-mechanisms only), perceptual magnetism, discrete categorical behavior, and phonological structure itself. The second part addresses how these and other explanations can be made to “scale”, i.e. to extend far beyond the toy languages that they were established on and become more “practical”, via problems that are traditionally interesting to phonologists, moving toward whole-language simulations. The course shows how most mechanisms in partial languages can be understood by reasoning or by drawing up explicit tableaus, although they can often be supported by computer simulations in cases where the outcome is not clear in advance.
The mechanisms of larger problems often involve computer simulations, whose predicted or surprising results can subsequently be understood by investigating the behavior of the simulations under controlled conditions.
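The tableau reasoning mentioned above can itself be made executable. The following Python toy uses invented constraints and a textbook final-devoicing example, not anything from BiPhon specifically: under a strict constraint ranking, the winning candidate is the one whose violation profile is lexicographically best.

```python
# Minimal ranked-constraint (OT-style) tableau evaluator.
# Constraints and violation counts are invented for illustration.
def evaluate(candidates, ranked_constraints):
    """Return the optimal candidate under strict constraint ranking."""
    def profile(cand):
        # Violations on higher-ranked constraints come first, so tuple
        # comparison implements strict domination.
        return tuple(c(cand) for c in ranked_constraints)
    return min(candidates, key=profile)

# Toy final-devoicing case: underlying /bed/, candidates [bed] and [bet].
no_voiced_coda = lambda c: int(c.endswith("d"))  # markedness constraint
ident_voice = lambda c: int(c != "bed")          # faithfulness constraint

# Markedness ranked above faithfulness: the devoiced candidate wins.
winner = evaluate(["bed", "bet"], [no_voiced_coda, ident_voice])
print(winner)
```

Reranking the two constraints makes the faithful candidate win instead, which is the basic mechanism behind factorial typologies; weighted-connection versions of BiPhon replace the lexicographic comparison with summed penalties.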

Laurel Brehm (University of California, Santa Barbara), Language Production in the Lab and in the Wild

The field of language production aims to understand how individual speakers plan and articulate sentences, words, and sounds. However, as is typical in all areas of psycholinguistics, much of the language production work since the 1980s has used lab-based experimental methods to try to tame the variability that speakers actually produce: in essence, lab-based paradigms are tightly controlled but often highly artificial. This course reflects on the insights obtained from language production paradigms such as syntactic priming, tongue-twisters, picture naming, picture-word interference, and common ground establishment, and compares these to the insights obtained from naturalistic observation of conversations, corpora, and other ‘wild’ data sources. In so doing, we reflect on what we really know as a field about the mechanisms and representations that allow us to produce language.

Canaan Breiss (University of Southern California), Phonology and the Lexicon

This course will center on how phonologists can build more ecologically valid and empirically robust models of the phonological grammar by integrating a psycholinguistically-informed understanding of lexical representations and dynamics. We will first survey evidence for the phonological contents of the lexicon, drawing on experimental and psycholinguistic evidence, and then discuss a series of case studies where the phonological grammar seems to be influenced by, or make unexpected reference to, lexically-stored properties or paradigm members, including Lexical Conservatism, Paradigm Uniformity, and lexical exceptionality. The emphasis will be on how phonologists can use probabilistic grammar formalisms to jointly model the multiple sources of influence on the data in question. This type of joint modeling can enable the researcher to dissociate the effects of “competence” and “performance” in naturalistic or noisy corpus or experimental data, enabling a clearer understanding of both. The course is designed for participants with a comfortable grasp of contemporary phonological theory, basic psycholinguistics, or the desire to learn.

Esther L. Brown (University of Colorado, Boulder), Lexicalized Accumulation of Patterns of Use

Many well-understood linguistic, extralinguistic and/or discourse-pragmatic factors shape variant realizations of sounds, words and constructions in target production contexts. These phonetic and morphosyntactic variants of words and/or constructions, arising in production contexts, become registered in memory as lexically specific variants. Thus, contexts of use affect linguistic productions and such productions, in turn, are stored as lexical representations. Nevertheless, words and constructions differ significantly with regard to their exposure to conditioning factors of the production context. That is, opportunity biases arise naturally in discourse whereby some words co-occur with specific conditioning factors significantly more than other words do, giving rise to patterns of synchronic variation and diachronic change indicative of words’ accumulation in memory of contextual conditioning effects. In this course, we will closely examine implications of these probabilistic patterns of use. We will consider different examples of conditioning factors, types of conditioning contexts, and research that explores correlations between contextual conditioning effects and variant forms of words. The course will review theories that attempt to account for the patterns of variation, propose methods for testing the effect of contextual conditioning, and explore potential applications to acquisition and bilingual data. Students will work to identify and test novel applications of lexically specific contextualized conditioning.

Lucien Brown (Monash University, Australia), Introduction to Pragmatics

Imagine you’re at a café, and the person next to you says, “It’s cold in here.” While the literal meaning of this statement is straightforward, the intended meaning could vary: it might be a simple observation, a request to close the window, or even a suggestion to leave. This subtlety is where pragmatics comes into play. Pragmatics is the branch of linguistics that looks at how context influences the interpretation of meaning in real-world interaction, and which investigates and interrogates the linguistic choices that speakers make. In the first half of the course, we look at foundational topics in pragmatics including implicature, presupposition, speech acts and deixis. We explore why speakers don’t always say what they mean, and how hearers are able to interpret utterances that are underspecified. Then, in the second week, we look at the social turn in contemporary pragmatics by covering indexicality, (im)politeness, interactional pragmatics, and metapragmatics. Throughout the course, we consider examples from diverse languages and cultures, and also take into account non-anglophone and indigenous perspectives on language usage and context. Through theoretical frameworks and practical analysis, participants will gain a comprehensive understanding of pragmatic principles in everyday communication.

Aoju Chen (Institute for Language Sciences, Utrecht University, The Netherlands), Prosodic development across languages

Prosody (i.e., the melody and rhythm of language) is vital to both the structure of speech and communication. Children exhibit a stunning sensitivity to prosody from birth and develop remarkable prosodic competence within their first years. This early prosodic ability is foundational to development in word segmentation, vocabulary, and morpho-syntax. Traditionally, language development research has focused on non-prosodic aspects, while prosodic research has primarily centered on adult models. This course offers an innovative perspective, delving into how children acquire the prosodic system of their native language. Children begin processing speech sounds, primarily containing prosodic information, in the last trimester of pregnancy. Perception and production of prosody are not only influenced by the language users’ native prosodic system but also by innate mechanisms (i.e. biologically-motivated processes such as the Iambic-Trochaic Law, the Biological Codes). Thus, understanding how children acquire native prosody requires a fresh approach that combines both innate mechanisms and input-based approaches and takes account of prenatal language exposure and fetal learning. This course will present state-of-the-art research on prosodic development, exploring the dynamic interplay between innate mechanisms and input-based mechanisms (e.g., distributional learning, frequency, cue weighting, form-meaning mapping transparency) before and after birth. Students will explore the learning of language-specific phonological categories (i.e. discrete building blocks of utterance-level pitch contour), the weighting of cues to prosodic boundaries (which demarcate chunks in continuous speech), and prosodic form-meaning mappings with an emphasis on prosodic focus marking across typologically different languages.
Through a mix of lectures, group discussions, student presentations, and plenary discussions, this course will introduce students to the evolving field of prosodic development and provide them with theoretical insights and methodological knowledge highly relevant for conducting interdisciplinary research in this area and at the interface of prosody and other areas of language development.

Cynthia Clopper (The Ohio State University), Speech accommodation, convergence, and imitation

Humans adopt aspects of one another’s speech patterns in processes variably called accommodation, convergence, and imitation. This speech accommodation emerges in both interactive and non-interactive (lab) contexts, in the absence of explicit instruction to imitate, and across linguistic levels of representation (acoustic, phonetic, and phonological). Accommodation leads to within-talker synchronic variation and is proposed as a mechanism of diachronic linguistic change. This course explores the empirical nature of speech accommodation and its theoretical implications for our understanding of the linguistic representations underlying and linking human speech perception and production. The topics will include: (1) the cognitive mechanisms proposed to underlie accommodation; (2) proposed metrics for quantifying speech accommodation; (3) acoustic-phonetic normalization in accommodation; (4) levels of phonetic and phonological representation in accommodation; and (5) the role of perceptual salience in the magnitude of accommodation. Each of these topics represents an ongoing debate in the speech accommodation literature and the assigned readings for the course will introduce students to both sides of these debates. Moreover, these topics have broader theoretical implications for key questions in speech processing, including abstraction in the cognitive representations of speech, how cognitive representations are linked for speech perception and production processes, and how these representations are flexibly implemented in language use in real-time interactions.

Sonia Cristofaro (Sorbonne University, France), Language typology and grammatical evolution: Explaining language universals in diachronic perspective

In language typology, the research paradigm that originated from the work of Joseph Greenberg, language universals are empirically observed distributional patterns where languages recurrently display certain grammatical configurations as opposed to others. Explanations of these patterns are usually synchronically oriented: particular grammatical configurations are preferred over others cross-linguistically because their synchronic properties comply with functional principles of optimization of grammatical structure, such as economy or processing ease. The general theoretical premises of the typological approach are, however, essentially diachronic in nature. Starting from Greenberg, practitioners of this approach have emphasized that language universals evolve through recurrent diachronic phenomena leading to the emergence, conventionalization, and retention of the same grammatical configurations in different languages over time. This raises a general issue of whether and how the assumed optimization principles actually play a role in these phenomena. The course will discuss the known diachronic origins of some of the relevant configurations cross-linguistically. Attention will be focused on configurations pertaining to word order, case marking alignment, and the use of overt and zero marking for different grammatical values. An ever-growing body of evidence from grammaticalization studies and studies of language change in general shows that the developmental processes that give rise to these configurations are triggered by the properties of particular source constructions and their contexts of use, rather than optimization principles pertaining to the synchronic properties of the resulting configuration. Also, individual configurations emerge through a diverse array of developmental processes, not amenable to a unified explanation in terms of general overarching principles.
These facts call for a source-oriented approach to language universals, one where the focus shifts from the synchronic properties of the relevant grammatical configurations to disentangling the effects of several different source constructions and diachronic phenomena that give rise to these configurations and shape their cross-linguistic distribution over time.

Don Daniels (University of Oregon), Papuan Languages

This course will introduce students to the typology, diachrony, and prehistory of Papuan languages. There are over 800 Papuan languages, belonging to 80 unrelated families, so this course will need to be selective. Topics covered will include phonology; agreement; TAM marking; clause chaining and switch reference; grammaticalization and syntactic change; and prehistory.

Scott DeLancey (University of Oregon), The Trans-Himalayan/Sino-Tibetan Languages

This course will present a typological and historical overview of the Trans-Himalayan (or Sino-Tibetan) language family. This large and remarkably diverse family consists of (at least) seven Sinitic languages spoken in China and emigrant communities, and several hundred languages traditionally labelled “Tibeto-Burman” (although this now appears to be a paraphyletic grouping) spoken primarily in the mountainous areas separating East, Southeast, South and Inner Asia. The first part of the course will present an overview of the TH languages and their typological diversity, a tentative classification of the family, and some discussion of the geographical and historical context which has led to their diversity. The main part of the course will concentrate on several phenomena of broader typological interest, specifically three interrelated problems having to do with alignment, person, and information structure. Problems of “alignment” include differential case marking and case marking patterns not clearly identifiable as accusative or ergative, and hierarchical or “inverse” argument indexation in the verb. The latter connects to interesting problems of person, and will lead to investigation of the history of clusivity and other phenomena in pronominal paradigms. Case-marking phenomena relate directly to issues of the grammar of information structure, particularly zero anaphora (“pro-drop”) and paradigms, sometimes quite elaborate, of information structure markers. We will also discuss, in less detail, tonogenesis, nominalization, and a few other interesting phenomena. Finally we will devote one or two lectures to the current social and political situation of the various languages, briefly outlining issues of language endangerment, attrition, and maintenance in minority languages across the Trans-Himalayan area.

Christian DiCanio (University at Buffalo), Phonetic fieldwork

This course provides hands-on training in the analysis of phonetic data collected in fieldwork contexts. Classes will discuss different motivations for phonetic field research, examine many case studies, and train students in the use of various Praat tools for the annotation and analysis of acoustic data. Included within the course is a discussion of both ethical and practical issues in collecting phonetic data. Though the main focus will be on acoustic production data, the course will also discuss aerodynamic, articulatory, and perceptual methods in field contexts. As an over-arching theme, we will carefully examine the relationship between empirical/descriptive findings on individual languages, observed variation in use, and emerging phonetic typologies for specific sound contrasts in human language. Over the duration of the course, students will analyze a (slightly curated) phonetic data set on glottalized sonorants from an Otomanguean language as a guided group research project. The results from these analyses will be examined in relation to similar research on other languages, typological universals, and phonetic theory.

Dagmar Divjak and Petar Milin (University of Birmingham, UK), Operationalizing the Emergence of Structure through Learning

Emergence is a central concept in usage-based linguistic theories that view natural languages as complex systems, developed and shaped by usage through reliance on general cognitive functions and structures: our ability to extract and entrench distributional patterns enables us to build a grammar from the ground up and thus circumvents the need for an innate universal grammar. Despite its crucial importance, however, emergence has remained axiomatically accepted rather than empirically evidenced, with different cognitive functions such as categorisation and abstraction and structures such as memory proposed as contributing factors. In this course, we will make the case that integrating usage-based linguistics with the psychology of learning creates a productive synergy in which learning can serve as an operationalisation of emergence through usage (Divjak & Milin 2023, Milin & Divjak 2024). Loosely defined as a gradual change in the state of knowledge, learning translates the abstract notion of emergence into the discovery of relations between elements in the input which then allows generalisations to be made, if required. We will repurpose the famous Rescorla-Wagner rule (Rescorla & Wagner 1972), which epitomises the error-correction mechanism or principle of learning, to capture how individuals learn linguistic patterns from exposure to usage, and shed new light on what emerges from exposure to language usage. We will discuss how the types of patterns that emerge from such a learning-based approach change the types of abstractions we may want to allow in theoretical linguistics and how such a change of perspective can overhaul the way in which we approach second language acquisition. We will illustrate these points using linguistic case studies on phonemes, free as well as bound nominal and verbal morphemes spanning the morpho-syntactic continuum, with corpus, experimental and computational studies using L1 and L2 data from Germanic and Slavic languages.
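To make the error-correction principle concrete, here is a minimal Python sketch of the Rescorla-Wagner update. The cue names, learning rate, and toy trial sequence are all invented for illustration; they are not taken from the course materials.

```python
def rescorla_wagner(events, cues, rate=0.1, lam=1.0):
    """events: sequence of (active_cues, outcome_present) learning trials."""
    weights = {cue: 0.0 for cue in cues}
    for active, present in events:
        prediction = sum(weights[c] for c in active)
        error = (lam if present else 0.0) - prediction  # prediction error
        for c in active:
            weights[c] += rate * error  # shared error-driven update
    return weights

# Toy illustration of blocking: "cue_a" is trained first and comes to
# predict the outcome, so the later, redundant "cue_b" learns very little.
events = (
    [({"cue_a"}, True)] * 50             # cue_a alone -> outcome
    + [({"cue_a", "cue_b"}, True)] * 50  # both cues -> outcome (cue_b blocked)
    + [({"cue_b"}, False)] * 50          # cue_b alone -> no outcome
)
w = rescorla_wagner(events, {"cue_a", "cue_b"})
```

On this toy sequence the early-trained cue ends up with a weight near the asymptote while the redundant cue stays near zero, the same logic the course applies to cues in linguistic input.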

Robin Dodsworth (North Carolina State University), Linguistic variation and social networks

Social networks are relevant to many topics in linguistics, including the propagation of linguistic innovations, the maintenance or loss of languages and dialects, linguistic outcomes of intergroup contact, and sociolinguistic perception. This course is an overview of the use of social network data and methods to address linguistic questions; while the focus is on linguistic research, the course reflects an interdisciplinary perspective that draws in particular from sociology. The course begins with a survey of early sociolinguistic studies addressing the relationship between linguistic variables and properties of ego-centric (personal) networks. This early work was motivated by, and offered partial support for, existing sociological hypotheses about the effects of social network content and structure on the spread of innovations. The next part of the course looks at a diverse set of recent approaches including simulation-based experiments, use of online social network data, bipartite methods, and empirical approaches to race, gender, and other axes of social identity as they relate to the linguistic effects of network structures. The course concludes with a practical introduction to interdisciplinary best practices for collecting and processing ego network data together with linguistic data. Participants are welcome to bring data or research questions.

Jonathan Dunn (University of Illinois Urbana-Champaign), Computational Construction Grammar

Computational construction grammar sits at the intersection between usage-based grammar and computational syntax. From a theoretical perspective, this course is about implementing falsifiable models of how constructions are learned given exposure (i.e., a corpus). This is important because the representations posited within construction grammar are more complex than those from other syntactic paradigms like dependency grammar, but usage-based theories expect that these representations are learnable with limited innate structure. From a practical perspective, this course is about working with constructions in very large corpora without relying on introspection for annotation. This is important because usage-based theories expect that individuals have internalized slightly different grammars, which means that the introspection of a single linguist can be inadequate. The course includes Python examples to accompany each session, providing students a means of practicing corpus-based experiments. Topics include how to represent constructions, learnability of grammars, variation within grammars, and the amount of constructional knowledge present in language models.
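As a flavour of the corpus-based approach, one very first step toward finding candidate constructions without introspection is to score adjacent word pairs by their association, so that strongly glued sequences stand out. The sketch below uses pointwise mutual information on an invented mini-corpus; it is an illustrative toy, not the course's actual method.

```python
import math
from collections import Counter

# Invented mini-corpus, pre-tokenized for simplicity.
corpus = ("let alone did he sing , let alone dance ; "
          "he did sing and he did dance").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n = len(corpus)

def pmi(w1, w2):
    """Pointwise mutual information of an adjacent word pair, in bits."""
    p_xy = bigrams[(w1, w2)] / (n - 1)
    p_x, p_y = unigrams[w1] / n, unigrams[w2] / n
    return math.log2(p_xy / (p_x * p_y))
```

In this toy corpus "let" only ever occurs before "alone", so the pair scores higher than a freely combining pair like "he did", hinting at its status as a fixed chunk.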

Nick C. Ellis (University of Michigan), Usage-based Second Language Acquisition

Usage-based approaches to language learning hold that we learn constructions (form-function mappings, conventionalized in a speech community) from language usage by means of general cognitive mechanisms (exemplar-based, rational, associative learning). The language system emerges from the conspiracy of these associations. Although frequency of usage drives learning, not all constructions are equally learnable by all learners. Even after years of exposure, adult second language learners focus more in their language processing upon open-class words than on grammatical cues. I present a usage-based analysis of this phenomenon in terms of fundamental principles of associative learning: Low salience, low contingency, and redundancy all lead to form-function mappings being less well learned. Compounding this, adult acquirers show effects of learned attention and blocking as a result of L1-tuned automatized processing of language. I review a series of experimental studies of learned attention and blocking in second language acquisition (L2A). I describe educational interventions targeted upon these phenomena. Form-focused instruction recruits learners’ explicit, conscious processing capacities and allows them to notice novel L2 constructions. Once a construction has been represented as a form-function mapping, its use in subsequent implicit processing can update the statistical tallying of its frequency of usage and probabilities of form-function mapping, consolidating it into the system.

Mirjam Ernestus (Radboud University, The Netherlands), Morphology in Interaction

Morphologically complex words could be viewed as words consisting of different parts (morphemes), which are combined during language production and taken apart during language comprehension. Research over the past 25 years has shown that this traditional view is too simplistic and that it does not do justice to the many interactions that morphologically complex words are involved in: with simple and other complex words, with the phonological and prosodic context they occur in, and with the phonetic reduction processes of the language. These interactions can only be noticed when morphologically complex words are studied in language use, rather than as dictionary items. This course will discuss old and new research on morphologically complex words based on corpus research, psycholinguistic experiments, and computational modelling. We will discuss, among other studies, those showing that
– the frequencies of occurrence of complex words matter for word recognition, production, and language change;
– the affix for a word (in case of allomorphy) is co-determined by systematic analogical effects from morphologically and phonologically similar words and by the prosodic context the word is placed in;
– the phonetic realization of complex words is affected by morphology: by the morphological category of the words, and by the support for their affixes from morphologically similar words;
– the recognition of complex words is affected in several ways by phonologically and morphologically related words.
These findings have consequences for our view of morphology, the lexicon, word production, and word recognition. The course will discuss a variety of old and new theories taking (part of) the findings into account, ranging from Lexical Phonology to the Dual Route model of complex word recognition, Word and Paradigm Morphology, and Linear Discriminative models. Some of these theories have been computationally implemented, and we will discuss their advantages and disadvantages.

Sara Finley (Pacific Lutheran University), Miniature Language Studies and the Cognitive Science of Language

Miniature language learning studies have proven to be a powerful method for answering questions about the nature of linguistic representations. In these experiments, researchers design and manipulate miniature versions of real languages in order to isolate specific linguistic properties that are difficult to control outside of the laboratory. Exploring how adults (and children) learn and generalize these novel miniature languages allows us to better understand the cognitive mechanisms that underlie language and the universal principles that govern the similarities and differences between languages. These include questions about linguistic universals, learnability, linguistic representations, and the phonetic grounding of phonological patterns. This course will focus primarily on miniature languages that target questions at the word level (e.g., morphological and phonological patterns), as well as general methodological considerations and best practices for designing and running miniature language learning studies. Students will have the opportunity to design their own artificial language learning study.

Suzana Fong (Memorial University of Newfoundland, Canada), Introduction to Generative Syntax

This course offers an introduction to the syntax of natural languages as viewed by Generative Grammar. According to this framework, human beings are endowed with the unique capacity for language. Because we are born with this ability, we are able to have robust judgments about sentences that have never been uttered before as well as to formulate sentences that have never been used by anybody else before. Likewise, this innate capacity accounts for why all children acquire at least one language, even when faced with an incomplete and fragmentary input. In this course, we will delve into the main components of the grammar, including those responsible for establishing the relationship between nominals in a sentence (Binding Theory), for the form and distribution of such nominals (Case Theory), and for the combination of predicates with their subjects and objects (Argument Structure). We will also investigate fundamental operations of the grammar, as proposed by current developments of Generative Grammar, namely Agree and Merge. At the end of this course, students will have a comprehensive picture of the workings of the grammar that underlie the syntax of natural languages. We will examine data from a diverse set of languages and language families, including English (Germanic), Mongolian (Mongolic), Brazilian Portuguese (Romance), Khanty (Uralic), Korean (Altaic), Lithuanian (Baltic), Acehnese (Austronesian), among others. This course is particularly well-suited to students who are drawn to problem-solving and, likewise, to those interested in having first-hand experience analyzing data from different languages.

Richard Futrell (University of California, Irvine), Information-Theoretic Linguistics

Information theory and its application to human language. Basics of coding theory, efficient codes, the role of redundancy, and the information rate of speech. Maximum Entropy approach to phonology. Lossy compression as a basis for lexical semantics (Information Bottleneck models) and pragmatics (Rational Speech Acts models). Surprisal-based models of online language comprehension and production. Information locality in morphological structure, word order, and compositionality.
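As a small illustration of the surprisal notion listed above: the information a word carries is the negative log probability of that word given its context. The bigram probabilities below are invented purely for illustration.

```python
import math

# Toy bigram model: P(word | previous word). All numbers are invented.
bigram_p = {("the", "dog"): 0.10, ("the", "accordion"): 0.001}

def surprisal(context, word, p=bigram_p):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(p[(context, word)])

s_predictable = surprisal("the", "dog")       # about 3.3 bits
s_surprising = surprisal("the", "accordion")  # about 10 bits
```

Surprisal-based models of comprehension tie this quantity directly to processing effort: the less expected continuation should be read more slowly.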

Spike Gildea (University of Oregon) and Jóhanna Barðdal (Ghent University, Belgium), Reconstructing Syntax

This course builds on the growing body of work in Diachronic Construction Grammar (DCxG) to elucidate a reliable method for reconstructing syntactic patterns to unattested proto-languages. Our method starts from the basics of Construction Grammar (CxG), which posits that syntax is best represented as a combination of form and meaning, each of which usually contains internal complexity. This theoretical approach allows us to add syntax to the linguistic objects that may be reconstructed using the well-understood tools of the Comparative Method and, to a lesser extent, internal reconstruction. The key elements of reconstruction then become identification of (i) constructional cognates and (ii) valid arguments for directionality of change. In addition to building the theoretical case for reconstructing syntax, we offer a number of case-studies in which we have reconstructed a range of syntactic phenomena central to grammatical analysis, e.g. constituent structure, verbal indexation, argument flagging (i.e., case-frames), grammatical relations, and alternating argument structure constructions. Identification of constructional cognates is simple when constructions in related languages are virtually identical; it becomes more difficult as time passes, introducing more (and more diverse) changes into modern reflexes of a particular construction. To recognize kinds of change that obscure constructional cognacy, we introduce two main mechanisms of constructional change: reanalysis (a.k.a. neoanalysis) and analogical extension (a.k.a. extension, analogization). Each mechanism introduces recognizable patterns of change, which makes it possible to argue for constructional cognacy even when modern reflexes of a single source construction diverge in both structure and meaning.
We also review some strengths and weaknesses of grammaticalization studies, arguing that the processes of change that overwhelmingly lead from lexis to morphology and from loose syntactic collocation to more tightly bound constituent structure are an outcome of reanalysis and analogical extension within constructions, and do not themselves constitute an independent mechanism of change.

Adele E. Goldberg (Princeton University), A constructionist approach to grammar

Lectures will explain the power of usage-based constructionist approaches to language for capturing the commonalities, relationships and interactions among words, idioms, and more abstract syntactic patterns. Lectures will focus on 1) lexical semantics, 2) conventional metaphors, 3) productivity and constraints on productivity, 4) island constraints, 5) linguistic implications of Large Language Models, and 6) autism and good-enough processing.

Simon J. Greenhill (University of Auckland, New Zealand), Language Phylogenies: Modelling the evolution of language

Recent years have seen Bayesian phylogenetic methods from evolutionary biology applied to questions about language evolution in two major contexts. First, language phylogenies are now routinely used, both in linguistics and elsewhere, to make inferences and test hypotheses about human prehistory. Second, language phylogenies provide a solid backbone for testing hypotheses about how aspects of language and culture have evolved in three key ways: by revealing the evolutionary dynamics, by modelling the trait history, and by testing coevolutionary hypotheses. This course will provide the theoretical background to understand these methods, and demonstrate how to use phylogenetic methods to model and understand the evolution of languages. Through a combination of lectures, discussions, and hands-on exercises, we will explore the ways in which phylogenetics can help us understand the evolutionary relationships among languages, reveal the dynamics of language change over time and space, and shed light on the impact of historical events, geography, and cultural contact on linguistic diversity. Using real-world examples and case studies, you’ll gain practical experience with phylogenetic software and methods, and the skills to critically evaluate the results these methods provide, including:
1. How to build a good dataset for phylogenetic analysis, including the pros and cons of different data types.
2. How to visualise your dataset using network visualisation tools such as NeighborNet.
3. How to build phylogenies using Bayesian phylogenetic methods for tree inference (e.g. BEAST2, RevBayes).
4. Bayesian and Maximum Likelihood methods for trait modelling (e.g. BayesTraits, R, etc.).

Stefan Th. Gries (University of California, Santa Barbara & JLU Giessen, Germany), Statistical Measures in Corpus Linguistics: Frequency, Dispersion, Association, and Keyness

Corpus linguistics has for quite some time made many connections to (i) cognitive/usage-based theory, (ii) both observational and experimental psycholinguistic work, and (iii) more applied areas. Since corpus linguistics is ultimately a distributional discipline, these connections often take the form of quantitative measures; among those, frequencies of (co-)occurrence, dispersion, association, and keyness are among the most widely used. These notions are often employed to operationalize cognitive notions such as entrenchment, commonness, contingency, and aboutness, and dozens of specific statistical measures have been promoted in the literature. In this course, we will first revisit very briefly the main corpus-linguistic measures that have been used most, before we then discuss a new approach towards this cluster of notions and issues, one that tries to improve on the last few decades of work in three different ways. Improvement 1 will be to unify the statistical approaches towards dispersion, association, and keyness by using only a single information-theoretic statistic for each of them. Improvement 2 will be to discuss the degree to which existing measures are correlated with frequency to such an extent that they really don’t measure much else, and to discuss a solution to ‘remove frequency from existing measures’ to arrive at cleaner, more valid measures. Improvement 3 will be to realize that 40 years of looking for one measure to quantify X may have been mistaken and that we need to measure and report multiple dimensions of information at the same time. The course will pursue these goals and exemplify them in small case studies by using the programming language R on several corpora. Prior knowledge of R will not be required to follow the conceptual logic, but will be advantageous to follow the programming-related parts of the class.
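To give one concrete example of an information-theoretic keyness statistic, a word's keyness can be scored as its contribution to the Kullback-Leibler divergence between a target corpus and a reference corpus. This is one possible operationalization, not necessarily the one the course adopts, and the counts below are invented (the course itself works in R).

```python
import math

# Invented word counts for a target corpus and a reference corpus.
target = {"whale": 120, "the": 5000, "sea": 300}
reference = {"whale": 2, "the": 5200, "sea": 40}

def kl_keyness(word, tgt, ref):
    """A word's pointwise contribution to D_KL(target || reference), in bits."""
    p = tgt[word] / sum(tgt.values())  # probability in target corpus
    q = ref[word] / sum(ref.values())  # probability in reference corpus
    return p * math.log2(p / q)

scores = {w: kl_keyness(w, target, reference) for w in target}
```

On these toy counts, "whale" scores well above the function word "the", matching the intuition that keyness should track what a corpus is about rather than raw frequency.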

Stefan Th. Gries (University of California, Santa Barbara & JLU Giessen, Germany), Foundations of Predictive Modeling with R

According to many practitioners and observers, linguistics has undergone a so-called “quantitative turn” such that, over the last 25 years or so, the number of studies using statistical methods in the analysis of empirical data has been steadily increasing. A similar, though slightly delayed, increase can be observed in the number of studies that are multifactorial or multivariate, and the arguably most frequent statistical methods are by now techniques from the domain of predictive modeling, i.e. scenarios where, typically, one response variable’s behavior is modeled on the basis of multiple predictor variables. The most frequently used techniques are regression models — most notably, linear and binary logistic regression modeling — and tree-based models — most notably, classification and regression trees and random forests based on them. This course aims at meeting two objectives: First, it introduces these main predictive modeling techniques and exemplifies them in R using corpus-linguistic data on word durations, reaction time data, genitive choices, and a new data set on clause-ordering choices in complex sentences. In this first part, the focus is on exploration and preparation of data for predictive modeling, model fitting, and efficient model interpretation. The second part discusses (i) a variety of pitfalls in predictive modeling that one should try to avoid and exemplifies them (partially on the basis of presented/published work, appropriately anonymized) and (ii) several simple techniques that increase the chances of ‘getting the most’ out of one’s data set. In addition, this course aims to be good preparation for more advanced modeling courses. Participants need some basic familiarity with R (loading data and descriptive statistics) but no prior knowledge of predictive modeling and will get Quarto/RMarkdown documents to follow along and work with in class.
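To give a flavour of what binary logistic regression does, here is a bare-bones version fitted by gradient descent, sketched in Python rather than the R the course uses. The scenario (predicting a binary genitive choice from possessor animacy) and every data point are invented; in R one would instead call glm(..., family = binomial).

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=500):
    """Fit p(y=1|x) = 1/(1+exp(-(w*x + b))) by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x  # gradient of the log-likelihood
            b += lr * (y - p)
    return w, b

# Invented data: animate possessors (x=1) mostly take the s-genitive (y=1),
# inanimate possessors (x=0) mostly take the of-genitive (y=0).
xs = [1] * 40 + [0] * 40
ys = [1] * 35 + [0] * 5 + [1] * 5 + [0] * 35
w, b = fit_logistic(xs, ys)
p_animate = 1.0 / (1.0 + math.exp(-(w + b)))
p_inanimate = 1.0 / (1.0 + math.exp(-b))
```

The fitted model recovers the pattern built into the data: a high probability of the s-genitive for animate possessors and a low one for inanimate possessors, with the positive weight on animacy quantifying the effect.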

Zenzi M. Griffin (University of Texas at Austin), Eyetracking in psycholinguistic research

This course is designed for students with an interest in psycholinguistic research, but no background in eye tracking. Students will gain both a broad understanding of psycholinguistic research and knowledge of this popular and influential method. Even students already familiar with eyetracking and psycholinguistics will gain a deeper understanding of both. We start by covering the principles of vision and visual attention that underlie the use of eyetracking as a research method. We review how eyetrackers function to provide information about where participants look and considerations in designing experiments. We discuss the use of eyetracking measures and findings of studies in various domains of psycholinguistics including reading, first and second language acquisition, spoken and signed word recognition, spoken sentence and discourse comprehension, spoken word production and sentence planning, and dialogue. Classes consist of lecture, demonstrations, and group exercises. By the end, students will have a deep understanding of the strengths and weaknesses of eyetracking as a research method and be able to decide whether it is an approach they wish to pursue further. They will also be knowledgeable about the basic research questions and results in major areas of psycholinguistics.

Carolina Grzech (Universitat Pompeu Fabra, Spain) and Henrik Bergqvist (University of Gothenburg, Sweden), Interaction: A journey from the fringes to the core of linguistic science

This course explores the relationship between linguistic theory and interaction. Its starting point is that natural conversation constitutes our most basic linguistic behaviour. Recent advances in neurolinguistic research (cf. Fedorenko et al. 2024) support this view, and confirm what cognitive-functionally oriented linguists have argued since the 1970s, namely that language is primarily a communicative tool rather than an instrument of thought, as traditionally assumed by generative, rationalist research (e.g., Chomsky, 1968). Insights from the documentation of minority languages and the cross-linguistic comparison of data from these languages are in accord with the above: empirically adequate analyses of grammar require attention to how language is used in interaction. The course will critically examine what we have learned so far about the role of interaction in shaping linguistic analyses. Its particular focus will be on epistemicity, understood as an umbrella term for expressions of knowledge in language, including, but not limited to, belief, certainty, evidence, and the distribution of knowledge between the discourse participants. Methodological challenges will be discussed with reference to Grzech et al.’s (2020) special issue of Folia Linguistica, which deals with the study of knowledge in interaction, along with other recent contributions to the same area of research. The course is aimed at students who are keen to explore how communicative interaction shapes and is shaped by linguistic phenomena. The students will be introduced to multiple-turn analysis and dialogical approaches to the study of forms. On completing the course, the student will be able to identify and evaluate traditional and interactional approaches to the study of grammar, and will have improved their understanding of the possibilities and benefits of focusing on interaction in the analysis of language and grammar.
The course will combine lectures, group work, analysis of primary data, and exercises in critical thinking.

Gaja Jarosz (University of Massachusetts, Amherst), Exceptionality and Productivity

Morphological and phonological generalizations often have lexical exceptions. Lexically conditioned patterns range from highly idiosyncratic and rare (go -> went) to pervasive and phonologically conditioned (Slavic vowel-zero alternations). Various theoretical devices and hypotheses exist about how language users represent lexical idiosyncrasy, and these hypotheses about lexical conditioning are tightly connected to hypotheses about the representation of productive morphophonological generalizations. For example, many traditional approaches assume a dichotomy between listed exceptions and fully productive, regular patterns that extend to novel examples. However, many other approaches, including usage-based and connectionist models, represent productivity gradiently. In this course, we consider these questions from jointly theoretical, experimental and computational perspectives. We survey experimental evidence from first language acquisition and artificial language learning examining the role of lexical conditioning and distributional information in morphological and phonological learning. We also discuss how hypotheses about lexical conditioning and productivity can be formalized as learning models and what predictions these models make for language acquisition.

Vsevolod Kapatsinski (University of Oregon), Cognitive Mechanisms of Language Change

Usage-based linguistics takes a dynamic, diachronic approach to the central question of linguistic theory (“why languages are the way they are”), by explaining the observed pathways of language change mechanistically. This means showing how recurrent changes in language structure emerge from the repeated application of the cognitive processes that operate in every instance of language use. This class will survey a number of recurrent pathways of language change and their possible explanations. Topics will include: sound change and lexical diffusion thereof, analogy, grammaticalization pathways, the emergence and disappearance of exceptions, semantic and lexical extension, morphological paradigm leveling, degrammaticalization, and pejoration. Students will be familiarized with experimental and computational methods for testing candidate cognitive mechanisms underlying language change. On the mechanism side, the focus will be on accessibility effects in lexical selection, automatization of production, parallel distributed processing, reinforcement learning, and hierarchical inference.

Seppo Kittilä (University of Helsinki, Finland) and Fernando Zúñiga (University of Bern, Switzerland), Grammatical Voice

This course will be concerned with one of the traditional topics in linguistics, namely grammatical voice, whose study dates back to the first extant descriptions of Ancient Greek. Voice has been understood in many ways, but in this course we will mostly follow the approach adopted by Zúñiga and Kittilä (2019) in their recent textbook (with frequent but brief comments/updates on a very similar recent publication, Creissels 2024). The course will discuss both constructions affecting semantic valency (such as causatives, applicatives and anticausatives) and those affecting syntactic valency (passives and antipassives), as well as affected-subject constructions (e.g. reflexives) and Agent-Patient (or symmetrical) voices (as attested, for example, in many Austronesian languages). In addition, the relation of voice to other closely related categories, such as transitivity, valency and diathesis, will be discussed. Finally, we will also briefly discuss the diachrony and fringes of voice, i.e. how the constructions have emerged, and how voice-like functions can be expressed by other means than grammatical voice. The framework of this course is very strongly functional-typological, and it is expected that the students have basic knowledge of linguistic typology and typological approaches to studying languages. Familiarity with the functional-typological literature on voice and argument marking is also beneficial, even though the approach adopted may vary between scholars. Since the course will concern the topic from a very broad cross-linguistic perspective, and many different examples from (formally and genealogically) different languages will be illustrated and discussed, it is also essential that the students are familiar with the Leipzig Glossing Rules (LGR).
Students attending this course will get a thorough overview of the recent (typological-functional) understanding of voice and its relation to other closely related categories, such as the above-mentioned diathesis, transitivity, and valency.

Linda Konnerth (University of Bern, Switzerland), Grammar Writing

Our knowledge of the world’s languages comes first and foremost from grammatical descriptions. A grammar is a comprehensive and systematic reference work, which connects the many pieces of the inner workings of a language and presents a lasting record of this language as spoken in a particular place and time. In this course, participants will arrive at a better understanding of what it takes to produce a state-of-the-art reference grammar of an un-/underdocumented language. We will discuss how to approach the task and the general workflow; how to design a robust and diverse corpus for a grammar project; how to move from data to description; how to describe interconnected grammatical topics; which audiences to keep in mind and how to make a grammar maximally useful; and how to deal with theory when describing and analyzing a language. Throughout the course, we will examine the role and utility of modern technologies in implementing and facilitating the goal of producing a reference grammar. Upon successful completion of this course, students will be able to:
– describe how to plan a grammar project;
– describe the factors that are relevant for writing a state-of-the-art reference grammar;
– discuss the methodological, analytical, and ethical challenges that arise in grammar writing;
– explain where and why the author of a reference grammar succeeded or did not succeed in providing a comprehensive description and analysis in particular sections of the publication.
The course is primarily aimed at graduate students who are interested in embarking on a descriptive grammar project for their dissertation, or are already working on such a project and want to critically reflect on the task at hand.

Maria Koptjevskaja Tamm (University of Stockholm, Sweden) and Thanasis Georgakopoulos (Aristotle University of Thessaloniki, Greece), Lexical Typology and Universals of Word Meaning

This course offers an exploration of lexical typology, understood as the systematic study of cross-linguistic variation in words and vocabularies, and of word meaning universals. It introduces some of the critical theoretical and methodological issues inherent in lexical typology, which must balance between theoretical semantics, lexicography, and general typology. We will use several case studies to highlight both key concepts in the field (among others, co-expression, colexification, syn-expression, and conflation) and the advantages and disadvantages of different methodological approaches. A significant focus will be on examining which meanings can or cannot be lexicalized across languages, linking this to the issue of basicness in meanings. A central challenge in all cross-linguistic comparison is to what extent an observed cross-linguistic similarity may be explained by universal tendencies, inheritance, contact, accident, or a combination of these factors. We will consider this issue as it applies to word meaning and lexico-semantic patterns. We will discuss precise and approximate meaning universals, and absolute, statistical, and implicational universals. While most lexical universals identified are statistical, absolute lexical universals are of significant theoretical interest, and we will review various approaches to identifying them. We will also devote some time to genetic and areal patterns in word meaning and lexico-semantic patterns and give examples of how lexical typology can be used in historical research. As part of our course, we will explore techniques for representing typological universals of linguistic co-expression and identifying associations between co-expressed meanings, with an emphasis on graph-based representations like semantic maps and colexification networks. We will introduce several resources that support large-scale lexicon and meaning research, discussing both their advantages and limitations.
Using these resources and data from a sample of languages, we will demonstrate how to construct semantic maps and colexification networks, both manually and automatically. In particular, we will apply the Regier et al. (2013) algorithm for semantic map inference, as well as additional algorithms for colexification network construction, using various datasets to guide participants step by step through the process. The course will cover weighted and unweighted maps and networks, highlighting their key differences. It will also include a hands-on session where students will automatically plot networks to visualize word meaning universals. No prior programming knowledge is required, ensuring accessibility for all participants.
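The core of a colexification network can be sketched in a few lines. The toy data below is entirely hypothetical (invented languages and meaning sets, not drawn from any real resource); following the standard construction, each edge between two meanings is weighted by the number of languages in which a single word form colexifies both:

```python
from collections import Counter
from itertools import combinations

# Hypothetical data: for each language, sets of meanings that are
# colexified by a single word form in that language.
colexifications = {
    "lang_A": [{"tree", "wood"}, {"hand", "arm"}],
    "lang_B": [{"tree", "wood", "forest"}],
    "lang_C": [{"hand", "arm"}, {"wood", "forest"}],
}

# Edge weight = number of languages colexifying a given meaning pair.
edges = Counter()
for lang, word_senses in colexifications.items():
    pairs_in_lang = set()
    for senses in word_senses:
        for pair in combinations(sorted(senses), 2):
            pairs_in_lang.add(pair)
    edges.update(pairs_in_lang)  # count each pair at most once per language

for (m1, m2), weight in sorted(edges.items()):
    print(f"{m1} -- {m2}: {weight}")
```

The resulting weighted edge list is exactly what graph libraries consume when plotting colexification networks; real studies build it from large lexical databases rather than a hand-written dictionary.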

Bernd Kortmann (University of Freiburg, Germany), The Spread of World Languages: Features – Drivers – Effects

This course will be based on a handbook to be published in 2026, initiated and co-edited by the course instructor, on world languages of previous centuries and millennia (like Aramaic, Classical Chinese, Classical Greek, Latin, Sanskrit) and the present (e.g. Arabic, Chinese, Hindi, English, Spanish, Turkish), about 30 languages in total. The focus of this course will be on the major factors, or drivers, responsible for, or at least facilitating, the global spread of languages beyond their traditional L1 homelands, and on the major effects this spread (has) had on the major contact languages and the world languages themselves. Major candidates for such drivers of spread include factors that are structural-typological, sociolinguistic, political, historical, cultural, religious, economic, or educational in nature. Typically, it is not just one factor, but a combination of these factors that was, or has been, responsible for the spread. In a first step, our class discussions will weigh against each other the different criteria that can be, and have been, used in defining a world language in the first place. In a second step, major drivers of spread will be discussed in turn for world languages of the present and those of the past. In a third step, we will try to identify patterns of spread, especially in terms of recurrent constellations of drivers, and to determine the extent to which these patterns are time-stable. In a fourth step, common effects of spread on the structure and lexicon of world languages and their major contact languages will be explored (e.g. lexical borrowing, structural simplification, contact-induced grammaticalization). In a fifth and final step, we will jointly venture a brief outlook on the role of English as opposed to other world languages, and on candidates for new world languages, in the course of the next 50-100 years.

Chigusa Kurumada (University of Rochester) and Xin Xie (University of California, Irvine), Adaptive Speech Perception and Comprehension

This course provides an overview of psycholinguistic and cognitive science approaches to adaptive speech perception at both the segmental and suprasegmental levels. The complexity and variability of human speech raise a pertinent question: How do we perceive and process linguistically meaningful representations? When we think we hear a sound, say “t” vs. “d” in English, what is physically accessible to our perceptual systems is the mere vibration of air. The signal must be processed according to our internal knowledge and model of speech sounds to form a representation. And the knowledge and model must remain adaptive to account for the substantial variability between and within talkers – but how? This course covers the literature from classical (e.g., Shannon & Weaver, 1949) to modern theories, as well as behavioral and neuroimaging results. It is intended for students who are interested in speech perception, (socio-)phonetics, L2 perception/production, and prosody. Course topics will include:
1. Signal vs. Representation: What does it mean to “produce” and “perceive” speech? We will discuss foundational theories of encoding/decoding of linguistic representations through a noisy channel of speech communication.
2. Variability vs. Systematicity: Why is talker adaptation necessary? The goal of this session is to understand the sources and extent of signal variability as well as the systematic nature of human speech.
3. Flexibility vs. Stability: Students will be introduced to the ideas of signal normalization (= removal of variability) and learning (= storage of variability) to achieve subjective constancy of perception.
4. Mechanisms vs. Paradigms: We will address a common conceptual confusion between the underlying, hypothesized mechanisms and the paradigms used to test the hypothesis. This session includes a mini tutorial on R-based simulations of adaptive speech perception.
5. Recognition vs. Comprehension: We will have a summary discussion on efficient and robust speech processing across various ecologically relevant situations, tasks, and communicative contexts.
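The learning-based adaptation idea in topic 3 can be sketched as a toy model. All VOT values below are hypothetical, and the weighted-average update is a deliberately simplified stand-in for full Bayesian belief updating: exposure to a talker whose /t/ tokens have unusually short voice onset times shifts the listener's /t/ category mean, and with it the /d/–/t/ boundary:

```python
import statistics

# Hypothetical VOT (ms) category means for /d/ and /t/; with
# equal-variance Gaussian categories and equal priors, the optimal
# boundary sits halfway between the two means.
prior = {"d": 20.0, "t": 60.0}

def boundary(means):
    return (means["d"] + means["t"]) / 2

# Exposure: a talker whose /t/ tokens have unusually short VOT.
exposure_t = [45.0, 48.0, 50.0, 46.0]

# Simplified adaptation rule: the updated /t/ mean is a weighted
# average of the prior mean and the observed sample mean, with the
# data weighted by the number of exposure tokens.
prior_weight, data_weight = 1.0, len(exposure_t)
sample_mean = statistics.mean(exposure_t)
posterior = dict(prior)
posterior["t"] = (prior_weight * prior["t"] + data_weight * sample_mean) / (
    prior_weight + data_weight
)

print(f"boundary before exposure: {boundary(prior):.1f} ms")
print(f"boundary after exposure:  {boundary(posterior):.1f} ms")
```

The boundary moves toward the talker's short-VOT /t/ tokens, which is the qualitative signature of perceptual recalibration; the course's R-based simulations develop the probabilistic machinery this sketch compresses into one update.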

Natalia Levshina (Radboud University, The Netherlands), Communicative Efficiency in Human Languages

There is rich evidence that language users try to communicate efficiently, saving time and effort while making sure that they transfer the intended message successfully. In this course we will start by discussing the general principles of communicative efficiency in human languages and beyond. Then we will survey numerous manifestations of communicative efficiency from different linguistic domains, such as Accessibility Theory, phonological reduction of predictable units, Zipf’s law of abbreviation, omission of function words, minimization of domains and syntactic dependencies, diverse markedness phenomena, and many others. In several in-depth case studies we will discuss evidence from typology, corpora and experiments, which will illustrate how different types of linguistic data and methods can be used for detecting efficient structures and usage patterns. During this interactive course, the participants will do practical exercises and brainstorm potential applications of the theory for their own research projects. By the end of the course, students will be able to evaluate language structures and patterns of variation from the perspective of communicative efficiency, and to reflect on the strengths and weaknesses of different theoretical claims and empirical methods.
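Zipf's law of abbreviation mentioned above (more frequent words tend to be shorter) is easy to probe on any corpus. The sketch below uses a deliberately stacked toy word list purely for illustration; a real test would use a large corpus and a rank correlation between frequency and length:

```python
from collections import Counter

# Toy corpus (hypothetical): frequent function words are short,
# rare content words are long.
text = (
    "the cat sat on the mat and the dog sat on the rug "
    "while the extraordinarily melancholy philosopher contemplated"
).split()

freq = Counter(text)

# Split word types into a more-frequent and a less-frequent half.
words = sorted(freq, key=freq.get, reverse=True)
half = len(words) // 2
frequent, rare = words[:half], words[half:]

def mean_len(ws):
    return sum(map(len, ws)) / len(ws)

print(f"mean length, frequent half: {mean_len(frequent):.2f}")
print(f"mean length, rare half:     {mean_len(rare):.2f}")
```

The frequent half comes out markedly shorter, the direction Zipf's law of abbreviation predicts.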

Jiayi Lu (University of Pennsylvania), Syntactic adaptation

Recent work in psycholinguistics has revealed that speakers constantly track and adapt to the variability in linguistic signals produced by their interlocutors, an effect termed “linguistic adaptation.” Linguistic adaptation has been shown to occur at various levels of representation (e.g. at the phonetic, phonological, lexical, pragmatic, and syntactic levels). Adaptation has received increased attention in recent years from linguists and psychologists alike, who have recognized that, though long ignored, it poses a problem for static theories of language. In this course, we will focus on adaptation at the syntactic structural level, review the various experimental work conducted on syntactic adaptation, discuss how syntactic adaptation may relate to other widely studied phenomena including structural priming, alignment, and syntactic satiation, and explore how findings on syntactic adaptation might affect our understanding of linguistic knowledge, language learning, and language use.

Maryellen C. MacDonald (University of Wisconsin-Madison), Language Production and its (Psycho)linguistic Consequences

This course considers the nature of human language production and the consequences of these production processes on human cognition more broadly. We’ll first introduce the production processes that allow humans to turn an intent to communicate a message into a spoken, signed, or written utterance. Production processes will include message development, lexical selection (the mechanisms where an internal message state yields retrieval of words from long-term memory and their selection for an utterance), syntactic and word order processes in utterance planning, and phonological encoding processes (perhaps to a lesser extent than the others). Where possible, results from a variety of languages will be discussed. Next, the course will cover how these production processes may offer insight into other cognitive, linguistic, and psycholinguistic phenomena. For example, does the nature of lexical selection affect the rates of ambiguity in language, a big topic in language comprehension? Do production processes contribute to an explanation for certain kinds of diachronic change? How does the nature of production affect adaptation, priming and related phenomena? Course assignments and activities will be designed to promote students putting the course information to use in their own research. Activities may include short proposals, presentations, discussions about using production research in theorizing about language adaptation, diachronic change, or other topics that students are interested in.

Alec Marantz (New York University), Reconciling Representational and Usage-Based Grammars: Toward an Integrative Cognitive Neuroscience of Language

A generative grammar, whether instantiated as code in a Large Language Model or rules in a linguist’s account of a language, is a finite representation of a potentially infinite set. The most productive accounts of language use describe a process of determining which member of this stored, infinite set is the one being heard or the one that meets the speaker’s communicative needs. This view reveals a tension between seeing speakers as having memorized an infinite set of words and sentences and seeing them as generating the words and sentences in use. This course will explore how the dichotomy between memorized and generated (famously, between words and rules) is a false one. We will examine evidence from linguistic theory, computational linguistics, and psycholinguistic and neurolinguistic experiments supporting the claim that words and sentences are both always memorized and always generated, both in representations and in use. This project involves two important theoretical investigations: (1) determining how probabilities infiltrate deterministic rules, and (2) taming recursion in syntax. For the latter project, we will see that a grammar generates an essentially finite set of “phases” (domains of phonological and semantic interpretation) whose recursion potential is quite limited. Here we take the verb phrase and derivational morphology as our test cases. We will tie linguistic theory to neural activity through an examination of recent MEG results from experiments on auditory language comprehension, both at the word and at the sentence level, both in carefully designed experiments and in naturalistic listening to texts.

Dimitrios Meletis (University of Vienna, Austria), Writing systems and their use

Writing systems relate graphic marks (e.g., the letters of Roman script or Chinese characters) with structures of specific languages (e.g., English or Mandarin). As tools of everyday communication deeply rooted in culture, they are processed with our hands, eyes, and brains and socio-pragmatically adapted to serve a wide range of purposes and contexts (consider handwritten note-taking vs. typing a formal email on a virtual keyboard on a phone’s touchscreen). In many ways, writing resembles language—only in a more manageable microcosm: there are fewer types of writing systems than language types, the history of writing is much shorter than that of language, and its development and change can be more easily reconstructed since writing, unlike speech, is semi-permanent, with written records going back thousands of years. All of this renders writing an intriguing alternative lens into core linguistic questions concerning structure and use: On the one hand, remarkable functional universality is found in the few sound-based (phonographic) and meaning-based (morphographic) ways in which writing systems relate to languages as well as the fundamental cognitive processes involved in spelling and reading. On the other hand, the embeddedness of the world’s writing systems in highly diverse cultures and linguistic communities is only one factor resulting in great variation. This means that to get a complete linguistic picture of writing, we need to combine structural and systematic perspectives with use-oriented ones. 
Accordingly, this course not only covers central topics including the relationship between the spoken, signed, and written modalities of language, the structural description and typology of the world’s diverse writing systems, the psycholinguistics of reading and writing, the instruction and acquisition of literacy, and sociolinguistic aspects of writing, but does so with a focus on how an emerging ‘grapholinguistics’ also informs our understanding of language and linguistics in general.

Laura A. Michaelis (University of Colorado Boulder) and Elaine Francis (Purdue University), Constructions and the Grammar of Context

Why do languages offer different grammatical options for expressing a given message, and how do speakers choose among these options? For example, in the following exchange (adapted from Switchboard), what makes the first two options for B’s response more likely than the third one?
A: Uh huh. That’s some pretty good ideas. Why don’t you do something with those? You should run for a local school board position.
B: THAT I’m not so SURE about. / I’m not so sure about THAT. / #THAT I’m not so SURE about it. I’ve got a lot of things to keep me busy.
In this class, you will learn to use tools from Sign-Based Construction Grammar (SBCG) to answer these and other questions about functions of language in context. We will take constructions to be patterns of sign combination, of varying degrees of lexical fixity, with meanings and communicative functions that are not strictly predictable from the meanings of the component signs. We will use aspects of context, broadly understood as both conversational common ground and ideology/belief, to explain why language users choose the constructions that they do at a given juncture, and how and why they innovate, as when they combine constructions in novel ways. Through lectures, discussions, and data sessions, we will explore:
– the ways in which constructions index context, and how this can be modeled using SBCG;
– the effect of lexical content and discourse context on interpretation of constructions;
– the division of pragmatic labor within the ‘constructicon’;
– innovation and optimization as a source of new constructions;
– the realization of context effects in corpus distributions and in patterns of judgments or response times in experimental tasks.

Jeff Mielke (North Carolina State University), Tongue movements and phonology

Spoken language involves a complex system of rapid tongue movements that are typically out of view of speakers and listeners. In this course we will use ultrasound imaging to study tongue motion in real time and in recordings, in order to learn about how we use our tongues in speech production and to explore what lingual articulation reveals about the organization of language. We will begin by learning about tongue motion in general and proceed to exploring lingual articulation of sounds in various languages as well as familiar cases of variation and change in English. We will see what tongue motion reveals about speech planning, and we will see how tongue motion can drive language variation and change. Students will be introduced to best practices for lingual ultrasound imaging, recording, and analysis.

Marianne Mithun (University of California, Santa Barbara), Introduction to Language Documentation

We will work together with a speaker or speakers of a mystery language to begin documentation of the language and discover patterns inherent in it at the levels of sounds, words, and sentences, and their uses in larger stretches of unscripted speech. Students will be introduced to techniques of data collection, data management, and analysis. An emphasis will be on models of collaborative research. Over the course of the project we will consider the variety of goals of documentation and the various audiences for it now and in the future, within both communities and academia. This project will be directed primarily at describing the language in its own terms, but we will also consider the bi-directional relationships between documentation and typology, and between description and linguistic theory.

Fermín Moscoso del Prado Martín (Cambridge University, UK), Information Theory Tools for Language Research

Information Theory is a powerful set of concepts and methods that extend the power of Probability Theory. At its origin, Information Theory was developed for the study of communication over transmission lines, and was readily taken up as a powerful tool by physicists and electrical engineers. Despite its origins in the study of communication, it is only in the last two decades that the application of such tools and methods has become frequent in linguistic research. However, its use is now widespread in research covering many sub-areas of linguistics, including, but not restricted to: phonology/phonetics, morphology, syntax, historical linguistics, and language acquisition. This course will present an introduction to the concepts and tools of information theory specifically designed for linguistics students. In particular, we will explore the concepts of information and entropy, and the methods for their estimation from data. I will divide the study of Information Theory into two broad blocks:
– Static Methods, roughly corresponding to the paradigmatic aspects of linguistics, including estimations of diversity of linguistic constructions.
– Dynamic Methods, roughly matching the syntagmatic aspects of linguistics, including the study of predictability in sequences.
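The two blocks can be illustrated with a minimal sketch (toy corpus, plain maximum-likelihood estimates; serious work requires bias-corrected estimators): unigram entropy as a static, paradigmatic measure of diversity, and the conditional entropy of the next word given the current one as a dynamic, syntagmatic measure of predictability:

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a frequency distribution (MLE estimate)."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy corpus (hypothetical).
words = "the dog chased the cat the cat chased the dog".split()

# Static: entropy of the unigram distribution.
unigrams = Counter(words)
H_unigram = entropy(unigrams)

# Dynamic: conditional entropy H(next | current), computed from bigram
# counts via the chain rule H(Y|X) = H(X, Y) - H(X).
bigrams = Counter(zip(words, words[1:]))
contexts = Counter(w for w, _ in bigrams.elements())
H_cond = entropy(bigrams) - entropy(contexts)

print(f"unigram entropy:     {H_unigram:.3f} bits")
print(f"conditional entropy: {H_cond:.3f} bits")
```

The conditional entropy comes out well below the unigram entropy: knowing the current word makes the next word much more predictable, which is the syntagmatic structure the dynamic methods quantify.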

Emily Morgan and Masoud Jasbi (University of California, Davis), Computational models of generalization and item-specificity

A central tension in linguistics is how to account for the fact that languages contain generalizable structure—which makes linguistic productivity possible—but also rampant idiosyncrasies in how individual words and phrases are used. For example, an English speaker knows that the past tense of a novel verb glorp is glorped but the past tense of run is the irregular ran. This item-specific knowledge is not limited to strict irregularities, but also includes knowledge of quasi-regularities (e.g. feel-felt, deal-dealt, etc.) and statistical usage preferences such as verb subcategorization preferences and frequent collocations. Despite decades of debate, linguists still disagree about how these two types of knowledge relate to each other and to what extent each is recruited in language acquisition and processing. Computational modeling provides a new path to address this old question, allowing us to formalize and make testable predictions about the joint roles of generalization and item-specificity in language acquisition and processing. Advances in computing power, the development of new statistical methods, and the creation of large linguistic datasets are all contributing to new theoretical advances on this front. In this course, we will focus on computational models of how generalization and item-specific knowledge jointly contribute to language acquisition and processing. We will introduce different classes of models, focusing on 1) connectionist and modern neural network models; 2) Bayesian and information-theoretic models; and 3) exemplar models. We will apply these models to case studies in syntax and semantics, in both acquisition and processing.
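As a toy illustration of the exemplar class of models mentioned above, the sketch below decides between the regular -ed pattern and an irregular vowel-change pattern for a novel verb by summing similarity-weighted support from stored exemplars. The verb list and the choice of string similarity are illustrative assumptions only, not a model taught in the course:

```python
from difflib import SequenceMatcher

# Hypothetical stored exemplars: verb stems paired with the pattern
# that formed their past tense.
exemplars = [
    ("walk", "regular"), ("talk", "regular"), ("jump", "regular"),
    ("play", "regular"), ("sing", "irregular"), ("ring", "irregular"),
    ("drink", "irregular"),
]

def similarity(a, b):
    # Crude stand-in for phonological similarity.
    return SequenceMatcher(None, a, b).ratio()

def pattern_support(novel_stem):
    """Sum similarity-weighted support for each past-tense pattern."""
    support = {"regular": 0.0, "irregular": 0.0}
    for stem, pattern in exemplars:
        support[pattern] += similarity(novel_stem, stem)
    return support

for novel in ["glorp", "spling"]:
    support = pattern_support(novel)
    best = max(support, key=support.get)
    print(novel, "->", best)
```

A stem like "spling", which resembles the sing/ring neighborhood, attracts the irregular pattern, while a dissimilar stem like "glorp" defaults to the regular one; this similarity-driven behavior is what exemplar models contribute to the generalization-vs.-item-specificity debate.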

Corrine Occhino (University of Texas) and Ryan Lepic (Gallaudet University), The Linguistics of Sign Languages

The study of signed languages holds important lessons for linguistics. Research findings from studies of signers and signed languages have supported some and challenged other working theories of language. To better understand the human capacity for language, linguists must understand how language operates across modalities. As such, modern linguists are expected to know some basic information about how sign languages are structured and used. This course, designed for advanced undergraduates and graduate students, provides students with a usage-based introduction to the linguistic structure of American Sign Language (ASL). Throughout the course, students will engage with current linguistic research in phonology, morphosyntax, typology, and sociolinguistics. We will compare ASL with spoken languages to find what types of structures and functions seem to be independent of modality, and with other sign languages to learn about modality-specific features of signed languages. The course structure combines reading of academic articles and in-class discussion with hands-on analysis of primary ASL data. Ultimately, students will develop a short research proposal for a potential research project on a signed language, aligned with their interests in linguistics. Upon completing the course, students will have a foundation in signed language linguistics and an understanding of how signed language research connects to larger questions within the field of linguistics.

Jesus Olguin Martinez (Illinois State University) and Phillip Rogers (University of Pittsburgh), Advanced syntactic typology

Progress in language documentation, and especially the development of naturalistic corpora for a greater number of languages, have made possible more nuanced syntactic typological studies that go beyond classic presence/absence typology (Mithun 2021: 2). In this course, we show that one theoretical goal of syntactic typology should be the comparison of constructions with similar semantic-pragmatic characteristics (e.g., Olguín Martínez 2024abc), as in (1). Associative connections among such constructions reflect language users’ experience with particular patterns (Croft 2001; Diessel 2019), and analyzing them allows us to construct hypotheses about the organization of syntactic knowledge (Croft & Cruse 2004: 318). The idea that constructions can be organized into groups of formally and functionally connected configurations is not new (e.g., Shibatani 1985), yet much more work remains to be done in this area from a typological perspective. Students will learn how to explore relationships among semantically/pragmatically related constructions through the analysis of pertinent morphosyntactic variables, such as clause-linkage patterns, TAM marking, and clause order, among others. Along the way, we will familiarize students with some useful quantitative tools for both mono- and multi-factorial data (e.g., Multiple Correspondence Analysis [Glyn 2014], Hierarchical Configural Frequency Analysis [Olguín Martínez & Gries, in press]). A primary goal of the course is to equip students with a more holistic framework for syntax, one in which constructions are understood within a functionally and diachronically motivated configuration. Besides hypothetical manner constructions, we use other groups of constructions, traditionally neglected in most syntactic typological studies, as a proof-of-concept for the application of different methods.
(1) Hypothetical manner (Olguín Martínez & Gries, in press)
a. He swam as if he were a fish (simulative)
b. He acted as if he were a fish (mistaken identity)
c. It looks as if the removed content never existed (epistemic)

Pavel Ozerov (University of Innsbruck, Austria), Information Structure in interaction, grammar, and typology

Information Structure is concerned with how speakers adjust their message to the context and to its assumed representation in the interlocutors’ minds. In prevalent frameworks, sentence-level information is partitioned into uncontroversial (“presupposition”) and updating (“focus”) parts, and a referent is selected as an interpretation pivot (“topic”). This model is generally regarded as representing universal properties of discourse and language processing, which are reflected in grammar through “topic” and “focus” marking structures. Recent developments have cast doubts on these assumptions and the categories of topic and focus, which are never marked systematically and are notoriously difficult to “correctly” identify cross-linguistically. Instead, we take a bottom-up usage-based approach to Information Management. In particular, two types of novel linguistic data have propelled this line of research: (i) in-depth studies on markers of Information Structure in typologically diverse languages and (ii) natural multimodal interaction. New cross-linguistic research reveals a panoply of pragmatic-semantic categories which take part in the information management process. Studies of natural interaction analyze in detail the process of incremental, locally monitored structuring of information and attention-alignment. In combination, those studies have begun to reveal how these phenomena – and relevant language-specific markers – epiphenomenally produce vague interpretive effects that are traditionally associated with such notions as topic, presupposition, focus, or contrast. As a result, the emerging framework takes usage data seriously in order to produce richer cross-linguistic and pragmatic analyses, while offering a more parsimonious model with no top-down postulation of cognitive or pragmatic universals. 
Following an introduction to notions of Information Structure and methods of Interactional Linguistics, we will explore the diversity of factors involved in the interactional process of Information Management, including the opportunity to practice the analysis on students’ own data. The class is aimed at students interested in pragmatics, semantics, language documentation and description, and spoken language.

Thomas E. Payne (University of Oregon and SIL International), Grammatical description for language documentation

A descriptive grammar is the centerpiece of any program of linguistic documentation. In addition to the obvious value in providing material for academic research, descriptive grammars allow future generations to interpret other documentary materials such as texts and dictionaries. They also provide a foundation for educational materials and other mother tongue literature. This course is designed to be a “hands on” opportunity for researchers who could profit from four weeks of focused time outlining or making progress on a descriptive grammar. The ideal participant is a native speaker or other linguistic researcher with one or more years of direct experience doing linguistic research on an underdocumented language. Teams of researchers who study the same language may also participate. A digital database of text materials of various genres (narrative, expository, hortatory) is desirable. Issues of phonology and orthography should be largely resolved. The main goal of the course is for each participant or team to produce an outline of a descriptive grammar that highlights key morphological and syntactic features of a particular under-documented language. Topics to be addressed include:
– What is a grammatical description?
– Referential expressions
– Modification
– Non-verbal predicates
– Verbal predicates
– Inflection and derivation
– Verbal categories to look for (including tense, aspect, modality, evidentiality, location, direction, voice, valence, actional type, and others)
– Verb subclasses
– Complementation
– Special constructions
– Questions
– Imperatives
– Negation
– Clause combining
Insights from various theoretical traditions may be employed where they are useful, principally Cognitive Grammar, Construction Grammar, and Role and Reference Grammar. However, this will not be a course on linguistic theory. Theory will only be employed as it proves useful to clear, insightful linguistic description.
The major theoretical assumption of the course is that any language evolves in a particular community in response to the universal human need to communicate.

Florent Perek (University of Birmingham, UK), Distributional Semantics for Linguistic Research

Distributional semantics seeks to capture the meaning of words by automatically extracting information about their contexts of use from large corpora, under the assumption that words with similar meanings tend to be found in similar contexts. In a distributional semantic model (DSM), also called word embedding, the meaning of a word is represented by an array of numerical values derived from its co-occurrences, turning the informal notion of meaning into a more precise quantification which is built from usage data and lends itself well to quantitative studies. This course will provide an introduction to distributional semantics and to a range of ways it can be used to conduct empirical research in linguistics. I will first describe the basic ideas behind the distributional semantic approach, and the various ways in which it has been computationally implemented, notably the “bag-of-words” approach based on lexical co-occurrences, and neural network approaches like word2vec. I will discuss various off-the-shelf DSMs as well as tools to create tailor-made DSMs from your own corpus data. Finally, I will demonstrate various ways in which DSMs can be reliably used as a source of lexical semantic information, notably through semantic measures such as semantic distance, measures of semantic spread, and clustering into semantic classes. Examples will include research in syntactic productivity, language change, language development, and descriptive grammar. The course will feature a mixture of lectures and hands-on sessions. Prior knowledge of R is advised for the hands-on sessions, and some knowledge of a programming language like Python or Java is a plus to take full advantage of this course.
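The “bag-of-words” approach lends itself to a compact illustration. The sketch below is a toy example, not course material: the corpus and all word choices are invented. It builds count-based word vectors from a window of lexical co-occurrences and compares words by cosine similarity:

```python
from collections import Counter, defaultdict
import math

def train_dsm(tokens, window=2):
    """Toy bag-of-words DSM: a word's vector is its co-occurrence
    counts with the words in a +/- window span around each token."""
    vectors = defaultdict(Counter)
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vectors[w][tokens[j]] += 1
    return vectors

def cosine(v1, v2):
    """Cosine similarity; semantic distance is often 1 - cosine."""
    dot = sum(v1[k] * v2[k] for k in v1)
    n1 = math.sqrt(sum(x * x for x in v1.values()))
    n2 = math.sqrt(sum(x * x for x in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the dog bit the postman .").split()
dsm = train_dsm(corpus)
# "cat" and "dog" occur in similar contexts, so they come out more
# similar to each other than either is to "postman"
print(cosine(dsm["cat"], dsm["dog"]) > cosine(dsm["cat"], dsm["postman"]))  # True
```

Real DSMs are trained on millions of tokens and typically reweight raw counts (e.g., with pointwise mutual information) or learn dense vectors, as word2vec does; the principle, however, is the same.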

Marc Pierce (University of Texas at Austin), Theories of Sound Change from the Neogrammarians to Today

The study of sound change remains one of the centerpieces of historical linguistics, e.g., the doctrine that sound change is regular is a crucial piece of the comparative method. Yet many students are unfamiliar with the evolution of theoretical approaches to sound change. This course is therefore intended to remedy that deficit. It investigates theories of sound change, beginning with the Neogrammarians of the late 19th century. In this portion of the course, we will read excerpts from two of the most important Neogrammarian works, Osthoff & Brugmann (1878/1967) and Paul (1880/2015), which lay out the Neogrammarian hypothesis of regular, phonetically/phonologically conditioned sound change in detail. We will then turn to Structuralist approaches, looking at Sapir (1921), Bloomfield (1933), Martinet (1952), and Hockett (1958). Sapir, unlike his contemporary Bloomfield, rejected the Neogrammarian idea that sound change must be phonetically/phonologically conditioned, while Bloomfield, Hockett and Martinet all codify Structuralist ideas about sound change. (Martinet’s views are more influenced by European Structuralism, Hockett’s by American Structuralists, and Bloomfield is somewhere in the middle.) The next readings, King (1969), McMahon (1991), and Zubritskaya (1997), present various stages of generative ideas about sound change. From there, we will move to phonetic approaches, focusing on Ohala (1993). The final session of the course will examine sociolinguistic approaches to sound change, specifically Labov (1963) and (1981), as well as the historiographical discussion of such approaches in Kiparsky (2016). The approach taken here is (generally) chronological, but not teleological. It instead focuses on how theories adapt as they encounter new data and new ideas (e.g., it was believed for several decades that sound change was not regular in unwritten languages, until Bloomfield rebutted this idea). 
Knowledge of German would be useful, but is not required, as all readings will be in English.

Janet B. Pierrehumbert (University of Oxford, UK), Words and the Nature of Artificial and Human Intelligence

Is AI intelligent? Can state-of-the-art large language models (LLMs) pass the Turing Test? Part of the answer to these questions depends on whether LLMs display human-like capabilities in learning, producing, and understanding words. No other animals share our ability to use words to exchange information about objects, events, and abstractions that are not directly observable. A large and adaptable lexicon is accordingly a hallmark of human intelligence. This course will systematically compare psycholinguistic results on the mental lexicon with the core properties of LLMs. It will distinguish large-scale memorization from generalization, and explain the mechanisms by which successful generalizations occur. About half the course will cover successes of the current transformer-based LLMs. Like the original neural network models, current LLMs capture cumulative, gradient effects of similarity and frequency in predicting the acceptability or likelihood of novel forms. These effects appear in the cognitive literature under the rubric of “analogy” and “lexical gang effects”. Advancing beyond the original neural network models, LLMs can exploit similarities in meaning as well as in form, and they can learn from experience without direct feedback. They can activate and de-activate subfields of the vocabulary depending on the topic of discussion. The other half of the course will cover shortcomings of LLMs, concentrating on two problem areas. First, LLMs do not bootstrap a mental lexicon in the same way that humans do. Second, although their treatment of word meaning and semantic similarity may work well for topical words, it breaks down for logical operators such as epistemic modals and sentential adverbs. These problems go towards explaining why LLMs often produce logically incoherent discourse. On the assumption that being logical defines the best of human intelligence, we will conclude that LLMs are not yet intelligent.

Michael Ramscar (University of Tübingen, Germany), Introduction to Discriminative Linguistics

Linguistics has traditionally assumed a categorical model of language in which signals comprise discrete form elements that are associated with discrete meanings, and the goal of research has been to identify and classify these elements and the inductive processes of composition and decomposition they support. By contrast, theories of learning and communication have adopted discriminative (or deductive) models based on systems. “Associative” learning is typically modeled as a discriminative process that enables learners to acquire a predictive understanding of the environment via mechanisms that tune systems of representations. Meanwhile, information theory does not treat “information” as a property of individual transmitted signals, but rather as a function of all of the symbols that could have potentially been sent. This course will introduce an approach to human communication based on the latter model. It will describe the basic principles of learning and information theory, along with various empirical findings supporting the idea that human communication is subject to the constraints imposed by these principles, and it will describe how, from this perspective, natural languages should be seen as probabilistic systems that exhibit continuous variation within a multidimensional space of form-meaning contrasts. This systematic picture of communication indicates that discrete descriptions of language at an individual (psychological) or community (linguistic) level must necessarily be idealizations that inevitably lose information. The course will describe how the development of a discriminative, information-theoretic approach to language can lead in turn to an appreciation of the vast array of socially evolved structure that serves to support shared, probabilistic communication systems. 
In the process, we will look at some systems that have generally been ignored by categorical approaches (either because they appear redundant — grammatical gender — or simultaneously problematic and random — personal names) and show how this perspective can lead to an appreciation of the remarkable amount of evolved communicative structure that they can reveal, both within and across languages. Finally, since humans are linguistic animals, one might expect insights from a successful theory of human communication to extend beyond linguistics: accordingly, the course will describe how the application of discriminative linguistics can shed new light on our understanding of lifespan cognitive development.
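The information-theoretic point — that a signal's informativity is a property of the whole system of alternatives rather than of the signal itself — can be sketched numerically. The name distributions below are invented for illustration; only the arithmetic is standard:

```python
import math

def surprisal(p):
    # information (in bits) carried by a signal with probability p
    return -math.log2(p)

def entropy(dist):
    # expected surprisal over the whole system of possible signals
    return sum(p * surprisal(p) for p in dist.values())

# the same name carries different amounts of information depending on
# the system it belongs to: hearing "Mary" rules out more alternatives
# in the community where the name is rarer
names_a = {"Mary": 0.5, "Ann": 0.25, "Jo": 0.25}
names_b = {n: 0.125 for n in
           ["Mary", "Ann", "Jo", "Kim", "Lee", "Pat", "Sam", "Max"]}
print(surprisal(names_a["Mary"]))  # 1.0 bit
print(surprisal(names_b["Mary"]))  # 3.0 bits
print(entropy(names_b))            # 3.0 bits per name, on average
```

Nothing about the name "Mary" itself changed between the two communities; only the system of contrasts it participates in did.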

Terry Regier (University of California, Berkeley), Computational Semantic Typology

This course will provide an overview of computational work on semantic typology, with emphasis on semantic typology of the lexicon. Topics to be addressed include semantic maps, efficient communication, and the emergence of semantic systems from the dynamics of learning and communication. We will draw on case studies from semantic domains such as color, kinship, numeral systems, and container names, among others.

Arnaud Rey (CNRS and Aix-Marseille University, Marseille, France), Non-human Primates and Language

This course asks why language has not developed in other species, notably in the non-human primates that are genetically closest to us. Contrary to hypotheses invoking anatomical limitations or the absence of recursion, I will defend the idea that the key limitation lies in the motor control that would enable primates to simply name the objects of our world, a central mechanism for explaining our capacity for abstraction.

Arnaud Rey (CNRS and Aix-Marseille University, Marseille, France), Implicit Associative Learning and Language

In 1957, the publication of “Syntactic Structures” by N. Chomsky and “Verbal Behavior” by B. F. Skinner introduced two radically different approaches to the study of language. After a brief and critical presentation of these approaches, I will pave the way to current approaches based on language use and implicit statistical learning, showing that these approaches have slowly created a favorable climate for a paradigm shift in the study of language processes. I will also argue that the central notion of syntax should be reconsidered, if not abandoned, when language development is taken into account.

Caroline Rowland and Zara Harmon (Max Planck Institute for Psycholinguistics, The Netherlands), Child Language Acquisition

This course provides an introduction to topics in the acquisition of a first language by infants and children. Language acquisition researchers (linguists, psychologists, psycholinguists, computational scientists, and neuroscientists) work together to discover how children learn language and why humans are the only known species capable of doing so. In this course, we will discuss how diverse methodologies address the central theoretical questions of language acquisition research. We will address questions such as: how children learn to associate words with their meanings; how they learn to combine these words into grammatical sentences; what factors play a role in this learning process; what changes in the brain during development; and why some children are much quicker to learn to speak than others. To study these questions, we will survey influential and cutting-edge experimental methods, corpus-based approaches, and computational modelling, and discuss how these methods test hypotheses related to language development.

Mark Seidenberg (University of Wisconsin-Madison), Linguistics and Reading Reform

This course will examine the potential role of linguistics in improving literacy outcomes in the US and other countries. Literacy levels are remarkably low, with individuals from some groups (low SES, some ethnic/racial minorities) at particularly high risk for poor outcomes. Efforts to alter this situation focus on using curricula and practices aligned with basic behavioral and neuroscience research on reading and dyslexia. However, progress in reading depends heavily on knowledge of the language used in school and in books, which varies for several reasons. The course will focus on making better use of basic research in linguistics to improve reading achievement, focusing on what teachers need to know about language and what children need to know. Topics will include aspects of morphology, phonology, sentence structure, and language variation relevant to beginning reading; roles of explicit instruction (e.g., in rules) and implicit (statistical) learning; identifying and eliminating biases in curricula and practices that disadvantage children from “atypical” language backgrounds.

Jason A. Shaw and Michael C. Stern (Yale University), Dynamics of Speech Production

Systems that change over time, from particles to climates to stock markets, are often well described as dynamical systems. Speech production involves coordinated movements of articulators (for example, the tongue and lips). These actions are generated and controlled by the nervous system and unfold over time according to laws, which can be formulated using dynamical systems theory. This class provides an introduction to the types of dynamical systems that have been proposed to describe and explain human speech production, including (1) articulatory kinematics, i.e., the movements of speech organs such as the tongue, lips, vocal folds, etc., and (2) neural activity governing intention and control of speech. Dynamical systems bridge traditional divides between phonology and phonetics. They provide a formal language for explicitly relating the continuous dimensions of speech production, i.e., a traditional domain of phonetics, to sound patterns described in terms of discrete categories, i.e., a traditional domain of phonology. At the end of this class, students will be able to relate the sound patterns of the world’s languages to the dynamics that give rise to them.
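One influential way to formalize a single gesture in this tradition is as a critically damped mass-spring system driven toward a target. The sketch below is a generic numerical illustration with invented parameters, not a model presented in the course:

```python
import math

def gesture(x0, target, k, dt=0.001, dur=0.3):
    """Simulate a gesture as a critically damped mass-spring system
    (unit mass): x'' = -k*(x - target) - 2*sqrt(k)*x'.
    Semi-implicit Euler integration; all parameters are illustrative."""
    x, v = x0, 0.0
    traj = []
    for _ in range(int(dur / dt)):
        a = -k * (x - target) - 2.0 * math.sqrt(k) * v
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

# lip aperture (arbitrary units) closing from 10 to 0 for a bilabial stop
traj = gesture(x0=10.0, target=0.0, k=400.0)
# the articulator approaches its target smoothly and without overshoot
print(abs(traj[-1]) < 0.5, min(traj) > -0.01)  # True True
```

The discrete category (the phonological target) and the continuous movement (the phonetic trajectory) are two aspects of the same dynamical system, which is the sense in which such models bridge phonology and phonetics.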

Naomi Shin (University of New Mexico), Acquisition of Variation

Language acquisition research has traditionally focused on aspects of grammar that are fixed or invariant. For example, in English we say I ate the apple, but not Ate I the apple with the grammatical subject ‘I’ after the verb ‘ate’. Yet, many grammatical patterns are in fact variable. For example, in Spanish, subject pronouns can be expressed or omitted, e.g., yo comí la manzana ~ comí la manzana both mean ‘I ate the apple’. This type of grammatical variation is not random; it is constrained by numerous factors, resulting in complex probabilistic patterns that are highly systematic among adults. How do children acquire these patterns? This course reviews a proposal outlining a four-stage developmental pathway for the acquisition of variation and covers findings from recent research on acquisition of variation, which suggest that development depends on child-internal factors and the nature of the input, including the frequency of the words and grammatical structures in child-directed speech. We will also highlight the importance of understanding linguistic variation for research involving bilingual and bidialectal children, for whom variation is sometimes mistaken as indicating delay or disorder. Students will complete hands-on activities involving corpus data in order to gain a deeper understanding of how to study acquisition of variation. These activities will also guide students to identify outstanding questions that may pave the way for future research projects.

Shahar Shirtz (Arizona State University) and Jordan Douglas-Tavani (University of California, Santa Barbara), Typology, Contact, and Convergence: The Southern Region of the PNW Sprachbund

The Pacific Northwest (PNW) Sprachbund was described by Swadesh (1953), Thompson and Kincade (1990), and most recently, by Beck (2000). From north to south the area extends from southern Alaska, through British Columbia, Washington, and Oregon, and stretches eastward through parts of Idaho and western Montana. This expansive language area is home to dozens of languages from many different families. Many of these languages remain scarcely described, despite the availability of primary, published language material. In this course, we will learn about the PNW Sprachbund, the different isoglosses that scholars have used to identify it, and the current state of research into the area. We will examine curated texts from several of the less-described languages of the PNW, especially from its southern region (i.e., Oregon and Washington). We will do so by conducting morphological and syntactic analyses of primary published texts of dormant languages, for which we do not have the opportunity to consult speakers and language teachers. We will compare and contrast our analyses, identify ways to test them against each other, and use them to critically-yet-generously examine and question previous analyses of individual languages as well as the viability of different PNW isoglosses in the southern region of the Sprachbund.

Andrea Sims and Maria Copot (The Ohio State University), Introduction to Quantitative Morphology: Questions, Methods, Models

Like many areas of linguistics, morphology has been undergoing a quantitative shift. Quantitative methods allow for new ways of examining questions that have long been central to morphological theory. They have also opened the door to new empirical and theoretical questions, and new kinds of theoretical models. This course introduces current questions, methods, and models from the vantage point of quantitative morphology. The focus will be on the Word-and-Paradigm tradition, where information-theoretic, corpus-based, and computational methods have been developed to model relationships in a language’s lexicon. The course will also highlight the importance of thinking of morphological systems as complex wholes for understanding morphological structure both language-internally and cross-linguistically. A prior course in morphological theory will be helpful as background, though not strictly necessary. Basic familiarity with Python or R will be helpful. No particular statistics or math background is assumed.
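As a small illustration of the information-theoretic style of analysis (the toy lexicon and suffixes below are invented, not drawn from the course), conditional entropy can quantify how much knowing one paradigm cell reduces uncertainty about another:

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a Counter of outcomes."""
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# toy lexicon: (singular suffix, plural suffix) for six lexemes
lexicon = [("-a", "-i"), ("-a", "-i"), ("-a", "-e"),
           ("-o", "-e"), ("-o", "-e"), ("-o", "-e")]

# H(plural): uncertainty about the plural exponent overall
h_pl = entropy(Counter(pl for _, pl in lexicon))

# H(plural | singular): average uncertainty once the singular is known
by_sg = {}
for sg, pl in lexicon:
    by_sg.setdefault(sg, Counter())[pl] += 1
h_pl_given_sg = sum(sum(c.values()) / len(lexicon) * entropy(c)
                    for c in by_sg.values())
print(round(h_pl, 3), round(h_pl_given_sg, 3))  # 0.918 0.459
```

In Word-and-Paradigm work, quantities of this kind computed over whole paradigms underlie measures of inflectional predictability such as average conditional entropy.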

Elena Smirnova (University of Neuchâtel, Switzerland), Diachronic Construction Grammar

When the framework of Construction Grammar (CxG) was originally developed, its main objective was to accurately describe speakers’ linguistic knowledge. Thus, CxG emerged as a synchronic theory of language. It was only later that it proved to be a valuable descriptive tool for analyzing language change. As a usage-based and fundamentally cognitive approach to language structure, CxG is well-suited to model gradual, incremental, bottom-up changes resulting from language use. Concepts such as frequency effects, analogy, chunking, entrenchment, and conventionalization naturally fit within the emerging field known as Diachronic Construction Grammar (DCxG).
This course will provide an overview of recent developments in DCxG research and explore the central questions driving current studies in this area. In the first part of the course, we will focus on theory, discussing the conceptual fundamentals of DCxG and their application to specific instances of language change. In the second part, we will examine individual case studies from various domains of grammar and different languages. In the third part, we will debate the strengths and limitations of the DCxG approach and identify questions for future research.

Kenny Smith (University of Edinburgh, UK), Origins and Evolution of Language

We will review current theories which attempt to explain how and why human language evolved, covering both the biological evolution of the human capacity for language, and cultural evolution of languages themselves. Modern evolutionary linguistics is a highly interdisciplinary field, and we will touch on the basics of evolutionary biology and gene-culture coevolutionary theory, animal communication and animal cognition, computational and experimental approaches pioneered by our group in Edinburgh, and various other topics. No prior knowledge of these areas is assumed, and the course will be accessible regardless of background.

Morgan Sonderegger and Michael McAuliffe (McGill University, Canada), Practical Corpus Phonetics

Corpus phonetics, the study of speech production in non-laboratory settings, has become a major approach in phonetics and phonology research, in contexts from fieldwork data with small numbers of speakers to large-scale cross-linguistic studies of thousands of speakers. This course aims to bridge the gap between the availability of corpus phonetic tools and their practical application. We will survey computational tools for constructing and working with speech corpora to answer linguistic questions, including automatic transcription (e.g. Whisper), speaker diarization (PyAnnote), text-to-speech alignment (“forced alignment”: Montreal Forced Aligner), speech database systems for representing and querying corpora (PolyglotDB), and automatic phonetic measurement (e.g. for vowel formants, VOT). The course will be centered on hands-on labs where participants gain experience with this rapidly-growing ecosystem of tools. We will discuss case studies and best practices for large-scale corpus studies. Participants are welcome to bring their own data for the labs or use data provided by the instructors.
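Once a corpus has been aligned, many analyses reduce to queries over time-aligned intervals. The sketch below assumes aligner output has already been parsed into (label, start, end) tuples; the data and the vowel subset are invented for illustration, and a real pipeline would query the corpus through tools such as PolyglotDB rather than hand-rolled code:

```python
from collections import defaultdict

VOWELS = {"AA", "IY", "UW"}  # small ARPABET subset, for illustration

def mean_vowel_durations(intervals):
    """Mean duration (seconds) per vowel category, from
    (phone_label, start_time, end_time) tuples."""
    sums, counts = defaultdict(float), defaultdict(int)
    for label, start, end in intervals:
        base = label.rstrip("012")  # strip ARPABET stress digits
        if base in VOWELS:
            sums[base] += end - start
            counts[base] += 1
    return {v: sums[v] / counts[v] for v in sums}

# hypothetical forced-alignment output: phone label, start, end (s)
aligned = [("DH", 0.00, 0.05), ("AH0", 0.05, 0.09), ("K", 0.09, 0.16),
           ("AA1", 0.16, 0.28), ("T", 0.28, 0.35), ("IY1", 0.35, 0.45)]
durs = mean_vowel_durations(aligned)
print(durs)  # AA and IY durations; AH falls outside our toy subset
```

The same query-over-intervals logic scales from a single fieldwork recording to thousands of speakers; what changes is the storage and query machinery, which is what the course's tool ecosystem provides.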

Sabine Stoll (University of Zurich, Switzerland), Linguistic Diversity in Language Acquisition

Language acquisition research faces a big challenge: explaining how children cope with the extreme structural diversity of human languages. Given the lack of structural universals and the vast space of diversity at all linguistic levels, this puzzle remains unresolved. In this course, we will first explore the challenges learners encounter when dealing with the immense diversity found in languages worldwide. We will then discuss how embracing this diversity in our studies can help avoid biased results. Following this, we will delve into the universal mechanisms underlying language acquisition, including theory of mind, statistical learning, and priming. To better understand how these mechanisms operate within the diverse landscape of human language, we will examine universal input patterns that enable learners to build their lexicon and grammar. We will discuss in some detail how a number of linguistic features are acquired in a diverse set of languages. Finally, we will discuss these mechanisms and linguistic patterns in the light of language evolution.

Benedikt Szmrecsanyi (KU Leuven, Belgium), Language Use and Language Variation

This course will survey recent trends in usage-based variationist linguistics and variation studies. We will thus be specifically concerned with variation analysis that builds on the premise that the structure and knowledge of variation and of probabilistic variable grammars is shaped by language use, performance, and functional needs. This points to perhaps one of the major splits between North-American-style variationist sociolinguistics (which is often, but not always, rather system- and competence-oriented) and European-style variation analysis (which is typically more usage-based and performance-oriented, with a special interest in the fluidity of linguistic knowledge). In the course, I will discuss some recent methodologies and findings of usage-based variation studies. Topics will include the following: (1) comparative variation analysis, (2) variation across registers, (3) variation in diachrony, (4) psycholinguistics and variation, and (5) the relative complexity of variation.

Rachel M. Theodore (University of Connecticut and the National Science Foundation) and Lynne Nygaard (Emory University), Beyond the binary: The role of lexically guided perceptual learning in speech perception

Mapping speech to meaning requires listeners to solve a massive computational problem that arises due to a lack of invariance between speech acoustics and speech sound representations. There is now a robust body of evidence indicating that listeners solve this problem, at least in part, by dynamically adapting to structured phonetic variation. For example, when exposed to acoustic energy ambiguous between speech sounds, listeners can use lexical information to resolve the ambiguity. Changes in the mapping between speech and meaning persist even when lexical context is subsequently removed. Lexically guided perceptual learning has been observed for myriad speech sound contrasts, languages, and language users; indeed, it is one of the most influential discoveries in the domain of speech perception. This course draws on the lexically guided perceptual learning literature towards two intersecting aims. First, we will present, explore, and challenge key tenets that have emerged from this literature. A computational instantiation of the belief-updating theory of speech adaptation will provide a core framework for interpreting this body of work, through which perceptual learning is viewed as a graded outcome. The second aim is to explore scientific approaches more generally through the lens of perceptual learning. We will explore the consequences of paradigm rigidity (e.g., failure to question conventional tasks), underspecified “verbal” theories, and research that is siloed across disciplines and dissociated from functional communication. Through these aims, course attendees will learn to formalize the belief-updating theory of speech adaptation in a computational model, understand the contributions of formalized models in relation to verbal theories, and recognize perceptual learning as a graded – not binary – outcome. 
In addition, attendees will learn benefits and potential pitfalls of interdisciplinary research and of the need to understand lexically guided perceptual learning in the context of a complex linguistic system that serves social communicative goals.
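The graded (rather than binary) character of adaptation can be conveyed with a conjugate normal-normal belief update. This is a generic illustration of belief updating, not the authors' computational model, and all quantities are invented:

```python
def update(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal update of a belief about a talker's
    category mean; k is a graded learning rate between 0 and 1."""
    k = prior_var / (prior_var + obs_var)
    return prior_mean + k * (obs - prior_mean), (1 - k) * prior_var

# prior belief about a talker's /s/ spectral centroid (Hz; illustrative)
mean, var = 6000.0, 500.0 ** 2
# ambiguous tokens heard in contexts where the word demands /s/
for token in [5200.0, 5300.0, 5250.0, 5150.0]:
    mean, var = update(mean, var, token, 300.0 ** 2)
# the belief shifts partway toward the evidence, and uncertainty
# shrinks with each token: learning is graded, not all-or-none
print(5150 < mean < 6000, var < 500.0 ** 2)  # True True
```

Each update moves the belief only a fraction of the way toward the new token, with the fraction itself shrinking as the listener becomes more certain — the signature graded outcome the course emphasizes.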

Malathi Thothathiri (The George Washington University), Dynamic Language Updating and Use

This course will offer a psycholinguistic and neurolinguistic perspective on how sentence comprehension and production adjust to continual language experience. Which sentence structures we produce or understand easily depends on the current context. Language use is not static in L1 speakers – it can change dynamically depending on verb bias, structural priming, and the recruitment of extralinguistic abilities like cognitive control. The course will cover statistical learning, error-based learning, structural priming, conflict adaptation, and other topics as they pertain to sentence comprehension and production. We will discuss a variety of methodologies including behavioral techniques, eye-tracking, and fMRI. The format will be a combination of lectures and seminar-style readings and discussions with a focus towards empirical research, including posing new questions and designing thought experiments to address those questions.

Catherine Travis (Australian National University), Variationist sociolinguistics: Testing hypotheses about language change

Variationist sociolinguistics seeks to account for the structural variability inherent in linguistic systems, that is, the way in which different (social and linguistic) factors impact on speaker choices between alternate forms that express generally similar meanings. According to this approach, grammar is represented in the probabilistic conditioning of variation, and grammatical change can be observed in shifts in that conditioning over time. In this course, I demonstrate how this methodology can be applied to test hypotheses about mechanisms and sources of change. As well as a general grounding in the variationist approach, students will be introduced to the variationist comparative approach and its application in identifying cross-linguistic similarities and differences and in assessing contact-induced grammatical change. This course will be taught interactively, with opportunities for students to develop testable hypotheses applicable to specific language contexts they are interested in.

Rory Turnbull (Newcastle University, UK), Modeling Linguistic Networks

Network science is the study of complex systems, formalized as networks consisting of nodes and links between nodes. This course provides an introduction to the application of network science to linguistics. Network models are applicable to a wide range of topics in nearly every linguistic subfield. This course will cover the use of word networks to model semantic, syntactic, morphological, or phonological relations among words; the use of social network and epidemiological modeling to examine the spread of linguistic patterns throughout a community; and how dynamical models can be used to simulate the growth or shrinkage of such networks. As such, these models touch on various topics in phonology, morphology, semantics, syntax, first and second language acquisition, historical and sociolinguistics, psycholinguistics, and beyond. The course will provide hands-on experience in coding and developing network analyses. Prior experience in Python or a similar programming language is beneficial but not required. Throughout the course, students will develop a small research project tuned to their interests.
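As a minimal example of the node-and-link formalization (the word list is invented, and spelling stands in for phonemic transcription), the sketch below links words at edit distance 1 and computes each node's degree, i.e., its phonological neighborhood size:

```python
from itertools import combinations

def lev(a, b):
    """Levenshtein edit distance by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

words = ["cat", "bat", "hat", "cut", "cast", "dog", "dot"]
links = {w: set() for w in words}
for u, v in combinations(words, 2):
    if lev(u, v) == 1:  # neighbors differ by a single edit
        links[u].add(v)
        links[v].add(u)
degree = {w: len(ns) for w, ns in links.items()}
print(degree["cat"], degree["dog"])  # 4 1: "cat" is a hub, "dog" is sparse
```

Degree is only the simplest network measure; the same structure supports clustering coefficients, path lengths, and the dynamical growth models discussed in the course.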

Rosa Vallejos-Yopán (University of New Mexico), Language Contact in Language Shift Ecologies

Much work has been done to understand the effects of language contact between major languages in communities with relatively stable bilingual practices (e.g., Spanish/English in the US, English/French in Canada). However, a large percentage of language contact today occurs in situations of language shift, where the use of ancestral indigenous languages is decreasing across generations in favor of the use of major (often colonial) languages. Research on contact involving indigenous languages requires a deep understanding of the social context, including complex ideologies of authenticity, proficiency, and authority. In addition, it faces numerous methodological challenges. For example, a common assumption is that unilateral diffusion from a dominant to an indigenous language is more likely. However, recent research shows that multilateral diffusion should not be discarded, but the lack of corpus data or data from different time periods for most indigenous languages makes it difficult to assess directions of influence. This course will focus on the linguistic effects of contact in shifting ecologies from both sociocultural and structural perspectives. What determines a particular outcome of language contact? Can “anything” happen language-internally given enough social pressure? What discourse strategies and cultural practices facilitate or hinder the transfer of features across languages? We will address these questions paying particular attention to the origins and development of structural changes in the areas of phonology and morphosyntax in different communities across the Americas. We will analyze language samples to identify potential contact-induced patterns and explore social, cultural, and historical explanations for specific linguistic outcomes.

Abby Walker (Virginia Tech) and Charlotte Vaughn (University of Maryland), Sociolinguistic Perception

This class serves as an introduction to the findings, methods, and current questions in sociolinguistic perception, the process through which listeners/viewers build ideas about and make judgements on speakers/signers. In the class we will look at some classic papers concerning attitudes to different linguistic varieties, but the major focus of the class will be third-wave approaches to understanding the social meaning of linguistic variants (e.g., Hall-Lew, Moore & Podesva 2021), covering enregisterment, indexicality, non-arbitrary sources of meaning, and stylistic obsolescence and change, with a focus on how we can explore these concepts through perception tasks. We will also look at how the social meanings of variants work together – engaging with concepts of markedness, stylistic coherence, and bricolage – and how social meanings interact with non-linguistic information. While the majority of the class will focus on speaker/variety perception, we will also explore how speaker perception interacts with speech perception. Throughout the course we will focus on the methods used by researchers to explore language attitudes and sociolinguistic perception. We will look at how data from these tasks not only informs linguistic theory, but can be, and has been, used to combat linguistic prejudice. At the same time, we will also investigate the possibility that these tasks can sometimes enable and entrench discriminatory practices. As well as engaging with readings and in-class discussion, assessment will involve a proposal for a sociolinguistic perception study that builds on research discussed in the class in combination with students’ own interests. While the class will primarily look at the perception of spoken language, and be heavily focused on phonetic/phonological variables, exploration of perception in other modalities (signed and written language) and at other levels of structure (e.g., lexical, morphosyntactic) will be encouraged.

Steve Wechsler (University of Texas), Evolutionary Lexical Functional Grammar

In this course we will seek explanations for grammatical systems by modeling their acquisition and evolution.  We will learn to build and use stochastic models of reinforcement learning, drawing upon techniques in use in psychology since the 1950s.   With these models we explore the conditions under which grammatical systems are predicted to emerge and grow in complexity, and the forms of the resulting systems.  We posit two types of background condition: message probabilities or patterns of preference for what sorts of messages speakers choose to express in a given context (cp. functionalism); and form probabilities or patterns of preference for how to express a given message (cp. language processing).  In addition, language learning involves imitation and is therefore influenced by the learner’s similarity judgments (cp. lexical semantics).  Turning to grammar emergence and evolution, the message probabilities influence the emergence of grammatical relations paired with semantic composition rules; the form probabilities influence the emergence of formal expressions of those grammatical relations; and similarity judgments influence the emergence of patterns obtaining across words, such as argument structure generalizations.   These three types of grammatical structure are conveniently represented in Lexical Functional Grammar as functional structure, constituent structure, and argument structure, and so the course will include an introduction to LFG.  We will model the emergence and evolution of a range of grammatical phenomena, including verbal argument structure and alternations, function morphemes that signal constructions, dependent and split ergative case systems, fixed word order expressing grammatical relations, complex predicates, and unbounded dependencies.  The goal is for students to learn to apply these analytic techniques to constructions of interest to them.
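
As an illustration of the kind of 1950s-era stochastic reinforcement learning the description refers to, the sketch below implements a Bush–Mosteller-style linear-operator update. The function names, learning rate, and reward schedule are hypothetical choices for this sketch, not material from the course.

```python
import random

def linear_operator_update(prob, rewarded, rate=0.1):
    """Bush-Mosteller-style update (minimal sketch): move the probability
    of producing a form toward 1 when the choice is rewarded (e.g., the
    message is successfully transmitted) and toward 0 otherwise."""
    if rewarded:
        return prob + rate * (1.0 - prob)
    return prob - rate * prob

def simulate(trials=1000, reward_prob=0.7, rate=0.1, seed=1):
    """A learner choosing between two competing forms for the same message.
    Form 1 is rewarded with probability `reward_prob`, form 2 with the
    complementary probability; returns the final probability of form 1."""
    random.seed(seed)
    p = 0.5  # initial probability of producing form 1
    for _ in range(trials):
        if random.random() < p:  # learner produces form 1
            p = linear_operator_update(p, random.random() < reward_prob, rate)
        else:                    # learner produces form 2
            q = linear_operator_update(1 - p, random.random() < (1 - reward_prob), rate)
            p = 1 - q
    return p
```

Under a schedule like this, the learner's production probabilities drift toward the more frequently rewarded form, the basic dynamic from which more elaborate models of grammar emergence can be built.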

Andrew Wedel (University of Arizona) and Kathleen Currie Hall (University of British Columbia, Canada), The Message Shapes Phonology

A long-standing hypothesis in research on phonology is that a language’s phonological system is shaped by its use in communication. Traditionally, this view has focused on trade-offs between the effort required to produce phonological units and the accuracy with which they are recognized. Based on theoretical and empirical findings, we argue that this trade-off also, or even primarily, takes into account the information that the phonological unit provides about meaning-bearing units like morphemes and words (the phonological unit’s ‘value’ or ‘utility’ to the transmission of meaning). Within this course we integrate concepts from information theory and Bayesian inference with the existing body of phonological research. In doing so, we show that this important elaboration of existing approaches provides greater explanatory coverage of a diverse range of sound patterns. We will begin by exploring sets of phonological patterns traditionally called strong versus weak, and show how a meaning transmission-centered approach grounded in information theory and Bayesian inference can solve a range of outstanding puzzles in this domain. We end the course by working together to extend this framework to problems beyond the strong/weak dichotomy, such as vowel harmony and reduplication.
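
The information-theoretic notion of a phonological unit’s ‘value’ for meaning transmission can be made concrete as surprisal: how much information (in bits) a segment carries about which word is being transmitted, given the segments heard so far. The toy lexicon, its frequencies, and the function below are purely hypothetical, included only to illustrate the computation.

```python
import math

# Hypothetical toy lexicon with made-up token frequencies.
lexicon = {"bat": 10, "bad": 5, "pat": 5}

def final_segment_surprisal(word, lexicon):
    """Surprisal (bits) of the word's final segment given its preceding
    segments: -log2 P(segment | context), estimated from the frequencies
    of same-length lexicon words sharing that context."""
    context = word[:-1]
    cohort = {w: f for w, f in lexicon.items()
              if w.startswith(context) and len(w) == len(word)}
    total = sum(cohort.values())
    match = sum(f for w, f in cohort.items() if w == word)
    return -math.log2(match / total)
```

In this toy lexicon the final /d/ of "bad" is less predictable than the final /t/ of "bat" and so carries more information about the intended word, the kind of asymmetry that a meaning transmission-centered account predicts should matter for sound patterns.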

Rachel Elizabeth Weissler (University of Oregon), Neurolinguistic Methods

The central focus of this seminar is the neural machinery that is behind our ability to produce and understand language. We investigate the brain bases of linguistic knowledge, regarding it as intertwined with our knowledge about culture, society, and social interaction. We’ll take an integrated approach, drawing on a range of state-of-the-art neuroimaging techniques, as well as theories of how linguistic computations and representations can inform, and be informed by, our understanding of the brain. This course will include a lab visit field trip to get hands-on experience with fMRI, not only to enhance learning experiences for students, but also to inspire future research and critique the capacity of tools like these to answer linguistic questions. While we’ll be drawing primarily on neurolinguistic research, we will also be engaging with theories from sociolinguistics, social psychology, and psycholinguistics. As a seminar, the course is discussion-based and everyone is expected to take an active role during each session and contribute fully to the task of building and sustaining a learning community. Fundamentally, I hope we all see this seminar as a sandbox for intellectual exploration and research development.

Colin Wilson (Johns Hopkins University), Computational Methods for Phonology & Morphology

A wide range of computational methods have been developed to express and learn generalizations in the domains of phonology and morphology. These methods include regular expressions, weighted finite-state machines, maximum entropy or log-linear models, exact and approximate probabilistic inference in graphical models, and deep neural network modules such as LSTMs and Transformers. This course will survey these methods and their practical application using the data structures and algorithms of established toolkits (e.g., re / stringr, OpenFst / Pynini, PyMC / pomegranate, PyTorch / JAX), with an emphasis on developing a detailed understanding of how lower-level implementations relate to and support higher-level theories of phonological and morphological systems.
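
To give a flavor of the simplest method on this list, here is a minimal sketch of morphological segmentation with regular expressions via Python’s `re` (one of the toolkits named above). The rule, pattern, and function names are hypothetical illustrations, not course materials.

```python
import re

# Toy segmenter for the regular English past-tense suffix "-ed".
# Named groups capture the stem and suffix; the non-greedy stem
# ensures the suffix is peeled off the right edge.
PAST = re.compile(r"^(?P<stem>\w+?)(?P<suffix>ed)$")

def parse_past(form):
    """Split a form into (stem, suffix); suffix is None if no match."""
    m = PAST.match(form)
    if m:
        return m.group("stem"), m.group("suffix")
    return form, None
```

For example, `parse_past("walked")` yields `("walk", "ed")`, while `parse_past("sing")` comes back unsegmented. The rule also misparses monomorphemic forms like "bed" as `("b", "ed")`, a limitation that motivates the richer probabilistic and neural methods the course goes on to cover.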

Bodo Winter (University of Birmingham, UK), Iconicity in Language

This course provides a comprehensive introduction to current and past research on iconicity, the perceived resemblance between form and meaning in linguistic signals. Examples of expressions which exhibit iconicity include onomatopoeias, such as English “bang” and “beep”, or the American Sign Language sign for ‘tree’, which mimics the shape of a tree. For much of the history of linguistics, iconicity has been thought to be a fringe topic, relegated to the margins of language. In this course, you will learn that, contrary to this view, new research from the last couple of decades shows that iconicity plays a role across different levels of linguistic analysis (phonetics/phonology, morphology, syntax) in both spoken and signed languages. We will review many different phenomena that exhibit iconicity, including manual gesture, prosody, phonesthemes, ideophones, writing systems, and more. And we will discuss empirical studies demonstrating that iconicity helps jumpstart new communication systems, including in language learning and language evolution. Throughout all of this, we will learn how iconicity interacts with processes of conventionalization, and how these can erode iconicity over time. Against the backdrop of all this research, we will revisit and critically reflect on some of the foundational tenets of linguistics, such as the principle of arbitrariness, according to which the connection between a word’s form and its meaning is arbitrary.

Roberto Zariquiey (PUCP, Peru), Ergativities

The term “ergativity” (or “ergative alignment”) describes a situation in which the more agentive argument of a transitive clause (henceforth A) is somehow grammatically distinguished from the sole argument of an intransitive clause (henceforth S), which patterns like the more patientive argument of a transitive clause (P) (S = P ≠ A). In every language with ergativity, there are grammatical domains in which ergativity does not manifest. There is no such thing as an “ergative language”: ergativity is always split ergativity. One point of interest is that the number of ergative constructions may vary radically. That is, there are no (fully) ergative languages, and languages with ergativity are not equally or similarly ergative either. Based on a careful discussion of a sample of languages with ergativity from different parts of the world (with a focus on Amazonia), we ask whether we really find manifestations of a single, well-defined phenomenon that we can call ergativity across different languages. The seminar argues that this is indeed not the case, and that the ergativities (yes, in the plural) manifested in the world’s languages have different histories: some are old, can be traced back to protolanguages, and are temporally stable; others are the result of fairly recent grammatical change and may soon be lost. We use this diversity in distribution, diachrony, and processing to begin thinking about a new approach to the typology of ergativity.

Georgia Zellou (University of California, Davis), Linguistic Variation during Human-Computer Interaction

We are currently in a new era of human history: people are regularly using spoken language to communicate with technological agents and generative AI systems. This course considers both the theoretical implications and the practical applications of speech communication patterns during human-computer interaction. We will examine linguistic theories accounting for variation in human language patterns in tandem with human-computer interaction frameworks which seek to understand how people interact with non-human entities. We will consider questions such as: How are people’s speech and language patterns during human-AI interactions similar to, or different from, those in human-human interactions? What are the mental models people use when communicating with technological agents, and how might they vary based on user experience, context, culture, and over the lifespan? How can linguistic variation during HCI provide insight into the cognitive and social representations underlying linguistic communication more broadly? We also touch on the implications of this line of work for addressing major societal issues in speech technology, such as: linguistic and social disparities in the availability and functioning of language models; the role of linguistic variation in credibility and the spread of misinformation; and applications for language learning.
