This is the companion website for the “Linguistic temporal trajectory analysis” workshop that we teach at the 2018 European Symposium Series on Societal Challenges in Computational Social Science. The workshop will provide participants with an overview of techniques and existing work related to linguistic temporal trajectory analysis in R.
We welcome students and researchers from all disciplines working with text data who are interested in applying computational concepts in the social sciences (e.g., Psychology, Criminology, Computer Science, Linguistics, Digital Humanities, Political Science).
(this website will be updated in the weeks leading to the workshop)
This workshop focuses on the quantitative analysis of text data using the concept of linguistic temporal trajectory analysis (LTTA). LTTA aims to study how the use of language develops over time by continuously analysing semantic properties of temporal representations of language. Specifically, the temporal development can be studied (i) on the forum level (i.e., how does the language used on whole platforms or sub-forums change over time?), (ii) on the user level (i.e., how do individual users change their language over time?), and (iii) on the intra-textual level (i.e., how do text properties change as a function of narrative progression?). In doing so, LTTA combines computational linguistics with statistical modeling (e.g., time series analysis) to provide a new basis for understanding social and behavioral phenomena in text data.
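To make level (iii) concrete, here is a minimal sketch of an intra-textual trajectory: a text is split into equally sized chunks along its narrative progression, and each chunk receives a sentiment score. The toy lexicon, the example text, and the function name are illustrative placeholders, not the workshop's materials (the tutorials use R and a full sentiment lexicon).

```python
def sentiment_trajectory(text, n_chunks=5):
    """Return one sentiment score per chunk, tracking narrative progression."""
    # Toy lexicon for illustration only; real analyses use validated lexicons.
    pos = {"good", "great", "happy", "love"}
    neg = {"bad", "sad", "awful", "hate"}
    words = text.lower().split()
    chunk_size = max(1, len(words) // n_chunks)
    chunks = [words[i:i + chunk_size]
              for i in range(0, len(words), chunk_size)][:n_chunks]
    scores = []
    for chunk in chunks:
        # Net sentiment of the chunk, normalised by chunk length.
        score = sum(w in pos for w in chunk) - sum(w in neg for w in chunk)
        scores.append(score / len(chunk))
    return scores

story = "it was a bad sad start but things turned good and we were happy in the end"
print(sentiment_trajectory(story, n_chunks=3))
```

The resulting sequence of scores is the text's trajectory; in the workshop, such sequences become the input to clustering, supervised learning, or breakpoint estimation.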
LTTA’s continuous and dynamic approach can shed light on processes in linguistic data that static approaches do not capture. LTTA is particularly promising for research areas where relevant constructs might be hidden (e.g., hidden advertisement in YouTube vlogs, or embedded deception in statements), or where forecasting models could help mitigate or prevent phase transitions (e.g., in the development of radical language).
We will outline the idea behind LTTA, show how to implement it in R, and discuss the assumptions and limitations of the method.
Note: this workshop consists of one WORKSHOP + TUTORIAL part (in the morning) and a PAPER HACKATHON (after the lunch break).
The preliminary workshop schedule is as follows:
| Session | Content | Format | Time |
| --- | --- | --- | --- |
| **WORKSHOP + TUTORIAL** | | | (morning) |
| Introduction to linguistic temporal trajectory analysis | Rationale, the aim of the method, differences to existing approaches (e.g., simple sentiment analysis), levels of analysis (trajectory of language use over time vs. dynamic intra-textual linguistic analysis), extensions: unsupervised (non-)hierarchical clustering of temporal patterns, supervised machine learning with temporal patterns, breakpoint estimation of non-stationary linguistic time series data | Mini-lectures | 09:00 - 09:45 |
| Examples of linguistic temporal trajectory analysis | Linguistic trajectories of far-right extremists, intra-textual sentiment analysis of narrative styles of YouTube vloggers, narrative trajectories in TED talks, emotional arcs of stories, intra-textual sentiment analysis of lone-actor terrorists’ manifestos | Case studies | 10:00 - 10:30 |
| Linguistic trajectory analysis in R | Running LTTA on simple simulated data (using a plenary walk-through code example), using LTTA on real data reproducing the analysis of our “narrative styles of YouTube vlogs” paper (individually or in teams under the guidance of the organizers) | Tutorial | 11:00 - 12:30 |
| **LUNCH BREAK** | | | 12:30 - 13:30 |
A dataset provided exclusively for this session:
Research question: How does the language of YouTube vloggers evolve?
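One way to approach this question is a user-level trajectory (level ii): order each channel's videos in time and read off how a linguistic measure develops. The sketch below uses plain Python with made-up records; the field names (`channel`, `upload_date`, `sentiment`) and values are hypothetical placeholders, not the schema of the hackathon dataset.

```python
from collections import defaultdict

# Hypothetical records: one row per vlog transcript, already scored.
vlogs = [
    {"channel": "A", "upload_date": "2017-06-01", "sentiment": 0.3},
    {"channel": "A", "upload_date": "2017-01-01", "sentiment": 0.1},
    {"channel": "B", "upload_date": "2017-01-01", "sentiment": 0.4},
    {"channel": "A", "upload_date": "2018-01-01", "sentiment": 0.5},
    {"channel": "B", "upload_date": "2018-01-01", "sentiment": 0.0},
]

# Sort by upload date (ISO dates sort lexicographically), then collect
# each channel's sentiment sequence: that sequence is its trajectory.
trajectories = defaultdict(list)
for vlog in sorted(vlogs, key=lambda v: v["upload_date"]):
    trajectories[vlog["channel"]].append(vlog["sentiment"])

print(dict(trajectories))
```

In the hackathon, such per-channel sequences could then be modelled with the time series techniques covered in the morning session.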
Participants should have some basic knowledge of the R or Python programming language. The workshop will focus on R, but the transition from Python is relatively straightforward (especially with pandas).
| Bennett Kleinberg | Isabelle van der Vegt | Maximilian Mozes |
| --- | --- | --- |
| Assistant Professor in Data Science (University College London) | PhD student (University College London) | MSc student (Technical University of Munich) |
(the code for all examples in the tutorials will be available here)
(slides will be available here)
Get the data from the GitHub repo here
(to be provided)