As digital technologies permeate our everyday lives, technology-enhanced learning experiences are becoming increasingly ubiquitous and fluid. In such blended learning across multiple digital and physical spaces, traditional de-contextualized, log-based learning analytics may not be enough to understand the learning process, its meaning, or its outcomes. Thus, analyzing evidence from multiple data sources will become increasingly necessary and commonplace if we are to extract meaning from these fluid, complex kinds of transformative learning (cf. the conference theme on transforming learning and meaningful technologies).

The field of multimodal learning analytics (MMLA) addresses this need, often by leveraging advances in machine learning and increasingly affordable sensor technologies (Ochoa, 2017). These techniques allow MMLA to provide new types of insights into learning processes that happen across multiple contexts between people, devices and resources (both physical and digital), which often are hard to model and orchestrate (Scherer et al., 2012; Prieto et al., 2016). Using such technologies in combination with machine learning, LA researchers can now perform text, speech, handwriting, sketch, gesture, affective, neurophysical, or eye-gaze analysis (Donnelly et al., 2016; Blikstein & Worsley, 2016).
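To make one recurring practical chore in such work concrete — combining evidence from sensors that sample at different rates — the following is a minimal, illustrative sketch (not a tool used in the workshop; all stream names and values are hypothetical) of pairing each sample in one modality with the temporally closest sample in another:

```python
from bisect import bisect_left

def align_to_nearest(stream_a, stream_b, max_gap=0.1):
    """Pair each (timestamp, value) sample in stream_a with the
    temporally closest sample in stream_b, dropping pairs whose
    timestamps differ by more than max_gap seconds.

    Both streams are assumed to be sorted by timestamp."""
    b_times = [t for t, _ in stream_b]
    pairs = []
    for t, value in stream_a:
        i = bisect_left(b_times, t)
        # Candidates: the stream_b samples just before and just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(stream_b)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(b_times[k] - t))
        if abs(b_times[j] - t) <= max_gap:
            pairs.append((t, value, stream_b[j][1]))
    return pairs

# Hypothetical streams: eye-gaze x-coordinates and audio loudness (dB).
gaze = [(0.00, 512), (0.05, 530), (0.12, 498)]
audio = [(0.01, -20.5), (0.11, -18.2)]
print(align_to_nearest(gaze, audio))
# → [(0.0, 512, -20.5), (0.05, 530, -20.5), (0.12, 498, -18.2)]
```

Real MMLA pipelines typically add clock synchronization across devices and windowed feature extraction on top of such alignment, but nearest-sample matching of this kind is a common first step.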

However, with this increased complexity in data, new challenges also arise. Gathering, pre-processing, analyzing, annotating, and making sense of such data in a way that is meaningful for learning scientists and other stakeholders (e.g., students or teachers) still poses challenges in this emergent field. It is on these challenges, and on sharing practical solutions to them, that this workshop focuses.

This full-day event will provide participants with hands-on experience in gathering data from learning situations using multiple technologies (in the morning), and in analyzing and annotating such data to obtain insights about the learning experience (in the afternoon):

  • Before the event, teams of participants will be formed; each team will decide on the learning task it wishes to address and which of the proposed multimodal data-gathering and analysis methods it intends to use.
  • During the event, participant teams will execute their MMLA project: enacting sample learning activities, gathering multimodal data, and later analyzing and making sense of them.

In this way, the workshop will provide opportunities not only to learn about exciting new technologies and methods, but also to share participants’ own practical proposals for MMLA and to meet and collaborate with other researchers in this area.


Blikstein, P., & Worsley, M. (2016). Multimodal Learning Analytics and Education Data Mining: using computational technologies to measure complex learning tasks. Journal of Learning Analytics, 3(2), 220–238.

Donnelly, P. J., Blanchard, N., Samei, B., Olney, A. M., Sun, X., Ward, B., … D’Mello, S. K. (2016). Automatic teacher modeling from live classroom audio. Proceedings of the 2016 Conference on User Modeling, Adaptation and Personalization (UMAP 2016), 45–53. ACM.

Ochoa, X. (2017). Multimodal learning analytics. Handbook of Learning Analytics, 129–141.

Prieto, L. P., Sharma, K., Rodríguez-Triana, M. J., & Dillenbourg, P. (2016). Teaching Analytics: Towards Automatic Extraction of Orchestration Graphs Using Wearable Sensors. Proceedings of the 6th International Conference on Learning Analytics and Knowledge (LAK 2016), 148–157.

Scherer, S., Worsley, M., & Morency, L.-P. (2012). 1st International Workshop on Multimodal Learning Analytics. Proceedings of the 14th ACM International Conference on Multimodal Interaction (ICMI 2012).