The field of multimodal learning analytics (MMLA) is an emerging domain of Learning Analytics and plays an important role in expanding Learning Analytics' goal of understanding and improving learning in all the environments where it occurs. The challenge for research and practice in this field is how to develop theories about the analysis of human behaviors during diverse learning processes and how to create useful tools that augment the capabilities of learners and instructors in a way that is ethical and sustainable. The CrossMMLA workshop will serve as a forum to exchange ideas on how to analyze evidence from multimodal and multisystem data, how to extract meaning from the increasingly fluid and complex data generated by different kinds of transformative learning situations, and how to best feed the results of these analyses back so that they lead to positive transformative actions in those learning processes. CrossMMLA aims to help learning analytics capture students' learning experiences across diverse learning spaces. The challenge is to capture those interactions in a meaningful way that can be translated into actionable insights (e.g., real-time formative assessment, post-hoc reflective reviews; Di Mitri et al., 2018; Echeverria et al., 2019).

MMLA uses advances in machine learning and affordable sensor technologies (Ochoa, 2017) to act as a virtual observer/analyst of learning activities. Additionally, this virtual nature allows MMLA to provide new insights into learning processes that happen across multiple contexts between stakeholders, devices and resources (both physical and digital), which are often hard to model and orchestrate (Scherer et al., 2012; Prieto et al., 2018). Using such technologies in combination with machine learning, LA researchers can now perform text, speech, handwriting, sketch, gesture, affective, or eye-gaze analysis (Donnelly et al., 2016; Blikstein & Worsley, 2016; Spikol et al., 2018), improve the accuracy of their predictions and learned models (Giannakos et al., 2019), and provide automated feedback to enable learner self-reflection (Ochoa et al., 2018). However, this increased complexity of data also raises new challenges. Conducting data gathering, pre-processing, analysis, annotation and sense-making in a way that is meaningful for learning scientists and other stakeholders (e.g., students or teachers) still poses challenges in this emergent field (Di Mitri et al., 2018).
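To make this kind of pipeline concrete, the sketch below (in Python, using pandas and scikit-learn) illustrates one minimal way such an analysis might look: two hypothetical sensor exports (a wristband stream and an eye-tracker stream) are aligned on a common time window, fused with human annotations, and fed to a simple supervised classifier. The file names, column names and the "on-task" label are assumptions made purely for illustration; they are not part of any particular MMLA toolchain or of the workshop materials.

    # Minimal, illustrative MMLA-style pipeline sketch (assumed data layout, not an actual workshop artifact).
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical CSV exports, each with a 'timestamp' column
    # (e.g., electrodermal activity from a wristband, fixation metrics from eye-tracking glasses).
    eda = pd.read_csv("wristband_eda.csv", parse_dates=["timestamp"])
    gaze = pd.read_csv("eyetracker_fixations.csv", parse_dates=["timestamp"])

    # Resample both streams onto a common 10-second window and fuse them into one feature table.
    eda_w = eda.set_index("timestamp").resample("10s").mean()
    gaze_w = gaze.set_index("timestamp").resample("10s").mean()
    features = eda_w.join(gaze_w, how="inner", lsuffix="_eda", rsuffix="_gaze").dropna()

    # Join with hypothetical human annotations of each window (e.g., on-task vs. off-task).
    labels = pd.read_csv("annotations.csv", parse_dates=["timestamp"]).set_index("timestamp")
    data = features.join(labels["on_task"], how="inner").dropna()

    # Train and evaluate a simple supervised model over the fused multimodal features.
    X, y = data.drop(columns=["on_task"]), data["on_task"]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print("Mean 5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())

In practice, each of these steps (synchronization, windowing, annotation and modeling) is considerably more involved, which is precisely the kind of challenge the workshop addresses.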

This full-day event will provide participants with hands-on experience in gathering data from learning situations using wearable apparatuses (e.g., eye-tracking glasses, wristbands), non-invasive devices (e.g., cameras) and other technologies (in the morning half of the workshop). In addition, we will demonstrate how to analyze and annotate such data, and how machine learning algorithms can help us obtain insights about the learning experience (in the afternoon half). The event will provide opportunities not only to learn about exciting new technologies and methods, but also to share participants' own MMLA practices and to meet and collaborate with other researchers in this area.

REFERENCES

Blikstein, P., & Worsley, M. (2016). Multimodal Learning Analytics and Education Data Mining: using computational technologies to measure complex learning tasks. Journal of Learning Analytics, 3(2), 220–238.    

Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). The Big Five: Addressing Recurrent Multimodal Learning Data Challenges. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge (pp. 420-424). ACM.

Donnelly, P. J., et al. (2016). Automatic teacher modeling from live classroom audio. Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization, 45–53. ACM.

Echeverria, V., Martinez-Maldonado, R., & Buckingham Shum, S. (2019). Towards Collaboration Translucence: Giving Meaning to Multimodal Group Data. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 39). ACM.

Giannakos, M. N., Sharma, K., et al. (2019). Multimodal data as a means to understand the learning experience. International Journal of Information Management, 48, 108-119.

Ochoa, X. (2017). Multimodal learning analytics. Handbook of Learning Analytics, 129–141.

Ochoa, X., et al. (2018). The RAP System: Automatic Feedback of Oral Presentation Skills Using Multimodal Analysis and Low-cost Sensors. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge (pp. 360-364). ACM.

Prieto, L. P., Sharma, K., Kidzinski, Ł., Rodríguez‐Triana, M. J., & Dillenbourg, P. (2018). Multimodal teaching analytics: Automated extraction of orchestration graphs from wearable sensor data. Journal of Computer Assisted Learning, 34(2), 193-203.

Scherer, S., Worsley, M., & Morency, L. P. (2012). 1st international workshop on multimodal learning analytics. In 14th ACM International Conference on Multimodal Interaction (ICMI 2012). ACM.

Spikol, D., Ruffaldi, E., Dabisias, G., & Cukurova, M. (2018). Supervised machine learning in multimodal learning analytics for estimating success in project‐based learning. Journal of Computer Assisted Learning, 34(4), 366-377.