Learning happens wherever the learner is, rather than being constrained to a single physical or digital environment. Educational research has revealed the pedagogical benefits of letting students experience different types of content, “real world” challenges, and physical and social interactions with educators or other learners. In this way, students commonly work outside the boundaries of the institutional learning system(s). This inherently blended nature of learning settings makes it essential to move beyond learning analytics that rely solely on a single data source (e.g. log files). Multimodal learning analytics (MMLA) can provide insights into learning processes that happen across multiple contexts between people, devices, and resources (both physical and digital), which are often hard to model and orchestrate [2,3,4,5]. MMLA leverages the increasingly widespread availability of sensors and high-frequency data collection technologies to enrich the data already available. Using such technologies, in combination with machine learning and artificial intelligence techniques, LA researchers can now perform text, speech, handwriting, sketch, gesture, affective, neurophysiological, or eye-gaze analyses [1,5].

Collecting and understanding data from everyday learning environments is increasingly challenging. However, pervasive and mobile technologies can give learners remote access to educational resources from different physical spaces (e.g. ubiquitous/mobile learning support) or enrich their learning experiences in the classroom in ways that were not previously possible (e.g. face-to-face/blended learning support). This creates new possibilities for learning analytics to provide continued support and a more holistic view of learning, moving beyond desktop-based learning resources. An overarching concern is how to integrate analytics across these different spaces and tools in a coordinated way. Our aim is to make learning analytics relevant across physical, digital, and blended learning environments. Therefore, researchers and practitioners need to address the larger frame of what is happening across digital and physical spaces and between individuals, groups, and the entire class, while balancing data collection, analysis, and visualisation.

[1] P. J. Donnelly, N. Blanchard, B. Samei, A. M. Olney, X. Sun, B. Ward, S. Kelly, M. Nystrand, and S. K. D’Mello. Automatic teacher modeling from live classroom audio. In Proceedings of the 2016 Conference on User Modeling, Adaptation and Personalization, pages 45–53. ACM, 2016.

[2] X. Ochoa, M. Worsley, N. Weibel, and S. Oviatt. Multimodal learning analytics data challenges. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, pages 498–499. ACM, 2016.

[3] L. P. Prieto, K. Sharma, P. Dillenbourg, and M. Jesús. Teaching analytics: towards automatic extraction of orchestration graphs using wearable sensors. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, pages 148–157. ACM, 2016.

[4] S. Scherer, M. Worsley, and L.-P. Morency. 1st international workshop on multimodal learning analytics. In Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), pages 609–610. ACM, 2012.

[5] M. Worsley, K. Chiluiza, J. F. Grafsgaard, and X. Ochoa. 2015 multimodal learning and analytics grand challenge. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pages 525–529. ACM, 2015.