Thursday 24 September 2020

See one, Do one, and Teach one - using mobile phone eye tracking, pauses, touch to zoom and highlight, time spent, and note taking as data points for ML and AI to assess learning and training efficiency and effectiveness: a thought experiment, and speculation about near-future possibilities


by Poh-Sun Goh (first draft September 24, 2020 @ 1950hrs)

Introduction

'See one, do one, and teach one' is a common aphorism in medical education and training. Mobile devices are increasingly used in education and training to 'look something up' and to select and engage with content, peers and instructors; in informal and formal learning; in undergraduate, postgraduate, and lifelong learning settings. One could therefore imagine a near-future possibility where the facial recognition camera used to unlock a mobile device is also used for eye tracking, to capture where a user focuses his or her gaze, and for how long: on which images (and which parts of an image), which text (which passages, and in what order), and which video segments. Data can also be collected on what is clicked or tapped; where a user spends time and attention; what is highlighted, clipped and curated; what is shared; and how text and illustrations are used as intermediate and final outputs and outcomes of a learning or training exercise.
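To make this concrete, below is a minimal sketch (in Python) of how such interaction data points might be recorded as a stream of timestamped events. Every field name and event type here is an illustrative assumption, not the schema of any existing platform.

from dataclasses import dataclass, field
from typing import Optional, Tuple
import time

@dataclass
class InteractionEvent:
    """One timestamped learner interaction of the kinds described above."""
    learner_id: str
    event_type: str   # e.g. 'gaze_fixation', 'tap', 'zoom', 'highlight', 'note'
    content_id: str   # which image, text passage, or video segment
    region: Optional[Tuple[int, int, int, int]] = None  # (x, y, w, h) within the content
    duration_ms: int = 0   # dwell or fixation time
    payload: str = ""      # highlighted text, note text, etc.
    timestamp: float = field(default_factory=time.time)

# A learning session is then simply an ordered stream of such events,
# which ML models can consume for downstream analysis.
session_log = [
    InteractionEvent("s001", "gaze_fixation", "chest_xr_042", region=(120, 80, 40, 40), duration_ms=850),
    InteractionEvent("s001", "zoom", "chest_xr_042", region=(100, 60, 200, 200)),
    InteractionEvent("s001", "note", "chest_xr_042", payload="possible right upper zone opacity"),
]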

Think of how text underlined and highlighted in a textbook, the initial notes a learner takes, and how those notes are revised during revision of the material, after reflection and further discussion with peers and instructors, can give a trained and experienced teacher insight into the learning process. Now imagine AI (artificial intelligence) and ML (machine learning) algorithms examining this data in real time, to provide customised feedback and coaching to a learner or trainee. Imagine AI on the mobile device chip, or on the platform, giving a novice insight into where and what an experienced practitioner or master pays attention to: what they look at and focus on first, what they highlight, and the notes and thoughts a seasoned practitioner brings to the content, for a novice to model their practice on and learn from. Imagine then AI modelling the attention process and thinking of a master practitioner, much as a simulator gives feedback on physical skill practice and situational judgement formulation. Imagine carrying this adaptive learning platform on a personal mobile device.
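As a toy illustration of the novice-versus-expert comparison just described, a gaze path can be encoded as an ordered sequence of named regions of interest (ROIs) and scored against an expert's sequence. The ROI labels, threshold, and feedback message below are assumptions for illustration only.

from difflib import SequenceMatcher

def scanpath_similarity(novice_path, expert_path):
    """Score from 0 to 1: how closely the novice's ordered sequence of
    regions of interest matches the expert's."""
    return SequenceMatcher(None, novice_path, expert_path).ratio()

# Hypothetical ROI labels for a systematic chest X-ray search pattern.
expert_path = ["trachea", "hila", "lungs", "heart", "diaphragm", "bones"]
novice_path = ["heart", "lungs", "lungs", "bones"]

score = scanpath_similarity(novice_path, expert_path)
if score < 0.5:
    print(f"Similarity {score:.2f}: prompt the learner to review the systematic search pattern.")

A production system would use richer scanpath and fixation-duration metrics, but the principle of benchmarking novice attention against expert attention is the same.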

Imagine this process applied in medical training to detecting, identifying and characterising abnormalities on a chest X-ray (and other radiological imaging), in Pathology, Dermatology, and clinical assessments in the Emergency Department, ward or clinic setting. Imagine it applied to learning and training to search for high-quality, relevant and accurate information online, skimming through this content and assessing its value and relevance to the specific clinical problem at hand. Imagine AI and ML, ever present on a mobile device, delivering personalised coaching and adaptive learning and training for a user, together with data for further personal reflection and for coaching by an experienced human instructor. Imagine this learning and training process used in professional lifelong training and faculty development.


References and Further Reading

Samarasekera, D. D., Goh, P. S., Lee, S. S., & Gwee, M. C. E. (2018). The clarion call for a third wave in medical education to optimize healthcare in the twenty-first century. Medical Teacher. Epub 9 October 2018. https://www.ncbi.nlm.nih.gov/pubmed/30299191 (see the section on Learning Analytics and Digital Scholarship within the article)

Goh, P. S. (2017). Learning Analytics in Medical Education. MedEdPublish, 6(2), Paper No. 5. Epub 4 April 2017. https://doi.org/10.15694/mep.2017.000067 https://www.mededpublish.org/manuscripts/944

https://www.slideshare.net/dnrgohps/applied-learning-analytics-for-modular-content-updated-79912376

https://medicaleducationelearning.blogspot.com/2020/04/iamse-2020-plenary-presentation-medical.html

https://phys.org/news/2016-06-eye-tracking-ordinary-cellphone-camera.html

Kotsis, S. V., & Chung, K. C. (2013). Application of the "see one, do one, teach one" concept in surgical training. Plastic and reconstructive surgery, 131(5), 1194–1201. https://doi.org/10.1097/PRS.0b013e318287a0b3

Cirigliano, M. M., Guthrie, C. D., & Pusic, M. V. (2020). Click-level Learning Analytics in an Online Medical Education Learning Platform. Teaching and Learning in Medicine, 32(4), 410-421. https://doi.org/10.1080/10401334.2020.1754216

Saubern, R., Henderson, M., Heinrich, E., & Redmond, P. (2020). TPACK – time to reboot?. Australasian Journal of Educational Technology, 36(3), 1-9. https://doi.org/10.14742/ajet.6378

Fyfield, M., Henderson, M., Heinrich, E., & Redmond, P. (2019). Videos in higher education: Making the most of a good thing. Australasian Journal of Educational Technology, 35(5), 1-7. https://doi.org/10.14742/ajet.5930

Cochrane, T., Redmond, P., & Corrin, L. (2018). Technology Enhanced Learning, Research Impact and Open Scholarship. Australasian Journal of Educational Technology, 34(3). https://doi.org/10.14742/ajet.4640 https://ajet.org.au/index.php/AJET/article/view/4640

West, D., Tasir, Z., Luzeckyj, A., Si Na, K., Toohey, D., Abdullah, Z., Searle, B., Farhana Jumaat, N., & Price, R. (2018). Learning analytics experience among academics in Australia and Malaysia: A comparison. Australasian Journal of Educational Technology, 34(3). https://doi.org/10.14742/ajet.3836



Valliappan, N., Dai, N., Steinberg, E. et al. Accelerating eye movement research via accurate and affordable smartphone eye tracking. Nat Commun 11, 4553 (2020). https://doi.org/10.1038/s41467-020-18360-5

Brousseau, B., Rose, J., & Eizenman, M. (2020). Hybrid Eye-Tracking on a Smartphone with CNN Feature Extraction and an Infrared 3D Model. Sensors (Basel, Switzerland), 20(2), 543. https://doi.org/10.3390/s20020543

Cazzato, D., Leo, M., Distante, C., & Voos, H. (2020). When I Look into Your Eyes: A Survey on Computer Vision Contributions for Human Gaze Estimation and Tracking. Sensors, 20(13), 3739. https://doi.org/10.3390/s20133739



Using data from real-time student interaction with digital content - attention and gaze; touch screen, mouse and voice interaction; note taking and verbalisation (from summarising key points to responding to questions) - as evidence of ongoing successful learning, and to allow AI to provide dynamic, customised feedback and personalised instruction; similar to human coaching and teaching in physical classrooms, scaled up online with AI.

Basically, this means using observations of student attention, behaviour and output - both early (e.g. note taking and questions posed), intermediate, and final (e.g. assignments and final performance assessments) - to dynamically customise learning pathways, exposure to digital content, and practice paradigms.

by Poh-Sun Goh, 3 October 2020 @ 0259am
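One way to picture this dynamic customisation is as a decision rule over observed signals. The following is a minimal sketch only; all signal names, thresholds, and activity labels are illustrative assumptions, and a real system would learn such rules from data rather than hand-code them.

def next_activity(signals):
    """Pick the next learning activity from simple engagement and
    performance signals (all names and thresholds are illustrative)."""
    if signals["quiz_score"] < 0.6:
        return "remedial_module"        # re-teach before moving on
    if signals["note_words"] < 20 or signals["dwell_seconds"] < 60:
        return "guided_worked_example"  # low engagement: add scaffolding
    return "advanced_case"              # ready for harder practice

signals = {"quiz_score": 0.55, "dwell_seconds": 240, "note_words": 85}
print(next_activity(signals))  # -> remedial_module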


Thought experiment 2 - personalised adaptive learning (human and AI assisted) through the use of real-time video observations, data from human-computer interface interaction with content, evidence of content assimilation through note taking and verbalisation, as well as interactions with peers, the instructor, and AI assistants

Imagine combining eye tracking with multiple cameras to record and assess a learner's interaction with digital content (whether on a workstation, laptop, tablet, mobile phone, small-screen wearable device, or AR or VR content), together with real-time data on interaction with that content: touch, mouse cursor movements, pausing while scrolling through webpage content, watching embedded video, clicking or tapping hyperlinks, and verbal search and interaction with chatbots and AI assistants. Add to this note taking and verbal articulation of key take-home ideas and concepts, as evidence that knowledge has been assimilated and integrated with other new or prior knowledge. This parallels 'see one, do one with feedback, practice for improvement, and translate to clinical or workplace practice', with initial formative, then summative assessment before certification to practice clinical skills; and the development of 'soft skills' including empathy, communication and teamwork, through exposure, reaction and reflection on digital content (including written case scenarios, illustrations, and multimedia content such as videos, AR and VR). In short, it is similar to the learning behaviours teachers observe in traditional classrooms, where they watch students' attention to the content presented, look over their notes, and take note of their interactions with peers, their answers to questions posed, and their demonstrations of skill development - all as proxies of successful engagement with content and the skill-learning process. Our aim is to scale up the insights gained by experienced, trained and skilled teachers from physical observation of students' classroom behaviour to students' interaction with digital content outside the classroom, in the digital space, during self-study, group and team-based learning, and when practicing skills in online simulated practice.

by Poh-Sun Goh (first draft 3 October 2020 @ 0238am)
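A minimal sketch of how the multimodal signals above might be fused into a single engagement estimate. The weights here are illustrative assumptions; in practice they would be learned from observations labelled by trained human teachers.

def engagement_score(gaze_on_task, interaction_rate, note_quality, verbal_recall):
    """Fuse normalised (0..1) signals from the modalities described above
    into a single engagement estimate (weights are illustrative)."""
    weights = {"gaze": 0.35, "interaction": 0.20, "notes": 0.25, "verbal": 0.20}
    return (weights["gaze"] * gaze_on_task
            + weights["interaction"] * interaction_rate
            + weights["notes"] * note_quality
            + weights["verbal"] * verbal_recall)

# e.g. strong gaze focus but weak verbal recall yields a middling score,
# flagging a comprehension check rather than exposure to new content.
print(engagement_score(0.9, 0.7, 0.6, 0.3))  # roughly 0.66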


Demetriadis, S. (2004). Interaction between learner’s internal and external representations in multimedia environment: a state-of-the-art. HAL: hal-00190213.

Davison, D.P., Wijnen, F.M., van der Meij, J. et al. Designing a Social Robot to Support Children’s Inquiry Learning: A Contextual Analysis of Children Working Together at School. Int J of Soc Robotics 12, 883–907 (2020). https://doi.org/10.1007/s12369-019-00555-6

Lévêque, L., Bosmans, H., Cockmartin, L., & Liu, H. (2018). State of the Art: Eye-Tracking Studies in Medical Imaging. IEEE Access. https://doi.org/10.1109/ACCESS.2018.2851451

Conati, C., Merten, C., Amershi, S., & Muldner, K. (2007). Using Eye-Tracking Data for High-Level User Modeling in Adaptive Interfaces. Proceedings of the 22nd AAAI Conference on Artificial Intelligence (AAAI-07), 1614-1617.



Thought experiment 3 - Adaptive learning by blending AI and human input, in digitally literate, learning science trained students (and faculty): access to, and learning from, the very best content, teachers, and instructional and training paradigms globally; customised and adapted to an individual learner's requirements by AI and by an expert, experienced local instructor; guided by usage and performance data from video and human observations, eye tracking data, human-computer interface interactions, as well as evidence of information registration, short- and long-term processing of this content for successful learning, and transfer to practice.

To use an analogy from an earlier era: motivated, interested and ambitious students would do 'deep dives' into textbooks, scour the library for information, read and re-read, make notes of, discuss and use what they had learnt, and transfer this into practice. With the advent of the internet and the digitisation of content, including access to the very best didactic 'lectures', now increasingly in modular, short-format videos, the challenge for students is to find not only the 'best' information, but also the most appropriate material for their learning and training needs, on an ongoing basis. This requires not only scanning, skimming through, and sampling what is popular, recommended by fellow users and peers, and related to the topics and keywords they are searching for, but also a deeper appreciation of the relevance, usefulness and quality of the material they plan to learn and train with. This will require more than a reliance on low-level AI algorithms driving the results from online search engines. It will require the ability and skill to evaluate the quality, relevance and suitability of what a search engine throws up on the first page of results, by understanding how popularity, keywords, sustained engagement with the content by other similar online users, likes and recommendations feed search algorithms. Ultimately, only by reflecting upon and using (digital) content, and getting feedback on the products and outcomes of their learning efforts, initially from informal feedback and formal assessments, and later from measurable results in the workplace, will learners develop an appreciation of the value and utility of the content and training process they have undertaken. The combined and blended role of AI and trained, experienced human instructors, guides and coaches will be key.

by Poh-Sun Goh, 3 October 2020 @ 0836am
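As a closing sketch of the search-literacy point above: a re-ranking step that deliberately down-weights raw popularity in favour of source quality and relevance to the task at hand. All field names, scores, and weights below are invented for illustration.

def rerank(results):
    """Re-rank search results, weighting task relevance and source quality
    above raw popularity (all weights are illustrative assumptions)."""
    def quality_score(r):
        return (0.5 * r["relevance"]         # match to the clinical question
                + 0.3 * r["source_quality"]  # e.g. peer-reviewed vs. unvetted
                + 0.2 * r["popularity"])     # clicks and likes, down-weighted
    return sorted(results, key=quality_score, reverse=True)

results = [
    {"title": "Popular blog summary", "relevance": 0.6, "source_quality": 0.3, "popularity": 0.9},
    {"title": "Peer-reviewed review article", "relevance": 0.8, "source_quality": 0.9, "popularity": 0.4},
]
for r in rerank(results):
    print(r["title"])  # the peer-reviewed article now ranks first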




Goh, PS. 'Medical Educator Roles of the Future'. Medical Science Educator. Online publication 30 September 2020. https://doi.org/10.1007/s40670-020-01086-w


Bhatt, I., de Roock, R., & Adams, J. (2015). Diving deep into digital literacy: emerging methods for research. Language and Education. https://doi.org/10.1080/09500782.2015.1041972

Looi, C.‐K., Seow, P., Zhang, B., So, H.‐J., Chen, W. and Wong, L.‐H. (2010), Leveraging mobile technology for sustainable seamless learning: a research agenda. British Journal of Educational Technology, 41: 154-169. doi:10.1111/j.1467-8535.2008.00912.x

Wong, K., Looi, C.-K., Wong, L.-H., So, H.-J., & Seow, P. (2009). An Anatomy of a Mobilized English Preposition Lesson: Toward Personalized Learning. In Proceedings of the 17th International Conference on Computers in Education (ICCE 2009).
