
The Generative Pre-trained Transformer (GPT) plays a pivotal role in driving advances and innovation in Artificial Intelligence in Education (AIED) owing to its remarkable capabilities in understanding, reasoning, and generation. Further refinement can enrich the model's grasp of domain-specific knowledge and enhance GPT's capabilities for AIED. However, few studies have closely integrated applications of GPT and AI with educational theories. Some research directions have been explored extensively and have yielded favorable results; for instance, integrating AI with Open Learner Models (OLMs) realizes AIED systems that give learners insight into their progress and align with their learning objectives. Despite this, existing methods still face several challenges that limit their practicality. For example, storing an OLM's data in a single neural network makes continuous modification difficult and hinders scaling to larger bodies of knowledge without substantially enlarging the network's capacity. Furthermore, given the semantics explicitly represented within the OLM, the neural network has difficulty comprehending the nuanced interdependencies among them. These challenges can lead to suboptimal performance when generating meaningful insights from the data in AI tasks, and they can also raise security issues. To address these challenges, we introduce the first Multi-modal Embedding Open Learner Model (MeOLM) framework, which integrates course embeddings, OLM embeddings, a multi-modal embedding module, and task-specific modules, and works with GPT to enhance personalized learning. It maps an OLM's data to multi-modal embeddings via implicit neural networks, which improves the capture of intricate relationships between embeddings.
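The abstract does not specify how the multi-modal embedding module combines course and OLM embeddings. As a rough illustration only, modality-specific features can be projected into a shared space and fused; every name, dimension, and the averaging scheme below is an assumption for the sketch, not the paper's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: course features (16-d), OLM features (24-d),
# shared multi-modal embedding space (8-d). All sizes are illustrative.
D_COURSE, D_OLM, D_SHARED = 16, 24, 8

W_course = rng.normal(size=(D_COURSE, D_SHARED))  # assumed course projection
W_olm = rng.normal(size=(D_OLM, D_SHARED))        # assumed OLM projection

def fuse(course_feat, olm_feat):
    """Project each modality into the shared space and average the results."""
    pc = course_feat @ W_course
    po = olm_feat @ W_olm
    return (pc + po) / 2.0

course_feat = rng.normal(size=D_COURSE)
olm_feat = rng.normal(size=D_OLM)
multimodal = fuse(course_feat, olm_feat)  # one fused vector in the shared space
```

In practice the projections would be learned (e.g., trained jointly with the task-specific modules) rather than random, and fusion could be concatenation or attention instead of averaging.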
Within the MeOLM framework, we introduce an efficient hybrid data structure that organizes embeddings in a hierarchical octree. This enables rapid allocation and access of embeddings, which in turn benefits downstream tasks such as knowledge graph completion, relation extraction, and dynamic embedding management. Furthermore, we integrate MeOLM with Biggs' constructive alignment, an influential framework in education, to align a subject's learning objectives with its Teaching and Learning (T&L) activities and assessments. The MeOLM framework enables context-aware explanations and interactions, fostering learner engagement and understanding, identifying patterns in learning behaviors, and adjusting learning pathways.
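The hybrid octree structure itself is not detailed in this passage. A minimal sketch of the general technique, assuming embeddings are indexed by a 3-D key (e.g., a low-dimensional projection) and that leaves subdivide once a capacity is exceeded; class and parameter names are hypothetical:

```python
import numpy as np

class OctreeNode:
    """Node covering a cubic region of a 3-D index over the embedding store."""

    def __init__(self, center, half_size, capacity=4):
        self.center = np.asarray(center, dtype=float)
        self.half_size = half_size
        self.capacity = capacity   # max items before a leaf subdivides
        self.items = []            # (position, embedding) pairs at a leaf
        self.children = None       # 8 child nodes after subdivision

    def _child_index(self, pos):
        # Octant index: one bit per axis (x -> 1, y -> 2, z -> 4).
        return sum((1 << i) for i in range(3) if pos[i] >= self.center[i])

    def _subdivide(self):
        h = self.half_size / 2.0
        self.children = [
            OctreeNode(
                self.center + np.array([h if idx & (1 << i) else -h for i in range(3)]),
                h,
                self.capacity,
            )
            for idx in range(8)
        ]
        for pos, emb in self.items:  # push stored items down to the children
            self.children[self._child_index(pos)].insert(pos, emb)
        self.items = []

    def insert(self, pos, embedding):
        pos = np.asarray(pos, dtype=float)
        if self.children is None:
            self.items.append((pos, embedding))
            if len(self.items) > self.capacity:
                self._subdivide()
        else:
            self.children[self._child_index(pos)].insert(pos, embedding)

    def query(self, pos):
        """Return the embeddings stored in the leaf that contains pos."""
        pos = np.asarray(pos, dtype=float)
        if self.children is None:
            return [emb for _, emb in self.items]
        return self.children[self._child_index(pos)].query(pos)
```

A usage sketch: inserting embeddings clustered near one octant triggers subdivision only there, so lookups descend a short path to a small leaf, which is what makes allocation and access fast for the dynamic embedding management described above.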