Overall, the results suggest that AI-generated instructors could be useful in multiple ways in education. They could be used to completely stand in for a teacher in an online lecture, but they could also be leveraged as guest lecturers to complement existing lesson plans. For example, a teacher could either invite a virtual Einstein to teach about the theory of relativity or deliver the lecture while puppeteering the likeness of Einstein. The approach could also be used to create personalized learning experiences.
Furthermore, specific characters could be generated to suit the lecture content and add a special touch based on their unique backstories (e.g., Einstein for physics, Picasso for painting). These virtual teachers could also be used by students in active learning scenarios, where students drive the virtual characters through acting or puppeteering and craft reenactments of important events. AI-generated characters can spark imagination and creativity by blending fiction with reality. There is also the potential for such technology to increase the representation of minorities in teaching videos by modeling virtual teachers on generic characters or popular role models. Recent studies have shown that students appraise teachers more positively, score higher on tests, and enter more gifted programs when their teachers share their ethnicity. This effect could plausibly be replicated with virtual instructors.
The ethical issues of AI-generated media extend beyond the educational setting and are the subject of an ongoing, expansive conversation spanning scales from personal usage to national policy. Here, we focus on the ethics of AI-generated characters in education.
AI-generated characters can be used, whether inadvertently or deliberately, to create educational content that is inaccurate or non-representative of the person being portrayed. For example, a deepfaked scientist could be made to say things that are not supported by scientific evidence. Students could be led to believe false information or be confused when the supposed authority on a topic provides conflicting information.

It is also important to respect a person's privacy and to seek the consent of the person being portrayed. Deepfakes can easily be used to publicly misportray people and their beliefs, which can inflict profound harm (e.g., defamation, emotional distress), and the potential for wide distribution can compound these negative effects. One open question is how to handle consent when the person is deceased.

Finally, while AI-generated characters for teaching may have economic benefits and increase access to education in low-resource areas, they should primarily augment or supplement human teachers rather than replace them. Research has shown the student-teacher relationship to be a key factor in fostering positive student attitudes, behaviors, and development. Moreover, research indicates that a lack of emotional attachment, as experienced in video-conferencing-based classes, decreases learning effectiveness, and that a reduction in social relationships adversely affects mental health, physical health, and mortality risk. Hence, substituting real teachers with virtual instructors could pose a threat to student learning and well-being.