How AI Tools Are Impacting Digital Learning

Justin is Assistant Director of District Programming at Michigan Virtual

Since my last blog post on the subject, AI has become a seemingly ever-present topic of discussion in all corners of K-12 and higher ed. ChatGPT became one of the most quickly adopted technologies of all time, with nearly 100 million people using it within the first two months of release, and a recent survey showed that a large portion of both teachers and students are using the tool. OpenAI released GPT-4, which “performed” admirably on a host of standardized tests, including the Uniform Bar Exam, portions of the SAT and ACT, the GRE, and MBA examinations. Federal guidance has been developed, numerous school and classroom policies have been written and enforced (sometimes resulting in high-profile missteps), and educators at all levels have engaged in discussions about what changes need to be made in the teaching and learning process as a result of the ubiquity of this tech. I thought it might be worthwhile, as a follow-up to the previous post, to take a look at what kinds of applications and practices have recently developed and how they are changing the world of digital learning.

As many predicted, the technology powering ChatGPT is beginning to proliferate in numerous other applications. While Google released its own chatbot, called Bard, it is likely hoping to gain more traction in its Workspace tools powered by Duet AI. These tools are available to users directly within Google Docs, Slides, Sheets, and Gmail, and can help with drafting and refining text and generating visuals in a number of different ways. Google users whose accounts belong to an organization, such as a school or business, may only get access to the tools if their organization grants them permission. 

LMS providers, including D2L and Blackboard, recently announced plans to incorporate generative AI tools to help with course design, instruction, and user support. Instructure recently announced a partnership with Khan Academy to pilot the use of its AI-powered educational “guide,” Khanmigo, with users of the Canvas LMS. Khanmigo is an attempt to harness generative AI technology to tutor students or facilitate educational conversations with them without providing outright answers to the questions they encounter while learning. Khan Academy and early users of the tech are admittedly still working out the kinks.

As generative AI technology continues to advance and become available in more commonly used digital spaces like productivity tools and LMSs, digital learning designers, teachers, and students will naturally have more opportunities to use it, along with more reason to examine their current practices. A recent episode of The Daily, a podcast from the New York Times, suggests that students are using these tools more frequently than their teachers realize, and that their use is paying off in better grades. Ethan Mollick, of the University of Pennsylvania’s Wharton School, describes the coming of a “homework apocalypse” wherein educators will need to totally rethink the use of common assessment types, including essays, class readings/discussions, and problem sets. Given the usual separation in time and space between digital learners and their instructors, the question of whether certain instructional practices and assessments should be overhauled seems even more pressing when it comes to digital learning environments.

The stark reality is that digital learning providers, just like the rest of the world, are still catching up to our new AI-driven reality. While Sam Altman, co-founder and CEO of OpenAI, has said that the release of ChatGPT was meant to help gather user feedback for an iterative product cycle, it has actually served as a wake-up call for folks in all walks of life. Digital learning providers are still working through the toughest questions because answering those questions, of course, takes time. While we’re still in this early stage, some smaller changes are taking hold: instructors in higher ed are attempting to revamp or do away with discussion board assignments; Idaho Digital Learning Alliance updated its Acceptable Use Policy to align AI use with its academic integrity guidance and require citations; the Design and Development team at Virtual Arkansas is researching AI tools and potential opportunities for integrating chatbots into courses for student use; and Michigan Virtual is experimenting with using AI tools in learning content design.

As educators, both in digital and face-to-face environments, grapple with more questions and undertake this work, it’s imperative that we get a real understanding of why students are using these tools and how they are helping them learn. There are obviously well-founded concerns around use of these tools when it comes to academic integrity, but it’s also clear that students have adopted them organically to help in genuine learning activities. Incorporating students’ perspectives on how these tools help them learn and how assessment practices can evolve for their benefit can help us collectively move forward as a field and move closer toward the promise of digital learning as a student-centered endeavor.

We welcome your comments in the DLC Community Portal's Blog Discussion Group. If you’re already a DLC user or member, you must log in before you can join or comment in the group. If you’re not yet on the DLC platform, please create a free user account or join as a DLC member to join the discussion.
