Building Ethical AI Standards for Education

In this workshop, we will explore the technical and pedagogical foundations necessary for AI platforms to deliver responsible quantitative assessments. We will start by building an understanding of how large language models (LLMs) operate, focusing on the concept of token prediction. This foundation will lead us to examine ways to evaluate how confident an AI model is when giving a response, which provides key insights into its reliability and limitations. Next, we will shift to a hands-on exploration where participants will experiment with different prompting strategies and AI models. This interactive segment will allow us to observe firsthand how these variations impact the accuracy and consistency of grading assessments. Finally, we will conclude with a collaborative discussion on the crucial role educators play in shaping guidelines for responsible AI use in assessment. We will explore how educators can, and must, take an active role in ensuring AI assessments are fair, ethical, and pedagogically sound.
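
To make the token-prediction and confidence ideas concrete ahead of the session, the short sketch below uses plain Python with invented logit values (no real model is queried) to show how a model's raw next-token scores can be turned into a probability distribution, whose spread is one rough signal of how confident the model is in a grading decision.

```python
import math

# Hypothetical raw scores (logits) a language model might assign to a few
# candidate next tokens when grading a short answer. These numbers are
# invented purely for illustration.
candidate_logits = {
    "correct": 3.1,
    "partially": 1.4,
    "incorrect": 0.2,
}

def softmax(scores):
    """Convert raw scores into a probability distribution over tokens."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(candidate_logits)

# A sharply peaked distribution suggests the model is confident in its grade;
# a near-uniform one suggests it is guessing, which matters for reliability.
for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{token:>10}: {p:.2%}")
```

In the hands-on segment we will look at the same idea through the tools participants already use, comparing how different prompts and models shift these distributions.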

Facilitated By

Ryan Tannenbaum

For Education

An experienced technology and academic leader turned developer and consultant, Ryan spends his days working with schools to help them leverage their data, giving it the ability to talk, move, and guide teaching and learning across the school. Right now, he is most interested in how AI policies and standards can be put in place to guide pedagogically sound implementation in schools.