
A Call for Researchers to Engage with Policy

One year ago, in October 2023, a new presidential executive order (EO) called for safe, secure, and trustworthy artificial intelligence (AI). The order marked a milestone for education because, for the first time, it called upon the U.S. Department of Education to develop policy around AI for the education system. It’s unusual for the U.S. government to call for policy on technology specifically in education. And it’s not just the federal government that has turned its attention to AI in education; 24 states have also launched guidance or policies, as have many countries and international organizations.

At this one-year mark, I’ve been reflecting on my own role as a researcher who became involved in education policy beginning in 2020. In particular, I’ve had the honor of serving as the lead Subject Matter Expert for the Office of Educational Technology of the U.S. Department of Education, including supporting the Department as it responded to the EO. Here, I want to reflect personally on what I learned, call upon other researchers to become more involved, and suggest how to get started.

AI in education has a 50-year research history, but that work has often been overlooked by policymakers, educators, and the public. That changed with OpenAI’s release of ChatGPT; everyone soon became aware that AI would impact education. What I heard from industry executives worried me: lots of talk of disrupting education, without good insight into what education should become. Through documents like the Blueprint for an AI Bill of Rights, I became aware of a coherent alternative perspective focused on protecting students and teachers. I came to realize how important it is to have coherent guidance written in the public interest.

I was invited to serve as a subject matter expert by the U.S. Department of Education. As I became more involved, I participated in listening sessions with hundreds of participants nationally, including researchers, educators, parents, and others. To my surprise, neither fear nor excitement dominated; people had rich perspectives on the risks and opportunities that would arise. In this new role, I was challenged to synthesize the extensive prior research on AI in education, without jargon, in ways that could help a wide variety of constituents make good decisions. And as I provided advice to the Office of Educational Technology, I also had access to thoughtful feedback from across government; I learned so much from the comments policymakers made and the questions they asked. At first, engaging in policy work was a big stretch for me, but I grew as I went, and I’m proud of the reports I contributed to, such as Designing for Education with Artificial Intelligence: An Essential Guide for Developers. Overall, I supported the Department’s work to develop three documents in response to the EO, and all three will soon be available.

Here are my top reasons why more researchers should become involved in policy: 

  • First, policymakers and educational decision-makers want to hear from researchers, and it’s rewarding to make a research-grounded contribution in the public interest. The public funded our research; this is an important opportunity to give back.
  • Second, the issues are complex and rapidly evolving. Researchers are good at organizing knowledge and developing frameworks, which can guide a more coherent response by policymakers and educational decision-makers. 
  • Third, policy will set the context that allows our contributions as researchers and innovators to eventually come to fruition. We need to help policymakers craft approaches that not only protect students now, but also preserve our ability to do research and to deliver benefits to students down the road. For example, since the EO was released, Congress has also gotten involved and is contemplating a variety of laws to regulate student data privacy as well as the uses of AI. Through organizations like the Alliance for Learning Innovation, researchers can have a voice as laws are crafted. 

I became involved in policy work basically by showing up and saying “yes,” even though I wasn’t sure I could contribute to policy in a meaningful way. There are many places where expertise about AI in education is needed, from local school committees to state departments of education and on up to the federal government. Show up and say “yes.”

In closing, here’s one more important thing I learned. When you participate, it’s important to align your contributions with issues that educators and policymakers care about. One way to do that is to speak to a policy framework for AI in education. I recommend the SAFE Benchmarks Framework from the EdSAFE AI Alliance. As a researcher, it may feel awkward at first to organize your expertise around policy issues, yet by learning to do so, I was able to make stronger contributions.

Here’s how the benchmarks frame the public interest, and where researchers can contribute to each desired quality of AI in education. The public needs AI that is:

  • Safe: With the rise of AI, the public is concerned about keeping students safe: for example, protecting students’ data privacy, shielding them from cyberbullying, and guarding them against inappropriate, poor-quality content. Talk about how research can guide a broader response to these threats and how you’ve been able to explore the use of AI with students while preserving safety.
  • Accountable and Transparent: Many researchers work in close partnership with practitioners, creating a strong sense of mutual accountability and trust as they investigate the use of AI for teaching and learning. You can work with others to create standards that increase accountability and transparency, not only for AI in education researchers, but also for industry.
  • Fair: Researchers know that bias can arise in AI models and that algorithms can be unfair to particular students or groups. In areas like assessment and psychometrics, there is a strong body of knowledge and practice that addresses bias and fairness, and researchers know how to measure and address bias in machine learning algorithms. The EO calls for mitigating risks so opportunities can be realized; researchers have much to contribute.
  • Efficacious: Researchers investigate what works, for whom, and under what conditions. Fortunately, evidence had become more important to educational decision-making even before the advent of AI in schools. Yet producing evidence at the pace of innovation is challenging. Researchers can help address this challenge both by summarizing what we know from the evidence we already have and by conducting new studies that answer questions that matter to educational decision-makers.

As we reflect on the one-year anniversary of the landmark executive order on AI, I encourage my fellow researchers to add policy as a dimension of their efforts to develop safe innovations for education.

Jeremy Roschelle is Executive Director of Learning Sciences Research at Digital Promise and a Fellow of the International Society of the Learning Sciences.
