Recent developments in the field of artificial intelligence (AI) have generated much discussion about the role of AI in education. Many questions have been raised about risks related to security, privacy, and ethical considerations. Because this field is evolving quickly, guidance is needed to help organizations understand and evaluate these risks.
A recent QuickPoll from Educause identified many areas of concern related to AI, and a Special Report from the same organization offers a more in-depth exploration of these issues.
The National Institute of Standards and Technology (NIST) has published a draft AI Risk Management Framework to help organizations take a formal approach to managing AI risks. The Framework describes the attributes of trustworthy AI, including validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy, and fairness.
Appendix B of the AI Risk Management Framework discusses risks that are unique to AI. Reviewing these risks helps clarify how AI risk differs from more familiar technology risks.
As a leader in education technology and innovative learning, Stukent takes seriously its responsibility to understand the impacts of artificial intelligence (AI) on all aspects of the instruction and learning process. AI is adapting and being adopted at a rate that is outpacing many current policies and guidelines. Through this policy, we are formulating our perspective on the use of AI as a company. As we learn more about AI, we expect this policy to evolve as well.
We understand that AI is a transformative force that carries both the potential for significant benefits and for considerable risks. This policy and associated guidelines for use are intended to direct us in the responsible use of AI within the teaching and learning space in which we operate.
The adoption of these guidelines aligns with our mission to help educators help students help the world. One of the ways we do that is by simplifying the learning process for everyone involved. We expect that AI will enhance our ability to continue our important work in the world.
We are committed to following AI best practice frameworks published by the National Institute of Standards and Technology (NIST) and the Institute for Ethical AI in Education. These widely respected standards will guide us in evaluating and mitigating AI risks specific to the Stukent ecosystem: educators, students, administrators, content developers, technologists and employees.
To ensure consistency, we have established an AI Governance Committee to oversee systems, procedures, and evaluation processes for AI-related topics within our organization, such as accuracy and potential safety issues. Our objective is to understand the possible risks associated with AI and make informed decisions to manage those risks as we move forward.
Risk management will involve expanding technical controls to improve the accuracy of AI, limiting the focus and use of AI within our systems, and broadening the scope of system testing to cover both expected and unexpected AI behaviors. Ultimately, we are committed to learning as we go, applying what we learn quickly, and sharing those lessons with our constituents. This means our products will adapt and improve along the way.
We understand that the pace of AI adoption will be guided by adherence to regulations, policies, and processes that clearly describe the risks and limitations associated with AI interactions within our products.
We foresee the ability to throttle the degree of AI interaction students experience based on the comfort level of students, teachers, and school districts. This will require our products to be flexible enough to accommodate a wide variety of AI adoption rates and turnaround times to meet customers’ needs. Communication and responsiveness are core values within Stukent that will become even more critical as we evolve with AI.
We are committed to maintaining transparency and human oversight at all stages of our product development lifecycle to ensure the integrity and reliability of our products. Transparency will also extend to administrators, enabling them to see clearly how AI is being used and to evaluate the outcomes they observe.