Stukent Responsible Adoption of AI Strategy

Recent developments in the field of artificial intelligence (AI) have generated much discussion about the role of AI in education. Many questions have been raised about risks related to security, privacy, and ethics. Because the field is evolving quickly, guidance is needed to understand and evaluate these risks.

A recent QuickPoll from EDUCAUSE identified many areas of concern related to AI, and a Special Report from the same organization offers a more in-depth exploration of the issues.

The National Institute of Standards and Technology (NIST) has published a draft AI Risk Management Framework to help organizations take a formal approach to managing AI risk. The Framework lists the following attributes of trustworthy AI:

  1. Valid and Reliable. Trustworthy AI produces accurate results within expected timeframes.
  2. Safe. Trustworthy AI produces results that conform to safety expectations for the environment in which the AI is used (e.g., healthcare, transportation).
  3. Fair, with Bias Managed. Bias can manifest in many ways; standards and expectations for minimizing bias should be defined before using AI.
  4. Secure and Resilient. Security is judged by the standard triad of confidentiality, integrity, and availability. Resilience is the degree to which the AI can withstand and recover from attack.
  5. Transparent and Accountable. Transparency is the ability to understand information about the AI system itself, as well as to recognize when one is working with AI-generated (rather than human-generated) information. Accountability is shared between the creators/vendors of the AI and those who choose to implement it for a particular purpose.
  6. Explainable and Interpretable. These terms refer to the ability to explain how an output was generated and to understand what the output means. NIST provides examples related to rental applications and medical diagnosis in NISTIR 8367, Psychological Foundations of Explainability and Interpretability in Artificial Intelligence.
  7. Privacy-Enhanced. This refers to privacy from both a legal and an ethical standpoint, and it may overlap with some of the previously listed attributes.

Appendix B of the AI Risk Management Framework discusses risks that are unique to AI. We recommend reviewing these risks to understand how AI risk differs from more familiar technology risks.

Stukent Vision of AI Adoption

As a leader in education technology and innovative learning, Stukent takes seriously its responsibility to understand the impacts of artificial intelligence (AI) on all aspects of instruction and learning. AI is advancing, and being adopted, at a rate that outpaces many current policies and guidelines. Through this policy, Stukent formulates its perspective on the use of AI as a company. As we learn more about AI, we expect this policy to evolve.

We understand that AI is a transformative force, carrying the potential for both significant benefits and considerable risks. This policy and its associated usage guidelines are intended to direct our responsible use of AI within the teaching and learning space in which we operate.

The adoption of these guidelines aligns with our mission to help educators help students help the world.  One of the ways we do that is by simplifying the learning process for everyone involved.  We expect that AI will enhance our ability to continue our important work in the world.

Evaluating the current state of AI to find the most practical and beneficial applications to Stukent products, services and business operations.

We are committed to following AI best-practice frameworks published by the National Institute of Standards and Technology (NIST) and the Institute for Ethical AI in Education. These widely respected standards will guide us in evaluating and mitigating AI risks specific to the Stukent ecosystem: educators, students, administrators, content developers, technologists, and employees.

To ensure consistency, we have established an AI Governance Committee to oversee systems, procedures, and evaluation processes for AI-related topics within our organization, such as accuracy and potential safety issues. Our objective is to understand the possible risks associated with AI and to make informed decisions to manage those risks as we move forward.

Risk management will involve expanding technical controls to improve the accuracy of AI, limiting the focus and use of AI within our systems, and broadening system testing to cover scenarios that include both expected and unexpected AI behaviors. Ultimately, we are committed to learning as we go, applying what we learn quickly, and sharing our learnings with our constituents. This means our products will adapt and improve along the way.
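As an illustration of what such expanded testing could look like in practice, the sketch below checks an AI-generated response against simple guardrails for both expected behaviors (a non-empty, appropriately sized response) and unexpected ones (disallowed phrasing). The function names, phrases, and thresholds are hypothetical assumptions for illustration, not part of any Stukent system.

```python
# Hypothetical guardrail checks for AI-generated output.
# All names and limits below are illustrative, not an actual Stukent API.

BANNED_PHRASES = {"guaranteed answer", "cannot be wrong"}  # unexpected-behavior examples
MAX_RESPONSE_CHARS = 2000  # example scope/safety limit

def check_ai_response(response: str) -> list:
    """Return a list of guardrail violations found in an AI response."""
    violations = []
    if not response.strip():
        violations.append("empty response")        # expected-behavior check
    if len(response) > MAX_RESPONSE_CHARS:
        violations.append("response too long")     # scope-limiting check
    lowered = response.lower()
    for phrase in sorted(BANNED_PHRASES):
        if phrase in lowered:
            violations.append("banned phrase: " + phrase)  # unexpected-behavior check
    return violations
```

A test suite built on checks like these can be run against the same scenarios after every model or prompt change, so regressions in AI behavior surface before they reach students.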

Monitoring the future direction of AI to intentionally understand and shape the impact on Stukent products, services and business operations.

We understand that the rate of AI adoption will be guided by adherence to regulations, policies, and processes that clearly describe the risks and limitations associated with AI interactions within our products.

We foresee the ability to throttle the level of AI interaction students experience based on the comfort level of students, teachers, and school districts. This will require our products to be flexible enough to accommodate a wide variety of AI adoption rates and turnaround times to meet customers' needs. Communication and responsiveness are core values at Stukent that will become even more critical as we evolve with AI.

We are committed to maintaining transparency and human oversight at all stages of our product development lifecycle to ensure the integrity and reliability of our products. Transparency will also extend to administrators, allowing them to clearly see how AI is being used so they can evaluate the outcomes they are seeing.

Balancing innovation with diligence as we move forward.

As outlined above, balancing innovation with diligence means adhering to regulations, policies, and processes that describe the risks and limitations of AI in our products; giving students, teachers, and school districts control over the level of AI interaction; and maintaining transparency and human oversight throughout our product development lifecycle.