RNL’s Commitment to Responsible AI

Artificial intelligence (AI) has emerged as a transformative force in higher education, offering unprecedented opportunities for personalized engagement, data analysis, and resource optimization. However, concerns about the ethical implications of AI usage loom large, prompting the need for a strategic framework that prioritizes responsible AI governance.

In 2023, RNL took a significant step forward by investing in a robust strategic framework that includes AI governance. Led by our Chief AI Officer (CAIO), Dr. Stephen Drew, RNL has established a dedicated AI team committed to implementing responsible AI practices, drawing inspiration from the principles of the National Institute of Standards and Technology (NIST).

The F.E.A.T. Principles

As advocates for responsible AI usage, RNL adheres to the F.E.A.T. principles*: Fairness, Empathy, Accountability, and Transparency.

Fairness

Recognizing the pervasive issue of bias in AI systems, RNL prioritizes fairness by meticulously addressing bias in data and algorithms. By expanding our perspective beyond the machine learning pipeline, we aim to mitigate bias originating from human, systemic, and societal factors.
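As a purely illustrative sketch of the kind of bias check described above (this is not RNL's actual method; the function names and the demographic-parity metric are assumptions for the example), one common starting point is comparing the rate of positive model outcomes across groups:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-outcome rate for each group in the data."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())
```

A gap near zero suggests the model treats groups similarly on this one metric; a large gap is a signal to investigate the data and algorithm further. Demographic parity is only one of several fairness definitions, and no single metric captures fairness on its own.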

Empathy

Understanding the impact of AI on various stakeholders is paramount. RNL emphasizes the ethical considerations of AI deployment, ensuring that digital assistants and other AI tools uphold standards of empathy and respect in interactions with students, families, alumni, and colleagues.

Accountability

RNL advocates for accountability by upholding the highest standards of integrity and accuracy in information dissemination. Regular auditing and assessments of AI models and tools are conducted to prevent misuse and ensure compliance with regulatory standards.

Transparency

Transparency is key to fostering trust in AI systems. RNL is committed to providing comprehensive documentation, training data, and root cause analysis in the event of discrepancies or errors, thereby promoting transparency and traceability in AI governance.

As AI continues to shape the landscape of higher education, RNL remains steadfast in its commitment to responsible AI usage.

Our AI Governance Framework

AI governance ensures that AI technologies align with the goals of the organization using them. At RNL, our governance framework is applied to every idea we believe may have merit. Our process is:

  1. Build experiments to test the idea, under the leadership of a dedicated research scientist.
  2. Validate the idea through the results of the experimentation.
  3. Measure the results and use them as the basis for evaluating potential drawbacks.
  4. Decide whether or not to move forward.
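The steps above can be sketched as a simple go/no-go review. This is a hedged illustration only: the class, function, and threshold names are assumptions for the example, not RNL's actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """An idea under test, led by a dedicated research scientist."""
    idea: str
    lead_scientist: str
    results: dict = field(default_factory=dict)   # measured outcomes
    drawbacks: list = field(default_factory=list) # drawbacks surfaced by the results

def review(experiment, success_threshold=0.8, blocking_drawbacks=()):
    """Go/no-go: the idea must validate and carry no blocking drawback."""
    score = experiment.results.get("validation_score", 0.0)
    validated = score >= success_threshold
    blocked = bool(set(experiment.drawbacks) & set(blocking_drawbacks))
    return validated and not blocked
```

The point of the sketch is the shape of the process: experimentation produces measurable results, the results drive an explicit discussion of drawbacks, and the decision to proceed is a deliberate, final gate rather than an implicit default.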

The framework is overseen by the RNL AI & Product Council, which is committed to integrating and advocating for ethical AI. The Council also champions AI awareness within RNL and across higher education. Our goal at RNL is clear: to be an innovative leader in the AI landscape while staying informed about, and compliant with, evolving legislation.

RNL also partners with Credo.AI, whose governance framework and risk management tooling track all of our AI use cases, the models used for each, the data behind them, and the risks they pose. Credo.AI also helps us apply its general models for risk assessment.
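To make the tracking idea concrete, here is a minimal sketch of a use-case registry. This is an assumption-laden illustration, not Credo.AI's actual API or data model; every class and field name here is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """One AI use case: the model behind it, its data, and its risks."""
    name: str
    model: str
    data_sources: list
    risks: list = field(default_factory=list)  # e.g. {"level": "high", "desc": ...}

class AIRegistry:
    """Minimal inventory of AI use cases for governance review."""
    def __init__(self):
        self._cases = {}

    def register(self, case: UseCase):
        self._cases[case.name] = case

    def high_risk(self, level="high"):
        """Names of use cases carrying at least one risk at the given level."""
        return [c.name for c in self._cases.values()
                if any(r["level"] == level for r in c.risks)]
```

Even a registry this simple supports the core governance questions: what AI is in use, what model and data sit behind each use case, and which use cases deserve the closest scrutiny.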

RNL is using this expertise and experience to develop a consulting service around AI governance. We can work with a university to build a responsible AI practice, establish an AI council, develop methods for evaluating the use of AI models, and track all of these efforts.

*Adopted from the Monetary Authority of Singapore

Experience the Power of RNL Edge Today!
