
Artificial Intelligence (AI) is considered a top strategic technology in many organisations due to its ability to transform operations, automate processes and analyse huge amounts of data. Despite its potential, there are serious concerns about its ability to behave and make decisions in a responsible way.


Compared to traditional software systems, AI systems involve a higher degree of uncertainty and more ethical risk due to autonomous and opaque (black box) decision making. Ethical issues can also occur at any stage of the AI development lifecycle - from planning right through to monitoring.

We have evaluated the effectiveness and limitations of existing AI risk assessment frameworks to provide advice for companies looking to develop responsible AI.

Taking a risk-based approach to operationalising responsible AI

Our comprehensive analysis includes well-defined responsible AI (RAI) principles, RAI stakeholders, AI system lifecycle stages, applicable sectors and regions, risk factors, and reusable mitigations.

Our outputs include an evidence-based guidance catalogue for operationalising responsible AI in your company, as well as an ethical risk assessment tool for assessing risks against AI ethics principles.
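At its core, a risk assessment of this kind maps each ethics principle to identified risks, scores them, and records reusable mitigations. The sketch below illustrates that idea only; the principle names, scoring scale and fields are illustrative assumptions, not the structure of the actual tool:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One identified risk, recorded against a responsible-AI principle.

    Fields are hypothetical examples of what a risk register might track.
    """
    principle: str          # e.g. "fairness", "transparency"
    lifecycle_stage: str    # e.g. "planning", "deployment", "monitoring"
    description: str
    likelihood: int         # 1 (rare) .. 5 (almost certain)
    impact: int             # 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood-by-impact scoring, a common risk-matrix convention
        return self.likelihood * self.impact

def high_risks(register: list[RiskEntry], threshold: int = 12) -> list[RiskEntry]:
    """Return entries whose score meets or exceeds the threshold."""
    return [r for r in register if r.score >= threshold]

register = [
    RiskEntry("fairness", "data collection",
              "Training data under-represents a demographic group", 4, 4,
              ["re-sample data", "independent bias audit"]),
    RiskEntry("transparency", "deployment",
              "Model decisions cannot be explained to affected users", 2, 3,
              ["publish model documentation"]),
]
print([r.principle for r in high_risks(register)])  # → ['fairness']
```

Scoring each risk per principle and per lifecycle stage reflects the point above that ethical issues can arise anywhere from planning through to monitoring.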

Companies looking to co-develop innovative tools and technologies to address responsible AI issues and opportunities can partner with us in a number of different ways to access and develop specialised solutions.


AI risk assessment tools and resources

Partner with us or access resources through our partnership with the Responsible AI Network

Work with us on responsible AI research

Get in touch to discuss your responsible AI needs.

Contact us


Find out how we can help you and your business. Get in touch and our experts will be in contact soon.

CSIRO will handle your personal information in accordance with the Privacy Act 1988 (Cth) and our Privacy Policy.

