
The challenge

How can we ensure that AI systems, including ChatGPT, are developed and adopted in a responsible way?

The concept of a chatbot can be traced back to the 1950s, when computer scientist Alan Turing proposed the Turing Test, which aimed to determine whether a machine could exhibit intelligent behaviour indistinguishable from a human's.

In 1966, Joseph Weizenbaum created ELIZA, the first known chatbot, which was designed to simulate a psychotherapist by responding to user inputs with pre-programmed responses. In the following decades, advances in natural language processing led to more sophisticated chatbots, such as PARRY and ALICE.

In the early 2000s, the rise of messaging platforms and mobile devices made it easier for businesses to integrate chatbots into their customer service systems. In recent years, advancements in AI, such as deep learning and natural language processing, have made it possible for chatbots to handle more complex and natural conversations with users, leading to the widespread use of chatbots in industries including finance, healthcare, and e-commerce. Despite this growing popularity, users aren't sure whether they should trust chatbots, and organisations don't fully understand the risks or how to mitigate them.

Globally, significant effort has been put into technical solutions focused on privacy, fairness, and explainability. However, there is a lack of responsible AI governance and engineering guidance for assessing and mitigating the ethical risks of chatbots across the full set of AI ethics principles.

Our response

Creating patterns for responsible AI

Based on the results of a review, we analysed successful case studies and generalised best practices to create a set of guidelines, or patterns, that a range of industries - including the finance industry - can use to shape the development of their AI products.

We applied these responsible AI patterns to the development of chatbots for the financial sector using IBM Watson Assistant, and showed how the approach can be used to address a range of responsible AI risks.

The results

Successful identification and mitigation of chatbot risks

Responsible AI practices can be embedded at every step of the chatbot lifecycle: planning, conversation design, implementation, testing, deployment, and monitoring. Using this approach, we successfully identified and mitigated risks throughout chatbot development.
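To illustrate the kind of implementation-stage safeguard such patterns describe, here is a minimal, hypothetical sketch of a guardrail wrapper for a financial chatbot's replies. The function names, regex rules, and disclaimer text are illustrative assumptions, not the actual patterns or code used in this project.

```python
import re

def redact_pii(text: str) -> str:
    """Illustrative privacy guardrail: mask email addresses and
    long digit runs (e.g. possible account numbers) in a reply."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email redacted]", text)
    text = re.sub(r"\b\d{6,}\b", "[number redacted]", text)
    return text

def guarded_reply(user_input: str, bot_reply: str) -> str:
    """Apply simple responsible-AI checks before a reply is shown.
    Appends a disclaimer when the user appears to ask for
    financial advice (a hypothetical risk-mitigation rule)."""
    reply = redact_pii(bot_reply)
    if re.search(r"\b(invest|buy|sell)\b", user_input, re.IGNORECASE):
        reply += " (Note: this chatbot cannot provide personal financial advice.)"
    return reply

# Example: the reply is redacted and a disclaimer is appended.
print(guarded_reply("Should I buy shares?", "Your balance is 12345678."))
```

In a real deployment such checks would sit alongside the other lifecycle-stage patterns (e.g. monitoring and testing), rather than being the sole safeguard.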

Financial services organisations can now work with us to access these skills and resources for their own AI development.

Work with us on responsible AI research

Get in touch to discuss your responsible AI needs.
