There are many benefits to Artificial Intelligence (AI): it can make fast work of huge amounts of data, speed up processes, and collaborate with humans to produce better outcomes than either could achieve alone. However, if not developed ethically, AI technology can come with significant risks.
For this reason, the development of responsible AI sits at the heart of our work and we have brought together one of the largest groups of responsible AI researchers in the country. Meet the science leaders guiding this work.
Professor Jon Whittle, Director of CSIRO's Data61 and world-renowned expert in software engineering and human-computer interaction
What is responsible AI?
We want AI systems that benefit individuals, society and the environment. To help achieve this, Australia’s AI Ethics Framework outlines a series of principles for the responsible development of AI.
Many countries across the world have a similar set of AI Ethics Principles. These largely agree that AI systems should respect human values, diversity and the autonomy of individuals. They should respect and uphold privacy rights and data protection, and ensure the security of data.
AI systems should also be fully transparent and explainable: responsible disclosure ensures that people understand when they are being impacted by AI, and can find out when an AI system is engaging with them.
The next major challenge for responsible AI is how best to turn these high-level principles into practice. This will require new training for those working in the tech industries; new tools, techniques and guidelines to support building AI systems responsibly; and new cultures that actively promote responsibility.
Doctor Liming Zhu, Research Director of the Software and Computational Systems program at CSIRO's Data61 and chair of Standards Australia's blockchain and distributed ledger committee
What is the aim of this project and how will the team achieve it?
Our vision for the project is to place Australia among the world's top five in responsible AI science and technology, so that Australia’s adoption of AI is inclusive, safe, secure, and reliable.
The team will achieve this by: 1) advancing the science and technology of responsible AI, including operationalising high-level principles; 2) making responsible AI a competitive advantage for Australian industry; and 3) embedding responsible AI into broader scientific discovery and technology development processes.
Professor Didar Zowghi, Senior Principal Research Scientist, Software Engineering for AI Specialist, and Diversity & Inclusion in AI Lead
What are some of the greatest strengths and challenges this project faces?
The project team is diverse and the environment is inclusive and highly collaborative. Our team brings together some of the most experienced researchers and practitioners, effective leadership, and talented early career researchers new to CSIRO’s Data61. They all share a passion for conducting impactful research that solves real problems and for building useful and usable tools for anyone who wants to develop and adopt AI responsibly.
Any project team that takes on responsible AI as its overall aim has to be interdisciplinary and has to pay a great deal of attention to the social and human side of AI. This calls for human-centred approaches and methodologies to be applied to all aspects of design, development and adoption. This shift of focus can present many challenges, both to the team and to the early adopters of the products and services that will be delivered.
Doctor Qinghua Lu, Team Leader of the Software Engineering for AI group and Science Lead in the Responsible AI research team
How will the work from this project be applied to real-life situations?
The work from this project will address ethical concerns in AI systems and help unlock AI markets where trust is currently low. The project will build trust in AI, and so increase adoption, through concrete tools and technologies that a wide range of decision-makers and technologists can use to govern, design and build responsible AI systems.
For example, our ethical risk assessment tool identifies the ethical risk factors in the use of an AI system and assesses their likelihood and consequence. To mitigate these risks, our pattern catalogue includes governance patterns at the industry, organisation and team levels; process-oriented patterns that make development processes responsible; and product design patterns that can be embedded into AI systems as product features to enable responsible-AI-by-design.
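As a rough sketch of how a likelihood-and-consequence assessment of this kind might be structured (the risk factors, scales and scoring below are illustrative assumptions, not the tool's actual design):

```python
from dataclasses import dataclass

# Hypothetical ordinal scales; the actual tool's risk factors,
# scales and scoring rules may differ.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "almost certain": 4}
CONSEQUENCE = {"minor": 1, "moderate": 2, "major": 3, "severe": 4}

@dataclass
class EthicalRiskFactor:
    name: str         # e.g. "biased training data"
    likelihood: str   # key into LIKELIHOOD
    consequence: str  # key into CONSEQUENCE

    def score(self) -> int:
        # Classic risk-matrix scoring: likelihood x consequence.
        return LIKELIHOOD[self.likelihood] * CONSEQUENCE[self.consequence]

def assess(factors):
    """Rank identified risk factors so mitigation can be prioritised."""
    return sorted(factors, key=lambda f: f.score(), reverse=True)

risks = [
    EthicalRiskFactor("biased training data", "likely", "major"),
    EthicalRiskFactor("opaque model decisions", "possible", "moderate"),
]
for factor in assess(risks):
    print(f"{factor.name}: risk score {factor.score()}")
```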
Our knowledge-graph-supported ethical compliance checking tool can automatically examine whether the design of an AI system complies with regulations and standards, drawing on a structured and traceable responsible AI knowledge base.
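To illustrate the idea behind compliance checking over a knowledge base, here is a minimal sketch using plain subject-predicate-object triples; the standard name, controls and schema are hypothetical, and the real tool's knowledge graph and reasoning are far richer:

```python
# Responsible AI knowledge base as (subject, predicate, object) triples.
# "StandardX" and its required controls are invented for illustration.
triples = {
    ("StandardX", "requires", "human oversight"),
    ("StandardX", "requires", "audit logging"),
    # Traceable facts recorded about the AI system's design.
    ("LoanApprovalAI", "implements", "audit logging"),
}

def objects(subject, predicate):
    """All objects linked to a subject by a given predicate."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

def check_compliance(system, standard):
    """Return the required controls the system design does not yet implement."""
    return objects(standard, "requires") - objects(system, "implements")

missing = check_compliance("LoanApprovalAI", "StandardX")
print("Missing controls:", missing or "none - design is compliant")
```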
Doctor Zhenchang Xing, Senior Principal Research Scientist
How will responsible AI help everyday Australians, businesses, and industries?
AI helps people find targeted information and knowledge more quickly and enhances our lifestyle choices. Smart homes can cut down on energy use and provide better security.
AI allows businesses to streamline their processes, gain insight from big data, and enhance customer engagement and experience. It helps reduce talent waste and create new business opportunities.
AI lowers the barrier to technology adoption, including adoption of AI itself through low-code or no-code AI. Human-AI teaming relieves human workers of mundane work and expands human creativity and ingenuity, as people have more time to learn, experiment and explore.
Doctor Justine Lacey, Director of CSIRO's Responsible Innovation Future Science Platform
How does CSIRO’s approach incorporate ethics to drive innovation? What are the three key things that will remain front of mind during this process?
It used to be the case that embedding ethics in technology development was seen as ‘too hard’ or ‘too subjective’. But in the last 10 years or so we’ve seen a real shift in that thinking. Today, there’s wide discussion around the intended and unintended consequences of technologies like AI, and research into what those ethical considerations might be. The question we face as responsible AI researchers is how we can best respond to those challenges.
CSIRO’s approach to responsible AI emphasises the need to integrate ethics at all stages of the AI lifecycle: from design through to development and deployment. With this holistic approach in mind, we draw on a wide range of skills from data scientists, engineers, social scientists, human factors specialists, ethicists, lawyers and others. We also work with end users in industry, government and the wider community.
Responsible AI thinking starts with who, why and how, but the approach isn’t always linear. First, we start with ethical design, and that means including diverse perspectives. Second, we support the ethical use of those AI systems by end users in ways that best suit their context. Third, we need to ensure that this leads to the ethical outcomes and benefits we collectively set out to achieve, and that involves revisiting and reviewing the impact of those systems with stakeholders and end users post-deployment.
Doctor Cathy Robinson, Research Group Leader
How can we responsibly co-design AI with First Nation people and Indigenous knowledge?
Although Indigenous Australians are among the most digitally disadvantaged people in Australia, many groups are seeking cross-cultural and collaborative ways to design and apply ethical AI and digital technologies to solve complex problems.
CSIRO’s approach has been to support Indigenous co-designed AI frameworks and approaches that respect kin-country relationships, ensure and maintain free, prior and informed consent from local Elders, and provide an ongoing commitment to deliver mutually useful and usable science and impact.
This relies on working with Indigenous partners to negotiate collaborative pathways to understand and build Indigenous people’s trust in, and dialogue with, AI. It requires researchers to be mindful that Indigenous groups may have very different expectations of AI, and that multiple cultural and social factors influence people’s trust in AI, including factors not related to a specific system.
Our efforts to apply Indigenous co-designed AI also highlight that responsible AI cannot be achieved through technology alone. The cultural context in which AI and technology will be adopted is critically important, and it requires responsible AI science leaders to work with communities to collaboratively create and provide accessible and useful end-to-end solutions.