In this short series, we meet the AI leaders who have taken the helm of the National AI Centre’s AI Think Tanks.
The National AI Centre Think Tanks are providing an action-centric approach to building Australia’s responsible AI capability. Today we meet Professor Didar Zowghi, chair of the Diversity and Inclusion in AI Think Tank.
“Professor Didar Zowghi has deep expertise in requirements engineering and in leading diversity and inclusion initiatives. It is powerful that Didar can effectively bring these two areas together. Imagine if we could ensure that diversity and inclusion was embedded through the entire AI system development and deployment process, and flowed all the way through to positive impact for business and communities,” said National AI Centre (NAIC) Director Stela Solar.
Didar is also a Conjoint Professor at the University of New South Wales and has previously held leadership positions at University of Technology Sydney, including Deputy Dean of Graduate Research School, Director of the Research Centre for Human-Centred Technology Design, Director of Women in Engineering and Information Technology, and Associate Dean Research.
What does Diversity and Inclusion mean to you, and how is it best applied to AI?
I was born in Iran to linguistically and culturally diverse parents belonging to a severely persecuted and discriminated-against minority religion whose members are still denied their basic human rights.
Throughout my childhood and adolescence, I faced bias and marginalisation due to this diversity at school and discrimination for my gender in society. I left my birthplace after finishing school and went to England for tertiary education and later migrated to Australia.
I have experienced racial prejudice in both countries. I studied and worked in the male-dominated discipline of computer science and software engineering and experienced plenty of gender bias there too.
Diversity and Inclusion is a cause close to my heart as I see humanity like a beautiful garden populated by many diverse flowers.
In the current state of Artificial Intelligence systems’ development and deployment around the world, vulnerable groups continue to experience bias, discrimination, injustice, and marginalisation.
We must be prepared to interrogate and understand these truths about AI to help us find ways of overcoming the challenges and help Australia’s responsible development and adoption of AI technologies.
We now have an excellent opportunity to transform apprehensive reactions to AI into a positive, excited feeling of having the agency to shape the future of AI technology. That is why at NAIC we have issued a call for all hands on deck to participate in building a vision of responsible and inclusive AI for the future.
"I dream of a day, hopefully not in the too distant future, when AI will have achieved a significant degree of fairness, trustworthiness, and inclusivity that humanity can in fact turn to AI to be inspired and learn how best to practice diversity, equity, and inclusion. And my hope is that Australia will be one of the world leaders in demonstrating diversity and inclusion in all aspects of the design, development, and adoption of AI technology."
Why is Diversity and Inclusion important for technology and innovation?
Research shows that inclusion and diversity of views and perspectives are essential elements of responsible innovation. A significant part of my research leadership role at CSIRO’s Data61 and NAIC is about diversity and inclusion in the design, development, and adoption of AI systems.
We need to define and implement AI solutions, products and services that help humanity increase its connection to diverse and inclusive workforces, stakeholders, and citizens.
It is paramount that the AI systems we build do not discriminate or exhibit any form of prejudice or bias. We must adapt and evolve our social and technical constructs to ensure that these systems are responsible and trusted.
An inclusive AI system must pay consistent, respectful attention to target groups from diverse social, racial, and ethnic backgrounds, different genders and sexual orientations, neurodiverse communities, people with disabilities, the aged, children, youth, people experiencing homelessness, and any other vulnerable or potentially marginalised groups.
How do you see the Think Tanks tackling big issues like inherent or unconscious bias which can create significant problems, if unaddressed?
While many operational AI systems pose limited or no risk and can contribute to solving many societal challenges, certain AI systems have the potential to create risks that we must address to avoid undesirable outcomes.
Testing, research, and expertise alone are not enough to create AI systems that are responsible, inclusive, safe, and secure. Principles of diversity and inclusion need to be operationalised right from the beginning and embedded within the development and deployment of AI.
This requires people with different lived experiences and unique perspectives to be involved in all stages, and to have their insights, concerns, and observations listened to and acknowledged. Only when this type of external, comprehensive consultation routinely takes place can we start to address the challenges of AI.
What can we learn from the past?
From the AI “fails” that we have seen in history, it is abundantly clear why AI systems should not exhibit bias or discrimination. They need to be fair and inclusive in learning from historical data, and - most importantly - in the output they produce for human decision making.
If these systems are supposed to learn how to replicate human judgement and support decision making, ideally, we must expect them to learn the best of what humanity has to offer, and not the worst.
AI systems can be agents of change, helping us to accelerate the understanding and practice of diversity and inclusion principles and policies in all aspects of life. I believe it is possible to build AI systems that are shaped by, and will in turn shape, the practice of diversity and inclusion principles in our world.
The Think Tanks’ mission is to help the Australian AI ecosystem, businesses, government, and citizens fully realise all the important aspects of AI systems, and their true potential. We want to accelerate responsible, inclusive, and ethical AI system development and adoption – because there are so many benefits that this can bring.
You describe yourself as a serendipitous researcher – what does that mean to you, and how does diversity and inclusion play a role in serendipity?
Throughout my research journey I have recognised the need to focus and dive deep into researching a few specialised topics, but I have also felt excited to explore new research directions arising from interacting and collaborating with a wide network of diverse individuals.
When I say I am a serendipitous researcher, I mean that I keep an open mind and my door is always open to anyone who is interested in collaborating on a novel and worthwhile research problem.
It has been my honour to collaborate with scholars from many diverse backgrounds and co-author more than 200 research papers with 90+ researchers from 30+ countries.
I thrive on learning new concepts, understanding different cultures and values, exploring diverse problems, being inclusive in my research team, and searching for diverse and multi-disciplinary solutions to challenging problems.
The membership of the Think Tanks has been carefully thought through with inclusion and diversity in mind, and we have invited some of the most active, vibrant, and knowledgeable individuals to assist us in our mission.
I am really looking forward to seeing what we can achieve, together.
Meet our Responsible AI leaders here.