Do We Have Unconscious Bias in AI? | A Q&A with Dr Catriona Wallace

21 November 2019

Dr Catriona Wallace, an entrepreneur in the Artificial Intelligence field and Founder & Executive Director of Flamingo Ai, dropped into Speakers Corner towers to share her expertise on Artificial Intelligence, Ethics & Human Rights in technology, and Women in Leadership.

Needless to say, we were blown away by her visit and decided to learn more.

Find out how Catriona led only the second female-led business ever to list on the ASX, the importance of neurodiversity in the workplace, and what the future of AI has in store for ethics and the world at large.

You're not only the Founder of artificial intelligence fintech start-up Flamingo Ai; it's also only the second female-led business ever to list on the ASX. Can you tell us a bit about your journey?

I only ever wanted to be a farmer! After a couple of years studying agriculture at University, I realised most of my peers were becoming investment bankers and not farmers. So, I left Uni, decided to rebel, and joined… the Police Force.

After 4 years in the police I left and co-founded a market research and customer experience design firm. I also owned a nightclub and, interestingly enough, having previously been a cop came in handy.

I also studied for a PhD; my thesis was on Technology Substituting for Human Leaders. A bit sci-fi, right? Totally. I could see that the time when humans and machines would work together was upon us, so I was one of the first people globally to study this in depth and statistically model when machines provide better leadership than humans.

In 2014 I recognised that the time of Artificial Intelligence was finally here – so I used all the skills I had from my eclectic background and founded Flamingo AI, a machine learning company that provides software robots for enterprise.

In 2016 we listed on the Australian Stock Exchange and, much to my surprise and marginal horror, learned that with a female CEO and Chair we were only the second women-led business ever to list on the Exchange.

You're one of the few women in positions of high leadership. What were some of the challenges you faced while setting up your company?

I work in three hyper masculine environments – high tech, financial services and the capital markets. There are very few women leaders in these sectors. In fact, less than 10% of high-tech jobs are held by women, globally. So that makes it hard but also a fantastic opportunity. The greatest challenges I have relate to the conscious and unconscious bias I encounter.

For example, an investment group stated, “We will invest $1 million into Flamingo Ai if the female CEO takes her nose ring out” (I went the next day and bought a bigger nose ring).

Another investment group said: “We were going to invest until we learned the business had a female CEO”.

Another investor: “I could not concentrate on your pitch as I couldn’t help but look at your beautiful dress”.

And another: “Your pitch was very good however could be made better if you brushed your hair”.

Seriously.

What are your feelings towards companies that may be resisting integrating AI into their business strategy? Is this something to jump on now, or wait until it is more developed?

AI is the fastest-growing tech sector in the world. US$38bn was invested in AI in the last 12 months. 40% of jobs in financial services, retail, hospitality and tourism will be automated by machines in the next 5 years, and 30% of all customer interactions will be conducted by machines and not humans in the next 3 years. US$1.2 trillion will move from those companies that do not have AI to those that do. So, the time is now.

Companies must be conducting trials, experimenting and learning about AI so as not to be left behind. 6.2 billion hours of productivity will be saved by AI in the next 12 months, so huge benefits should come to those who do AI well.

It’s time.

You mentioned bias in AI. What are some of the problems and how do we begin to create rules, laws, and ethical frameworks to protect our future?

We are faced with a huge challenge at the current time with regard to the hard coding of bias into the machines that will run our world. 90% of coders of AI are male. With this comes the high risk that, consciously or not, the machines are being trained with both bias in coding and bias in data. And the train has already left the station with regard to this.

Many of the existing AI applications have been trained on data sets that contain biased data. The IT analyst firm Gartner predicts that 85% of AI projects in the next 2 years will produce erroneous outcomes. This will in large part be due to the data that is used.

An easy way to understand what bias in data means is to do a few Google searches. Try this: Google ‘Unprofessional Hairstyles’ and hit Images. Scroll down and see what Google presents. Then search ‘Professional Hairstyles’.

You will see clearly that our existing AI applications, those we use every day, are already shockingly biased. This is a real problem.
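To make “bias in data” concrete, here is a minimal sketch (an illustration only, not Flamingo Ai's method) of one way a team might check a model's decisions for group-level skew before release. The demographic_parity_gap function, the loan-decision records and the "gender"/"approved" fields are all hypothetical assumptions.

```python
from collections import defaultdict

def demographic_parity_gap(records, group_key="gender", outcome_key="approved"):
    """Return the largest difference in positive-outcome rates between groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += 1 if row[outcome_key] else 0
    rates = {group: positives[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions produced by a model under review.
decisions = [
    {"gender": "female", "approved": True},
    {"gender": "female", "approved": False},
    {"gender": "male", "approved": True},
    {"gender": "male", "approved": True},
]

gap, rates = demographic_parity_gap(decisions)
print(rates)               # {'female': 0.5, 'male': 1.0}
print(f"gap = {gap:.2f}")  # gap = 0.50 -> flag for review before release
```

A real audit would use far larger samples and richer fairness measures, but even a simple gap like this can surface the kind of skew described above.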

So, one way to mitigate this is to start working with Ethical Principles for Emerging Tech.

Here are the 8 Ethical Principles for AI, which are universal in their relevance:

  1. Do no harm: AI must not harm humans, society or the environment
  2. Human-centred: AI must be built with consideration for human values and autonomy
  3. Fairness: AI must not discriminate against an individual
  4. Privacy & Security: AI must adhere to human privacy and security legislation and rights
  5. Reliability & Safety: AI machines must be reliable and safe to use
  6. Contestability: If a person believes they have been unfairly treated by an AI, they have the right to contest this
  7. Transparency & Explainability: If a person believes they have been unfairly treated by an AI, the organisation and the vendor whose AI it is must be transparent and explain how the AI made the decision
  8. Accountability: The organisation and the vendor who are responsible for the AI will be held accountable for the decisions that the AI makes.

These, of course, are guidelines and not regulations or laws. Yet. They provide a good framework for companies looking to implement AI to have in mind, so that we can avoid coding society's existing problems into the algorithms.

What does the future of AI in the workplace look like?

The future of work will be both human + machine. We will see workplaces with human employees, HAMA employees (Human Assisted Machine Assisted) and Digital Labour (like the robots Flamingo Ai makes). We will see that AI augments jobs and amplifies legacy systems to spur new growth and productivity for companies.

But with this also comes the removal of jobs, 1.8m globally in the next 12 months, alongside the creation of new jobs, 2.3m in the same period. The big challenge here is that 90% of the jobs that will be lost will be those of women and minorities. So, we see the gender bias again playing out in very challenging ways.

We need to be gearing up to educate and train people in how to work with AI and how to reskill to do the new jobs – such as Brain Trainers, Data Journalists, Data Auditors and HAMA workers.

You employ many people on the autism spectrum. Can you share more about why it’s important that businesses hire people that are neurodiverse?

I am a big believer in diversity and inclusion. In fact, it is the founding principle of Flamingo Ai. This means gender, racial, sexual and ability diversity, as well as neurodiversity (where neurological differences are recognised and respected like any other human variation). In particular, I seek out people on the Autism spectrum.

I once posted a job advertisement that stated, “When you were a teenager, did you use to lock yourself in your room and play on your computer all the time; were a bit uncomfortable talking to people and maybe preferred not to; designed some crazy interesting things that no-one knows about? Well you sound like the right person for us!”

My whole team learned how to create a safe and happy workplace for people on the spectrum so that they could perform their best work. And they do.



Congratulations on your recent election to the Royal Institution of Australia for your work in artificial intelligence! We're wondering, what's next for you?

Thanks! There are two areas I am thinking about. One, I am deeply interested in the AI and Ethics field. I believe that there is the potential for AI to be done very badly over the next 10 years and that underserved communities will be further disempowered as a result. This is not OK. The power that AI will bring must be distributed beyond the elite and the already powerful organisations. I will try to figure out the best way to do that.

The second area of interest relates to Climate Change. At the same time that Europe is flooding, Australia is having catastrophic fires that have destroyed 1.6m hectares of land.

We have a 4,000-hectare family farm that has been burned to the ground; it's heart-breaking. So, I am thinking about how AI can affect Climate Change in positive ways. The combination of this powerful tech and the urgent need to reduce the effects of Climate Change sounds like a good next project to me!

For further information call us on +44 (0) 20 7607 7070 or email info@speakerscorner.co.uk.
