What is the Future of Artificial Intelligence, Virtual & Augmented Reality?: A Q&A with Dr Victoria Baines

21 November 2019

A leading authority on online trust, safety and cybersecurity, Dr Victoria Baines is an expert in digital ethics, cybercrime and the misuse of emerging technologies, including Artificial Intelligence, Virtual Reality and Augmented Reality.

She dropped into the Speakers Corner towers to share insight into her world and teach us some fascinating and terrifying realities of technology. We were transfixed by what she had to say, and luckily she was happy to answer more of our questions!

Open your eyes to some of the incredible possibilities technology holds for business, the challenges of ethics and gaps in legislation, and what the world of 2030 might look like.

There are some incredible possibilities for businesses with the developments of technology, what are some of the more promising elements we can look forward to?

Recently I’ve seen some really interesting applications of Virtual and Augmented Reality. The new Adidas flagship store in London features interactive changing room mirrors that use near field communication to recognise products and provide augmented information. The store boasts over 100 digital “touchpoints”, all designed to make the retail experience more bespoke and efficient. Increasingly, brands are using VR and AR to make the customer experience more immediate and diverting. In addition to letting customers try before they buy, Audi worked with Disney to create a VR system that enables back-seat passengers to play immersive games as they ride along. VR and AR are already being used for training and serious games, particularly by industries with high-stakes operations such as nuclear energy, mining and the like. It’s much safer and less costly to make a mistake in a virtual environment, and the immersive quality of these spaces – how present we feel in them, and how real they feel – is getting better all the time.

Many promises have been made about Artificial Intelligence (AI). Just under 100,000 patents mentioning AI have been registered with the World Intellectual Property Organization, suggesting that the technology is soon to transform our lives and businesses. Machine learning, the less glamorous sibling of AI, is already making strides in healthcare, education, marketing and retail. Recently the world also took a step forward in quantum computing, when Google announced Quantum Supremacy. While this sounds like the title of a superhero movie, it’s actually potentially more exciting – at least I think so! It means that we have reached the point where a quantum computer can solve a problem that is effectively impossible for a normal computer. This will give us processing power that was previously inconceivable. The related Quantum Advantage is reached when a quantum computer can process something significantly faster than a normal computer. Both of these will enable us to make much better sense and use of really big data, and to solve complex problems.

We think of these emerging technologies as very separate, but in reality they will interact. Imagine: a combination of AI and quantum computing applied to healthcare could make all the difference in finding cures for diseases that we have previously thought incurable. Also, we shouldn’t underestimate the impact 5G is going to have. Seamless connectivity will be crucial to mobile deployment of any of the technologies being developed right now, especially AR and VR. VR in particular is quite “rooted to the spot” right now. When we all have 5G on our mobile devices, this could be the tipping point for us to have truly augmented and virtual experiences on the move.

To flip that, what are some concerning aspects we should be aware of?

Every technology can be used for good or ill. So we have to be prepared for bad guys interfering with AI in a way that could harm people and networks, or indeed for criminals to develop their own AI to automate attacks. To be honest, we’re already seeing that to a certain extent now. Cybercriminal business models for activities like phishing have always relied on mass distribution to a large number of potential victims. Only a small number of people need to take the bait in order for the crime to be profitable. Criminals learned years ago that they could write scripts to auto-generate phishing emails, automate distribution and make their businesses more efficient. So it stands to reason that they will seize the opportunities promised by further AI development. We often say that cybersecurity is an “arms race” between the good guys and the bad guys, and AI will be no exception.

It’s also important to keep an eye on what’s going on in the world outside the tech sector. Tech doesn’t exist in a vacuum, however much it may seem that Silicon Valley thinks so. All these new technologies will put additional demands on our energy sources. We’re already seeing signals of potential future tension in countries like Iceland, where the electricity used to mine cryptocurrencies like Bitcoin already exceeds what is used in its homes. That kind of power consumption is clearly unsustainable. Against the backdrop of increasing global attention on the climate crisis, it’s not a given that we will have enough energy or power to deliver some of the technologies currently in development – or at least not in all countries. We could end up with greater disparity between countries’ living standards if technological growth is hindered by a lack of sustainable energy.

Dr Victoria Baines on GMB discussing the Cambridge Analytica scandal

The ethics of technology is a huge subject, but many are unaware of the potential implications. What are the biggest issues we face when it comes to ethics, and are there solutions in place?

The biggest challenge for ethics is that we don’t all agree on what is ethical! If only we did. It’s front of mind for the big companies working on the next generation of Artificial Intelligence, and my sense is that by simply thinking about ethics they are trying to do the right thing. But whose ethics? We’re seeing increasing evidence that the AI we already have puts minorities at a disadvantage – not recognising the faces of people of colour, or sending medical patients from some communities down different treatment paths. This is profoundly unethical, but may not be intentional. It may reflect the unconscious bias of the people who built the tools, and it may also be a product of a system whose aim is efficiency rather than achieving the best outcome for all people.

I was really struck by the predicament Google found themselves in a few months ago when they announced a new AI ethics council. Just one week after the announcement, the council was disbanded. Employees protested at the appointment of someone who leads a right-wing think tank, while people on Twitter found reasons to object to the other members, whom Google had appointed in an effort to reflect “diverse perspectives”. It may be impossible to appoint a single global council to reflect the ethics of over 7 billion people. But not having some kind of ethical oversight of developing technologies isn’t an option either. One solution would be to crowdsource ethics – to ask citizens what they think is (un)acceptable. It’s not the fastest way to progress, and I’m a bit nervous of referenda right now, for obvious reasons! But I do see a trend towards leaving ordinary people out of this debate, with tech companies and governments alike making too many assumptions about what citizens want and will accept. On what is shaping up to be yet another new frontier, it may be time to check those assumptions.

Technology is steaming ahead at a rapid pace. Is there legislation in place that can handle what is to come? What more needs to be done?

The public debate is currently focused on some technologies that are garnering a lot of media attention, but not others. Challenges with incorporating drones and semi-autonomous vehicles into our existing airspace and road networks are already here, so it’s right and proper that we deal with these first. In healthcare, CRISPR gene editing is now being used to treat cancer patients in the US, to remove genes associated with deafness from embryos in Russia, and to make embryos resistant to HIV in China. Progress has already happened, and now the ethicists and lawmakers are playing catch-up.

The trouble with legislation is that, generally speaking, something needs to exist before you can regulate it. We can anticipate the deployment of many technologies, but pre-emptive regulation isn’t in human nature. So some of the downsides of emerging tech may hit us before we can legislate against them.

In 2012 you predicted what 2020 would look like. What are some of your predictions for 2030?

When you look into the future, you’re always going to get some things wrong. Some technologies come along quicker than expected, while others take a while longer to become mainstream. When in 2012 we built scenarios for 2020 and painted a picture of what the future could look like, the world seemed a lot simpler and more orderly than it does now. There have been seismic shifts in politics alone. When I look ahead to 2030, I see greater uncertainty than we’ve had before – and that’s not just a get-out clause for any predictions I make here! It’s part of my job to map these uncertainties and consider how they might interact in the world. For example, how will climate migration affect urban transport networks and work practices? Will ageing populations enjoy greater quality of life as a result of smart home technology, AI healthcare and bionics, and what will be the limit of the human lifespan as a result? How will the entanglement of trade, diplomacy and cybersecurity that we currently see with regard to China play out in the next decade?

If social VR and AR take off as they should, we’ll spend more time in immersive online environments. We may finally get an improvement on clunky video conferencing, and we’ll be able to physically touch and interact with people on the other side of the world in such a way that we may no longer need to travel to meetings. This could, of course, also lead to unwanted physical experiences in these spaces, something that we’re not prepared for. In an online game, you can currently kill another character, but you won’t be charged with murder. Philosophers call this moral question the Gamer’s Dilemma. But say, for instance, Call of Duty introduced a multiplayer VR version with physical sensation – players wearing haptic body suits and gloves, for example. How would we retrain gamers used to impunity to understand the psychological and physical damage they could do to another player on the other side of the world? Would killing a character leave a bruise on their human agent that would constitute an assault? What would be the legal remedy for that, would it be a crime, and do we have the support systems in place for victims of this kind of behaviour? So far the answer is that we’re not sure.

If there was one message that you would want an audience to take away from your speech, what would it be?

We can’t make any technology 100% safe and secure. That’s as unrealistic as claiming that no one will ever be burgled, or get run over. So far we’ve relied too much on technological fixes to human problems like crime and anti-social behaviour online. It’s as if we’ve decided that the “cyber” part of cybercrime is too hard for ordinary people to understand. It isn’t, and we need to put people back in control – not only of their data and privacy, but of preventing and responding to cyber attacks.

And finally Victoria, what’s next for you?

I’m contributing to a new course at Stanford University that will teach the next generation of tech company engineers about how online platforms are misused. If they can anticipate ‘badness’ they should be better equipped to protect and fight against it when they enter the workforce. I’m also writing a book that’s designed to help people identify the tricks and techniques that are used to persuade them on security issues. I read Classics at Oxford, which means I have the same training in rhetoric as Boris Johnson! My PhD was in rhetoric, too, so I can’t help analysing how arguments are constructed by public figures. I’m hoping that the book will be a useful tool for countering the Fake News phenomenon. The more we question what we’re told, and the more we look behind the headlines, the greater the confidence we can have in our own judgement as citizens. It’s quite a hope, but then again, I’m an optimist – even about the future!

Thank you Victoria for giving us some fascinating insights into the future of technology, and bringing to light some of the concerning aspects of it. We look forward to reading your book when it comes out!
