Ethics, Technology, and Artificial Intelligence, with Fiona McEvoy


Consumer behavior and technological innovation are becoming increasingly intertwined, and these interactions present new ethical challenges. Technology is already a constant source of new products and apps, and an incredibly potent platform for personalized marketing. But now, more than ever, our everyday technology provides brands and advertisers with a unique window into consumer psychology.

Questions around ethics, consumer rights, transparency, and data privacy are deserving of careful thought and deliberation. This is why we spoke to Fiona McEvoy. Fiona is an AI ethics writer, researcher, speaker, and thought leader based in San Francisco, CA. She was named one of the 30 Women Influencing AI in San Francisco by RE•WORK and one of the 100 Brilliant Women in AI Ethics (2019 & 2020).

Compared with other industries, why does the technology sector pose unique ethical challenges?

New and emerging tech products are now embedded in almost every industry, so the ethical challenges of technologies like AI aren’t limited to the sector that develops them. They should concern “user industries” too. Fundamentally, it’s this incredible penetration of tech that has given rise to concerns about ethics and societal consequences. 

As human beings, we now interact with technology on an unprecedented scale and in all kinds of different environments – at work, in the supermarket, in the car, at home. This isn’t necessarily a bad thing, but tech deployers have some responsibility to keep us safe. Whether it’s anticipating systemic bias, recognizing when technologies coerce our decision-making, intercepting malicious actors who wish to weaponize platforms or hack networks, or taking a stand against overzealous surveillance, we need to make sure that tech serves us, not the other way around.

Some would say that the companies themselves should bear no responsibility, since it’s the users who willingly choose to engage with the products. What’s your perspective on that?

I completely disagree. The fact is, there’s an incredible information imbalance when it comes to tech users and tech companies. As consumers, we don’t know what we don’t know, and therefore it’s almost impossible to make a truly informed decision. From the workings of the systems themselves, to data collection and sharing, to algorithms that seek to exploit our cognitive flaws and hijack consumer behavior – users are inevitably on the back foot.

For example, I recently read that if you were to enter the Nest ecosystem, with all its connected apps and devices, by installing a single Nest thermostat, you would have to read around a thousand contracts to understand the agreements on privacy, third-party data sharing, and so on. That’s overwhelming and patently unreasonable.

Let’s not forget that these companies are incredibly smart. They know more about us than we know about ourselves – they know how to persuade us, how to predict our movements and even our thoughts, they know our habits, what makes us tick, what turns us off, which types of articles we read until the end, what our hobbies and even our passions are. I’m quite sure they could reinvent informed consent in a way that left us genuinely informed and authentically consenting.

In your conversations with non-tech audiences, what are they most shocked to learn about the technology industry and its influence on its users?

It’s usually either the disconcerting levels of bias that have been found in machine decision-making, or the extent of surveillance and monitoring. One example I presented recently came from a Gartner study, which revealed that as many as 50% of medium-to-large businesses are currently harvesting data from employee interactions, and predicted that this figure would rise to 80% by next year. The surveillance in question includes our emails (processing our language and tone), our calendars (who we are meeting, why, and for how long), and our movements (buzzing in and out of buildings). In some cases, companies are even remotely accessing computer cameras and analyzing employees’ facial expressions to detect emotions like anger, frustration, or depression.

Employers justify this by saying that they’re trying to understand levels of employee satisfaction without having to send internal surveys that rely upon self-reporting. In reality, most people find these kinds of tactics deeply intrusive, anonymized or not.

AR and VR are exciting technologies. What are some of the challenges you think they present?

They’re extremely exciting technologies and there is already a good range of genuinely useful deployments. But, as with most other technologies, there are some use scenarios that we should examine. In terms of augmented reality, there is currently no agreement about where we can augment and what kinds of augmentation are unacceptable. Many people will exclaim, “AR isn’t real! They are just graphics superimposed onto real environments,” and that may be true, but there is still something deeply jarring about the idea of augmenting a school with pornography, or a Hindu temple with an advertisement for beef burgers.

Similarly, in the virtual reality space, it’s unclear whether or not it’s okay to play out unpalatable, abusive scenarios if they are limited to these virtual environments. And there are also psychiatric concerns like desensitization and – as realism improves – the difficulty some users might have distinguishing real life from virtual life (indeed, there is already evidence that this can be problematic).

In your conversations with non-tech audiences, what do you think is the biggest misconception about Artificial Intelligence?

I think most people still struggle to understand precisely what artificial intelligence is, and that’s completely understandable. There has been such an incredible deluge of hype and the actual technology has been mostly buried beneath it. Simultaneously, there has been a great deal of scaremongering about the implications of a new AI era – namely that artificial intelligence is here to overrule and outmode us. This combination of hype and scaremongering has been successful only in masking genuine, less apocalyptic concerns that should be given the oxygen they deserve in the public arena.

What ethical challenges on the horizon are currently ‘flying under the radar’ and not being talked about enough? What issue or specific technology do you think has NOT been given the attention it deserves?

I’m sure there are lots that also fly below my own radar, but a lot of my focus is on the potent yet incredibly subtle influence of tech on our choices and decision-making. Both online and off, tech companies are able to take all of the information they hold about us and use it to their advantage – “nudging” us towards decisions that are predominantly in their interests, and not necessarily in ours. As we move into ever more immersive and convincing online environments, we’ll have to determine when this kind of “nudge” becomes a “shove” as companies are given a greater opportunity to “hack” our cognition and undercut our agency.

Along the same lines, we should also have deep reservations about tech artifacts that claim to be able to read and interpret our emotions. The idea of a tech product deceiving a child or vulnerable adult into believing it truly “understands them” is extremely concerning, particularly when it comes to the influence that may entail.

If you could educate the general public with one piece of advice regarding their relationship to technology, what would it be?

Tech companies are currently concerned with building trust, and that’s a fine objective, but at the same time, I’d encourage the public to buck against this. They should become more skeptical, scrutinize the products they accept into their lives, learn what those products know and how that information is used, and decide for themselves whether what they’re getting in return amounts to a fair transaction.

Naively, we all gave so much of ourselves away for free email, search engines, and social media. It’s important that we encourage those growing up in this new AI age to think ahead to potential unintended consequences.

What advice would you have for others who may be interested in going into the field of Technology/Business Ethics?

To begin with, ethics. My background is in philosophy and philosophical ethics, and that gave me an incredible set of frameworks with which to scrutinize a range of different technologies. It’s easy (and honorable) to be moved by some of the problems that have already been discovered, but what we need are individuals with the skills to anticipate new problems so that they can be mitigated before they manifest.

Photo by Kevin Ku via Unsplash


Fiona McEvoy

About the author

Fiona J. McEvoy is an AI ethics writer, researcher, speaker, and thought leader, and founder of the popular website YouTheData.com. She has been named one of the 30 Women Influencing AI in SF (RE•WORK) and one of the 100 Brilliant Women in AI Ethics (Lighthouse3). Fiona contributes often to outlets like Slate, VentureBeat, and The Next Web, and regularly presents her ideas at conferences around the globe. Andrew from ATIH sat down with her to discuss how her humanities background influences her work, what tough questions she’s tackling these days, and more.

