In this first of six episodes of AI Experience, we’ll take a closer look at the influence of public perception on the future direction of artificial intelligence.
The European Union’s recent proposals to strictly regulate the use of artificial intelligence within its borders are likely to have significant consequences for AI development. The guidelines are designed to protect the human and societal rights that may be at risk due to AI’s potential for surveillance and identification. For nations and people to make the right decisions, it’s important that there be public discourse on what AI is and what it should be.
LG Electronics has already begun this process.
Developed last year in partnership between LG and Element AI, the report AIX Exchange: The Future of AI and Human Experience covers the challenges of the artificial intelligence experience (AIX) across the six core themes of public perception, ethics, transparency, user experience, context and relationship.
Public perception, the way people think and feel about a subject, plays an important role in the kind of AI products and services consumers choose to adopt. This, in turn, influences the direction of AI development and advancement. For example, if consumers are obsessed with intelligent toasters, there’s a strong chance many AI companies will pour their resources into coming up with a new AI toaster that can top the current AI toaster. Here, we’ll take a look at five areas that greatly influence the public’s opinion and awareness of AI: news and pop culture, language, marketing, design and education.
News and Pop Culture
From 1927’s Metropolis to 2001: A Space Odyssey (1968) to Ex Machina (2015), film portrayals of AI rarely paint a bright future that offers us hope. Although there are many stories in the news media about how AI is improving lives, there are just as many that portray the technology as the villain, stealing jobs and futures. Those involved in developing the technology are acutely aware of the impact this has on consumer confidence and trust, and just how important it is to separate fact from fiction when communicating about AI.
Language
Words can sometimes be open to interpretation, and this seems to be especially true where AI is concerned. The word “learning” could make a person believe that a machine that “learns” can do far more than what it was programmed to do, creating fear and mistrust. As AI is such a vague and amorphous concept for many, companies in the AI sector need to take great care when describing their products and services, leaving no room for misunderstanding. Choosing words wisely may help to better deliver the promise of AI.
Marketing
On a daily basis, consumers are exposed to marketing that promotes AI products and services – most of which offer convenience by managing simple tasks independently or with little user input. But there are many situations where the marketing suggests these innovations will revolutionize users’ way of life, which raises expectations for AI beyond what is currently possible. The overpromising marketing bubble can lead to disappointment and negatively affect public perception, fueling a phenomenon known as AI winter.
AI winter refers to setbacks in the development of AI resulting from a lack of enthusiasm and interest among consumers and investors. Because the technology is not advancing as fast as people had hoped, and doesn’t yet resemble the AI of Hollywood movies, companies should be wary of overstating what their offerings can do. In this way, they can avoid contributing to negative public sentiment and allow users to fully appreciate what the technology can deliver right now.
Design
The experience that users have when interacting with AI is another critical element in determining public perception. Personal AIX stories – good and bad – are shared every day on social media, reconfirming people’s positive or negative perceptions and influencing the opinions of those who have yet to use any AI-based solutions. To ensure better experiences with AI, the adoption of a human-centered approach to AI design is critical. A thorough examination of what consumers need and desire, as well as pain points that detract from a product’s value, are essential to the development of capabilities that deliver greater practical value in daily life.
Education
“Technology literacy is not only important for the people who are building the systems to help them think about the impact of what they’re building,” said Charles Lee Isbell, Dean of the College of Computing at the Georgia Institute of Technology. “It is at least as important, perhaps more important that we teach people who are not going to build those systems, but are going to be impacted by those systems.”
The more the public is educated about AI, the faster it will adapt to and appreciate the benefits the technology can bring. Conversely, more education will also help the public understand what AI isn’t and that the machine overlord takeover will not be the inevitable consequence of letting a smart speaker into the home.
At the end of the day, a variety of factors shape the public’s perception of AI, and the impact of that perception is larger than one might think. Through consumer-centric AIX, companies can help create a path to where the direction and development of AI results in the greatest benefit to humankind.