
There is a lot of talk about AI, and not all of it is positive. While some fear it could spiral beyond our control, others promote optimistic visions of what it can achieve and the benefits it may bring.
The idealized notion of “good AI” has become a marketing tool for companies selling their products. Yet many consumers remain hesitant about AI in certain products, raising concerns that this upbeat promotion may pressure people into accepting more AI than they actually want.
AI is so widespread that opting out feels almost impossible. It is built into our smartphones and TVs, smart speakers such as Amazon’s Echo, and virtual assistants like Siri and Alexa. We are assured our privacy will be protected, but given the sensitive data these devices access, can we really trust those claims?
Some politicians echo the “good AI” narrative with great enthusiasm, aligning with the messages from tech companies.
My recent research, discussed in my new book The Myth of Good AI, demonstrates that the data used to build AI systems often reflects biases, favoring privileged groups and mainstream viewpoints.
This bias can produce AI products that overlook the experiences of marginalized and minority groups, which helps explain persistent problems such as racism, age discrimination, and gender bias in AI systems.
The rapid integration of this technology into our daily lives makes it challenging to fully understand its impact. A more skeptical viewpoint on AI doesn’t fit the typical marketing narrative of tech companies.
Power dynamics
Currently, the positive perception of AI heavily influences innovation in the sector. This is driven by state interests and the profit objectives of tech giants.
The influence of wealthy tech moguls and their ties to governments shape this landscape. A notable example is the relationship between Donald Trump and Elon Musk.
As a result, the public is often at the mercy of a hierarchical system, where major tech firms and their governmental supporters dictate how technology is utilized. This positive narrative around AI primarily serves to maintain power and profit.
Currently, there is no global initiative or manifesto uniting communities to ensure that AI serves the public good or protects individual privacy rights. The “right to be left alone,” a principle rooted in US constitutional law and international human rights frameworks, is rarely mentioned by major tech firms in their assurances about AI.
However, some dangers presented by AI are already clear. One report tracking lawyers’ use of AI identified 157 cases in which incorrect AI-generated information affected legal decisions.
Certain AI applications can also be exploited for blackmail, used to generate instructions for violence, or turned to other harmful ends.
Tech companies must ensure their algorithms are built with diverse data to minimize discrimination. Doing so would also ease the pressure on the public to accept the idea that AI can solve all our problems without adequate safeguards. Human creativity, ethics, and intuition remain fundamentally different from anything machines can offer.
It’s crucial for everyday individuals to challenge the myth of good AI. A critical outlook on AI can lead to technology that is more socially responsible and brings benefits to society as a whole, as discussed in the book.
Some experts believe we are about a decade away from AI outpacing human capabilities across all tasks. Until then, we must stay vigilant about threats to our privacy and deepen our understanding of how AI operates.