AI is in our daily lives. How much are we thinking and talking about it?

Stacey Kaleh - Curious Optimist
5 min read · Jan 3, 2023

Artificial intelligence technologies have been rapidly adopted and widely accepted in recent years. We encounter them when we open Netflix or our music apps and see recommendations, when we use photo filters to auto-adjust our images, when we scroll through social media news feeds, interact with customer service chatbots, and apply for jobs via job search platforms. AI systems are embedded in public cameras in our cities and in things like baby monitors in our private homes. They’re operating in our cars and navigation systems, and in delivery systems, too.

I’m not a technologist, but I’m generally interested in AI and new technologies, and I’ve worked in communications and witnessed the media landscape change dramatically with the rise of AI. These technologies have been enough to give me pause and to start me on a journey of learning more.

Working in media and marketing gave me a behind-the-scenes glimpse of audience-targeting capabilities and a keen eye for social media B.S. But I saw too many friends and family members fall victim to messages from advertisers and bots that preyed on their personal data and vulnerabilities. What more could I do? Are there any great media and data literacy programs out there?

When I purchased my Tesla (I was an early adopter of the Model 3), I was motivated to reduce my carbon footprint and use cleaner energy. But I almost didn’t go through with the purchase because I wasn’t clear on how data on my driving behavior would be used, or if the car’s computer could potentially be hacked. When I installed the Cubo AI baby monitor above my daughter’s crib, I was excited about its seamless connectivity to my phone as well as the ability to view sleep statistics and scroll through segments of video of my sweet little one. But I also wondered if the video would be stored somewhere, if someone would study it or use it to build more and better baby monitors, and if the monitor listens to more than just my baby’s cries. Does it pick up all of the chatter around our house, and what does it do with that? Ultimately, I weighed the benefits and uncertainties, and chose to put my faith in the companies.

But that trust isn’t always deserved, and making these types of decisions on what technologies to use or not use seems increasingly difficult. Are there tools out there to help us make these decisions with more confidence?

Being the overthinker and observer that I am, I realized that, for the past several years, I’ve been running a new layer of analysis every time I make a purchase or decide whether to adopt a new technology platform. When I first created an account on Facebook, I didn’t think twice about posting my personal photos, or about where they would be stored or how they would be used. When I started using Twitter, I didn’t spend time fact-checking articles; I just trusted the news sources. What was the tipping point? When did I start questioning just enough to go through the added effort of weighing the convenience of new tech against what I would be trading for it (data, trust, etc.)?

I’m not sure I can pinpoint the exact moment of this transition, but I think it occurred somewhere between all of those Facebook hearings, “The Social Dilemma” documentary, and a crazy number of sci-fi shows and movies featuring robots and self-driving cars doing not-so-great things to humans. And then I went over the edge when misinformation started putting people’s lives at risk in a very visible way (a global pandemic) and clearly threatening American democracy. In any case, I’m not afraid to say the media influenced me. Public conversation shifted, and my paradigm shifted.

For me, the media’s power to shape public perception and fuel certain behaviors is not to be underestimated. That’s one reason I’ve worked in communications for so long: the way we communicate, and what we communicate, greatly affects our lives. Communication can do so much good, but it can also do so much harm and spread so much hate. It is deeply influential because we humans are knowledge seekers and communicators at our core; we trade in information and trust. For me, and for many others, that trust has been broken. But the media remains a key source of knowledge and a key way to share it. What do we do?

These past few months alone, trying to follow AI news has been an absolute whirlwind. The pace of technology never ceases to amaze me. AI-generated art is winning competitions. I’m starting to see Midjourney- and DALL-E-generated designs replace human-made designs and graphics in the newsletters I read. A Google engineer believed an AI chatbot was sentient (if it can persuade an engineer who knows it’s an AI, who else can it convince?). From simple prompts, OpenAI’s ChatGPT can write poetry, marketing copy, and college essays that can be passed off as human work (I’ve experimented with this myself; more on this later). It can also draw on the vast swath of the Internet it was trained on to answer your questions in a concise, conversational way, leveling up something like a Google search.

To me, it feels like we’re reaching a new wave of AI technologies, ones capable of touching areas of our lives that once seemed untouchable (art and literature are fundamentally human endeavors, right?). What happens when we’re no longer as adept at sensing another’s humanity?

That’s why I’m writing this now. I think it’s important to shine a light on the role of the media (yes, including influencers and bloggers and all) in covering these topics and investigating these questions. Even in this state of distrust, I have to say I’m glad publications like the New York Times are covering AI and, for what seems like the first time, writing about ethics in AI. And writers like Alberto Romero, author of “The Algorithmic Bridge,” are becoming go-to sources for me. How AI is changing our lives is something we have to talk about. Change happens quickly, and we’re quick to accept new and exciting tech when it offers so many benefits. And this next wave of tech seems even more game-changing, even world-view-changing.

These new technologies can be awesome creative tools. They can help us save our most precious resource — time. They can bring people together and improve services at great scale. But could they cause harm? Who do they serve? Are they biased? Are they exacerbating inequities? What are we trading in exchange for them? How is our personal data being collected, stored, sold, and used?

It’s a new layer added to the already heavy weight of our decision-making processes; that’s one cost of these technologies. But I think it’s worth adding. What’s deserving of our trust? What’s the right trade-off, the one that makes technology more beneficial than detrimental? WE get to decide.

I think it’s important for us to start asking more questions, to demand responsible data stewardship and transparency. And let’s share with each other what we know about algorithms, bots, and data use. Let’s be part of a media/AI/data literacy movement.

Ideas welcome! More to come.

[Image: Created by Stacey Kaleh using Midjourney.]


Stacey Kaleh - Curious Optimist

Writer. Expert in museum studies and nonprofit communications. Lover of live music and Texas wine. Interested in Ethical AI. Native Austinite.