Charlotte's Web Thoughts

I Promise You Will Be Tricked by AI

It's inevitable.
(Pay no attention to the man behind the curtain. Image credit: Warner Bros.)

[This blog will always be free to read, but it’s also how I pay my bills. If you have suggestions or feedback on how I can earn your paid subscription, shoot me an email: cmclymer@gmail.com. And if this is too big of a commitment, I’m always thankful for a simple cup of coffee.]


Are there things about AI that I find cool and interesting and hopeful?

Of course.

From medicine to education to national security, artificial intelligence has made once-improbable strides seem possible, and to most of us, it seems to have happened overnight.

It hasn’t been overnight. The technological advances we’re witnessing—and those on the horizon—are the culmination of decades of labor by determined and brilliant people.

Yet it sure does feel like it landed here quickly, doesn’t it? And that makes it hard to keep up. We’re being inundated with stories about AI, and the topic’s complexity is growing far faster than our ability to understand it.

As with every binary-coerced issue these days, the discourse around AI largely beckons us all into two camps: either you’re an AI optimist or an AI pessimist.

I am neither. I see a lot of promise, and I see a lot of challenges. So, I want to be clear that what I’m about to say isn’t a universal indictment of AI, nor should you let my commentary on this particular aspect of AI muddy the waters in every other aspect.

I am not a technologist or a scientist. I will be the first to admit that I have as much business discussing the finer details of tech policy and regulation as I do being behind the wheel at the Indy 500.

But I do know mass communications. More specifically (and tragically), I’m an expert on social media, and what I’ve been seeing, particularly over the past several weeks, has me very worried about how AI is accelerating disinformation online.

I have been especially concerned with a growing attitude among many progressives and Democrats that AI disinformation is a problem limited to conservatives, particularly Trump supporters.

There seems to be a perception among many that being progressive immunizes one against being tricked by disinformation. We’re too smart for that. Too informed. Too moral.

But make no mistake: no matter your politics or beliefs or values, you will be tricked by AI. It’s gonna happen. It’s inevitable. I guarantee it.

I’ll give you an example. In recent weeks, I’ve seen this particular image circulating online in progressive and Democratic spaces:

Now, before all the yelling starts, I wanna be clear that this particular image is not what bothers me.

There are plenty of real photos online of insecure gun nuts bringing their firearms into spaces where it’s completely unnecessary. This isn’t about that. I am not saying this doesn’t happen. We know this happens.

The issue is how readily grown adults accepted, without a second thought, that this image is real. And it’s not. It’s AI-generated.

It may not be obvious at first or second or third glance, but if you look closely, it ain’t that hard to confirm.

Some tells are more conspicuous than others.

This is not a foot:

This appears to be an arm that belongs to no one, unless the lady on the right is astonishingly flexible OR the lady on the left has an especially long and multi-angular forearm:

Less conspicuous is that cup on the counter. It bears a strong resemblance to Chick-fil-A’s brand design, but it isn’t. You will not find this design on any fast food beverage in the real world because it doesn’t exist:

And for most folks, far less conspicuous are all the scribblings on the overhead menu and countertop computer screen that are meant to represent words but are, in fact, just creepy gibberish generated by AI.

Now, look, if you shared this image somewhere online, believing it’s real, I’m not here to shame you. This is not a lecture intended to make you feel foolish or clownish.

Because as I pointed out above, we’ve all seen real images similar to this one: complete dorkass losers carrying assault rifles to get a burger. It’s understandable why someone would see this image, immediately accept it, and then share it with the folks in their life.

What worries me is how easy it has become to generate a believable image that tricks people who think themselves so well-informed about disinformation that they refuse to admit they’ve been had when it’s pointed out, however gently and respectfully.

When this image started going viral, a friend of mine posted it on Facebook. I pointed out the tells above that it’s AI-generated, and one of his friends got very defensive. He said he’s a professional photographer, that he knows real images from fake ones, and that I can’t argue with his expertise.

And he wasn’t alone. Every time I saw this image across social media, I would check the comments and find the same defensive response: Democrats and progressives who were absolutely incensed that they’d been tricked and could not accept, even with the obvious errors, that this is AI.

They knew, of course, that it was AI-generated after the inconsistencies were highlighted. They knew they had been tricked. But the shame of being tricked was so visceral—the realization that being on the left doesn’t mean they’re not vulnerable to disinformation—that they couldn’t admit it.

Because for them to admit they can be tricked by AI might mean they’re not savvier than some—not all but some—Trump supporters who have also been genuinely tricked by fake images and videos.

And also: it’s probably pretty scary to realize it’s this easy to be tricked.

Here’s why I’m saying all this: the best defense against AI-generated disinformation (and disinformation generally) is a good-faith centering of personal humility. It’s an understanding that we’re all humans dealing with unprecedented technology, and it’s easy to make mistakes.

There shouldn’t be any shame in acknowledging that our brains are wired in such a way that it’s not especially difficult for AI content to manipulate us. The shame should only come when our own pride prevents us from acknowledging our vulnerability to tech that is rewriting mass communications with every passing day.

You’re not a bad person or uncaring or “stupid” because you’re susceptible to AI disinformation. You’re just a human being in a changing world. And that’s okay.

It’s important to embrace this mindset because there are, unfortunately, no obvious fixes to what’s coming. Any clown can generate a believable AI image, share it online, simply call it “art,” and it will more than likely be protected speech.

Today, it’s a fake image of gun nut caricatures that simply look like real images we’ve all seen of actual gun nuts in the real world. Tomorrow, it’s a fake image of something that hasn’t happened but looks real and plays to our biases, and suddenly, without warning, disinformation becomes active harm.

Here’s my best advice: if you see an image going viral, before deciding to share it, take a few more moments to look closely for clues. Take the time to develop recognition of obvious and less-than-obvious tells that it’s AI-generated.

You don’t need a computer science degree or expertise in photography to develop this skill. You just need adequate eyesight, humility, and a willingness to engage in good faith.

If we’re all committed to that approach and offer each other more grace, it’ll be much harder for disinformation to spread.





