How Generative AI Is Ruining Travel Content on Social Media & What Social Media Companies Should Do

A few weeks ago, I was scrolling through Facebook when three eye-catching images popped up in a row: a dramatic desert view, a breathtaking canyon, and a peaceful mountain scene. At first glance, they looked incredible. But then I realized—they weren’t real. All three were heavily altered or completely created by AI.

One image showed massive, hundred-foot spires rising from a canyon floor—completely made up. Another had saguaro cactuses scattered across a landscape where they don’t even grow. And a mountain supposedly in the Northeast looked like it belonged in the Alps. None of it was accurate, and some of it was off by a very wide margin.

But people in the comments were loving them. “That’s so beautiful,” one person wrote. “God is an amazing creator,” said another.

I felt like I had to say something, so I pointed out that one of the photos was AI-generated. My comment got deleted almost immediately by the page admin. Meanwhile, the photo is still making the rounds on social media, with every comment praising it like it’s the real thing. There’s still no mention of AI—just a stream of awe and wonder. I’m guessing they’re deleting any other comments that call it out, too.

Since then, I’ve seen the same thing happen over and over again. Multiple times a day, it’s like—there goes another AI slop post. And another. And another.

Thankfully, more and more people are starting to call it out. But even those comments, ironically, drive up engagement on the posts. That probably boosts their visibility in the algorithm and just encourages these accounts to crank out even more AI-generated content—for attention, or maybe even a few bucks.

It’s honestly disturbing.

The fact that it only takes me 15 to 30 seconds of scrolling to see dozens—sometimes hundreds—of people being misled by fake AI images really doesn’t sit right with me.

And that’s not even touching on what’s being lost. Real photographers are out there planning trips for weeks, hiking into remote areas, waking up at 3 a.m., and sometimes risking their safety scrambling up steep cliffs—just to capture and share something authentic. Meanwhile, AI can spit out a fantasy scene in seconds, often trained on those same photographers’ work without credit or consent.

Forget the intellectual property issues. Forget the environmental cost of training these models. Even if you put all that aside, what’s left is a deeply unfair—and honestly, distressing—loss of connection to the real world.

What traveler wants to live in a world flooded with false realities? Who wants to scroll past fake mountains, rivers, and forests—places you’ll never actually be able to visit or connect with? Who wants to see less work from real photographers capturing real places?

If this trend continues, what are we left with? A warped version of the world, where the most beautiful landscapes aren’t real, and the ones that are get buried under AI-generated noise.

And what happens then? We start to doubt everything we see. We hesitate to believe, to engage, to get inspired—because we’re never quite sure if it’s real. That doubt seeps into our everyday experience of the internet, and we just… learn to live with it?

In a way, it feels like we’re going backward—to a time before photography, when we relied on secondhand stories and paintings, knowing they might not reflect reality. But now it’s worse, because the fakes look more convincing than ever, and they’re everywhere.

This is very different from the filters and enhancements we’ve grown to accept over the last decade or two. Those tools help photos glisten—bring out colors, sharpen details, maybe add a bit of mood. When done right, they enhance what’s already there. They don’t invent a new reality.

But AI-generated landscapes aren’t just tweaks. They’re fabrications. They conjure up entire scenes, placing trees, cliffs, lakes, or mountains where none exist. It’s not about elevating a moment—it’s about replacing it with something imaginary.

And that shift changes how we experience the world. It disconnects us from place, from memory, from truth. That’s not just a creative choice—it’s a cultural one. And it deserves a lot more thought than it’s getting.

If I just want to “see something cool,” sure—an AI image might grab my attention for five seconds before I scroll on to the next thing. That kind of quick-hit visual stimulation has its place. But only in the right context, like on a page dedicated to AI imagery or on a post clearly marked as an AI creation.

But if I’m planning a trip somewhere or browsing for inspiration, I want to see what the place actually looks like. I want to understand the light, the terrain, the mood—what it feels like to stand there. And as a content creator, I want to see what’s possible to capture through a real lens, in real conditions.

I have zero interest in some algorithm’s loose interpretation of a place I care about. I don’t want AI slop. I want something real—something I can connect with, and maybe even chase myself.

Social media companies are pouring billions into AGI, but instead of just fueling more AI content, they should be using that power to better detect, flag, and penalize accounts that are slipping AI-generated images into the feed as if they’re real. How hard would it be to create filters that catch repeat offenders—based on comment patterns or even just a report button?
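For what it’s worth, the bookkeeping behind a “repeat offender” filter wouldn’t need to be exotic. Here’s a minimal sketch in Python, with invented names (report_ai_image, AIReportTracker) and thresholds I picked arbitrarily, of how a platform could count distinct user reports per post and deprioritize accounts that keep racking up flagged posts. It’s an illustration of the idea, not anything any platform actually runs.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical thresholds; a real system would tune these and guard
# against abuse of the report button (brigading, bot reports, etc.).
REPORTS_PER_POST_TO_FLAG = 25    # distinct reporters before a post is treated as likely unlabeled AI
FLAGGED_POSTS_TO_PENALIZE = 3    # flagged posts before the account itself is deprioritized

@dataclass
class AccountRecord:
    flagged_posts: set = field(default_factory=set)
    deprioritized: bool = False

class AIReportTracker:
    """Toy sketch: count 'unlabeled AI image' reports per post and per account."""

    def __init__(self):
        self.reporters_by_post = defaultdict(set)   # post_id -> set of user_ids who reported it
        self.accounts = defaultdict(AccountRecord)  # account_id -> record

    def report_ai_image(self, reporter_id: str, account_id: str, post_id: str) -> None:
        # Count each reporting user once per post.
        self.reporters_by_post[post_id].add(reporter_id)
        if len(self.reporters_by_post[post_id]) >= REPORTS_PER_POST_TO_FLAG:
            record = self.accounts[account_id]
            record.flagged_posts.add(post_id)
            if len(record.flagged_posts) >= FLAGGED_POSTS_TO_PENALIZE:
                # Rank lower, or require an AI label before restoring reach.
                record.deprioritized = True

    def is_deprioritized(self, account_id: str) -> bool:
        return self.accounts[account_id].deprioritized
```

The hard part isn’t the counting; it’s keeping the report button itself from being gamed. But platforms already deal with mass-report abuse for spam and harassment, so this isn’t new territory for them.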

The bigger question is: what should happen to those accounts?

If it were up to me, they’d be deprioritized or shadowbanned until they’ve shown they’re willing to be transparent and stop misleading people. But let’s be honest—these platforms are too invested in engagement numbers to do anything that might reduce clicks, even if it means allowing misinformation to spread.

Yes, there are supposed to be AI content tags. But at least on Facebook, I rarely see them—and yet these posts still get tons of reach.

What I wish platforms would do is give users a choice: let me filter out all accounts with a history of misleading AI image reports. Just give me the option to opt out. Let me purge my feed of this stuff.

I’d be perfectly happy never seeing those accounts again.
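And the user-facing opt-out could sit on top of the same bookkeeping. Another hedged sketch, reusing the hypothetical AIReportTracker from above: a feed filter that simply drops posts from deprioritized accounts for anyone who has switched the (equally hypothetical) setting on.

```python
def filter_feed(posts, tracker: AIReportTracker, hide_flagged_ai_accounts: bool):
    """Toy opt-out filter: drop posts from accounts flagged as repeat AI mislabelers.

    `posts` is assumed to be an iterable of objects with an `account_id`
    attribute; the setting name and shape are invented for illustration.
    """
    if not hide_flagged_ai_accounts:
        return list(posts)
    return [p for p in posts if not tracker.is_deprioritized(p.account_id)]
```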
