If you’ve spent any time on social media recently, you’ve probably seen the trend: people uploading selfies to AI services that render them as retro action figures sealed in plastic blister packs. The results are undeniably creative—quirky accessories, heroic stances, vintage fonts. It’s nostalgia wrapped in AI-powered novelty.
But beneath the fun lies a growing data privacy concern that’s far more serious than the aesthetic appeal suggests.
To generate these images, users upload high-quality photos of their face and sometimes even add personal descriptions to fine-tune the results. What’s often overlooked is that these photos—containing biometric features—are fed into third-party systems that frequently retain the right to store, process, and reuse the data, often for model training or performance monitoring. This means your likeness could potentially become part of the service’s internal datasets, with little to no transparency or opt-out mechanism.
The implications go far beyond just losing control of your image. These same techniques feed the pipeline that powers deepfakes—ultra-realistic, AI-generated videos or voices that can impersonate individuals with uncanny accuracy. With just a few images or recordings, sophisticated models based on GANs or transformer architectures can fabricate video footage that places you in scenes or conversations you never had. The technology is now accessible enough that bad actors don’t need to be nation-state hackers—they just need a GPU and an internet connection.
Deepfakes have already been used to execute fraud, manipulate elections, and sabotage reputations. Now imagine your face—uploaded innocently for an AI art challenge—being used in one of those contexts. It’s a future we’re speeding toward without fully appreciating the risks.
But here’s where I draw the line.
Yes, I joined the trend. I created an action figure image of myself. So, does that make me a hypocrite for warning others about the risks while participating myself? Not quite. Because when I work with sensitive data—or even just my own biometric information—I do it under controlled conditions.

I didn’t upload my face to websites or apps with vague terms and zero guarantees. I ran a local GenAI model on my own computer. Yes, that computer, the one where the guy at the store gave me side-eye for buying two graphics cards and an oversized power supply. And when I need more scale or specific model support, I turn to guarded environments hosted by hyperscalers like AWS, Microsoft Azure, or Google Cloud, where data governance, encryption, and access policies are firmly in place.
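For the curious, here is roughly what "local" means in practice. This is a minimal sketch, assuming the Hugging Face diffusers library, a Stable Diffusion checkpoint already downloaded to disk (the path and model choice are illustrative, not my exact setup), and a CUDA-capable GPU:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a checkpoint that already sits on local disk; local_files_only=True
# ensures the library never reaches out to a remote hub for weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "./models/stable-diffusion-v1-5",   # illustrative local path
    torch_dtype=torch.float16,
    local_files_only=True,
)
pipe = pipe.to("cuda")  # inference runs on the local GPU; nothing leaves the machine

# Generate the novelty image without uploading a single selfie anywhere.
image = pipe(
    "retro action figure in a vintage blister pack, 1980s toy packaging"
).images[0]
image.save("action_figure.png")
```

The prompt above produces a generic figure; personalizing it with your own photos would add an image-to-image or fine-tuning step on top, but the point stands either way: the pictures stay on your own disk, under your own control.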
That’s the key difference: I own my processing pipeline. My data stays where I can monitor, control, and erase it. This isn’t about paranoia—it’s about responsible engineering.
The bigger concern is that professionals—especially those in tech, healthcare, finance, or legal sectors—often participate in these trends without considering the broader implications. We’ve seen people upload screenshots of internal dashboards to GenAI tools “just to summarize,” or paste confidential emails into chatbots “to draft a reply.” Each interaction is a data leak waiting to happen.
If you’re handling sensitive customer data or IP, and your prompt touches any part of that, you’re gambling with more than your reputation—you’re playing roulette with compliance, trust, and security.
The solution isn’t to avoid AI altogether. It’s to understand the risks and apply common-sense boundaries. Host models in environments you control. Use anonymized or synthetic data when testing. Establish internal policies for AI use. Educate your colleagues, not with fear but with facts.
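On the anonymization point, even a crude pre-processing step beats pasting raw text into a chatbot. Here is a minimal Python sketch; the regex patterns and the redact helper are illustrative only, not a complete PII solution:

```python
import re

# Illustrative patterns for obvious identifiers. These will not catch
# everything (names and addresses, for example, need proper NER).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with typed placeholders before the text
    is sent to any external GenAI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    email_body = "Please refund Jane Doe, jane.doe@example.com, +49 170 1234567."
    print(redact(email_body))
    # -> Please refund Jane Doe, [EMAIL], [PHONE].
```

Hand-rolled patterns like these miss plenty (names, internal project codes, customer IDs), so a dedicated PII-detection library or an NER model is the better fit in production. But the principle holds: scrub or synthesize before you share.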
Because the truth is: technology is only as responsible as the people deploying it. It’s on us to model better behavior—both technically and ethically.
That action figure trend may look like innocent fun. But in a world increasingly shaped by synthetic content, the choices we make about our data today shape the risks we face tomorrow.