Introducing AI Futures: Where Real Research Meets "What If?"
A new series from Charades.net — and a look at how we're building it.
Every day, another headline lands: a deepfake CEO tricks a finance team into wiring millions. An AI-cloned voice fakes a teenager's panicked call for help. A chatbot impersonates a government agency so convincingly that even security professionals hesitate before flagging it.
These aren't science fiction. They're Tuesday.
And if you're like most people, you've probably had the same unsettling thought: What happens when this stuff gets better?
That's exactly what our new series, AI Futures, sets out to explore. Charades.net is launching paired posts that tackle the cybersecurity threats emerging from artificial intelligence. Not with jargon. Not with scare tactics. With research, stories, and strategies you can actually use.
Here's what that looks like and why we're doing it this way.
Two Posts. One Threat. The Full Picture.
Each AI Futures installment delivers two companion pieces built around the same cybersecurity topic:
The Factual Report comes first. This is the research post — the one that lays out what's actually happening. It covers the facts: what the threat is, how it works at a high level, who it affects, who the bad actors are, and what solutions exist today. Every claim is grounded in deep research conducted across multiple AI platforms and cross-referenced against published sources. If we say a particular attack vector is growing, we'll show you the data behind that statement.
These reports are written for people who want to understand the landscape without needing a computer science degree. Think of them as the briefing you'd want before a conversation with your company's IT team — enough depth to be useful, enough clarity to be accessible.
The WhatIf Story follows. This is where the research comes alive. Each story takes the factual foundation from its companion report and asks: What if a bad actor actually pulled this off? What would it look like from the inside?
The result is short fiction — a narrative set in the plausible near future (one to five years from now) that follows ordinary people encountering AI-powered threats. A marketing manager whose voice gets cloned. A small business owner whose vendor relationships get exploited by an autonomous AI agent. A retiree whose financial advisor turns out to be entirely synthetic.
These aren't horror stories. They're cautionary tales designed to make you feel the threat that the factual report explains. And every single one ends with concrete prevention strategies — steps you can take today to protect yourself.
Why Both? Because Facts Alone Aren't Enough.
We started Charades.net to make cybersecurity accessible. Our Not-So-Tech-Savvy guides break down concepts like phishing, VPNs, and multi-factor authentication for people who need protection but don't speak the language.
AI Futures takes that mission further. The challenge with AI-powered threats is that they're abstract until they're personal. You can read a report about deepfake voice cloning and think, "That's concerning." But when you read a story about someone's mother getting a panicked phone call from a voice that sounds exactly like her daughter, and the call is fake, the threat becomes real in a way no statistic can match.
That's the logic behind the dual-post format. The factual report gives you the knowledge. The story gives you the motivation. Together, they make the threat both understandable and unforgettable.
Built on Deep Research, Not Guesswork
Every AI Futures topic begins with structured deep research. We run detailed research prompts across multiple AI platforms — currently Claude, ChatGPT, and Gemini — and synthesize the results. This isn't a quick search and summary. Each prompt is designed to surface specific dimensions of a threat: the technical mechanisms, the real-world incidents, the affected populations, the economic impact, the existing defenses, and the gaps where defenses don't yet exist.
We then cross-reference these findings against published reports, incident databases, and expert analysis. The factual posts cite their sources. The stories stay faithful to the research while translating it into narrative.
This multi-platform research approach matters because no single AI has a complete picture. By running the same structured prompts across different systems, we catch blind spots and build a more comprehensive threat profile than any single source could provide.
Yes, We Use AI to Write About AI Threats
It would be strange to publish a series about AI-powered cybersecurity threats without being transparent about our own use of AI in the process.
AI Futures is built with significant AI assistance. The deep research phase uses AI platforms directly. The writing process involves AI collaboration — drafting, refining, fact-checking, and validating content through structured workflows that ensure quality and accuracy.
We think this is the right approach for several reasons. First, the volume of research required to cover eight major threat categories across multiple dimensions would be impractical without AI assistance. Second, the same AI capabilities that enable the threats we're writing about also enable us to research and explain those threats more effectively. And third, we believe in practicing what we preach: AI is a tool, and like any tool, what matters is how you use it.
Every factual claim is verified. Every story is reviewed against its companion research. The AI assists the process; it doesn't replace editorial judgment.
What's Coming
AI Futures covers eight major threat categories:
AI-Powered Social Engineering — voice cloning, synthetic personas, and next-generation phishing
Autonomous AI Attack Systems — self-evolving malware and AI agents that hack without human direction
Deepfake and Synthetic Media — fabricated evidence, identity fraud, and reality manipulation
AI-Enabled Financial Fraud — synthetic identities, investment scams, and automated financial crime
Supply Chain and Infrastructure Attacks — when AI targets the systems that keep society running
Privacy Erosion and Surveillance — stalking, profiling, and the death of anonymity
AI vs. AI: When Defenses Fail — adversarial attacks, prompt injection, and the arms race
Human Factors and Psychological Manipulation — personalized manipulation, addiction engineering, and trust erosion
Each category will generate multiple scenarios over time, so you'll see different facets of the same threat explored through different characters, settings, and situations. The factual reports will evolve as the threat landscape shifts, because it will.
A Note About Our Stories
Every WhatIf story carries a disclaimer, and we want to be clear about it here as well: the people, AI systems, and organizations in our stories are fictional. When we refer to AI companies in the stories, we use fictional names. We don't depict real hacking techniques — the technical details are abstracted to illustrate concepts without providing a roadmap. And our time horizon is the near future: plausible scenarios within the next five years, not dystopian fantasy.
The goal is awareness, not anxiety. Every story ends with prevention strategies because the point isn't to scare you — it's to prepare you.
Two Visual Styles, One Mission
You'll notice something distinctive about our imagery. AI Futures uses two visual styles that match its two content types.
The WhatIf stories feature artwork inspired by the beautifully rendered adventure games of the 1990s — think the richly detailed pixel art of classic LucasArts titles. These retro gaming visuals transform cybersecurity scenarios into scenes you want to explore, making the fiction feel like an interactive experience rather than a warning label.
The factual reports use a different approach: dark, cinematic photography with a tech-noir edge. These images ground the research in reality — server rooms, screens in shadow, anonymous figures in urban environments. The photographic style signals authority and urgency.
The contrast is intentional. When you see pixel art, you know you're in a story. When you see the photographic style, you know you're reading research. Two visual languages, one mission: making cybersecurity threats both memorable and actionable.
Stay in the Loop
If you want to be the person in your circle who actually understands what AI threats look like — and knows how to defend against them — AI Futures is for you.
The factual reports give you the knowledge. The WhatIf stories make that knowledge stick. And every post ends with something you can do about it.
The threats are evolving. Your understanding should too.
AI Futures launches Spring 2026. Follow Charades.net so you don't miss it.