What If - The Recruiter Who Wasn't Real

A CHARADES.NET WHAT-IF TALE

The notification appeared at 4:47 PM on a Tuesday, sandwiched between a Slack alert about a failed build and a calendar reminder for a meeting Alex Reeves had already decided to skip.

Natasha Volkov wants to connect.

He thumbed over to LinkedIn. Senior Technical Recruiter at Helios Systems. Stanford MBA. Eight years in the industry. Five hundred and twelve connections. Her profile photo showed a woman in her mid-thirties with dark hair pulled back, minimal makeup, the kind of confident half-smile that said I don't need to try hard because the company name does the work.

Helios Systems. The Helios Systems. The company that had just shipped the real-time threat detection platform everyone in the security industry was talking about.

Alex looked up from his phone at the open-plan office of CipherWall, the midsize cybersecurity firm where he'd spent the last three years doing exactly the kind of work that wasn't getting him promoted. His annual review was still warm on his manager's desk. "Solid contributor. Room to grow into leadership." The corporate equivalent of a participation trophy.

He accepted the connection request.


Natasha's first message arrived the next morning.

Hi Alex — I came across your ByteShield contributions on GitHub. The dependency injection pattern you used for the plugin architecture is exactly the kind of thinking we look for on our offensive security team. Would you be open to a conversation?

He read it twice. ByteShield was a niche open-source security testing tool he'd built on weekends. It had maybe two hundred stars. Nobody outside a small circle of security researchers even knew it existed.

She knew it existed.

He typed back: Always happy to talk. What's the role?

Her response came within minutes — not the instant reply of a bot, but the comfortable pace of someone who'd been waiting for his answer. She described a new red team initiative at Helios. Offensive security research. The kind of work Alex had been begging CipherWall to let him do.

Over the next two weeks, the messages came daily. Not pushy. Not transactional. Natasha asked about his approach to fuzzing methodologies, his opinion on a recent CVE disclosure, whether he'd ever considered relocating to the Bay Area. She remembered that he preferred Go over Python ("I saw your Twitter thread about dependency management — hilarious"). She mentioned his dog, Biscuit, by name — he'd posted a photo on Instagram three months ago.

It felt less like a recruitment pipeline and more like finding a professional soulmate. Someone who finally saw what he could do instead of what he hadn't done yet.

On day ten, Natasha introduced him to David Chen, a staff engineer at Helios. David's LinkedIn was impressive — MIT, seven years at Helios, co-author on two published papers about adversarial machine learning. They had a brief chat about team culture, technical challenges, the kind of problems Helios was solving at scale. David seemed sharp, genuine, and enthusiastic about bringing Alex on board.

Alex didn't run a reverse image search on Natasha's photo. He didn't call Helios's main line to verify her employment. He didn't check whether David Chen's published papers actually existed.

He was too busy being wanted.


The first video call was scheduled for the following Monday. Alex wore a collared shirt — the one Mara said made him look "like someone who gets things done" — and logged into Zoom five minutes early.

Natasha appeared on screen looking slightly different from her LinkedIn photo. New glasses, she explained. Her office background was a tasteful blur of bookshelves and ambient lighting. She was warm, professional, and asked questions that felt like they came from someone who had actually read his resume.

"Tell me about the hardest bug you ever found."

Alex told the story of the session fixation vulnerability he'd discovered in a client's authentication system — the one CipherWall's CISO had praised and then buried because the client didn't want to disclose it. Natasha listened with the kind of attention that made him feel like the story mattered.

The second call came three days later. This time, David was there, along with Priya Sharma, introduced as the team lead for Helios's offensive security division. Three faces in three Zoom tiles. They asked Alex to walk through his approach to a hypothetical penetration testing scenario — a SaaS application with a complex microservices architecture. The questions were technical, specific, and fair. Priya pushed back on one of his assumptions about lateral movement, and he revised his answer. She nodded. "That's the kind of adaptive thinking we need."

Alex hung up feeling something he hadn't felt in months at CipherWall: the electric certainty that he was exactly where he was supposed to be.

None of the three faces on that Zoom call belonged to real people. Natasha, David, and Priya were digital constructs — AI-generated faces rendered in real time through a tool the operators called PhantomForge, animated with micro-expressions and gaze patterns sophisticated enough to pass casual inspection. Their questions were fed through MirrorCoach, a conversation management system that had spent two weeks analyzing Alex's public output — every GitHub commit, every Twitter thread, every LinkedIn endorsement — to build a psychological profile precise enough to know which compliments would land and which questions would make him feel competent.

Alex was one of twelve targets in the campaign. He was the only one who had made it to the assessment stage.


The email from Natasha arrived on a Tuesday evening.

Great news — the team loved your interviews. One final step: a technical assessment. Standard process for all senior candidates. You'll find the testing framework in the GitHub repository below. Clone the repo, run the test suite locally, and submit your results by Monday. Let me know if you have any questions!

The repository link pointed to a private GitHub repo. Alex clicked through. Clean directory structure. Well-documented README. A security testing framework written in Go — his preferred language. The code was organized with the kind of care that suggested a mature engineering team. He scrolled through the test files. Some of them even referenced patterns from his own ByteShield project.

He pulled the repo to his local machine.

For one moment — one lucid, trained moment — something flickered at the edge of his awareness. A voice from a hundred security briefings: Never execute untrusted code on a corporate device. He knew the rule. He'd taught the rule. He'd written a section about exactly this scenario in CipherWall's employee security handbook.

But this wasn't untrusted code. This was Helios Systems. He'd used their public security tools just last month. He'd spent three weeks talking to their team. And Natasha had said there was a deadline. And his work laptop already had the right development environment. And setting up a VM would take an hour he didn't have because Mara had a doctor's appointment and he'd promised to cook dinner.

Just this once.

He ran the tests. The terminal scrolled with familiar output — test results, coverage metrics, a final summary. Everything looked normal. Everything was normal, except for the forty-seven lines of code buried inside a test helper function that executed silently alongside the legitimate tests. Those lines launched a credential harvester called SilkThread — a tool that, over the next seventy-two hours, would quietly extract Alex's VPN credentials, SSH keys, API tokens, browser session cookies, and cached authentication certificates.

Alex reviewed his test results, wrote a brief summary, and submitted them to Natasha.

She responded within the hour: Strong results. We'll have an offer ready by Friday. Welcome to Helios.

He texted Mara: I think I got the job.


Friday morning. Alex badged into CipherWall's office at 8:15, coffee in hand, feeling lighter than he had in months. He was halfway to his desk when he noticed the SOC — the Security Operations Center, the glassed-in nerve center of the office — was full. Every analyst was at their station. The wall displays showed traffic graphs spiking red.

His manager intercepted him in the hallway. "Conference room. Now."

The CISO was already there, along with the head of incident response and a woman Alex didn't recognize from legal. The CISO's face had the controlled stillness of someone delivering very bad news very carefully.

"Alex, we detected anomalous outbound traffic from your workstation beginning Tuesday evening. Someone used your credentials to access the client VPN configuration database, the zero-day vulnerability catalog, and the customer deployment architecture files. The exfiltration totals approximately four hundred gigabytes over seventy-two hours."

The room tilted. Alex heard himself say, "That's not possible."

"The forensics team identified a credential harvesting tool — embedded in a GitHub repository you cloned Tuesday evening. Your endpoint was the initial access vector."

The woman from legal placed a document on the table. Alex didn't read it. He was looking at his phone, where a new LinkedIn notification glowed:

Natasha Volkov sent you a message.

He opened it. "Looking forward to your start date!"

He clicked her profile. The page loaded blank. This profile is no longer available.

David Chen. Gone. Priya Sharma. Gone. Helios's public team directory — he pulled it up on his phone with shaking hands — had no record of a Natasha Volkov. Had never had a Natasha Volkov.

Six weeks. Dozens of messages. Three video calls. A relationship that had felt more validating than anything in his professional life. And none of it was real.

Alex Reeves — security engineer, handbook author, the guy who trained new hires on social engineering awareness — had been socially engineered.


The exit was quiet. A security escort to his desk. A cardboard box. His badge collected at the front door.

Mara was waiting in the parking lot. He got in the car and stared at the dashboard.

"What happened?"

"I can't — it's complicated."

"Alex."

"I installed malware on my work computer because a fake recruiter told me I was special."

She didn't say anything for a long time. Then: "The Helios job?"

"There was no job. There was no Natasha. The whole thing was generated by an AI. The profile, the conversations, the interviews — all of it."

Three months later, CipherWall disclosed the breach. Alex's name wasn't in the public filing, but the cybersecurity industry is small. People talked. His job search moved slowly, and every recruiter who reached out triggered a physical reaction — a tightness in his chest, a surge of adrenaline — that he couldn't control with logic.

On a Tuesday afternoon, sitting in a coffee shop with his laptop open and a resume he'd rewritten for the fourth time, a LinkedIn notification appeared.

Recruitment Coordinator at a well-known tech company wants to connect.

The profile was polished. The message was personalized. It referenced a project he'd recently published.

Alex stared at the screen. His finger hovered over the accept button.

He closed the laptop.


How to Protect Yourself

The attack in this story exploits a vulnerability no firewall can patch: the human desire to be recognized. But there are concrete steps that break the chain at multiple points.

Verify recruiters independently. Never rely on a LinkedIn profile to confirm a recruiter's identity. Search for the company's main phone number on their official website — not from a link the recruiter sends — and call to confirm the person works there. This single step defeats the vast majority of fake recruiter operations.

Never run assessments on work devices. Coding assessments, test frameworks, and take-home projects should be treated as untrusted code — always. Use an isolated virtual machine or a dedicated personal device with no access to corporate credentials, VPNs, or sensitive systems. If the company objects to this practice, that's itself a red flag.
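As a concrete sketch of that isolation — assuming Docker is installed and the take-home repo happens to be a Go project (the image name here is an illustrative choice; swap it for your stack) — the suite can be run in a throwaway container with networking disabled:

```shell
#!/bin/sh
# Sketch: run an untrusted take-home test suite in a throwaway container
# with networking disabled, so a hidden credential harvester has no way
# to phone home. Assumes Docker is installed; the image is illustrative.
REPO_DIR="${1:-$PWD}"        # path to the cloned assessment repo
SANDBOX_IMAGE="golang:1.22"  # match the repo's language/toolchain

# --network none blocks all outbound traffic; --rm discards the container
# afterwards; only the repo directory is mounted, so host SSH keys,
# browser profiles, and VPN configs are never visible inside.
SANDBOX_CMD="docker run --rm --network none \
  -e GOCACHE=/tmp/gocache \
  -v $REPO_DIR:/work -w /work \
  $SANDBOX_IMAGE go test ./..."

# Print the command for review; run it yourself once you have confirmed
# that nothing sensitive is mounted.
echo "$SANDBOX_CMD"
```

The details matter less than the two properties the flags enforce: no network, and no mount beyond the repo itself. Even if the test suite hides a harvester, there is nothing to steal and nowhere to send it.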

Audit your digital footprint. Review what's publicly visible on your GitHub, Twitter/X, LinkedIn, and Instagram profiles. Attackers use this information to craft personalized messages that feel authentic. Ask yourself: could someone build a psychological profile of me from my public posts?

Use hardware security keys. Credential stealers capture passwords, session cookies, and API tokens — but the private key on a physical FIDO2 token never leaves the device and cannot be phished or copied remotely. If your employer offers hardware key authentication, use it for every account that supports it, and pair it with short session lifetimes so that any stolen cookies expire quickly.

Watch for synthetic profile indicators. Reverse-image-search profile photos. Check whether the person's connections include verifiable individuals at the claimed company. Look for signs of a thin profile history — accounts created recently with rapid connection growth but little post history.

Report suspicious contacts. Report fake profiles to LinkedIn and flag them with your company's security team. If you've already engaged with a suspicious recruiter, disclose it immediately — early detection dramatically reduces the blast radius.

For companies: assume the endpoint will be compromised. Implement zero-trust architecture where developer workstations don't have standing access to production systems. Segment networks so that a single compromised laptop cannot reach customer data, VPN configurations, or vulnerability databases.


Although this story is based on factual research, the people, AIs, and organizations in it are fictional and not based on any real entities. Research sources: CrowdStrike 2024 Threat Report, Microsoft Security Blog (March 2026), Stanford Internet Observatory, FTC Consumer Alert Data, Unit 42 Global Incident Response Report 2025.

For the real-world research behind this story, read our companion post: "The Recruiter Who Wasn't Real: How AI Is Weaponizing LinkedIn Against Tech Workers"
