“Glitches in the Mirror” – A Rough‑Edge Dive into the Persona‑Discord Nexus


If you ever stare at a terminal long enough, the machines stop being tools and start feeling like a nervous crowd at a circus—every packet a whisper, every handshake a secret handshake. I learned that the hard way on a rainy Tuesday while grinding through the TryHackMe “Web Enumeration” room. My VM, a cheap‑as‑chips Ubuntu box, was spitting out a single, persistent outbound request to an IP I’d never seen before: 34.49.93.177.

At first I thought it was just a mis‑routed CDN node, maybe a stray apt mirror. A quick whois gave me something more interesting:

34.49.93.177  AS396982  Google LLC – Google Cloud
CN = openai‑watchlistdb.withpersona.com

The hostname was a dead giveaway. “watchlistdb” sounded more like a back‑room morgue than a benign API endpoint. A few keystrokes later I was staring at the Certificate Transparency log, which showed the certificate had been rotating every 60‑90 days since November 2023—27 months of uninterrupted service.
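That rotation cadence is easy to eyeball from the log's issuance timestamps. A minimal sketch of the check, using hypothetical `not_before` dates for illustration (these are not the real log entries):

```python
from datetime import date

# Hypothetical not_before dates from a CT log search for
# openai-watchlistdb.withpersona.com (illustrative values only)
issuance_dates = [
    date(2023, 11, 6),
    date(2024, 1, 15),
    date(2024, 3, 20),
    date(2024, 6, 1),
]

# Days between consecutive issuances = effective rotation period
gaps = [
    (later - earlier).days
    for earlier, later in zip(issuance_dates, issuance_dates[1:])
]

print(gaps)                                # [70, 65, 73]
print(all(60 <= g <= 90 for g in gaps))   # True: inside the 60-90 day window
```

A long unbroken run of such gaps is what distinguishes a maintained production service from an abandoned experiment.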

I web-searched the name, and a dark‑corner blog popped up: vmfunc.re. The author, a self‑styled “watcher,” had already dug a deep hole into the same site. Their post (part 2 of a longer series) laid out a 53 MB dump of un‑minified TypeScript source maps that had been accidentally exposed on a FedRAMP‑authorized government endpoint. Inside those maps lived a 269‑check identity‑verification pipeline, full of biometric modules, SAR/STR filing code, and PEP‑facial‑matching logic. The author even listed a “SelfieSuspiciousEntityDetection” routine that flags a face as “suspicious” before you can say “cheese.” The whole thing was written in the same codebase that powers a Discord age‑verification experiment that was quietly pulled two weeks after the leak. All of this was already public, wrapped in plain‑text files and shallow‑scraped Shodan data. [1]


The Breadcrumbs

  • OpenAI‑WatchlistDB – a single‑tenant deployment that, according to Persona’s CEO, only checks name, birth‑date and country against OFAC/SDN sanctions lists. No biometric code, no PEP lookup, stateless, no data persistence. [1]
  • Discord Integration – a short‑lived partnership where Discord handed user selfies and passport scans to Persona for age‑verification. The experiment was shut down after the source‑map scandal, but the code that performed the checks (including “SelfieExperimentalModelDetection”) is still present in the public repo. [1]
  • FedRAMP‑Authorized Gov Cluster – a separate “withpersona‑gov.com” deployment that actually files SARs to FinCEN and STRs to FINTRAC, complete with intelligence‑program tags like “Project LEGION” and “Project SHADOW.” The source‑map leak showed the same codebase, just a different configuration. [1]
  • ONYX Sub‑domain – a fresh GCP instance (onyx.withpersona‑gov.com) appeared twelve days before the blog post. Its name matches ICE’s $4.2 M surveillance tool, but the source contains no ICE references; the CEO claims it’s a Pokémon homage. [1]

Connecting the Dots

When you’re watching traffic, the biggest clue isn’t the packet size or the port, it’s the intention hidden in the headers. The outbound request from my VM was a simple HTTPS POST to openai‑watchlistdb.withpersona.com with a tiny JSON payload: { "name":"John Doe","dob":"1990‑01‑01","country":"US" }. No selfie, no document image. Yet the endpoint sits on the same GCP network that also serves the Discord age‑verification flow. In a micro‑service world, a tenant can be isolated at the DNS level, but the underlying libraries—the same face‑api.js, the same fingerprint‑js—are shipped across all tenants.

If Persona’s claim of a stateless service is truthful, the server discards the payload after the AML match. If it’s false, the request could be logged, aggregated, and later cross‑referenced with the biometric data that lives in the government deployment of the same code (the “face list” that retains images for up to three years). The risk isn’t that Discord intentionally built a surveillance back‑door; the risk is that the same back‑end is capable of it, and any client that can call the API can be implicitly enlisted.

The math quickly becomes uncomfortable:

  • Daily Discord users who performed age‑verification (pre‑shutdown): ~250 k
  • Probability each user’s data was forwarded to the gov tenant (assuming log‑shipping): 5 % – 20 %
  • Chance a user is flagged as “suspicious entity” (SelfieSuspiciousEntityDetection): 1 %
  • Chance that flag triggers a SAR to FinCEN (via gov tenant): 0.1 %
Even with the lowest numbers, you’re looking at dozens of unsolicited SAR filings per year, and close to two hundred at the high end, all tied to a hobbyist’s Discord handle. Multiply that by the immigration‑OS use‑case Persona is courting (FedRAMP, remote‑employee identity proofing), and the same pipeline could start feeding ICE, CBP, or any agency that eventually signs a contract. The code already contains the hooks; the policy is the only thing that keeps the door shut.
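Those rates multiply straight through. A quick sanity check in Python, using only the rough guesses above (none of these numbers are measured):

```python
# Back-of-the-envelope expected SAR volume; every rate here is the
# post's rough estimate, not a measured figure.
daily_verifications = 250_000

def sars_per_year(p_forwarded: float,
                  p_flagged: float = 0.01,    # "suspicious entity" rate
                  p_sar: float = 0.001) -> float:  # SAR-trigger rate
    per_day = daily_verifications * p_forwarded * p_flagged * p_sar
    return per_day * 365

low = sars_per_year(0.05)    # 5 % log-shipping assumption
high = sars_per_year(0.20)   # 20 % log-shipping assumption

print(round(low, 1), round(high, 1))   # 45.6 182.5
```

Roughly 46 to 183 filings a year under these assumptions: small in absolute terms, but each one is a federal suspicious‑activity record attached to someone who only wanted to join a server.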

One can only speculate about the ramifications if the roll-out were global.


The Cyber‑Litmus Test

The story I’m piecing together feels like a scene ripped from a Philip K. Dick novel: a world where the boundary between personal data and state dossier is a thin, jittery line of code. Or a William Gibson street‑level chronicle: a hacker in a cramped dorm room sees an unfamiliar IP in their packet capture, follows the breadcrumbs through a hacker journal that reads like a confessional, and realizes the very platform that gave them a voice (Discord) may be the same one that whispers their name to a federal database.

In that moment the protagonist – you, the reader, the sleepless coder – faces a choice:

  • Ignore the noise, treat the outbound request as a harmless AML check, and continue to grind through CTF challenges.
  • Expose the pipeline, demand transparency from every SaaS that handles biometric data, force companies to separate stateless services from stateful ones, and push for real audit logs.
  • Adapt and start encrypting every selfie locally, avoid services that ask for a photo, and keep your Discord handle behind a VPN that never talks to Google Cloud.

My own decision, after a sleepless night of reading the 53 MB source dump, was to write. Not a whitepaper, not a legal threat—just a rough‑edge blog post that strings together the public evidence, points out the gaps, and asks the uncomfortable question: What if the next “age‑verification” pop‑up you click on is actually a ticket to a watchlist you never knew existed?


Closing Loop

The evidence is there, scattered across certificate logs, Shodan scans, and a leak that felt more like a mistake than a hack. The probability that Discord, in its brief partnership with Persona, sent a subset of user data to a service capable of filing SARs is non‑zero; the possibility that the same code could be repurposed for immigration‑OS or political‑suppression is high enough to merit a second look.

In a world where the cloud is a collective unconscious, each micro‑service is a neuron. Pull one thread, and you might just glimpse the whole mind. The watchers have done the heavy lifting; now it’s up to the rest of us to decide whether we keep staring at the mirror or we smash it.

— a restless mind on the edge of the next TryHackMe box

All technical references, timestamps, and source‑map excerpts are drawn from the publicly released vmfunc.re investigation. [1]