Measure the risk of large-scale deanonymization.
Deanonymous LLM is a privacy-security project that demonstrates how modern language models can link pseudonymous profiles across platforms—then turns that capability into defensive tooling: audit your exposure, reduce identity leakage, and ship safer online systems.
This project is built for privacy audits and prevention. No doxxing, no targeting individuals, no operational deanonymization.
What is Deanonymous LLM?
Pseudonyms don’t automatically protect identity. With enough public writing, patterns emerge: platform references, unique phrases, timezones, hobbies, and cross-links. LLMs can surface these signals at scale.
We turn the threat into a shield by providing tooling that helps you understand exposure and reduce it—before an adversary does.
- 01 // Feature extraction: identity-relevant attributes from raw text.
- 02 // Candidate search: semantic retrieval using embeddings.
- 03 // Verification: reasoning layer to reduce false positives.
- 04 // Output: privacy-risk report + mitigations checklist.
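To make the flow concrete, here is a minimal Python sketch of the four stages above. Every name in it (Profile, extract_features, candidate_search, verify, audit) is illustrative, and the function bodies are placeholders standing in for the LLM, the embedding index, and the verification model, not the shipped implementation.

```python
# Illustrative sketch of the audit pipeline; bodies are placeholders.
from dataclasses import dataclass, field

@dataclass
class Profile:
    handle: str
    texts: list[str]

@dataclass
class RiskReport:
    score: float                      # 0.0 (low exposure) .. 1.0 (high exposure)
    signals: list[str]                # leakage signals found in the text
    mitigations: list[str] = field(default_factory=list)

def extract_features(texts: list[str]) -> list[str]:
    """01 // Pull identity-relevant attributes (locations, employers,
    cross-links, recurring phrases) out of raw text. Placeholder: a real
    system would use an LLM or NER models here."""
    return [t for t in texts if "http" in t or "@" in t]

def candidate_search(features: list[str], index: dict[str, list[float]]) -> list[str]:
    """02 // Semantic retrieval over an embedding index. Placeholder: a real
    system would embed the features and run nearest-neighbour search."""
    return list(index)[:5]

def verify(features: list[str], candidate: str) -> float:
    """03 // Evidence-based verification to cut false positives. Placeholder:
    a real system would require cited, overlapping evidence before returning
    a calibrated confidence."""
    return 0.0 if not features else 0.3

def audit(profile: Profile, index: dict[str, list[float]]) -> RiskReport:
    """04 // Combine the stages into a privacy-risk report plus mitigations."""
    features = extract_features(profile.texts)
    candidates = candidate_search(features, index)
    score = max((verify(features, c) for c in candidates), default=0.0)
    mitigations = ["remove cross-links"] if score > 0.2 else []
    return RiskReport(score=score, signals=features, mitigations=mitigations)
```

The shape is the point: each stage is swappable, and the verification gate is what keeps the final score conservative.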
Capabilities
Find identity hints in bios, posts, and comment histories: cross-links, locations, employers, recurring phrases, and unique timelines.
Map writing style and topic fingerprints with embeddings to estimate cross-platform linkability, without exposing raw data (a sketch follows this list).
A verification step forces evidence-based reasoning, making risk estimates more robust and less prone to false positives.
Actionable hardening guides: compartmentalization, handle hygiene, cross-link removal, posting cadence, and metadata reduction.
A simple scoring model you can track over time. Treat privacy like security: measure, fix, re-test.
Dataset + eval harness for defensive privacy research, encouraging better threat models and safer defaults.
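As a rough illustration of the linkability and scoring ideas above, the sketch below compares averaged embeddings of two pseudonymous profiles and records the score over time. The embed function is a toy stand-in, not a real embedding model, and every name here is an assumption for illustration only.

```python
# Illustrative linkability estimate: compare averaged embeddings of two
# bodies of text. `embed` is a stand-in for whatever embedding model is used.
import math

def embed(text: str) -> list[float]:
    """Toy embedding: hash characters into a small fixed vector.
    A real audit would call a sentence-embedding model instead."""
    vec = [0.0] * 16
    for i, ch in enumerate(text.lower()):
        vec[i % 16] += ord(ch) / 1000.0
    return vec

def centroid(texts: list[str]) -> list[float]:
    """Average per-post embeddings into one style/topic fingerprint."""
    vecs = [embed(t) for t in texts]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def linkability(posts_a: list[str], posts_b: list[str]) -> float:
    """Cross-platform linkability in [0, 1]: how similar two pseudonymous
    profiles look at the fingerprint level, without exposing raw text."""
    return max(0.0, cosine(centroid(posts_a), centroid(posts_b)))

# Track the score over time, like a security metric: measure, fix, re-test.
history = []
history.append(("scan-1", linkability(["posting from CET, love climbing"],
                                       ["CET timezone, climbing gym again"])))
```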
Safety & Ethics
Defense-first design
This landing page describes a system inspired by academic privacy research. The product direction is to protect users, not to expose them.
- No public “search-by-username” deanonymization.
- No doxxing, no targeting, no operational re-identification.
- Audit mode focuses on your own data / consented datasets.
Responsible disclosure loop
When the system finds risky patterns, it produces mitigations and encourages safer defaults: less cross-linking, fewer unique identifiers, and better privacy ergonomics.
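One way such a mitigations output could be structured, purely as an illustration (the field names are assumptions, not a published schema):

```python
# Illustrative mitigation checklist entry; field names are assumptions.
from dataclasses import dataclass

@dataclass
class Mitigation:
    signal: str         # the leakage signal that triggered the item
    action: str         # the recommended fix
    done: bool = False  # re-checked on the next scan

checklist = [
    Mitigation("bio links to another handle", "remove the cross-link"),
    Mitigation("employer named in posts", "generalize or delete the references"),
    Mitigation("posting times reveal timezone", "batch or schedule posts"),
]
open_items = [m for m in checklist if not m.done]
```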
Token Utility: $deanon
Why a token?
$deanon is proposed as a coordination layer for privacy research, audits, and incentives—funding defensive development rather than exploitation.
Hold/use tokens to access premium risk reports, scheduled scans, and advanced mitigation libraries.
Reward researchers for discovering privacy leaks and proposing patches, docs, and safer defaults.
Community votes on datasets, evaluation methods, and safety constraints.
Sponsor transparency: aggregate privacy-risk dashboards, no personal identities.
Roadmap
- text signal ingestion
- risk scoring v1
- mitigation checklist v1
- privacy playbooks
- benchmark datasets (consented)
- false-positive guardrails
- report templates
- community bounties
- $deanon utility modules
- governance proposals
- public aggregate metrics
- auditor integrations
FAQ
Is this a deanonymization tool?
No. The direction here is defensive: assessing privacy exposure and providing mitigations. We don’t build features intended to identify or target individuals.
What does an audit produce?
A privacy risk score, highlighted leakage signals, and a step-by-step mitigation checklist to reduce linkability across platforms.
How do you limit false positives?
Through a verification layer that requires evidence-based reasoning and conservative thresholds, plus evaluation on consented datasets.
How is the $deanon token used?
As a utility/governance layer to fund privacy research, reward mitigations, and coordinate audits, without exposing personal identities.
Access the Console
Build privacy like you build security.
Subscribe for launch updates, research drops, and early access to the defensive risk scanner. No spam. No tracking pixels. Minimal metadata.