Deanonymous LLM
$deanon // privacy-security protocol
LLM + semantic search + verification · defensive privacy research · open threat model

Measure the risk of large-scale deanonymization.

Deanonymous LLM is a privacy-security project that demonstrates how modern language models can link pseudonymous profiles across platforms—then turns that capability into defensive tooling: audit your exposure, reduce identity leakage, and ship safer online systems.

status: protocol in development · ticker: $deanon · focus: privacy hardening
deanon://console
read-only demo
> initialize pipeline --mode=defense
ingest public text signals (bio, posts, comments)
extract identity-relevant features via LLM
retrieve candidates with semantic embeddings
verify matches to reduce false positives
~output: privacy risk report + mitigation checklist
SAFETY BOUNDARY

This project is built for privacy audits and prevention. No doxxing, no targeting individuals, no operational deanonymization.

cross-platform risk scoring · leakage detection · mitigations library
MISSION // DEFENSE

What is Deanonymous LLM?

THREAT REALITY

Pseudonyms don’t automatically protect identity. With enough public writing, patterns emerge: platform references, unique phrases, timezones, hobbies, and cross-links. LLMs can surface these signals at scale.

We turn the threat into a shield by providing tooling that helps you understand exposure and reduce it—before an adversary does.

PIPELINE
  • 01 // Feature extraction: identity-relevant attributes from raw text.
  • 02 // Candidate search: semantic retrieval using embeddings.
  • 03 // Verification: reasoning layer to reduce false positives.
  • 04 // Output: privacy-risk report + mitigations checklist.
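The four stages above can be sketched end to end. This is a minimal illustration, not the project's actual API: every function, field, and heuristic here (the lowercasing "extractor", the cross-link check, the score formula) is an assumed stand-in for the LLM-driven components described.

```python
from dataclasses import dataclass, field

@dataclass
class AuditReport:
    risk_score: float                      # 0.0 (low exposure) .. 1.0 (high)
    signals: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

def run_audit(texts):
    # 01 // feature extraction: placeholder for an LLM attribute extractor
    features = [t.lower() for t in texts]
    # 02 // candidate search + 03 // verification: here a single toy rule
    # standing in for semantic retrieval and evidence-based filtering
    signals = [f for f in features if "linkedin.com/" in f]
    # 04 // output: risk score plus a mitigation checklist
    score = min(1.0, len(signals) / max(len(texts), 1))
    steps = ["remove cross-platform links"] if signals else []
    return AuditReport(risk_score=score, signals=signals, mitigations=steps)
```

In the real pipeline each stage would be swappable; the report object is the contract between them.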
note: the system is designed for audits, red-teaming your own footprint, and improving privacy defaults.
MODULES // SIGNALS

Capabilities

Leakage Detector

Find identity hints in bios, posts, and comment histories: cross-links, locations, employers, recurring phrases, and unique timelines.
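As a hedged sketch of what "finding identity hints" means in practice: a rule-based first pass over public text. The real detector is described as LLM-driven; these regex patterns and category names are illustrative stand-ins only.

```python
import re

# Illustrative signal categories; a production detector would go far beyond regex.
PATTERNS = {
    "cross_link": re.compile(r"(?:twitter|github|linkedin)\.com/\S+", re.I),
    "email":      re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "location":   re.compile(r"\bbased in ([A-Z][a-z]+)", re.I),
}

def find_leaks(text):
    """Return {signal_type: [matches]} for every pattern that fires."""
    hits = {}
    for name, rx in PATTERNS.items():
        found = rx.findall(text)
        if found:
            hits[name] = found
    return hits
```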

Semantic Match Radar

Map writing style and topic fingerprints with embeddings to estimate cross-platform linkability—without exposing raw data.
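A toy version of linkability scoring, under stated assumptions: `embed` stands in for any sentence-embedding model (here a bag-of-words counter so the example is self-contained), and the 0.8 threshold is arbitrary. Only similarity scores leave the function, never the raw text, matching the "without exposing raw data" constraint.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in embedding: token counts instead of a neural encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def linkability(sample_a, sample_b, threshold=0.8):
    """Estimate whether two writing samples are plausibly the same author."""
    score = cosine(embed(sample_a), embed(sample_b))
    return score, score >= threshold
```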

False-Positive Guard

A verification step forces evidence-based reasoning, making risk estimates more robust and less trigger-happy.
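One conservative gate of this kind, sketched with assumed evidence types and weights: a candidate match is accepted only when multiple independent signals agree and their combined weight clears a threshold, so a single stylometric coincidence never flags on its own.

```python
# Evidence categories and weights are assumptions for this sketch.
WEIGHTS = {"stylometry": 0.3, "cross_link": 0.5, "timezone": 0.2}

def verify(evidence, accept_at=0.7, min_signals=2):
    """evidence: set of evidence types observed for a candidate match.

    Requires both breadth (min_signals) and weight (accept_at) before
    accepting, which biases the system toward false negatives.
    """
    score = sum(WEIGHTS.get(e, 0.0) for e in evidence)
    return len(evidence) >= min_signals and score >= accept_at
```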

Mitigation Playbooks

Actionable hardening guides: compartmentalization, handle hygiene, cross-link removal, posting cadence, and metadata reduction.

Privacy Score

A simple scoring model you can track over time. Treat privacy like security: measure, fix, re-test.
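One possible shape for such a score (the categories and penalty values below are illustrative, not the project's model): start from 100 and subtract a fixed penalty per leak category detected, so re-running the audit after fixes shows measurable progress.

```python
# Illustrative penalties per leak category; unknown categories cost 5.
PENALTIES = {"cross_link": 30, "real_name": 25, "location": 20,
             "employer": 15, "unique_phrase": 10}

def privacy_score(signals):
    """100 = no detected leakage; each distinct category deducts its penalty."""
    return max(0, 100 - sum(PENALTIES.get(s, 5) for s in set(signals)))
```

Deduplicating with `set` means ten cross-links cost the same as one: the score tracks *categories* of exposure, which is what the mitigation checklist acts on.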

Research Benchmarks

Dataset + eval harness for defensive privacy research, encouraging better threat models and safer defaults.

BOUNDARIES // RULES

Safety & Ethics

Defense-first design

This landing page describes a system inspired by academic privacy research. The product direction is to protect users, not to expose them.

  • No public “search-by-username” deanonymization.
  • No doxxing, no targeting, no operational re-identification.
  • Audit mode focuses on your own data / consented datasets.

Responsible disclosure loop

When the system finds risky patterns, it produces mitigations and encourages safer defaults: less cross-linking, fewer unique identifiers, and better privacy ergonomics.

policy:
Only publish aggregated metrics and general findings. Individual identities are not a product.
ECONOMY // FUEL

Token Utility: $deanon

Why a token?

$deanon is proposed as a coordination layer for privacy research, audits, and incentives—funding defensive development rather than exploitation.

Access & Rate-Limits

Hold/use tokens to access premium risk reports, scheduled scans, and advanced mitigation libraries.

Bounties

Reward researchers for discovering privacy leaks and proposing patches, docs, and safer defaults.

Governance

Community votes on datasets, evaluation methods, and safety constraints.

Public Metrics

Sponsor transparency: aggregate privacy-risk dashboards, no personal identities.

TOKEN SNAPSHOT
symbol: $deanon
role: utility + governance
focus: privacy defense
Join Waitlist →
PHASES // EXEC

Roadmap

PHASE 01
Leakage Audit MVP
  • + text signal ingestion
  • + risk scoring v1
  • + mitigation checklist v1
  • + privacy playbooks
PHASE 02
Evaluation Harness
  • + benchmark datasets (consented)
  • + false-positive guardrails
  • + report templates
  • + community bounties
PHASE 03
Protocol Layer
  • + $deanon utility modules
  • + governance proposals
  • + public aggregate metrics
  • + auditor integrations
QUESTIONS // ANSWERS

FAQ

Is this a doxxing tool?

No. The direction here is defensive: assessing privacy exposure and providing mitigations. We don’t build features intended to identify or target individuals.

What does the system output?

A privacy risk score, highlighted leakage signals, and a step-by-step mitigation checklist to reduce linkability across platforms.

How is false-positive risk handled?

Through a verification layer that requires evidence-based reasoning and conservative thresholds, plus evaluation on consented datasets.

Where does $deanon fit?

As a utility/governance layer to fund privacy research, reward mitigations, and coordinate audits—without exposing personal identities.

JOIN // SIGNAL

Access the Console

Build privacy like you build security.

Subscribe for launch updates, research drops, and early access to the defensive risk scanner. No spam. No tracking pixels. Minimal metadata.

by requesting access you agree to use the system for defensive privacy purposes only.