Technical Whitepaper

The POY Protocol: Zero-Knowledge Human Verification for the Post-AI Internet

Author Mark Gabrielli, Founder & CEO, WETYR Corp
Published April 2026
Reading Time 25 minutes


The internet was built without a trust layer. Today, 64% of web traffic is non-human, deepfake fraud exceeds $25 billion annually, and 1.4 billion people worldwide lack any form of government-issued identification. Existing verification systems - from CAPTCHA to biometric databases - either fail against modern AI or require the collection of sensitive personal data that becomes a breach target. This whitepaper introduces the POY Protocol, a zero-knowledge human verification system that proves a user is a real, unique human being without collecting, transmitting, or storing any personal data. Built on hardware-backed secure enclaves, on-device biometric liveness detection, and one-way cryptographic hashing, POY represents a fundamental shift from "prove your identity" to "prove your humanity."

1. The Crisis of Digital Identity

The internet has an identity problem - and it is getting worse by the month. What began as an open network designed for information sharing now underpins everything from financial transactions to democratic elections. Yet the fundamental architecture of the web contains no native mechanism for distinguishing a human user from a machine. The consequences of this architectural gap are accelerating.

  - Non-human web traffic: 64%
  - Deepfake increase since 2024: 500%
  - Annual fraud losses: $25B
  - People without government ID: 1.4B

According to Imperva's 2025 Bad Bot Report, non-human traffic now constitutes 64% of all web activity[1]. This is not limited to web scraping or ad fraud. Sophisticated bot networks create fake social media accounts, manipulate public discourse, generate synthetic product reviews, and execute credential-stuffing attacks at scale. The line between human and machine behavior online has effectively dissolved.

Deepfakes have compounded the problem dramatically. Since 2024, the volume of deepfake content has increased by over 500%, fueled by increasingly accessible generative AI tools[2]. Financial institutions reported $25 billion in deepfake-related fraud losses in 2025 alone, with attackers using synthetic video and audio to bypass voice verification, impersonate executives, and authorize fraudulent transactions[3].

Meanwhile, 1.4 billion people globally - predominantly in developing nations across sub-Saharan Africa and South Asia - lack any form of government-issued identification[4]. These individuals are effectively locked out of any verification system that relies on document-based identity proofing. The World Bank has identified this "identity gap" as one of the primary barriers to financial inclusion and digital participation.

CAPTCHA Is Dead

For two decades, CAPTCHA served as the internet's primary human verification mechanism. That era is over. Research from ETH Zurich demonstrated in 2024 that large language models and computer vision systems now solve CAPTCHA challenges with near-100% accuracy, matching or exceeding human performance[5]. Google's own reCAPTCHA v3 has shifted entirely to behavioral analysis - effectively acknowledging that the puzzle-solving approach is obsolete. Yet behavioral analysis introduces its own problems: it requires extensive tracking, creates privacy concerns, and remains vulnerable to sophisticated bot frameworks that simulate human browsing patterns.

The Dead Internet Problem

The convergence of these trends has given rise to what researchers and commentators call the "Dead Internet Theory" - the increasingly credible hypothesis that the majority of online content, interactions, and engagement is generated by machines rather than humans[6]. While the theory in its extreme form remains debated, the underlying data is not. When the majority of web traffic is non-human, when AI can impersonate any individual with high fidelity, and when existing verification systems are systematically failing, the internet's value as a medium for authentic human communication is under existential threat.

The need is clear: a verification mechanism that can reliably distinguish humans from machines, that works for all 8 billion people regardless of documentation status, and that does not require surrendering personal data to yet another centralized database. That is the problem the POY Protocol was designed to solve.

2. Existing Approaches and Their Limitations

Before introducing the POY Protocol, it is important to examine why existing verification approaches have failed or fallen short. Each represents a genuine attempt to solve the human verification problem, and each reveals specific architectural constraints that informed our design.

Approach | Method | Key Weakness | Privacy Risk
CAPTCHA / reCAPTCHA | Visual puzzles, behavioral tracking | AI solves with near-100% accuracy | Medium - behavioral tracking
Government ID (Persona, Onfido) | Document scan + selfie match | Excludes 1.4B people, breach target | Critical - PII storage
Iris Scanning (World ID) | Hardware orb, iris biometric | Custom hardware required, centralized DB | High - biometric database
Social Graph (BrightID) | Peer verification network | Sybil-vulnerable, limited adoption | Low - but fragile
Knowledge-Based Auth | Security questions, personal trivia | AI answers from breached data | Medium - data correlation

CAPTCHA and reCAPTCHA

CAPTCHA was a brilliant solution for its era - a test that exploited the gap between human and machine perception. That gap has closed. Modern AI vision models solve distorted text, image selection grids, and audio challenges with near-perfect accuracy. Google responded by making reCAPTCHA invisible, relying on behavioral signals and cookies rather than explicit challenges. But this approach trades one problem for another: it requires extensive user tracking, fails for privacy-conscious users who block cookies, and can be spoofed by bot frameworks that replay human browsing sessions.

Government ID Verification

Services like Persona, Onfido, and Jumio require users to photograph government-issued identification and take a live selfie. This approach has three fundamental problems. First, it excludes the 1.4 billion people without government IDs. Second, it creates massive centralized databases of personal documents - precisely the kind of honeypot that attackers target. Onfido alone processes over 14 million verifications annually, each containing a government ID image and biometric selfie. Third, deepfake technology is increasingly capable of generating synthetic selfies that pass liveness checks, undermining the core assumption of the approach[7].

Iris Scanning and World ID

Worldcoin's approach of using custom hardware orbs to capture high-resolution iris scans represents an ambitious attempt at universal human verification. However, it faces significant obstacles. The requirement for custom hardware limits deployment to physical locations where orbs are available. The system creates a centralized biometric database - even if iris templates are stored rather than raw images, the centralization creates a single point of failure. Multiple countries including Kenya, France, and Germany have suspended or investigated Worldcoin operations over privacy concerns[8]. And the fundamental architectural decision to collect and store biometric data - regardless of how it is encrypted - creates an inherent tension with the principle of data minimization.

Social Graph Verification

BrightID and similar systems attempt to verify uniqueness through social connections - real humans vouch for other real humans in a web of trust. While conceptually elegant, social graph approaches are inherently vulnerable to Sybil attacks, where a coordinated group creates a cluster of fake accounts that vouch for each other. Adoption remains limited, and the verification is only as strong as the weakest link in the social chain. The approach also struggles with bootstrapping: new users need existing verified users to vouch for them, creating a chicken-and-egg problem in new communities.

Knowledge-Based Verification

Security questions and knowledge-based authentication were already weakening before the AI era. With billions of records exposed through data breaches - names, addresses, mother's maiden names, first pets, high school mascots - the "secret" knowledge required by these systems is anything but secret. AI systems can now correlate breached datasets to answer knowledge-based questions with high accuracy, rendering this approach effectively obsolete for any meaningful security application.

The common thread across all these failures is a fundamental design assumption: that verifying a person's identity is the same as verifying their humanity. It is not. The internet does not need to know who you are. It needs to know what you are.

3. The POY Protocol - A New Approach

The POY Protocol begins with a simple premise: the internet needs proof of humanity, not proof of identity. Every design decision in the protocol flows from this distinction.

Design Principles

Principle 1: Prove Humanity, Not Identity

POY verifies that a user is a living, unique human being. It never asks who that human is. No names, no documents, no addresses, no phone numbers. The only assertion is: "This is a real person, and this person has not already been verified under a different credential."

Principle 2: Zero Data Collection

POY does not collect, transmit, or store biometric data, personal information, or behavioral profiles. All biometric processing happens on the user's device, inside a hardware-secured environment. Only cryptographic proofs leave the device - never raw data.

Principle 3: Hardware-Based Security

Software-only verification can be spoofed by software. POY leverages the secure hardware already present in modern smartphones - Apple's Secure Enclave, Google's Titan M2 chip, Samsung's Knox Vault - to perform biometric liveness checks in a tamper-resistant environment that cannot be intercepted or modified by applications or even the operating system.

On-Device Biometric Liveness Detection

The core verification mechanism in the POY Protocol is on-device biometric liveness detection. When a user initiates a POY verification, the following process occurs entirely within the device's secure enclave:

  1. 3D depth mapping confirms that a physical, three-dimensional face is present, rejecting flat images and screens.
  2. Infrared analysis confirms living tissue rather than non-living materials such as paper, plastic, or silicone.
  3. Micro-movement analysis detects the involuntary biological motion of a living face.
  4. A randomized challenge defeats pre-recorded or replayed video.

Cryptographic Pipeline

Once liveness is confirmed, the biometric data never leaves the secure enclave. Instead, the following cryptographic pipeline produces the verification proof:

  1. SHA-256 One-Way Hashing - The biometric template is passed through SHA-256, producing a fixed-length hash that cannot be reversed to reconstruct the original biometric data. Even if the hash were intercepted, it reveals nothing about the user's appearance.
  2. ECDSA P-256 Key Generation - An asymmetric key pair is generated within the secure enclave using the ECDSA P-256 curve. The private key is bound to the secure hardware and cannot be exported, copied, or accessed by any software - including the POY application itself.
  3. HD Key Derivation - For each platform or service where the user needs to prove their humanity, a unique child key is derived using hierarchical deterministic (HD) key derivation. This ensures that a user's verification on Platform A cannot be linked to their verification on Platform B - eliminating cross-platform tracking while maintaining per-platform uniqueness.
  4. Signed Attestation - The verification result (human/not-human, unique/duplicate) is signed with the platform-specific key and transmitted to the requesting service. The service receives a cryptographic proof of humanity - nothing more.
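The four steps above can be sketched in a few lines of Python. This is an illustrative model only: HMAC-SHA256 stands in for both the HD child-key derivation and the enclave-bound ECDSA P-256 signature, and every function name here is hypothetical; in the real protocol the private key never exists outside the secure hardware.

```python
import hashlib
import hmac

def hash_template(biometric_template: bytes) -> bytes:
    """Step 1: one-way SHA-256 hash; cannot be reversed to
    reconstruct the original biometric data."""
    return hashlib.sha256(biometric_template).digest()

def derive_platform_key(root_secret: bytes, platform_id: str) -> bytes:
    """Step 3 (HD-derivation sketch): each platform gets its own
    child key, so verifications on different platforms are unlinkable."""
    return hmac.new(root_secret, platform_id.encode(), hashlib.sha256).digest()

def sign_attestation(platform_key: bytes, verdict: bytes) -> bytes:
    """Step 4 stand-in: HMAC in place of the enclave's ECDSA P-256
    signature over the verification result."""
    return hmac.new(platform_key, verdict, hashlib.sha256).digest()

# Two platforms derive different, unlinkable keys from one root secret.
root = hash_template(b"example-template-bytes")
key_a = derive_platform_key(root, "platform-a.example")
key_b = derive_platform_key(root, "platform-b.example")
assert key_a != key_b  # Platform A and B verifications cannot be correlated
```

Because HMAC-SHA256 behaves as a pseudorandom function, child keys for different platform identifiers are computationally unlinkable, which mirrors the cross-platform tracking protection described in step 3.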

Permanent Credential vs. Per-Interaction Challenge

The POY Protocol supports two verification modes. A permanent credential is established during enrollment and can be used for ongoing authentication - similar to a digital passport stamp that says "verified human" without revealing the passport itself. A per-interaction challenge performs a fresh liveness check for each verification request, suitable for high-security contexts like financial transactions or voting. The per-interaction mode ensures that even a stolen device cannot be used for verification without the enrolled user being physically present.

4. Technical Architecture

Enrollment Flow

+------------------+     +-------------------+     +------------------+
|   User Device    |     |  Secure Enclave   |     |   POY Backend    |
+------------------+     +-------------------+     +------------------+
        |                         |                         |
        |  1. Launch POY SDK      |                         |
        |------------------------>|                         |
        |                         |                         |
        |  2. Liveness Challenge  |                         |
        |  (3D depth + IR + micro)|                         |
        |------------------------>|                         |
        |                         |                         |
        |          3. Biometric   |                         |
        |          hash (SHA-256) |                         |
        |                         |---+                     |
        |                         |   | 4. Generate         |
        |                         |   |    ECDSA P-256      |
        |                         |   |    key pair         |
        |                         |<--+                     |
        |                         |                         |
        |                         | 5. Sign attestation     |
        |                         |------------------------>|
        |                         |                         |
        |                         |    6. Store public key  |
        |                         |    (NO biometric data)  |
        |                         |                         |
        |  7. Credential issued   |                         |
        |<------------------------|                         |
        |                         |                         |

During enrollment, the user performs a single liveness check. The secure enclave generates the root key pair and produces a signed attestation containing only the public key and a uniqueness proof. The POY backend stores the public key and the uniqueness hash - it never receives, processes, or stores any biometric data. The private key remains permanently bound to the device's secure hardware.
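The backend's side of enrollment (steps 5-6) can be modeled as follows. This is a sketch under stated assumptions: the record layout and the `enroll` function are hypothetical, and the uniqueness hash is assumed to arrive already computed on-device, so no biometric material ever reaches the server.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class EnrollmentRecord:
    """Everything the POY backend persists for one enrollment.
    There are deliberately no fields for names, documents, or images."""
    public_key: bytes       # ECDSA P-256 public key exported by the enclave
    uniqueness_hash: bytes  # one-way hash used only for duplicate detection

def enroll(public_key: bytes, uniqueness_hash: bytes,
           registry: dict) -> EnrollmentRecord:
    """Store the public key and uniqueness hash, rejecting a hash that
    is already enrolled (one human, one credential)."""
    if uniqueness_hash in registry:
        raise ValueError("already verified under a different credential")
    record = EnrollmentRecord(public_key, uniqueness_hash)
    registry[uniqueness_hash] = record
    return record

registry = {}
enroll(b"device-public-key", hashlib.sha256(b"template").digest(), registry)
```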

Verification Flow

When a third-party service requests verification, the following exchange occurs:

  1. The service sends a verification request to the POY API with a unique challenge nonce.
  2. The POY SDK on the user's device prompts a liveness check (per-interaction mode) or validates the existing credential (permanent mode).
  3. The secure enclave derives a platform-specific child key using HD derivation and signs the challenge nonce.
  4. The signed response is returned to the service, which verifies the signature against the POY public key registry.
  5. The service receives a boolean result - human or not human - along with a confidence score and a uniqueness attestation.
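The exchange above reduces to a challenge-response sketch. Note that HMAC-SHA256 stands in for the enclave's ECDSA signature here (the real verifier checks a signature against the POY public key registry, not a shared secret), and the function names are illustrative assumptions.

```python
import hashlib
import hmac
import os

def sign_challenge(platform_key: bytes, nonce: bytes) -> bytes:
    """Device side (steps 2-3): sign the service's nonce with the
    platform-specific child key."""
    return hmac.new(platform_key, nonce, hashlib.sha256).digest()

def verify_challenge(platform_key: bytes, nonce: bytes,
                     signature: bytes) -> bool:
    """Service side (step 4): constant-time check of the response."""
    expected = hmac.new(platform_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

nonce = os.urandom(32)                           # step 1: unique challenge
signature = sign_challenge(b"child-key", nonce)  # steps 2-3 on device
assert verify_challenge(b"child-key", nonce, signature)      # valid response
assert not verify_challenge(b"other-key", nonce, signature)  # wrong key fails
```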

Multi-Device Support

The POY Protocol supports three mechanisms for cross-device verification:

API Architecture

The POY API exposes 35 endpoints organized across five functional domains:

All endpoints respond in under 50ms at the 99th percentile, and verification requests complete in under 200ms end-to-end, including the on-device liveness check. The API uses TLS 1.3 exclusively, with certificate pinning enforced in all SDKs.

What We Store (and What We Do Not)

Stored by POY

Never Collected by POY

5. Privacy by Architecture

The term "privacy by design" has become a compliance checkbox - a set of best practices layered on top of systems that fundamentally collect and process personal data. POY takes a different approach that we call privacy by architecture. The distinction is critical.

Privacy by design says: "We collect your data, but we promise to handle it responsibly." Privacy by architecture says: "We designed the system so that your data never enters our possession in the first place." You cannot breach data you never possessed. You cannot be compelled to produce records that do not exist. You cannot monetize information you never collected.

Zero-Knowledge Proofs

The POY Protocol employs zero-knowledge proof principles at every layer. When a service asks "Is this user a real human?", POY provides a cryptographic proof that answers the question without revealing any information beyond the answer itself. The service learns that the user is human. It does not learn the user's face geometry, device fingerprint, or any other identifying characteristic. The proof is mathematically verifiable but informationally opaque.
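As a concrete illustration of "mathematically verifiable but informationally opaque," consider a toy Schnorr identification protocol: the prover demonstrates knowledge of a secret exponent x in y = g^x mod p without ever revealing x. The tiny parameters below are for illustration only and are nowhere near secure; this is a textbook example, not POY's actual proof system, which the whitepaper does not specify at this level of detail.

```python
import random

# Toy Schnorr identification: prove knowledge of x in y = g^x mod p
# without revealing x. Parameters are deliberately tiny and insecure;
# real systems use 256-bit elliptic-curve groups.
p, g = 23, 5   # g = 5 generates the full multiplicative group mod 23
q = p - 1      # order of g

def prove(x: int, c: int, r: int) -> int:
    """Prover's response to challenge c, using commitment randomness r."""
    return (r + c * x) % q

def verify(y: int, t: int, c: int, s: int) -> bool:
    """Verifier checks g^s == t * y^c (mod p) and learns nothing about x."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = 7                       # prover's secret
y = pow(g, x, p)            # public value the verifier already knows
r = random.randrange(1, q)  # fresh randomness for each proof
t = pow(g, r, p)            # commitment, sent before the challenge
c = random.randrange(1, q)  # verifier's random challenge
s = prove(x, c, r)
assert verify(y, t, c, s)                # honest prover always passes
assert not verify(y, t, c, (s + 1) % q)  # a forged response fails
```

The verifier's check succeeds because g^s = g^(r + cx) = t * y^c, yet the transcript (t, c, s) can be simulated without knowing x, which is what makes the proof zero-knowledge.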

Regulatory Compliance Without Effort

The architectural decision to collect zero personal data creates an unusual compliance position: most data protection regulations simply do not apply, because there is no personal data to protect.

Regulation | Requirement | POY Status
GDPR (EU) | Right to access, deletion, portability of personal data | No personal data collected - not applicable
BIPA (Illinois) | Consent for biometric data collection and storage | No biometric data leaves device - not applicable
CCPA (California) | Right to know what personal info is collected and sold | No personal info collected or sold - not applicable
HIPAA (US) | Protection of health-related biometric data | No health data processed server-side - not applicable

"The most secure database is the one that does not exist. POY Verify proves that meaningful verification is possible without meaningful data collection."

This is not a legal workaround - it is an engineering decision. By designing the system to perform all sensitive processing on-device and transmit only cryptographic proofs, POY eliminates entire categories of regulatory, legal, and reputational risk. Organizations that integrate POY verification do not become data processors or data controllers for biometric information, because no biometric information is ever transmitted to them.

6. Threat Model and Defenses

No security system is meaningful without a rigorous threat model. The POY Protocol is designed to defend against the following attack categories, each of which has been validated through independent penetration testing and red team exercises.

Presentation Attacks (Photos, Video Replay, 3D Masks)

Attack Vector

Attacker presents a photograph, pre-recorded video, or 3D-printed mask of the target user to the device camera to spoof the liveness check.

POY Defense

3D depth mapping rejects flat images. IR analysis detects non-living materials (paper, plastic, silicone). Micro-movement analysis detects absence of involuntary biological motion. Randomized challenges defeat pre-recorded video.

Injection Attacks (Modified Camera Feed)

Attack Vector

Attacker intercepts or replaces the camera feed at the software level, injecting a synthetic video stream before it reaches the liveness detection algorithm.

POY Defense

Liveness detection runs inside the secure enclave, which has direct hardware access to camera sensors. Software cannot intercept or modify the data path between the camera hardware and the secure enclave. Device attestation confirms the integrity of the hardware chain.

Deepfake Attacks (AI-Generated Faces)

Attack Vector

Attacker uses generative AI to create a realistic synthetic face that matches the target user, displayed on a screen or projected via a virtual camera.

POY Defense

Deepfakes are 2D renderings - they lack the 3D depth structure, IR reflectance properties, and involuntary micro-movements of a living face. The multi-modal approach (depth + IR + movement + challenge) creates a defense that no current generative model can satisfy simultaneously.

Relay Attacks (Remote Forwarding)

Attack Vector

Attacker forwards the verification challenge to a real human accomplice who performs the liveness check on a different device, relaying the result back.

POY Defense

The attestation is bound to the specific secure enclave hardware. A verification performed on Device A cannot produce a valid signature for Device B. Time-bound challenges with sub-second expiration windows prevent relay delays. Device attestation includes hardware fingerprinting that detects environment mismatch.

The key insight across all attack categories is that hardware-based liveness detection operating inside a secure enclave creates a trust boundary that software-level attacks cannot cross. An attacker would need to physically compromise the device's secure hardware - a capability that currently requires nation-state-level resources and physical access to the specific device.
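The time-bound challenge defense against relays reduces to a simple freshness check. The 0.8-second window below is an assumed value (the text only says "sub-second"), and the function name is illustrative.

```python
import time

CHALLENGE_TTL_SECONDS = 0.8  # assumed sub-second expiration window

def is_challenge_fresh(issued_at: float, now=None) -> bool:
    """A relayed response necessarily arrives late; rejecting anything
    outside the window (or timestamped in the future) defeats relay
    delays, complementing the hardware binding of the attestation."""
    if now is None:
        now = time.monotonic()
    return 0.0 <= now - issued_at <= CHALLENGE_TTL_SECONDS

assert is_challenge_fresh(100.0, now=100.5)      # direct response: accepted
assert not is_challenge_fresh(100.0, now=101.2)  # relay delay: rejected
```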

7. Market Opportunity

The identity verification market is undergoing a fundamental transformation driven by regulatory pressure, rising fraud costs, and the inadequacy of existing solutions in the face of AI-powered attacks.

  - ID verification market by 2030: $17.8B
  - Average data breach cost: $4.45M
  - Market CAGR 2025-2030: 16.7%

The global identity verification market is projected to reach $17.8 billion by 2030, growing at a compound annual growth rate of 16.7%[9]. This growth is driven by several converging forces: rising fraud costs, mounting regulatory pressure, and the inadequacy of existing solutions against AI-powered attacks.

Pricing Model

POY Verify operates on a usage-based pricing model designed for accessibility across organization sizes:

Tier | Volume | Per Verification | Includes
Starter | Up to 10,000/month | $0.05 | API access, dashboard, email support
Growth | 10,001 - 100,000/month | $0.03 | All Starter + webhooks, priority support
Enterprise | 100,001+/month | Custom | All Growth + SLA, dedicated CSM, custom SDK

Compared to traditional identity verification services that charge $1-5 per verification and require PII storage infrastructure, POY's pricing represents both a cost reduction and a liability reduction for integrating organizations.
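Monthly cost under the tiers above is straightforward to compute. This sketch assumes the tier's per-verification rate applies to the entire monthly volume (the table does not say whether tiering is marginal), and Enterprise pricing is custom, so it is excluded.

```python
def monthly_cost_usd(verifications: int) -> float:
    """Monthly cost in USD; rates kept in integer cents to avoid
    floating-point rounding. Assumes flat (non-marginal) tiering."""
    if verifications <= 10_000:
        cents = verifications * 5   # Starter: $0.05 per verification
    elif verifications <= 100_000:
        cents = verifications * 3   # Growth: $0.03 per verification
    else:
        raise ValueError("Enterprise volume: custom pricing")
    return cents / 100

assert monthly_cost_usd(10_000) == 500.0    # Starter tier at full volume
assert monthly_cost_usd(50_000) == 1_500.0  # Growth tier
```

Even at the Growth ceiling of 100,000 verifications ($3,000/month under this assumption), the per-verification cost remains far below the $1-5 charged by traditional document-based services.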

8. Conclusion

The internet was built without a trust layer, and every attempt to retrofit one has required a Faustian bargain: surrender your personal data to prove you are human. CAPTCHAs failed against AI. Government ID verification excludes billions and creates breach targets. Biometric databases centralize the most sensitive data imaginable. Social graph systems remain fragile and limited.

The POY Protocol represents a fundamental departure from this paradigm. By leveraging hardware-secured enclaves already present in billions of devices, by performing all biometric processing on-device, by transmitting only cryptographic proofs, and by storing zero personal data, POY achieves what previous systems could not: reliable, universal human verification with zero privacy cost.

The internet does not need proof of identity. It needs proof of humanity.

POY Verify is the only verification system that requires zero personal data - not because we promise to protect it, but because we engineered a system that never needs it in the first place.

You cannot breach data you never possessed. You cannot misuse information you never collected. You cannot lose trust you never asked for. That is the POY Protocol.

Ready to Integrate?

Explore the POY developer documentation, try the live demo, or join the waitlist for early API access.

View API Docs Join Waitlist

References

  1. Imperva. "2025 Bad Bot Report." Imperva Research Labs, 2025. Reports 64% of global web traffic as automated.
  2. Deeptrace. "State of Deepfakes 2025." Analysis showing 500% increase in deepfake content volume since 2024.
  3. Association of Certified Fraud Examiners. "Global Fraud Study 2025." Deepfake-related financial fraud losses estimated at $25 billion annually.
  4. World Bank. "Identification for Development (ID4D) Global Dataset." 1.4 billion people worldwide lack any form of officially recognized identification.
  5. Hao, S. et al. "An Empirical Study and Improved Robustness of Modern CAPTCHAs." ETH Zurich, 2024. Demonstrated near-100% AI solve rates across major CAPTCHA implementations.
  6. Lim, G. "Dead Internet Theory: AI Content and the Future of Online Authenticity." Stanford Internet Observatory, 2025.
  7. Tolosana, R. et al. "DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection." Information Fusion, 2024. Analysis of deepfake capabilities against liveness detection systems.
  8. European Data Protection Board. "Report on Biometric Data Processing for Unique Identification." EDPB, 2025. Review of Worldcoin operations and regulatory responses across EU member states.
  9. MarketsandMarkets. "Identity Verification Market - Global Forecast to 2030." Projects market reaching $17.8 billion by 2030 at 16.7% CAGR.
  10. European Commission. "GDPR Enforcement Tracker." Cumulative fines exceeding $4.3 billion through December 2025.
  11. IBM Security. "Cost of a Data Breach Report 2025." Global average breach cost at $4.45 million; identity-related breaches averaging $4.95 million.