IDmission
iBeta Level 1 — ISO 30107-3
iBeta Level 2 — ISO 30107-3
Passive Liveness Detection

Passive liveness.
As simple as a selfie.

IDmission's AI-powered passive liveness detection. No blinking. No smiling. No head turns. Just a single selfie — and a 160-layer deep neural network that instantly determines if the person is real, physically present, and who they claim to be.

No Blinking Required
50 ms Latency
ISO 30107-3 Certified
The 3 P's of IDmission's AI-powered passive liveness detection.

Three layers of proof in a single selfie

Each selfie is evaluated across three dimensions to establish identity with confidence.

Person

Is this a real human being — not a statue, mask, face bust, or virtual persona posing as a person?

Physically Present

Is this person authenticating in real time — not using a photo, video, deepfake, or replay attack?

Precisely Who They Claim

Is this person who they say they are — and do they have the right to access the service requested?


How It Works

Completely passive verification

No commands. No friction. The user simply takes a selfie and the CNN does the rest — in under a second.

1

Remove hat & glasses

The SDK follows ICAO guidelines and prompts the user to remove obstructing accessories for optimal capture quality.

2

Fit face in the oval

Real-time guidance checks face fit within the oval (65–100% tolerance), head angle (yaw within ±10°, roll within ±5°), eye openness, lighting, and saturation.

3

Auto-capture & AI analysis

Once all criteria pass, the selfie is captured automatically. The 160-layer CNN analyzes the image and returns a real/spoof determination with a confidence score; a selfie passes only when the real score exceeds 0.9.

4

Result in milliseconds

50 ms on-device latency. 600 ms end-to-end response time. The user never waits — the entire process feels like taking a normal selfie.
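The capture gate and decision threshold in the steps above can be sketched as a simple client-side check. This is an illustrative sketch only: the type and function names are hypothetical, not the actual SDK API; only the thresholds (face fit 65–100%, yaw ±10°, roll ±5°, real score above 0.9) come from the text.

```typescript
// Illustrative capture-gate and decision logic. Field and function names
// are hypothetical; thresholds are the ones quoted in the steps above.
interface FrameMetrics {
  faceFitPct: number;   // face size relative to the oval, in percent
  yawDeg: number;       // rotation about the vertical (Y) axis
  rollDeg: number;      // rotation about the camera (Z) axis
  eyesOpen: boolean;
  lightingOk: boolean;
}

// Auto-capture fires only once every real-time criterion passes.
function readyToCapture(m: FrameMetrics): boolean {
  return m.faceFitPct >= 65 && m.faceFitPct <= 100 &&
         Math.abs(m.yawDeg) <= 10 &&
         Math.abs(m.rollDeg) <= 5 &&
         m.eyesOpen && m.lightingOk;
}

// The CNN returns a real-confidence score; the selfie counts as live
// only when that score exceeds 0.9.
function isLive(realScore: number): boolean {
  return realScore > 0.9;
}
```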

Explore the Data

Go beyond the surface — drill down into the specific model-precision metrics and our document-realness performance benchmarks.

Threats Defeated

Trained against 15 attack types

The CNN is trained with over 500,000 balanced real and spoof images, then validated against over 50,000 test images — covering every known presentation attack vector.

160
CNN Layers Deep
Neural network depth for precision
3.4M
Trainable Parameters
Optimized for mobile inference
502K
Training Images
Diverse global dataset
51K
Test Images
Rigorous spoof-attack validation

Attack Types Defeated

Glossy printed photos
Matte printed photos
Low-res printed photos
Cutout masks
Cutout masks (eyes out)
3D paper masks
3D masks (eyes out)
Masks with wigs
3D busts with wigs
Still photo on laptop
Still photo on mobile
Video on laptop
Video on mobile
Deepfake video & images
Latex masks
Availability

Available everywhere you build

IDmission's AI-powered passive liveness detection works across every integration channel — native mobile, web, or server-side API.

Android SDK

Native Kotlin/Java SDK with on-device TFLite models.

iOS SDK

Native Swift/ObjC SDK with CoreML inference.

WebSDK

JavaScript library with in-browser ML models. Any browser.

REST API

Server-side integration for custom capture pipelines.
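A server-side integration typically posts the captured selfie to the liveness service and reads back the determination. The sketch below is a minimal request builder under assumed names: the endpoint URL, header scheme, and JSON fields are placeholders for illustration, not the documented IDmission API.

```typescript
// Hypothetical request builder for a server-side liveness check.
// The endpoint path, auth header, and JSON field names are assumptions
// for illustration only; consult the actual API reference.
function buildLivenessRequest(selfieBase64: string, apiKey: string) {
  return {
    url: "https://api.example.com/v1/liveness", // placeholder endpoint
    method: "POST" as const,
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,     // placeholder auth scheme
    },
    body: JSON.stringify({ image: selfieBase64 }),
  };
}
```

The same shape works for any custom capture pipeline: capture the selfie however you like, base64-encode it, and let the server-side model return the real/spoof score.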

Get Started with Liveness

Stop spoofing. Start with a selfie.

Certified to ISO 30107-3 Level 2. 99% accuracy. 50 ms latency. Zero user friction. See IDmission's AI-powered passive liveness detection in action.