For Immediate Release


Gettit Deploys Multi-Layer Safety Infrastructure — NCMEC Reporting, Azure CSAM Detection, and AI Moderation on Every Upload

April 7, 2026


NEW YORK, NY — April 7, 2026

Gettit, the privacy-first dating app launching in New York City this spring, today disclosed the full scope of its safety infrastructure — a multi-layered system that combines AI moderation, human review, real-time ban enforcement, and direct NCMEC reporting to create the most safety-hardened consumer dating app available.

While most dating apps treat safety as a reactive support ticket, Gettit bakes child safety, CSAM detection, and proactive moderation into its core infrastructure before a single user signs up.

The Safety Stack: Layer by Layer

Layer 1: CSAM Detection via Azure Content Safety

Every photo uploaded to Gettit is run through Azure Content Safety’s CSAM detection pipeline. A severity score of 6 or higher triggers an immediate cascade:

  1. The upload is blocked
  2. The account is flagged in our moderation database
  3. Account access is suspended in real time via server-side access control
  4. An admin alert is sent via SendGrid

This is proactive detection — it fires on upload, before the content reaches any user’s screen.
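In pseudocode, the cascade above might look like the following. This is a minimal illustrative sketch, not Gettit's actual implementation: the threshold constant and the helper names (`block_upload`, `flag_account`, `suspend_access`, `send_admin_alert`) are hypothetical placeholders for the steps the press release describes.

```python
# Illustrative sketch of the Layer 1 cascade. A severity score at or above
# the threshold triggers all four steps, in order, before any content is
# visible to other users.

CSAM_SEVERITY_THRESHOLD = 6  # a score of 6 or higher triggers the cascade


def handle_upload(account_id: str, severity: int) -> list[str]:
    """Run the post-scan cascade for a single photo upload."""
    if severity < CSAM_SEVERITY_THRESHOLD:
        return ["accepted"]
    return [
        block_upload(account_id),       # 1. the upload is blocked
        flag_account(account_id),       # 2. flagged in the moderation database
        suspend_access(account_id),     # 3. access suspended in real time
        send_admin_alert(account_id),   # 4. admin alert sent (e.g. via email)
    ]


# Placeholder implementations standing in for real infrastructure calls.
def block_upload(account_id: str) -> str:
    return "blocked"

def flag_account(account_id: str) -> str:
    return "flagged"

def suspend_access(account_id: str) -> str:
    return "suspended"

def send_admin_alert(account_id: str) -> str:
    return "alerted"
```

The key property is that the cascade runs synchronously at upload time, so a flagged image never enters the content pipeline at all.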

Layer 2: AI-Powered NSFW Moderation

Our NSFW detection AI runs on every photo upload, separate from the CSAM pipeline. Explicit content that violates Gettit’s policies is quarantined before it appears on any user’s screen. Face detection confirms that profile photos contain identifiable faces — a baseline requirement for the selfie verification system.

Layer 3: NCMEC CyberTipline Integration

Gettit has registered with NCMEC (the National Center for Missing and Exploited Children) as an Electronic Service Provider. The reporting pipeline is automated: CSAM detections that meet the threshold trigger a CyberTipline report without requiring human intervention in the reporting step. This satisfies the mandatory CyberTipline reporting obligations that federal law (18 U.S.C. § 2258A) places on Electronic Service Providers.

Layer 4: Human Moderation Queue

AI detection is the first line. Human moderation is the second. Gettit’s admin panel surfaces a prioritized moderation queue where flagged accounts, reported content, and borderline AI detections are reviewed by human moderators. Reports from users feed into this queue in real time.

Layer 5: Real-Time Ban Enforcement

Our real-time access control means that when an account is banned, it's banned immediately: not at the next session, not at the next refresh. Because the check runs server-side on every request, a ban cannot be bypassed by re-opening the app or reusing a cached session.
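The pattern described above amounts to consulting a live ban store on every request instead of trusting client-side session state. The sketch below shows the idea; the names (`banned_accounts`, `ban`, `authorize_request`) are illustrative, not Gettit's real API.

```python
# Per-request ban enforcement sketch. The decision is made server-side on
# every call, so a ban takes effect on the very next request rather than
# at the next login or app restart.

banned_accounts: set[str] = set()  # stands in for a real ban store


def ban(account_id: str) -> None:
    """Record a ban; it applies to the account's next request."""
    banned_accounts.add(account_id)


def authorize_request(account_id: str) -> bool:
    """Called for every incoming request. A cached client session cannot
    bypass this because the check happens here, not on the device."""
    return account_id not in banned_accounts
```

The design choice worth noting is where the check lives: a client that caches "logged in" state locally will honor a ban only on its next refresh, whereas a server-side check on every request closes that window entirely.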

“Building a dating app safely isn’t hard — it just requires actually caring enough to do it. We treat safety infrastructure the same way a bank treats fraud detection: multiple independent layers, proactive detection, and zero tolerance.” — Jax Sterling, CEO & Co-Founder, Gettit

How This Compares to the Industry

Most major dating apps operate on a report-and-react model: content is visible until someone reports it, at which point it enters a moderation queue. This means every piece of harmful content reaches at least one user before it’s addressed.

Gettit’s model is detect-before-display: harmful content is caught at the upload layer, before it reaches any user. The report-and-react system still exists as a backup — but it’s the last line of defense, not the first.

This approach is consistent with our CSAE Policy, which details our full commitments around child safety and exploitation prevention, and our blog post on the fake profile problem that explains why proactive moderation matters structurally.

About Gettit

Gettit is an inclusive dating and social networking platform built for everyone. Available on iOS and Android. Sign Up Now and get 6 months of Plus free.


Media Contact
Gettit Communications
press@gettit.app
www.gettit.app
