
One URL. Every AI call compliant.

SentinelAI is a proxy gateway between your code and AI providers. It automatically enforces EU AI Act compliance, blocks prohibited uses, and routes risky requests for human review.

- base_url = "https://api.openai.com"
+ base_url = "https://gateway.sentinelai.app"
+ headers["X-Api-Key"] = "sg_live_xxxx"
HTTP Response
HTTP/1.1 451 Unavailable For Legal Reasons
EU-AIA-5.1c: Social scoring is prohibited under the EU AI Act
Legal reference: EU AI Act Art. 5(1)(c)

EU AI Act Compliance — Your AI Calls May Already Violate the Law

The EU AI Act is in force. Fines up to €35M are real. A regulatory audit can come at any time.

Fines up to €35 million

The EU AI Act imposes fines up to €35M or 7% of global revenue for prohibited AI practices. A single wrong API call can be costly.

Your ChatGPT call for HR violates Annex III

Using GPT for candidate screening? That's a high-risk AI system under Annex III §4. Human oversight is required — without it, you're in breach.

The regulator asks for an audit log — where's yours?

Every AI call needs a trail: who sent it, what for, what data was involved, what the outcome was. Without that, compliance is impossible.
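The four fields that sentence names map naturally onto one structured record per call. A hypothetical sketch — the schema is illustrative, not SentinelAI's actual log format:

```python
# Hypothetical audit record: one entry per AI call, covering the four
# fields named above. Illustrative schema, not SentinelAI's actual format.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    caller: str                 # who sent it (user or service identity)
    use_case: str               # what it was for (e.g. "hr_screening")
    data_categories: list[str]  # what data was involved
    outcome: str                # blocked / pending_review / allowed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AuditRecord(
    caller="svc-hr-bot",
    use_case="hr_screening",
    data_categories=["name", "email"],
    outcome="pending_review",
)
print(asdict(record)["outcome"])
```

Whatever the storage format, the point is that each record is written automatically at call time, so the trail exists before any regulator asks for it.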

AI Act Compliance Automation in Three Steps

No refactoring. No new SDKs. Just change one URL and get full AI governance.

1. Change one URL

Route your AI calls through SentinelAI gateway instead of directly to the provider.

- api.openai.com
+ gateway.sentinelai.app
2. Automatic screening

Every request passes through the rule engine: PII anonymization, risk classification, and legal-compliance checks.

→ PII detected → anonymized
→ UseCase: hr_screening
→ Risk: High (Annex III §4a)
→ Outcome: pending_review
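The screening steps above can be sketched as a toy pipeline. The rules, regex, and field names here are illustrative stand-ins, not SentinelAI's actual rule engine:

```python
import re

# Toy sketch of the screening pipeline: anonymize obvious PII, classify
# risk by use case, decide an outcome. Rules and regex are illustrative
# stand-ins, not SentinelAI's actual engine.

HIGH_RISK_USE_CASES = {
    "hr_screening": "Annex III §4(a)",
    "credit_scoring": "Annex III §5(a)",
}
PROHIBITED_USE_CASES = {"social_scoring": "Art. 5(1)(c)"}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def screen(use_case: str, prompt: str) -> dict:
    anonymized = EMAIL_RE.sub("[EMAIL]", prompt)
    if use_case in PROHIBITED_USE_CASES:
        return {"prompt": anonymized, "risk": "Prohibited",
                "basis": PROHIBITED_USE_CASES[use_case], "outcome": "blocked"}
    if use_case in HIGH_RISK_USE_CASES:
        return {"prompt": anonymized, "risk": "High",
                "basis": HIGH_RISK_USE_CASES[use_case],
                "outcome": "pending_review"}
    return {"prompt": anonymized, "risk": "Minimal", "basis": None,
            "outcome": "allowed"}

result = screen("hr_screening", "Rate candidate jane@example.com")
print(result["outcome"])  # pending_review
```

A real engine needs far broader PII detection than one regex; the point of the sketch is the flow: anonymize first, then classify, then gate the outcome.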
3. Review on the dashboard

Your compliance officer sees the request, risk level, and relevant law. Approve, reject, or escalate.

Review Task #4821
Status: pending_review
Actions: ✓ Approve  ✗ Reject  ↑ Escalate

AI Risk Assessment & Compliance Enforcement — Live Examples

Real examples from the EU AI Act rule engine. See how the gateway classifies and enforces every AI request.

social_scoring · EU AI Act Art. 5(1)(c)
→ HTTP 451 — Blocked
hr_screening · EU AI Act Annex III §4(a)
→ Sent for human review
chatbot (no AI disclosure) · EU AI Act Art. 50(1)
→ Warning — disclosure required
biometric_mass_surveillance · EU AI Act Art. 5(1)(d)
→ HTTP 451 — Blocked
credit_scoring · EU AI Act Annex III §5(a)
→ Sent for human review
deepfake / synthetic_media · EU AI Act Art. 50(4)
→ Disclosure metadata required

AI Compliance Software vs Building It Yourself

You could build an AI governance layer yourself. But should you?

| Category | SentinelAI | Build It Yourself |
| --- | --- | --- |
| Integration | One line of code | Months of development |
| Legal coverage | EU AI Act + GDPR + UK + US | How many lawyers do you have? |
| Audit log | Automatic for every request | Design it from scratch |
| Human review | Built-in workflow + dashboard | Custom review app needed |
| PII protection | Auto-anonymization (16+ types) | Regex for every format individually |
| Law updates | Automatic — zero effort from you | Track the EU Official Journal yourself |
| Cost | From €0/mo | €50k–150k+ first dev cycle |
| Time to production | ~15 minutes | 3–6 months |

Simple, transparent pricing

Start free. Scale as you grow.

Free
€0
forever — no credit card
  • 1,000 requests / month
  • EU AI Act coverage
  • PII anonymization
  • Basic audit log
  • 1 user
Start Free
Starter
€149
per month — annual billing available
  • 10,000 requests / month
  • EU + UK jurisdictions
  • Human review workflow
  • Compliance dashboard
  • Up to 10 users
  • Email support
Reserve Access
Most Popular
Business
€799
per month — annual billing available
  • 150,000 requests / month
  • EU + US + UK jurisdictions
  • Copilot & Azure OpenAI support
  • Advanced compliance scoring
  • Up to 50 users
  • Priority support
Reserve Access
Enterprise
Custom
tailored to your needs
  • Unlimited requests
  • All jurisdictions
  • On-premise deployment option
  • Custom rule engine policies
  • Unlimited users
  • Dedicated account manager
Contact Us

Reserve your free access

Be among the first to use SentinelAI. No obligations.

Help us build the right product

What is your primary concern regarding AI compliance?

Which AI provider do you currently use?

Responses are anonymous and help us build the right product.

Frequently Asked Questions About AI Act Compliance

Everything you need to know about EU AI Act compliance and how SentinelAI helps.

What is the EU AI Act and who does it apply to?

The EU AI Act is the world’s first comprehensive AI regulation, in force since 2024. It applies to any company deploying or developing AI systems that affect people in the EU — regardless of where the company is based. This includes using AI APIs like OpenAI, Anthropic, or Azure OpenAI.

How does SentinelAI automate AI Act compliance?

SentinelAI works as a proxy gateway between your code and AI providers. You change one URL, and every AI request is automatically screened against EU AI Act rules — prohibited uses are blocked, high-risk uses are routed for human review, PII is anonymized, and a full audit trail is created.
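In code, the one-URL change looks like building the same OpenAI-style request but aiming it at the gateway. A sketch using only the standard library — the `/v1/chat/completions` path is an assumption based on OpenAI-compatible proxies, and the model name is an arbitrary example:

```python
import json
import urllib.request

# The /v1/chat/completions path is an assumption based on
# OpenAI-compatible proxies; the model name is an arbitrary example.
GATEWAY = "https://gateway.sentinelai.app/v1/chat/completions"

def build_request(prompt: str, openai_key: str, sentinel_key: str):
    """Build the request you'd normally send to OpenAI, aimed at the
    gateway, with the SentinelAI key added as the X-Api-Key header."""
    body = json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        GATEWAY,
        data=body,
        headers={
            "Authorization": f"Bearer {openai_key}",
            "X-Api-Key": sentinel_key,  # gateway key (placeholder format)
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Summarize this clause.", "sk-...", "sg_live_xxxx")
print(req.full_url)
```

With the official OpenAI Python SDK the same change is the `base_url` and `default_headers` arguments to the `OpenAI` client constructor; no other code changes.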

What AI practices are prohibited under the EU AI Act?

The EU AI Act prohibits social scoring systems (Art. 5(1)(c)), real-time biometric mass surveillance (Art. 5(1)(d)), emotion recognition in workplaces and schools, manipulative AI techniques, and exploitation of vulnerable groups. SentinelAI automatically detects and blocks these prohibited uses.

What are the fines for EU AI Act non-compliance?

Fines for prohibited AI practices can reach up to €35 million or 7% of global annual revenue. Violations of high-risk system obligations can result in fines up to €15 million or 3% of revenue, and supplying incorrect or misleading information to authorities can cost up to €7.5 million or 1%.

Do I need AI Act compliance if I just use ChatGPT or GPT-4 APIs?

Yes. If you use AI APIs for high-risk purposes like HR screening, credit scoring, or legal decisions, you’re operating a high-risk AI system under Annex III of the EU AI Act. Human oversight, documentation, and risk management are required — even if you didn’t train the model yourself.

How long does it take to integrate SentinelAI?

About 15 minutes. You change one URL in your code (from your AI provider to the SentinelAI gateway) and add an API key header. No new SDKs, no refactoring, no infrastructure changes needed.