PalmerAI
Operational AI Governance

Know Your AI.
Prove Your Control.

Most teams can't answer: "What AI is running, who approved it, and can you prove it?"
PalmerAI gives you the control layer to say yes, with evidence.

Visibility: see every AI request before it runs
Control: policy enforcement + approval workflows
Proof: audit-ready evidence exports

The question you can't answer

When leadership asks

  • Board asks: "What's our AI governance posture?"
  • CISO asks: "What AI tools are running?"
  • Auditor asks: "How do you prevent unauthorized AI usage?"

Reality on the ground

  • Many teams using AI tools (ChatGPT, Claude, custom)
  • No central visibility
  • No approval process
  • No audit trail

You have shadow AI everywhere. And no way to prove control.

PalmerAI fixes this in 30 days.

What it is / what it is not

PalmerAI is the single control point between your teams and AI providers. Before any AI request runs, we check policy, enforce approvals if needed, and log evidence. Think of it as a security gateway for AI: the same way you'd never let teams hit production databases without access control.
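
In practice, that control point behaves like a small proxy: evaluate policy, record the decision, then either block, hold for approval, or forward. The sketch below is a minimal illustration in a Cloudflare Worker style (Workers are mentioned further down this page); the helpers evaluatePolicy and recordAuditEvent and the PROVIDER_URL binding are assumptions for illustration, not PalmerAI's actual API.

```ts
// Illustrative sketch only. Helper names (evaluatePolicy, recordAuditEvent)
// and the PROVIDER_URL binding are assumptions, not PalmerAI's actual API.
type Decision = "allow" | "approval_required" | "block";

interface PolicyResult {
  decision: Decision;
  policyId: string;
  policyVersion: string; // recorded so evidence can cite the rule in force
}

declare function evaluatePolicy(body: unknown): Promise<PolicyResult>;
declare function recordAuditEvent(event: Record<string, string>): Promise<void>;

export default {
  async fetch(request: Request, env: { PROVIDER_URL: string }): Promise<Response> {
    const body = await request.json();

    // 1. Check policy before anything runs.
    const result = await evaluatePolicy(body);

    // 2. Log decision metadata only; no raw prompt storage by default.
    await recordAuditEvent({
      requestId: crypto.randomUUID(),
      decision: result.decision,
      policyId: result.policyId,
      policyVersion: result.policyVersion,
      timestamp: new Date().toISOString(),
    });

    // 3. Enforce: block, hold for approval, or forward to the AI provider.
    if (result.decision === "block") return new Response("Blocked by policy", { status: 403 });
    if (result.decision === "approval_required") return new Response("Pending approval", { status: 202 });
    return fetch(env.PROVIDER_URL, { method: "POST", headers: request.headers, body: JSON.stringify(body) });
  },
};
```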

What PalmerAI does

  • One control point
  • Policy enforcement
  • Approval workflows
  • Audit evidence

What PalmerAI is NOT

  • Not a compliance certification tool
  • Not an AI risk classifier
  • Not a bias-testing platform

Live control plane

Operator console

Human reviewers stay in the loop with clear summaries of high-risk AI actions.

Policy gateway

A single service where policy rules are applied and enforced before AI runs.

Proof pack

Decision metadata and policy references bundled into reports for review or audits.

  • No prompt storage by default: designed to keep raw prompts out of storage.
  • Redaction + approvals: operator approval for high-risk requests and exceptions.
  • Evidence-first records: policy references and version hooks for reviewable records.
  • Cloudflare Workers: built for edge deployment and isolation.
  • Governance-first posture: design language and controls aligned to review needs.

How it works (in 90 seconds)

  1. Request -> The client sends the AI request to the gateway.
  2. Policy checks -> The gateway evaluates the request against your rules.
  3. Decision -> The gateway decides: allow, require approval, or block.
  4. Approval + retry -> If approval is required and granted, the request is re-run under control.
  5. Audit event -> The decision and policy version are recorded for evidence.
Built to be deployable in a pilot, but structured for governance-grade operation from day one.
REQUEST (client -> gateway) -> POLICY (evaluate request) -> DECISION (allow / approval required / block) -> APPROVAL (operator review) -> AUDIT (evidence record)
This keeps high-risk automation reviewable and evidence-backed - without slowing down safe paths.
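
For concreteness, steps 2 and 3 can be thought of as ordered rules where the first match decides the outcome. The sketch below is illustrative only; the rule names, fields, and risk levels are assumptions, not PalmerAI's policy schema.

```ts
// Illustrative only: field and rule names are assumptions, not PalmerAI's schema.
type Decision = "allow" | "approval_required" | "block";

interface AiRequest {
  workflow: string;                        // e.g. "ticket-triage"
  riskLevel: "low" | "medium" | "high";
  customerFacing: boolean;
}

interface PolicyRule {
  id: string;
  version: string;                         // cited in the audit event (step 5)
  matches: (req: AiRequest) => boolean;
  decision: Decision;
}

const rules: PolicyRule[] = [
  { id: "block-unapproved-workflows", version: "v3",
    matches: (r) => r.workflow !== "ticket-triage", decision: "block" },
  { id: "require-approval-high-risk", version: "v3",
    matches: (r) => r.riskLevel === "high" || r.customerFacing, decision: "approval_required" },
  { id: "default-allow", version: "v3",
    matches: () => true, decision: "allow" },
];

// First matching rule wins; its id and version go into the evidence record.
function decide(req: AiRequest): { decision: Decision; policyId: string; policyVersion: string } {
  const rule = rules.find((r) => r.matches(req))!;  // default-allow always matches
  return { decision: rule.decision, policyId: rule.id, policyVersion: rule.version };
}
```

Step 4 (approval + retry) would extend this sketch with an approval flag the rules can check; it is omitted here for brevity.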

Pilot (30 days)

Disciplined scope

One AI use case. Clear success criteria. Evidence-ready in 30 days.

Built for security, compliance, or IT owners who need a real pilot with minimal integration work: policy + approvals + audit evidence.

Scope
What we pilot
  • One primary workflow (for example: ticket triage, code review, or support drafting).
  • Defined risk levels and approval triggers for that workflow.
  • Minimal footprint in v1 - we sit in front of your existing tools, not inside them.
Deliverables
What you get
  • An approval-aware execution flow (allow / block / approval required).
  • Evidence-first decision metadata (request ID + policy reference + timestamps).
  • A review-friendly console to inspect incidents during the pilot.
Outcome
Success criteria
  • Demonstrable policy enforcement before AI actions run.
  • An audit report you can share with security / compliance stakeholders.
  • A clear go / no-go decision and rollout options after 30 days.
Practical by design: evidence-heavy, claim-light. Capabilities needing verification stay in the pilot test plan.
Download 1-pager

Alternatives

vs. general AI gateways

  • They optimize cost/latency.
  • PalmerAI optimizes governance (approvals, evidence).

vs. edge routing & caching

  • They provide edge performance.
  • PalmerAI provides policy enforcement & approvals.

vs. building in-house

  • 6-12 months of engineering.
  • PalmerAI deploys in 30 days.

vs. manual governance (spreadsheets + chat)

  • Manual approval messages.
  • PalmerAI: structured approval queue + logs.

Regulatory context

Feb 2025: first EU AI Act obligations apply
Now: governance readiness expectations
Aug 2026: full obligations for high-risk AI

PalmerAI does not claim full EU AI Act compliance.

It provides the operational governance layer: oversight, approvals, and audit evidence.

Pricing preview

Transparent, straightforward pricing. Start with a planning sprint or pilot. Move to managed governance when you're ready. All prices exclude VAT.

  • Manual governance: free but unscalable
  • Compliance consultant: ~€50K-150K/year
  • Building in-house: 6-12 months of development
  • PalmerAI Managed Standard: €1,900/month

PalmerAI costs less than one month of a compliance consultant, and it runs 24/7.

AI Governance Planning Sprint (5 days)

EUR 1,500 (excl. VAT)

One-time, fixed scope

A focused planning engagement to validate your use case, define success criteria, and scope your pilot with zero risk. 100% credited toward a pilot started within 30 days.

  • Risk assessment of current AI usage
  • Success metrics and KPIs
  • Pilot scope and timeline
  • Initial governance policy draft

Pilot (Single Use Case - 30 days)

EUR 3,900 (excl. VAT)

One-time, fixed scope

Prove AI governance on one real workflow. Test policies, approvals, and audit evidence before you commit to a managed plan.

  • One primary AI workflow governed
  • Approval rules and policy enforcement
  • Audit evidence report at end of pilot
  • 1 review session at day 30
  • 100% credit applied to first 3 months of Managed plan if you convert

Pilot Plus (Two Use Cases + Workshop - 30-45 days)

EUR 7,900 (excl. VAT)

One-time, fixed scope

Expand governance across two AI workflows and build an internal operating model for longer-term rollout.

  • Two primary AI workflows governed
  • Approval rules and policy enforcement (both workflows)
  • Governance operating model workshop (4 hours, your team + ours)
  • Audit evidence report at end of pilot
  • 2 review sessions (day 15 and day 30)
  • 100% credit applied to first 6 months of Managed plan if you convert

Managed Governance Plans

Move to ongoing governance once you've validated the pilot.

Managed Light - Essential AI Governance

EUR 1,200 (excl. VAT)

/ month

Best for 10-50 person teams starting their first AI governance layer.

  • 1 governed AI use case
  • Monthly audit reports + evidence export
  • Business-hours email support (up to 2 hours/month)
  • 1 policy review and update per month
  • Safe mode + kill switch
  • No long-term prompt storage

Excludes: SSO / SIEM integration, on-premise deployment, 24/7 support.

Managed Standard - Growing Governance (most popular)

EUR 1,900 (excl. VAT)

/ month

Best for 20-150 person mid-market teams with AI across multiple workflows.

  • Up to 2 governed AI use cases
  • Weekly audit reports + evidence export
  • Business-hours support (up to 4 hours/month)
  • 3 policy reviews and updates per month
  • Safe mode + kill switch + incident review
  • Controlled rollout guidance
  • No long-term prompt storage

Excludes: SSO / SIEM integration, on-premise deployment, 24/7 support.

Managed Plus - Enterprise Governance

EUR 3,400 (excl. VAT)

/ month

Best for mid-market to small enterprise teams with complex governance needs.

  • Up to 4 governed AI use cases
  • Weekly audit reports + evidence export
  • Business-hours support (up to 8 hours/month)
  • 6 policy reviews and updates per month
  • Safe mode + kill switch + incident response
  • Annual incident simulation exercise
  • SSO integration scoping (custom pricing)
  • No long-term prompt storage

Excludes: SIEM integration (custom engagement), on-premise deployment, 24/7 support.

Discovery Sprint credit

If you start a Pilot within 30 days, we credit 100% of the Discovery fee toward the Pilot.

Managed tier transparency

Typical tiers (EUR 1,200 / 1,900 / 3,400 per month) vary by support hours, reporting cadence, and number of governed use cases.

All pilot fees are 100% credited toward your first 3-6 months of Managed plan if you convert.
Not sure which plan is right for you? Start with the 5-day Planning Sprint or book a call.

Procurement-friendly

Low-friction evaluation posture.

Designed for teams that need a clean pilot scope, clear evidence, and minimal operational overhead.

  • No cookies / no tracking by default
  • Evidence-first audit summaries (request ID + policy reference)
  • Scope-disciplined pilot (one use case)
  • Clear go / no-go success criteria
  • Built for security / compliance stakeholders
  • Minimal integrations in v1 to keep risk low
Security and data-handling review pack (pilot-ready) available on request.

Why simple wins

Faster pilot

Small scope and few dependencies mean faster time to first results.

Fewer moving parts

Clear control points keep operations and review straightforward.

Evidence-first

Decision metadata supports reviews without introducing extra data risk.

Use cases

CISO / Head of Security

Problem: answering the board about AI governance.

Solution: PalmerAI dashboard + export.

VP Engineering / CTO

Problem: teams using AI tools with no central visibility.

Solution: single gateway, rules enforced.

Compliance / Legal

Problem: nothing to show auditors.

Solution: evidence exports with IDs, decisions, policy versions.

Product / Ops

Problem: customer-facing AI content needs approval.

Solution: approval queue with logged decisions.

Audit reports

Audit reports that survive scrutiny.

Compact, evidence-first summaries with request IDs, decisions, policy references, and timestamps - enough to explain what happened, without storing prompt content.
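
One record in such a summary could look like the sketch below; the field names and values are illustrative assumptions, not a real export format.

```ts
// Illustrative example record; field names and values are made up for clarity.
interface EvidenceRecord {
  requestId: string;                       // correlates the decision to a gateway request
  decision: "allow" | "approval_required" | "block";
  policyId: string;                        // rule that produced the decision
  policyVersion: string;                   // version in force at decision time
  approvedBy?: string;                     // present only when an operator approved
  timestamp: string;                       // ISO 8601
}

const example: EvidenceRecord = {
  requestId: "3f6c1a2e-8d4b-4c09-9b1f-2a7e5d6c8f01",
  decision: "approval_required",
  policyId: "require-approval-high-risk",
  policyVersion: "v3",
  approvedBy: "operator@example.com",
  timestamp: "2025-05-12T09:41:07Z",
};
```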

Security

  • Data minimization: the default posture avoids raw prompt storage; the focus is on decisions and evidence metadata.
  • Operator controls: high-risk actions require explicit approval, with audit context and timestamps.
  • Governance-ready logging: policy references and versioning support incident reviews and compliance reporting.

No cookies by default. No tracking. This page stores nothing locally beyond normal browser operation.

FAQ

What is an AI Gateway? A control layer that applies policy checks to AI requests and records decision metadata before execution.
Discovery vs Pilot: what is the difference? Discovery is planning and de-risking. Pilot delivers a working control layer for a real use case.
What triggers approval? Policy rules define which requests require approval based on risk and scope.
What do you log? Decision metadata such as request ID, decision, policy reference, and timestamps. Content storage depends on the deployment scope.
What do you need from us? A defined use case, success criteria, and a point of contact for approvals and policy review.
Timeline and typical pilot scope? A 30-day pilot with one or two use cases and a clear go/no-go decision.

Ready to prove AI governance in 30 days?

Start with the 5-day planning sprint or jump directly to a pilot. We'll show you how the PalmerAI Gateway works in your environment and leave you with real evidence you can act on.

Schedule a call