ThinkTech

PUBLIC INTEREST AI GOVERNANCE

Practical AI ethics for teams deploying AI.

ThinkTech publishes evidence-based guides, risk checklists, benchmark explainers, and policy trackers for people making real decisions about AI systems.

Example AI Risk Register

Use case: Customer support chatbot
Data sensitivity: Medium
Autonomy: Partial

Primary risks

Hallucinated advice
Data leakage
Overreliance
Escalation failure

Recommended controls

Human escalation
Logging
Vendor model disclosure
User notice

Full template includes scoring, owner, mitigation status, review cadence, and evidence fields.
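The fields the full template describes (scoring, owner, mitigation status, review cadence, evidence) could be sketched as a simple data structure. This is a minimal illustration, not ThinkTech's actual template; all field names and the likelihood-times-impact scoring rule are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of an AI risk register (field names are illustrative)."""
    risk: str                    # e.g. "Hallucinated advice"
    likelihood: int              # 1 (rare) .. 5 (frequent)
    impact: int                  # 1 (minor) .. 5 (severe)
    owner: str                   # accountable person or team
    mitigation_status: str       # "open", "in progress", or "mitigated"
    review_cadence_days: int     # how often the entry is re-reviewed
    evidence: list[str] = field(default_factory=list)  # links to logs, audits

    @property
    def score(self) -> int:
        # A simple likelihood x impact product; real registers may
        # weight dimensions differently or use qualitative bands.
        return self.likelihood * self.impact

entry = RiskEntry(
    risk="Hallucinated advice",
    likelihood=4,
    impact=3,
    owner="Support platform team",
    mitigation_status="in progress",
    review_cadence_days=90,
    evidence=["evaluation logs"],
)
print(entry.score)  # 12
```

Keeping the register as structured data rather than a spreadsheet makes it straightforward to sort by score, flag overdue reviews, and export evidence links for audits.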

Vendor-neutral
Evidence-based
Regularly updated
Built for product, policy, and procurement teams

AI Risk Control Library

Common AI deployment risks with recommended controls and evidence requirements.

Risk             | Recommended control         | Evidence needed
Hallucination    | Human review before action  | Evaluation logs, accuracy metrics
Data leakage     | Redaction, access logging   | Vendor data handling policy
Prompt injection | Input filtering, sandboxing | Adversarial test results
Overreliance     | Escalation path to human    | UX review, user feedback
Bias             | Demographic testing, audit  | Fairness metrics, audit report

Controls are based on public standards, documented failure patterns, and practical AI governance review.
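The risk-to-control mapping above lends itself to a machine-readable lookup, so teams can check which evidence a deployment still owes. The dictionary below is a sketch built from the table's contents; the structure and key names are illustrative, not ThinkTech's actual schema.

```python
# The control library table, expressed as a lookup from risk to
# recommended control and required evidence artifacts.
CONTROL_LIBRARY = {
    "hallucination": {
        "control": "Human review before action",
        "evidence": ["Evaluation logs", "Accuracy metrics"],
    },
    "data leakage": {
        "control": "Redaction, access logging",
        "evidence": ["Vendor data handling policy"],
    },
    "prompt injection": {
        "control": "Input filtering, sandboxing",
        "evidence": ["Adversarial test results"],
    },
    "overreliance": {
        "control": "Escalation path to human",
        "evidence": ["UX review", "User feedback"],
    },
    "bias": {
        "control": "Demographic testing, audit",
        "evidence": ["Fairness metrics", "Audit report"],
    },
}

def required_evidence(risk: str) -> list[str]:
    """Return the evidence artifacts needed to show a risk is controlled."""
    return CONTROL_LIBRARY[risk.lower()]["evidence"]

print(required_evidence("Bias"))  # ['Fairness metrics', 'Audit report']
```

A lookup like this can back a pre-deployment checklist: iterate over the risks a use case triggers and collect the union of evidence items reviewers must see.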

Latest AI governance guides


Policy tracker

Framework             | Status   | Affects                       | Updated
EU AI Act             | In force | High-risk systems, GPAI       | Apr 2026
GPAI Code of Practice | Active   | Foundation model providers    | Mar 2026
NIST AI RMF           | Guidance | AI governance teams           | Feb 2026
US state AI laws      | Evolving | Employers, vendors, platforms | Apr 2026


Stay informed on AI governance

A monthly digest of policy changes, new tools, and research findings. No spam, no hype.