PUBLIC INTEREST AI GOVERNANCE
Practical AI ethics for teams deploying AI.
ThinkTech publishes evidence-based guides, risk checklists, benchmark explainers, and policy trackers for people making real decisions about AI systems.
Example AI Risk Register
Primary risks
Recommended controls
Full template includes scoring, owner, mitigation status, review cadence, and evidence fields.
AI Risk Control Library
Common AI deployment risks with recommended controls and evidence requirements.
| Risk | Recommended control | Evidence needed |
|---|---|---|
| Hallucination | Human review before action | Evaluation logs, accuracy metrics |
| Data leakage | Redaction, access logging | Vendor data handling policy |
| Prompt injection | Input filtering, sandboxing | Adversarial test results |
| Overreliance | Escalation path to human | UX review, user feedback |
| Bias | Demographic testing, audit | Fairness metrics, audit report |
Controls are based on public standards, documented failure patterns, and practical AI governance review. Read the methodology
Featured tools
View all toolsAI Risk Register Template: A Practical Framework for Teams
A ready-to-use AI risk register template with pre-filled examples for five common use cases. Includes scoring methodology, review cadence, and a structured format for tracking AI risks across your organization.
Model Card Template: Structured Documentation for AI Systems
Structured template for documenting AI model capabilities, limitations, intended use, and evaluation results. Based on the Model Cards framework.
AI Procurement Checklist: 47 Questions Before Buying AI Tools
A structured checklist of 47 questions for procurement teams evaluating AI vendors. Covers data handling, model transparency, compliance, support, and exit strategy.
Latest AI governance guides
View all guidesResponsible AI Development: Think Before You Code
A practical guide for developers, students, and product teams building AI systems that affect real people.
AI Vendor Due Diligence: 30 Questions in Five Categories
A structured framework for evaluating AI vendors before procurement. Covers 30 questions across model transparency, data practices, security, compliance, and commercial terms, with a red flags table and comparison framework.
AI Ethics for Product Teams: A Practical Checklist
A structured checklist for product managers and engineers building AI-powered features. Covers data sourcing, model selection, user disclosure, and ongoing monitoring.
Policy tracker
View full tracker| Framework | Status | Affects | Updated |
|---|---|---|---|
| EU AI Act | In force | High-risk systems, GPAI | Apr 2026 |
| GPAI Code of Practice | Active | Foundation model providers | Mar 2026 |
| NIST AI RMF | Guidance | AI governance teams | Feb 2026 |
| US state AI laws | Evolving | Employers, vendors, platforms | Apr 2026 |
AI risk library
View all entriesPrompt Injection: Definition, Attack Patterns, Detection, and Mitigation
Prompt injection allows attackers to override AI system instructions through crafted inputs. Patterns, detection methods, and mitigations.
Overreliance on AI: Automation Bias, Skill Atrophy, and Organizational Controls
Overreliance on AI systems leads to degraded human judgment and missed errors. Patterns, warning signs, and organizational controls.
AI Bias: Types, Impact Assessment, Detection, and Mitigation
AI systems can encode and amplify existing biases in training data, model design, and deployment context. Assessment frameworks and mitigation approaches.
Benchmark explainers
View allHow to Read AI Leaderboards Without Getting Fooled
AI leaderboards rank models by benchmark scores, but the rankings often mislead. This guide covers six common ways leaderboards deceive, what to check before trusting results, and how major benchmarks actually differ.
Why AI Benchmarks Mislead Buyers and Decision-Makers
AI benchmarks are the primary tool vendors use to compare models. They are also systematically misleading. Here is what benchmark scores actually tell you, and what they hide.
Stay informed on AI governance
A monthly digest of policy changes, new tools, and research findings. No spam, no hype.