Open-source skill that turns Regulation (EU) 2024/1689 into a tool you can actually use. Classify systems, check obligations, draft documentation. With article-level citations. Works with Claude Code, OpenAI Codex, and any AGENTS.md-aware coding agent.
MIT licence · no telemetry · no SaaS · runs locally in your coding agent
Most AI Act content is written by lawyers, for lawyers.
If you build AI systems, you've probably tried reading the regulation and given up around Article 6. This skill is the missing layer between the regulation and your codebase: structured tasks for classification, obligations checks, FRIA, GPAI, and transparency. Same articles, mapped onto the questions engineers actually ask.
Six tasks. The ones every team running AI in the EU has to answer.
In scope under Article 2? Prohibited (Art. 5)? High-risk (Art. 6 + Annex III)? Limited-risk (Art. 50)? GPAI? GPAI with systemic risk? Provider, deployer, importer, distributor?
Walk through the eight prohibited practices: subliminal manipulation, exploitation of vulnerabilities, social scoring, predictive policing, untargeted scraping of facial images, emotion recognition in the workplace and education, sensitive biometric categorisation, real-time remote biometric ID.
Articles 8–22 and the steps that follow: risk management system, data governance, technical documentation (Annex IV), record-keeping, transparency, human oversight, accuracy/robustness/cybersecurity, QMS, then conformity assessment (Art. 43), CE marking, and registration.
Article 26 deployer duties and Article 27 Fundamental Rights Impact Assessment for public bodies, providers of public services, and Annex III 5(b)/(c) deployers. Dovetails with the GDPR DPIA.
Articles 51–55: GPAI vs GPAI with systemic risk (10²⁵ FLOPs threshold), Annex XI/XII documentation, copyright policy, training-data summary, adversarial testing, serious incident reporting. Aligns with the GPAI Code of Practice.
Chatbot disclosure, machine-readable marking of synthetic content, emotion recognition / biometric categorisation notices, deep-fake disclosure, AI-generated public-interest text. With editorial-responsibility and law-enforcement carve-outs.
The repo IS the skill.
No SaaS sign-up. No API keys. Clone the repository, point your coding agent at it, done. Pick the path that matches your tool.
The bundled ./install.sh script supports --target claude, --target codex, or --target both, using symlinks so git pull upgrades the installed skill in place.
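The symlink approach is what makes `git pull` upgrades work: the agent's skills directory points at your clone, so updating the clone updates the installed skill. A minimal sketch of that mechanism, using temporary stand-in directories rather than your real `~/.claude/skills` path:

```shell
#!/bin/sh
# Sketch of the symlink-based install pattern. All paths here are
# illustrative stand-ins, not the script's actual locations.
set -e

repo="$(mktemp -d)/eu-ai-act-skill"    # stand-in for your clone
mkdir -p "$repo"
echo "skill v1" > "$repo/SKILL.md"

skills_dir="$(mktemp -d)/skills"       # stand-in for the agent's skills dir
mkdir -p "$skills_dir"
ln -s "$repo" "$skills_dir/eu-ai-act-skill"

# A later update in the clone (here simulated by rewriting the file,
# in practice by `git pull`) is visible through the symlink immediately:
echo "skill v2" > "$repo/SKILL.md"
cat "$skills_dir/eu-ai-act-skill/SKILL.md"
```

The last command shows the updated content without any reinstall step, which is the whole point of `--target` installs being symlinks rather than copies.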
I'm Davide Morelli. I build AI things, and now I help others build AI things that comply with EU law.
18 years in AI in production. I co-founded BioBeats and was its CTO until our acquisition by Huma in 2021. I've shipped clinical AI systems, safety-critical inference, and consumer products at scale.
When I read the AI Act for the first time, I kept thinking the same thing: the structure is there, but it's buried under legal language that engineers won't navigate. So I open-sourced this skill — for the engineering teams I work with, and for the ones I haven't met yet.
The skill is free. The hard cases are where I come in.
More on what I do, write, and have built: www.davidemorelli.it
When the skill isn't enough.
Engineering-led AI Act work for organisations that need more than a structured questionnaire. I work mostly with EU companies of 50–500 people, usually with their technical leadership.
One call. You bring the system, I run the skill on it live and we talk through the output. You leave with classification confirmed and a written list of next steps. Book →

You send me your draft Annex IV technical documentation, FRIA, DPIA dovetail, or Article 50 implementation. I send you redlines and structured recommendations within five working days. Get a quote →

End-to-end classification of your AI portfolio (3–15 systems), gap analysis, and drafting of initial Annex IV technical documentation. 4–8 weeks elapsed, run with your engineering team. Discuss scope →

Half-day or full-day, on-site or remote, customised to your AI portfolio. Your team leaves able to use the skill autonomously and with an internal playbook. Book a session →

4–16 hours per month of ongoing advisory: review new systems before deploy, monitor AI Office guideline updates, support pre-audit prep. 3–6 months minimum. For companies that don't need a full-time AI Act person but need someone reachable. Open a conversation →

Engagements include professional indemnity insurance. Outputs are advisory and engineering-grade; final legal sign-off remains with qualified counsel.
Quick answers.
No. The skill is a structured support tool. It produces drafts and gap analyses, with article citations, that materially accelerate compliance work. Final legal sign-off should be done by qualified EU counsel, especially for high-risk systems and prohibition borderlines.
Three install paths cover the common cases. Claude Code and OpenAI Codex load the SKILL.md format natively — clone into ~/.claude/skills/ or ~/.agents/skills/ respectively. Cursor, Windsurf, GitHub Copilot, Devin, Amp and other AGENTS.md-aware agents pick up the bundled AGENTS.md; the adapters/ directory ships ready-made rules for Cursor MDC and Copilot. Content is plain markdown — portable to any other agentic framework. PRs for additional adapters welcome.
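For the native paths, the manual steps are a single clone per agent. A sketch, with the repository URL and skill directory name as placeholders (substitute the actual ones from the repo's README):

```shell
# Placeholder URL and directory name — not confirmed by this page.
# Claude Code:
git clone <repo-url> ~/.claude/skills/eu-ai-act-skill

# OpenAI Codex:
git clone <repo-url> ~/.agents/skills/eu-ai-act-skill
```

AGENTS.md-aware agents need no install step beyond cloning somewhere in (or alongside) your project, since they discover the bundled AGENTS.md on their own.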
Those are enterprise GRC platforms with policy packs and audit workflows. This is a developer-facing skill that runs in your editor, free, no signup. Different audience, different stage. If you need full enterprise governance with audit logs and SSO, those tools are the right answer. If your engineering team needs to start understanding the regulation today, this is.
The skill ships versioned. Commission guidelines, AI Office templates, and harmonised standards from CEN-CENELEC JTC 21 will continue to land through 2026 and 2027. I update the skill as those drop and tag new versions; subscribe to the GitHub releases feed.
Yes. MIT licence. Use it freely with clients, fork it, adapt it. Attribution appreciated, not required. If you build something on top of it, I'd love to hear about it.
The skill is content-complete (six tasks, source-grounded references, install paths for Claude Code, Codex, and AGENTS.md-aware agents) but external validation is in progress. If you are a compliance officer, EU AI law expert, or engineering lead who has worked on the regulation, your review is welcome. Open an issue on GitHub, or email hello@davidemorelli.it. Acknowledged contributors are credited in the README and the v0.1.0 stable release notes.
Free, MIT, no telemetry. If it misses something, open an issue. If you'd rather have someone walk it through with you, book an audit.