LLMs in your stack: what EU rules mean for you!

Publication date: 2025-09-11
TIPS

TL;DR


The EU AI Act is in force and phasing in: bans on unacceptable AI since 2 Feb 2025; GPAI (general-purpose AI) provider duties from 2 Aug 2025; broad transparency and most remaining obligations by 2 Aug 2026; some high-risk cases to 2 Aug 2027. This article gives you a practical, no-nonsense checklist to get compliant without stalling innovation. And because evidence matters, BaseFortify.eu is an ideal place to collect component details (runtimes, SDKs, drivers, images) so you can track risk and match CVEs later.

 

Why this matters now

 

  • The Act entered into force on 1 Aug 2024 and applies in stages. Your security and compliance posture should be reviewed ahead of each activation date.
  • Dutch guidance highlights two milestones: GPAI obligations from Aug 2025 and transparency duties from Aug 2026 (e.g., machine-readable labels for AI-generated or manipulated content).

 

Quick timeline (bookmark this)

 

  • 2 Feb 2025 — Bans on unacceptable AI practices apply.
  • 2 Aug 2025 — GPAI obligations apply (provider documentation; information for downstream users; extra measures for systemic-risk models).
  • 2 Aug 2026 — Most remaining rules apply, including broad transparency obligations.
  • 2 Aug 2027 — Extended deadline for some high-risk systems embedded in regulated products.

 

First, know your role

 

  • Deployer (most businesses): You integrate or use AI systems (copilots, chatbots, RAG apps). Focus on transparency, human oversight, data protection, and security measures.
  • GPAI model provider: You provide a general-purpose model (or self-host in a way that places provider obligations on you). Expect technical documentation, up-to-date information for downstream users, and—if your model may pose systemic risk—enhanced evaluation and incident duties.

 

What to do this quarter (practical checklist)

 

  1. Inventory AI use — Where do LLMs appear (SaaS, plug-ins, self-hosted services)? What data can they touch? Which tools can they call? (A minimal inventory sketch follows this list.)
  2. Set transparency defaults — Policy + CMS patterns for clearly marking AI-generated/manipulated output in a machine-readable way; chatbot notices that users are interacting with AI. (See the labeling sketch after this list.)
  3. Write a one-page AI policy — What data may/may not be prompted; human-in-the-loop checks for critical outputs; logging; incident steps.
  4. Harden self-hosted stacks — Pin versions; restrict tool calls; enforce rate/length limits; enable audit logs; document model/runtime configs. (A guardrail sketch follows this list.)
  5. Vendor diligence — Collect model cards, evaluation summaries, data-use terms, change logs, and versioning; keep them with your risk register.
  6. Plan periodic reviews — Because obligations phase in (2025 → 2026 → 2027), schedule security/compliance reviews ahead of each date.
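
To make step 1 concrete, here is a minimal Python sketch of what one inventory entry could look like. The field names and example values are illustrative assumptions, not a prescribed schema; adapt them to whatever asset register you already use.

    # Minimal sketch of an AI-use inventory entry.
    # Field names and values are illustrative, not a prescribed schema.
    from dataclasses import dataclass, field

    @dataclass
    class AIUseRecord:
        name: str                  # e.g. "support-chatbot"
        kind: str                  # "SaaS", "plug-in", or "self-hosted"
        model: str                 # model name/version as reported by the vendor
        data_touched: list[str] = field(default_factory=list)    # data categories it can see
        tools_callable: list[str] = field(default_factory=list)  # external actions it can trigger
        owner: str = ""            # accountable team or person

    inventory = [
        AIUseRecord(
            name="support-chatbot",
            kind="SaaS",
            model="vendor-model-2025-06",
            data_touched=["customer tickets", "order history"],
            tools_callable=["knowledge-base search"],
            owner="customer-support",
        ),
    ]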
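
For step 2, one simple pattern is to ship provenance metadata together with the generated content. The sketch below is an illustration under our own assumptions: the field names are not a mandated machine-readable format, so map them onto whatever labeling standard you adopt, and keep the chatbot disclosure notice in the UI layer.

    # Hypothetical pattern for machine-readable marking of AI-generated output:
    # return the content together with a provenance label. Field names are
    # illustrative, not a prescribed standard.
    import json
    from datetime import datetime, timezone

    def wrap_ai_output(text: str, model_id: str) -> str:
        """Bundle generated text with a machine-readable provenance label."""
        label = {
            "ai_generated": True,
            "model": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        }
        return json.dumps({"content": text, "provenance": label}, ensure_ascii=False)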
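
For step 4, two cheap guardrails in a self-hosted gateway are an explicit tool allow-list and hard size limits. The tool names and limits below are assumptions for illustration, not defaults of any particular product.

    # Guardrail sketch for a self-hosted LLM gateway: tool allow-list plus
    # hard input/output caps. Tool names and limits are illustrative.
    ALLOWED_TOOLS = {"search_docs", "create_ticket"}   # everything else is refused
    MAX_PROMPT_CHARS = 8_000
    MAX_OUTPUT_TOKENS = 1_024

    def check_tool_call(tool_name: str) -> None:
        if tool_name not in ALLOWED_TOOLS:
            raise PermissionError(f"tool '{tool_name}' is not on the allow-list")

    def check_prompt(prompt: str) -> None:
        if len(prompt) > MAX_PROMPT_CHARS:
            raise ValueError("prompt exceeds the configured size limit")

Refusals and limit hits belong in the audit log you enable in the same step.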

 

If you provide or self-host models

 

  • Maintain technical documentation of training/testing and share sufficient, up-to-date information with downstream users (capabilities, limits, known risks); a minimal sketch follows this list.
  • For potential systemic-risk models, prepare adversarial evaluations, risk assessments, and incident procedures.
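
A minimal sketch of such a downstream-facing summary is shown below. The model name, field names, and values are hypothetical, not mandated wording.

    # Illustrative downstream-information summary a provider could keep current.
    # The model name, field names, and values are hypothetical.
    downstream_info = {
        "model": "acme-llm-7b",
        "version": "2025-07-01",
        "capabilities": ["text generation", "summarisation"],
        "known_limitations": ["can fabricate citations", "weak legal reasoning"],
        "known_risks": ["prompt injection via retrieved documents"],
        "intended_use": "internal drafting assistance with human review",
    }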

 

Records you’ll need (keep these handy; example record below)

 

  • Model ID/commit and runtime/server version
  • Container image tag and digest
  • SDK/library versions (e.g., transformers, openai)
  • Configuration notes (context limits, tool allow-lists)
  • Change history for prompts/pipelines (owner, date, reason)
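
Pulled together, one component record might look like the sketch below. All values are illustrative; record the real digests and versions from your own deployment.

    # Illustrative component record for matching CVEs later. Replace every
    # value with what your deployment actually runs.
    component_record = {
        "model": {"id": "llama-3-8b-instruct", "commit": "abc1234"},
        "runtime": {"server": "vllm", "version": "0.5.4"},
        "container": {"tag": "registry.example.com/llm-gateway:1.2.3",
                      "digest": "sha256:<record the real digest>"},
        "sdks": {"transformers": "4.44.0", "openai": "1.40.0"},
        "config": {"context_limit": 8192, "tool_allow_list": ["search_docs"]},
        "changes": [{"date": "2025-09-01", "owner": "platform-team",
                     "reason": "tightened system prompt"}],
    }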

 

Where to store it: BaseFortify.eu is an ideal place to collect component information, so that security issues (CVEs) affecting runtimes, SDKs, drivers, and gateways can be matched later.

 

---------------------------------------------------------------------------------------------------------------------------------------------------------

Next in the series: what actually breaks in LLM stacks (real vulns, concrete mitigations), followed by how to model LLM components so CVEs match correctly in your inventory.