EU AI Act 2026: What’s Changed and What Austrian Businesses Must Do Now
The EU AI Act is no longer a future concern — it is the law. With prohibited AI practices already enforceable since February 2025 and general-purpose AI rules active since August 2025, Austrian businesses are running out of runway before the high-risk system requirements land in August 2026. This guide breaks down every deadline, every risk tier, and every concrete step your company needs to take to stay compliant and avoid penalties that can reach €35 million or 7% of global turnover.
Table of Contents
- Enforcement Timeline: Three Waves You Cannot Ignore
- Risk Classification: Where Does Your AI System Fall?
- What Austrian Companies Specifically Must Do
- Documentation and Technical Requirements
- Penalties: The Real Financial Exposure
- Practical Compliance Steps You Can Start Today
- How On-Premise AI Deployment Supports Compliance
1. Enforcement Timeline: Three Waves You Cannot Ignore
The EU AI Act did not arrive all at once. The European Commission deliberately staggered enforcement into three phases so organisations would have time to adapt. That staggering, however, has created a false sense of security. Many Austrian businesses treated the February 2025 date as a distant milestone and are now scrambling to catch up: the second wave landed in August 2025, and the third and most impactful wave is just months away.
Understanding these dates is not optional. Each phase carries its own enforcement mechanisms, and the obligations apply to systems that are already deployed, not only to new ones: an existing system that falls outside the permitted boundaries must be brought into line or taken out of service. Here is the complete timeline as it stands in March 2026:
| Date | Phase | Status | What It Covers |
|---|---|---|---|
| Feb 2, 2025 | Wave 1 | ACTIVE | Prohibited AI practices banned outright. AI literacy obligations for all operators. |
| Aug 2, 2025 | Wave 2 | ACTIVE | Rules for general-purpose AI (GPAI) models. Transparency obligations for GPAI providers. |
| Aug 2, 2026 | Wave 3 | 5 MONTHS AWAY | High-risk AI system requirements. Full compliance, conformity assessments, registration in EU database. |
Wave 1 (February 2, 2025 — Already Active): This phase banned the most dangerous categories of AI outright. Social scoring systems, real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), emotion recognition in workplaces and schools, and AI systems that manipulate human behaviour through subliminal techniques are all now illegal. If your company deployed any system that falls into these categories, it must have been decommissioned before this date. This wave also introduced AI literacy requirements: every organisation using AI must ensure that relevant staff understand the basics of how the systems work, their limitations, and their risks.
Wave 2 (August 2, 2025 — Already Active): General-purpose AI models like GPT-4, Claude, Llama, and Mistral now fall under transparency obligations. Providers of these models must publish training data summaries, comply with EU copyright law, and maintain technical documentation. For Austrian businesses that build products on top of these models, this means your upstream provider must be compliant, and you have a responsibility to verify that. Models classified as posing “systemic risk” (generally those trained with more than 10^25 FLOPs) face additional obligations including adversarial testing, incident reporting to the European AI Office, and cybersecurity protections.
Wave 3 (August 2, 2026 — Five Months Away): This is the big one. All high-risk AI systems must meet stringent requirements around risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. Systems must be registered in the EU database before being placed on the market. Conformity assessments must be completed. For Austrian companies deploying AI in HR, healthcare, financial services, education, critical infrastructure, or law enforcement, this wave will require the most substantial changes to your operations.
2. Risk Classification: Where Does Your AI System Fall?
The entire regulatory architecture of the EU AI Act rests on a four-tier risk classification system. Where your AI system sits in this hierarchy determines everything: what documentation you need, what assessments you must conduct, what transparency obligations you carry, and what penalties you face for non-compliance. Getting this classification right is the single most important compliance task for any Austrian business deploying AI.
Unacceptable Risk — Banned Outright
These systems cannot exist in the EU under any circumstances. There is no compliance pathway; they must be dismantled.
- Social scoring by governments or private companies that leads to detrimental treatment
- Real-time remote biometric identification in publicly accessible spaces (with narrow exceptions)
- AI that exploits vulnerabilities of specific groups (age, disability, social situation)
- Emotion recognition in workplaces and educational institutions
- Untargeted scraping of facial images from the internet or CCTV for facial recognition databases
- AI-based subliminal manipulation techniques that cause or are likely to cause harm
- Predictive policing based solely on profiling or personality traits
High Risk — Permitted with Strict Obligations
These systems are legal but subject to the most demanding compliance requirements. Austrian businesses in these sectors should be preparing now.
- HR and recruitment: CV screening, interview scoring, automated candidate ranking
- Credit and insurance: AI-driven credit scoring, risk assessment, pricing models
- Education: Automated grading, admission decisions, learning analytics that affect outcomes
- Healthcare: Diagnostic AI, surgical robotics, patient triage systems
- Critical infrastructure: AI managing energy grids, water systems, transport networks
- Law enforcement: Lie detection, evidence evaluation, recidivism prediction
- Migration and border control: Visa application processing, risk assessment
- Biometric identification: Remote biometric systems (where not banned outright)
Limited Risk — Transparency Obligations
These systems must inform users that they are interacting with AI. The primary obligation is transparency, not extensive documentation.
- Chatbots and conversational AI (users must know they are talking to AI)
- Deepfake generation systems (content must be labelled as AI-generated)
- Emotion recognition systems outside of banned contexts
- AI-generated text published as if it were human-written (must be disclosed)
Minimal Risk — No Specific Obligations
The vast majority of AI systems fall here and can operate freely. The Act explicitly encourages voluntary codes of conduct for these systems.
- Spam filters and email categorisation
- AI-powered search engines and recommendation systems (unless manipulative)
- Inventory management and demand forecasting
- AI-enhanced photo editing and creative tools
- Game AI and entertainment systems
A critical point for Austrian businesses: classification is not always obvious. A chatbot used for general customer queries is limited risk. But if that same chatbot is deployed in a healthcare context to triage patient symptoms and influence treatment decisions, it jumps to high risk. Context determines classification, not the technology itself. We strongly recommend that every Austrian company using AI conduct a formal risk classification exercise for each AI system they deploy, document the reasoning, and review it quarterly.
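The context-drives-classification point can be made concrete in code. Below is a minimal sketch of a classification record for exactly the chatbot example above; the context list, class, and field names are our own simplification, not a rule taken from the Act, and a real exercise would map contexts to Annex III categories in full.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified set of Annex III-style high-risk contexts.
HIGH_RISK_CONTEXTS = {"recruitment", "credit_scoring", "healthcare_triage",
                      "education_grading", "critical_infrastructure"}

@dataclass
class ClassificationRecord:
    system_name: str
    technology: str          # e.g. "LLM chatbot"
    deployment_context: str  # the context, not the technology, drives the tier
    reasoning: str = ""      # document your reasoning, as recommended above
    tier: str = field(init=False, default="")

    def __post_init__(self):
        # Same technology can land in different tiers depending on context.
        if self.deployment_context in HIGH_RISK_CONTEXTS:
            self.tier = "high"
        elif self.technology == "LLM chatbot":
            self.tier = "limited"   # transparency obligations apply
        else:
            self.tier = "minimal"

faq_bot = ClassificationRecord("support-bot", "LLM chatbot", "customer_faq",
                               reasoning="General queries; no legal effects on users")
triage_bot = ClassificationRecord("symptom-bot", "LLM chatbot", "healthcare_triage",
                                  reasoning="Influences treatment decisions")
print(faq_bot.tier, triage_bot.tier)  # limited high
```

The same chatbot technology produces two different tiers, which is why each record carries its reasoning and should be re-reviewed quarterly.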
3. What Austrian Companies Specifically Must Do
Austria’s implementation of the EU AI Act adds a layer of national specificity that businesses must understand. The Austrian Federal Ministry for Digital and Economic Affairs (BMDW) has been designated as the coordinating authority, and the Austrian Data Protection Authority (DSB) retains jurisdiction over AI systems that process personal data, which in practice means most of them.
Here is what Austrian businesses must act on now, broken down by company size and AI usage:
All companies using AI (regardless of size): You must ensure AI literacy among staff who interact with AI systems. This is not a suggestion; it is a legal requirement active since February 2025. This means training programmes, documented competencies, and records that demonstrate your workforce understands the AI tools they use. The Austrian Chamber of Commerce (WKO) has published guidance on minimum AI literacy standards, and we recommend using these as your baseline. Additionally, any AI system that interacts with natural persons must disclose that it is AI. This applies to every chatbot, automated email responder, and AI-driven phone system you operate.
Companies deploying high-risk AI: If you use AI in any of the high-risk categories listed above, you face the full weight of the Act’s requirements. You need a documented risk management system that is continuously maintained. You need data governance practices that ensure training data is relevant, representative, and free from errors. You need to implement human oversight mechanisms that allow a qualified person to intervene, override, or shut down the system. You need to register your high-risk AI system in the EU database before August 2, 2026. And critically for Austrian firms, you must conduct or commission a conformity assessment. For most high-risk categories, this can be done internally, but for remote biometric identification systems, an independent notified body must be involved.
Companies building or modifying AI systems: If you develop AI in-house or significantly modify an existing system for a new purpose, you may be classified as a “provider” under the Act, even if you did not build the underlying model. This is a common trap. An Austrian firm that takes a base Llama model, fine-tunes it on proprietary data, and deploys it for credit scoring has become the provider of a high-risk AI system and bears all provider obligations.
Austrian-specific GDPR interplay: Austria’s DSB has signalled that AI compliance will be assessed in conjunction with GDPR compliance. This means Austrian companies face a dual regulatory framework where the AI Act obligations and GDPR obligations reinforce each other. A data protection impact assessment (DPIA) under GDPR Article 35 is practically mandatory for any high-risk AI system, and the DSB has indicated it will use DPIA deficiencies as a vector for enforcement. Austrian companies should treat their GDPR compliance programme and their AI Act compliance programme as a single, integrated effort.
Works council involvement: Under Austrian labour law (Arbeitsverfassungsgesetz), the works council (Betriebsrat) has co-determination rights over systems that monitor employee performance or behaviour. AI systems used in HR — from productivity tracking to automated scheduling — require works council agreement under section 96a. This is not new, but the EU AI Act adds another compliance layer on top. Austrian companies deploying AI in the workplace must satisfy both the works council and the AI Act’s transparency and human oversight requirements.
4. Documentation and Technical Requirements
Documentation is the backbone of EU AI Act compliance. The Act requires extensive technical documentation for high-risk systems, and the level of detail expected is significantly more demanding than what most Austrian businesses currently maintain. This section outlines every documentation requirement so you can begin assembling your compliance file immediately.
Risk Management System Documentation: You must maintain a living document that describes your risk management process. This includes the identification and analysis of known and foreseeable risks associated with each high-risk AI system, the estimation and evaluation of those risks, the adoption of appropriate risk management measures, and evidence of testing to ensure those measures work. This is not a one-time exercise. The risk management system must be updated throughout the AI system’s lifecycle.
Data Governance Documentation: For high-risk systems that learn from data, you must document your training, validation, and testing data sets. Specifically: the data collection methodology, the data preparation and labelling processes, any assumptions about what the data represents, an assessment of data availability and suitability, an examination for possible biases, and the measures taken to address identified data gaps or shortcomings. If your system processes personal data, this documentation must align with your GDPR records of processing activities.
| Document | Required For | Update Frequency |
|---|---|---|
| Risk management plan | High-risk systems | Continuous / at every significant change |
| Technical documentation (Annex IV) | High-risk systems | Before market placement + updates |
| Data governance records | High-risk systems using training data | With each data update |
| Conformity assessment | High-risk systems | Before deployment + renewal |
| EU declaration of conformity | High-risk systems | Before deployment |
| Human oversight procedures | High-risk systems | Annually or at system change |
| Incident log | High-risk systems | Continuous |
| AI literacy training records | All AI-using companies | Annually |
| Transparency notices | Limited + high-risk systems | At deployment + updates |
| GPAI model documentation | GPAI providers | Before making model available |
Logging and Monitoring: High-risk AI systems must have automatic logging capabilities that record events throughout the system’s operation. These logs must be retained for a period appropriate to the intended purpose (at minimum six months, and longer where required by sector-specific legislation). The logs must allow traceability of the system’s decision-making process. For Austrian companies, this logging requirement intersects with GDPR’s data minimisation principle, creating a tension that must be managed carefully. You need to log enough to demonstrate compliance and traceability, but not so much that you create a disproportionate personal data store.
Annex IV Technical Documentation: This is the comprehensive technical file that describes your AI system in detail. It must include a general description of the system, a detailed description of development elements (design specifications, architecture, algorithms, data requirements), information about monitoring and functioning, a description of the system’s accuracy and cybersecurity measures, and a description of any change made to the system throughout its lifecycle. This is a substantial document and for most Austrian SMEs will require external assistance to produce.
5. Penalties: The Real Financial Exposure
The EU AI Act’s penalty regime is designed to ensure that non-compliance is always more expensive than compliance. The fines are structured in three tiers, and they scale based on whether you are an SME or a large enterprise. For Austrian businesses accustomed to GDPR fines, the AI Act penalties operate on a similar but even more severe scale.
| Violation Type | Max Fine (Large Enterprise) | Max Fine (SME/Startup) |
|---|---|---|
| Deploying a prohibited AI system | €35M or 7% global turnover | Proportionate / capped lower amounts |
| Non-compliance with high-risk obligations | €15M or 3% global turnover | Proportionate / capped lower amounts |
| Providing incorrect information to authorities | €7.5M or 1.5% global turnover | Proportionate / capped lower amounts |
For context, consider a mid-sized Austrian company with €50 million in annual revenue that qualifies as an SME. Under Article 99 of the Act, fines for SMEs are capped at whichever of the fixed amount and the turnover percentage is lower, while for large enterprises the higher of the two applies. Deploying a prohibited AI system would therefore expose this company to a maximum fine of €3.5 million (7% of turnover); failing to comply with high-risk system requirements, €1.5 million. A large enterprise committing the same violations would face caps of €35 million and €15 million respectively. These are maximum penalties, and actual fines will depend on the severity, duration, and intentionality of the violation. However, the DSB has historically issued fines at the higher end of its GDPR discretion, and there is no reason to expect leniency under the AI Act.
Beyond direct fines, non-compliance carries additional costs. Authorities can order the withdrawal of an AI system from the market, which means lost revenue and operational disruption. Reputational damage in Austria’s tight-knit business community can be significant. And the cost of retroactive compliance — rebuilding documentation, re-engineering systems, conducting emergency conformity assessments — is invariably higher than proactive compliance.
The message is clear: even for Austrian SMEs, the cost of compliance is a fraction of the cost of non-compliance. A thorough compliance programme for a typical Austrian SME with two or three AI systems will cost between €15,000 and €50,000. A single fine for non-compliance with high-risk obligations starts in the hundreds of thousands and, even under the SME caps, can reach into the millions.
6. Practical Compliance Steps You Can Start Today
With five months until the high-risk system deadline, Austrian businesses need a concrete action plan. Here are the steps we recommend, ordered by priority:
Step 1 — AI System Inventory (Week 1–2): Catalogue every AI system in use across your organisation. Include third-party AI services, embedded AI in software platforms, and any internal tools that use machine learning. For each system, record its purpose, the data it processes, who is responsible for it, and how decisions are communicated. Many Austrian companies are surprised by the number of AI systems they discover during this exercise — AI is embedded in CRM platforms, marketing tools, HR software, and accounting systems that are not always recognised as “AI.”
Step 2 — Risk Classification (Week 2–3): Using the four-tier risk system described above, classify each AI system. Document the reasoning for each classification. Pay special attention to systems that are near the boundary between limited and high risk. When in doubt, classify upward — it is far better to over-comply than to under-classify and face enforcement.
Step 3 — Gap Analysis (Week 3–5): For each high-risk system, compare your current documentation and processes against the Act’s requirements. Identify where you have gaps in risk management, data governance, logging, human oversight, and technical documentation. Prioritise these gaps by severity and effort required to close them.
Step 4 — AI Literacy Programme (Week 3–6): If you have not already done so, implement an AI literacy training programme. This has been a legal requirement since February 2025, so treat it as overdue rather than optional. The programme should cover what AI is, how the specific systems used in your organisation work at a conceptual level, their limitations, how to interpret their outputs, and how to escalate concerns.
Step 5 — Documentation Build (Week 5–16): Begin producing the required documentation. Start with the risk management plan (it informs everything else), then move to data governance records, human oversight procedures, and Annex IV technical documentation. This is the most time-intensive step and where most Austrian businesses will need external support.
Step 6 — Conformity Assessment (Week 14–18): Conduct the conformity assessment for each high-risk system. For most categories, this can be done internally using the harmonised standards being developed by CEN and CENELEC. Document the results and prepare the EU declaration of conformity.
Step 7 — EU Database Registration (Week 18–20): Register each high-risk AI system in the EU database. This must be completed before August 2, 2026. The registration process requires information about the provider, the system, its intended purpose, its risk classification, and the conformity assessment results. Build in buffer time — the registration system is new and administrative delays are likely.
7. How On-Premise AI Deployment Supports Compliance
One of the most effective compliance strategies we see Austrian businesses adopting is the shift to on-premise AI deployment. Running models like Llama, Mistral, or DeepSeek on your own infrastructure (or on dedicated servers within Austrian or EU data centres) directly addresses several of the Act’s most challenging requirements.
Data sovereignty and GDPR alignment: When your AI system runs on-premise, personal data never leaves your infrastructure. This eliminates an entire category of GDPR and AI Act compliance challenges. You do not need to evaluate the data processing practices of a third-party AI provider. You do not need to negotiate data processing agreements for model inference. You do not need to conduct transfer impact assessments for data flowing to US-based cloud AI providers. Your DPIA is simpler because the data processing chain is shorter and fully under your control.
Logging and auditability: On-premise deployment gives you full control over logging. You can implement exactly the logging regime the Act requires — recording inputs, outputs, confidence scores, and decision paths — without depending on a third-party API’s logging capabilities. You own every log entry. You can retain them for exactly as long as required. You can produce them for auditors on demand without navigating a cloud provider’s data export processes.
Human oversight implementation: On-premise systems can be configured with hardware-level kill switches, network isolation capabilities, and direct human override mechanisms that are difficult or impossible to implement with cloud-based AI APIs. When the Act requires that a human can “intervene on the functioning of the high-risk AI system or interrupt the system,” an on-premise deployment gives you the physical and logical control to guarantee this capability.
Transparency and explainability: With on-premise open-source models, you have access to model weights, architecture details, and training methodology documentation. This makes it significantly easier to produce the technical documentation required by Annex IV. You can describe exactly how the model works, what data it was trained on, and what its known limitations are, because this information is publicly available for open-source models.
Cost predictability: Compliance is an ongoing cost. On-premise deployment converts the variable cost of cloud AI (which can spike unpredictably with usage) into a fixed infrastructure cost. This makes compliance budgeting more predictable and allows Austrian businesses to plan their total cost of compliance with greater confidence. For a typical Austrian SME, the total cost of an on-premise AI deployment (hardware, setup, and first-year operation) ranges from €8,000 to €25,000 — often comparable to or less than the annual cost of equivalent cloud AI services, with vastly better compliance characteristics.
Ready to get started?
We help Austrian businesses navigate EU AI Act compliance with practical, cost-effective strategies — from risk classification to on-premise deployment. Let’s build your compliance roadmap.
Get a Compliance Assessment