
If you work in or around Artificial Intelligence (AI) right now, you’ve probably noticed that the EU AI Act is dominating headlines again. The reason? A key milestone passed on August 2, 2025, and it’s the first real test of how this regulation will play out in practice.  

General-purpose AI (GPAI) models, the large and versatile models powering most of today’s AI breakthroughs (such as ChatGPT, Claude, and Gemini), are now officially subject to the Act’s obligations. This is no longer theoretical. For providers, service partners, and enterprise buyers alike, the rules of the game have changed. 


What changed on August 2? 

The update focused squarely on general-purpose AI models. Providers that want to place new GPAI models on the EU market must now meet baseline requirements: 

  • Publish detailed documentation about the model 
  • Adopt and maintain a copyright policy aligned with EU law 
  • Provide a public summary of training data sources 
  • Perform adversarial testing, carry out risk evaluations, establish incident-reporting processes, and apply strong cybersecurity practices (required only for models classified as posing systemic risk) 

While enforcement doesn’t start until August 2026, this milestone is critical because it sets expectations. Models already available before this date get a longer runway (until August 2027) to catch up. Importantly, once enforcement begins, violations of the Act can attract penalties of up to €35 million or 7% of global turnover (fines specific to GPAI providers are capped at €15 million or 3%), making these obligations far more than just paperwork. 

Why this is more than just compliance 

On the surface, this might sound like another regulatory box-checking exercise. In reality, it signals a much deeper shift. 

  • For tech providers: Model documentation, safety testing, and copyright policies will become far more essential to the product lifecycle. Providers that get ahead by embedding compliance into engineering and release processes (a minimal sketch of what such a release gate could look like follows this list) will not only avoid penalties but also set the tone for industry best practices. This is an opportunity to differentiate on trust, security, and transparency. As AI regulation proliferates, we may even start to see geo-specific versions of the same models emerge, each tuned to local compliance regimes and regulatory expectations. 
  • For service providers and System Integrators (SIs): The Act opens a new services segment. Beyond advisory, there will be a need for execution and ongoing support in automating documentation, building repeatable risk and incident frameworks, and integrating these with clients’ broader governance models. Service providers who can package “AI Act readiness” into structured offerings, such as gap assessments, toolkits, and managed services, will become indispensable partners as enterprises look for scalable compliance solutions. Over time, we may even see the rise of AI audit-as-a-service, where independent providers continuously assess models, documentation, and risk frameworks, much like financial audits today. 
  • For enterprises using AI: Buyers cannot afford to treat compliance as the vendor’s problem alone. Procurement functions need to get sharper and more informed, embedding compliance criteria directly into Requests for Proposals (RFPs), vendor scorecards, and renewal cycles. This means demanding concrete evidence such as technical credentials, safety and robustness evaluations, copyright policies, and incident response commitments. Just as importantly, enterprises remain responsible for how AI systems operate once deployed in their workflows. This will push Chief Information Officers (CIOs), risk leaders, and business owners to work more closely together, aligning technology strategy with legal and ethical obligations. 
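
To make the “embedding compliance into engineering and release processes” point above concrete, here is a minimal sketch in Python of what a pre-release compliance gate could look like. Every name in it (REQUIRED_ARTIFACTS, ReleaseCandidate, compliance_gate) is an illustrative assumption rather than a real tool or API; the idea is simply that the Act’s baseline artifacts become hard requirements in the release pipeline instead of after-the-fact paperwork.

```python
# Hypothetical pre-release compliance gate. All names are illustrative
# assumptions, not a real framework: the point is that the EU AI Act's
# baseline artifacts become blocking requirements before a model ships.
from dataclasses import dataclass, field

REQUIRED_ARTIFACTS = {
    "model_documentation",    # detailed technical documentation
    "copyright_policy",       # policy aligned with EU copyright law
    "training_data_summary",  # public summary of training data sources
}

@dataclass
class ReleaseCandidate:
    model_name: str
    version: str
    artifacts: set = field(default_factory=set)

def compliance_gate(candidate: ReleaseCandidate) -> None:
    """Block the release if any baseline artifact is missing."""
    missing = REQUIRED_ARTIFACTS - candidate.artifacts
    if missing:
        raise RuntimeError(
            f"{candidate.model_name} v{candidate.version} blocked: "
            f"missing {sorted(missing)}"
        )

# Example: this candidate is blocked until a training-data summary exists.
rc = ReleaseCandidate("example-gpai", "1.2.0",
                      {"model_documentation", "copyright_policy"})
try:
    compliance_gate(rc)
except RuntimeError as err:
    print(err)  # example-gpai v1.2.0 blocked: missing ['training_data_summary']
```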

How this redefines the ecosystem 

The August milestone can be best understood as a design constraint rather than a roadblock. It will influence product design, model choice, and deployment strategy in multiple ways: 

  1. Documentation as a foundation for trust: Think of model cards, risk logs, and training-data summaries as living, versioned assets that will need to evolve alongside releases. Over time, this documentation will serve as the institutional memory of AI systems and will be critical for audits, regulatory reviews, and even internal decision-making on when to retire, retrain, or scale a model (a minimal sketch of such a versioned record follows this list). 
  2. Stricter pre-launch gates: Testing for bias, robustness, and safety will become standard before any rollout, especially for agentic and high-impact use cases. While this may extend time-to-market slightly, it reduces the chance of costly incidents or rollbacks. In the long run, this disciplined approach will likely be seen as good engineering practice, not just compliance. 
  3. Portfolio shifts and strategic choices: Some enterprises may prefer smaller or specialized models where compliance is easier to manage, while keeping high-capability models for carefully governed scenarios. This could also influence vendor lock-in strategies, with enterprises spreading risk across multiple providers to balance performance with regulatory exposure. 
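
As a companion to point 1 above, here is a minimal sketch of documentation as a living, versioned asset, again with purely illustrative names (ModelCardEntry and its fields are assumptions, not a standard schema): one model-card entry is appended per release, so audits, regulatory reviews, and retire/retrain decisions can replay the model’s history.

```python
# Hypothetical "living documentation" record: one model-card entry is
# appended per release, so the documentation evolves alongside the model.
# Field names are illustrative assumptions, not a standard schema.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ModelCardEntry:
    version: str
    released: date
    training_data_summary: str  # pointer to the public summary for this release
    known_risks: tuple          # risk-log snapshot at release time

history = [
    ModelCardEntry("1.0.0", date(2025, 8, 2), "summary-v1.md",
                   ("prompt injection",)),
    ModelCardEntry("1.1.0", date(2025, 11, 1), "summary-v2.md",
                   ("prompt injection", "data leakage via tool use")),
]

# An auditor, or an internal retire/retrain review, can replay the history:
for entry in history:
    print(entry.version, entry.released.isoformat(), entry.known_risks)
```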

Final takeaway 

The EU AI Act has always been about shaping trustworthy AI. But August 2, 2025, marked the point where trust moved from aspiration to operational reality. Providers must prove safety and transparency, service partners must help operationalize compliance, and enterprises must become sharper buyers and risk managers. 

It’s not a cliff edge yet, as real enforcement begins next year. But the smartest players are already treating compliance as a competitive differentiator.  

Will compliance-driven governance slow down innovation, or will it finally separate serious AI players from opportunistic ones?  

How should enterprises outside the EU prepare for cross-border compliance? 

If you found this blog interesting, check out our blog, Decoding The EU AI Act: What It Means For Financial Services Firms, which delves deeper into the EU AI Act. 

To discuss these questions and more in detail, reach out to Abhishek Sengupta ([email protected]) or Ishi Thakur ([email protected]).  
