
Rethinking MSSP pricing in the age of agentic AI 

A new paradigm is emerging in cybersecurity: agentic artificial intelligence (AI), in which autonomous, decision-making AI agents investigate threats and act without constant human intervention. Industry leaders are embedding these agents into security tools and services to supercharge defenses.

But as agentic AI transforms how cyber threats are detected and mitigated, it also challenges the way organizations pay for security. Legacy pricing models such as per endpoint, per events-per-second (EPS), or fixed tiers were built for a human-analyst-centric world. This blog explores why traditional pricing models struggle in the agentic AI era, how both vendors and enterprises stand to benefit from innovative approaches, and which bold pricing models are suited to AI-driven security.

Reach out to discuss this topic in depth.  

Why old pricing models fall short 

Outdated models vs. autonomous agents: Most security services today use pricing schemes suited to manual operations. Common approaches include per-device or per-user fees, charges by data volume or alert count, and tiered subscription plans. These have their merits, but each reveals serious shortcomings when autonomous AI agents enter the picture: 

  • Resource unit-based pricing: Charging a fixed fee per device or user is predictable, but each user now has multiple devices or cloud workloads, so per-device pricing becomes “costly and potentially inefficient” at scale. Agentic AI does not map cleanly to endpoint counts: one AI agent can monitor thousands of endpoints, while one rogue device might consume disproportionate resources. This model overcharges large fleets with few incidents and undercharges small setups that generate a high volume of threats
  • Alert or volume-based pricing: In contracts, we have seen service providers charge by the number of alerts processed or gigabytes of logs ingested per day or month, particularly for SIEM. While this aligns cost to workload, cyberattack volumes are wildly unpredictable, making budgeting a nightmare. Worse, if you charge per alert, an AI that filters out false positives lowers the alert count and the provider’s revenue even while improving security. That perverse incentive could discourage using AI to reduce noise. Security telemetry volumes “fluctuate unpredictably,” so this model can lead to nasty budget surprises. Charging per alert in an AI-driven SOC (where success means fewer alerts) is simply backwards
  • Fixed subscription tiers: Many MSSPs offer tiered packages at flat fees. This provides clarity, but it is a blunt instrument. Companies often pay for unused services in a bundle or face steep jumps in cost to access necessary features. In an AI-driven world, you might want just one advanced capability (say, an autonomous phishing agent) temporarily, without jumping to a higher tier. Rigid tiers do not allow that. They also do not account for efficiency gains: if AI cuts incidents by 50%, your subscription still costs the same, even though the service now costs less to deliver. The customer sees no savings, and the vendor gets no reward for being more efficient
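To make the per-alert incentive problem concrete, here is a minimal sketch with purely invented numbers (the fee and alert counts are illustrative assumptions, not drawn from any real contract):

```python
# Illustrative sketch of the per-alert pricing incentive problem.
# All figures are hypothetical.

PRICE_PER_ALERT = 2.50  # hypothetical fee the provider charges per alert processed

def monthly_revenue(alerts_per_month: int) -> float:
    """Provider revenue under a simple per-alert pricing scheme."""
    return alerts_per_month * PRICE_PER_ALERT

alerts_before_ai = 40_000   # noisy SOC, many false positives
alerts_after_ai = 12_000    # AI agent suppresses ~70% of the noise

revenue_before = monthly_revenue(alerts_before_ai)
revenue_after = monthly_revenue(alerts_after_ai)

print(f"Before AI filtering: ${revenue_before:,.0f}")  # $100,000
print(f"After AI filtering:  ${revenue_after:,.0f}")   # $30,000
# Better security (less noise) cuts the provider's revenue by 70% --
# exactly the perverse incentive described above.
```

The arithmetic is trivial, but it shows why a provider paid per alert has no financial reason to deploy noise-reducing AI.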

Agentic AI in action: How vendors are changing the game 

Major vendors are already embedding agentic AI into their offerings. Microsoft now has Security Copilot agents that autonomously manage tasks like phishing analysis; its Defender agent sifts through billions of emails (over thirty billion phishing emails in 2024) to spot real threats without human help.

CrowdStrike’s Charlotte AI acts as an autonomous analyst in its Falcon platform, independently triaging and investigating incidents (it even triages identity-based attacks on its own and can automatically contain compromised devices). Cisco is injecting agentic AI across its Security Cloud; its XDR can automatically verify and investigate threats (Instant Attack Verification) using AI, and Cisco is prototyping a firewall that writes and updates its own rules overnight. These examples show the technology leap is here and AI agents are already being deployed on the front lines of cyber defense. 

New pricing models for the agentic AI era

  • Digital FTE pricing: This model treats AI agents as digital employees, not as abstract algorithms or API calls, but as contractual equivalents of human full-time equivalents (FTEs). In this framework, each AI agent is quoted in contracts with a rate card, just as an analyst or engineer would be. By mapping AI agents to digital FTEs, cost becomes predictable and easily comparable to existing labor structures. This abstraction spares executives from wrestling with the minutiae of tokens, Application Programming Interface (API) calls, or GPU hours. Instead, the question shifts to: what is the equivalent value of this digital worker compared to its human counterpart? Behind the scenes, of course, the economics are still driven by compute: model size, token throughput, infrastructure usage, and oversight overhead. But the pricing abstraction makes it familiar, auditable, and strategically legible. It also forces accountability: if an AI “analyst” is contracted like an employee, its performance and Return on Investment (ROI) must stand scrutiny like one. The risk lies in complacency, in assuming these digital FTEs are infallible. Just as junior analysts require supervision, digital agents demand governance. Treated wisely, however, this model provides a robust foundation: a way to scale security capacity without fragile headcount math, while aligning cost and value in terms business leaders already understand
  • Dynamic risk-based pricing: In this pricing model, instead of quoting a flat fee, the price adjusts based on the client’s real-time risk profile. A stable, hardened environment pays less; an organization under constant attack or with a higher number of vulnerabilities pays more. It is akin to insurance: safe behavior earns discounts; elevated risk demands a premium. For example, an AI-driven SOC could charge a lower rate to a small business with strong security and ramp up the price for a large, complex enterprise with weak defenses. This model rewards customers for reducing risk and ensures the security service provider gets paid more when the job is truly harder. The challenge is having objective, transparent risk metrics, but as cyber risk scoring improves, this approach could become very practical
  • Workflow/execution-based pricing: This model rests on a simple philosophy: you only pay when the work is actually done. Instead of being tied to licenses, seats, or hours, the bill is triggered only when an AI agent executes a workflow end-to-end. Every action is treated as a unit of value, whether it is isolating an endpoint, deploying a firewall rule, or quarantining a phishing campaign. No theoretical coverage, no paying for capacity that sits idle. The appeal is its clarity. Security leaders can map spend directly to tangible, auditable actions. If the AI did the work, it shows up on the invoice; if it did not, it does not. This eliminates the inefficiencies of noisy alerts, inflated subscriptions, or idle analysts on retainer. It forces security investment to line up precisely with delivered defense. The discipline, however, is in the contract and validation. Poorly defined outputs risk incentivizing the wrong behaviors, such as counting trivial or low-value actions as “billable.” The model only works if contracts specify quality-assured actions and exclude false positives. For executives, the value is pragmatic and obvious: you fund execution, not promises. In a field plagued by noise, that distinction is what makes defenses resilient rather than fragile
  • Outcome-based pricing (value or SLA guarantees): Instead of paying for inputs, clients pay for results. The client is charged based on tangible outcomes like threats neutralized, vulnerabilities reduced, or risk scores/cyber risk posture improved. If the provider prevents breaches or cuts the incident count by a defined percentage, it earns a premium; if not, the client pays less (or the provider pays a penalty). The vendor might take a percentage of the estimated losses it prevented or offer money-back guarantees for failures. For instance, some MDR services now include breach warranties (Arctic Wolf’s covers up to $3M), a bold example of putting skin in the game. Another approach is tying fees to strict SLAs: say, you pay full price only if every critical incident is contained within a defined time, with credits for any misses. In all cases, the service provider’s revenue is directly tied to delivering the promised security outcomes. This makes them truly accountable and assures the client they are paying for performance, not just promises
  • Compute-based pricing: This model mimics cloud economics and treats security like cloud computing – pay for what you use, on demand. Instead of fixed subscriptions, the client buys credits or is metered by usage: e.g., per thousand events analyzed, per agent-hour, per automated response action executed, or per endpoint per hour. The appeal is ultimate flexibility: you pay more only when you actually need more (during a spike in usage or attacks), and costs scale down when things are quiet. Clients can monitor usage via dashboards, with AI auto-scaling like a serverless cloud. The downside is unpredictable bills. In practice, a hybrid pricing model will make the most sense: a baseline fee for stability, plus usage-based charges for surges, with safeguards (caps or alerts) to prevent “bill shock.”
  • Blockchain-inspired decentralized model: This is one of the more provocative and radical ideas for pricing security services. It envisions an on-demand marketplace built on a permissioned blockchain like Hyperledger, where clients use utility tokens to rent AI agents for security services. Agents would bid on jobs via smart contracts, with humans earning bounties for escalations. To develop this model, time needs to be spent on topics such as blockchain scalability, liquidity, token volatility, alignment with security frameworks like SOC 2 and NIST CSF, and regulatory compliance requirements
  • Barter model – data for agentic AI security services: Another radical and potentially disruptive model, in which clients trade anonymized threat data, which providers can use to train AI agents for better AI-powered autonomous SOCs, in exchange for discounted security services. It is akin to federated learning in AI, where the data stays local while the insights are shared. Clients can integrate via APIs to upload anonymized datasets using differential privacy tools, and providers use privacy-enhancing technologies (PETs) to process data without full access. This data feeds into a federated learning system where agents are trained on aggregated, non-identifiable insights and then deployed for autonomous security tasks. This creates an intelligence loop, where attacks on one surface harden defenses for all. Of course, trust and regulatory clarity are prerequisites for this model. But the real innovation lies in pricing aligned with contribution: the more high-quality (and compliant) data a client shares, the more value they receive. It is not just about buying protection; it is about earning it by investing in a shared intelligence layer
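Several of the models above can be combined. As a minimal sketch, the following invoice calculation blends a compute-based hybrid (baseline fee plus capped usage charges) with a dynamic risk multiplier. Every rate, threshold, and the risk-score mapping is an invented assumption for illustration only:

```python
# Hypothetical hybrid bill: a flat baseline for stability, metered usage
# for surges (capped to prevent "bill shock"), and a risk-based multiplier.
# All rates and thresholds are invented for illustration.

BASE_FEE = 5_000.00          # monthly baseline (fixed)
RATE_PER_1K_EVENTS = 0.40    # usage charge per 1,000 events over the allowance
USAGE_CAP = 3_000.00         # hard cap on the usage component
INCLUDED_EVENTS = 5_000_000  # events covered by the baseline fee

def monthly_bill(events_analyzed: int, risk_score: float) -> float:
    """Compute a hybrid monthly bill.

    risk_score: 0.0 (hardened environment) .. 1.0 (high risk),
    mapped to a 0.8x-1.5x multiplier on the final bill.
    """
    overage = max(0, events_analyzed - INCLUDED_EVENTS)
    usage_charge = min(USAGE_CAP, overage / 1_000 * RATE_PER_1K_EVENTS)
    risk_multiplier = 0.8 + 0.7 * max(0.0, min(1.0, risk_score))
    return round((BASE_FEE + usage_charge) * risk_multiplier, 2)

# Quiet month, hardened client: baseline only, discounted for low risk
print(monthly_bill(4_000_000, risk_score=0.0))   # 4000.0
# Attack surge, high-risk client: usage hits the cap, risk premium applies
print(monthly_bill(30_000_000, risk_score=1.0))  # 12000.0
```

The cap and the bounded risk multiplier are the safeguards discussed above: the client’s worst-case bill is known in advance, while good security posture still earns a visible discount.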

The advent of autonomous AI in cybersecurity means we must rethink the economics of defense. If AI is doing the heavy lifting, then security budgets must pivot to paying for outcomes, not for effort. 

Providers and clients that embrace outcome-based, risk-aligned pricing will ultimately form stronger partnerships. With both sides having skin in the game (if the vendor fails, they take a hit; if they excel, they share in the reward), security stops being a grudging expense and becomes a true investment in resilience. 

In short, you should get what you pay for. In the agentic AI era, organizations should demand to pay for actual security results, not vague promises or outdated metrics. Vendors who deliver that transparency and accountability will earn trust (and business) in an industry where trust is everything. It is time to evolve not just our technology, but also the pricing models for security services.

If you found this blog interesting, check out Agentic AI: True Autonomy or Task-based Hyperautomation? – Everest Group Research Portal, which delves deeper into agentic AI.

To discuss this further, please contact Varnit Tyagi ([email protected]) and Ricky Sundrani ([email protected]).