Beyond build versus buy: how large language models are reshaping technology decisions in financial services enterprises
Artificial Intelligence (AI) is entering a new phase in financial services. What began as generative assistance is now expanding into agent-driven workflow execution. Large Language Models (LLMs) are increasingly embedded as decision-support layers and workflow orchestrators across banking, capital markets, and wealth management.
Deployments at Goldman Sachs and HSBC illustrate this shift. These initiatives do not simply introduce new tools. They signal a broader reassessment of where AI-driven intelligence sits in the enterprise stack and how it interacts with systems of record.
This evolution reframes the traditional build-versus-buy discussion. The focus is moving from product ownership to architectural control and orchestration.
Reach out to discuss this topic in depth.
Agentic AI is reshaping the workflow layer
Agent-driven AI systems are increasingly positioned as an interface between users and enterprise platforms. They interpret policies, summarize research, route cases, draft communications, and support compliance investigations. In doing so, they operate across multiple data sources and workflows.
Early evidence suggests that impact is most visible in areas such as research synthesis, portfolio mandate interpretation, compliance review support, client reporting, and operational case management.
Core systems remain structurally distinct. Fund accounting engines, core banking platforms, settlement systems, and regulatory reporting stacks continue to operate within structured and rule-based environments. They are embedded in regulatory obligations and operational dependencies that are not easily displaced.
The pressure is therefore concentrated on the intelligence and workflow layer rather than on the system of record itself.
The build-versus-buy question is evolving into an ownership and orchestration question
In earlier technology waves, build meant large internal engineering programs, while buy meant adopting integrated platforms from established providers. LLMs introduce a third reality: enterprises can assemble capability through orchestration.
They can layer a model on top of existing platforms, connect it to enterprise data, enforce entitlements and policy controls, and deploy targeted agents into specific workflows. This capability model makes the ownership conversation more granular.
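As a simplified illustration, the sketch below shows what such an orchestration layer might look like: an entitlement check before any model call, workflow-scoped grounding of the prompt, and an audit record of each interaction. All names here (Request, EntitlementStore, call_llm) are illustrative assumptions rather than a specific vendor API.

```python
# A minimal sketch of an enterprise LLM orchestration layer.
# All names are illustrative, not a specific vendor or institution API.
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    workflow: str   # e.g., "aml_case_summary"
    payload: str    # task-specific input text

class EntitlementStore:
    """Maps users to the workflows they are entitled to invoke."""
    def __init__(self, grants: dict[str, set[str]]):
        self.grants = grants

    def permitted(self, user_id: str, workflow: str) -> bool:
        return workflow in self.grants.get(user_id, set())

def call_llm(prompt: str) -> str:
    # Placeholder for an enterprise-hosted model endpoint.
    return f"[model output for: {prompt[:40]}...]"

def orchestrate(req: Request, entitlements: EntitlementStore,
                audit_log: list[dict]) -> str:
    # 1. Enforce entitlements before any model call.
    if not entitlements.permitted(req.user_id, req.workflow):
        raise PermissionError(f"{req.user_id} lacks access to {req.workflow}")
    # 2. Ground the prompt in workflow-specific context.
    prompt = f"[workflow={req.workflow}] {req.payload}"
    output = call_llm(prompt)
    # 3. Record the interaction for later audit.
    audit_log.append({"user": req.user_id, "workflow": req.workflow,
                      "prompt": prompt, "output": output})
    return output

log: list[dict] = []
store = EntitlementStore({"analyst-7": {"aml_case_summary"}})
print(orchestrate(Request("analyst-7", "aml_case_summary",
                          "Summarize open alerts for case 1142."), store, log))
```

The design point is that the model sits behind, not beside, the institution's existing entitlement and audit controls; the LLM endpoint itself is interchangeable.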
Rather than asking whether to build or buy, financial institutions will increasingly need to determine:
- Where enterprise intelligence should reside within the architecture
- Which workflows should incorporate AI-enabled decisioning support
- How accountability should be structured when AI influences outcomes
- Which components require differentiation and which should remain standardized
This perspective better reflects financial services architecture, where layered coexistence is more common than wholesale replacement.
Key considerations for enterprise architecture and operating models
Recent enterprise deployments suggest that the shift toward agent-driven AI is less about tool adoption and more about architectural positioning. As institutions experiment with model-led workflows, several considerations are beginning to shape internal discussions. We recommend that financial institutions look at the following five dimensions before scaling deployment:
- Process characteristics and control requirements: The first consideration is economic and operational impact. Agent-driven AI appears most compelling in workflows that rely on interpretation, documentation, and exception handling. In highly deterministic processes, its role may remain supportive. This distinction influences where institutions prioritize early deployment
- Data readiness and context: The second pressure point is data readiness and ownership. Models can generate outputs, but enterprise value depends on contextual grounding in mandates, policies, entitlements, and operational data. Institutions are increasingly recognizing that intelligence without structured context does not translate into durable advantage
- Architecture and integration maturity: Third, integration depth is emerging as a major constraint. Agent-driven systems must connect across platforms while preserving controls and monitoring. Institutions with interoperable architectures are better positioned to experiment broadly, while legacy-heavy environments may need to move more cautiously
- Governance and accountability: Fourth, governance is shaping deployment scope. AI systems that influence credit decisions, Anti-Money Laundering (AML) alerts, suitability checks, or accounting interpretations require auditability and traceability (a minimal sketch of this pattern follows this list). Early deployments suggest that governance design, more than model capability, determines how far automation extends. Goldman Sachs’ use of Claude and HSBC’s AI initiatives reflect this pattern. These are not isolated model deployments. They represent structured integration into existing control environments
- Talent and partner ecosystem: Finally, talent and accountability models are coming into focus. Scaling agent-driven systems requires sustained engineering oversight, monitoring, and risk management. Many institutions are evaluating how much of this capability should reside internally and how much should be supported by partners
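To make the auditability requirement concrete, below is a minimal sketch of tamper-evident logging for agent actions using hash chaining, a common traceability pattern. The AuditTrail class and its field names are illustrative assumptions, not a regulatory standard or any institution's actual control design.

```python
# A minimal sketch of a tamper-evident audit trail for agent decisions.
# Hash chaining makes retroactive edits detectable; names are illustrative.
import hashlib
import json
import time

class AuditTrail:
    def __init__(self):
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, actor: str, action: str, model_version: str,
               inputs: str, output: str) -> dict:
        entry = {
            "ts": time.time(), "actor": actor, "action": action,
            "model_version": model_version, "inputs": inputs,
            "output": output, "prev_hash": self._last_hash,
        }
        # Chain each entry to its predecessor so edits are detectable.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("agent-aml-01", "draft_sar", "model-v3",
             "alert batch 88", "draft SAR narrative ...")
print(trail.verify())  # True unless an entry has been altered
```

Because each entry embeds the hash of its predecessor, any retroactive edit invalidates every subsequent hash, which is what gives auditors a verifiable trail.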
Exhibit 1 illustrates how LLMs are being applied across financial services workflows, concentrating value in interpretation-heavy tasks while systems of record stay anchored and governance requirements stay explicit.
Exhibit 1: Where LLMs shift value, and what must be in place
| Domain and process | Where LLMs add value | What must remain anchored | Governance focus |
| --- | --- | --- | --- |
| Wealth onboarding | Document intake, case summaries | Identity and Customer Relationship Management (CRM) systems | Audit trail, privacy |
| Portfolio review | Scenario narratives, rationale drafting | Holdings and mandate engines | Suitability proof |
| Loan origination | Underwriting summaries | Credit policies and bureau data | Fairness, explainability |
| AML investigations | Case summarization, Suspicious Activity Report (SAR) drafting | Alert engines and entitlements | Strong auditability |
This perspective reinforces that LLMs enhance workflows, but value depends on integration, data context, and governance.
Technology providers are embedding generative AI within core platforms
The market is not shifting toward enterprise build alone. Technology providers are also embedding LLM-driven capabilities directly into their platforms, often with an emphasis on workflow integration, control, and managed deployment.
Broadridge’s OpsGPT and SS&C’s AI agents are examples of this shift, integrating generative AI into post-trade and operational environments rather than positioning it as a separate overlay.
For enterprises, this shift reframes evaluation criteria. The question becomes whether provider-embedded intelligence meets strategic needs, or whether enterprise-owned layers are required for reuse and cross-platform control.
What this shift means for the ecosystem
The rise of LLM-enabled architectures does not reduce the relevance of existing ecosystem players. It reshapes their roles and the basis of competition.
Technology platform providers will see generative AI capabilities become baseline expectations rather than points of differentiation. Their advantage will increasingly depend on how seamlessly AI is embedded within core workflows, how open their architectures are to enterprise-led integration, and how effectively they support governance and control requirements.
System integrators and IT services firms will play a central role in designing and operationalizing hybrid architectures. Demand will grow for integration expertise, data harmonization, model deployment, and operating model redesign. Over time, managed services will extend beyond infrastructure to include AI lifecycle monitoring and governance support.
Hyperscalers and model providers will need to demonstrate enterprise-grade security, compliance alignment, and observability. Their success in financial services will depend not only on model performance but also on tooling that supports controlled deployment in regulated environments.
Managed services and operations providers will find new opportunities in supervising AI-enabled workflows. Human-in-the-loop oversight, monitoring, and exception handling will become recurring components of enterprise AI operations.
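As a simplified illustration of that oversight pattern, the sketch below routes low-confidence agent outputs to a human review queue. The confidence score, threshold, and queue structure are illustrative assumptions rather than a production design.

```python
# A minimal sketch of human-in-the-loop gating for agent outputs.
# Threshold and scoring are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentResult:
    case_id: str
    output: str
    confidence: float  # 0.0-1.0, assumed to be supplied by the agent pipeline

REVIEW_THRESHOLD = 0.85
review_queue: list[AgentResult] = []

def route(result: AgentResult) -> str:
    # Low-confidence outputs go to a human reviewer; the rest
    # proceed automatically but would still be audit-logged.
    if result.confidence < REVIEW_THRESHOLD:
        review_queue.append(result)
        return "queued_for_human_review"
    return "auto_processed"

print(route(AgentResult("case-1142", "draft client reply ...", 0.62)))
print(len(review_queue))  # 1
```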
In this environment, value shifts away from standalone functionality and toward orchestration, integration depth, and sustained governance capability.
The market is moving toward selective ownership and hybrid architecture
The realistic outcome is neither full internalization nor exclusive reliance on external platforms. A selective, layered architecture is emerging.
Institutions will:
- Develop intelligence layers internally where differentiation and reuse matter
- Retain established platforms for control-intensive systems of record
- Rely on partners for integration, modernization, and operational management
LLMs expand what institutions can orchestrate, but they do not remove regulatory intensity, ecosystem interdependence, or architectural complexity.
Enterprises that align AI ambition with disciplined operating models, realistic capability assessments, and governance will be better positioned to convert experimentation into sustained enterprise value.
If you enjoyed this blog, check out The Next Frontier of Banking and AWM Experiences: The Power of Customer Experience Orchestration on the Everest Group Research Portal, which delves deeper into a related banking topic.
To discuss technology strategy, operating model transformation, and platform modernization across banking, capital markets, and wealth management, contact Ronak Doshi at [email protected], Kriti Gupta at [email protected], Ketan Kumar at [email protected], or Laqshay Gupta at [email protected].