Beyond productivity: AI’s structural shift in active management
AI in asset management has moved beyond initial productivity gains. Pablo Riveroll explains the benefits of integrating proprietary data and research with large language models, while Charlotte Wood discusses the company-wide vision for agentic AI.
Authors
Pablo Riveroll, Fund Manager and Global Head of Equities Research:
The active management industry is entering a second phase of AI adoption. The first wave focused largely on efficiency, using large language models (LLMs) to summarise information, accelerate research and support analysts in their day-to-day work. The next phase looks more structural. It centres on integrating proprietary data, internal research and portfolio systems directly with AI models, with the potential to reshape how investment insight is formed and risk is understood.
Last year, we described how our public markets division was beginning to use AI to enhance our investment edge. The tools we discussed – ChatGPT Enterprise, Bloomberg DSX, our proprietary analyst co-pilot, and Context AI – were delivering real efficiency gains: accelerating initial company assessments, democratising knowledge across the team, and freeing analysts to focus on higher-value work.
That article captured AI adoption 1.0: using LLMs primarily as sophisticated research assistants that could process and summarise publicly available information. The results were encouraging, and the tools have since become embedded across our equity and credit desks.
But the landscape has shifted significantly. We are now entering what we think of as AI adoption 2.0, a phase defined not by faster information retrieval but by deep integration of AI into the core investment workflow. The difference is structural: we are in the process of connecting our proprietary data, our analysts’ own research, and our portfolio management systems directly to powerful LLMs. The result is multiple AI agents that don’t just answer questions but actively help us track our investment theses, understand our risk exposures, and make better decisions.
Multiple agents connecting our data and IP directly to powerful language models
The most significant development over the past year has been the maturation of agentic AI – task-specific agents working together – and a technology called Model Context Protocol, or MCP. Think of MCP as a universal connector – the equivalent of a USB port, but for AI applications. It provides a standardised way to plug our internal data sources directly into large language models, transforming what these models can do for us.
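The "universal connector" idea can be made concrete with a toy sketch. The code below is illustrative only – it is not the real MCP SDK, and every name in it is invented – but it shows the pattern MCP standardises: each internal data source is exposed as a named tool behind one uniform interface, so any model client can discover and call any source the same way.

```python
# Toy sketch of the connector pattern behind MCP (hypothetical names,
# not the actual MCP SDK): many internal sources, one standard surface.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[[dict], dict]


class ToolServer:
    """One uniform interface over many internal data sources."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> list[str]:
        # A model client first discovers what capabilities exist...
        return sorted(self._tools)

    def call(self, name: str, args: dict) -> dict:
        # ...then invokes every source through the same call signature.
        return self._tools[name].handler(args)


# Two illustrative internal sources (stand-ins for real systems).
server = ToolServer()
server.register(Tool("research_notes", "Search internal research notes",
                     lambda a: {"hits": [f"note on {a['query']}"]}))
server.register(Tool("positions", "Current portfolio position data",
                     lambda a: {"weight": 0.02}))

print(server.list_tools())           # discovery step
print(server.call("positions", {}))  # uniform invocation step
```

The point of the sketch is the decoupling: once a source is registered behind the standard interface, any model or agent that speaks the protocol can use it, without bespoke integration work per source.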
In the first phase of AI adoption, our tools were single agents, largely working with publicly available information. An analyst could ask ChatGPT to summarise a company’s latest earnings call or assess an industry’s competitive dynamics, and the output was useful but generic. Any asset manager with the same tools could produce essentially the same analysis.
Integrating our data and our IP into the models changes this equation fundamentally. By connecting our internal databases, financial models, research notes, and proprietary datasets to LLMs, we can create an AI environment that is uniquely ours. When an analyst queries our system, the model will draw not only on public information but on Schroders’ own accumulated research, our internal macroeconomic forecasts, our analysts’ proprietary models, and our portfolio positioning data. Different agents will focus on different tasks: data extraction, charting, natural language processing, thesis monitoring. The quality and specificity of the output will be in a different league.
For example, an analyst covering Latin American equities could ask a question such as “what is our commodities team’s view on copper prices, and how does that compare with the assumptions for copper demand embedded in our projections of EV adoption and grid infrastructure buildout in China?” The system, once connected with our internal IP, third party research and market data, could then provide an integrated answer that would previously have required hours of manual cross-referencing.
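A minimal sketch of that cross-referencing step, with entirely invented agent names and canned answers: one analyst question fans out to several specialist agents, and their domain answers come back as a single structured response for a synthesis step (in practice an LLM) to reconcile.

```python
# Hypothetical sketch of fanning one question out to specialist agents.
# Agent names and answers are illustrative stand-ins, not real systems.
from typing import Callable

Agent = Callable[[str], str]


def commodities_agent(question: str) -> str:
    # Stand-in for the commodities team's published view.
    return "copper view: constructive on tight mine supply"


def china_demand_agent(question: str) -> str:
    # Stand-in for EV/grid demand assumptions in internal projections.
    return "EV adoption and grid buildout imply rising copper demand"


def cross_reference(question: str, agents: dict[str, Agent]) -> dict[str, str]:
    # Each specialist answers from its own domain; a downstream step
    # (human or LLM) synthesises the combined, attributed view.
    return {name: agent(question) for name, agent in agents.items()}


answer = cross_reference(
    "How does our copper view compare with EV-driven demand assumptions?",
    {"commodities": commodities_agent, "china_demand": china_demand_agent},
)
for source, view in answer.items():
    print(f"{source}: {view}")
```

The design choice worth noting is attribution: each fragment of the answer stays labelled with its source, so the hours of manual cross-referencing collapse into one query without losing sight of whose view is whose.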
Integrating our own research into AI
Perhaps the most powerful shift is feeding our own research – the accumulated intellectual output of hundreds of analysts and portfolio managers – back into AI systems. This means we are not just consumers of AI-generated content; we are building AI tools that learn from and build upon our collective knowledge.
Analysts produce a vast body of work: company notes, sector reviews, thematic research, model commentaries, and investment recommendations. Much of this institutional knowledge sits in shared drives, inboxes, and Teams chats, accessible only to those who know where to look or who happen to attend the right meeting. The future is one where this research is structured, indexed, and made available to our AI tools as context.
The practical implications are significant. A portfolio manager preparing for a meeting on European industrials will soon be able to ask the system to summarise our firm’s different views across the sector, surfacing not just the latest published notes but the pattern of how our analysts’ convictions have evolved over time. A new analyst joining the team would be able to rapidly access the accumulated institutional knowledge on their coverage universe, understanding not just what we think today but the reasoning and evidence trail behind those views.
Critically, this will allow us to use AI to help analysts track their own investment theses more systematically. When an analyst publishes a thesis – for instance, that a particular company will see margin expansion driven by lower competition and hence more pricing power – the system will be able to monitor incoming data (earnings releases, management commentary, industry reports) and flag if new evidence supports or challenges that thesis. This will move us from a world where thesis monitoring was sporadic and manual to one where it is continuous and systematic.
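The monitoring loop described above can be sketched in a few lines. This is a deliberately simplified illustration – a production system would use an LLM to classify evidence, whereas keyword matching here keeps the sketch self-contained – and the thesis, signals, and news items are all invented.

```python
# Minimal illustration of continuous thesis monitoring: a thesis is
# stored with explicit supporting/challenging signals, and each
# incoming item (earnings line, management comment, industry report)
# is scored against them and logged.
from dataclasses import dataclass, field


@dataclass
class Thesis:
    claim: str
    supports: list[str]      # phrases consistent with the thesis
    challenges: list[str]    # phrases that undermine it
    log: list[tuple[str, str]] = field(default_factory=list)

    def ingest(self, item: str) -> str:
        text = item.lower()
        if any(phrase in text for phrase in self.challenges):
            verdict = "CHALLENGED"
        elif any(phrase in text for phrase in self.supports):
            verdict = "SUPPORTED"
        else:
            verdict = "NEUTRAL"
        self.log.append((item, verdict))  # evidence trail for review
        return verdict


thesis = Thesis(
    claim="Margin expansion from lower competition and pricing power",
    supports=["price increase", "margin up", "competitor exit"],
    challenges=["price cut", "new entrant", "margin pressure"],
)
print(thesis.ingest("Q3 call: management confirmed a 5% price increase"))
print(thesis.ingest("Industry report flags a new entrant in the segment"))
```

Because every item and verdict lands in the log, the thesis accumulates exactly the "reasoning and evidence trail" the article describes, rather than relying on an analyst remembering to revisit it.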
Thesis tracking and holistic risk understanding
One of the most important developments in this second phase of AI adoption is how these tools can help us understand our risk exposures in a more holistic and timely way.
Traditionally, risk management in active equity investing has been dominated by quantitative factor models. These are valuable, but they describe risk in terms of statistical exposures – how much of a portfolio’s return variation can be attributed to factors such as value, momentum, or size. What they do not capture well is the fundamental, thesis-level risk that active managers actually take: the specific investment views and judgments that drive portfolio positioning, and different themes that are interlinked even if a factor model does not pick that up.
This kind of thesis-level transparency, augmented by AI’s ability to continuously process information, represents a genuine step change in how we understand and manage risk. In an AI-enabled world, we can have different agents tracking the various drivers we expect for a company, and the likelihood that identified risks materialise. It allows portfolio managers to make more informed decisions about which views to lean into, which to hedge, and which to revisit.
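What "thesis-level risk" means mechanically can be shown with a toy roll-up, assuming positions are tagged with the theses that justify them (all holdings and weights below are invented). Exposure is then aggregated per thesis rather than per statistical factor, revealing when nominally different holdings share one underlying view.

```python
# Hedged sketch of thesis-level risk aggregation: positions carry
# thesis tags, and exposure is rolled up per thesis. Illustrative
# holdings only; a factor model would not surface this grouping.
positions = [
    {"name": "CompanyA", "weight": 0.03, "theses": ["copper_upcycle"]},
    {"name": "CompanyB", "weight": 0.02, "theses": ["copper_upcycle",
                                                    "ev_adoption"]},
    {"name": "CompanyC", "weight": 0.04, "theses": ["ev_adoption"]},
]


def exposure_by_thesis(positions: list[dict]) -> dict[str, float]:
    # Sum portfolio weight behind each investment view.
    out: dict[str, float] = {}
    for position in positions:
        for thesis in position["theses"]:
            out[thesis] = out.get(thesis, 0.0) + position["weight"]
    return out


print(exposure_by_thesis(positions))
```

Here 5% of the portfolio rests on the copper view and 6% on the EV view, even though no single holding exceeds 4% – exactly the interlinked exposure a factor model can miss.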
What are the benefits of AI adoption 2.0?
Building on the efficiency gains of the first phase, this deeper integration of AI into our investment process delivers benefits across several dimensions:
- From generic to proprietary insight. By connecting our own data and research to LLMs, the output is no longer something any competitor could replicate. The analysis reflects our firm’s accumulated knowledge, our specific models, and our proprietary views. This is a genuine source of differentiation.
- From single tasks to agentic AI. AI-assisted thesis tracking means our investment theses are continuously tested against incoming evidence, rather than reviewed only at scheduled intervals. This allows faster recognition of when a thesis is playing out, stalling, or being invalidated.
- From factor-level to thesis-level risk understanding. Linking positions to specific theses and monitoring those theses in real time gives portfolio managers a much richer understanding of what they are actually exposed to, complementing traditional quantitative risk tools.
- From individual to collective intelligence. Making our analysts’ research accessible to AI systems means that the firm’s collective knowledge can be queried, cross-referenced, and synthesised in ways that were not previously possible. An insight from a Japan analyst can inform a European analyst’s thesis in real time, without relying on chance corridor conversations.
- From unstructured to measured forecasting. As we move toward explicit, tracked forecasts, we create the infrastructure for a genuine learning system. Over time, this helps every analyst and portfolio manager improve their judgement, which we believe will translate to better investment outcomes.
Looking ahead
In our first article, we described 2025 as the year when AI tools would become truly embedded and used as a matter of routine. That is now a reality. LLMs and our other tools are now part of the daily workflow for hundreds of analysts and portfolio managers across our equity and credit desks.
2026 is shaping up to be the year when AI moves from being a productivity tool to a source of investment insight. The integration of our proprietary data and research into models, the development of thesis-tracking capabilities, and the deeper connection between AI tools and portfolio management workflows represent a qualitative shift in what AI means for our investment process.
We continue to believe that AI’s role is to augment, not replace, human judgment. But the nature of that augmentation is deepening rapidly. In the first phase, AI helped us process information faster. In this second phase, it is helping us organise our thinking, track our convictions, understand our exposures, and learn from our outcomes. The firms that harness this well – not just deploying the tools, but integrating them thoughtfully into how they invest – will, we believe, deliver better outcomes for their clients over time.
Charlotte Wood, Head of AI and Innovation:
AI is now firmly part of how Schroders works day to day. Over the past year we have seen adoption accelerate sharply, with our proprietary tools at the centre, and many use cases deployed and translating into tangible value. Importantly, we view AI as a visible differentiator for Schroders as part of our clear commitment to innovation and evolving how we deliver client outcomes.
As described above, the next shift is already underway, moving from assistive AI to a more agentic model. In practical terms, this means progressing from human-prompted tasks (e.g. “analyse these sources”, “draft content”) to AI systems that act proactively, first with human approval for each action, and over time operating autonomously within human-defined policies and controls.
Our vision is for AI agents to become an integral part of Schroders’ workforce, working alongside and on behalf of employees to take on increasingly complex tasks so our people can focus on judgement and the highest-value activities for clients. Ever more tasks may be largely done in the background, only requiring human attention for exceptions, escalations or to add specific value.
To achieve this responsibly requires strong foundations, which my team are putting in place now. These fundamentals include establishing a platform to build and run agents, giving agents the ability to access data sources, connecting agents to business systems so they can take actions, and crucially implementing controls and monitoring so agent behaviour is safe and observable.
We see this as the pathway from AI as a productivity tool to AI as a scalable capability – embedded, controlled and genuinely transformative for how we operate.