JPMorgan Asset Management has for several years been placing greater emphasis on data science and analytics to give its portfolio managers and traders an edge.
The US asset manager has spent around $100m annually over the past three years developing its investment platform, which brings together data science, equity trading and analytics, as well as derivatives and broker relationships.
Kristian West, who joined JPMorgan in 2008, was appointed to lead the investment platform in 2021. He was previously global head of equity trading and equity data science.
As the asset management sector ramps up investment in data analytics, West spoke to Financial News about the approach taken by the $2.7tn asset manager.
How did the investment platform come about?
When I joined the business, the focus was on equity trading. What stood out to me was that if we had access to a lot more trading data, we could make better decisions on behalf of our clients, but also help direct where we should spend our money and where we could have the most positive impact in terms of execution performance.
We went down a path of building a robust data environment. Many other firms have gone through this process where you have a traditional trading desk, and the team is split into different functions — one of them being this analytical capability.
The objective was to have a data environment and people analysing data from a best execution perspective, but also trying to make the process more efficient.
We had a team and a capability that sat across the equity organisation globally and data that anyone could access via our trading systems.
We had developed machine learning tools to make decisions on when and where to trade. At that point, the head of equities thought we could scale it out across the equity business from a research and portfolio construction perspective.
How is data being used across JPMorgan AM?
Everything we do is driven by data. For example, when we engage with clients, we pick up the phone and speak with them. That generates data, such as a voice transcription. Recording conversations allows you to throw up real-time alerts or recommendations to advisers about which products to suggest based on the topic of conversation. You can also understand themes of interest to clients much better.
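The transcription-to-alert flow West describes could be sketched as follows. This is a minimal illustration only: the topic keywords and product names are invented, and the firm's actual transcription and recommendation stack is not public.

```python
# Hedged sketch: map topics detected in a call transcript to product
# suggestions an adviser might surface. All keywords and product names
# below are illustrative assumptions, not JPMorgan's actual mappings.

TOPIC_KEYWORDS = {
    "retirement": {"retirement", "pension", "annuity"},
    "sustainability": {"esg", "climate", "sustainable"},
}

# Hypothetical topic-to-product mapping.
TOPIC_PRODUCTS = {
    "retirement": ["Target-date fund"],
    "sustainability": ["ESG equity fund"],
}

def suggest_products(transcript: str) -> list[str]:
    """Return product suggestions for topics detected in a transcript."""
    words = set(transcript.lower().split())
    suggestions = []
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:  # any keyword for this topic appeared
            suggestions.extend(TOPIC_PRODUCTS[topic])
    return suggestions
```

A real system would run this continuously against a streaming transcript rather than a completed one, which is what makes the alerts "real time".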
Traditional ways of doing business can also generate a lot of data. Meeting a company would usually happen on our site or theirs, with research analysts writing up notes based on the conversations.
We wouldn’t want to influence how analysts write up their notes, but we can use natural language processing to digest and tag them. For example, if someone is looking across a sector or for a particular theme, research notes can get highlighted. They are put into a machine-readable format that can alert other people across the organisation, such as portfolio managers, research analysts and traders.
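The tag-and-alert workflow for research notes could look something like the sketch below. It uses simple keyword matching in place of a trained NLP model, and the themes and subscriber names are invented for illustration.

```python
# Minimal sketch of theme-tagging research notes and routing alerts.
# A production pipeline would use a trained NLP model; the theme lexicons
# here are assumptions made purely for demonstration.
import re

THEME_TERMS = {
    "semiconductors": {"chip", "chips", "semiconductor", "foundry"},
    "supply-chain": {"logistics", "shipping", "inventory", "supply"},
}

def tag_note(note_text: str) -> list[str]:
    """Return sorted themes whose terms appear in the note."""
    tokens = set(re.findall(r"[a-z]+", note_text.lower()))
    return sorted(t for t, terms in THEME_TERMS.items() if tokens & terms)

def route_note(note_text: str, subscriptions: dict[str, set[str]]) -> list[str]:
    """Alert subscribers (e.g. PMs, traders) whose themes match the note."""
    tags = set(tag_note(note_text))
    return sorted(name for name, themes in subscriptions.items() if themes & tags)
```

The point of the machine-readable format is the second step: once a note carries tags, anyone subscribed to a theme is alerted without the analyst changing how they write.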
Another area is private equity, where companies are less well understood versus public companies and where materials available on these companies can be harder to come by.
Online, there are a number of companies that try to curate knowledge and insight on these private companies, and investors are trying to stitch all these sources together. A tool we’ve created scrapes the internet and brings all that information together under one dashboard to give scores and metrics on companies.
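Stitching multiple sources into one dashboard row per company might be sketched like this. The field names, sources and the simple averaged composite score are all assumptions; the firm's actual metrics are not disclosed.

```python
# Sketch: merge records about a private company from several (invented)
# sources into one dashboard entry with a simple composite score.

def aggregate(records: list[dict]) -> dict:
    """Combine per-source records into a single dashboard entry."""
    merged: dict = {"sources": []}
    scores = []
    for rec in records:
        merged["sources"].append(rec["source"])
        if "score" in rec:
            scores.append(rec["score"])
        # Later sources fill in fields that earlier ones were missing.
        for key, value in rec.items():
            if key not in ("source", "score"):
                merged.setdefault(key, value)
    # Composite score: plain average of per-source scores (an assumption).
    merged["composite_score"] = round(sum(scores) / len(scores), 2) if scores else None
    return merged
```

Deduplicating and reconciling conflicting fields across sources is the hard part in practice; the `setdefault` "first source wins" rule above is only a placeholder for that logic.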
We also have a machine learning model that tries to replicate the thinking of a portfolio manager. It will screen thousands of internal notes and external documentation, such as regulatory filings and news data. It takes all that data and effectively tries to give a price forecast. It allows fundamental research analysts or investors to look at what is driving a stock, using non-traditional information.
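The document-screening model described above might, in highly simplified form, score each document's tone, weight it by source type, and turn the aggregate into a directional view. The lexicon, weights and threshold below are illustrative assumptions; the actual model is proprietary.

```python
# Sketch of a document-driven signal: score each document, weight by an
# assumed source reliability, and map the total to a directional forecast.

POSITIVE = {"beat", "growth", "upgrade"}      # toy sentiment lexicon
NEGATIVE = {"miss", "downgrade", "lawsuit"}
SOURCE_WEIGHT = {"filing": 1.0, "internal_note": 0.8, "news": 0.5}  # assumed

def doc_score(text: str) -> int:
    """Net count of positive minus negative words in a document."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def forecast(docs: list[tuple[str, str]]) -> str:
    """docs: (source_type, text) pairs. Returns a directional label."""
    total = sum(SOURCE_WEIGHT.get(src, 0.5) * doc_score(text) for src, text in docs)
    if total > 0.5:
        return "bullish"
    if total < -0.5:
        return "bearish"
    return "neutral"
```

The value to a fundamental analyst is less the label itself than seeing which documents drove it, i.e. which non-traditional information is moving the stock.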
Where does JPMorgan source its data?
We don’t discriminate and get data from many different sources. The starting point is our internal data – across JPMorgan, not just asset management. There’s a ton of information in an organisation of JPMorgan’s scale.
We also use new vendors and onboard non-traditional data sets, such as satellite imagery or footfall data, as well as some of the newer firms around ESG.
We spend a lot of money on this and our external vendor spend is significant. We want to make sure we take advantage of that data and can distribute it across the organisation as widely as possible. It’s about identifying the data sources and making them as accessible as possible.
What plans are there to develop the investment platform?
Within the data space, future plans feature in everything we do. This includes getting access to data more easily and managing it more proactively. Data scientists might say the vast majority of their time goes on wrangling data into a state where they can use it. Making that process as easy as possible is a key focus.
Because we have access to so much data – internally and externally – managing it at a firm of our scale can be complex. But we want to take advantage of that and be in a position where no other firm in the industry could or should have as much data as us.
How difficult is it to hire and retain talent in this area?
In terms of people, we have around 120 currently. They span trading, derivatives, analytics and data science. That team is very closely tied with technology. We have around 1,200 technologists across the organisation, so there’s a lot of resource available to us.
Acquiring and retaining talent is a challenge for everyone in the industry. We definitely look at non-traditional firms.
You hear about firms hiring from Google and Facebook – we’ve hired people from those organisations, but we have also lost people to them. There’s a lot of fluidity in this space.
There’s a certain persona associated with some of these technology firms. Having hired people from them, the distinction between those organisations and ours is a lot smaller than you’d imagine. The application might be different, but the functions and roles are very similar.
To contact the author of this story with feedback or news, email David Ricketts