Systematic hedge fund strategies have been around for years. But thanks to technological advances and an ever-growing pool of data, they're more sophisticated than ever.
Hedge funds started using algorithms long before social media and companies like Netflix brought them into living rooms around the world. And personalising content was the last thing they were interested in - they wanted algorithms to analyse markets more efficiently and keep emotions out of their investment decisions. Which is exactly what they did. Today, systematic hedge funds use math-based rules and statistical models to identify attractive buy and sell opportunities, and to power automated risk models that determine whether positions should be opened, increased, reduced or closed.
The roots of today's systematic trading strategies can be traced back to the 1960s and 1970s, to the time when computers were becoming increasingly mainstream and accessible. It was then that pioneers like Ray Dalio started harnessing this new technology to guide their trading. Dalio founded his company, Bridgewater Associates, in 1975, and was quick to adopt a systematic investment approach.
The company has stuck to this approach over the years, and continues to analyse massive volumes of data to identify market opportunities and make investment decisions. It follows a set of clearly defined rules and uses algorithms to keep human emotions such as fear and greed at bay when making decisions. And to ensure its risks are broadly diversified, Bridgewater simultaneously takes into account different markets, instruments and investment styles. This systematic approach has paid off for Bridgewater - it is now one of the biggest players in the industry.
The 1980s and 1990s saw the emergence of new systematic hedge funds such as Renaissance Technologies and Citadel. Like Bridgewater, the two companies now rank among the world's biggest hedge funds, managing combined assets of over 100 billion US dollars. Renaissance and Citadel have kept their top spots in the league tables by continuously developing their models and focusing on innovation. Today, they use complex algorithms to sift through large amounts of data and are reaping the benefits of the advances in computing power. And like many other financial market players, they started harnessing the power of AI and machine learning a number of years ago.
Machine learning is a broad field that offers exciting possibilities to further optimise the investment process. These include the sub-fields of supervised and unsupervised learning.
Supervised learning, such as regression analysis, was first incorporated into systematic strategies several decades ago as a way to increase the accuracy of market predictions. These models focus on identifying the relationships between input and output data, and can, for example, be used to predict foreign exchange rates based on parameters like interest rate movements, inflation and currency flows.
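To make this concrete, here's a minimal sketch in Python of what such a regression might look like. The macro inputs and the data are synthetic and purely illustrative; a real fund's feature set and modelling pipeline would be far more elaborate.

```python
# A minimal sketch of supervised learning for FX prediction, using
# synthetic data and scikit-learn's LinearRegression. The feature
# names and relationships below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Hypothetical macro inputs: interest rate differential, inflation
# differential and net currency flows (all synthetic).
X = np.column_stack([
    rng.normal(0.0, 1.0, n),   # interest rate differential (%)
    rng.normal(0.0, 0.5, n),   # inflation differential (%)
    rng.normal(0.0, 2.0, n),   # net currency flows (bn USD)
])

# Synthetic target: next-period FX return driven by the inputs plus noise.
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0.0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit on labelled training data, then check accuracy on held-out data.
model = LinearRegression().fit(X_train, y_train)
print("Out-of-sample R^2:", round(model.score(X_test, y_test), 3))
print("Estimated coefficients:", model.coef_.round(3))
```

The key feature of the supervised setup is visible even in this toy example: the model is given both the inputs and the "right answers" during training, and its job is to learn the mapping between the two.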
In the unsupervised learning space, advances in computing and the growing availability of financial data are giving rise to new methods and models, including deep learning approaches. Unlike their supervised counterparts, unsupervised learning models can discover new patterns in unlabelled raw data. So instead of being trained to find specific price patterns or dependencies, they draw on a wide range of data to identify relationships and structure on their own.
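Here's an equally simplified sketch of the unsupervised side: a clustering model that groups trading days into "regimes" by return and volatility, without ever being told which regime is which. The data and the two-regime setup are assumptions made purely for illustration.

```python
# A minimal unsupervised-learning sketch: clustering trading days by
# return and volatility to discover market regimes without labels.
# The synthetic two-regime structure is an assumption for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Synthetic daily features: a calm regime (small moves, low volatility)
# and a stressed regime (large moves, high volatility) - unlabelled.
calm = np.column_stack([rng.normal(0.05, 0.3, 300), rng.normal(0.8, 0.1, 300)])
stress = np.column_stack([rng.normal(-0.4, 1.2, 100), rng.normal(2.5, 0.4, 100)])
X = np.vstack([calm, stress])

# The model sees only raw features; it must uncover the structure itself.
X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)

for label in range(2):
    members = X[kmeans.labels_ == label]
    print(f"Cluster {label}: {len(members)} days, "
          f"mean return {members[:, 0].mean():.2f}, "
          f"mean volatility {members[:, 1].mean():.2f}")
```

Unlike the regression above, no target values are supplied here - the model simply partitions the data into groups with similar behaviour, which is the essence of the unsupervised approach.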
Supervised and unsupervised machine learning are implemented differently, and each has its own use cases depending on the goal and the available data. In a nutshell, supervised learning is ideal for specific, clearly defined prediction tasks. Unsupervised learning, on the other hand, is more flexible and can be used to discover new patterns, giving it the ability to recognise market anomalies that a human investor would be unable to detect.
Despite their distinct differences, these two types of learning also have some important similarities. For example, both rely on high-quality data. Systematic investment strategies are built on data, so sub-par data quality makes it impossible to develop, test or implement effective strategies. Both are also susceptible to overfitting, which happens when a model fits its training data too closely and therefore struggles to make accurate predictions on new data. This can have some very unpleasant and potentially costly repercussions, so it's crucial that systematic strategies guard against overfitting.
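The effect is easy to demonstrate. In the small sketch below, an overly flexible model fits its training data almost perfectly but loses accuracy on data it hasn't seen; the synthetic dataset and polynomial degrees are illustrative choices, not a real trading model.

```python
# A small sketch of how overfitting shows up in practice: a flexible
# model scores well in training but degrades out of sample.
# Synthetic data; the polynomial degrees are illustrative choices.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (120, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 120)  # noisy underlying signal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1
)

# Compare a modest model with a highly flexible one on the same data.
for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree {degree:2d}: train R^2 = {model.score(X_train, y_train):.3f}, "
          f"test R^2 = {model.score(X_test, y_test):.3f}")
```

The gap between training and test performance is exactly what a robust research process - with out-of-sample testing and validation - is designed to catch before real money is at stake.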
Machine learning has proven to be a useful tool for developing efficient trading strategies for decades now. And with the constant technological advances that are being made, there's no doubt it will become ever more sophisticated and powerful as time goes on. So while the technology does harbour certain risks such as overfitting, it also has enormous potential if applied correctly, and can be an excellent complement to other investment approaches. The American psychologist Abraham Maslow might have put it best when he wrote: "If the only tool you have is a hammer, you tend to see every problem as a nail."
The story of LTCM (Long-Term Capital Management) shows what can happen when these risks are ignored. LTCM was a well-known hedge fund founded by John Meriwether in 1994 that employed a star-studded cast of highly regarded financial experts, including Nobel Prize winners Myron Scholes and Robert Merton.
The company used sophisticated mathematical models to exploit price differences in the bond market, and it did so with great success. But there was a problem: these models relied heavily on historical data and assumptions, and as a result couldn't make accurate predictions when markets behaved in ways the data had never captured - the hallmark of overfitting.
When the Russian financial crisis hit in 1998, market conditions were suddenly nothing like those in the historical data. LTCM suffered massive losses as a result and ultimately had to be bailed out by a consortium of banks in a deal brokered by the Federal Reserve. The fund was wound down and dissolved in 2000.