Trading technology has always been at the cutting edge of financial markets. As technology has evolved from carrier pigeons, to telegraphs, to microwave communications and beyond, traders have always been pioneers. Over the course of the last 100 years or so, it is not an exaggeration to say we have progressed from the speed of flight to the speed of light as financial firms have continued to push the envelope of innovation.
The same is true on the execution desk, where sell-side firms have had to innovate to differentiate themselves in a highly competitive market. In addition to competing with peers for order flow, sell-side brokers have seen their customers grow in sophistication over the last decade; asset managers and hedge funds have increasingly deployed advanced analytics, often recruiting expertise from the banks themselves. Where speed of execution was for so long the battlefield, the arms race has now shifted to advanced analytics.
Development has progressed at an ever-increasing pace as sell-side desks look to win increased order flow by offering more advanced tools than their competitors can, or indeed than institutional customers can build for themselves. Perhaps the most important development of the last 15 years has been algorithmic execution. Together with the fragmentation of liquidity across a multitude of new lit and dark trading venues, this innovation has led to an explosion in the number of execution options available to the buy-side. But the question remains: has all this led to better performance in trade execution?
Algo selection remains a thorny issue and has proved surprisingly hard to grapple with, even when using technological or statistical solutions. Measuring and reporting best execution continues to be loosely defined by regulators and remains open to interpretation. This, coupled with a bewilderingly wide range of alternative algos to choose from and the fact that the majority of transaction cost analysis (TCA) tools are provided either by the brokers or by trading-platform providers, has made algo selection challenging for the buy-side. It has also left sell-side firms struggling to differentiate themselves from their peers.
New algorithms are launched into the market with great fanfare, but data substantiating their performance claims has sometimes lagged behind. Third-party vendors have tried to fill this gap with technology-based solutions. Independent ‘best execution’ benchmarks have been created, ranging from TCA tools to “broker wheels” that rank the execution quality of brokerage firms. Many Order and Execution Management System (OEMS) vendors now provide these analyses directly to the buy-side trading desk through their execution platforms.
However, like all algorithmic approaches, these tools have inherent flaws. Buy-side institutions are obliged to repeat the disclaimer that “past performance is no guarantee of future success”, and that caveat applies just as much to benchmarking algos as it does to back-testing portfolios.
Ultimately, traditional tools and methodologies fail to address two fundamental questions:
– How can we determine what future performance will look like using only historical data?
– How can we measure the impact that any individual trade has had on the market?
The first question stems from the fact that algos are optimized using historical time-series data. This is a significant business, with the industry spending hundreds of millions of dollars on historical tick data and on data capture and replay technology. While such data is an important tool in verifying execution strategies, it can also be highly misleading when it is weak, of poor quality, or unrepresentative of future market dynamics; in any of these cases, algos will not perform to their advertised benchmarks. The second question is an existential one: historical data only shows us the impact a trade has had, not what would have happened had the trade not occurred.
To achieve optimal execution, what is really needed is data that is representative of market dynamics as they exist today, or better still, as they may look in the future. If only this were possible! Actually, it is.
Agent-based simulations provide a framework for replicating complex adaptive systems, and since the Global Credit Crisis and the ensuing market meltdown, market practitioners have increasingly accepted that financial markets are exactly this type of system. Given this, an alternative approach is rapidly gaining traction, with agent-based methodology progressively being adopted by algo execution teams to train their algos across a wider range of environments than is possible with purely historical data. Like an “algo-gym”, agent-based simulations can help ensure execution algorithms are ready to be deployed in as wide a range of potential futures as possible. Rather than training algos purely on the “surface” data produced by exchanges, simulations re-create the underlying data-generating process, producing synthetic market data that is indistinguishable from a given real-world time series. Having recreated the underlying system, such tools offer the significant advantage of parameter adjustment, enabling users to construct any potential scenario they choose. By introducing multiple potential market dynamics, sell-side brokers can ensure their algos function under many types of stressed scenario and perform well in the broadest range of eventualities.
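To make the idea concrete, here is a deliberately minimal sketch, not any production engine such as Simudyne's: zero-intelligence agents each decide whether to submit a buy or sell order, the net order imbalance moves the price through a linear impact term, and a single `aggression` parameter (an illustrative assumption, like every name and value below) can be dialed up to generate a stressed synthetic regime that never appeared in the historical record.

```python
import random

def simulate_market(n_steps=1000, n_agents=50, aggression=0.5, seed=0):
    """Generate a synthetic mid-price path from zero-intelligence agents.

    Each step, every agent independently chooses whether to trade; the
    net buy/sell imbalance moves the price via a linear impact term.
    Raising `aggression` (the order-submission probability) mimics a
    more active, more volatile regime -- a crude stressed scenario.
    """
    rng = random.Random(seed)
    price = 100.0
    path = [price]
    for _ in range(n_steps):
        imbalance = 0
        for _ in range(n_agents):
            if rng.random() < aggression:            # agent decides to trade
                imbalance += 1 if rng.random() < 0.5 else -1
        price += 0.01 * imbalance                    # linear price impact
        path.append(price)
    return path

calm = simulate_market(aggression=0.2)
stressed = simulate_market(aggression=0.9)  # same engine, stressed parameters
```

The same engine produces both regimes; only the parameters change, which is precisely the flexibility that purely historical replay lacks.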
This approach has other important advantages, but some understanding of the mechanics of these simulations is needed for them to become apparent. The individual ‘agents’ inside the simulation are themselves autonomous trading algorithms, and the micro-level interactions of these entities, as they make trading decisions and submit orders to a venue, are what produce the synthetic market data. An execution algo can be inserted into this agent-based framework while the simulation is running, revealing not only how it reacts to other algorithms but also the emergent behavior of the market in response to its trades. For the first time, algos can be tested against adaptive strategies that may be taking advantage of a sell-side firm’s current execution strategy, causing market slippage and price degradation. By running large numbers of identical simulations, with and without the order, a quantifiable estimate of likely market impact can be calculated, ultimately helping firms optimize every single trade they execute.
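The with/without counterfactual can be sketched in the same toy style, again purely illustrative: paired runs share a random seed, so the background order flow is identical and the only difference between them is the injected parent order. The agents here lean against price dislocations, so the market adapts to the pressure the order creates, and averaging the paired differences yields a market-impact estimate.

```python
import random

def run_sim(seed, order_size=0.0):
    """One agent-based price path. `order_size` shares are bought as 100
    equal child slices, one per step; order_size=0 gives the counterfactual
    path with identical background randomness (same seed). Returns the
    mid-price at the moment the parent order completes."""
    rng = random.Random(seed)
    price, n_agents = 100.0, 50
    child = order_size / 100.0
    for _ in range(100):
        # Agents lean against dislocations: the higher the price drifts,
        # the less likely they are to buy -- an adaptive, emergent response
        # to the pressure our own order flow creates.
        p_buy = min(0.9, max(0.1, 0.5 - 0.02 * (price - 100.0)))
        imbalance = sum(1 if rng.random() < p_buy else -1
                        for _ in range(n_agents))
        price += 0.01 * (imbalance + child)          # linear price impact
    return price

# Paired counterfactuals: same seeds mean identical background flow, so the
# with/without difference isolates the impact of our order alone.
seeds = range(200)
impact = sum(run_sim(s, order_size=5000.0) - run_sim(s) for s in seeds) / 200
```

Because the agents react to the price path rather than replaying fixed history, the measured impact reflects how the simulated market absorbs and pushes back against the order, which no purely historical dataset can show.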
The benefits are significant, which is why these techniques are increasingly being adopted by major sell-side banks around the world. By leveraging technology platforms like Simudyne to build advanced simulation models, trading desks are once again leading the way in the early application of the latest innovations.