IBKR Quant Blog




Quant

Back to Basics: Introduction to Algorithmic Trading - Part 5


In the previous post Kris shared his views on the programming skills quants need to build on.

In this post, he continues the discussion on Technical skills.

Statistics

It would be extremely difficult to be a successful algorithmic trader without a good working knowledge of statistics. Statistics underpins almost everything we do, from managing risk to measuring performance and making decisions about allocating to particular strategies. Importantly, you will also find that statistics will be the inspiration for many of your ideas for trading algorithms. Here are some specific examples of using statistics in algorithmic trading to illustrate just how vital this skill is:

  • Statistical tests can provide insight into what sort of underlying process describes a market at a particular time. This can then generate ideas for how best to trade that market (see the sketch following this list).
  • Correlation of portfolio components can be used to manage risk (see important notes about this in the Risk Management section below).
  • Regression analysis can help you test ideas relating to the various factors that may influence a market.
  • Statistics can provide insight into whether a particular approach is outperforming due to taking on higher risk, or if it exploits a genuine source of alpha.
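
To make the first of these points concrete, here is a minimal sketch that applies the augmented Dickey-Fuller test from the statsmodels library to two synthetic series. Everything here is illustrative – the series are simulated and the AR coefficient is arbitrary – but it shows how a simple statistical test can suggest whether a market looks more like a random walk or a mean-reverting process:

import numpy as np
from statsmodels.tsa.stattools import adfuller

np.random.seed(42)

# synthetic series: a random walk (trending) and an AR(1) (mean-reverting)
random_walk = np.cumsum(np.random.normal(size=1000))
mean_reverting = np.zeros(1000)
for t in range(1, 1000):
    mean_reverting[t] = 0.5 * mean_reverting[t - 1] + np.random.normal()

for name, series in [('random walk', random_walk),
                     ('mean reverting', mean_reverting)]:
    stat, pvalue = adfuller(series)[:2]
    print(f'{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.4f}')

# a low p-value rejects the unit-root hypothesis, hinting at mean reversion
# (perhaps a candidate for a reversion strategy); a high p-value is
# consistent with a random walk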

Aside from these, the most important application of statistics in algorithmic trading relates to the interpretation of backtest and simulation results. There are some significant pitfalls – like data dredging or “p-hacking” (Head et al. (2015)) – that arise naturally as a result of the strategy development process and which aren’t obvious unless you understand the statistics of hypothesis testing and sequential comparison. Improperly accounting for these biases can be disastrous in a trading context. While this issue is incredibly important, it is far from obvious, and it represents the most significant and common barrier to success that I have encountered since I started working with individual traders. Please spend some time understanding this fundamentally important issue; I can’t emphasize enough how essential it is.
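
A tiny simulation shows how easily p-hacking manufactures an apparently excellent strategy. The numbers below are entirely synthetic: one thousand “strategies” of pure noise, from which we pick the best performer after the fact:

import numpy as np

np.random.seed(1)
n_strategies, n_days = 1000, 252

# every "strategy" is pure noise: none has any real edge
returns = np.random.normal(0.0, 0.01, size=(n_strategies, n_days))

# annualized Sharpe ratio of each random strategy
sharpes = returns.mean(axis=1) / returns.std(axis=1) * np.sqrt(252)

print(f'best Sharpe of {n_strategies} random strategies: {sharpes.max():.2f}')
# the maximum is typically well above 2.0 -- a seemingly great "strategy"
# discovered purely by searching over enough noise

The lesson: the more variations you test, the more impressive your best backtest will look by chance alone, so any standout result must be judged against the number of trials, not in isolation.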

It also turns out that the human brain is woefully inadequate when it comes to performing sound statistical reasoning on the fly. Daniel Kahneman’s Thinking, Fast and Slow (2013) summarizes several decades of research into the cognitive biases with which humans are saddled. Kahneman finds that we tend to place far too much confidence in our own skills and judgements, that human reason systematically engages in fallacy and errors in judgment, and that we overwhelmingly tend to attribute too much meaning to chance. A significant implication of Kahneman’s work is that when it comes to drawing conclusions about a complex system with significant amounts of randomness, we are almost guaranteed to make poor decisions without a sound statistical framework. We simply can’t rely on our own interpretation.

As an aside, Kahneman’s Thinking, Fast and Slow is not a book about trading, but it probably assisted me with my trading more than any other book I’ve read. I highly recommend it. Further, it is no coincidence that Kahneman’s work essentially created the field of behavioral economics.

Risk Management

There are numerous risks that need to be managed as part of an algorithmic trading business. For example, there is infrastructure risk (the risk that your server goes down or suffers a power outage, dropped connection or other interference) and counter-party risk (the risk that the counter-party to a trade can’t make good on a transaction, or the risk that your broker goes bankrupt and takes your trading account with them). While these risks are certainly very real and must be considered, in this section I am more concerned with risk management at the trade and portfolio level. This sort of risk management attempts to quantify the risk of loss and determine the optimal allocation approach for a strategy or portfolio of strategies. This is a complex area, and there are several approaches and issues of which the practitioner should be aware.

Two (related) allocation strategies that are worth learning about are Kelly allocation and Mean-Variance Optimization (MVO). These have been used in practice, but they carry some questionable assumptions and practical implementation issues. It is these assumptions that the newcomer to algorithmic trading should concern themselves with.

Probably the best place to learn about Kelly allocation is in Ralph Vince’s The Handbook of Portfolio Mathematics, although there are countless blog posts and online articles about Kelly allocation that will be easier to digest. One of the tricky things about implementing Kelly is that it requires regular rebalancing of a portfolio that leads to buying into wins and selling into losses – something that is easier said than done.
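
For orientation, here is the standard Kelly formula for the simplified case of a binary bet (Vince's book treats the more general problem); the win rate and payoff ratio below are hypothetical:

# Kelly fraction for a binary bet: f* = (b*p - q) / b, where p is the win
# probability, q = 1 - p, and b is the ratio of average win to average loss
def kelly_fraction(p, b):
    q = 1.0 - p
    return (b * p - q) / b

# hypothetical strategy: wins 55% of the time, wins and losses equal-sized
f = kelly_fraction(p=0.55, b=1.0)
print(f'Kelly fraction: {f:.2%}')  # 10.00% of capital per trade

# because p and b are only noisy estimates, many practitioners trade a
# fraction of Kelly (e.g. half Kelly) to soften the drawdowns that full
# Kelly implies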

MVO, for which Harry Markowitz won a Nobel Prize, involves forming a portfolio that lies on the so-called “efficient frontier” and hence minimizes the variance (risk) for a given return, or conversely maximizes the return for a given risk. MVO suffers from the classic problem that new algorithmic traders will continually encounter in their journey: the optimal portfolio is formed with the benefit of hindsight, and there is no guarantee that the past optimal portfolio will continue to be optimal into the future. The underlying returns, correlations and covariances of portfolio components are not stationary and constantly change in often unpredictable ways. MVO therefore has its detractors, and it is definitely worth understanding their positions (see for example Michaud (1989), DeMiguel (2007) and Ang (2014)). A more positive exposition of MVO, governed by the momentum phenomenon and applied to long-only equities portfolios, is given in the interesting paper by Keller et al. (2015).
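
As a minimal sketch of the mechanics – not of Markowitz's full constrained formulation – the unconstrained maximum-Sharpe (tangency) weights can be computed in closed form from estimated means and covariances. The returns below are random stand-ins, which is precisely the detractors' point: the output is only as good as these estimates:

import numpy as np

np.random.seed(0)

# random stand-ins for historical daily returns, shape (n_days, n_assets)
returns = np.random.normal(0.0005, 0.01, size=(1000, 4))

mu = returns.mean(axis=0)               # estimated expected returns
sigma = np.cov(returns, rowvar=False)   # estimated covariance matrix

# unconstrained tangency weights: w proportional to inv(sigma) @ mu
w = np.linalg.solve(sigma, mu)
w /= w.sum()                            # normalize to a fully invested portfolio
print('tangency weights:', np.round(w, 3))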

Another way to estimate the risk associated with a strategy is to use Value-at-Risk (VaR), which provides an analytical estimate of the maximum size of a loss from a trading strategy or a portfolio over a given time horizon and under a given confidence level. For example, a VaR of $100,000 at the 95% confidence level for a time horizon of one week means that there is a 95% chance of losing no more than $100,000 over the following week. Alternatively, this VaR could be interpreted as there being a 5% chance of losing at least $100,000 over the following week.
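
One common and relatively assumption-light way to estimate VaR is historical simulation: read the relevant percentile straight off a past P&L distribution. A minimal sketch with hypothetical numbers:

import numpy as np

np.random.seed(7)

# hypothetical history of weekly portfolio P&L, in dollars
pnl = np.random.normal(loc=20_000, scale=60_000, size=500)

# historical VaR at the 95% level is the 5th percentile of the P&L distribution
var_95 = -np.percentile(pnl, 5)
print(f'95% one-week VaR: ${var_95:,.0f}')

# read as: in 95% of weeks like those sampled, the loss should not exceed
# this figure -- it says nothing about how bad the remaining 5% can get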

As with the other risk management tools mentioned here, it is important to understand the assumptions that VaR relies upon. Firstly, VaR says nothing about the size of losses beyond the chosen confidence level – yet it is often precisely these extreme events that we wish to understand. It also relies on point estimates of the correlations and volatilities of strategy components, which of course constantly change. Finally, the standard parametric form assumes that returns are normally distributed, which is usually not the case.

Finally, I want to mention an empirical approach to measuring the risk associated with a trading strategy: System Parameter Permutation, or SPP (Walton (2014)). This approach attempts to provide an unbiased estimate of strategy performance at any confidence level at any time horizon of interest. By “unbiased” I mean that the estimate is not subject to data mining biases or “p-hacking” mentioned above. I personally think that this approach has great practical value, but it can be computationally expensive to implement and may not be suitable for all trading strategies.
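
In spirit, SPP runs the strategy over every parameter combination the developer might plausibly have chosen and reports the whole distribution of outcomes rather than the single best one. A heavily simplified sketch, where backtest() is a hypothetical stand-in for a real backtesting routine:

import itertools
import numpy as np

np.random.seed(3)

# hypothetical stand-in: a real implementation would run a full backtest
# with the given parameters and return a performance metric
def backtest(fast, slow):
    return np.random.normal(0.05, 0.10)

# evaluate every plausible parameter combination, not just the one that
# happened to look best in hindsight
fast_range = range(5, 55, 5)
slow_range = range(60, 260, 20)
results = [backtest(f, s) for f, s in itertools.product(fast_range, slow_range)]

# the distribution of outcomes, not its maximum, is the performance estimate
print('median annual return:', round(float(np.median(results)), 4))
print('5th percentile:', round(float(np.percentile(results, 5)), 4))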

So now you know about a few different tools to help you manage risk. I won’t recommend one approach over another, but I will recommend learning about each, particularly their advantages, disadvantages and assumptions. You will then be in a good position to choose an approach that fits your goals and that you understand deeply enough to set realistic expectations around. Bear in mind also that there may be many different constraints under which portfolios and strategies need to be managed, particularly in an institutional setting.

One final word on risk management: when measuring any metric related to a trading system, consider that it is not static – rather, it nearly always evolves dynamically with time. Therefore, a point measurement tells only a tiny fraction of the true story. An example of why this is important can be seen in a portfolio of equities whose risk is managed by measuring the correlations and covariance of the different components. Such a portfolio aims to reduce risk through diversification. However, such a portfolio runs into problems when markets tank: under these conditions, previously uncorrelated assets tend to become much more correlated, nullifying the diversification effect precisely when it is needed most!
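
A quick synthetic illustration of that effect: two independent return series that are forced to co-move in a “crisis” window in the middle of the sample, with a rolling correlation exposing the regime change:

import numpy as np
import pandas as pd

np.random.seed(11)
n = 1000

# two synthetic, independent return series...
a = np.random.normal(0, 0.01, n)
b = np.random.normal(0, 0.01, n)
# ...forced to co-move during a "crisis" window
b[400:500] = a[400:500] + np.random.normal(0, 0.002, 100)

frame = pd.DataFrame({'asset_a': a, 'asset_b': b})
rolling_corr = frame['asset_a'].rolling(60).corr(frame['asset_b'])

# correlation hovers near zero outside the crisis, spikes toward one inside
print('median rolling correlation:', round(float(rolling_corr.median()), 2))
print('max rolling correlation:', round(float(rolling_corr.max()), 2))

Any risk estimate built on the pre-crisis correlations would have badly understated the portfolio's true exposure during precisely the window that mattered.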

Learn more about Robot Wealth here: https://robotwealth.com/

This article is from Robot Wealth and is being posted with Robot Wealth’s permission. The views expressed in this article are solely those of the author and/or Robot Wealth and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Quant

K-Means Clustering For Pair Selection In Python - matplotlib subplot functionality


In the previous post, Lamarcus Coleman explored Python’s matplotlib library.

In this article, he will compare the clusters he created from the toy data to the ones that the K-Means algorithm created based on viewing the data.

 

Now that we have both our toy data and have visualized the clusters we created, we can compare the clusters we created from our toy data to the ones that our K-Means algorithm created based on viewing our data. We’ll code a visualization similar to the one we created earlier. However, instead of a single plot, we will use matplotlib’s subplot method to create two plots – our clusters and the K-Means clusters – that can be viewed side by side for analysis. If you would like to learn more about matplotlib’s subplot functionality, see the matplotlib documentation.

# now we can compare our clustered data to that of K-Means
# (data is the (features, labels) tuple and model is the fitted KMeans
# instance, both created earlier in this series)
import matplotlib.pyplot as plt

# creating subplots
plt.figure(figsize=(10, 8))

plt.subplot(121)
plt.scatter(data[0][:, 0], data[0][:, 1], c=data[1], cmap='gist_rainbow')
# in the above line of code, we are simply replotting our clustered data
# based on already knowing the labels (i.e. c=data[1])
plt.title('Our Clustering')
plt.tight_layout()

plt.subplot(122)
plt.scatter(data[0][:, 0], data[0][:, 1], c=model.labels_, cmap='gist_rainbow')
# notice that the above line of code differs from the first in that
# c=model.labels_ instead of data[1]...this means that we will be plotting
# this second plot based on the clusters that our model predicted
plt.title('K-Means Clustering')
plt.tight_layout()
plt.show()

[Figure: side-by-side scatter plots of the toy data – “Our Clustering” colored by the known labels, “K-Means Clustering” colored by model.labels_]

The above plots show that the K-Means algorithm was able to identify the clusters within our data. The coloring has no bearing on the clusters and is merely a way to distinguish clusters. In practice, we won’t have the actual clusters that our data belongs to and thus we wouldn’t be able to compare the clusters of K-Means to prior clusters. This walkthrough shows the ability of K-Means to identify the presence of subgroups within data.

 

At this point in our journey toward better understanding the application and usefulness of K-Means, we’ve created our own clusters from data we generated, used the K-Means algorithm to identify the clusters within our toy data, and travelled back in time to a Statistical Arbitrage trading world with no K-Means.

We’ve learned that K-Means initially assigns data points to clusters randomly and then calculates centroids, or mean values, for each cluster. It then calculates the distance of each point from its cluster centroid, squares these distances, and sums them to get the sum of squared errors. The goal is to reduce this error, or distance. The algorithm repeats this process until there is no more in-cluster variation or, put another way, until the cluster compositions stop changing.
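
As a rough, from-scratch sketch of that loop (this series uses scikit-learn's KMeans in practice; this toy NumPy version follows the description above but omits refinements such as k-means++ initialization and empty-cluster handling):

import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # step 1: assign every point to one of the k clusters at random
    labels = rng.integers(0, k, size=len(X))
    for _ in range(max_iter):
        # step 2: compute each cluster's centroid (the mean of its points)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # step 3: reassign each point to its nearest centroid; summing the
        # squared distances gives the sum of squared errors being minimized
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):  # compositions stopped changing
            break
        labels = new_labels
    return labels, centroids

# e.g. labels, centroids = kmeans(data[0], k=4)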

Ahead, we will enter a Statistical Arbitrage trading world where K-Means is a viable option for solving the problem of pair selection, and use it to implement a Statistical Arbitrage trading strategy.


To see the previous posts in this series, click Part 1, Part 2, Part 3, Part 4 and Part 5.

------------------------------------------------------------

*Disclaimer: All investments and trading in the stock market involve risk. Any decisions to place trades in the financial markets, including trading in stock or options or other financial instruments is a personal decision that should only be made after thorough research, including a personal risk and financial assessment and the engagement of professional assistance to the extent you believe necessary. The trading strategies or related information mentioned in this article is for informational purposes only.

If you want to learn more about K-Means Clustering for Pair Selection in Python, or to download the code, visit QuantInsti website and the educational offerings at their Executive Programme in Algorithmic Trading (EPAT™).

This article is from QuantInsti and is being posted with QuantInsti’s permission. The views expressed in this article are solely those of the author and/or QuantInsti and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Quant

qplum - Why is machine learning in finance so hard? A case study in generating hypothetical data


In case you missed it! The webinar recording is available on the IBKR YouTube channel.

 

There is a lot of interest in applying machine learning to financial data. However, there are aspects unique to finance that make it really difficult to use machine learning in trading. If machine learning fails to generate outstanding alpha, there is a chance that interest and investment in ML in finance might wane, similar to what happened to neural network research in the 90s. In this talk, we touch upon five reasons why machine learning does not seem to work in finance and how to address them.

 

https://youtu.be/Szzp6pe4cns







Quant

Back to Basics: Algorithmic Trading - Part 4


The first three installments in this series are available here: Part I, Part II and Part III.

There is a lot of information about algorithmic and quantitative trading in the public domain today. The type of person who is attracted to the field naturally wants to synthesize as much of this information as possible when they are starting out. As a result, newcomers can easily be overwhelmed with “analysis paralysis” and wind up spending a lot of their valuable spare time working on algorithmic trading without making much meaningful progress. This article aims to address that by sharing the way in which I would approach algorithmic trading as a beginner if I were just starting out now, but with the benefit of many years of hindsight.

This article is somewhat tinged with personal experience, so please read it with the understanding that I am describing what works for me.

In this post, we will go a little further and investigate the things that people who are just starting out should think about. In particular, I aim to provide you with something of a roadmap for getting started, by sharing some of the practical things that I’ve learned along the way. 

Note on terminology

The term “algorithmic trading” is sometimes used in professional settings to refer to execution algorithms, for example algorithms that split up a large order to optimize the total cost of the transaction. In this post, I generally use the terms systematic, algorithmic and quantitative trading interchangeably to refer to strategic trading algorithms that look to profit from market anomalies, deviations from fair value, or some other statistically verifiable opportunity.

Learning the theoretical underpinnings is important – so start reading – but it is only the first step. Putting the theory into practice is a theme that you will see repeated throughout this article; emphasizing the practical is my strongest message when it comes to making it in this field.

Having said that, in order to make it in algorithmic trading, one typically needs knowledge and skills that span a number of disciplines, both technical and soft. Individuals looking to set up their own algorithmic trading business will need to be across many if not all of the topics described below, while those looking to build or join a team may not need to cover every one of them personally, so long as they are covered by other team members. These skills are discussed in some detail below.

Technical skills

The technical skills that are needed for long-term algorithmic trading include, as a minimum:

  1. Programming
  2. Statistics
  3. Risk management

There are other skills I would really like to add to this list, but which go a little beyond what I would call “minimum requirements.” I’ll touch on these later. But first, let’s delve into each of these three core skills.

1. Programming

If you can’t already program, start learning now. To do any serious algorithmic trading, you absolutely must be able to program, as it is this skill that enables efficient research. It pays to become familiar with the syntax of a C-based language like C++ or Java (the latter being much simpler to learn), but to also focus on the fundamentals of data structures and algorithms at the same time. This will give you a very solid foundation, and while it can take a decade or longer to become an expert in C++, I believe that most people can reach a decent level with six months of hard work. This sets you up for what follows.

It also pays to know at least one of the higher-level languages, like Python, R or MATLAB, as you will likely wind up doing the vast majority of your research and development in one of these languages. My personal preferences are R and Python.

  • Python is fairly easy to learn and is fantastic for efficiently getting, processing and managing data from various sources. There are some very useful libraries written by generous and intelligent folks that make data analysis relatively painless, and I find myself using Python more and more as a research tool.
  • I also really like using R for research and analytics as it is underpinned by a huge repository of useful libraries and functions. It was written with statistical analysis in mind, so it is a natural fit for the sort of work that algorithmic traders will need to do. The syntax of R can be a little strange though, and to this day I find myself almost constantly on Stack Overflow when developing in R!
  • Finally, I have also used MATLAB and its open source counterpart Octave, but I would almost never choose to use these languages for serious algo research. That’s more of a personal preference, and some folks will prefer MATLAB, particularly those who come from an engineering background as they may have been exposed to it during their work and studies.

When you’re starting out, I don’t believe it matters greatly which of these high-level languages you choose. As time goes on, you will start to learn which tool is the most applicable for the task at hand, but there is a lot of cross-over in the capabilities of these languages so don’t get too hung up on your initial choice – just make a choice and get started!

Simulation environments

Of course, the point of being able to program in this context is to enable the testing and implementation of algorithmic trading systems. It can therefore be of tremendous benefit to have a quality simulation environment at your disposal. As with any modelling task, accuracy, speed and flexibility are significant considerations. You can always write your own simulation environment, and sometimes that will be the most sensible thing to do, but often you can leverage the tools that others have built for the task. This has the distinct advantage that it enables you to focus on doing actual research and development that relates directly to a trading strategy, rather than spending a lot of time building the simulation environment itself. The downside is that sometimes you don’t quite know exactly what is going on under the hood, and there are times when using someone else’s tool will prevent you from pursuing a certain idea, depending on the limitations of the tool.

A good simulation tool should have the following characteristics:

  • Accuracy – the simulation of any real-world phenomenon inevitably suffers from a deficiency in accuracy. The trick is to ensure that the model is accurate enough for the task at hand. As statistician George Box once said, “all models are wrong, but some are useful.” Playing with useless models is a waste of time.
  • Flexibility – ideally your simulation tool would not limit you or lock you in to certain approaches.
  • Speed – at times, speed can become a real issue, for example when performing tick-based simulations or running optimization routines.
  • Active development – if unexpected issues arise, you need access to the source code or to people who are responsible for it. If the tool is being actively developed, you can be reasonably sure that help will be available if you need it.
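
To tie these characteristics together, here is a minimal, hypothetical vectorized simulation of a moving-average crossover on synthetic prices. It deliberately illustrates the accuracy trade-off: fast and flexible, but with no model of commissions, slippage or fills:

import numpy as np
import pandas as pd

np.random.seed(5)

# synthetic daily prices -- a stand-in for real market data
prices = pd.Series(100 * np.exp(np.cumsum(np.random.normal(0.0002, 0.01, 1000))))

fast = prices.rolling(20).mean()
slow = prices.rolling(100).mean()

# hold long when the fast average is above the slow one; shift(1) ensures
# today's position was decided on yesterday's information (no look-ahead bias)
position = (fast > slow).astype(int).shift(1).fillna(0)
strategy_returns = position * prices.pct_change().fillna(0)

equity = (1 + strategy_returns).cumprod()
print(f'final equity multiple: {equity.iloc[-1]:.2f}')

Whether the missing frictions matter depends on the strategy – for a slow-trading system they may be a rounding error, while for an intraday one they can be fatal – which is exactly what “accurate enough for the task at hand” means.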

 

In the next post, Kris will discuss Statistics and Risk Management.

 

Learn more about Robot Wealth here: https://robotwealth.com/

This article is from Robot Wealth and is being posted with Robot Wealth’s permission. The views expressed in this article are solely those of the author and/or Robot Wealth and IB is not endorsing or recommending any investment or trading discussed in the article. This material is for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IB to buy, sell or hold such security. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.






Quant

Kristal AI - The Utility of Utility Functions


Join us for a free webinar on Thursday, July 19, 2018 at 4:30 AM EDT

Register



A million dollars is a lot of money for a millionaire, but not much for a billionaire. This is because the incremental utility of the million dollars differs based on current wealth.

Extending this to personal finance, rather than targeting a particular level of risk or returns, one way to invest optimally is by optimising for overall "utility". We can use "utility functions" to convert returns levels to a measure of utility (or happiness), and then optimise for average utility.
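
As a minimal illustration of the idea – using the standard CRRA family of utility functions, which may or may not be the exact functions discussed in the webinar – log utility reproduces the millionaire/billionaire intuition above:

import numpy as np

# CRRA utility: u(w) = w**(1 - gamma) / (1 - gamma), or log(w) when gamma == 1,
# where gamma is the investor's risk-aversion coefficient
def crra_utility(wealth, gamma):
    if gamma == 1:
        return np.log(wealth)
    return wealth ** (1 - gamma) / (1 - gamma)

# the same $1m adds far less utility to the billionaire than to the millionaire
for wealth in (1e6, 1e9):
    gain = crra_utility(wealth + 1e6, gamma=1) - crra_utility(wealth, gamma=1)
    print(f'wealth ${wealth:,.0f}: utility gain from an extra $1m = {gain:.6f}')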

 

Speaker: Karthik Shashidhar, Head of Quant, Analytics and Data Science at Kristal.AI

Sponsored by: www.kristal.ai

Information posted on IBKR Quant that is provided by third-parties and not by Interactive Brokers does NOT constitute a recommendation by Interactive Brokers that you should contract for the services of that third party. Third-party participants who contribute to IBKR Quant are independent of Interactive Brokers and Interactive Brokers does not make any representations or warranties concerning the services offered, their past or future performance, or the accuracy of the information provided by the third party. Past performance is no guarantee of future results.







Disclosures

We appreciate your feedback. If you have any questions or comments about IBKR Quant Blog please contact ibkrquant@ibkr.com.

The material (including articles and commentary) provided on IBKR Quant Blog is offered for informational purposes only. The posted material is NOT a recommendation by Interactive Brokers (IB) that you or your clients should contract for the services of or invest with any of the independent advisors or hedge funds or others who may post on IBKR Quant Blog or invest with any advisors or hedge funds. The advisors, hedge funds and other analysts who may post on IBKR Quant Blog are independent of IB and IB does not make any representations or warranties concerning the past or future performance of these advisors, hedge funds and others or the accuracy of the information they provide. Interactive Brokers does not conduct a "suitability review" to make sure the trading of any advisor or hedge fund or other party is suitable for you.

Securities or other financial instruments mentioned in the material posted are not suitable for all investors. The material posted does not take into account your particular investment objectives, financial situations or needs and is not intended as a recommendation to you of any particular securities, financial instruments or strategies. Before making any investment or trade, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice. Past performance is no guarantee of future results.

Any information provided by third parties has been obtained from sources believed to be reliable and accurate; however, IB does not warrant its accuracy and assumes no responsibility for any errors or omissions.

Any information posted by employees of IB or an affiliated company is based upon information that is believed to be reliable. However, neither IB nor its affiliates warrant its completeness, accuracy or adequacy. IB does not make any representations or warranties concerning the past or future performance of any financial instrument. By posting material on IB Quant Blog, IB is not representing that any particular financial instrument or trading strategy is appropriate for you.