When we learn about complexity in college, we typically focus on how algorithms scale to large input sizes. We prefer O(1) algorithms and containers to O(log N) solutions because the large-N case is the one that creates problems and bottlenecks.
Whenever a mechanism is proposed for general use, like the STL, much of the effort that goes into developing its algorithms and containers is concentrated on scaling well in those large scenarios.
But in low-latency trading, we often encounter cases where we need to store a small number of objects in a container and then perform a set of operations on them: insertion, deletion, traversal, and random access. Typically, each of these containers holds a very small number of elements, ranging from a handful to a few hundred, which is by no means large by programming standards.
One of those cases is maintaining the order book for an exchange. The exchange order book is typically a hash map of several individual security books indexed by their symbol ID. Each security book has two sides, the bid and the ask. Each side comprises a list of price levels, ordered by price, and each price level contains a list of orders, ordered by arrival time.
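To make the shape of the problem concrete, here is a minimal sketch of that structure expressed with standard containers. All of the type names (SymbolId, Price, Order, PriceLevel) are illustrative assumptions, not a real exchange API:

```cpp
#include <cstdint>
#include <functional>
#include <list>
#include <map>
#include <unordered_map>

// Hypothetical aliases for illustration; real systems define their own.
using SymbolId = std::uint32_t;
using Price    = std::int64_t;  // price in ticks

struct Order {
    std::uint64_t order_id;
    std::uint64_t qty;
};

// A price level holds its resting orders in arrival (FIFO) order.
struct PriceLevel {
    std::list<Order> orders;
};

// One side of the book: price levels ordered by price.
// std::greater for bids (best = highest), std::less for asks (best = lowest).
template <typename Compare>
using BookSide = std::map<Price, PriceLevel, Compare>;

struct SecurityBook {
    BookSide<std::greater<Price>> bids;
    BookSide<std::less<Price>>    asks;
};

// The exchange book: individual security books indexed by symbol ID.
using ExchangeBook = std::unordered_map<SymbolId, SecurityBook>;
```

Note how every layer of this structure is a container that, in practice, stays small: a book rarely has more than a few hundred active price levels, and a level rarely has more than a handful of resting orders.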
In practice, most of our use cases deal with small containers, and the containers built for the standard library often do not perform well at that scale. Worse yet, the complexity metrics get so scrambled by cache effects that an O(N) algorithm can run orders of magnitude faster than its O(1) counterpart.
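As a quick illustration of how cache effects can invert the textbook ranking, here is a minimal, unscientific micro-benchmark sketch. The container size, key distribution, and iteration count are all assumptions chosen for illustration; real measurements need a proper benchmarking harness:

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <utility>
#include <vector>

// Compare an O(N) linear scan over a small contiguous vector against an
// O(1) hash lookup. For small N, the cache-friendly scan often wins.
int main() {
    constexpr int N = 32;  // "small" by this article's standards
    std::vector<std::pair<std::uint64_t, std::uint64_t>> vec;
    std::unordered_map<std::uint64_t, std::uint64_t> map;
    for (std::uint64_t k = 0; k < N; ++k) {
        vec.emplace_back(k, k * 10);
        map.emplace(k, k * 10);
    }

    constexpr int iters = 1'000'000;
    std::uint64_t sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        const std::uint64_t key = i % N;
        for (const auto& [k, v] : vec)   // O(N) scan, but contiguous memory
            if (k == key) { sum += v; break; }
    }
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i)
        sum += map.find(i % N)->second;  // O(1) lookup, but pointer-chasing
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::milliseconds;
    std::cout << "vector scan: "
              << std::chrono::duration_cast<ms>(t1 - t0).count() << " ms\n"
              << "hash lookup: "
              << std::chrono::duration_cast<ms>(t2 - t1).count() << " ms\n"
              << "checksum: " << sum << "\n";  // defeat dead-code elimination
}
```

The point is not the exact numbers, which vary by compiler and hardware, but that asymptotic complexity alone does not predict the winner at these sizes.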
In this article, we will attempt to test and demystify some of these assumptions so that your initial design bets are better informed.