A Blog for Everyone and No One
I have written another post in response to Catherine Malabou’s question – about which crisis and lack animate our attempts to (re)ground – here. There, I followed Heidegger to consider the crisis as a lack of any essential crisis in the ground of our being: an insufficient attunement to the processes that constitute us as subjects. Here, I want to discuss one obvious crisis of recent times – the near-collapse of the financial system in 2008, and the subsequent loss of confidence in the competence of economists and governments – through a single market event that took place over 20 minutes on 6 May 2010. This market event could be read as a synecdoche of the wider, ongoing financial, economic and governmental crisis. And this synecdoche could have a bearing on the recent philosophical interest – reignited by Meillassoux’s After Finitude (2006) – in the possibility of access to the thing-in-itself.
We all know about the implosion of the Western financial-governmental system since the subprime mortgage bubble burst in 2008. But one telling moment has received far less press attention, and it is this that I will treat as a synecdoche of the whole crisis. This ‘moment’ was in fact a period of around 20 minutes, beginning at 2.40pm on 6 May 2010. According to the official report, over that time
the prices of many U.S.-based equity products experienced an extraordinarily rapid decline and recovery. That afternoon, major equity indices in both the futures and securities markets, each already down over 4% from their prior-day close, suddenly plummeted a further 5-6% in a matter of minutes before rebounding almost as quickly.
More interestingly, the prices of some individual stocks fluctuated enormously; as Donald MacKenzie writes in the LRB:
Shares in the global consultancy Accenture, for example, had been trading at around $40.50, but dropped to a single cent. Sotheby’s, which had been trading at around $34, suddenly jumped to $99,999.99.
With no reason in the financial news to spur this change, these shares dropped and jumped to the minimum and maximum prices possible. And as the official report states, ‘Over 20,000 trades across more than 300 securities were executed at prices more than 60% away from their values just moments before.’
This was a ‘flash crash’. What happened? Well, a large proportion of trades on modern stock markets are not made by human decision but are conducted by algorithms; more than half of the trades on US stock markets are algorithmic. These algorithms can drip-release sell orders to avoid moving the market with a single large sale; or calculate movements against average prices across a number of shares in the same or related sectors, buying and selling accordingly; or detect and pre-empt the activity of other algorithms. MacKenzie, whose article provides most of the information in this piece about trading and the flash crash, shows the importance of speed for high-frequency trading (trades that make tiny profit margins and so must be made very quickly and repeatedly). These trades are made algorithmically at speeds far quicker than any human could react:
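The first of those strategies – drip-releasing a large order – is the simplest to picture. A minimal sketch, in no way a real trading system (every name and parameter here is my own illustration), of how a parent order might be broken into small child orders so that no single sale moves the market:

```python
# Illustrative sketch only: splitting a large 'parent' sell order into
# small 'child' orders, the drip-release strategy mentioned above.
# All names and sizes are hypothetical.

def slice_order(total_shares, max_child_size):
    """Split a large parent order into a list of small child orders."""
    children = []
    remaining = total_shares
    while remaining > 0:
        child = min(max_child_size, remaining)  # never exceed the drip size
        children.append(child)
        remaining -= child
    return children

# A 100,000-share sale released as 500-share slices:
slices = slice_order(100_000, 500)
print(len(slices))   # 200 child orders
print(sum(slices))   # 100000 -- nothing lost in the slicing
```

Real implementations add timing, randomisation and price conditions on top of this skeleton precisely so that other algorithms cannot detect the drip – which is what the third strategy above tries to do anyway.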
Speeds are increasing all the time. … [In 2007-8] the salient unit of trading was still the millisecond, but that’s now beginning to seem almost leisurely: time is often now measured in microseconds (millionths of a second). The London Stock Exchange, for example, says that its Turquoise trading platform can now process an order in as little as 124 microseconds.
Because of this, it is of great importance how physically close your trading server is to the stock exchange’s computer systems. MacKenzie points out that if a trader is based in Chicago and trading on the NYSE, the fastest an electronic trade can get to New York is 16 milliseconds, through a fibre optic cable. This is a delay sufficient to time the trader’s algorithm out of contention. To get around the problem, traders run their algorithms on servers located at the Stock Exchange itself – which can cost $10,000 per month for a single rack space.
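The physics behind that co-location premium is back-of-the-envelope arithmetic. A small sketch with illustrative figures (the route length and refractive index are my assumptions, not measurements): light in optical fibre travels at roughly c divided by the fibre’s refractive index, so even a near-ideal Chicago–New York route imposes a delay no engineering can remove.

```python
# Back-of-the-envelope latency arithmetic with assumed figures:
# light in fibre moves at about c / 1.5, i.e. roughly 200,000 km/s.

C_KM_PER_MS = 299_792.458 / 1000   # speed of light in vacuum, km per millisecond
FIBRE_INDEX = 1.5                  # typical refractive index of fibre (assumed)

def one_way_delay_ms(route_km):
    """Minimum one-way propagation delay over a fibre route of given length."""
    return route_km * FIBRE_INDEX / C_KM_PER_MS

# An assumed ~1,600 km fibre route between Chicago and New York:
delay = one_way_delay_ms(1600)
print(round(delay, 1))   # 8.0 -- so about 16 ms for a round trip
```

On these assumed numbers a Chicago-based algorithm is roughly eight milliseconds behind the market in each direction – thousands of ‘microsecond-scale’ trading opportunities out of date – while a co-located server is effectively at zero distance.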
The 6 May 2010 ‘flash crash’ was caused by a complex interaction of algorithms; it produced a vicious downward spiral, and the market had to be paused for five seconds (an age in trading) and then effectively frozen again for 15 minutes while the algorithms caught up. Trades at erroneous prices were later cancelled, and, while the market was down overall at the end of the day, the losses were not huge. Then followed the attempt to understand what had happened:
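The shape of that spiral can be caricatured in a few lines. This is my own toy illustration, vastly simpler than the real event and not the report’s model: a sell algorithm that pegs its order rate to recent trading volume, in a market where high-frequency traders passing the same contracts back and forth generate volume without absorbing any of the selling – so selling feeds volume, and volume feeds selling.

```python
# A toy caricature (my own illustration, far simpler than the real event)
# of a volume-pegged feedback loop: the seller trades faster as volume
# rises, but the 'hot potato' churn among other algorithms inflates
# volume without absorbing the sale. All figures are arbitrary.

def toy_feedback(steps, sell_fraction=0.1):
    price, volume = 100.0, 1000.0
    prices = [price]
    for _ in range(steps):
        sold = sell_fraction * volume    # seller pegs its orders to recent volume
        price *= 1 - sold / 50_000       # price impact of the sale (toy figure)
        volume += sold                   # churn counts as volume, accelerating the peg
        prices.append(price)
    return prices

prices = toy_feedback(30)
print(prices[-1] < prices[0])   # True: the loop only ever pushes the price down
```

Each pass through the loop sells more than the last, which is why, absent a circuit breaker like the five-second pause, nothing inside the system arrests the fall.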
For five months, large teams from the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) researched what had gone wrong in great detail, ploughing through terabytes of data.
For more information on the specifics of the crash I recommend MacKenzie’s LRB article. What interests me is quite how non-human the stock market is, for an institution so socially significant, and the extent to which the 6 May 2010 event displayed its fragility. MacKenzie stresses that algorithmic trading is not necessarily any worse than human trading, and can balance out market instability: this is worth remembering in the face of tempting ‘rise of the robots’ narratives. But does it matter that we have constructed a financial system more complicated than we can understand (at any given moment of its operation), that operates at speeds much faster than we can follow?
In this way the flash crash can be read as a synecdoche of the whole financial crisis. In both cases we have an incredibly complex, deeply interlinked system, with more variables and factors than can be comprehended by any human or any regulatory system at any one moment. Regulations can always be put in place based on crises that have taken place in the past, but these guard only against recurrence, not against a different future meltdown. The issue here is not necessarily the existence of the stock market per se, but rather the extent to which people’s reality is exposed to the market’s inherently unstable fluctuations: the possibility of countries being made bankrupt, of a generation being out of work, of basic food prices rocketing or collapsing.
In our context, where the guess-based pseudo-science of economics rules government policy, it seems our focus should be on emphasising the inherent unknowability of the systems we have constructed. Does a philosophical issue also emerge here? Arguably yes: in the question of our access to the infinite and the finite, in Kant and Meillassoux.
Kant’s antinomies in the first Critique display the kernel of unknowability at the heart of infinite totalities. Kant depicts the four transcendental ideas of world, divisibility, freedom and God in terms of infinite mathematical series. The problem of dialectical illusion in Kant then becomes one of the possibility of an unconditioned condition in an infinite series. With the first two antinomies, this problem means that the antinomy cannot be resolved. The latter two antinomies can be resolved, but only at the price of freedom and God being transferred to the noumenal, i.e. as being absolutely unknowable for us. Kant rescues freedom and God, not as phenomenal objects, but as transcendental ideas that cannot be disproved.
So in Kant, the unconditioned condition within the infinite is unknowable, or, perhaps better put in ontological rather than epistemological terms, it has no being-for-us. It is instructive that Kant makes these arguments using mathematical infinite series (or at least the image of these), because Meillassoux’s challenge to Kant’s prohibition of things-in-themselves utilises mathematics as the privileged route to knowledge of the in-itself. Meillassoux writes:
all those aspects of the object that can be formulated in mathematical terms can be meaningfully conceived as properties of the object itself.
Ultimately, in After Finitude, the aspect of any object that is unveiled by mathematics is its contingency. This result, strange as it is, can be put to one side, because it is more immediately significant that Meillassoux wants to regain direct access to the absolute, to total knowledge of the essence of things. There are two ways in which the flash crash and wider financial crisis may inform us here: firstly as a political argument, secondly as a starting point for an ontological argument.
We could say that it is politically vital to press the Kantian view, of the unknowability of things-in-themselves, because disastrous decisions have resulted from the belief of ministers and policy-makers that they have knowledge of the essence of financial systems or of the outcome of invasions. What is needed is to start in some way to loosen the grip of economically-informed thinking.
This political argument would not trouble Meillassoux, of course, who is operating on the level of ontology, of essence and logic. But it seems that the 20-minute implosion of the stock market shows that it is possible for systems to be consistent – there were reasons for the events, which the investigators dredged from the data in months of analysis – and not therefore completely contingent (whatever that would mean), but still absolutely unknowable for us, at the time. The ultra-complex network of interlinked high-speed financial algorithms, which are running each day and which freakishly imploded on 6 May 2010, constitutes a finite system, not even an infinite one. It could be said to run in tune with basic 19th century scientific determinism – every effect has a cause, the whole tangle can be unravelled after the fact with sufficient patience – but nevertheless at any given moment, the speed and complexity is such that the state and direction of the system is inherently unknowable for any human observer. That is why money can be made and lost on it. It’s why it should play a more minor part in the welfare of many of the world’s people. Does it also mean that Meillassoux’s challenge to Kant’s prohibition of the in-itself fails?
Here we have a finite system, with a clear starting point (each day’s trading starts with the trading bell, and picks up on the closed set of the previous day’s results and overnight news developments), made up of human-written mathematical algorithms. The trading on 6 May 2010, from the market’s opening until 2.40pm, is a finite system, containing, from the perspective of the subsequent trading, two unconditioned conditions. Firstly, a temporal starting point, the ringing of the bell; and, secondly, the mathematical content of the algorithms, pre-programmed to take various actions in the case of particular market conditions. The concept of a starting-point for the world is considered in Kant’s first antinomy: in our day’s trading, though, we do not have an infinite world but a finite, constructed system. From the starting point of these two unconditioned conditions, the market activity unfolds. There is human trading, of course, but this is not significant to the flash crash: here, algorithmic feedback loops interact in ways that, as 6 May 2010 demonstrates, are outside of human understanding at any given moment. Within the finite, let alone the infinite – within the human-created, let alone the natural – it seems that a combination of mathematised complexity and non-human timespans instills the Kantian noumenon (qua ontological void) into the stock market and, due to the latter’s political and social significance, into everyday reality.
Further reading on the 2010 flash crash:
MacKenzie, Donald, ‘How to Make Money in Microseconds’, London Review of Books 33.10 (May 2011) http://www.lrb.co.uk/v33/n10/donald-mackenzie/how-to-make-money-in-microseconds.
US Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC), Findings Regarding the Market Events of May 6, 2010 (Sept 2010) http://www.sec.gov/news/studies/2010/marketevents-report.pdf.