The Mismeasure of Finance

March 3rd, 2014

Britain’s finance industry is believed to be so vital to our interests that the UK government has apparently ruled out denying the Russians access to it.

Being beholden to this sacred cow makes some people uneasy. One of them is Sir Howard Davies, who published a comment article last week warning of the damaging impact of unrestricted financial sector growth. As founding chairman of the FSA, Davies played a not insignificant role in creating London’s disastrous light-touch regulatory regime, so perhaps he now feels a sense of responsibility to put things right.

Davies’ article reminded me of Diane Coyle’s excellent new book on GDP (my review will be out soon) where she dissects the way the finance sector contributes to national accounts.

Consider the UK’s GDP figures for the fourth quarter of 2008, when the UK financial sector showed record growth, outstripping the contribution of manufacturing to the economy. This was at the same time as the sector required massive government bailouts, putting the UK economy into a tailspin. As Coyle points out, this absurdity means that something must be wrong.

The problem comes from the way financial services output is measured. While fee-based services (such as debt underwriting) are straightforward, most banking services are not explicitly charged to customers. For example, the prices of deposit-taking and lending are embedded in the interest rates that banks set for these services.

As Coyle relates, the economists’ solution to this conundrum was an ugly acronym called FISIM, or ‘financial intermediation services indirectly measured’. This is calculated as the difference between the interest rate banks earn on loans or pay on deposits and a benchmark risk-free rate, multiplied by the outstanding balance. FISIM is recognised as the international standard for measuring bank contributions to GDP (See note 1).
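To make the mechanics concrete, here is a minimal sketch of the FISIM idea in Python. The balances and rates are invented purely for illustration (this is not the official national-accounts methodology), and the crude risk adjustment at the end anticipates the critique discussed next.

```python
# A stylised FISIM calculation with invented figures.

def fisim(loans, loan_rate, deposits, deposit_rate, reference_rate):
    """Indirectly measured output of a bank's intermediation services."""
    lending_margin = (loan_rate - reference_rate) * loans        # what borrowers implicitly pay
    deposit_margin = (reference_rate - deposit_rate) * deposits  # what depositors implicitly give up
    return lending_margin + deposit_margin

# Hypothetical bank: $100bn of loans at 6%, $80bn of deposits at 1%,
# against a 3% risk-free benchmark rate.
output = fisim(100e9, 0.06, 80e9, 0.01, 0.03)
print(f"FISIM output: ${output / 1e9:.1f}bn")                # $4.6bn

# Crude risk adjustment: if part of the lending margin merely compensates
# for expected credit losses rather than paying for a service, measured
# output shrinks, by 25-40% according to the studies cited below.
print(f"Risk-adjusted (-30%): ${output * 0.7 / 1e9:.1f}bn")  # $3.2bn
```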

The trouble with FISIM is that it doesn’t account for risk. Before the 2008 crisis, banks’ increased risk-taking and leverage were counted as ‘growth’ in GDP. Since then, studies have shown that adjusting for risk-taking would reduce financial sector contributions to GDP by 25-40%. If the financial sector contribution to GDP is largely a statistical mirage, then why isn’t the methodology changed?

The depressing answer is that by boosting the importance of financial services in national accounts, FISIM became a self-fulfilling prophecy: policymakers became captured by financial services because the numbers said it was important. Why else, two years after the Bank of England published papers criticising the methodology, did incoming governor Mark Carney deliver a speech in October praising finance as contributing 10% of UK GDP?

Such entrenchment of flawed measuring sticks shows why genuine reform of finance is so difficult.

 

Note 1) As well as fees and FISIM there is also net spread earnings (NSE) from trading on behalf of clients. Since no-one actually buys a product called FISIM, government statisticians create an imaginary customer that buys it as an intermediate input and adjust overall GDP statistics accordingly.

Disruptive Business Models, Uber and Plane Crashes

January 30th, 2014

Disruptive internet-based business models have upended traditional industries like recorded music, newspapers and retailing. The latest flurry of innovation involves start-ups that take a service traditionally provided by a regulated firm – such as a hotel or taxi company – and transform it into commission-paying transactions between buyers and sellers. Accessed via smartphone apps and ‘regulated’ by user reviews, these new services are compelling at first sight.

AirBnB is an example that is already alarming the hotel industry and local governments. Another is Uber, the online ride sharing firm that was recently valued at $4 billion. Uber has gained notoriety because its pricing of taxi rides skyrockets on high-demand days such as New Year’s Eve, but the company insists its market-clearing role absolves it of any opprobrium. Indeed, you might say that Uber’s growth and popularity speak for themselves.

However, one issue that is less well understood is that of safety and risk. Uber recently attracted its first lawsuit after one of its drivers ran over and killed a six-year-old pedestrian, while another Uber driver reportedly attacked a passenger (of course the company says it doesn’t have any drivers as such – they are all freelancers). So does the disruption of traditional ways of providing services make us more or less safe?

A report into a plane crash in Ireland provides a potential answer. At first sight airlines ought to be a ripe target for disruptive business models. They house a multitude of functions under a single corporate roof – why not let the market experiment with new models of delivering plane flights?

This was the approach of Manx2, a company that provided regional flights between non-hub airports in the UK and Ireland. I write ‘provided’, but what Manx2 actually did was sell tickets. For each particular route, Manx2 then contracted with a plane operating company to fly the passengers.

On its route from Belfast to Cork, Manx2 used a Barcelona-based company called Flightline. Flightline provided the planes and pilots and was responsible for safety. Early on 10 February 2011, 10 Manx2 passengers boarded a turboprop plane in Belfast with two pilots. When they arrived at Cork, the runway was obscured by fog. After making two abortive attempts to land and considering going to an alternative fog-free airport, the pilots made a third attempt and crashed the plane, killing themselves and four passengers.

This week, Ireland’s Air Accident Investigation Unit published a lengthy report into the crash. It concludes that Flightline’s pilots were tired, were unprepared for the route they were flying on, and made some fundamental mistakes. Flightline’s ground crew also failed to spot a technical flaw in one of the plane’s engines that contributed to the crash.

The Spanish regulator that oversaw Flightline had no clue that the crew who had trained and been accredited in sunny Spanish climes were working remotely for Manx2, flying to fogbound Irish airports. And the passengers who bought tickets from Manx2, which the report says was ‘portraying itself as an airline’, had no clue about the risks they were taking by flying in a plane run by a freelance operator. Reading the report, it’s hard not to get the impression that the virtual airline business model of Manx2 was partly to blame for what happened.

As disruptive start-ups like Uber continue to grow and transform traditional services, such questions of risk and regulation are likely to become more important. In virtual models, managing the risks of linkages becomes as important as those of institutions. The providers of start-up capital may reap the upside in this new world.  It’s important that they share the additional costs of regulating the downside.

Taper Tantrums, Bank Lending and Emerging Markets

January 7th, 2014

As quantitative easing measures or asset purchases by central banks are scaled back or ‘tapered’, how might big international banks be affected? This was the question prompted by a bank risk manager who responded to one of my recent posts by writing:

“You should be looking at plain vanilla lending, across many types of loans, including to sovereigns. The next systemic problem is brewing there fast, supported by the very loose monetary policies around the world. It will happen sooner than many expect.”

Let’s start by considering a subset of bank lending, namely to emerging market economies – the BRICs, MINTs and others – by developed-nation institutions. While I don’t downplay the importance of emerging market-based lenders – such as Chinese or Indian banks – any cross-border contagion is likely to be channelled via multinational banks.

A starting point is to look at the impact of last summer’s so-called ‘taper tantrum’, or sell-off prompted by Fed discussion of tapering that began in May 2013. In a recent working paper, Barry Eichengreen and Poonam Gupta analysed the effect on exchange rates, reserves and equity prices for emerging markets and identified some interesting differentiating factors. In particular, countries with the deepest, most liquid markets such as Brazil, India, Turkey and South Africa saw the biggest exchange rate declines.

What about the effect on banks? For the most part, they seem to have avoided getting caught by the summer sell-off in liquid emerging market assets, which isn’t a surprise given the regulatory disincentives to hold such stuff on their balance sheets. Bank lending on the other hand is supposed to be insulated from market swings, and losses don’t have to be recognised until loans become impaired. However, Basel rules require banks to reflect a souring credit environment by increasing the risk-weightings of loans, and thus the equity capital cushion they need to hold.

Indeed, HSBC provides a useful example of how the taper tantrum affected risk weightings. In its third-quarter results, the bank said that a credit rating downgrade for Hong Kong increased its risk-weighted assets by $8.7 billion. As it happens, this effect was more than offset by reductions in RWAs as a result of improved credit conditions in Europe and the US.

Even without the offsets, the Hong Kong effect was small in the context of HSBC’s $1.1 trillion total RWAs and 13.3% Basel 2.5 core tier 1 ratio. It would take more than 40 such downgrades to reduce that ratio below 10%. However, past experience shows that emerging market crises, such as 1991 or 1997, produce dozens of downgrades clustered together.
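The arithmetic behind the 40-downgrade claim is easy to reproduce. Here is a quick sketch using the HSBC figures quoted above; the inputs are the bank's, the extrapolation is mine.

```python
# How many $8.7bn RWA shocks push a 13.3% core tier 1 ratio below 10%?
# All amounts in $bn, taken from the HSBC figures quoted above.

rwa = 1100.0           # total risk-weighted assets
capital = rwa * 0.133  # implied core tier 1 capital, about $146bn

shock = 8.7            # RWA increase from one Hong Kong-style downgrade
n = 0
while capital / (rwa + n * shock) >= 0.10:
    n += 1
print(f"downgrades needed: {n}")  # 42, i.e. more than 40
```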

A full blown tapering crisis might be uncomfortable for the likes of HSBC, even with the benefit of US growth on its balance sheet. It would be even less pleasant for banks with thinner capital cushions and more concentrated emerging market exposure, such as Standard Chartered or Santander.

At first sight, the US banks, whose balance sheets are dominated by the rebounding US economy, are better positioned for tapering (indeed, every data point supporting a US rebound will make tapering happen faster). Still, things could get tough at Citigroup, the US bank most exposed to emerging market lending.

And let’s not forget the wild card of derivatives. Remember the cross-currency swaps with Korean banks that gave JP Morgan a nasty shock in 1997? Basel credit valuation adjustments would probably amplify the impact of emerging market derivatives this time round.

Of course, bank CEOs tend to dismiss concerns about emerging market taper risk when questioned by analysts. Risk managers in the same institutions are privately more bearish, including the person who sent me the email above.

But let’s defer to the ultimate arbiter, the Federal Reserve. The Fed quietly updated its bank stress test methodology in November to crank up the severity of a hypothetical Asian downturn.

“The sharp slowdown in developing Asia featured in this year’s severely adverse scenario is intended to represent a severe weakening in conditions across all emerging market economies”, the Fed said. While the Fed insists that its scenarios aren’t forecasts, the change in methodology is telling. If the Fed hasn’t an inkling of what’s coming, who does?

Most-read articles of 2013

December 30th, 2013

As the year ends, I thought it would be interesting to share with you the most-read articles on this website for 2013. Unsurprisingly, the review of Nassim Taleb’s book Antifragile takes first place. Published just over a year ago, this review not only got the most views in the past year, but is the most-read article of all time on this site. My review of Daniel Kahneman’s Thinking, Fast and Slow was published in May 2012 but takes second place. The eighth-ranked article is also interesting. When I published my review of Hyman Minsky’s Stabilizing an Unstable Economy in July 2010, almost no-one viewed it. This year’s resurgence surely must be connected to the appointment of self-proclaimed Minskian Janet Yellen to the chairmanship of the Federal Reserve.

Let me take the opportunity to thank you for sending me comments and feedback (please keep it coming), and to wish you the best for 2014.

1. Review of Nassim Taleb’s Antifragile (December 2012)

2. Review of Daniel Kahneman’s Thinking, Fast and Slow (May 2012)

3. The New York Fed, Goldman and regulating conflicts

4. Cyprus, land of the structured product

5. JP Morgan vs. Fukushima

6. Confidence, doubt and climate change

7. Bank leverage, Cyprus and the lessons of the crisis

8. Review of Hyman Minsky’s Stabilizing an Unstable Economy (July 2010)

9. Regulatory tussle over bail-ins, depositors highlights complexity of bank debt

10. Goldman swap shows Greece was Europe’s subprime nation

Volcker’s CDO Smackdown

December 24th, 2013

If there’s one financial innovation that was toxic from start to finish it has to be the collateralised debt obligation (CDO). Bankers will say things like ‘there are good CDOs as well as bad ones’ or ‘it is an appropriate risk transfer tool in the right circumstances’. What nonsense.

As I discuss at length in The Devil’s Derivatives, the CDO became a vehicle for corrupting bankers and rating agencies, fooling regulators and misleading investors. One of the worst variants, the CDO of asset-backed securities, is the poster child for the financial crisis. According to a 2012 study by the Philadelphia Fed, $641 billion of ABS CDOs were issued between 1999 and 2007. The Fed estimates that $420 billion or 65 percent of this issued amount was written down by early 2011.

According to my reckoning, about half of these losses occurred within regulated banks, making ABS CDOs one of the biggest contributors to systemic risk in the crisis. If regulators are worth their salt, they should be keeping CDOs off bank balance sheets, especially as memories of the crisis begin to fade.

Until recently, the only game in town for curtailing bank CDO positions was Basel III capital requirements. While an improvement on catastrophically lax Basel II requirements, Basel III remains mired in the gameable model-driven risk-weighted assets system and contains some of the same incentives that encouraged the rampant growth of CDOs in the first place. Reports of a comeback in synthetic CDOs earlier in the year and recent decisions to water down the rules aren’t exactly grounds for optimism.

That’s why the final version of the Volcker Rule published on 10 December is so encouraging. The five US regulatory agencies apparently agreed that the Basel Committee wasn’t doing enough, and moved decisively to ban US banks from investing in CDOs. The way this was done is interesting.

While the restrictions on proprietary trading and hedging activity have been crafted to curtail the kinds of CDO-related bets that happened at Morgan Stanley in 2007 or JP Morgan in 2012, the most important CDO firebreak is in a different part of the Volcker rule, the part that restricts bank investments in hedge funds or private equity.

What the rule does is include securitisation special-purpose vehicles as a type of ‘covered fund’ that banks aren’t allowed to invest in. Basic securitisations consisting solely of loans or mortgages are exempt from the restriction, but CDOs are not. In other words, irrespective of how Basel rules evolve, US banks are forbidden from either retaining tranches of CDOs they originate or purchasing CDOs as assets from other banks, either in cash or synthetic form. For example, JP Morgan’s watershed 1997 Bistro securitisation would probably be banned under the Volcker rule. (see Note 1).

This aspect of Volcker has already riled the US banking industry, because it affects a type of CDO that many still hold on their balance sheets. So-called trust-preferred or TruPS CDOs bundled together equity-like securities issued by banks before the crisis, effectively turning them into debt – which was then bought by many of the same banks based on inflated credit ratings. Of $59 billion in TruPS CDOs issued, $21.4 billion was written down by 2011, according to the Philadelphia Fed’s estimates.

Until recently, US banks holding TruPS CDOs have avoided recognising such writedowns by accounting for them in the same way as loans. The Volcker rule changes all that, and last week Zions Bancorp let the cat out of the bag by writing down its positions by $387 million. The American Bankers Association on 23 December threatened legal action against regulators unless this part of the Volcker rule was amended. To some people that sounds a bit like, “we’ll sue you unless you let us keep lying about the value of our CDOs”. (For more background read this informative post on the AFR blog).

However that debate turns out, the US regulators have outflanked Basel in a move that hopefully ensures the sorry history of CDOs won’t repeat itself in the American banking system. It’s a shame that the same can’t be said for European banks and their regulators.

 

Note 1: It’s still possible for US banks to sell CDO tranches directly to counterparties via portfolio default swap contracts without using an SPV. The retained risk on the bank trading book would then be subject to the proprietary trading and hedging restrictions of the Volcker rule.

 

The Hidden Winners of QE

November 15th, 2013

As we approach the fifth anniversary of the start of quantitative easing measures by central banks, McKinsey has produced a helpful report on the winners and losers of QE. The report looks at the impact of asset purchases on the borrowing costs and income of governments, corporations, insurance companies and households.

The headline finding is that governments have been the biggest beneficiaries as a result of lower borrowing costs and remittances from central banks that hold their bonds. Non-financial corporations also did well out of QE, as did US banks that could pay rock-bottom deposit rates and boost interest margins. The big losers were households whose cheaper borrowing costs were outweighed by reduced savings income and annuity rates on defined contribution pensions.

While the near-meltdown of the system that prompted QE in 2008 has receded from people’s memories, the debate has shifted to whether the continued measures – which feel semi-permanent – are benefiting the most vulnerable corners of the economy or are merely increasing inequality by making the rich richer.

Some commentators have pointed to McKinsey’s headline findings as evidence that QE doesn’t only benefit the rich. They do that by conflating ‘governments’ with ‘taxpayers’. This may be naïve, because it ignores a huge principal-agent problem. To understand this, consider the hidden beneficiaries of QE not discussed by McKinsey.

As McKinsey explains, households got hammered because interest on savings or annuities plummeted as a result of QE. That happened because they live in a mark-to-market environment. Those with corporate defined benefit pensions – promised a lifetime income regardless of the price of government bonds or annuities – have done better, except for the fact that companies sponsoring such plans are shutting them down.

That leaves the biggest recipients of DB pensions, whose employers are insulated from market volatility. Yes, government employees. The UK spends about £20 billion a year on central government pensions according to the National Audit Office. That’s roughly the same as the annual benefit that the UK government has enjoyed from QE since 2007, according to McKinsey’s data.

In the US, Federal employee pensions and healthcare benefits cost about $85 billion per year according to the Congressional Research Service. That’s 40 percent of the US annual QE benefit identified by McKinsey. In a sense, the ‘taxpayer benefit’ of QE amounts to raiding the savings of non-DB households and paying it straight to these government retirees. (See Note 1)

These figures don’t include the benefits of local government employees but arguably the same wealth transfer is taking place there too. Although policymakers in the UK and US have attempted to cut public sector payrolls and trim DB pensions, the results have been limited so far. Without QE, higher government borrowing costs would dramatically increase the pressure for reform.

Then look at the impact of QE on asset prices. The McKinsey report says it is hard to quantify the impact on equities and real estate, but the boost in the value of government bonds is obvious. Did that compensate those households who lost out from QE?

The answer is only very selectively. The FT’s Gillian Tett highlights a study by graduate student Sandy Hager who finds that the richest 1 percent of Americans own 42 percent of privately-held US Treasury bonds. Although Tett doesn’t make the connection with QE, this figure seems to support the argument that QE has enriched a minority.

What these numbers suggest is that there are two constituencies that have benefited most strongly from QE – government employees and very rich people. These vocal constituencies may have the most to lose from QE ending. Perhaps that explains why central bankers are in no hurry to upset the apple cart they created.

 

Note 1) It might be argued that only the unfunded part of these pension payments is a true wealth transfer, because the rest is funded by employee contributions.

Note 2) The McKinsey report considers the economic benefits of additional consumption resulting from the wealth effect as QE boosts asset values. In other words, the windfall spent by government retirees and rich holders of bonds and real estate trickles down to the unluckier households.

 

The New York Fed, Goldman and Regulating Conflicts

October 17th, 2013

In a week dominated by the government shutdown, the most interesting financial regulatory story to come out of the US was the lawsuit by former employee Carmen Segarra against the New York Fed. The lawsuit, which was first reported by ProPublica on October 10 and widely covered afterwards, alleges that the NY Fed unjustly fired Segarra because she refused to agree that Goldman Sachs complied with a Fed requirement that it should have a firm-wide conflict-of-interest policy.

The lack of such a policy became apparent to Segarra in late 2011 when she examined Goldman’s role in advising energy company El Paso on its sale to Kinder Morgan, a company in which Goldman held a stake. The filing implies that Goldman exerted pressure on Segarra via her New York Fed colleagues, including relationship manager Michael Silva, a powerful figure who used to be Tim Geithner’s chief of staff and whose name appears in many emails relating to the bailout of AIG.

Since we have only one side of the story, we can’t say too much about the lawsuit beyond what has already been reported. The New York Fed is refusing to say much on the grounds that its relationship with Goldman is confidential, and on October 15 lost a bid to have the court proceedings sealed. Goldman says it does currently have a conflict-of-interest policy, and that it doesn’t have knowledge of internal Fed discussions (Interestingly, the PR who told this to Bloomberg, Andrew Williams, was a NY Fed spokesman until January 2009).

However, when I read the lawsuit I was reminded of what I wrote in Chapter 5 of The Devil’s Derivatives. According to a source who used to work in the market risk team at the Federal Reserve Board in Washington DC, the New York Fed’s relationship managers for large Wall Street banks such as JPMorgan and Citigroup were uncomfortably close to these banks before the financial crisis. My source listed examples where these relationship managers either blocked access by the FRB to these banks or dismissed FRB concerns about risk – concerns that were vindicated in the crisis.

Given these issues one might have expected the New York Fed to have its regulatory autonomy reined in by the FRB after the crisis. Instead, the NY Fed found its importance vastly increased as Goldman and Morgan Stanley became bank holding companies and thus fell directly under NY Fed supervision. Sarah Dahlgren, a relationship manager strongly criticised by my source, was promoted to run the NY Fed’s expanded bank supervisory team, a key Wall Street role that arguably makes her the second most-powerful woman in the Federal Reserve System after Janet Yellen.

This expansion in powers shouldn’t concern anyone, according to the official version of events. According to this narrative, the FRB’s point man on bank supervision, Daniel Tarullo, has fixed the kinds of problems highlighted in my book and centralised FRB control of regional Fed bank supervisors. In a recent speech, Dahlgren – who is Silva’s boss – gives the impression that the NY Fed is moving in lockstep with Tarullo, improving its supervision of Wall Street banks and setting an example for regulators globally.

That is why Segarra’s lawsuit and ProPublica’s story are well-timed: they blow a hole in this convenient narrative, one that already took a knock when the NY Fed failed to spot the compliance failures that led to the London Whale’s losses. They suggest an alternative narrative in which the NY Fed followed Tarullo’s party line by hiring risk specialists like Segarra, only to punish them once they identified problems at a powerful client that Silva the relationship manager was determined to protect. They suggest that the insularity and determination of the NY Fed to protect its fiefdom and stay friends with its ‘clients’ is as problematic as ever. And they suggest that, as much as Goldman, the NY Fed needs to have a conflict-of-interest policy too.

The dilemma for the NY Fed is that to disprove this narrative in court, it may have to annoy Goldman by disclosing more information it promised would stay confidential. Then again, at least that would demonstrate its independence from the banks it regulates.

Confidence, Doubt and Climate Change

October 4th, 2013

Twenty-four years ago I was a graduate student studying earth and planetary science at Harvard. My advisor, Professor Mike McElroy, was an expert on atmospheric chemistry and I helped him teach a course to 300 undergrads in what was known as the ‘core curriculum’. As a special treat for the students, McElroy invited his friend, Senator Al Gore, to give a talk.

Gore delivered an explosive presentation about the greenhouse effect and climate change (it amounted to an early draft of his film An Inconvenient Truth). At a reception for Gore in the department common room afterwards I got a feel for how important the subject was to my professor and his fellow academics. The 1987 Montreal Protocol to address manmade ozone depletion was a template for what was to come: the science of global warming was becoming established, and a massive global policymaking campaign would soon follow.

The gossip I overheard was that $1.1 billion was about to be appropriated by Congress for climate research over the next decade. “Get your NSF applications in now,” someone said. For meteorologists, chemists, oceanographers and other scientists dependent on government funding, the bandwagon was about to become a gravy train.

Then I attended a talk by MIT professor Richard Lindzen, a brown-bearded expert on cloud physics. He was known as the ‘prince of darkness’ in the Harvard planetary science department and he spent his lecture debunking what he argued were exaggerated global warming predictions. Lindzen seemed particularly hostile to Gore whom he described as a ‘demagogue’.

I was reminded of this episode when I read last week’s report by the Intergovernmental Panel on Climate Change, the scientific body set up to advise policymakers on global warming. The report was the fifth published by the IPCC and some of the characters I met at Harvard are still around. Al Gore shared a Nobel peace prize with the IPCC at the time of its fourth report in 2007. Lindzen has been adopted by climate sceptics and recently lambasted the IPCC’s latest document as vociferously as I remember him doing 24 years ago.

What’s interesting is the difference between the fourth and fifth reports (called AR4 and AR5 by the IPCC). Six years ago, AR4 trumpeted a ‘robust finding’ that global mean surface temperatures would increase in the near term by 0.2°C per decade. Last week, AR5 conceded that the actual increase per decade observed over the past 15 years was only 0.05°C, in what it delicately calls a ‘hiatus’ in the predicted trend. Expressed statistically, the observations fell outside the 95th percentile of the range of temperature predictions – an embarrassing signal of model failure. (The accompanying chart, taken from AR5, shows model predictions in grey and observations in red.)
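As a toy illustration of that percentile check (with an invented model ensemble, not the actual model runs behind the AR5 chart), one can ask where an observed trend of 0.05°C per decade sits within a distribution of modelled trends:

```python
# Where does the observed decadal trend sit within a model ensemble?
# The ensemble here is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
model_trends = rng.normal(0.2, 0.07, 100)  # hypothetical ensemble, degC/decade
observed = 0.05                            # AR5's observed trend

rank = (model_trends < observed).mean() * 100
print(f"percentile rank of observed trend: {rank:.0f}")  # low single digits
```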

Although this failure of prediction has been seized upon by the sceptics, it’s only a part of the global warming evidence assembled in AR5. For example, even including the hiatus, the last 30 years were the warmest such period in eight centuries. The IPCC also identifies increasing variance in climate which is arguably more important than trends in averages, because it increases the likelihood of extreme weather events. However, the failure to get the trend in the average right seems to have chastened the IPCC, and this can be seen in the two reports’ respective treatment of uncertainty.

There are three major types of uncertainty in climate modelling. The historical state of the climate system is uncertain because of observational gaps and errors. The myriad physical, chemical and biological mechanisms that fit together in climate models are uncertain. Finally, the predictions that come out of models based on uncertain mechanisms and uncertain inputs are also uncertain.

Starting in AR4, the IPCC adopted a two-track approach to describing uncertainty. For stuff it was relatively sure about, it proposed a quantitative scale of beliefs in different findings. For example, the assertion that the last 30 years were the warmest in eight centuries is ‘very likely’ according to the IPCC, which means at least a 90 percent chance of being true. Then there is a more qualitative measure of belief in a finding based on the consistency of evidence and agreement among scientists, ranging from ‘very high confidence’ to ‘very low confidence’.

By counting the number of times the different levels of confidence appear in the text of AR4 and AR5, you get a striking impression of how the belief of IPCC scientists has changed in the past six years (see histogram).
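For readers who want to replicate the exercise, here is a minimal sketch of the counting; the file names are placeholders for text extracted from the two reports.

```python
# Tally how often each IPCC confidence label appears in a report's text.
import re
from collections import Counter

LABELS = ["very high confidence", "high confidence", "medium confidence",
          "low confidence", "very low confidence"]

def confidence_histogram(text: str) -> Counter:
    text = text.lower()
    counts = Counter({label: len(re.findall(re.escape(label), text))
                      for label in LABELS})
    # "very high confidence" also matches "high confidence", so correct
    # the double counting (likewise at the low end of the scale).
    counts["high confidence"] -= counts["very high confidence"]
    counts["low confidence"] -= counts["very low confidence"]
    return counts

for name in ["ar4.txt", "ar5.txt"]:  # placeholder file names
    with open(name, encoding="utf-8") as f:
        print(name, confidence_histogram(f.read()))
```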

In 2007, when the IPCC and Al Gore won their Nobel prize, only 7% of qualitative assessments were ‘low confidence’ or worse. By 2013, that proportion had increased to 28%.

An example of the IPCC’s newfound caution appears in the section discussing the notorious warming hiatus. The scientists think that the hiatus was caused by a combination of two things – natural variability in the climate (the oceans absorbed more heat than expected) and a downturn in solar forcing of the atmosphere connected to sunspot cycles. But the scientists’ belief in this explanation is characterised as ‘low confidence’!

At least this is healthier than the cockiness that infected AR4, and the IPCC seems to make a coded admission of this when it says in AR5: “In the case of expert judgments there is a tendency towards overconfidence both at the individual level and at the group level as people converge on a view and draw confidence in its reliability from each other”.

While much of the science of global warming is sound, the IPCC scientists have learned the hard way that only by putting their doubts and disagreements on display can they fight the impression that they are lobbyists for their own gravy train.

ICAP, the half-way house on the road to Libor reckoning

September 27th, 2013

What are we to learn from the charges that came out against interdealer broker ICAP and its staff over the manipulation of Libor rates? In three separate regulatory complaints, from the UK Financial Conduct Authority, US Commodity Futures Trading Commission and Department of Justice, once again we read emails and chat logs outlining intent to collude or deceive in setting Libor. We learned of a new nickname, ‘Lord Libor’, the nom-de-guerre of ICAP cash broker Colin Goodman, named along with two colleagues in the DOJ’s criminal complaint.

The Libor investigations began in 2012 and have already led to hefty settlements at Barclays, RBS and UBS, while Deutsche Bank, Rabobank and Citigroup are yet to come. I would characterise ICAP as a half-way point.

Looking backwards to what we already knew, the complaints further clarify the central role of former UBS and Citigroup trader Tom Hayes, known as ‘Rain Man’. Hayes was named in a DOJ complaint last December and was charged by a London court in June.

The complaints also emphasise the importance of interdealer brokers in facilitating Libor manipulation. Because of the way Libor is set by averaging contributions from panel banks and excluding outliers, a single bank can only have a limited impact on the daily fixing.
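A toy version of the trimmed-mean fixing shows why. The panel quotes below are invented, and the quartile trim mirrors the published Libor methodology only loosely:

```python
# Libor-style trimmed mean: drop the top and bottom quartile of panel
# submissions (all invented, in percent), then average what remains.

def fixing(submissions):
    s = sorted(submissions)
    k = len(s) // 4              # quartile to trim at each end
    middle = s[k:len(s) - k]
    return sum(middle) / len(middle)

panel = [0.55, 0.56, 0.56, 0.57, 0.57, 0.58, 0.58, 0.59,
         0.59, 0.60, 0.60, 0.61, 0.61, 0.62, 0.62, 0.63]
honest = fixing(panel)

# One bank pushes its quote 10bp higher; the outlier is trimmed away,
# and the fixing moves by only a fraction of a basis point.
rigged = [0.65] + panel[1:]
print(f"shift from one bank: {fixing(rigged) - honest:+.4f}")  # +0.0050
```

Hence the value to a trader of nudging several contributors at once, which is where the brokers came in.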

In its 2012 complaint against Hayes, the DOJ identified examples where UBS singlehandedly moved Japanese Yen Libor by one-eighth of a basis point, generating a few hundred thousand dollars profit for Hayes on a trade. However, to earn the $260 million in profit he made for UBS between 2006 and 2009, Hayes needed other banks to push Libor in the direction he wanted.

Traders at some banks, such as RBS, helped Hayes do this at his request, but brokers were more useful because they could influence other banks without them twigging what was happening. The way it worked was that brokers such as Goodman provided so-called ‘run-throughs’, or daily emailed assessments of Libor levels.

These run-throughs were supposed to be an independent view of the money market, but in reality Goodman was controlled by Hayes via the ICAP yen derivatives desk, which Hayes rewarded with ‘bro’, or brokerage, fees. The quasi-comic aspect of the ICAP complaints is Goodman’s increasing impatience over time, as he grew tired of free curry and champagne and asked for cash payments instead.

Goodman’s activity highlights once again the tail-wagging-the-dog role of over-the-counter derivatives compared to cash markets. Libor was originally developed as a benchmark for London’s offshore dollar deposit market. From 2007, unsecured interbank lending dried up in the financial crisis, while derivatives referencing the benchmark continued to grow.

As the UBS and RBS complaint documents already show, the big banks effectively created figures like Hayes by giving their derivatives traders control of Libor submissions. ICAP did no more than mirror what some of its largest clients were doing for profit.

Given that there are an average of 2,000 new interest rate derivative trades per day according to data from the Depository Trust & Clearing Corporation, compared with a handful of interbank Libor loans according to BNP Paribas (see Note 1), the incentives to do this are not going away.

The charges are likely to raise questions about the role of brokers such as ICAP as independent price reference points for the over-the-counter derivatives market. Bank risk departments often use third-party broker quotes to check the accuracy of their traders’ marks, but JPMorgan’s failure to properly do this was highlighted in the recent London Whale scandal. If these third-party quotes are tainted by the dependence of brokers on dealer fees, then the functioning of OTC derivatives is called into question.

This is particularly relevant to ISDAfix, a benchmark controlled by ICAP and used for pricing numerous interest rate derivatives such as constant-maturity products. The FCA and CFTC are currently trawling through millions of emails and messages relating to ISDAfix. If evidence of wrongdoing at ICAP is found here (and none has emerged so far) then regulators will be under pressure to curtail the market even more aggressively.

Indeed, we may never know how much manipulation really takes place under the current system for trading OTC derivatives. Comically, Hayes and his co-conspirators used instant messages to warn each other not to be too obvious in documenting requests for manipulation. What about traders who were more circumspect?

In the absence of self-incriminating messages, regulators are going to have to depend on statistical analysis to prove that market manipulation or collusion took place. Dubbed ‘forensic economics’ by Justin Wolfers, this is a burgeoning and fashionable area of study, and has only just started to be applied to Libor.

In a 2012 working paper, Connan Snider and Thomas Youle deploy an economic model of Libor rigging which they test against historical data. They find evidence of manipulation in the form of bunching together of Yen Libor submissions, but their research doesn’t single out UBS and doesn’t take brokers into account.

Contradicting what ICAP and the banks have said so far, Snider and Youle also say that their research implies that manipulation is systematic and not a question of a few ‘bad apples’ at particular institutions. This is relevant to Hayes, who in his forthcoming trial is likely to point to senior bankers who condoned his activities.

The dog that hasn’t barked so far is the question of who was a victim of market rigging. After all, if Hayes made $260 million for UBS partly by manipulating Yen Libor, does that mean UBS counterparties who traded with him are victims? Without full disclosure of Hayes’ trading records, it’s hard to say.

A suit brought by the City of Baltimore against Libor-setting banks was largely dismissed by a US court in March. In the UK, Barclays faces an important civil test case from Guardian Care Homes, which has succeeded in including Libor manipulation as part of an interest-rate swap mis-selling complaint. But how much of Guardian’s losses were the result of Libor rigging, as opposed to moves in sterling interest rates?

These are the questions we have to look forward to as the Libor scandal passes its half-way point.

 

(1) “We always kept our fixed income trading business separate from the treasury, so the treasury which was quoting Libor had no interest in the bonus pool of the traders. Today, we have a problem because there are far fewer transactions which makes it difficult to quote. Sometimes, for USD 3m Libor we only have four transactions per week, something like that. It’s difficult to give a quote every day”. – Philippe Bordenave, chief operating officer of BNP Paribas, Bloomberg Risk interview, May 2013

JPMorgan vs. Fukushima

August 30th, 2013

The news of JPMorgan’s increasing legal problems, coming at the same time as news that Japan’s Fukushima nuclear plant had begun leaking highly radioactive water more than two years after the original meltdown in 2011, inspired me to extend an analogy I made at the time.

The big banks were effectively factories, taking in assets such as loans or mortgages as raw material and processing them – often several times over – transforming them into structured products for investors. Instead of generating power in the way that Fukushima did, these factories generated money.

Like Fukushima, the banks concentrated risks in a very dangerous way. When the production lines for structured securities seized up from 2007 onwards the result was the spewing of financial radioactivity into the global economy and the near-collapse of numerous large financial institutions.

We know how things went wrong.  The value-at-risk models used to measure risks inside these factories, and the VaR-based shareholder capital that was supposed to buffer the banks against the unexpected, proved woefully inadequate. That was by design, since if the models had been any good, the capital needed to support the securitisation factories would have made them unattractive to shareholders.

The fact that the bankers running the factories were incentivised by the revenues and bonuses they were making to keep using bad VaR models (and thus arbitrage the system), was really a failure of regulation, particularly the Basel rules covering the capital required for trading portfolios.

Regulators tried to fix these deficiencies with so-called Basel 2.5 rules which European banks implemented in 2011 and US banks last year. The results can now be seen in regulatory filings published by the banks.

Regulators didn’t have the guts to throw out VaR completely, but they did beef it up with something called stress VaR, which basically forces a bank’s model to remember how bad things were in 2008. Bank filings from the second quarter of 2013 typically put the capital required for stress VaR at about three times the capital for standard VaR. However, to the degree that bankers have an incentive to keep the risks of financial innovations out of their model, it’s still the same bad old VaR.
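To see the relationship in miniature, here is a toy historical-simulation VaR in Python. The returns are simulated, not real bank data; the point is that the ‘stressed’ number is the same calculation run over a crisis-calibrated window.

```python
# Toy historical-simulation VaR: same percentile calculation, two windows.
import numpy as np

rng = np.random.default_rng(7)
calm_returns = rng.normal(0.0, 0.010, 250)    # a recent, quiet trading year
crisis_returns = rng.normal(0.0, 0.035, 250)  # a 2008-like year of turmoil

def var_99(returns, portfolio=1e9):
    """99% one-day VaR: the loss at the 1st percentile of returns."""
    return -np.percentile(returns, 1) * portfolio

print(f"standard VaR: ${var_99(calm_returns)/1e6:.0f}m")
print(f"stressed VaR: ${var_99(crisis_returns)/1e6:.0f}m")  # several times larger
```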

Perhaps with this in mind, the regulators looked at what happened in 2007-8 and, in particular, how the securitisation factories replicated traditional bank lending by warehousing a lot of credit risk. That’s why we now have the incremental risk charge (IRC), which allows for the fact that loans traded by banks sometimes default, rather than merely declining in price – the part supposedly captured by VaR.

The Fukushima-type financial radioactivity from 2008 is captured by the IRC, as well as two more Basel inventions, the specific risk charge and the comprehensive risk measure (CRM). They cover parts of the investment bank production line such as securitisation, credit derivatives and correlation trading – activities that caused tens of billions of dollars of losses at banks such as UBS, Deutsche Bank, Morgan Stanley, Royal Bank of Scotland, Merrill Lynch and Citigroup.

You get a feel for how important these charges are to chastened regulators when you see how much the charges actually bite. For example, almost half of Goldman’s and Bank of America’s market risk capital comes from the specific risk charge. It’s also telling that firms such as Morgan Stanley or Deutsche Bank are selling off parts of their trading portfolios to try and reduce these charges.

The CRM is interesting for a couple of reasons. Firstly because it’s the one new charge that banks are allowed to model for themselves (Well, not completely, because the Basel Committee spelled out the risks that the banks had to model). Secondly, the CRM cuts to the heart of the credit derivatives innovation bubble set in motion by JPMorgan with its 1997 BISTRO trade celebrated by Gillian Tett in her book Fool’s Gold.

Ironically, the bank with the biggest CRM capital requirement in 2013 happens to be JPMorgan, with a $2.5 billion charge that is more than the other four large US banks combined. It’s most likely that this is a result of the London Whale’s credit derivative trades that lost the bank $6.2 billion last year.

At first sight, this visibility seems like progress. Not only do Basel’s new market risk charges reveal the capital cost of pre-2008 activity still lurking in bank portfolios, but they show, like Fukushima, when the radioactivity starts spewing out all over again.

Unfortunately, by allowing banks to calculate the CRM themselves, Basel left a massive loophole in the system. Suppose that JPMorgan had reported its mammoth CRM charge in early 2012 when the Whale trades were implemented. The bank’s capital ratio would have declined and shareholders might have understood that they were being asked to take additional risk.

As we learned from the March 2013 US Senate report into the Whale fiasco, JPMorgan did something different. It did internally calculate a huge CRM number, and then its quants and traders promptly tried to game the model to bring it down again, arguably deceiving shareholders.

So while Basel may have seemingly caught up with the financial crisis in its last round of tweaks, it left in the modelling incentives that serve as catnip for greedy traders. For bank shareholders, these incentives increase the risk of ‘unknown unknowns’ within banks – in other words, the most dangerous stuff of all.

Now, Basel does have a charge for ‘unknown unknowns’ – it’s called the operational risk charge, which covers Whale-style internal cock-ups and rogue employees as well as external threats like litigation costs or terrorist attacks.

European banks already have to allocate shareholder capital to operational risk under Basel rules. For example, Deutsche Bank allocates €4.16 billion at group level, while Barclays allocates £4.3 billion at group level and £1.9 billion within its investment bank, according to second-quarter filings. You can argue whether this is sufficient to cover things like mis-selling settlements and Libor fixing, but at least it’s there.

US banks on the other hand won’t have to allocate operational risk charges for another couple of years, when they implement Basel III rules. However we do have some inkling of what banks think the charges are likely to be. In its latest quarterly filing, JPMorgan said it would have to allocate an additional $14.1 billion of capital if Basel III was applied today. This includes operational risk as well as other stuff like new charges for derivative counterparty risk (of which JPMorgan has a lot).

Is this sufficient to cover potential Whale scenarios as well as litigation and the whole gamut of operational risks that JPMorgan faces? Given that the bank currently predicts its potential losses from litigation alone to be $6.8 billion, I suspect not.

Comparing JPMorgan’s current available capital (Tier 1 common) of $147 billion to the $127 billion that the bank says it needs under Basel III, you have an effective solvency ratio of 115 percent (see Note 1). All it takes is for JPMorgan’s own estimate to be off by $20 billion or more and the bank is insolvent on an economic basis (see Note 2).
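The arithmetic, following the method in Note 1 below, is simple enough to sketch (figures in $bn; on these rounded inputs the ratio comes out nearer 116 percent):

```python
# Solvency arithmetic per Note 1: required capital = Basel III RWAs / 12.5.
rwa_basel3 = 1587.0            # JPMorgan's reported Basel III RWAs, $bn
required = rwa_basel3 / 12.5   # ~127bn implied capital charge
available = 147.0              # Tier 1 common, $bn

print(f"required capital: {required:.0f}bn")
print(f"solvency ratio:   {available / required:.0%}")   # ~116% on rounded inputs
print(f"buffer:           {available - required:.0f}bn") # ~20bn
```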

You might think that $20 billion is an ample buffer but then again JPMorgan has already paid more than that in legal bills since 2008 – something that no analyst predicted would happen. Returning to the Fukushima analogy, $20 billion seems like an awfully thin defence against the leakage of financial radioactivity contained in a bank like JPMorgan today.

 

 

Note 1: My figure of $127 billion for JPMorgan’s required capital comes from dividing the bank’s reported Basel III risk-weighted assets of $1,587 billion by 12.5. Since RWAs for operational, market and some forms of credit risk are computed by multiplying a capital charge by 12.5 according to Basel rules, I argue that it is more informative to use the capital charge which is directly comparable to available capital, rather than divide available capital by RWAs to produce a misleading leverage ratio. That is also the approach followed by insurance companies that compute economic solvency ratios.

Note 2: By economically insolvent, I mean that the bank doesn’t have enough shareholder capital to cover the cost of a lot of adverse scenarios happening at once. Having a solvency ratio of less than 100 percent is equivalent to a Basel III core Tier 1 ratio of less than 8 percent, which of course is the reciprocal of 12.5.