Fear and liquidity in the high-yield market

August 19th, 2014

Junk bond funds are seeing record outflows. This has been described as a ‘six-sigma event’ – in other words, six standard deviations away from the mean weekly flow, something that, if flows were normally distributed, should be expected only once in millions of years.
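For context, here is a rough back-of-the-envelope check of what ‘six sigma’ would imply if weekly flows really were normally distributed (they aren’t, which is rather the point; the figures below are illustrative only):

```python
# Back-of-the-envelope check: how rare would a six-sigma weekly outflow be
# under a normal distribution? (Illustrative only; real fund flows have
# fatter tails than the normal distribution assumes.)
from scipy.stats import norm

p_six_sigma_week = norm.sf(6)          # one-sided tail probability, ~9.9e-10
weeks_per_year = 52
wait_in_years = 1 / (p_six_sigma_week * weeks_per_year)

print(f"P(six-sigma week): {p_six_sigma_week:.1e}")
print(f"Expected roughly once every {wait_in_years/1e6:.0f} million years")
```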

By definition that’s unusual, but in the context of broader investment in riskier corporate debt, how significant is it? Let’s start by looking at exchange-traded funds that invest in junk bonds. This is the most liquid segment of the market, and market leader BlackRock has spoken of using ETFs to replace traditional bond investing. Since 2011, $26 billion has flooded into high-yield ETFs, $11.5 billion of that in the 12 months to July this year (based on the biggest 35 junk bond ETFs on Bloomberg).

In the five weeks to 8 August, that trend went into reverse, with outflows of $4.2 billion (contributing to the numbers reported in the last two weeks). Although that’s more than 10 per cent of total high-yield ETF investment, in the broader context things look different.

The point is that all the additional demand for high-yield ETFs, as evidenced by two and a half years of inflows, has been easily absorbed by increased supply. Consider the index behind the most popular junk ETF, the iShares iBoxx corporate bond ETF, best known by its ticker, HYG. The underlying index, the Markit iBoxx high-yield bond index, gathers together all dollar-denominated corporate junk bonds with a face value above $400 million.

Since the start of 2012, the number of iBoxx high-yield index constituents has more than doubled to about 960, and the total face value of the index has increased by a similar magnitude, from $376 billion to $808 billion, according to Markit.

During this period the amount of new corporate bond issuance has been nothing short of staggering. Consider this statistic based on SIFMA’s data: by the end of this year (assuming current volumes persist) there will have been a greater face value of high-yield bonds issued in the last three years than over the entire decade leading up to 2008.

With this flood of new issuance, the proportion of high-yield bonds owned by ETFs hasn’t increased. The retail investor can’t even keep up with the volume hitting the market. And yet somehow the yield of the iBoxx index declined from 7.8 per cent to 5 per cent over the same period, a sign that demand has outpaced even this supply.

While non-ETF mutual fund inflows account for some of that yield decline, I suspect most of the demand comes from traditional fixed income investors such as insurance companies and pension funds. Four US insurance companies alone – AIG, MetLife, Prudential and New York Life – own $44 billion of high-yield bonds. That’s more than all ETFs put together. These kinds of investors don’t have to mark their holdings to market unless they become impaired.

And these four insurers pale in comparison with banks whose portfolios are heavily stuffed with high-yield bonds and loans. Consider Deutsche Bank, which held $72 billion of below-investment grade corporate bonds and $95 billion worth of junk-quality corporate loans in 2013 according to its annual report. That’s more than four times the bank’s core equity tier 1 capital.

It will be interesting to see the impact of recent volatility on Deutsche’s VaR numbers. Fortunately for Deutsche Bank, the loans don’t have to be marked to market. These risky assets, funded by cheap central bank liquidity, pay fat yields that pad the bank’s profits. While the bank holds credit risk capital against the loans calculated according to some internal model, I doubt that model includes the volatility of the high-yield ETF market as an input.
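For readers unfamiliar with the metric, here is a minimal sketch of a one-day historical-simulation VaR calculation; the position size and return series are invented for illustration, and a bank’s actual internal model would be far more elaborate (and, as noted above, would not necessarily use high-yield ETF prices as an input).

```python
# Minimal historical-simulation VaR sketch. The return series and position
# size are invented for illustration; real bank models are proprietary and
# far more elaborate.
import numpy as np

rng = np.random.default_rng(seed=42)
# Stand-in for a year or two of observed daily returns on a high-yield book.
daily_returns = rng.normal(loc=0.0002, scale=0.004, size=500)
position_value = 1_000_000_000        # assumed $1bn book

confidence = 0.99
worst_move = np.percentile(daily_returns, (1 - confidence) * 100)
var_one_day = -worst_move * position_value

print(f"1-day {confidence:.0%} VaR on the assumed book: ${var_one_day:,.0f}")
```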

However, the high-yield market turmoil has coincided with reports that banks are offering derivatives-based products to customers that provide them with synthetic exposure to junk bonds and loans using total-return swaps. Sound familiar? If a bank decides that a market is becoming riskier and decides to hedge, usually the first to know is the customer asked to take the other side of the trade.

Volcker Sunlight Should Be the Best Disinfectant

July 25th, 2014

On the fourth anniversary of the Dodd-Frank Act, the big US banks are still black boxes in terms of their trading activity. However, regulators are now getting a bit more information. Over the last month, the big banks have started providing them with so-called reporting metrics under the Volcker Rule, so that the rule’s curbs on proprietary trading can be enforced.

This is a big deal, so it’s worth going through it in some detail. Who are the regulators that get the information? There are four of them – the Fed, the Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation and the Securities and Exchange Commission. Who are the banks covered by the first wave of the rule? There are nine of them that have trading assets above the $50 billion threshold – banks like JPMorgan, Goldman, Morgan Stanley, Bank of America and Citigroup.

Compiled at the trading desk level, seven types of metric have to be provided to regulators, including position limits, risk factor sensitivities, value-at-risk and stress VaR, P&L attribution, inventory turnover and age, and the customer-facing trade ratio.

How many trading desks are there? We don’t know precisely how the banks and regulators will agree to split them out, but PwC estimates that each of the nine banks has between 50 and 125 of them. There are the main asset classes – equities, fixed income, commodities – each of which might be subdivided geographically or by subcategory, plus other stuff such as mortgage servicing rights or credit valuation adjustment hedging. Each desk at each bank has to supply the seven reporting metrics to each regulator.

Like the old children’s rhyme about meeting a man on the way to St Ives, the combinatorics soon gets frightening, and helps explain why the big banks have spent billions and hired tens of thousands of new compliance staff. The regulators too have boosted their numbers, with the Fed and OCC adding about 2,000 staff between them since 2010.
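A rough tally of the reporting streams involved, using the figures cited above (the desk counts are PwC’s estimates, and how desks will actually be delineated is still being agreed):

```python
# Rough tally of Volcker metric streams, using the figures cited above.
# Desk counts are PwC's estimated range; the actual delineation of desks is
# still to be agreed between banks and regulators.
banks = 9
desks_per_bank_low, desks_per_bank_high = 50, 125
metric_types = 7
regulators = 4

low  = banks * desks_per_bank_low  * metric_types * regulators
high = banks * desks_per_bank_high * metric_types * regulators
print(f"Roughly {low:,} to {high:,} desk-metric-regulator combinations")
# i.e. about 12,600 to 31,500 reporting streams each period
```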

Remember that these numbers are good news. Back in 2008, the big banks were all making huge proprietary bets, the regulators were clueless and the financial system nearly collapsed as a result. From this year onwards, with all these Volcker restrictions and monitoring, there should be no excuse. We can hope that the regulators will know how to sift through the mass of data, and will be prepared to argue with banks about things like ‘reasonably expected near-term demand’ for credit swaptions or whatever a desk is accumulating on its books.

Yet if history is anything to go by, there is always the possibility that they won’t know what to do or will shy away from doing it. That’s why I would like the regulators and those who watch over them in US Congress to push things a little further. What they should be doing is finding a way to make some of those Volcker metrics public.

You might ask whether the big US banks don’t already put enough information in the public domain. Don’t they already publish quarterly earnings supplements and SEC filings that run to hundreds of pages? Don’t we already get those profit/loss attributions and VaR numbers?

My response is that while we do get some information from the banks, it’s not nearly sufficient to understand what is happening in their trading businesses. For a start, VaR is only reported at a very high aggregate level, covering entire asset classes compiled from dozens of trading desks. And even then, there’s the possibility that they have calculated the metric wrongly, as JPMorgan did with its London Whale trades. Only by getting multiple metrics at a more granular level can outsiders understand what banks are actually doing.

Without that, we have to rely on ‘guidance’ – what the banks choose to tell us when they explain their disclosures. That’s what Citigroup did with its latest results, where it lost $100 million as a result of what it described as over-aggressive hedging in response to the Ukraine crisis. Presumably Citi thought that was too big to sweep under the carpet.

But what are we to make of this exchange between JPMorgan CEO Jamie Dimon and analyst Mike Mayo on 15 July:

Q – Mike L. Mayo: First, why did trading do better than the guidance earlier in the quarter? And did that relate to the 8% increase in trading VaR, which was a little surprising since volatility is still so low?

A – Jamie Dimon: The VaR jumps around, but some of that jumping around is really think of underwriting positions, CMBS, warehouse positions and stuff like that which come and go.

Q – Mike L. Mayo: Okay. I always prefer more guidance than less.

The key words in Dimon’s nothing-to-see-here answer are ‘underwriting’ and ‘warehouse’. Clearly JPMorgan took on some additional risk because it had some customers lined up, and made some extra money. But what if conditions changed and the customers backed out? To avoid a prop trade, it would have had to sell at a loss.

With Volcker metrics disclosed at the trading desk level, outsiders would have a much better understanding of risk taking at JPMorgan and other too-big-to-fail banks.

Now the banks might respond by saying that too much disclosure would reveal their positions to hedge funds, which could front-run them. But that could be dealt with using a time lag. For example, disclosures in 10-Q filings appear about 5-6 weeks after quarter end. And the banks have already provided additional disclosure in the last few years, such as sovereign debt positions in response to concerns over their Eurozone exposure. A little more shouldn’t hurt them.

So is there any chance of this actually happening? One person who did ask the question about public disclosure was former Fed governor Sarah Bloom Raskin, at a hearing on 10 December 2013. According to lobby group SIFMA, which attended the hearing, Raskin questioned staff on whether there was “any value in developing a disclosure regime that reveals to the public, the nature of trading activity at banks and that demonstrates, clearly, how such trading activity may be complying with the Volcker Rule.”

This was the answer from Fed staff: “I don’t think we have anything in the rule that would provide for public disclosure of the amount of trading that’s occurring in that vein.”

Is that the end of the matter? In March, Raskin was confirmed in a new role as Deputy Secretary to the US Treasury. That should put her in a good position to revisit the issue with the same regulators, hopefully with some congressional backing.

The Mismeasure of Finance

March 3rd, 2014

Britain’s finance industry is believed to be so vital to our interests that the UK government has apparently ruled out denying the Russians access to it.

Being beholden to this sacred cow makes some people uneasy – among them Sir Howard Davies, who published a comment article last week warning about the damaging impact of unrestricted financial sector growth. As founding chairman of the FSA, Davies played a not insignificant role in creating London’s disastrous light-touch regulatory regime, so perhaps he now feels a sense of responsibility to put things right.

Davies’ article reminded me of Diane Coyle’s excellent new book on GDP (my review will be out soon) where she dissects the way the finance sector contributes to national accounts.

Consider the UK’s GDP figures for the fourth quarter of 2008, when the UK financial sector showed record growth, outstripping the contribution of manufacturing to the economy. This was at the same time as the sector required massive government bailouts, putting the UK economy into a tailspin. As Coyle points out, this absurdity means that something must be wrong.

The problem comes from the way financial services output is measured. While fee-based services (such as debt underwriting) are straightforward, most banking services are not explicitly charged to customers. For example, the prices of deposit-taking and lending are embedded in the interest rates that banks set for these services.

As Coyle relates, the economists’ solution to this conundrum was an ugly acronym called FISIM, or ‘financial intermediation services indirectly measured’. This is calculated as the difference between the interest rate banks earn on loans or pay on deposits and a benchmark risk-free rate, multiplied by the outstanding balance. FISIM is recognised as the international standard for measuring bank contributions to GDP (see Note 1).
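To make the mechanics concrete, here is a stylised FISIM calculation for a hypothetical bank; the balances and rates are invented, and the margin is applied to loans and deposits separately, following the standard convention:

```python
# Stylised FISIM calculation for a hypothetical bank (all figures invented).
# The margin over a reference rate is applied to loans and deposits
# separately, following the standard convention.
loans        = 100e9    # outstanding loan balance
deposits     = 80e9     # outstanding deposit balance
loan_rate    = 0.06     # average rate charged on loans
deposit_rate = 0.01     # average rate paid on deposits
reference    = 0.03     # benchmark 'risk-free' reference rate

fisim_loans    = (loan_rate - reference) * loans        # margin earned on lending
fisim_deposits = (reference - deposit_rate) * deposits  # margin earned on deposit-taking
fisim_total    = fisim_loans + fisim_deposits

print(f"FISIM counted towards GDP: ${fisim_total/1e9:.1f}bn a year")
# Note that nothing here adjusts for the credit risk the bank has taken on.
```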

The trouble with FISIM is that it doesn’t account for risk. Before the 2008 crisis, banks’ increased risk-taking and leverage was counted as ‘growth’ in GDP. Since then, studies have shown that adjusting for risk-taking would reduce financial sector contributions to GDP by 25-40%. If the financial sector contribution to GDP is largely a statistical mirage, then why isn’t the methodology changed?

The depressing answer is that by boosting the importance of financial services in national accounts, FISIM became a self-fulfilling prophecy: policymakers became captured by financial services because the numbers said it was important. Why else, two years after the Bank of England published papers criticising the methodology, did incoming governor Mark Carney deliver a speech in October praising finance as contributing 10% of UK GDP?

Such entrenchment of flawed measuring sticks shows why genuine reform of finance is so difficult.

 

Note 1) As well as fees and FISIM there is also net spread earnings (NSE) from trading on behalf of clients. Since no-one actually buys a product called FISIM, government statisticians create an imaginary customer that buys it as an intermediate input and adjust overall GDP statistics accordingly.

Disruptive Business Models, Uber and Plane Crashes

January 30th, 2014

Disruptive internet-based business models have upended traditional industries like recorded music, newspapers and retailing. The latest flurry of innovation involves start-ups that take a service traditionally provided by a regulated firm – such as a hotel or taxi company – and transform it into commission-paying transactions between buyers and sellers. Accessed via smartphone apps and ‘regulated’ by user reviews, these new services are compelling at first sight.

AirBnB is an example that is already alarming the hotel industry and local governments. Another is Uber, the online ride-sharing firm that was recently valued at $4 billion. Uber has gained notoriety because its pricing of taxi rides skyrockets on high-demand days such as New Year’s Eve, but the company insists its market-clearing role absolves it of any opprobrium. Indeed, you might say that Uber’s growth and popularity speak for themselves.

However, one issue that is less well understood is that of safety and risk. Uber recently attracted its first lawsuit after one of its drivers ran over and killed a six-year-old pedestrian, while another Uber driver reportedly attacked a passenger (of course the company says it doesn’t have any drivers as such – they are all freelancers). So does the disruption of traditional ways of providing services make us more or less safe?

A report into a plane crash in Ireland provides a potential answer. At first sight airlines ought to be a ripe target for disruptive business models. They house a multitude of functions under a single corporate roof – why not let the market experiment with new models of delivering plane flights?

This was the approach of Manx2, a company that provided regional flights between non-hub airports in the UK and Ireland. I write ‘provided’, but what Manx2 actually did was sell tickets. For each particular route, Manx2 then contracted with a plane operating company to fly the passengers.

On its route from Belfast to Cork, Manx2 used a Barcelona-based company called Flightline. Flightline provided the planes and pilots and was responsible for safety. Early on 10 February 2011, 10 Manx2 passengers boarded a turboprop plane in Belfast with two pilots. When they arrived at Cork, the runway was obscured by fog. After making two abortive attempts to land and considering going to an alternative fog-free airport, the pilots made a third attempt and crashed the plane, killing themselves and four passengers.

This week, Ireland’s Air Accident Investigation Unit published a lengthy report into the crash. It concludes that Flightline’s pilots were tired, were unprepared for the route they were flying, and made some fundamental mistakes. Flightline’s ground crew also failed to spot a technical flaw in one of the plane’s engines that contributed to the crash.

The Spanish regulator that oversaw Flightline had no clue that the crew, who had trained and been accredited in sunny Spanish climes, were working remotely for Manx2, flying to fogbound Irish airports. And the passengers who bought tickets from Manx2, which the report says was ‘portraying itself as an airline’, had no clue about the risks they were taking by flying in a plane run by a freelance operator. Reading the report, it’s hard not to get the impression that the virtual airline business model of Manx2 was partly to blame for what happened.

As disruptive start-ups like Uber continue to grow and transform traditional services, such questions of risk and regulation are likely to become more important. In virtual models, managing the risks of linkages becomes as important as those of institutions. The providers of start-up capital may reap the upside in this new world. It’s important that they share the additional costs of regulating the downside.

Taper Tantrums, Bank Lending and Emerging Markets

January 7th, 2014

As quantitative easing measures or asset purchases by central banks are scaled back or ‘tapered’, how might big international banks be affected? This was the question prompted by a bank risk manager who responded to one of my recent posts by writing:

“You should be looking at plain vanilla lending, across many types of loans, including to sovereigns. The next systemic problem is brewing there fast, supported by the very loose monetary policies around the world. It will happen sooner than many expect.”

Let’s start by considering a subset of bank lending, namely to emerging market economies – the BRICs, MINTs and others – by developed-nation institutions. While I don’t downplay the importance of emerging market-based lenders – such as Chinese or Indian banks – any cross-border contagion is likely to be channelled via multinational banks.

A starting point is to look at the impact of last summer’s so-called ‘taper tantrum’, or sell-off prompted by Fed discussion of tapering that began in May 2013. In a recent working paper, Barry Eichengreen and Poonam Gupta analysed the effect on exchange rates, reserves and equity prices for emerging markets and identified some interesting differentiating factors. In particular, countries with the deepest, most liquid markets such as Brazil, India, Turkey and South Africa saw the biggest exchange rate declines.

What about the effect on banks? For the most part, they seem to have avoided getting caught by the summer sell-off in liquid emerging market assets, which isn’t a surprise given the regulatory disincentives to hold such stuff on their balance sheets. Bank lending on the other hand is supposed to be insulated from market swings, and losses don’t have to be recognised until loans become impaired. However, Basel rules require banks to reflect a souring credit environment by increasing the risk-weightings of loans, and thus the equity capital cushion they need to hold.

Indeed, HSBC provides a useful example of how the taper tantrum affected risk weightings. In its third-quarter results, the bank said that a credit rating downgrade for Hong Kong increased its risk-weighted assets by $8.7 billion. As it happens, this effect was more than offset by reductions in RWAs as a result of improved credit conditions in Europe and the US.

Even without the offsets, the Hong Kong effect was small in the context of HSBC’s $1.1 trillion total RWAs and 13.3% Basel 2.5 core tier 1 ratio. It would take more than 40 such downgrades to reduce that ratio below 10%. However, past experience shows that emerging market crises, such as 1991 or 1997, produce dozens of downgrades clustered together.
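The arithmetic behind that figure is straightforward; here is a quick sketch using the numbers quoted above (and the simplifying assumption that the capital base stays constant):

```python
# Rough reproduction of the 'more than 40 downgrades' arithmetic, using the
# HSBC figures quoted above and assuming the capital base stays constant.
total_rwa   = 1_100e9          # total risk-weighted assets, ~$1.1trn
ct1_ratio   = 0.133            # core tier 1 ratio, 13.3%
ct1_capital = total_rwa * ct1_ratio

rwa_per_downgrade = 8.7e9      # RWA add-on from the Hong Kong downgrade
target_ratio = 0.10

downgrades_needed = (ct1_capital / target_ratio - total_rwa) / rwa_per_downgrade
print(f"Similar downgrades needed to breach 10%: about {downgrades_needed:.0f}")
# prints roughly 42
```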

A full-blown tapering crisis might be uncomfortable for the likes of HSBC, even with the benefit of US growth on its balance sheet. It would be even less pleasant for banks with thinner capital cushions and more concentrated emerging market exposure, such as Standard Chartered or Santander.

At first sight, the US banks, whose balance sheets are dominated by the rebounding US economy, are better positioned for tapering (indeed, every data point supporting a US rebound will make tapering happen faster). Still, things could get tough at Citigroup, the US bank most exposed to emerging market lending.

And let’s not forget the wild card of derivatives. Remember the cross-currency swaps with Korean banks that gave JP Morgan a nasty shock in 1997? Basel credit valuation adjustments would probably amplify the impact of emerging market derivatives this time round.

Of course, bank CEOs tend to dismiss concerns about emerging market taper risk when questioned by analysts. Risk managers in the same institutions are privately more bearish, including the person who sent me the email above.

But let’s defer to the ultimate arbiter, the Federal Reserve. The Fed quietly updated its bank stress test methodology in November to crank up the severity of a hypothetical Asian downturn.

“The sharp slowdown in developing Asia featured in this year’s severely adverse scenario is intended to represent a severe weakening in conditions across all emerging market economies”, the Fed said. While the Fed insists that its scenarios aren’t forecasts, the change in methodology is telling. If the Fed hasn’t an inkling of what’s coming, who does?

Most-read articles of 2013

December 30th, 2013

As the year ends, I thought it would be interesting to share with you the most-read articles on this website for 2013. Unsurprisingly, the review of Nassim Taleb’s book Antifragile takes first place. Published just over a year ago, this review not only got the most views in the past year, but is the most-read article of all time on this site. My review of Daniel Kahneman’s Thinking, Fast and Slow was published in May 2012 but takes second place. The eighth-ranked article is also interesting. When I published my review of Hyman Minsky’s Stabilizing an Unstable Economy in July 2010, almost no-one viewed it. This year’s resurgence surely must be connected to the appointment of self-proclaimed Minskian Janet Yellen to the chairmanship of the Federal Reserve.

Let me take the opportunity to thank you for sending me comments and feedback (please keep it coming), and to wish you the best for 2014.

1. Review of Nassim Taleb’s Antifragile (December 2012)

2. Review of Daniel Kahneman’s Thinking, Fast and Slow (May 2012)

3. The New York Fed, Goldman and regulating conflicts

4. Cyprus, land of the structured product

5. JP Morgan vs. Fukushima

6. Confidence, doubt and climate change

7. Bank leverage, Cyprus and the lessons of the crisis

8. Review of Hyman Minsky’s Stabilizing an Unstable Economy (July 2010)

9. Regulatory tussle over bail-ins, depositors highlights complexity of bank debt

10. Goldman swap shows Greece was Europe’s subprime nation

Volcker’s CDO Smackdown

December 24th, 2013

If there’s one financial innovation that was toxic from start to finish it has to be the collateralised debt obligation (CDO). Bankers will say things like ‘there are good CDOs as well as bad ones’ or ‘it is an appropriate risk transfer tool in the right circumstances’. What nonsense.

As I discuss at length in The Devil’s Derivatives, the CDO became a vehicle for corrupting bankers and rating agencies, fooling regulators and misleading investors. One of the worst variants, the CDO of asset-backed securities, is the poster child for the financial crisis. According to a 2012 study by the Philadelphia Fed, $641 billion of ABS CDOs were issued between 1999 and 2007. The Fed estimates that $420 billion, or 65 percent of this issued amount, was written down by early 2011.
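For readers who have not met the structure before, below is a minimal sketch of how losses on the underlying pool are allocated up through a CDO’s tranches; the attachment points are invented for illustration, and real ABS CDO capital structures were far more complex (and opaque).

```python
# Minimal sketch of loss allocation through CDO tranches. Attachment and
# detachment points are invented for illustration; real ABS CDO structures
# were far more complex.
TRANCHES = [                 # (name, attachment, detachment) as fractions of the pool
    ("equity",       0.00, 0.03),
    ("mezzanine",    0.03, 0.10),
    ("senior",       0.10, 0.30),
    ("super-senior", 0.30, 1.00),
]

def tranche_losses(pool_loss: float) -> dict:
    """Return each tranche's loss as a fraction of its own notional."""
    losses = {}
    for name, attach, detach in TRANCHES:
        absorbed = min(max(pool_loss - attach, 0.0), detach - attach)
        losses[name] = absorbed / (detach - attach)
    return losses

print(tranche_losses(0.08))   # 8% pool loss: equity wiped out, mezzanine dented
print(tranche_losses(0.35))   # 35% pool loss: even the super-senior takes a hit
```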

According to my reckoning, about half of these losses occurred within regulated banks, making ABS CDOs one of the biggest contributors to systemic risk in the crisis. If regulators are worth their salt, they should be keeping CDOs off bank balance sheets, especially as memories of the crisis begin to fade.

Until recently, the only game in town for curtailing bank CDO positions was Basel III capital requirements. While an improvement on catastrophically lax Basel II requirements, Basel III remains mired in the gameable model-driven risk-weighted assets system and contains some of the same incentives that encouraged the rampant growth of CDOs in the first place. Reports of a comeback in synthetic CDOs earlier in the year and recent decisions to water down the rules aren’t exactly grounds for optimism.

That’s why the final version of the Volcker Rule published on 10 December is so encouraging. The five US regulatory agencies apparently agreed that the Basel Committee wasn’t doing enough, and moved decisively to ban US banks from investing in CDOs. The way this was done is interesting.

While the restrictions on proprietary trading and hedging activity have been crafted to curtail the kinds of CDO-related bets that happened at Morgan Stanley in 2007 or JP Morgan in 2012, the most important CDO firebreak is in a different part of the Volcker rule, the part that restricts bank investments in hedge funds or private equity.

What the rule does is include securitisation special-purpose vehicles as a type of ‘covered fund’ that banks aren’t allowed to invest in. Basic securitisations consisting solely of loans or mortgages are exempt from the restriction, but CDOs are not. In other words, irrespective of how Basel rules evolve, US banks are forbidden from retaining tranches of CDOs they originate or purchasing CDOs as assets from other banks, in either cash or synthetic form. For example, JP Morgan’s watershed 1997 Bistro securitisation would probably be banned under the Volcker rule (see Note 1).

This aspect of Volcker has already riled the US banking industry, because it affects a type of CDO that many still hold on their balance sheets. So-called trust-preferred or TruPS CDOs bundled together equity-like securities issued by banks before the crisis, effectively turning them into debt – which was then bought by many of the same banks based on inflated credit ratings. Of $59 billion in TruPS CDOs issued, $21.4 billion was written down by 2011, according to the Philadelphia Fed’s estimates.

Until recently, US banks holding TruPS CDOs have avoided recognising such writedowns by accounting for them in the same way as loans. The Volcker rule changes all that, and last week Zions Bancorp let the cat out of the bag by writing down its positions by $387 million. The American Bankers Association on 23 December threatened legal action against regulators unless this part of the Volcker rule was amended. To some people that sounds a bit like, “we’ll sue you unless you let us keep lying about the value of our CDOs”. (For more background read this informative post on the AFR blog).

However that debate turns out, the US regulators have outflanked Basel in a move that hopefully ensures the sorry history of CDOs won’t repeat itself in the American banking system. It’s a shame that the same can’t be said for European banks and their regulators.

 

Note 1: It’s still possible for US banks to sell CDO tranches directly to counterparties via portfolio default swap contracts without using an SPV. The retained risk on the bank trading book would then be subject to the proprietary trading and hedging restrictions of the Volcker rule.

 

The Hidden Winners of QE

November 15th, 2013

As we approach the fifth anniversary of the start of quantitative easing measures by central banks, McKinsey has produced a helpful report on the winners and losers of QE. The report looks at the impact of asset purchases on the borrowing costs and income of governments, corporations, insurance companies and households.

The headline finding is that governments have been the biggest beneficiaries as a result of lower borrowing costs and remittances from central banks that hold their bonds. Non-financial corporations also did well out of QE, as did US banks that could pay rock-bottom deposit rates and boost interest margins. The big losers were households whose cheaper borrowing costs were outweighed by reduced savings income and annuity rates on defined contribution pensions.

While the near-meltdown of the system that prompted QE in 2008 has receded from people’s memories, the debate has shifted to whether the continued measures – which feel semi-permanent – are benefiting the most vulnerable corners of the economy or are merely increasing inequality by making the rich richer.

Some commentators have pointed to McKinsey’s headline findings as evidence that QE doesn’t only benefit the rich. They do that by conflating ‘governments’ with ‘taxpayers’. This may be naïve, because it ignores a huge principal-agent problem. To understand this, consider the hidden beneficiaries of QE not discussed by McKinsey.

As McKinsey explains, households got hammered because interest on savings or annuities plummeted as a result of QE. That happened because they live in a mark-to-market environment. Those with corporate defined benefit pensions – promised a lifetime income regardless of the price of government bonds or annuities – have done better, except for the fact that companies sponsoring such plans are shutting them down.

That leaves the biggest recipients of DB pensions, whose employers are insulated from market volatility. Yes, government employees. The UK spends about £20 billion a year on central government pensions according to the National Audit Office. That’s roughly the same as the annual benefit that the UK government has enjoyed from QE since 2007, according to McKinsey’s data.

In the US, Federal employee pensions and healthcare benefits cost about $85 billion per year according to the Congressional Research Service. That’s 40 percent of the US annual QE benefit identified by McKinsey. In a sense, the ‘taxpayer benefit’ of QE amounts to raiding the savings of non-DB households and paying it straight to these government retirees. (See Note 1)

These figures don’t include the benefits of local government employees but arguably the same wealth transfer is taking place there too. Although policymakers in the UK and US have attempted to cut public sector payrolls and trim DB pensions, the results have been limited so far. Without QE, higher government borrowing costs would dramatically increase the pressure for reform.

Then look at the impact of QE on asset prices. The McKinsey report says it is hard to quantify the impact on equities and real estate, but the boost in the value of government bonds is obvious. Did that compensate those households who lost out from QE?

The answer is: only very selectively. The FT’s Gillian Tett highlights a study by graduate student Sandy Hager, who finds that the richest 1 percent of Americans own 42 percent of privately-held US Treasury bonds. Although Tett doesn’t make the connection with QE, this figure seems to support the argument that QE has enriched a minority.

What these numbers suggest is that there are two constituencies that have benefited most strongly from QE – government employees and very rich people. These vocal constituencies may have the most to lose from QE ending. Perhaps that explains why central bankers are in no hurry to upset the apple cart they created.

 

Note 1) It might be argued that only the unfunded part of these pension payments is a true wealth transfer, because the rest is funded by employee contributions.

Note 2) The McKinsey report considers the economic benefits of additional consumption resulting from the wealth effect as QE boosts asset values. In other words, the windfall spent by government retirees and rich holders of bonds and real estate trickles down to the unluckier households.

 

The New York Fed, Goldman and Regulating Conflicts

October 17th, 2013

In a week dominated by the government shutdown, the most interesting financial regulatory story to come out of the US was the lawsuit by former employee Carmen Segarra against the New York Fed. The lawsuit, which was first reported by ProPublica on October 10 and widely covered afterwards, alleges that the NY Fed unjustly fired Segarra because she refused to agree that Goldman Sachs complied with a Fed requirement that it should have a firm-wide conflict-of-interest policy.

The lack of such a policy became apparent to Segarra in late 2011 when she examined Goldman’s role in advising energy company El Paso on its sale to Kinder Morgan, a company in which Goldman held a stake. The filing implies that Goldman exerted pressure on Segarra via her New York Fed colleagues, including relationship manager Michael Silva, a powerful figure who used to be Tim Geithner’s chief of staff and whose name appears in many emails relating to the bailout of AIG.

Since we have only one side of the story, we can’t say too much about the lawsuit beyond what has already been reported. The New York Fed is refusing to say much on the grounds that its relationship with Goldman is confidential, and on October 15 lost a bid to have the court proceedings sealed. Goldman says it does currently have a conflict of interest policy, and that it doesn’t have knowledge of internal Fed discussions (Interestingly, the PR who told this to Bloomberg, Andrew Williams, was a NY Fed spokesman until January 2009).

However, when I read the lawsuit I was reminded of what I wrote in Chapter 5 of The Devil’s Derivatives. According to a source who used to work in the market risk team at the Federal Reserve Board in Washington DC, the New York Fed’s relationship managers for large Wall Street banks such as JPMorgan and Citigroup were uncomfortably close to these banks before the financial crisis. My source listed examples where these relationship managers either blocked access by the FRB to these banks or dismissed FRB concerns about risk – concerns that were vindicated in the crisis.

Given these issues one might have expected the New York Fed to have its regulatory autonomy reined in by the FRB after the crisis. Instead, the NY Fed found its importance vastly increased as Goldman and Morgan Stanley became bank holding companies and thus fell directly under NY Fed supervision. Sarah Dahlgren, a relationship manager strongly criticised by my source, was promoted to run the NY Fed’s expanded bank supervisory team, a key Wall Street role that arguably makes her the second most-powerful woman in the Federal Reserve System after Janet Yellen.

This expansion in powers shouldn’t concern anyone, according to the official version of events. According to this narrative, the FRB’s point man on bank supervision, Daniel Tarullo, has fixed the kinds of problems highlighted in my book and centralised FRB control of regional Fed bank supervisors. In a recent speech, Dahlgren – who is Silva’s boss – gives the impression that the NY Fed is moving in lockstep with Tarullo, improving its supervision of Wall Street banks and setting an example for regulators globally.

That is why Segarra’s lawsuit and ProPublica’s story are well-timed: they blow a hole in this convenient narrative, one that already took a knock when the NY Fed failed to spot the compliance failures that led to the London Whale’s losses. The episode suggests an alternative narrative in which the NY Fed followed Tarullo’s party line by hiring risk specialists like Segarra, only to punish them once they identified problems at a powerful client that Silva, the relationship manager, was determined to protect. It suggests that the insularity and determination of the NY Fed to protect its fiefdom and stay friends with its ‘clients’ is as problematic as ever. And it suggests that, as much as Goldman, the NY Fed needs a conflict-of-interest policy too.

The dilemma for the NY Fed is that to disprove this narrative in court, it may have to annoy Goldman by disclosing more information it promised would stay confidential. Then again, at least that would demonstrate its independence from the banks it regulates.

Update: Carmen Segarra’s lawsuit was dismissed by a New York judge in April 2014

Confidence, Doubt and Climate Change

October 4th, 2013

Twenty-four years ago I was a graduate student studying earth and planetary science at Harvard. My advisor, Professor Mike McElroy, was an expert on atmospheric chemistry and I helped him teach a course to 300 undergrads in what was known as the ‘core curriculum’. As a special treat for the students, McElroy invited his friend, Senator Al Gore, to give a talk.

Gore delivered an explosive presentation about the greenhouse effect and climate change (it amounted to an early draft of his film An Inconvenient Truth). At a reception for Gore in the department common room afterwards I got a feel for how important the subject was to my professor and his fellow academics. The 1987 Montreal Protocol to address manmade ozone depletion was a template for what was to come: the science of global warming was becoming established, and a massive global policymaking campaign would soon follow.

The gossip I overheard was that $1.1 billion was about to be appropriated by Congress for climate research over the next decade. “Get your NSF applications in now,” someone said. For meteorologists, chemists, oceanographers and other scientists dependent on government funding, the bandwagon was about to become a gravy train.

Then I attended a talk by MIT professor Richard Lindzen, a brown-bearded expert on cloud physics. He was known as the ‘prince of darkness’ in the Harvard planetary science department and he spent his lecture debunking what he argued were exaggerated global warming predictions. Lindzen seemed particularly hostile to Gore whom he described as a ‘demagogue’.

I was reminded of this episode when I read last week’s report by the Intergovernmental Panel on Climate Change, the scientific body set up to advise policymakers on global warming. The report was the fifth published by the IPCC and some of the characters I met at Harvard are still around. Al Gore shared a Nobel Peace Prize with the IPCC at the time of its fourth report in 2007. Lindzen has been adopted by climate sceptics and recently lambasted the IPCC’s latest document as vociferously as I remember him doing 24 years ago.

What’s interesting is the difference between the fourth and fifth reports (called AR4 and AR5 by the IPCC). Six years ago, AR4 trumpeted a ‘robust finding’ that global mean surface temperatures would increase in the near term by 0.2°C per decade. Last week, AR5 conceded that the actual increase per decade observed over the past 15 years was only 0.05°C, in what it delicately calls a ‘hiatus’ in the predicted trend. Expressed statistically, the observations fell outside the 95th percentile of the range of temperature predictions – an embarrassing signal of model failure. (The accompanying chart, taken from AR5, shows model predictions in grey and observations in red.)

Although this failure of prediction has been seized upon by the sceptics, it’s only a part of the global warming evidence assembled in AR5. For example, even including the hiatus, the last 30 years were the warmest such period in eight centuries. The IPCC also identifies increasing variance in the climate, which is arguably more important than trends in averages because it increases the likelihood of extreme weather events. However, the failure to get the trend in the average right seems to have chastened the IPCC, and this can be seen in the two reports’ respective treatment of uncertainty.

There are three major types of uncertainty in climate modelling. The historical state of the climate system is uncertain because of observational gaps and errors. The myriad physical, chemical and biological mechanisms that fit together in climate models are uncertain. Finally, the predictions that come out of models based on uncertain mechanisms and uncertain inputs are also uncertain.

Starting in AR4, the IPCC adopted a two-track approach to describing uncertainty. For stuff it was relatively sure about, it proposed a quantitative scale of belief in different findings. For example, the assertion that the last 30 years were the warmest in eight centuries is ‘very likely’ according to the IPCC, which means at least a 90 percent chance of being true. Then there is a more qualitative measure of belief in a finding, based on the consistency of evidence and agreement among scientists, ranging from ‘very high confidence’ to ‘very low confidence’.

By counting the number of times the different levels of confidence appear in the text of AR4 and AR5, you get a striking impression of how the belief of IPCC scientists has changed in the past six years (see histogram).
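The counting itself is simple enough to script; a sketch along the following lines, run over the extracted text of each report, is all that is needed (the file name is a placeholder, and extracting clean text from the actual PDFs takes more care than shown here):

```python
# Sketch of counting IPCC confidence qualifiers in a report's text.
# The file name is a placeholder; extracting clean text from the real AR4/AR5
# PDFs takes more care than shown here.
import re
from collections import Counter

LEVELS = ["very high confidence", "high confidence", "medium confidence",
          "low confidence", "very low confidence"]

def confidence_counts(text: str) -> Counter:
    text = text.lower()
    counts = Counter()
    # Count longer phrases first so 'very high confidence' isn't also
    # counted as 'high confidence', then blank them out.
    for level in sorted(LEVELS, key=len, reverse=True):
        counts[level] = len(re.findall(re.escape(level), text))
        text = text.replace(level, " ")
    return counts

with open("ar5_wg1_report.txt") as f:     # placeholder file name
    print(confidence_counts(f.read()))
```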

In 2007, when the IPCC and Al Gore won their Nobel prize, only 7% of qualitative assessments were ‘low confidence’ or worse. By 2013, that proportion had increased to 28%.

An example of the IPCC’s newfound caution appears in the section discussing the notorious warming hiatus. The scientists think that the hiatus was caused by a combination of two things – natural variability in the climate (the oceans absorbed more heat than expected) and a downturn in solar forcing of the atmosphere connected to sunspot cycles. But the scientists’ belief in this explanation is characterised as ‘low confidence’!

At least this is healthier than the cockiness that infected AR4, and the IPCC seems to make a coded admission of this when it says in AR5: “In the case of expert judgments there is a tendency towards overconfidence both at the individual level and at the group level as people converge on a view and draw confidence in its reliability from each other”.

While much of the science of global warming is sound, the IPCC scientists have learned the hard way that only by putting their doubts and disagreements on display can they fight the impression that they are lobbyists for their own gravy train.