Monday, December 08, 2014

Hiring Good Managers Is Hard? Ha! Try Keeping Them

John West, Amie Ko

Our society has become fast-paced, data-hungry, and health-conscious. With advancing sensors and wireless technology, an array of wristband products and phone apps can now track our every movement, monitoring our steps, distance, calories burned, and even sleep patterns. Diligently collecting and analyzing data, portable trackers like Fitbit, Fuelband, Up, and i-Watch can display our progress, reward us when we meet a goal, and encourage us when we fall short. At a whim, we know how we stack up against our goals: The data are at our disposal at any second (so long as the battery’s been charged!).

We have an insatiable urge—or dare we say addiction?—to track and evaluate our stats frequently, whether they pertain to calories burned, daily Fantasy Football rankings, or investment portfolio values. In particular, keeping tabs on our investments has never been easier. The large mutual fund databases can tell us how our funds are performing against peers to the percentile. Popular software programs can run attributions over custom periods to tell us that our manager added x basis points of stock selection effect in the technology sector. And the list goes on. 

But, what do we do with this information? Does it make us better investors? Does it lead to better decisions? Does it enrich our experience as investors? As industry pioneers like Jack Bogle and Burt Malkiel have long argued, the data suggest it is daunting to select, ex ante, managers who will outperform. But even if we do hire them, chances are we won’t stick with them. Even the most sterling of long-term track records is pockmarked with performance potholes, sometimes sizable in length and depth. And in today’s age of a virtually continuous loop of performance measurement, the chance of weathering these stretches for meaningful end-investor benefit seems remote. A rethinking is in order. Ironically, the policies and procedures designed to protect investors from getting “taken” by active management may well make long-term excess returns virtually unachievable. The active management game may be even harder than we thought.

A Handful of Superstars
As Burton Malkiel noted,1  we can count on the fingers of one hand the number of equity mutual funds that have beaten the market by at least 2 percentage points over a period of more than 40 years. In 1970 there were over 350 U.S. equity mutual funds available to investors; of those, only 30% have survived the entire 44-year period. The rest—nearly 250 funds—were merged or liquidated, presumably due to poor track records. 

What are the chances of selecting a fund that has survived the full period and outperformed the S&P 500 Index? Of the initial 358 funds, 45 have both survived and outperformed. Of these long-term outperformers, only three achieved an excess return of 2 percentage points or more. This suggests that the odds of identifying a long-term superstar who outperformed by 2 or more percentage points are a mere 0.8%, a 1-out-of-119 chance. With these odds, you actually have a slightly better chance of collecting a cash prize on the multi-state Powerball lottery (not the Mega Millions Grand Prize, but still a payout!).2  
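For readers inclined to retrace the arithmetic, here is a minimal back-of-the-envelope sketch in Python, using only the fund counts cited above:

```python
# Back-of-the-envelope odds of having picked a 1970-vintage superstar,
# using only the fund counts cited in the text.
initial_funds = 358              # U.S. equity mutual funds available in 1970
survived_and_outperformed = 45   # survived and beat the S&P 500
superstars = 3                   # beat the S&P 500 by 2+ percentage points per year

print(f"Any long-term outperformer: {survived_and_outperformed / initial_funds:.1%}")
print(f"Long-term superstar: {superstars / initial_funds:.1%}, "
      f"about 1 in {initial_funds / superstars:.0f}")
```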

Forty years seems lengthy, but the equity exposure in our retirement portfolios can be meaningful throughout our lifetimes. For instance, 30-year-old workers picking a standard target date fund3 could have a substantial equity allocation of 90% until age 55 and a still-sizeable 30% once they reach age 85.

Can skilled managers be selected in advance? Yes, quite possibly. Keith Ambachtsheer and his co-authors demonstrate that institutional plan sponsors with a disciplined approach to manager selection outperform sponsors with less focus (Ambachtsheer et al., 1998). But other studies, principally Goyal and Wahal (2008) and Jenkinson et al. (2014), show that institutional investors don’t select winners ex ante. So, while it can be done, we acknowledge that the chances of selecting a long-term superstar are slim. 

The Watch List 
With such daunting odds, it makes considerable sense that investors monitor their line-up of actively managed funds. If you’ve got a better than 8-in-10 chance of picking a loser, it’s better to realize the mistake early—before losing the big bucks! 

To monitor their roster of investment managers, institutional investors increasingly rely on an old standard: a watch list policy. How does this policy work? Once a pension fund or endowment hires a manager, it measures the fund against guidelines that generally consist of performance criteria versus a benchmark or peer group over a defined evaluation period. If the manager doesn’t meet the specified yardstick (e.g., its relative performance declines over consecutive evaluation horizons), it is placed on a list for more intensive scrutiny. 

In light of its prevalence, does a watch list policy make the fund monitoring process more effective? How would our sample of long-term funds have fared under a typical policy guideline? A commonly used rule among pension funds is to place “on watch” managers who underperform their benchmark on a rolling three-year basis, updated quarterly. This benchmark-centric guideline has certain advantages: It is objective, easily testable, and free of survivorship bias and other limitations inherent in peer group comparisons (West, 2010). 
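To make the mechanics concrete, here is a minimal sketch of such a screen in Python. The 12-quarter (three-year) window, the quarterly updating, and the benchmark-relative test follow the guideline described above; the function name and inputs are our own illustrative choices, not a standard tool:

```python
import numpy as np

def on_watch_flags(fund_returns, benchmark_returns, window=12):
    """Flag each quarter in which the fund's trailing 12-quarter (three-year)
    cumulative return falls short of the benchmark's. Inputs are equal-length
    arrays of quarterly total returns (e.g., 0.02 for 2%); output is a boolean
    array with one entry per quarter (the first window-1 entries stay False)."""
    fund = np.asarray(fund_returns, dtype=float)
    bench = np.asarray(benchmark_returns, dtype=float)
    flags = np.zeros(len(fund), dtype=bool)
    for t in range(window - 1, len(fund)):
        fund_cum = np.prod(1.0 + fund[t - window + 1 : t + 1])
        bench_cum = np.prod(1.0 + bench[t - window + 1 : t + 1])
        flags[t] = fund_cum < bench_cum
    return flags

# "Time on watch" is then just the share of evaluable quarters that are flagged:
# flags = on_watch_flags(fund_quarterly_returns, sp500_quarterly_returns)
# time_on_watch = flags[11:].mean()
```

Under a screen like this, a manager’s “time on watch” is simply the share of evaluable quarters that get flagged, which is presumably the quantity behind Figure 1 and the correlation reported next.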

Applying this guideline, we find that, on average, winners do indeed spend less time relegated to a watch list than do their lagging peers, as shown in Figure 1. The time that one of Malkiel’s long-term superstar managers spends “on watch” is half the time logged by a dreadful one (e.g., one with a 3% negative excess return over a 44-year period). In fact, under this guideline, the linkage between a manager’s time under watch and the manager’s long-term value-added return appears quite strong, as evidenced by a 78% correlation. So, in addition to clarifying the review process, watch lists do a reasonably good job of discriminating among managers by keeping the laggards under heightened review for longer periods than the stars. 


Keeping The Stars 
But what about the stars: the three rare mutual funds that outperformed the broad market by over 2% per annum over a 44-year period? We’d expect this exceptional breed to make fleeting appearances on the list, testing their clients’ resolve for a mere quarter or two, right? Wrong! 

Applying the common guideline described in the previous section, these long-term winners would have spent a little more than a third of their time on the list. Over the full span, that translates to 61 quarters, or more than 15 years, in total. Ouch! Even Warren Buffett’s Berkshire Hathaway wouldn’t have been immune to watch listing. Delivering a return of 5.5% per annum in excess of the S&P 500 since December 1990, Buffett would have been flagged for scrutiny for longer than four years.

Rather than looking at the funds’ total time on watch over this span, what if we considered their stretches of consecutive quarters on the list? Together, our three mega-winning mutual funds would have experienced 17 spells of continuous underperformance, with a median spell length of nine quarters. Would an investor be committed to retaining any manager, even a top-notch one, who is targeted for scrutiny quarter after quarter after quarter after quarter (repeat that another five times)? 
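Counting such spells is a simple run-length exercise. As a rough sketch, the function below (which assumes the boolean flags produced by the hypothetical screen sketched earlier) groups consecutive flagged quarters into spells and reports their number, median length, and longest stretch, i.e., the statistics discussed here and in Table 1:

```python
from statistics import median

def watch_spells(flags):
    """Group consecutive flagged ("on watch") quarters into spells and return
    the length of each spell, in quarters."""
    spells, run = [], 0
    for flagged in flags:
        if flagged:
            run += 1
        elif run:
            spells.append(run)
            run = 0
    if run:                      # a spell that lasts through the final quarter
        spells.append(run)
    return spells

# spells = watch_spells(flags)
# print(len(spells), "spells; median", median(spells),
#       "quarters; longest", max(spells), "quarters")
```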

As if this median figure of nine quarters were not already grueling enough, consider the longest continuous spell under watch. For these three performance champions, the worst stretches ranged anywhere from 18 to 23 quarters, as shown in Table 1. That is, our superstar funds’ longest bout of three-year underperformance would have lasted 23 quarters, or over five years! After such a formidable stint on the watch list, would investors have still kept these eventual winners? Possible, but unlikely. Not only is five years well past the typical breakpoint at which institutional investors fire a manager, it also coincides closely with the average tenure of an investment committee member (Wimmer et al., 2013; Pavilion Advisory Group, 2012). 

Without doubt, the length of time a manager remains on a watch list is a vexing test of a hiring committee’s patience in enduring maverick risk. We would bet that, at some point, termination is seriously contemplated—if not initiated—as the committee members must continuously report a manager’s lagging results to stakeholders quarter after quarter. One of us, formerly a consultant, witnessed the firing of a longtime watch list denizen, despite a recommendation to the contrary, simply for being on the watch list. “Why have a watch list if we’re not going to use it?” was the committee’s rationale.4 With many boards and committees made up of successful “doers” such as decisive entrepreneurs and problem-solving executives, performance “problems” are unlikely to be ignored long enough for the cycle to turn. 

The reality is that the time horizon of nearly all our collective investment portfolios differs markedly from the measurement period over which we judge a manager’s effectiveness. The former is decades and the latter is a handful of years. The disparity is exacerbated by the fact that each hierarchical level with responsibility for investment performance appraisal is occupied by an agent whose own results are also subject to evaluation (e.g., the portfolio manager faces pressure from the CIO, who is pressed by the CEO, who reports to the board of directors). In this age of easily accessible information and recurrent monitoring, it is entirely understandable that agents might tend to act precipitously. 

Why do such tools of performance measurement captivate us, even if they are not necessarily beneficial? In a speech titled “Measuring the Performance of Performance Measurement,” Peter Bernstein said, “A hard number appearing on a printed page or computer screen—especially a number carried out to several decimal places—has enormously persuasive powers. We want the number so badly that we accept the perception of accuracy and reality as reality itself.”5  
Even the most extraordinary long-term performers will spend a significant amount of time trailing benchmarks (and, often, enough time to look downright incompetent). Fiduciaries don’t want to be second-guessed, so the path of least resistance is to measure, watch, terminate, hire another likely candidate, and repeat.

Conclusion 
Today we have plentiful access to streams of data and resources that, in theory, ought to improve our motivation to get outside or go to the gym and exercise. Time will tell if increased information flow on steps taken and calories burned will result in a collectively fitter and healthier populace. Our hunch is that in our quest to achieve health and physical fitness goals, measurement is secondary. Those who are committed to a healthy lifestyle will likely stick to an improved diet and a consistent exercise routine. Those who are not will use their new gizmos until their dedication fades and the output of the measurement device grows so uncomfortably bad that the whole experiment is abandoned. 

The field of investing is also inundated with data and tools of performance measurement. We see investors increasingly turning to passive management and smart beta in light of their growing awareness that selecting winning active equity managers faces shockingly long odds. But those who venture forth face an equally daunting second hurdle: retaining the good managers. Are investors steadfast in keeping these managers, particularly in today’s age of faster information flow and better monitoring tools? Negative feedback, which can occur over extended stretches of time, is inevitable for even the best of long-term winners. Fiduciaries need to objectively evaluate their intellectual skill to select, and their intestinal fortitude to keep, good managers. Both are required for excess returns over the long term. 

Endnotes
1. Malkiel (2013), p. 103.
2. See http://www.powerball.com/powerball/pb_prizes.asp.
3. Equity allocations are based on the Vanguard Target Retirement 2045 Fund (VTIVX). 
4. It merits mention that placing managers on watch lists might incentivize them to take on excessive risk in a desperate attempt to avoid termination by dramatically outperforming the benchmark.
5. Bernstein (1995), p. 70.

References
Ambachtsheer, Keith, Ronald Capelle, and Tom Scheibelhut. 1998. “Improving Pension Fund Performance.” Financial Analysts Journal, vol. 54, no. 6 (November/December):15–21.

Bernstein, Peter L. 1995. “Measuring the Performance of Performance Measurement.” Performance Evaluation, Benchmarks, and Attribution Analysis, AIMR Conference Proceedings, vol. 1995, no. 2 (June):68–72.

Goyal, Amit, and Sunil Wahal. 2008. “The Selection and Termination of Investment Management Firms by Plan Sponsors.” Journal of Finance, vol. 63, no. 4 (August):1805–1847.

Jenkinson, Tim, Howard Jones, and Jose Vicente Martinez. 2014. “Picking Winners? Investment Consultants’ Recommendations of Fund Managers.” Forthcoming in the Journal of Finance. Available at SSRN: http://ssrn.com/abstract=2327042

Malkiel, Burton G. 2013. “Asset Management Fees and the Growth of Finance.” Journal of Economic Perspectives, vol. 27, no. 2 (Spring):97–108.

Pavilion Advisory Group. 2012. “Reflections on Investment Committee Governance.” White paper originally published by Stratford Advisory Group, Inc. in June 2009.

West, John. 2010. “The Folly of Peer Group Analysis.” Research Affiliates (March).

Wimmer, Brian R., Sandeep S. Chhabra, and Daniel W. Wallick. 2013. “The Bumpy Road to Outperformance.” Vanguard (July).
