Wednesday, March 26, 2014

What’s a Five-Star Fund Worth? Baird Study Suggests Not Much


Focus on Funds hears plenty of grousing from capable fund managers these days. The managers complain they can’t gather new investor money unless Morningstar awards them a four- or five-star rating.
And it’s true: Lots of investors won’t look at a fund unless it has the accolade.
Here’s what the ostensibly picky investor might not realize. The “stars” are purely backward-looking. Morningstar’s “star ratings” measure a fund’s past risk- and load-adjusted returns versus peers. That’s it.
Past performance is not just the subject of oft-repeated gibberish in your fund’s ignored disclaimers. It’s also — at least when it comes to future returns in stock funds — very close to useless.
That, at least, was my initial thought when I read a study by R.W. Baird on Morningstar’s star-rating system — more on the study below. It came to mind because S&P Dow Jones Indices’ studies on “persistence” find precious little evidence that fund managers can sustain high performance over time.
So few high-performing mutual funds stay in the high-performing group, in fact, that you’d expect more just by random chance, S&P has found. So throw out past performance as a predictor.
But while “useless” may apply to raw historical returns, it’s an overstatement (one I made in the first version of this blog post) to apply the term to risk-adjusted returns, which appear to show some persistence. Working from eight years of returns, Morningstar has found “mildly predictive” evidence that funds with strong risk-adjusted returns tend to deliver somewhat better-than-average returns in the future.
Of course, one way to sidestep the question is simply to use Morningstar’s other rating system — the “analyst ratings” — if you need a shortcut. These ratings give gold, silver, or bronze awards to the funds deemed by Morningstar to possess the best investment processes and other attractive attributes. It’s a qualitative judgment.
Making a reasoned judgment on something besides juicy numbers is indispensable.
Now here comes R.W. Baird &amp; Co. with a study that, at first glance, is consistent with the idea that past performance is less useful than a professional fund analyst’s independent judgment — or, better yet, yours, or your adviser’s.
Baird’s study tested the idea — on which many investors appear to act — that five stars mean “attractive.” The researchers grouped mutual funds by the number of Morningstar stars they held at the beginning of three distinct three-year periods from 2005 to 2013, a process that yielded 4,400 data points.
Result, for U.S. stock funds: A high rating corresponds in the Baird study to underperformance over the three-year period. Meanwhile, one-star funds — the recent bums — outperformed the five-star ones by an average of 146 basis points. As the Baird authors put it: “Interestingly, five-star funds not only underperformed one-star funds, they performed the worst of all categories.”
Here’s where it gets interesting. What I failed to realize when I first published this blog post is the importance of certain methodological choices that turn out to affect the study’s result. These are, prominently, the choice of three static time periods instead of rolling ones, and the failure to correct for survivorship bias. Factors such as these appear to have made the results look worse for high-rated funds than they’d otherwise be.
One reason: the investing strategies that worked well from 2005-07 were much different from the successful strategies of 2008-2010, due to starkly different market conditions. So you get a choppier result than you would have with rolling time periods and the richer data sample they introduce.
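To see how much data the static design leaves on the table, here’s a toy sketch (nothing from the Baird study itself) counting the three-year windows each approach yields over 2005-2013:

```python
years = list(range(2005, 2014))  # 2005 through 2013

# Static design: three non-overlapping three-year windows, roughly as in the study
static = [years[i:i + 3] for i in range(0, len(years), 3)]

# Rolling design: every consecutive three-year window
rolling = [years[i:i + 3] for i in range(len(years) - 2)]

print(len(static))   # 3 windows
print(len(rolling))  # 7 windows -- a richer sample that smooths out regime effects
```

With only three snapshots, a single unusual regime (like 2008-2010) can dominate the result; rolling windows dilute any one regime’s influence.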
Bond funds, the big, favorable exception in Baird’s findings, may actually underscore how market conditions can drive the data. What worked in bonds over this period was fairly constant, so taking the snapshot without rolling periods gave the same result Morningstar has previously found: High rankings are “mildly predictive” of future returns.
Survivorship bias is another factor I didn’t account for this afternoon when writing the first cut of this post.
The Baird study didn’t incorporate the returns of defunct funds, Morningstar’s John Rekenthaler told Barron’s, and defunct funds are disproportionately poor performers. “There’s a big pool of one-star funds, and 10% of them close in any given year,” he said. “That’s not a small thing.”
Bring those poor-performing defunct funds back into the data and the one-star returns will surely drop. By contrast, very few five-star funds go defunct.
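A toy illustration of that effect, with invented return numbers (nothing here comes from the Baird data):

```python
# Invented numbers for illustration only.
surviving = [4.0, 5.0, 3.0, 6.0]   # annualized returns (%) of one-star funds still open
defunct = [-3.0, -4.5]             # returns of one-star funds that closed mid-period

# Survivors-only average: what a study reports if it drops defunct funds
survivors_only = sum(surviving) / len(surviving)

# Full-sample average: defunct funds restored to the pool
full_sample = sum(surviving + defunct) / (len(surviving) + len(defunct))

print(survivors_only)  # 4.5
print(full_sample)     # 1.75
```

Because closures cluster among the worst performers, excluding them flatters the one-star group most and the five-star group hardly at all, narrowing or reversing the gap Baird reported.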
Rekenthaler’s overall assessment of the Baird study is that it is “pretty time-period dependent.”
The Baird author, Aaron Reynolds, didn’t dispute the critiques when reached by a Barron’s editor earlier this week. Asked about the use of static time periods, Reynolds said, “We chose three mutually exclusive time periods because the results were more muddled when looking at rolling time periods. There was no great correlation you could come up with.”
So, what to make of all this? I take away a few things:
(1) Past performance without adjusting for risk tells you little about the future, so discount its importance when selecting an investment.
(2) There’s some evidence that risk-adjusted returns are associated with better-than-average future returns, but the evidence is not overwhelming.
(3) Even when watching risk-adjusted returns, include a qualitative approach. Data should inform the argument but the intuition and argument should make sense, too.
(4) In fund research, the devil is in the details — the methodology.
