FT Alphaville showed us a chart by Goldman of SPX realized volatility for every year since 1929 (a year which itself had high vol). The volatility of the first half of 2014 sits at the far end of the chart.

No doubt the six months of vol are annualized to give a rate which can be compared with the other years. If volatility were constant, this completely standard procedure would be perfectly accurate: we know what 10.9% vol looks like over half a year and over a full year. But volatility is not constant – that is the whole point of the chart. Some years with high vol have low vol in one half and high vol in the other (like 1987, or 2008). The variation between high- and low-vol halves is bigger than the variation between high- and low-vol years, because combining two halves into a year smooths out the ups and downs of vol a bit.
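For concreteness, the completely standard annualization is just the square-root-of-time rule. A minimal sketch, using simulated returns rather than real SPX data, and assuming roughly 252 trading days a year:

```python
import numpy as np

# Toy sketch of the standard annualization (simulated returns, not real SPX
# data): ~126 trading days in half a year, ~252 in a full year.
rng = np.random.default_rng(0)
true_vol = 0.109  # pretend vol really is constant at 10.9%
daily_returns = rng.normal(0, true_vol / np.sqrt(252), size=126)  # half a year

# Realized vol over the half-year window, scaled up by sqrt-of-time
annualized_vol = daily_returns.std(ddof=1) * np.sqrt(252)
print(f"{annualized_vol:.1%}")  # close to 10.9%, since vol here IS constant
```

When vol genuinely is constant, as in this simulation, the half-year estimate lands near the true annual figure; the post's complaint is about what happens when it isn't.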

When we look at the chart, the temptation is to compare 2014H1 with the other measurements, to see just how low it is. But such a comparison has no validity unless we can quantify how the distribution of half-year vols relates to the distribution of full-year vols, and there is no easy way (and not really any reliable-but-difficult way) of doing this. You would need a stochastic volatility model which has held up over the last 85 years. Stochastic vol models are hard enough if you want to price index options two and a half years out, and no one even tries to calibrate them back further than the ’80s.
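The smoothing point can be seen in a toy simulation. This is an entirely made-up regime-switching setup, not a calibrated stochastic vol model: each half-year independently draws one of two vol regimes, so a year's realized vol averages two regimes and is less dispersed than the halves.

```python
import numpy as np

# Toy illustration with made-up numbers: each half-year is either a calm
# (10% vol) or stressed (30% vol) regime, drawn independently.
rng = np.random.default_rng(1)
n_years = 200_000
half_vols = rng.choice([0.10, 0.30], size=(n_years, 2))  # one regime per half

# A year's variance is the average of its two halves' variances
year_vols = np.sqrt((half_vols ** 2).mean(axis=1))

print(half_vols.std(), year_vols.std())  # half-year vols are more dispersed
```

So a record-low half-year vol is less surprising than a record-low full-year vol would be, which is exactly why the two distributions cannot be compared on the same bar chart without more work.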

Putting the vol levels for each of the half-years in the chart would make the bars a little narrow, but it would also make some sense.

Now for a slightly silly article on the FT Data blog – apparently England were unluckier than any other team in the group stages of the World Cup. This is based on a measure called PDO, “how many of a team’s shots on target are scored, combined with the number of its opponents’ shots on target that are saved”. England’s was very low, and this is said to be unlucky, because PDO is usually ‘quite random’. FT Data offer a little justification for this, and various commenters have questioned it. (The biggest objection is that PDO mostly measures how crap your goalkeeper is and how good the other one is. PDO comes from hockey, and perhaps goalkeeping in hockey – which seems to involve a man standing in front of a goal much smaller than him, getting ready to put one or other knee on the floor – has a different statistical importance.)

Let’s look at one thing they said though:

> Regressing the PDO in one season on the value for the next season, using data from the Premier League, gives an R-squared statistic of 0.3724. This essentially means only 37 per cent of the extent to which a team’s PDO is above or below 1.0 can be explained by talent.

Aren’t PDO in one season and PDO in the following season both dependent on a third variable, ‘talent’? If so, shouldn’t we expect the correlation between the two to be rather less than the correlation of either with talent (if we could observe it)? I don’t remember exactly how the 37% relates to an estimate of the ‘amount that can be explained by talent’, or how it depends on *n* or whatever, but it is certainly straightforward and in a textbook, and it definitely isn’t that they are the same.
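One simple version of the textbook relationship can be sketched in a simulation. Under the (assumed, made-up-variances) model where each season's PDO is a fixed latent talent plus independent noise, the season-on-season correlation r itself equals the fraction of PDO variance explained by talent, while the regression R-squared is r², which is smaller:

```python
import numpy as np

# Sketch under an assumed model (made-up variances): each season's PDO is
# a fixed latent 'talent' plus independent per-season noise.
rng = np.random.default_rng(2)
n_teams = 200_000
talent = rng.normal(0, 1.0, n_teams)
pdo_s1 = talent + rng.normal(0, 1.0, n_teams)  # season 1
pdo_s2 = talent + rng.normal(0, 1.0, n_teams)  # season 2

# Season-on-season correlation: cov = var(talent), var = var(talent)+var(noise),
# so r = var(talent)/var(PDO) = the share of PDO variance due to talent (0.5 here)
r = np.corrcoef(pdo_s1, pdo_s2)[0, 1]
print(r, r ** 2)  # roughly 0.5 and 0.25
```

In this toy setup half of PDO variance is talent, yet the season-on-season R-squared comes out near 0.25 – so reading the R-squared directly as the 'share explained by talent' understates it, which is the nitpick.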

Nitpicking I know, but it is the FT *Data* blog (and Goldman are Goldman).