A mildly interesting essay in today’s FT:

That people respond rationally to incentives, and that market prices incorporate information about the world, are not terrible assumptions. But they are not universal truths either. Much of what creates profit opportunities and causes instability in the global economy results from the failure of these assumptions. Herd behaviour, asset mispricing and grossly imperfect information have led us to where we are today.

There is not, and never will be, an economic theory of everything. Physics may, or may not, be different. But the knowledge we can hope to have in economics is piecemeal and provisional, and different theories will illuminate different but particular situations. We should observe empirical regularities and – as in other applied subjects such as medicine and engineering – we will often find pragmatic solutions that work even though our understanding of why they work is incomplete.

Max Planck, the physicist, said he had eschewed economics because it was too difficult. Planck, Keynes observed, could have mastered the corpus of mathematical economics in a few days – it might now have taken him a few weeks. Keynes went on to explain that economic understanding required an amalgam of logic and intuition and a wide knowledge of facts, most of which are not precise: “a requirement overwhelmingly difficult for those whose gift mainly consists in the power to imagine and pursue to their furthest points the implications and prior conditions of comparatively simple facts which are known with a high degree of precision”. On this, as on much else, Keynes was right.

The question is: To what extent do mathematical models shed insight on economic phenomena? My sober but humble opinion is: just about none. The problem is that economists envy physicists their prestige, their accuracy, and their esoteric mathematics. And in attempting to emulate physicists with their use of mathematics, they make complete jackasses of themselves. Economists have been known to use the terminology (and perhaps the results) of topology. They are complete lunatics and should be chucked in some loony bin pronto.

A more general question might be posed: To what extent can even physical phenomena be mathematically modeled? When and where do we start to suffer from diminishing returns? When and where do we start creating models — like string theory — which are “not even wrong” (Peter Woit’s phrase)? Does the wholesale use of cohomology (for instance) shed *any* physical understanding? Where does the program of mathematisation start breaking down? Has it run its course? ODEs and PDEs are useful tools. So is Riemannian geometry (for GR). So is Hilbert space theory (for QM). But maybe it doesn’t go much further than this, and the increasingly arcane instruments forged by mathematicians serve no useful purpose except aesthetic satisfaction.

There are more profound questions. Does mathematics “work” at all? Or is it a case of putting on rose-colored glasses and then exclaiming that the world looks red? In other words, the mathematical models may be self-referential in the subtle sense that they color how we look at the phenomena, how we make measurements, and what we look for. They are tautologically designed to work — and we fool ourselves into thinking some new insight has been shed.

Another aspect of modeling worth thinking about is that a “successful” model almost never emerges *ex nihilo*. In practice, theory and experimental results converge slowly, each helping the other along. Again, it has never been the case that some *ex nihilo* mathematical model shed miraculous insight on some area of physics.

Back in 1981, I read an article by R.W. Hamming titled “The Unreasonable Effectiveness of Mathematics,” published in the February 1980 issue of the American Mathematical Monthly. I’ll come back to this post and comment on it, given time and inclination.