Friday, June 9, 2017

The End of Empire

As I mentioned last time, I’ve been listening to Patrick Wyman’s excellent podcast series “The Fall of Rome”.

One of the things that’s most interesting is the way Wyman fills in the gaps of how these kinds of power transitions occurred in practice. A particularly fascinating story is about the end of Roman Britain. As Wyman puts it:
The Irish came from across the sea to the west. The Picts came south, through and around Hadrian’s formerly great wall. And the Saxons came from the North Sea coast of the continent. They plundered and burned, pillaged and raped, killed and took captives. Without the Roman army to defend them, the Britons were helpless to beat back the attacks of these marauders. Within a generation, most of what remained of Roman Britain was lost and gone forever.
In the first episode of this show, I emphasized the importance of taking a regional approach. The fall of the Roman Empire happened in much different ways in different places. Some regions barely experienced anything we would call a disruption until the year 500 or even later, whereas others were barely recognizable in comparison to their former selves by that date. No province of the former Western Roman Empire went down faster or harder than Britain. And no region was more fundamentally or more deeply transformed by processes that we are calling the fall of the Roman Empire.
The island fragmented politically into a patchwork of new kingdoms ruled by petty tyrants, some of them native, some of them foreign born. The level of material sophistication dropped off drastically. By 500, nobody knew how to run a proper bathhouse, or, more seriously, even how to build in stone. The economy grew simpler, and barter replaced cash money. Urban centres shrank and all but disappeared. Roman Law Codes and legal concepts fell out of use. People stopped speaking Latin as an everyday language. Most strikingly, mass immigration from the continent completely rewrote the political, linguistic, and demographic map of the island. Britain represents the catastrophe side of the fall of the Roman Empire. In almost no facet of life was there recognizable continuity between Roman Britain in the 4th century and Anglo-Saxon England in the 7th.

For instance, how did Britain end up transitioning from Roman rule to Kings? Well, in Wyman’s telling, Britain was always at the periphery of the empire. Not just geographically, but also in terms of importance and focus. As the Empire experienced increasing turmoil during the late 4th century and early 5th century, attention simply turned elsewhere, and there was less and less interaction between the capital and the garrisons out in Britain. Wyman indicates that at a certain point in the early 5th century, it would have been quite possible for Roman troops in Britain to have simply not heard anything from the central authorities for years at a time, nor been paid anything by them. In fact, the most significant impact Rome had on Britain in this period came when usurpers like Magnus Maximus or Constantine III stripped the island of a large number of its troops in their attempts to claim the throne. When the troops left, they didn’t come back, and that was about the last you would have heard on the matter.

So what was happening in the garrisons in the meantime? Well, what was left of the troops still in the garrisons mostly supported themselves and stayed supplied by assessing taxes on the local population. By this point, these were mostly taxes in kind, as coin was becoming increasingly rare, another aspect of the Empire receding. While it’s not like the locals exactly had a choice about paying these taxes to support the garrisons, it’s not obvious that they were unhappy with the arrangement either, as the garrison kept them defended from raiding parties of Picts, Scots or Saxons. These groups had been increasing their raids even during the period of effective Roman rule in the 4th century, and this just got worse once Roman rule declined. In some places, such as along Hadrian's Wall, the garrisons continued to operate. As long as the general was competent enough to keep the people happy and the troops loyal (since by this point, the troops had much more loyalty to their local leader than to the far-distant Emperor they hadn’t heard from in years), things scraped by.

Though as you can imagine, losing the support of Rome did have disastrous effects on the ability of the garrisons to actually defend the people. In the 5th century, along Hadrian's Wall or in the east of Britain, you were between three and five times more likely to have a blade wound visible in your skeletal remains than people during the height of Roman Britain. As some Britons wrote in 446 in a letter to the Roman general Aetius:
To Aetius, thrice consul, come the groans of the Britons. The barbarians drive us into the sea, the sea drives us back on the barbarians. Between them, two kinds of death face us. We are either slaughtered, or drowned.
But no help was forthcoming. Aetius had his own problems.

And so as Rome collapsed, the autonomy of these local generals gradually increased, until there wasn’t anyone giving them orders any more. But this didn’t come at a definite point where they were explicitly cut loose, and the nature of what they were doing didn’t experience any sharp shifts – it’s just that over time it became gradually apparent that nobody was coming anymore, and they were on their own. As a result, the distinction between “Roman general in an outlying garrison” and “local warlord” was a very blurred one in this period. There was a guy, he gave orders, they got obeyed. He may have nominally claimed to do so on behalf of Rome, but the frequency of invoking Rome as the basis of authority probably also declined until eventually it stopped altogether. At some point, the general started calling himself “King”. In an alternative timeline, maybe some future Roman Emperor might have turned up to re-establish authority. But it never happened.

In other places, the garrisons were sufficiently worthless that the job of defending the populace fell to local elites instead. Perhaps you armed some of your peasants to defend against the Saxon raiders, and gradually figured out how to fight them off. Again, like before, if you end up commanding enough people in battle for long enough, you effectively become a warlord.

The narrative of history that we tend to get taught is one filled with definite events and dates, especially for power transitions. The Nazis were defeated in 1945. Constantinople was taken by the Ottomans in 1453. The American Revolutionary War ended (roughly) with the Battle of Yorktown in 1781. In all these instances, the new power seems to get established fairly quickly, and our understanding is that it was readily apparent that a large transition had taken place. Whether this was exactly true even for these events is not clear, but it seems not a bad approximation of reality. And from hearing enough narratives of this structure, one ends up implicitly taking this as the rough template for what it means to have reached a sufficient understanding of What Actually Happened and Why.

But the fall of Rome just doesn’t look like that. For one thing, it’s not even clear exactly when to date the end of the Roman Empire. Before listening to this series, I naively assumed that the sack of Rome was the end of things, but that doesn’t seem to be the case at all. For starters, there were two of them, one in 410 and another in 455, and Western Roman Emperors continued to rule in some form or other before, during and after both. I also didn’t know that Rome wasn’t actually the administrative capital at this point; it was Ravenna. I’ve seen the “end date” given as 476 AD, when the last Western Roman Emperor, Romulus Augustulus, was deposed by Odoacer. But this was as much as anything the last gasp of what was already a thoroughly decayed system.

And if this holds true for Rome, it holds doubly true for Roman Britain. Admittedly, we have very few written records from this period, making it hard to say anything for sure. The evidence for the stories above comes from things like garrisons near Hadrian's Wall still functioning in the 6th century, the existence of hill forts in other parts of Britain around this time, and aristocratic burials emphasising military rather than civilian themes. But it seems safe to say that the end of empire was experienced partly as power and central authority just receding, until at some point it became apparent that it was gone altogether.

And the things that replace it are local power arrangements, especially those that are relatively self-sufficient. To imagine this in modern America: if the Federal Government were to collapse, some much decayed form of New York City might survive as a separate autonomous city-state. After all, it has an existing private army in the form of the NYPD. While they only barely tolerate de Blasio (and I doubt their restraint in this regard would survive the collapse of federal authority), it’s not hard to imagine a Giuliani or a Bloomberg continuing to run the city if the federal government disappeared. This seems much more plausible than, say, New York State – how many people with guns owe a strong primary allegiance to New York State, and how much territory are they in undisputed control of?

But then again, maybe somewhere like Idaho is a better bet. I wouldn't bet on the New York finance industry keeping functioning as usual if Washington collapsed. Nor would I bet on the continued smooth functioning of hundred-storey skyscrapers in the event of an extended civil emergency. In the case of Britain, the urban centres emptied out almost entirely.

If the distinction between “general” and “warlord” is murky in the late Roman Empire, then even more murky is the distinction between “migration” and “invasion”. When the Goths crossed the Danube in 376 AD, they didn’t come as an army – they were invited across by the Romans, and promised to pay taxes, serve in the army, etc. And indeed, there had been barbarians serving in the Roman army for a long time before this, including in positions of command. What was less common, however, and less ideal, was for entire tribes to join as a single unit, as the Goths did. In addition, armies at the time tended to travel with women and children. Look at them one way, and they’re a tribe or people migrating into an already multi-ethnic empire. Look at them another way, and they’re a notionally Roman but practically semi-autonomous military force with civilians supporting the camp. Either way, the result was disastrous.

So Roman armies being made up of mixed barbarian and Roman troops was not inherently unusual. But in this era, armies were independent power bases whose loyalty to the Empire couldn’t be guaranteed. The leaders of these armies would negotiate with the central authority to take part in campaigns. Was this just an amicable discussion over costs and supplies? Was it insubordinate generals being paid off instead of being made an example of (assuming that was still possible)? Or was it the Western Roman Empire making ad hoc treaties with incipient kingdoms? As with so many questions in a decaying empire, the answer is hard to pin down, and the answer you pick with the benefit of hindsight may not be the one you would have picked at the time. They’re all shades of grey, in an era when grey went from mostly white to increasingly black.

And this wasn’t just a problem for Barbarian-led armies either. Part of the reason that the Empire initially didn’t treat the barbarians as a serious threat is that the Emperors had to devote a lot of their efforts to fighting off usurpers. Some general or other amassed sufficient loyalty from troops and decided to try his hand at becoming Emperor himself. In this context, most Emperors treated these usurpations as a much more urgent and existential threat to their rule. Barbarians causing problems at the periphery of the empire could wait, but if you got overthrown today, you were toast. To the extent that you thought of the barbarians much at all, they might have been viewed as potential auxiliary forces to be dragooned in to assist you in your battles.

Of course, problems at the periphery don’t always stay at the periphery.

We look back and see “Rome”, threatened and eventually defeated by invaders. But the people at the time saw their other Roman rivals for power as the real issue. Their main aim was not maximising Rome as an abstract idea, but maximising their own power and influence. The Empire itself was just taken for granted, and all the serious efforts were devoted to the real enemy within.

It doesn’t map exactly to the Blue State / Red State conflict we see today, but the parallels are definitely there.

In late Rome, mass migration of potentially hostile peoples can be seen as both symptom and contributing cause of decline. The migrations themselves had disastrous consequences (like the loss at the Battle of Adrianople in 378, the first major defeat of a Roman Field Army in centuries). But you can also see the immigration itself as happening due to a divided and weakened state, where it was increasingly hard to staff armies, increasingly hard to maintain order, and increasingly riven with violent internal conflicts for power.

Even if the correlation is there, cause and effect may go the other way - a weakened state is unable or unwilling to prevent mass migration. This is certainly what post-Roman Britain looked like. The people who started out as raiding parties ended up as settlers. There is some dispute as to whether the Saxons arrived en masse uninvited, or whether some of them were brought over as warriors by local warlords to assist in the defence against other raiders, and just ended up pushing out the warlords once they had enough numbers. But the effect was the same - Roman Britain became Saxon England. If your local warlord wasn't strong enough to resist, you ended up being ruled by a new foreign warlord.

And how many people are we talking? To quote Wyman:
How big did a migration have to be to be a mass migration? Thousands? Tens of Thousands? Hundreds of thousands of people? I think say tens of thousands would be enough to qualify. They didn't all come at the same time, or even from the same places along that North Sea coast. A few thousand migrants per year doesn't sound like many in the context of a region with a couple of million inhabitants, but over a century or so, that's a huge number. Remember that idea of cultural diffusion, too. If those immigrants disproportionately end up forming the dominant class in that society, they can reshape it dramatically.
Do you find yourself wondering if historians of the future will say the same thing about Europe and America?

And so what starts out as a simple one-sentence narrative – the Roman Empire was destroyed by a disastrous decision to let in the Goths – ends up more nuanced. The original statement still has a significant kernel of truth, and you can see how it fits in the bigger picture. But history is complicated, and the more you look at the details, the harder it is to attribute everything to a single cause.

It's tempting to look at history and think that our lack of understanding comes mostly from a lack of source material and information. But as a friend of mine, AL, who studied it seriously once put it, we have almost all the source material you could want on the Iraq War - does that mean we fully understand it? Not really. The challenge is always what the economists call identification - we are always trying to tease out causation from a single timeline of events.

The economist would say that there is no hope of real identification from this kind of messy history.

The historian might counter that when the issues are important enough, it is worth making the best attempt possible, even if it's imperfect. If there's a chance that you might be living through another major civilisational decline, that seems worth trying to figure out.

Friday, May 26, 2017

The economist's case for at least agnosticism about Bitcoin

As far as I can tell, the primary problem with Bitcoin is that after you've bought it, you become medically incapable of shutting the @#$% up about Bitcoin. So it goes.

It took me a long time to buy any bitcoin, but I should have done it about three years ago. This isn't cheap talk, by the way. I know exactly why I should have bought it back then, based on the knowledge I had at the time (which is the only criterion by which you ought to regret any decision). To wit: I considered myself a Bitcoin agnostic. This made me more optimistic than perhaps 99% of finance people I spoke to. But then again, 99% of finance people I spoke to also couldn't easily explain why Bitcoin existed in the first place. 3 years later, all of the above is still true, but I finally got off my butt and did something about it, albeit after an enormously costly delay.

The standard economics textbook description of money says that it tends to arise because it helps facilitate exchange. If we need to barter goods with each other, it's hard for me as a blacksmith to obtain wheat unless I can find a wheat farmer who also coincidentally needs blacksmithing services. But if I can exchange my blacksmithing services for money (leaving aside for now what exactly counts as money) and then turn the money into wheat down the line, this greatly expands our ability to trade and grow the economy. So far so good. But what exactly makes something money?

The standard economics textbook definition of money says that it has to fulfill 3 purposes, namely
#1. It has to be a unit of account - a way of measuring how much of something you have
#2. It has to be a medium of exchange - a means for people to transact amongst each other and exchange goods and services indirectly, rather than directly through barter
#3. It has to be a store of value - that is, have some worth derived from an alternative use other than the monetary aspect itself, to ensure that people will be willing to hold it.

Under this standard narrative, bitcoin fails. #1 is fairly easy to meet, but bitcoin's big strength is in #2, which it passes with flying colours. Importantly, however, it fails badly on #3. Digital bits have no inherent value, and no external use to make them a store of value. Ergo, it should have no value above zero, and anything else is a bubble about to collapse. So goes the standard story.

Gold originally fulfilled all three purposes. You could weigh it, and trade accordingly. Turning gold into gold coins helped with #1 and #2, and so drove out raw gold. It was easier to transact and measure in coins. Of course, the problem with coins is that they could be filed down at the edges to steal some of the gold while still passing for roughly a full coin. So eventually the coins got replaced with pieces of paper that were claims on gold in a government vault. At the start, you could actually make the conversion. Then conversion became increasingly a fiction, before FDR decided to do away with the pretense of convertibility, suspending the conversion and limiting the ability of private individuals to hold gold.

At this point, you may be wondering how US dollars continue to meet the three requirements above. After all, they kept being used as money, and economists didn't all seem in a raging hurry to update their definitions. The standard answer is that the 'store of value' aspect comes from the fact that the government, which has guns, will accept USD as a means of paying taxes (and indeed demands payment in that form). Because dollars have a guaranteed value for that purpose, they have a guaranteed value for everything else.

To me, this seems to have a definite flavour of ex-post rationalisation. My hunch is that if you asked people 100 years ago whether they would still be equally willing to hold dollars if they were backed by nothing at all, they would have answered strongly in the negative. In the end, they were prohibited from switching into gold at the time, so it was a moot point. But what about now, when they could switch? I highly doubt that many people today would explicitly state that they're willing to hold dollars because they can pay their taxes in them. 100 years ago, they probably would have laughed you out of the room.

The economists are right in a narrow sense, of course (as they often are). Bitcoin does indeed fail as a store of value, and, technically, the dollar does not.

And yet, here is some evidence that ought to give you pause, assuming you're not an economist in the midst of full-blown cognitive-dissonance-induced denial:

[Chart: Bitcoin price since January 2013, per Coinbase]
This is Bitcoin today, stubbornly refusing to prove economists right by ceasing to exist. As a matter of fact, since the Coinbase time series starts in January 2013, it's up some 19,000% or so.

Don't look at the graph and ask if you think it's about to drop. Look at the graph and ask how much it would have to drop to get to where it was in 2013 (let alone 2009).
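To make that concrete, here's a quick back-of-the-envelope sketch (just my own arithmetic on the approximate 19,000% figure above, nothing more):

```python
# Rough arithmetic: a ~19,000% rise means the price is about 191x its January 2013
# level, so returning to that level requires losing roughly 99.5% of today's value.
gain_pct = 19_000                      # approximate rise since January 2013
multiple = 1 + gain_pct / 100          # ~191x the 2013 price
drop_needed = 1 - 1 / multiple         # fraction of today's price that must be lost
print(f"Price multiple since 2013: {multiple:.0f}x")
print(f"Drop required to return to 2013 levels: {drop_needed:.1%}")   # ~99.5%
```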

At a certain point, it seems prudent to at least consider the possibility that there might be something going on here, but you don't know what it is, do you Mr Jones? Or at a minimum, ask the following question - what level of future growth would you need to see to change your mind? Another 100%? Another 1000%? Can we agree on it now, so that if it eventually happens, you might reconsider the question?

The null hypothesis here is not in much doubt. Bitcoin is a bubble, and will eventually collapse.

Actually, the true null hypothesis is a little more specific, at least if you believe standard economics. Bitcoin should have a price of zero. It has no value except as a currency, and it is worthless as a currency.

So what is the alternative hypothesis?

The alternative hypothesis is that Bitcoin is likely to stay at a non-zero value for quite a long time, if not indefinitely, and moreover may end up being worth a lot. That may sound woolly and hand-wavy, so let me explain.

First off, how many things can you name that truly have a value of zero?

It's surprisingly hard. If you don't believe me, here's a photo of cans of air from the top of Mount Fuji selling for ¥500:

[Photo: canned air from Mount Fuji on sale for ¥500]

And things like rubbish or nuclear waste have decidedly negative prices - you have to pay to get rid of them. Even they aren't exactly zero.

The point is that "fundamental value" is a concept that, in my opinion, creates at least as much confusion as it dispels.

The primary value of an asset today is what you think someone will pay for it tomorrow. If they can use the asset for some external purpose, and you have a guide to what that external purpose is likely to be worth to them, you have a guide to what it will be trading at tomorrow. But that's all it is. If you have some other way of estimating what people will pay for the asset tomorrow, then you don't need the intermediate heuristic of fundamental value. (This is especially true for assets which don't directly produce cash flows - for ones that do, there's a better case that you should just value the cash flows, but even then, you still need to know tomorrow's willingness to pay unless you're able to hold the asset to infinity to collect all the future cash flows).
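To illustrate the point in the parenthesis, here's a minimal sketch (my own toy numbers, not anyone's valuation model): the value of holding an asset for a finite number of periods is the discounted cash flows you collect plus the discounted price you expect to resell it for, and for an asset with no cash flows, the whole value rests on that expected resale price.

```python
# A toy present-value sketch: hold the asset for N periods, collect any cash flows,
# then sell it. The numbers below are purely illustrative.
def value_today(expected_cash_flows, expected_resale_price, discount_rate):
    """Discounted cash flows plus the discounted expected resale price."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(expected_cash_flows, start=1))
    pv += expected_resale_price / (1 + discount_rate) ** len(expected_cash_flows)
    return pv

print(value_today([5, 5, 5], expected_resale_price=100, discount_rate=0.05))  # dividend payer
print(value_today([0, 0, 0], expected_resale_price=100, discount_rate=0.05))  # value is all resale
```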

So in that case, what should Bitcoin be worth? Whatever people are willing to buy it for tomorrow. And what number is that? Well, that's the rub. But at least we know the right question to ask now.

As a consequence, we can begin to formulate an alternative to requirement #3 in the definition of money that we started with. Specifically:
#3A - If you accept the asset today in exchange for giving up valuable goods or services, you have to have a very strong belief that you will be able to exchange said asset tomorrow for someone else's goods and services, and receive approximately the same value as what you exchanged today.
Viewed from this angle, we can see that requirement #3A is at heart a co-ordination problem. Once we all agree on something being money, it becomes money. More importantly, we can see why people mistakenly viewed #3 as being the requirement. In essence, being a store of value is one way of solving the co-ordination problem. If it's common knowledge that some people will be willing to accept gold because it's useful for jewellery, most people who don't value it for jewellery are nonetheless willing to hold it.
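To see the co-ordination structure in its barest form, here's a toy two-player game (the payoffs are made up by me purely for illustration): each of us chooses whether to accept some asset in trade; accepting works out well if the other person also accepts, and badly if they don't.

```python
# Toy co-ordination game: both accepting and both rejecting are Nash equilibria.
payoffs = {                                   # (my payoff, your payoff)
    ("accept", "accept"): (2, 2),
    ("accept", "reject"): (-1, 0),
    ("reject", "accept"): (0, -1),
    ("reject", "reject"): (0, 0),
}

def other(move):
    return "reject" if move == "accept" else "accept"

def is_nash(mine, yours):
    """Neither player gains by unilaterally switching strategies."""
    return (payoffs[(mine, yours)][0] >= payoffs[(other(mine), yours)][0]
            and payoffs[(mine, yours)][1] >= payoffs[(mine, other(yours))][1])

for profile in payoffs:
    if is_nash(*profile):
        print(profile, "is a Nash equilibrium")
# Prints both ("accept", "accept") and ("reject", "reject"): which one we land on
# is pure co-ordination, which is what requirement #3A is really about.
```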

But this isn't a strict requirement. Once the belief is established, it becomes self-fulfilling. When you accept US dollars, you aren't doing the iterations and thinking that it will eventually be exchangeable for taxes. You're just accepting it because you can buy your groceries with it tomorrow. Now, in the long run, it's true that if the US government collapses, you don't want to be holding US dollars, so in that sense the economists are right. But this is a long way from most people's actual calculation.

In the case of Bitcoin, a belief that Bitcoin will retain some value tomorrow can justifiably be sustained as long as I know that there's a decent number of drug dealers and corrupt Chinese officials who want to hold Bitcoin because it's (sort-of) anonymous and can be easily taken out of the country when the porridge hits the propeller. But in the short run, I hold Bitcoin because I think that people tomorrow will hold Bitcoin.

In fact, it's stronger than that. Like a classic bubble, people actually believe that more people will want to purchase bitcoin tomorrow, and at higher prices. In other words, the supply is fixed, and the more the price goes up, the more people begin to think "Huh, maybe I should hold at least a few grand worth of Bitcoin, just in case." If more people begin to think that, the price will indeed keep rising. Of course it can't rise like that forever.

But even if you think of Bitcoin as a bubble, it behooves you to notice something rather different about it from most bubbles, like the tech boom. In the case of Bitcoin, it seems to me from anecdotal experience that many, if not most, of the people buying bitcoin today are planning to hold it for a long time, if not forever. And this is definitively not true for most bubbles. People generally ride bubbles planning to get out once it's gone up enough, then go back to holding cash, or houses, or whatever. If that's what most people are thinking, the belief structure becomes very unstable, as any dip in price suddenly might cause a lot of people to switch to selling. Even if Bitcoin is a bubble, if most of its adherents plan to hold onto it for a long time, regardless of current price levels, then this reduces the likelihood of a complete collapse when everyone rushes for the exits.

In other words, even if this is a bubble, it may be a surprisingly durable one.

And the reason that "bubble" here is not necessarily a pejorative term is a point made by Moldbug - that money is the bubble that doesn't have to pop. In other words, there will be at least one good that is held in excess of its demand for other uses, because of its use for transactional purposes.

It may seem strange to reference Moldbug, since he comes out as a skeptic, based on his guess that the government will outlaw it.

But there is a counter-argument to that - the Uber problem. Namely, the government has a limited amount of time in which it can easily ban Bitcoin. The reason is that as the price gets high enough, enough people have enough to lose that it becomes politically costly to ban it. And so at some point, you get a compromise answer, like Coinbase seems to have reached - you have to submit ID, it's linked to your bank account, and you have to give a social security number. The US Government levies capital gains taxes, everyone is happy. Why ban something if you can make more money by taxing it instead?

There is one rhetorical claim about Bitcoin made by its proponents that I think has caused more confusion than any other. Realising what that claim gets wrong is what made me change my mind and invest in it. (Which, to emphasise, I'm not encouraging you to do. I'm some stranger talking smack on the internet, and this is not financial advice. But still.)

It is this:

Bitcoin is not going to be a substitute for the dollar.

Bitcoin is going to be a substitute for gold.

Which is to say, the reserve asset that you hold in some amount as a hedge against the @#$% hitting the fan. This is, of course, mid-level @#$% hitting the fan, such as large-scale financial instability - if things really get hairy, the only worthwhile assets will be guns, ammunition, antibiotics, water purification tablets, and that kind of thing. But again, the same holds true for gold. If you honestly think that in a post-apocalyptic New York there's going to be a vibrant demand for gold for jewellery purposes, perhaps you would do better investing your savings in shares in the Brooklyn Bridge.

Put another way, the case for Bitcoin in concise terms is that Bitcoin is to gold what neocameralism is to monarchy.

That is to say, it's what you get if you took an old but existing arrangement, and instead of trying to mimic it exactly, you thought about how you would design a modern version of it that a) retained the essential strengths, while b) utilised the technological innovation that has occurred since the original form to overcome its weaknesses. (Some thoughts of mine on the neocameralism vs monarchy comparison are here.)

On the Bitcoin view, the essential aspect of gold is its relatively fixed supply. So let's go one better, and make a mathematically fixed supply. Rather than gold coins, let's create highly divisible bitcoins that can be traded across borders costlessly. Rather than measuring purity over and over, let's create a blockchain to solve the problem of double-spending and transactions between mutually suspicious parties. Meanwhile, the fact that it could be mined easily by anyone at first, but only with increasing difficulty later, encouraged people to get in on the action early.
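As a rough sketch of what "mathematically fixed supply" means in practice, here's the arithmetic using Bitcoin's well-known issuance rules (a 50 BTC block reward that halves every 210,000 blocks; treat the exact rounding as approximate):

```python
# Approximate the maximum Bitcoin supply from the halving schedule.
reward = 50.0                  # initial block subsidy, in BTC
blocks_per_halving = 210_000
total = 0.0
while reward >= 1e-8:          # rewards below one satoshi effectively round to zero
    total += reward * blocks_per_halving
    reward /= 2
print(f"Approximate maximum supply: {total:,.0f} BTC")   # just under 21 million
```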

If you thought an essential aspect of gold was its value in jewellery, then you'd be a skeptic.

The other essential aspect of Bitcoin, rather, was its first-mover advantage. Sure, someone else might invent other coins (and they have), but because Bitcoin was the first to market, it already has the advantage of incumbency. And in a co-ordination game, that's a huge deal.

And I think phrasing the question this way to economists helps to clarify the issue. In other words, if you're a Bitcoin skeptic and think it's a bubble that's inevitably going to burst, I would ask you: is gold a bubble? This is harder to prove than in the case of Bitcoin, because gold does have a fundamental value from other uses, so its value shouldn't go to zero. As a consequence, evaluating whether it's a bubble is much more thorny and subjective. But it seems pretty clear to me that central banks aren't holding gold because they're about to turn their bullion into wedding rings. As Moldbug points out, in 2011 gold reserves were 50 times annual production. For silver, they were twice annual production. Assuredly there is something unusual about people's desire to invest in gold. So if you think that this is creating excess demand, surely this is pushing up the price, no? Supply is pretty fixed in at least the short term, if not the medium term too. And isn't excess demand pushing up prices the definition of a bubble? The point, of course, is that with gold this state of affairs has persisted for an extraordinarily long time. Is there any particular reason to assume that gold is about to disappear as a hedge asset? Not that I can see.

But I know my economist friends well, and I know their objection to the above reasoning - the thing that makes Bitcoin different from gold. Which is to say, without a fundamental source of value other than as a money-like good, isn't the whole thing liable to unravel really quickly? Put differently, you and I might be willing to hold Bitcoin because we assume that there's reserve demand from Chinese officials and drug dealers, but why are the Chinese officials and drug dealers themselves willing to hold it?

This is another way of saying, why don't we all iterate backwards and realise that without an ultimate holder of the good from some other source, the value to everyone should be zero? Suppose we have a game where if we co-ordinate on a good being money, it gives value to both of us, but in the final round whoever is holding it ends up with a worthless asset.

If the game is finitely repeated, the economists are absolutely right. If everyone correctly performs backward induction, you'd predict that a) Bitcoin should never have a positive value to begin with, and b) even if it does, this should be rather fragile. If it's an infinitely repeated game, then there are more possible Nash equilibria, as there usually are. In this case, if there is no final period, then it seems more like a straight one-shot co-ordination game where if we both agree, we both benefit. But let's take the finitely repeated version with a penalty for holding in the last round, as the logic is stricter there. And the logic dictates that since no-one is willing to hold in the last round, they don't want to hold in the second last either, and so on.
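Here's a tiny sketch of that unravelling logic (a toy model of my own, not anything formal): a token gets passed from person to person, whoever holds it after the final round is stuck with something worthless, and we ask, working backwards, whether anyone would ever accept it.

```python
# Backward induction in a finitely repeated "hot potato" money game.
def rounds_where_token_is_accepted(num_rounds):
    accepted = [False] * (num_rounds + 2)   # accepted[t]: will someone take it in round t?
    for t in range(num_rounds, 0, -1):
        # A buyer in round t only accepts the token if they can pass it on in round t+1.
        accepted[t] = accepted[t + 1]
    return [t for t in range(1, num_rounds + 1) if accepted[t]]

print(rounds_where_token_is_accepted(100))   # [] - under strict backward induction,
                                             # the token is never accepted in any round
```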

But here is the trillion dollar question - how much do people actually perform backward induction? And if they don't, how should you act in response?

The classic version of this is the iterated Prisoner's Dilemma. Suppose two people are playing against each other 100 times in a row. The economist's answer is that if we're only playing a finite number of times, there's only one Nash Equilibrium to this game. We both defect in the last round. Knowing this, we both defect in the second last round, and the third last round, and thus in all rounds.

And yet... people don't. They routinely co-operate, and only begin defecting towards the end. This is why tit-for-tat works so well in practice: most people don't actually do backward induction for more than a few iterations, so they don't start defecting until close to the end.
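A minimal simulation makes the point (standard textbook payoffs assumed: 3 each for mutual co-operation, 1 each for mutual defection, 5 and 0 when one player defects on a co-operator):

```python
# Iterated Prisoner's Dilemma: tit-for-tat against itself and against always-defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own_history, opp_history):
    return "C" if not opp_history else opp_history[-1]   # start nice, then copy them

def always_defect(own_history, opp_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))        # (300, 300): co-operation all the way
print(play(always_defect, always_defect))    # (100, 100): the backward-induction answer
print(play(tit_for_tat, always_defect))      # (99, 104): tit-for-tat only gets burned once
```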

And bear in mind that, unlike the Prisoner's Dilemma, Bitcoin is a co-ordination game, meaning it actually is a Nash Equilibrium for us all to believe in Bitcoin, at least in the one-shot version. In the case of the Prisoner's Dilemma, you can mathematically prove that people aren't acting rationally, and yet they co-operate just the same. Here, the belief at least can be rational.

Now, the economists aren't wrong on the bigger picture - it still might collapse, for all the reasons they say. But that's not the same as saying that it has to collapse. I would guess, rather, that the opposite is likely to be true. The longer it goes without collapsing, the stronger the self-fulfilling aspect of the belief becomes, and the more stable it becomes.

Mainstream economists and finance types are looking at Bitcoin continuing to rise in price, yelling that this is a stupid and unstable equilibrium and that people should all start defecting immediately.

This is just like the economist watching two people play prisoner's dilemma and continue to co-operate round after round. You can laugh and call them morons, but a betting market just opened up. It's round 43 of 100. They both co-operated last round. Rubber to the road, what would you bet they're going to do this time?

After eight years of people continuing to not defect in Bitcoin, perhaps, dear economist, it's time to re-examine your assumptions.

Updated: On the other hand, if you wanted to make a concise case for a bubble, just check out some of the bizarre creations down at the lower market cap end of the cryptocurrency list. $10 million of FedoraCoin, you say? It's woefully underperforming PepeCash at $13 million. Hmm.

Saturday, May 20, 2017

The metres lost, the metres gained

What is gone, but still remembered, is quite vivid and easy to see. What is yet to come is often only perceived dimly, if at all.

For a reactionary, taking a walk through the Basilique de Saint Denis in Paris is a singular and sobering experience.

Inside the church are the remains of 34 of the 37 Kings of France. This is a glorious history spanning from 481 A.D. to (depending on how you want to mark it) either 1792 or 1848. Just ponder how long that really is, and how many nations and empires rose and fell in that time.

It was all brutally cut down in the French Revolution, though the monarchy recurred in fits and starts during the general chaos that was France in the 19th century.

But the tragedy is made all the more poignant by the fact that the glory of the institution is so utterly forgotten as to be almost irrelevant in modern France.

If you turn up in Saint Denis, dear reader, you will probably have the place almost to yourself. As indeed I did when I was there. Just me, Charles Martel, Louis XIV, and Clovis I.

I remember once asking a French friend of mine, "How exactly is the French Revolution portrayed in French schools? Is it an unalloyed good? Mostly good? Mostly bad? A mixture of good and bad? Opinions differ between good and bad?"

"Oh, it's a good thing", he replied. "You know, Liberty, Equality, Fraternity - those things hadn't been tried before."

Deciding to pass over his odd narrative of the history of freedom, I instead opted for something more specific:
"But what about the Terror? And the 90-odd years of political instability that followed?"

His reply, "Oh yeah, I guess we don't talk about that stuff so much."

To the French, the only French history worth studying begins with the Revolution. Everything before that seems to just be lazily lumped in under the heading of "tyranny".

As ahistorical and contemptible as this is, the surest sign that nobody gives a damn about learning any of this is that the Church at Saint Denis remains relatively unruffled despite being located in what is now a heavily Muslim area of Paris. The last I heard of it in the news was back in 2015, when the terrorists behind that year's attacks on Paris were killed by police in Saint Denis in a massive shootout. But as far as I know, there is little evidence of vandalism of the tombs in modern times (unlike the looting of all the valuables in the French Revolution). The simple truth is that even among the Muslims in the area, nobody knows or cares enough to attack it as a symbol. They attack the real symbols of France - theatres, football stadiums, cafes and restaurants.

Charles Martel weeps.

So we have a glorious and storied history of the French monarchy, dating all the way back to Clovis I, assigned to the dustbin.

We can see what is lost alright. I admit that I, unlike my French friend, am far less optimistic about what has been gained.

But somewhere in the back of one's mind while wandering around the Church, an odd niggling question pokes its way to the surface, to disturb one's reverie and melancholy. A question which, indeed, I've wondered about before.

What does it even mean to be the first King of France? Who was this Clovis I fellow? And what on earth happened before that?

Even relatively educated people often have large swathes of gaping ignorance about history, myself included. At the time I was walking around there, I didn't know at all.

The first thing to clarify is that Clovis I wasn't exactly the first King of France. Rather, he was the first King of the Franks. France is an area and a country - that came later. The Franks are a people, or a tribe.

And who were the Franks?

To give you the shortest and pithiest answer, you probably have heard of them and their exploits, but mostly under a different name.

They were the barbarians, destroying and preying on the last vestiges of the Western Roman Empire.

I've been learning about this in Patrick Wyman's excellent podcast series, The Fall of Rome.

They may not have been the Visigoths, sacking Rome under Alaric in 410, or the even more destructive Vandals, sacking it again in 455.

But make no mistake, if you were a supporter of the existing civilisational order at the time, you would have experienced the rise of Clovis I mostly in terms of his turning on and eventually defeating the few remaining serious Roman forces, such as at the Battle of Soissons in 486, and in his consolidating power over the other barbarian tribes.

In other words, Clovis became King of the Franks because he killed all the other Frankish chieftains and leaders, eventually uniting the various barbarian armies and tribes under his rule. That was how you became the first King of the Franks. What this replaced was the prior status of being one warlord of many, among a loose confederation of ethnically related tribes.

As Wyman points out in a number of places, during this period there wasn't actually a sharp distinction between concepts such as
i) 'an invading barbarian army',
ii) 'a barbarian people on the move' (since armies in those days often traveled with soldiers' wives and children, who lived with them), and
iii) 'a Roman army led by a barbarian general with mixed Roman and barbarian troops' (since barbarians had fought on behalf of Rome, in one form or another, for a long time before this, and many of the leaders of this period were either allied with Rome or nominally Roman subordinates at some point, Clovis included).

Moreover, in the general disarray of this period, it's also hard to know how much to view the increasing power of these armies as the cause of the collapse of the Roman Empire, or just the symptom of other groups rising to fill the increasing power vacuum left in the wake of the collapsing state. The distinction is not a clear one, and it doesn't much change what it would have been like to be on the receiving end of it.

If you were a Roman, living through the destruction of the society and structures that had ruled for 800 years, it would have been extraordinarily difficult to look at savages like the Franks and see the possibility of a glorious future monarchy lasting 1300-odd years.

You would only see chaos, slaughter, and despair.

And for a long time, you would be right. There are not many fun stories out of Europe in the 6th Century, or the 7th or 8th for that matter.

But out of the chaos and carnage eventually rose the 37 Kings of France.

I confess, in my darker moments it is indeed quite difficult to look around at this fallen world of ours and see anything but societal decay, warded off only temporarily by technology.

Perhaps right now, that's all there is. But whether or not that's true, your perceptions are apt to make it seem that way regardless. You would have an easier time guessing who will be seen as the Valentinian III of our era than who might ultimately be seen as our Clovis I.

What is being lost is easy to see.

It takes much more judgment to look at the chaos and see the potential in what is yet to come.

What rough beast, its hour come round at last, slouches towards Bethlehem to be born?

Monday, May 1, 2017

On the Pathology of Low Birthrates

One of the important axioms of organisational development is that if you want an organisation to be successful and sustainable, you should make sure it's profitable.

For organisations like businesses, whose whole raison d'être is profit, this doesn't need much explanation. But what about for causes where the organisers don't care much about profit - a renaissance fair, a church, a literary magazine?

There was a great Social Matter article talking about this a month or so ago in the context of the Gulenist movement in Turkey - why would a religious cult also operate a test prep centre?

The reason is that a profitable organisation is self-sustaining. Every organisation needs resources, and profit ensures you won't run out of them. Even if the resources you really need aren't money, profit ensures that a) you don't fail for lack of money, and b) you've got a good shot at acquiring the non-monetary resources you need anyway. Suppose you want supporters - well, would better marketing help? Would free food? Would a great place to hold meetings?

When you forget this lesson, you end up like jwz (whose writing I enjoy, even if I don't agree with all of it) with DNA Lounge (a nightclub I've been to, and very much like) - he made a ton of money in tech, wanted to run a cool nightclub, and didn't care about the money. Then $5 million later, he ran out of money. It sounds both mean and trite at this stage, but if he really cared about the mission of having creative musical venues available, he should have worked damn hard to make it profitable as soon as humanly possible.

But even people who think about this when it comes to profit and organisations often don't think about the equivalent for ideas and cultural practices.

To wit: if you want a culture or idea to survive, the people who practice it must have high birth rates.

Because while organisations propagate themselves by resources, ideas and cultures are carried by people. It doesn't matter how much you love your particular idea - feminism, classical music, the constitution, whatever. If the people who support that idea have below replacement birth rates, and the people who are opposed to that idea have above replacement birth rates, then the prevalence of that idea is being whittled away, slowly but surely. Ideas don't breed directly, but they can still be bred out.

Because ideas, like most things in this world, are heritable. Both genetics and culture mean that parents in general pass their values on to their children. Take away the children, and you take away the people likely to hold the idea tomorrow.

Of course, people are apt to forget this, because it's a slow-moving effect. The faster way ideas spread is through communication across a given population.

Which is all well and good. The more you spread the idea, the more people hold it right now, and, ceteris paribus, the more people will hold it next generation.

Where things get complicated, however, is if the idea itself reduces birthrates directly. This is especially true for ideas like feminism or progressivism in general. In this sense, they are parasitic and pathological. I mean this as a metaphor, but only just: in the bare biological sense, it is close to literal. They reduce the reproductive fitness of their host, simply by reducing the number of offspring it has that survive to adulthood to themselves reproduce. As a consequence, these ideas are like a deadly virus that can only survive by spreading and infecting other hosts. Is reducing the reproductive fitness of your host not the very essence of parasitism?

Ideas that increase procreation are symbiotic in that sense - the idea spreads by increasing the fitness of its host. But as in nature, parasites and diseases can spread and survive, although there is a tradeoff between the mortality rate and the transmission rate. The faster you kill off the host, the faster the disease must also spread, or it kills off itself with the host. In this sense, the fact that progressivism has spread throughout the west with increasing speed, and the fact that it is catastrophic for birth rates, are not a coincidence. The former is a requirement for the latter.

It is an unassailable fact that the ideas, beliefs and circumstances of the modern west are extraordinarily pathological in terms of birth rates. The exact cause of this is hard to pin down, but in some sense it doesn't specifically matter - not only the directly pathological ideas, but also those that tend to co-locate with them, are being selected out. So a taste for classical music rose with the growth of Europe and was able to last for a long time, but is now associated only with low birth rate groups. If you disagree with my assessment that progressivism is considerably to blame for low birth rates, that's fine, because they're all going down together. If you think the answer is just 'wealth' as the cause of low birth rates, then we are ineluctably being selected for poverty.

(The problem with wealth as an explanation, incidentally, is that while it could explain the time series and the current cross-section, it fails entirely with the historical cross-section. Which is to say, for most of history, the rich had more children. For them at least, wealth didn't seem to produce the same pathologically low birth rates that it does for us).

But no matter where exactly it is coming from, the west simply cannot survive long term in its present form. And this is a purely mathematical prediction, not a sociological one. Any set of values that creates below replacement birth rates is pathological, and is actively being bred out.
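To see how purely mathematical it is, here's a toy compounding sketch. The fertility numbers are made up, and the assumption that children always keep their parents' ideas is deliberately too strong, but the direction of the effect is the point:

```python
# Share of a sub-replacement group over generations, assuming ideas pass to children.
def share_after(generations, share_a=0.5, tfr_a=1.5, tfr_b=2.5):
    a, b = share_a, 1 - share_a
    for _ in range(generations):
        a *= tfr_a / 2.0    # children per person is roughly total fertility rate / 2
        b *= tfr_b / 2.0
    return a / (a + b)

for n in (1, 3, 5):
    print(n, "generations:", round(share_after(n), 3))
# Starting at 50/50, the low-fertility group falls to ~37% after one generation,
# ~18% after three, and ~7% after five - whittled away, slowly but surely.
```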

Of course, the other complicating factor is that the west keeps taking in new immigrants. When they arrive, they have high birth rates, before they too end up declining. In the meantime, they acquire at best only a fraction (if any) of the traits that made the west what it was.

Which, if you like the west as it is, or as it was, is a big problem.

But if you're the blind idiot god of social evolution, this is the pathology solving itself. The modern west is pathological, and the dismantling of the circumstances that created it is evolution's revenge.

The ultimate irony of social Darwinism is that while it was pilloried for its racism in predicting the decline of third world populations, on current birthrates it was ultimately the west itself, the very progenitor of the idea, that was the unfit one. Evolution does not work the way most people seem to think, just making stuff awesome according to your particular preference for what that involves.

The biggest question isn't whether the current situation can go on forever. It's only what will replace it. The replacement will be made up of individuals holding ideas that are resistant to whatever set of pressures creates low birth rates. In this sense, we are like a population in the midst of a great plague, knowing that eventually society will only be made up of people with an immune system able to defend against it.

If you want to know who that might be, just look at who is currently having children. The sincerely religious, such as Mormons and Muslims, for one. And those with a very high time preference and few outside options.

There are many forms of non-pathological social structures and ideas that could replace the current one.

One is Victorian England.

Another is Africa 40,000 years ago.

You may care which of these we end up in, but evolution doesn't.

Most likely, it will be neither, but some new combination of traits and ideas. When the dinosaurs get wiped out, the new species don't evolve back into the same old dinosaurs.

The good news, however, is that ideas are not DNA - people can change their ideas much faster than their genes. And whatever pathology is producing our current predicament must be relatively recent in origin, suggesting that fixing it does not necessarily involve going back to the dark ages. I have suggested the birth control basilisk as one possible cause, but the problem is a hard one to pin down.

The bad news is that we seem to be making almost no progress in actually fixing the problem, or even identifying it.

But the big picture lesson stands - there are, and can be, no healthy low-birthrate societies. It is a contradiction in terms.

Sunday, April 23, 2017

The Puritan Hypothesis Personified

We live in a period where Whig history is the only history that most people know. The west conquered the evils of Nazism and racism, and is moving towards a progressive utopia. People don't know the term 'Whig history', but they know the idea alright.

To me, the best summary of Whig history comes from our modern secular saint, Martin Luther King, who summed it up thus:
"The arc of the moral universe is long, but it bends toward justice."
The history of this quote is a fascinating one.

The MLK quote is actually a paraphrasing of a longer quote by Theodore Parker, who said:
Look at the facts of the world. You see a continual and progressive triumph of the right. I do not pretend to understand the moral universe, the arc is a long one, my eye reaches but little ways. I cannot calculate the curve and complete the figure by the experience of sight; I can divine it by conscience. But from what I see I am sure it bends towards justice.
I'd never heard of this guy before. I actually came across this while looking up the MLK quote as part of writing a different post, but ended up down this rabbit-hole instead.

The reason is that the story of Theodore Parker seems almost shockingly tailor-made to support Moldbug's puritan hypothesis of leftism - that it is primarily an offshoot of mainline American Protestantism that came out of the New England Puritans, and over time gradually morphed to replace God with Social Justice.

First off, where would you guess that Theodore Parker was born? Where else, but Massachusetts! Lexington, MA, to be precise.

And for some reason, it wasn't a big surprise to find out that the rest of his life story fits almost eerily into place.

He was an ardent abolitionist, living from 1810 to 1860.

If I told those facts alone, what might you guess about his education and profession?

Would you guess, perhaps, a Unitarian preacher who had both undergraduate and graduate degrees from Harvard University?

Ding ding ding, we have a winner!

Even his ancestry is exactly on point:
His paternal grandfather was John Parker, the leader of the Lexington militia at the Battle of Lexington. 
Remember this the next time conservatives are revering the founding fathers. Their descendants, both literal and intellectual, are leftists. Skip the standard Jefferson and Washington hagiographies and read some Thomas Hutchinson instead.
Among his colonial Yankee ancestors were Thomas Hastings, who came from the East Anglia region of England to the Massachusetts Bay Colony in 1634, and Deacon Thomas Parker, who came from England in 1635 and was one of the founders of Reading.
East Anglia, East Anglia, where have I heard that name before?

That's right, it's from David Hackett Fisher's book 'Albion's Seed'. It's where the English Puritans came from before they moved to Massachusetts.

But so what, a skeptic may say. Who cares about his background if he was able to perfectly capture the importance of aiming towards what is just and right?

Well, perhaps it might do to know exactly what the "justice" was that he thought the universe was bending towards. Because it looks an awful lot like "leftism".

First up, feminism! From Parker himself
"The domestic function of the woman does not exhaust her powers... To make one half of the human race consume its energies in the functions of housekeeper, wife and mother is a monstrous waste of the most precious material God ever made"
and from the mouths of others
Stanton called his sermons "soul-satisfying" when beginning her career, and she credited him with introducing her to the idea of a Heavenly Mother in the Trinity.
I'm no Christian, let alone a biblical scholar, but I apparently missed the Heavenly Mother part of the Trinity.

But don't strain yourself too hard trying to reconcile it, as Parker was pretty upfront about his perversions of Christianity. Next up, trying to bowdlerize the Bible to take out anything he perceived as inconvenient, leaving a mush of vague sentimental spiritualism:
  “I preach abundant heresies,” he wrote to a friend, “and they all go down—for the listeners do not know how heretical they are.” For years he had wrestled with the factuality of the Hebrew Scriptures, and by 1837 he was wishing “some wise man would now write a book…and show up the absurdity of…the Old Testament miracles, prophecies, dreams, miraculous births, etc.’”
In 1841, Parker laid bare his radical theological position in a sermon titled A Discourse on the Transient and Permanent in Christianity, in which he espoused his belief that the scriptures of historic Christianity did not reflect the truth. In so doing, he made an open break with orthodox theology. He instead argued for a type of Christian belief and worship in which the essence of Jesus’s teachings remained permanent but the words, traditions, and other forms of their conveyance did not. He stressed the immediacy of God and saw the Church as a communion, looking upon Christ as the supreme expression of God. Ultimately, he rejected all miracles and revelation and saw the Bible as full of contradictions and mistakes. He retained his faith in God but suggested that people experience God intuitively and personally, and that they should center their religious beliefs on individual experience.
The Bible is all a metaphor filled with mistakes and superstitions, just go with what you feel, man. But I'm still Christian, don't you know.

It will not come as a shock to find out that Parker's successors felt less bound to utter the last part. But Parker himself was definitely ahead of the curve, as you have to be when you're deemed heretical by the Unitarian Church (of all organisations).

By contrast, if you want to find out what someone thinks who actually does take the Bible literally and cares what it says about slavery, read Robert Dabney's 'A Defense of Virginia'. In it, you will find over 100 pages of exhaustive yet fascinating discourse on what exactly the Bible has to say on the slavery question, and it's probably not what you'd think. Dabney was well acquainted with men like Parker, and skewered them wonderfully:
The Socinian and skeptical type of all the evasions of our Scriptural argument has been already intimated. If the most profane and reckless wresting of God's word will not serve their turn to make it speak abolitionism then they not seldom repudiate its authority. One of their leaders, long a professed minister of the Gospel, declares at the close of a train of tortuous sophisms that if he were compelled to believe the Bible countenances slavery he should be compelled to give up the Bible, thereby virtually confessing that he had never been convinced of the infallibility of that which for thirty years he had been pretending to preach to men as infallible. Others more blatant and blasphemous when compelled to admit that both the Bible and the American constitution recognized slavery exclaimed "Give me then an anti slavery constitution, an anti slavery Bible, and an anti slavery God!" 
This is almost exactly what Parker did.

And even outside slavery, the list of causes Parker supported is almost straight out of modern leftist orthodoxy:
As Parker's early biographer John White Chadwick wrote, Parker was involved with almost all of the reform movements of the time: "peace, temperance, education, the condition of women, penal legislation, prison discipline, the moral and mental destitution of the rich, the physical destitution of the poor" though none became "a dominant factor in his experience" with the exception of his antislavery views. He "denounced the Mexican War and called on his fellow Bostonians in 1847 'to protest against this most infamous war.'"
Let's just count up how many modern leftist causes this guy managed to hit - blacks, hispanics, pacifism (of a sort), education, feminism, criminals, poverty, hating the rich. Parker loses points, however, for not having the foresight to also agitate about homosexuality and the environment. Had he been slightly more visionary on these fronts, he'd be a shoo-in for the 2020 Democratic Party nomination.

But still, justice! Who can forget that famous arc of justice, to which the moral universe is inevitably tending?

Well, it might not hurt to examine how exactly Parker advocated pushing that moral universe on its way, because he certainly wasn't interested in letting the universe work things out in its own due course. Parker was unusually far-sighted in terms of how he applied leftist aims too. Behold a life full of civil disobedience, incitement to violence, funding of guerrilla violence, and general Alinsky-style agitation:
He wrote the scathing To a Southern Slaveholder in 1848, as the abolition crisis was heating up and took a strong stance against slavery and advocated violating the Fugitive Slave Law of 1850, a controversial part of the Compromise of 1850 which required the return of escaped slaves to their owners. Parker worked with many fugitive slaves, some of whom were among his congregation. As in the case of William and Ellen Craft, he hid them in his home. Although he was indicted for his actions, he was never convicted.
Guilty as Sin, Free as a Bird. He's the Bill Ayers of 1850.
During the undeclared war in Kansas (see Bleeding Kansas and Origins of the American Civil War) prior to the outbreak of the American Civil War, Parker supplied money for weapons for free state militias. As a member of the Secret Six, he supported the abolitionist John Brown, whom many considered a terrorist. After Brown's arrest, Parker wrote a public letter, "John Brown's Expedition Reviewed," arguing for the right of slaves to kill their masters and defending Brown’s actions.
Obviously, the South were the sole instigators of the Civil War through their reckless secession. They should have just stuck around to deal with guys like Parker, who advocated terrorism and slaves murdering their masters. Be reasonable, Southern slave owners!

Of course, you may think this is all just incidental to the original quote with which we started - the guy liked justice, even if in other arenas of his life he was a bit extreme.

But it turns out these ideas aren't incidental to the main point. They are in fact the essence of the idea. Here's the full context of the quote, from Quote Investigator:
Look at the facts of the world. You see a continual and progressive triumph of the right. I do not pretend to understand the moral universe, the arc is a long one, my eye reaches but little ways. I cannot calculate the curve and complete the figure by the experience of sight; I can divine it by conscience. But from what I see I am sure it bends towards justice.
Things refuse to be mismanaged long. Jefferson trembled when he thought of slavery and remembered that God is just. Ere long all America will tremble.
In the context of everything he did and said, it is hard to read the last line as anything but a threat.

This was in 1853.

12 years and 620,000-odd corpses later, America had done a lot of trembling alright.

This is the context of the famous MLK quote with which we began. You will find a version of this quote on the MLK memorial in Washington D.C., which tells you how much the idea has become a source of bipartisan inspiration.

And this, incidentally, is the second Parker quote lightly paraphrased by a modern secular saint, and memorialised in Washington D.C. The other is even more famous. Parker, as well as agitating for violent overthrow of slavery, was a big fan of democracy:
A democracy — of all the people, by all the people, for all the people
And so at last, we see an odd correspondence between the old and the new.

For in fact,
The arc of the moral universe is long, but it bends toward justice.
in practice means
The arc of the moral universe is long, but it bends toward leftism. 
Which could be ever-so-slightly rephrased as:
Cthulhu may swim slowly. But he only swims left.
Indeed.

When stripped of the marketing, Parker, MLK and Moldbug can all agree on the trend, even if they disagree on how they feel about it.

It's not for nothing that they carve it in marble in Washington D.C.

Thursday, March 30, 2017

On Self-Control and Eating Disorders

Years ago, I had a friend who once described herself in a moment of honesty as having gone through a period where she was, as she put it, an exercise bulimic. She would exercise obsessively, going on runs every day and working out all the time. The label itself was perhaps the most interesting part, and seemed to be her own creation. We have mental categories for eating disorders. Most of us, however, do not have categories for exercise disorders.

Eating disorders, at least in the popular telling, are usually associated with looking excessively emaciated and thin. But instead of just not eating, she ate a more or less regular (albeit modest and vegan) diet and exercised like crazy. The result was looking thin but toned, not the stick figure arms that I always associated with anorexia or bulimia.

And this proved almost the perfect disguise, because someone who just likes to exercise a lot fits the aspirational ideal of our society. We envision them as ourselves but with more willpower, doing all the gym classes that we always meant to do but never got around to doing.

By contrast, someone who simply refuses to eat meaningful quantities of food, certainly in public, strikes many more people as inherently suspicious. Willpower channeled into straight self-denial of something as basic as food, rather than into hard work on something "virtuous" (in the modern telling) like exercise, raises far more alarm bells about sanity. These alarm bells themselves are quite telling, of course. They arise because we live in an inherently self-indulgent society dedicated to consumption and self-centredness, where self-denial seems like a bizarre refutation of what we hold dear. But nonetheless, the perception was what it was.

I have known a number of girls over the years who claimed to have eating disorders at some point or other.

Parenthetically, I even knew one girl who claimed this was a very useful cold reading technique on women. Say to them "I'm guessing that you possibly had an eating disorder at some point, though you may not have ever told anyone about it." The reason it worked, she claimed, is that this described a sufficiently sizable majority of women in the population, so was a safe bet to make regardless of who you were talking to, especially given the ambiguity in understanding of the term 'eating disorder'.

But among those few who claimed to have an eating disorder, there were some things that stuck out.

Firstly, the claim was always in the past tense. "I had an eating disorder some time ago", not "I have an eating disorder". But the story somehow never made it clear that the underlying mindset had changed that much. The circumstances had changed, sure, but the stories tended to lack a defining end which stated "and that's when I figured out that I had to change my way of thinking, and so I did." Instead, the descriptions of past eating disorders always had a distinctly confessional aspect about them. I suspect my cold-reading friend knew whereof she spoke when she added the clause about maybe not telling anyone about it. One does not lightly confess to things which sound like mental health problems.

People nearly always tell you the unflattering things about themselves in oblique ways, and you need to know how to listen. One way is the one above, the rarer case: introspection and honesty have gotten to the point of identifying the flaw, but it gets phrased in the past tense. When you hear "I was [this unflattering thing]", you should understand it to mean "I fear I still am [this unflattering thing], but I struggle and work to try to change it, with varying degrees of success".

The other way people reveal themselves is by generalization to the population at large, particularly when it comes to mental processes. Whenever you hear someone say "people generally think in manner X", pay close attention, especially if the characterisation strikes you as wrong. They are nearly always describing themselves.

My cold-reading friend, for instance, was not among those who had said they'd had an eating disorder themselves. But it would be a very good bet to make.

The second notable aspect of the eating disorder confessions I've heard is the rationale. And they all said the same thing: it wasn't really about being thin. It was about exerting control over one small aspect of their lives. And the need to do this became strong when the rest of their life seemed to be sliding out of control. The message in multiple cases was the same - "At least I can control this part".

Which fits in with the characterisation of the mindset above. What seemed to have changed to make the eating disorder go from present tense to past tense, if you read between the lines, was that the problems making their life uncertain had gradually subsided. That, more than anything else, was what caused them to no longer obsess about their weight and eating. Everyone wants to look thin, pretty and be in good shape. But what was being felt internally, and what sometimes, though less frequently, got observed from the outside, was the excessive control. This was the underlying need - the food was merely the one part they could solve in a life that had gotten chaotic and frightening. In one case it was going to boarding school at age 13. In another it was being in a foreign country and suddenly knowing nobody.

I have a suspicion, though I'm sure many psychiatrists will disagree, that a decent fraction of what we classify as mental disorders really represents points on a continuum of character traits, rather than binary instances of a disease. At some point, we take the general view that a threshold has been crossed into pathological territory, but this is nearly always a judgment call on which people will often disagree. The distinction between being a clinical narcissist, a sub-clinical narcissist, and just a general selfish asshole is not an easy one to pin down.

So it seems to be here.

The underlying tendency here is far more common in its less pathological form - that when you can't manage the large things in your life, you at least manage the small. You may not know where your relationship is heading, or how you're going to find a job, but you can at least get the laundry done and make sure the dish cloths aren't too dirty. At some point, I suspect a lot of us have done this, busying ourselves with the musical score of day-to-day activities while the deckchairs of our life slowly slide down the deck of the Titanic.

What we are really observing with something like anorexia, then, is that the need for control has gotten so great that the person can't even see how they appear to the outside world. Keeping yourself 10kg underweight is very hard work. The pathology is that the need to do something extremely hard and controlled outweighs even the need (or ability) to see that others will perceive you as being mentally ill.

And to be honest, this is a pretty good diagnostic of at least some instances when things have tipped over into pathology - when you can no longer even see how you appear to the outside world. But if one focuses only on this aspect, one tends to treat the symptom of the excess, but not think about the underlying cause, especially in the sub-clinical cases.

For instance, one of the girls I knew said that the lesson she had learned was not that she shouldn't obsessively control her weight, but rather that she needed to obsessively control her weight and weigh a few kilograms more so that people wouldn't think that she looked too thin.

It seems, shall we say, optimistic to think that this has fixed the underlying problem, unless you think that eating disorders are merely a problem of low BMI.

For those of us who are not psychiatrists and not dealing frequently with people who are genuinely mentally ill, the interesting thing about understanding pathological behavior is what it reveals about how to fix our own, non-pathological versions of the same basic trait.

And I suspect the broader lesson is that you should be wary of dealing with the uncertainty and chaos of your life by focusing on the small problems in lieu of the large. Willpower is a finite resource. Spend it on the most important use.

Saturday, March 11, 2017

How to Improve the Discourse on Education Policy

Is there a subject of policy debate in modern society more deranged than education? When it comes to primary and secondary education, the sheer sentimentality, mendacity and surreality of most policy proposals border on the incredible. We just need to Fix The Schools™, then everything will be alright with our wayward youth.

Education, curiously, is one of those subjects on which the divide between the reactionary mindset and the mainstream conservative mindset is starkest.

As far as I can tell, there seem to be three main explanations for why some schools perform poorly.

The leftist mindset is that the problem with "bad schools" (where this is usually taken to mean "schools with poor measures of student academic achievement") is a lack of resources. School funding is tied to property taxes, so rich schools get more money than poor ones, and that's why they perform better. More money will let them buy textbooks, and iPads, and hire better teachers, and have art programs which will distract kids from joining violent gangs, etc.

Of course, this runs up against the problem that we've been throwing money into education, hand over fist, for decades, with literally nothing to show for it. As Scott Alexander discussed a few weeks ago in his post on cost disease:

[Chart from the cost disease post: inflation-adjusted US K-12 spending per student rising steadily for decades while test scores stay essentially flat.]

Maybe another 50% increase and we'll finally Fix The Schools™!

And then there's the mainstream conservative answer: the problem with bad schools is bad teachers and bad incentives. The teachers unions are powerful, and the Democrats are beholden to them, which means that there's no competition across schools, no ability to fire underperforming teachers, no incentives for better performance etc.

This of course runs into the problem that if this were the main driver of educational differences, then states where the Democrats have strong political power should do worse. Do they?

No. If you take, say, 2015 NAEP 8th Grade Mathematics Scores (available here) and correlate them with 2016 Democratic Presidential Vote Shares (available here), you get a whopping correlation of -0.0595. If you're a regression guy, regress mathematics test scores on vote percentage and the t-stat is a paltry -0.417, with an R-squared of 0.0035.

But even this overstates the case, because DC is a huge outlier in vote share at 90.48%, and an absolute sinkhole in terms of test scores. If you take DC out, the correlation is actually positive, at 0.183 (though the t-stat on the regression is still insignificant, at 1.29). If you use 2012 vote shares, perhaps a more typical election, the correlation excluding DC increases to 0.214, and the t-stat is 1.52. That is to say, more Democratic states do, if anything, slightly better, though the effect isn't particularly strong. You can see this just by ranking the states:

State NAEP Math Dem. Vote Share
Massachusetts 297 60.01%
Minnesota 294 46.44%
New Hampshire 294 46.98%
New Jersey 293 54.99%
Vermont 290 55.72%
Wisconsin 289 46.45%
North Dakota 288 27.23%
Virginia 288 49.75%
Indiana 287 37.91%
Montana 287 35.75%
Washington 287 54.30%
Wyoming 287 21.63%
Colorado 286 48.16%
Iowa 286 41.74%
Nebraska 286 33.70%
Utah 286 27.46%
Maine 285 47.83%
Ohio 285 43.56%
South Dakota 285 31.74%
Connecticut 284 54.57%
Idaho 284 27.49%
Kansas 284 36.05%
Pennsylvania 284 47.85%
Texas 284 43.24%
Arizona 283 45.13%
Maryland 283 60.33%
Oregon 283 50.07%
Illinois 282 55.83%
Missouri 281 38.14%
North Carolina 281 46.17%
Rhode Island 281 54.41%
Alaska 280 36.55%
Delaware 280 53.18%
New York 280 58.40%
Georgia 279 45.64%
Hawaii 279 62.22%
Kentucky 278 32.68%
Michigan 278 47.27%
Tennessee 278 34.72%
South Carolina 276 40.67%
Arkansas 275 33.65%
California 275 61.73%
Florida 275 47.82%
Nevada 275 47.92%
Oklahoma 275 28.93%
Mississippi 271 40.11%
New Mexico 271 48.26%
West Virginia 271 26.48%
Louisiana 268 38.45%
Alabama 267 34.36%
District of Columbia 263 90.48%

Admittedly there are a lot more variables you'd want to throw into the regression, but still, the univariate big picture doesn't look like the Republican story either.
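If you want to check the arithmetic yourself, here is a minimal sketch of the calculation, assuming you've typed the table above into a CSV. The file name and column names are my own invention, and the vote shares are assumed to be stored as plain numbers rather than percentage strings.

    # Minimal sketch: correlation and univariate regression of NAEP scores on vote share.
    # Assumes a hypothetical file "naep_vs_vote.csv" with columns state, naep_math,
    # dem_vote_share (vote share as a plain number, e.g. 60.01 for Massachusetts).
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("naep_vs_vote.csv")

    # Simple correlation across all 51 rows (50 states plus DC)
    print(df["naep_math"].corr(df["dem_vote_share"]))

    # Univariate regression; the t-stat on the slope is slope / standard error
    slope, intercept, r, p, se = stats.linregress(df["dem_vote_share"], df["naep_math"])
    print(f"t-stat: {slope / se:.2f}, R-squared: {r ** 2:.4f}")

    # Drop the DC outlier and recompute the correlation
    no_dc = df[df["state"] != "District of Columbia"]
    print(no_dc["naep_math"].corr(no_dc["dem_vote_share"]))

The point isn't the exact decimals, which depend on which election year's vote shares you feed in, but that the sign and size of the relationship are trivial to verify.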

So what's the reactionary position on why there are bad schools? 

Bad schools are primarily due to bad students. Some students are dumb, unruly, lazy, dysfunctional brats. They can't learn, won't learn, and don't learn. You don't even need to take a strong stance on why these differences arise, but just assume that by the time the kids arrive at school, some of them are just a drain and a menace on everyone around them.

And for some reason, this explanation is considered anathema to most right-thinking people. How can you say anything so mean? All students have the potential to succeed, if only they're given the right circumstances!

If the reactionary position strikes you as excessively unkind (especially if its unkindness makes you flinch from accepting its possible truth), I want you to try the following thought experiment.

When you think of "schools", what mental picture comes to mind?

I suspect you are thinking of an idealised brochure, a smiling child at a desk, something that would fit easily as an advertisement for a charity on the side of a bus. The kid is also likely to be young, probably around 5-8 years old, bright-eyed at the world.

Stop thinking of that. Schools aren't like that.

Okay, so what are schools like? 

You don't need me to tell you. You've been to one. 

To borrow an idea from the War Nerd (when he was explaining why it was easy to get young men to fight and die in wars): if you want to think of schools, think of your 9th grade PE class.

Who was in that class? There were some good students, some of your friends that you think fondly of. If you're reading this blog, chances are both you and your friends were pretty high achieving.

Then there were some middle of the road kids, who were nice enough, and filled out the fat part of the bell curve.

Then there was almost certainly a solid rump of kids best described as complete dickheads. Dumb, mean, idiots. That bully who liked to pick on the young kids. That big punk who stole your friend's bike tire. That guy who was thick as two planks, and boasted about taking a crap on some stranger's car while drunk one Saturday night.

Now, think of just those scumbags, because these are almost certainly the underperforming students we're trying to fix. Imagine that you're designing education policy. How can you improve the educational outcomes of those students? Are they suddenly going to apply themselves more if higher property taxes provide them with a free iPad? Are they about to dive into calculus if only they can find some inspiring young teacher with hip and fresh real world examples of differential equations?

Of course they're not. They're just idiots who will make life miserable for whoever is around them. 

If your school has too many such students, it is probably going to be a "bad school". Now, at this stage of argumentation, it is still a matter of conjecture that the scumbag kids of the world are not spread exactly uniformly across every single school district. But is the idea so outlandish? Do you think the adult scumbags are spread precisely uniformly across every neighbourhood and state? Perhaps this matches your experience of traveling around your city or country, but somehow I doubt it. If it were true, you should feel approximately equally happy moving to any neighbourhood in your city, or any state in the country, since everyone is basically the same! Yeah right. And if the annoying adults aren't distributed uniformly, why should the kids be? The first law of behavioral genetics doesn't go away just because you're feeling sentimental about all kids being nice at heart. And given the capacity of nasty kids to have enormous disruptive negative spillovers on the kids around them, it's not clear how much of a difference in distribution you would need to affect the aggregate outcomes. 

But even if you think of the whole distribution of students (rather than just the left tail), did that distribution seem like something pretty fixed over your schooling, or something with a lot of year-to-year variation? Did the students in the bottom third of the class in one year ever suddenly jump to the top third the next year as a result of a really good teacher? Or would you say that the personal traits, and relative test scores, of the students in your class were approximately stable in rank order over your whole school career? How confident are you that your treatment could upset last year's rank order by very much?

If your education policy doesn't seem like it will work on the ne'er-do-well kids in your 9th grade PE class, it's probably not going to work at all.

And thinking about those kids is an incredibly grounding reality check, a cure for the multi-billion dollar sentimental nonsense that every kid just wants to get the best possible education in life. Some do. Some don't. It's not that every child who does badly at school is also mean and of poor character. But thinking about the ones that are is the best cure for hazy, rose-tinted thinking on the subject. It makes it easier to focus on the sheer stubbornness of the problem at hand.

None of this means that the reactionary explanation accounts for the only difference across schools, or that there isn't any role for other factors. But let's just say that it's a hypothesis that seems like it might be worth considering more than it currently is, at least in the public discourse.

It is self-evident that the world has a substantial fraction of dickheads in it.

All of those dickheads were 10 years old once. 

Most of them were probably dickheads back then too.