Zendikar prerelease

At the 2 Klaveren café for the MtG Battle For Zendikar prerelease event. Alex is playing; I’m not.

Abi drove us to the short ferry. We went to the Gamekeeper store first so Alex could buy some more card sleeves, then took the tram to the event. (It’s the same place we came to last week for Alex’s first Friday Night Magic.) Registration ran from about 10:30 to 11:10, after which players were given 50 minutes to build their decks from the six booster packs included in the prerelease box. They take part in six match-ups throughout the day. Alex is having fun.

Dam-tot-dam walk 2015

Abi and I took part in the Dam-tot-Dam walk from Amsterdam to Zaandam again yesterday. It was raining (though not too heavily) when we started at about 08:35, and there was a brief shower around 12:20, but the rest of the time it was fine walking weather. We did the 27km route. Abi’s knees were hurting by the end. If we do it again next year, we’ll take one of the shorter distances.

I also saw my first ever Zaandam flag in the wild! It made me very happy.

This year’s walk was very different from last year’s. The route change was only one part of that: instead of ending the walk at a festively decked-out Burchtkade, it led us to Burgemeester in ‘t Veldpark, which had been turned into the “Dam Tot Dam Park” for the occasion and taken over by massive corporate sponsor tents, most of which were empty and closed. I’m sure they’ll all be open and bustling with special invitees and promotions for the main running event today, but yesterday it felt barren, useless, and not very gezellig, despite the excessively loud live hela-hola band. To get there, the last two kilometres of the route were a tortuous stretch through residential streets too narrow for the number of walkers going there as well as coming back, because the park is way out of the town centre, nowhere near the bus stops, train station, and catering everyone wanted to return to. Making people do another two kilometres just after they’ve plodded through a long walk is not a great idea.

Of course we eavesdropped on the conversations of other walkers around us, and all we heard were complaints. Progress was stop-start as we waited for cars and crossings, and our average walking speed dropped precipitously. Stop-start isn’t so bad at the start of the route as it goes through the centre of Amsterdam (although the traffic management was better there), but at the end of the route, when you’re tired, it feels like a slog. I hope the organisers will reconsider for next year.

The other thing that was different was me. Last year’s Dam-tot-Dam came at the trough of a period of depression that I’m still recovering from and fighting against. For that, and other reasons, it has been a tough year. Last year, the rain at the start of the event would have sent me straight back home. This year, it was a question of whether we would just walk on, or pause to take some shelter during the worst of it. (We sheltered a bit, but only for a few minutes.) On the walk, even when the subject turned to things that were bothering me, there was no knot of anxiety in my belly, and no visceral dislike of everything around me. (And this even after a very short night’s sleep, having taken Alex to his first Friday Night Magic the evening before.)

At the low times it’s hard to remember the highs, and vice versa.

Statistical errors

Some day when I have an abundance of emotional energy to spare, I will write my full rant about the toxic promises of commercial A/B testing. For now, I’ll just reference this:

Consider Motyl’s study about political extremists. Most scientists would look at his original P value of 0.01 and say that there was just a 1% chance of his result being a false alarm. But they would be wrong. The P value cannot say this: all it can do is summarize the data assuming a specific null hypothesis. It cannot work backwards and make statements about the underlying reality. That requires another piece of information: the odds that a real effect was there in the first place. To ignore this would be like waking up with a headache and concluding that you have a rare brain tumour — possible, but so unlikely that it requires a lot more evidence to supersede an everyday explanation such as an allergic reaction. The more implausible the hypothesis — telepathy, aliens, homeopathy — the greater the chance that an exciting finding is a false alarm, no matter what the P value is.

These are sticky concepts, but some statisticians have tried to provide general rule-of-thumb conversions (see ‘Probable cause’). According to one widely used calculation, a P value of 0.01 corresponds to a false-alarm probability of at least 11%, depending on the underlying probability that there is a true effect; a P value of 0.05 raises that chance to at least 29%. So Motyl’s finding had a greater than one in ten chance of being a false alarm. Likewise, the probability of replicating his original result was not 99%, as most would assume, but something closer to 73% — or only 50%, if he wanted another ‘very significant’ result. In other words, his inability to replicate the result was about as surprising as if he had called heads on a coin toss and it had come up tails.

Critics also bemoan the way that P values can encourage muddled thinking. A prime example is their tendency to deflect attention from the actual size of an effect. Last year, for example, a study of more than 19,000 people showed that those who meet their spouses online are less likely to divorce (p < 0.002) and more likely to have high marital satisfaction (p < 0.001) than those who meet offline (see Nature http://doi.org/rcg; 2013). That might have sounded impressive, but the effects were actually tiny: meeting online nudged the divorce rate from 7.67% down to 5.96%, and barely budged happiness from 5.48 to 5.64 on a 7-point scale. To pounce on tiny P values and ignore the larger question is to fall prey to the “seductive certainty of significance”, says Geoff Cumming, an emeritus psychologist at La Trobe University in Melbourne, Australia. But significance is no indicator of practical relevance, he says: “We should be asking, 'How much of an effect is there?', not 'Is there an effect?'”

Source: Scientific method: Statistical errors : Nature News & Comment

Also: “Science Isn’t Broken — it’s just a hell of a lot harder than we give it credit for.” on fivethirtyeight.com
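
The Nature piece doesn’t name the “widely used calculation” it leans on, but its 11% and 29% figures match the minimum-Bayes-factor bound of -e·p·ln(p) (usually credited to Sellke, Bayarri and Berger), so I’m assuming that’s the one. Here’s a rough Python sketch of that conversion, assuming even 50/50 prior odds that a real effect exists:

from math import e, log

def false_alarm_probability(p_value, prior_prob_real_effect=0.5):
    # Minimum Bayes factor in favour of the null hypothesis, using the
    # -e * p * ln(p) bound (valid for p < 1/e). This is the *best case*
    # the data allow for the alternative hypothesis.
    min_bayes_factor = -e * p_value * log(p_value)
    # Turn the prior probability of a real effect into prior odds of the
    # null, multiply by the Bayes factor, and convert the posterior odds
    # back into the probability that a "significant" result is a false alarm.
    prior_odds_null = (1 - prior_prob_real_effect) / prior_prob_real_effect
    posterior_odds_null = prior_odds_null * min_bayes_factor
    return posterior_odds_null / (1 + posterior_odds_null)

for p in (0.05, 0.01):
    print(f"p = {p}: false-alarm probability of at least "
          f"{false_alarm_probability(p):.0%}")
# p = 0.05: false-alarm probability of at least 29%
# p = 0.01: false-alarm probability of at least 11%

Shrink prior_prob_real_effect (the “implausible hypothesis” case from the quote) and the false-alarm probability climbs quickly, no matter how small the P value looks.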

Kevin Kelly on time vs. money

Here is what I learned from 40 years of traveling: Of the two modes, it is far better to have more time than money.

When you have abundant time you can get closer to the core of a place. You can hang around and see what really happens. You can meet a wider variety of people. You can slow down until the hour that the secret vault is opened. You have enough time to learn some new words, to understand what the real prices are, to wait out the weather, to get to that place that takes a week in a jeep.

Money is an attempt to buy time, but it rarely is able to buy any of the above. When we don’t have time we use money to try to get us to the secret door on time, or we use it to avoid needing to know the real prices, or we use money to have someone explain to us what is really going on. Money can get us close, but not all the way.

Source: More time is better than more money. – Ronda, Spain — A Hi Moment