theunitofcaring

theunitofcaring:

 It is my contention that the tendency of strategy games to turn even the woolliest of liberals into ravening tyrants is a result of a perspective that the games foist upon us. It is the same perspective that politicians have foisted upon them when they gain power. Indeed, strategy games turn liberals into fascists for the same reason that becoming President turns liberal Democratic Presidential candidates into soul-less autocrats who order air strikes on villages, turn a blind eye to torture and send the national guard to deal with people who have been flooded out of their homes. People placed in positions of power do not become authoritarian because the system is ‘rigged’, they become authoritarian because in order to control a state they have to see the world like a state — and the state cares no more for individual humans than we do for the individual cells in our bodies. […]

 [a player] is not the person who decided that Sim cities run more smoothly if workers are oppressed. However, she is the person who decided that her Sim city needed to run smoothly. ‘Smoothness’ is not a human value, as the efficiency of an entire city or civilisation really does not matter to an individual human being. ‘Smoothness’ is an aesthetic value that only becomes apparent when you detach yourself from the limited viewpoint of an individual human in order to look at the world from a detached perspective.

(from this article, h/t Kaj on Facebook)

I disagree with the article, mostly because the efficiency of a city or civilization does affect the wellbeing of individual human beings: those are the societies that can afford nice things like mosquito nets and universal basic income. And most people who aren’t politicians also suffer from scope insensitivity and are basically indifferent to the suffering of faraway people; it is not because they are thinking as a state, it is because the world has lots of bad things and a million is a statistic. But the article reminded me of a question I’ve put way too much thought into, which is how to ethically play Civilization.

(To be clear, I don’t think it’s unethical to play games as a maniacal tyrant and I grumpily avoid any game that’s all about how you can be a good person while playing it. Your media consumption has nothing to do with your goodness.)

But still the only way I’ve ever played Civilization, and Steam says I’ve played many hundreds of hours of Civilization, is while wondering: if the lives of people in Civilization were morally important, if you were playing god on a chessboard with real thinking, feeling people, how should you play?

I started out a pacifist. I never ever started wars, and if someone else attacked me I asked for peace the first turn it was available. I had almost no military and kept taxes low and spent all my energies on libraries and universities and was clearly the most virtuous autocrat in all of ancient history.

This, obviously, loses you the game. For a long time I was indignant about this; I would be a peaceful, trading, thriving civilization, and people kept declaring war on me. I’d sign treaties with them and they’d use the treaties to rearm and sneak-attack me. I’d give back their conquered cities and they’d burn mine to the ground. I was too virtuous to win and felt very indignant about it.

I started feeling less indignant when I noticed that “too virtuous to achieve a morally good outcome” is not actually a thing, except under very silly conceptions of virtue. Getting conquered by some other civilization is probably not the best way to maximize the utility of the people in your game. Being good means getting good outcomes. In Civ you probably want to become the dominant power, because then you can give everyone good lives and badger your neighbors into doing so and lean on people for nonproliferation treaties and conquer the stars and so forth.

So I became a pacifist 2.0: I maintained a large enough military to deter fights, and once someone started one I asked my allies to attack them too, and I didn’t always offer peace terms as soon as possible, not if I could extend the war a couple of rounds and make sure they couldn’t do it again. (The Civ AI doesn’t respond well to incentives, so ‘predictably retaliate’ isn’t a good gameplay strategy. This might be truer in real life than game theorists think, considering how many ‘predictably retaliate’ treaties have been set up and then promptly started a war.)

But then I started wondering: am I still playing for the moral high ground, rather than the morally best outcome? Maybe I should aim to take over the world as fast as possible, since my literacy rate is twice as high as anyone else’s and my citizens live longer and happier lives and how can I justify letting people live in a desperately poor dictatorship next door? The real life answer to that is “we tried it, didn’t work and gave us lots of incentives to be evil”, but Civ gameplay doesn’t strongly deter it; should I assume that all those invisible evils are happening even though there’s no associated mechanic? And, I mean, an effective Civilization altruist might decide to minimize time-to-starship or maximize odds of starship, on the grounds that an intergalactic civilization is the most important thing.

If we were to invent games that challenge the tendency to abstract away the lives of millions of people, and thereby to get bad at making those lives better, I would want a game that does not reward personal virtue: one where you get the good ending not by “avoiding any of the actions that are morally bad” but by bringing about the world with the most good. Because “I refuse to do Wrong things” is just much easier, and less interesting as a gameplay constraint, than figuring out how to make a good world.

…though I’d love if real politicians playing the real world game of statecraft were ardent ideological pacifists. 

theunitofcaring

On Rejection and Casual Writing

Three days ago, I went through a traditional rite of passage for junior academics: I received my first rejection letter on a paper submitted for peer review. After I received the rejection letter, I forwarded the paper to two top professors in my field, who both confirmed that the basic arguments seem to be correct and important. Several top faculty members have told me they believe the paper will eventually be published in a top journal, so I am actually feeling more confident about the paper than before it got rejected.

I am also very frustrated with the peer review system. The reviewers found some minor errors, and some of their other comments were helpful in the sense that they reveal which parts of the paper are most likely to be misunderstood. However, on the whole, the comments do not change my belief in the soundness of the idea, and in my view they mostly show that the reviewers simply didn’t understand what I was saying.

One comment does stand out, and I’ve spent a lot of energy today thinking about its implications: Reviewer 3 points out that my language is “too casual”. I would have had no problem accepting criticism that my language is ambiguous, imprecise, overly complicated, grammatically wrong or idiomatically weird. But too casual? What does that even mean? I have trouble interpreting the sentence to mean anything other than an allegation that I fail at a signaling game where the objective is to demonstrate impressiveness by using an artificially dense and obfuscating academic language.

From my point of view, “understanding” something *means* that you are able to explain it in casual language. When I write a paper, my only objective is to allow the reader to understand what my conclusions are and how I reached them. My choice of language is optimized only for that objective, and I fail to understand how it is even possible for it to be “too casual”.

Today, I feel very pessimistic about the state of academia and the institution of peer review. I feel stronger allegiance to the rationality movement than ever, as my ideological allies in what seems like a struggle about what it means to do science. I believe it was Tyler Cowen or Alex Tabarrok who pointed out that the true inheritors of intellectuals like Adam Smith are not people publishing in academic journals, but bloggers who write in casual language. I can’t find the quote, but today it rings more true than ever.

I understand that I am interpreting the reviewer’s choice of words in a way that is strongly influenced both by my disappointment at being rejected and by my pre-existing frustration with the state of academia and peer review. I would very much appreciate it if anybody could steelman the sentence “the writing is too casual”, or otherwise help me reach a less biased understanding of what just happened.

The paper is available at https://rebootingepidemiology.files.wordpress.com/2016/03/effect-measure-paper-031716.pdf. I am willing to send a link to the reviewers’ comments by private message to anybody who is interested in seeing it.

mugasofer

neoliberalism-nightly asked:

Can you give me one or two good articles on why people should use QALYs for utilitarian calculations?

slatestarscratchpad answered:

What level are we talking about here? Do you want to know what QALYs are, or why they might be better than DALYs or some other complicated thing?

mugasofer:

slatestarscratchpad:

neoliberalism-nightly:

What are DALYs?

So: QALYs are quality-adjusted life years. They’re years of your life, but penalized for time spent in bad health. For example, researchers surveyed some people and found that they would really dislike being blind, such that a year of their life blind was only half as good as a year of their life sighted. So a year blind is worth 0.5 QALYs.

DALYs are the same, except that they also incorporate age. Since being old sucks (citation needed), maybe a year of your life at 80 is only worth half as much as a year of your life at 20 (this is an example, I don’t know the actual weighting function).

The reason these are useful is to compare the costs and benefits of different interventions. For example, which should we do - cure blindness or cure malaria? In order to do this calculation, we need to know the utility weights of blindness vs. death in ways that are convertible with each other.

QALYs are useful because we assume (maybe falsely) that everyone values a QALY the same amount - certainly this is more true than in the case of money, where a poor person really wants $1000 but a rich person doesn’t care about it at all.

Suppose that it cost $50 to cure one case of blindness in a child who would go on to live 50 more years. And it costs $1000 to cure one case of otherwise-fatal malaria in a child who would go on to live 50 more years. So we calculate:

Since blindness costs half a QALY per year, preventing 50 years of blindness produces 25 QALYs. Since that costs $50, that intervention costs $2 per QALY.

Since death costs one QALY per year, preventing 50 years of death produces 50 QALYs. Since that costs $1000, that intervention costs $20/QALY.

Therefore, the blindness intervention is more effective charity than the malaria intervention.

Also, the First World medical system has decreed that it will aim to do all treatments that cost less than $60,000/QALY, so this gives us a good measure of medical cost-effectiveness. If somebody invents a surgery that will make you live one year longer, but it costs $1 million, it’s not worth Medicare’s resources - in the very absolute sense that it will take away money from the Medicare pot that could be used more efficiently on someone else. Once you agree to use all money as efficiently as possible, $60,000/QALY is the number that falls out of how much funding the system has and how much various medical interventions cost.
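The arithmetic above can be sketched in a few lines. The dollar figures and quality weights are the illustrative numbers from the text, not real-world estimates:

```python
# Worked version of the cost-per-QALY comparison above, using the
# hypothetical numbers from the text (not real intervention costs).

def cost_per_qaly(cost, years_gained, quality_weight_gain):
    """Cost divided by QALYs produced (years * gain in quality weight)."""
    qalys = years_gained * quality_weight_gain
    return cost / qalys

# Curing blindness: 50 years at weight 1.0 instead of 0.5 -> gain of 0.5/year.
blindness = cost_per_qaly(cost=50, years_gained=50, quality_weight_gain=0.5)

# Curing fatal malaria: 50 years at weight 1.0 instead of 0.0 -> gain of 1.0/year.
malaria = cost_per_qaly(cost=1000, years_gained=50, quality_weight_gain=1.0)

print(blindness)  # 2.0  -> $2 per QALY
print(malaria)    # 20.0 -> $20 per QALY
```

Under the post’s threshold logic, any intervention whose output here comes in under $60,000/QALY would clear the bar, and the lower number wins when budgets force a choice.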

DALYs are probably better if (like me) you think that people probably value some parts of their lives more than others, but they’re harder to calculate with.

To get Rationalist for a moment: shouldn’t all these numbers be completely thrown off by the fact that people overestimate how much being blind would affect their life?

(Wikipedia on Impact Bias; the article is behind a paywall, but the abstract claims a “number of studies” attesting to this.)

It seems like, at the very least, you should be surveying people who have gone blind at some point, asking them how much they would pay to get their sight back, and comparing that to the amount healthy people will pay to avoid going blind. Or something. But it doesn’t seem like most people who compile QALY figures do even that much to ensure accuracy.

QALY researchers are well aware of the difference between asking healthy people to value a hypothetical state of health, and asking people who are currently experiencing that health state.  There is an extensive literature on this and it is discussed in introductory courses on medical decision making.  See for instance the standard introductory textbook by Hunink, at http://www.amazon.com/Decision-Making-Health-Medicine-Integrating/dp/1107690471

There are instruments for measuring QALY based on the valuations of people who are sick, and other instruments for measuring QALY based on valuations of healthy people. There are arguments on both sides, but I agree with you that it is better to use the valuations of sick people. However, you have to be aware that a consequence of this is that we will actually be willing to spend *less* money to cure people.

mugasofer Source: slatestarscratchpad
slatestarscratchpad

Scott, there is an additional major difference between DALYs and QALYs which you did not cover.

QALYs are defined in terms of an individual’s utility from birth (or from the time a decision is made) until death. During that time, he may experience some person-time that to him is worth 50% of what it would have been worth if he was in full health. At the end of his life, if he was in full health until he was 75 and then lived 10 years at a quality weight of 0.5, you can meaningfully say that he lived 80 quality-adjusted life years.

DALYs are defined in terms of an individual’s *lost* utility from the onset of disease until the time he would have died had he not had the disease. This is a counterfactual concept. Since we can never know when a person would have died if he had not had the disease, this makes the concept itself kind of metaphysical at the individual level.

Of course, if you are going to estimate the effect of an intervention in terms of QALYs, you are also going to have to make assumptions about what would have happened if you cured the disease. The difference between the two approaches is whether the counterfactual theory is incorporated in the definition of the construct, or if it is something that you have to keep in mind when you do the statistics. 

The philosophy behind QALYs is that for any individual, that person has one potential QALY outcome under treatment, and another potential QALY outcome under no treatment.  You as the scientist can only observe one of the two potential outcomes, but with the right statistics you may be able to compare the two potential outcomes on average in the population he was part of.

The philosophy behind DALYs is that for any individual, you can somehow observe the period of time between their death and the time their death would have occurred if the world was different.
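The QALY side of this contrast is easy to make concrete: a lifetime QALY total is just a sum of (duration × quality weight) over health states, as in the 75-years-plus-10-discounted-years example above. A minimal sketch:

```python
# Lifetime QALYs as a sum of (years * quality weight) over health states,
# using the example from the text: 75 years at full health, then 10 years
# at a quality weight of 0.5, for 80 quality-adjusted life years in total.
# (The DALY analogue has no such direct sum: it requires a counterfactual
# lifespan that is never observed for any individual.)

def lifetime_qalys(health_states):
    """health_states: list of (years, quality_weight) pairs."""
    return sum(years * weight for years, weight in health_states)

observed = lifetime_qalys([(75, 1.0), (10, 0.5)])
print(observed)  # 80.0
```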

-

I once took a very formal, mathematical course on decision theory for public health. When we got to the theory of how to measure utility, the course focused almost exclusively on QALYs instead of DALYs, in part because QALYs are conceptually cleaner and make much more sense in the von Neumann-Morgenstern axiom system.

In VNM, individuals choose between two lotteries. For instance, you may offer a person a choice between lottery 1 (which gives you an x% chance of living the rest of your life at full health, and otherwise you die immediately) and lottery 2 (which gives you a 100% chance of living the rest of your life at your current health state). This corresponds to the idea of QALYs, where the quality weight is the probability of full health that would make a person indifferent between the two lotteries.

In contrast, you can’t really describe the DALY idea of “years lost to disease” in terms of a choice between lotteries.
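The standard-gamble logic described above can be written out explicitly. This is a sketch of the indifference argument, not a real elicitation instrument; the indifference probability is a hypothetical survey response:

```python
# Standard-gamble derivation of a QALY quality weight under the VNM axioms.
# If a person is indifferent between lottery 1 (probability p of full
# health, otherwise immediate death) and lottery 2 (certain life in the
# current health state), then with u(full health) = 1 and u(death) = 0:
#   u(current) = p * 1 + (1 - p) * 0 = p
# so the quality weight of the current state is exactly p.

def quality_weight_from_standard_gamble(indifference_probability):
    """The quality weight equals the indifference probability p itself."""
    return indifference_probability

# E.g. indifferent at a 50% risk of death -> weight 0.5, matching the
# blindness weight used earlier in the thread.
print(quality_weight_from_standard_gamble(0.5))  # 0.5
```

The DALY concept has no analogous construction, which is the point of the contrast above: there is no lottery whose certainty equivalent is “years you would have lived in a different world”.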

My sense is that there is a group of people around Christopher Murray who push for DALYs, and pretty much everyone else ignores them because QALYs are simply defined in a better way. 

slatestarscratchpad