Comment author: casebash 15 June 2017 01:57:41PM *  0 points

I've expanded the first paragraph and added a hypothetical example. Let me know if this clarifies the situation.

EDIT: Oh, I also added in a direct response to your comment.

Comment author: Michael_Wulfsohn 16 June 2017 12:42:56PM 0 points

Thanks, it does a bit.

What I was saying is that if I were Andrew, I'd make it crystal clear that I'm happy to make the cup of tea, but don't want to be shouted at; there are better ways to handle disagreements, and demands should be framed as requests. Chances are that Bob doesn't enjoy shouting, so working out a way of making requests and settling disagreements without the shouting would benefit both.

More generally, I'd try to make the relationship less "transactional": you act as partners willing to advance each other's interests, with more trust, rather than only doing things in expectation of reward.

Comment author: Michael_Wulfsohn 14 June 2017 07:24:44AM 3 points

Sounds like a really interesting and worthwhile topic to discuss. But it's quite hard to be sure I'm on the same page as you without a few examples. Even hypothetical ones would do. "For reasons that should not need to be said" - unfortunately I don't understand the reasons; am I missing something?

Anyway, speaking in generalities, I believe it's extremely tempting to assume an adversarial dynamic exists. Nine times out of ten, it's probably a misunderstanding. For example, if a condition is given that isn't palatable, it's worth finding out the underlying reasons for the condition being given, and trying to satisfy them in other ways. Since humans have a tendency towards "us vs them" tribal thinking, there's considerable value in making the effort to find common ground, establish mutual understanding, and reframe the interaction as a collegial rather than adversarial one.

This isn't meant as an argument against what you've said.

Comment author: MichaelPlant 06 December 2016 11:32:24AM *  0 points

Hello Michael,

Yeah, I totally agree. The scope of what I was talking about was more limited. If there were clearly net WAS, we'd have to weigh up the apparent benefits of ecosystem destruction (i.e. less animal misery) against the sort of costs you're talking about.

My aim was to challenge the argument about there being net WAS. Unless there is net WAS (or you're a negative utilitarian), the case for habitat destruction looks pretty thin anyway.

FWIW, I don't think the distinction between experiencing and remembering selves is a problem for a hedonic framework. In fact, that distinction requires the assumption that people do feel things and can rate how bad they are, and those ratings can then be compared to their memories. That stuff is a problem for our ability to make good affective forecasts (which I admit we suck at).

Comment author: Michael_Wulfsohn 06 December 2016 12:59:45PM 0 points

Ah, you're right about the hedonistic framework. On re-reading your intro I think I meant the idea of using pleasure as a synonym for happiness and taking pain and suffering as synonyms for unhappiness. This, combined with the idea of counting minutes of pleasure vs. pain, seems to focus on just the experiencing self.

Comment author: Michael_Wulfsohn 06 December 2016 11:14:22AM 0 points

Thanks for the post. I doubt the length is a problem. As long as you're willing to produce quality analysis, my guess is that most of the people on this forum would be happy to read it.

My thoughts are that destruction of ecosystems is not justifiable, especially because many of its effects are probably irreversible (e.g. extinction of some species), and because there is huge uncertainty about its impact. The uncertainty arises because of the points you make, and because of the shakiness of even some of the assumptions you use, such as the hedonistic framework. (For example, in humans the distinction between the "experiencing" and "remembering" selves diminishes the value of this framework, and we don't know the extent to which it applies to animals.) Additional uncertainty exists because we do not know what technological capabilities we might have in the future to reduce wild animal suffering. So almost regardless of the specifics, I believe it would be better to wait at least until we know more about animal suffering and humanity's future capabilities before seriously considering the irreversible and drastic measure of destroying habitats. This might be just a different point of emphasis rather than something you didn't cover.

Comment author: Milan_Griffes 14 November 2016 06:20:04AM *  1 point

I basically agree with your critique, though I'd say my assumptions are more naïve than arbitrary (mostly semantic; the issues persist either way). On reflection, I don't think I've arrived at any solid conclusions here, and this exercise's main fruit is a renewed appreciation of how tangled these questions are.


I'm getting hung up on your last paragraph: "However, if it's 10 or 20, then you're probably going to be led astray by spurious results."

This is pretty unsatisfying – thinking about the future is necessarily speculative, so people are going to have to use "arbitrary" inputs in their models for want of empirical data. If they only use a few arbitrary inputs, their models will likely be too simplistic to be meaningful. But if they use many arbitrary inputs, their models will give spurious results? It sort of feels like an impossible bind for the project of modeling the future.

Or maybe I'm misunderstanding your definition of "arbitrary" inputs, and there is another class of speculative input that we should be using for model building.

Comment author: Michael_Wulfsohn 15 November 2016 03:05:05PM 0 points

Sure. When I say "arbitrary", I mean not based on evidence, or on any kind of robust reasoning. I think that's the same as your conception of it.

The "conclusion" of your model is a recommendation between giving now vs. giving later, though I acknowledge that you don't go as far as to actually make a recommendation.

To explain the problem with arbitrary inputs, when working with a model, I often try to think about how I would defend any conclusions from the model against someone who wants to argue against me. If my model contains a number that I have simply chosen because it "felt" right to me, then that person could quite reasonably suggest a different number be used. If they are able to choose some other reasonable number that produces different conclusions, then they have shown that my conclusions are not reliable. The key test for arbitrary assumptions is: will the conclusions change if I assume other values?
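That key test can be sketched in a few lines of code. Everything here is invented for illustration (the toy decision rule, the 1.5x impact multiplier, the ten-year horizon, and the candidate discount rates are all hypothetical); the point is only the shape of the check: re-run the model with every value a critic might reasonably propose, and see whether the conclusion survives.

```python
# Sensitivity check: does the conclusion survive reasonable
# alternatives to an arbitrary input?

def conclusion(discount_rate):
    """Toy decision rule: recommend giving now if the value of giving now
    exceeds the discounted value of a (hypothetical) 1.5x impact in 10 years."""
    value_now = 1.0
    value_later = 1.5 / (1 + discount_rate) ** 10
    return "give now" if value_now > value_later else "give later"

# Try every value a critic might reasonably suggest instead of ours.
candidate_rates = [0.01, 0.02, 0.03, 0.05, 0.08]
conclusions = {r: conclusion(r) for r in candidate_rates}
print(conclusions)

# The conclusion is robust only if it is the same under all candidates.
robust = len(set(conclusions.values())) == 1
print("Robust to this assumption:", robust)
```

With these numbers the recommendation flips from "give later" at low discount rates to "give now" at higher ones, so the conclusion is not robust and the model can't settle the question on its own.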

That said, arbitrary assumptions can be helpful if you want to conduct a hypothetical "if this, then that" analysis, to help understand a particular dynamic at play, in the spirit of Bayesian reasoning. But this is really hard if you've made lots of arbitrary assumptions (say 10-20); it's difficult to get any helpful insights from "if this and this and this and this and........, then that".

So yes, we are in a bind when we want to make predictions about the future where there is no data. Who was it that said "prediction is difficult, especially about the future"? ;-) But models that aren't sufficiently grounded in reality have limited benefit, and might even be counterproductive. The challenge with modelling is always to find ways to draw robust and useful conclusions given what we have.

Comment author: Michael_Wulfsohn 15 November 2016 09:11:00AM 3 points

EAs like to focus on the long term and embrace probabilistic achievements. What about pursuing policy reforms that are currently inconsequential, but might have profound effects in some future state of the world? That sort of reform will probably face little resistance from established political players.

I can give an example of something I briefly tried when I was working in Lesotho, a small, poor African country. One of the problems in poor countries is called the "resource curse". This is the counter-intuitive observation that the discovery of valuable natural resources (think oil) often leads to worse economic outcomes. There are a variety of reasons, but one is that abundant natural resources often cause countries with already-weak institutions to become even more corrupt, as powerful people scramble to get control of the resource wealth, methodically destroying checks and balances as they go.

In Lesotho, non-renewable natural resources (diamonds) currently account for only a small portion of GDP (around 10%). I introduced the idea of earmarking such natural resource revenues received by the government as "special", to be used only for infrastructure, education, and similar projects, instead of effectively just being consumed (for more info on this idea see this article or google "adjusted net savings"). Although this change would not have huge consequences right now, I thought it might if there were a massive natural resource discovery in Lesotho in the future. Specifically, Lesotho might be able to avoid some of the additional corruption by already having a structure set up to protect the resource revenues from being squandered.

The idea I'm putting forward for a potential EA policy initiative is to pursue a variety of policy changes that seem painless, even inconsequential, to policymakers now, but have a small chance of a big impact in some hypothetical future. The idea is to get the right reforms passed before they become politically contentious. While it can be hard to get policymakers to pay attention to issues seen as small, there are plenty of examples of political capture that could have been mitigated by early action. And this kind of initiative is probably relatively neglected given humanity's generally short-term focus. I think EAs are uniquely well placed to prioritize it.

Comment author: Michael_Wulfsohn 11 November 2016 04:09:29AM 0 points

On political reform, I'm interested in EAs' opinions on this one.

In Australia, we have compulsory voting. If you are an eligible voter and you don't register and show up on election day, you get a fine. Some people do submit a blank ballot paper, but very few. I know this policy is relatively uncommon among western democracies, but I strongly support it. Basically it leaves the government with fewer places to hide.

Compulsory voting of course reduces individual freedom. But that reduction is small, and the advantages from (probably) more inclusive government policy seem well worth it. I've heard it said that if this policy were implemented in the US, then the Democrats would win easily. I can't vouch for the accuracy of that, but if it's true, then in my opinion it means that the Democrats should be the ones in power.

Comment author: Michael_Wulfsohn 09 November 2016 06:52:33AM 1 point

Sorry, this is going to be a "you're doing it wrong" comment. I will try to criticize constructively!

There are too many arbitrary assumptions. Your chosen numbers, your categorization scheme, your assumption about whether giving now or giving later is better in each scenario, your assumption that there can't be some split between giving now and later, your failure to incorporate any interest rate into the calculations, your assumption that the now/later decision can't influence the scenarios' probabilities. Any of these could have decisive influence over your conclusion.

But there's also a problem with your calculation. Your conclusion is based on the fact that you expect higher utility to result from scenarios in which you believe giving now will be better. That's not actually an argument for deciding to give now, as it doesn't assess whether the world will be happier as a result of the giving decision. You would need to estimate the relative impact of giving now vs. giving later under each of those scenarios, and then weight the relative impacts by the probabilities of the scenarios.
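The calculation described above might look like the following sketch. The scenario names, probabilities, and impact numbers are entirely made up for illustration; the structure is what matters: estimate the impact of each strategy under each scenario, then compare probability-weighted expectations, rather than comparing the utility of the scenarios themselves.

```python
# Compare giving strategies by expected impact across scenarios,
# not by how much utility the scenarios themselves contain.

scenarios = {
    # name: (probability, impact_if_give_now, impact_if_give_later)
    "fast growth": (0.3, 10.0, 14.0),
    "stagnation":  (0.5,  8.0,  6.0),
    "catastrophe": (0.2,  5.0,  1.0),
}

# Sanity check: the scenario probabilities must sum to one.
assert abs(sum(p for p, _, _ in scenarios.values()) - 1.0) < 1e-9

ev_now = sum(p * now for p, now, _ in scenarios.values())
ev_later = sum(p * later for p, _, later in scenarios.values())

print(f"E[impact | give now]   = {ev_now:.2f}")
print(f"E[impact | give later] = {ev_later:.2f}")
print("Recommend:", "give now" if ev_now > ev_later else "give later")
```

Note that with these invented numbers "give later" wins in the most pleasant scenario but still loses in expectation, which is exactly the distinction the paragraph above is drawing.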

Don't stop trying to quantify things. But remember the pitfalls. In particular, simplicity is paramount. You want to have as few "weak links" in your model as possible; i.e. moving parts that are not supported by evidence and that have significant influence on your conclusion. If it's just one or two numbers or assumptions that are arbitrary, then the model can help you understand the implications of your uncertainty about them, and you might also be able to draw some kind of conclusion after appropriate sensitivity testing. However, if it's 10 or 20, then you're probably going to be led astray by spurious results.

Comment author: Michael_Wulfsohn 25 October 2016 07:32:32AM 2 points

I agree that EAs should pay more attention to systemic risk. Aside from exerting indirect influence on many concrete problems, it is also one of the few methods available to combat the threat of unknown risks (or equivalently increase our ability to capitalize on unknown opportunities). Achieving positive systemic change may also be more sustainable than relying on philanthropy.

In particular, I like the global governance example as a cause. This can be seen as improving the collective intelligence of humanity, and increasing the level of societal welfare we are able to achieve. Certain global public goods are simply not addressed, even despite much fanfare in the case of carbon emission abatement. Better global governance would thus create new possibilities for our species.

A full-fledged world government might be the endgame, but in the meantime small advances might be made to existing institutions like the UN and the EU, as you suggest. Unfortunately this can be very difficult; removing veto power in the UN Security Council is a case in point. Fundamentally, any advance on this front requires countries to sacrifice part of their own sovereignty, which seldom feels comfortable. But fortunately the general trend since WWII has been towards more global coordination, the recent visible setback of Brexit notwithstanding. My personal belief is that any acceleration of this trend could have huge positive consequences.