Magnus Vinding

Researcher @ Center for Reducing Suffering
1602 karma · Joined May 2018 · Copenhagen, Denmark
magnusvinding.com/

Bio

Working to reduce extreme suffering for all sentient beings.

Author of Suffering-Focused Ethics: Defense and Implications; Reasoned Politics; & Essays on Suffering-Focused Ethics.

Co-founder (with Tobias Baumann) of the Center for Reducing Suffering (CRS).

Ebooks available for free here and here.

Comments (93) · Topic contributions (6)

FWIW, I don't see that piece as making a case against panpsychism, but rather against something like "pansufferingism" or "pansentienceism". In my view, these arguments against the ontological prevalence of suffering are compatible with the panpsychist view that (extremely simple) consciousness / "phenomenality" is ontologically prevalent (cf. this old post on "Thinking of consciousness as waves").

The following list of reports may or may not be helpful to include in the 'Further reading' section, but I don't think that's for me to decide since it's collected by me and published on my blog: https://magnusvinding.com/2023/06/11/what-credible-ufo-evidence/

A similar critique has been made in Friederich & Wenmackers' article "The future of intelligence in the Universe: A call for humility", specifically in the section "Why FAST and UNDYING civilizations may not be LOUD".

Yeah, it would make sense to include it. :) As I wrote, "Robin Hanson has many big ideas", and since the previous section was already about signaling and status, I just mentioned some other examples here instead. Prediction markets could have been another one (though they're included under futarchy).

Thus it is not at all true that we ignore the possibility of many quiet civs.

But that's not the claim of the quoted text, which is explicitly about quiet expansionist aliens (e.g. expanding as far and wide as loud expansionist ones). The model does seem to ignore those (and such quiet expansionists might have no borders detectable by us).

Thanks, and thanks for the question! :)

It's indeed not obvious what I mean when I write "a smoothed-out line between the estimated growth rate at the respective years listed along the x-axis". It's neither the annual growth rate in that particular year in isolation (which is subject to significant fluctuations), nor the annual average growth rate from the previously listed year to the next listed year (which would generally not be a good estimate for the latter year).

Instead, it's an estimated underlying growth rate at that year based on the growth rates in the (more) closely adjacent years. I can see that the value I estimated for 2021 was 2.65 percent, which is the average growth rate over 2015-2022 (according to data from The World Bank). One could also have chosen, say, 2020-2022, which would yield an estimate of 2.01 percent, but that's arguably too low an estimate given the corona recession.
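For concreteness, here is a minimal sketch of that kind of windowed averaging. The GDP levels are hypothetical placeholders (not the actual World Bank figures), and taking the arithmetic mean of year-over-year rates, rather than a compound average, is a further modeling choice:

```python
# Minimal sketch of a windowed-average growth estimate.
# The GDP levels below are hypothetical placeholders, not World Bank data.

gdp = {
    2015: 100.0, 2016: 102.8, 2017: 105.6, 2018: 108.3,
    2019: 111.0, 2020: 107.5, 2021: 113.8, 2022: 117.3,
}

def windowed_avg_growth(series, start, end):
    """Arithmetic mean of year-over-year growth rates from start+1 to end.
    (A compound/geometric average would be another reasonable choice.)"""
    rates = [series[y] / series[y - 1] - 1 for y in range(start + 1, end + 1)]
    return sum(rates) / len(rates)

# A smoothed estimate for 2021 based on the wide 2015-2022 window,
# versus the narrower (recession-distorted) 2020-2022 window:
print(f"2015-2022: {windowed_avg_growth(gdp, 2015, 2022):.2%}")
print(f"2020-2022: {windowed_avg_growth(gdp, 2020, 2022):.2%}")
```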

I think this is an important point. In general terms, it seems worth keeping in mind that option value also entails option disvalue (e.g. the option of losing control and giving rise to a worst-case future).

Regarding long reflection in particular, I notice that the quotes above seem to mostly mention it in a positive light, yet its feasibility and desirability can also be separately criticized, as I've tried to do elsewhere:

First, there are reasons to doubt that a condition of long reflection is feasible or even desirable, given that it would seem to require strong limits to voluntary actions that diverge from the ideal of reflection. To think that we can choose to create a condition of long reflection may be an instance of the illusion of control. Human civilization is likely to develop according to its immediate interests, and seems unlikely to ever be steered via a common process of reflection.

Second, even if we were to secure a condition of long reflection, there is no guarantee that humanity would ultimately be able to reach a sufficient level of agreement regarding the right path forward — after all, it is conceivable that a long reflection could go awfully wrong, and that bad values could win out due to poor execution or malevolent agents hijacking the process.

The limited feasibility of a long reflection suggests that there is no substitute for reflecting now. Failing to clarify and act on our values from this point onward carries a serious risk of pursuing a suboptimal path that we may not be able to reverse later. The resources we spend pursuing a long reflection (which seems unlikely to ever occur) are resources not spent on addressing issues that might be more important and more time-sensitive, such as steering away from worst-case outcomes.

Thanks for your question, Péter :)

There's not a specific plan, though there is a vague plan to create an audio version at some point. One challenge is that the book is full of in-text citations, which in some places makes the book difficult to narrate (and it also means that it's not easy to create a listenable version with software). You're welcome to give it a try if you want, though I should note that narration can be more difficult than one might expect (e.g. even professional narrators often make a lot of mistakes that then need to be corrected).

Thanks for your comment, Michael :)

I should reiterate that my note above is rather speculative, and I really haven't thought much about this stuff.

1: Yes, I believe that's what inflation theories generally entail.

2: I agree, it doesn't follow that they're short-lived.

In each pocket universe, couldn't targeting its far future be best (assuming risk-neutral, expected value-maximizing utilitarianism)? And then the same would hold across pocket universes.

I guess it could be; I suppose it depends both on the empirical "details" and one's decision theory.

Regarding options a and b, a third option could be:

c: There is an ensemble of finitely many pocket universes in which new pocket universes emerge in an unbounded manner for eternity, such that there is always a vast predominance of (finitely many) younger pocket universes. (Note that this need not imply that any individual pocket universe is eternal, let alone that any pocket universe can support the existence of value entities for eternity.) In this scenario, for any summation between two points in "global time" across the totality of the multiverse, earlier "pocket-universe moments" will vastly dominate. That might be an argument in favor of extreme neartermism (in that kind of scenario).
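To see why younger pocket universes would dominate any such summation, here is a minimal numeric sketch, assuming (purely for illustration) that new pocket universes appear at an exponentially growing rate with a fixed doubling time:

```python
import math

# Suppose new pocket universes appear at an exponentially growing rate
# r(t) = e^{k t} in global time (an illustrative assumption, not a claim
# about actual inflationary cosmology). At global time T, the number of
# universes with local age <= a is then proportional to the integral of
# e^{k t} from T - a to T, so young universes dominate the census.

k = math.log(2)  # birth rate doubles every unit of global time
T = 50           # arbitrary "current" global time

def fraction_younger_than(a, k, T):
    """Fraction of all pocket universes at global time T with local age <= a."""
    total = math.exp(k * T) - 1                     # born between 0 and T
    young = math.exp(k * T) - math.exp(k * (T - a))  # born between T-a and T
    return young / total

for a in (1, 2, 5, 10):
    print(f"local age <= {a:2d}: {fraction_younger_than(a, k, T):.4f}")
```

With a doubling birth rate, about half of all pocket universes at any global time are less than one doubling time old, and the share of young universes rapidly approaches one as the age cutoff grows, which is the sense in which early "pocket-universe moments" would dominate the totals.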

But, of course, we don't know whether we are in such a scenario — indeed, one could argue that we have strong anthropic evidence suggesting that we are not — and it seems that common-sense heuristics would in any case speak against giving much weight to these kinds of speculative considerations (though admittedly such heuristics also push somewhat against a strong long-term focus).

These are cached arguments that are irrelevant to this particular post and/or properly disclaimed within the post.

I don't agree that these points are properly disclaimed in the post. I think the post gives an imbalanced impression of the discussion and potential biases around these issues, and I think that impression is worth balancing out, even if presenting a balanced impression wasn't the point of the post.

The asks from this post aren't already in the water supply of this community; everyone reading EA Forum has, by contrast, already encountered the recommendation to take animal welfare more seriously.

I don't think this remark relates so closely to my comment. My comment wasn't about a mere "recommendation to take animal welfare more seriously", but rather about biases that may influence us when it comes to evaluations of arguments regarding the moral status of, for example, speciesism and veganism, as well as about the practical feasibility of veganism. It's not my impression that considerations about such potential biases, and the arguments and research that relate to them (this paper being another example of such research), are familiar to everyone reading the EA Forum.

I have the same impression with respect to philosophical arguments against speciesism (which generally have far stronger implications than just a recommendation to take animal welfare more seriously). For example, it's not my impression that everyone reading the EA Forum is familiar with the argument from species overlap. Indeed, it seems to me that this argument and its implications are generally underappreciated even among most animal advocates.
