
Andreas_Mogensen

156 karma · Joined Aug 2014

Posts: 19


Comments: 10

Thanks, Richard! In some sense, I think I agree; as I say in the conclusion, I'm most inclined to think this is one of those cases where we've got a philosophical argument we don't immediately know how to refute for a conclusion that we should nonetheless reject, and so we ought to infer that one of the premises must be false. 

On the other hand, I think I'm most inclined to say that the problem lies in the fact that standard models using imprecise credences and their associated decision rules have or exploit too little structure in how they model our epistemic predicament. I still think it's the case that our evidence fails to rule out probability functions that put sufficient probability mass on potential bad downstream effects to make AMF come out worse in terms of maximizing expected value relative to that kind of probability function; I'm more inclined to identify the problem as being that the maximality rule gives probability functions of that kind too much of a say when it comes to determining permissibility. Other standard decision rules for imprecise credences arguably suffer from similar issues. David Thorstad and I look a bit more in depth at decision rules that draw inspiration from voting theory and rely on some kind of measure on the set of admissible probability functions in our paper 'Tough enough? Robust satisficing as a decision norm for long-term policy analysis', but we weren't especially sold on them.
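
To make the shape of that worry a bit more concrete, here is a toy illustration of the maximality rule (my own sketch with made-up numbers, not anything from the paper): an option counts as permissible so long as no alternative has strictly greater expected utility relative to every admissible probability function, so adding a single admissible function that puts enough weight on bad downstream effects is enough to make forgoing the donation permissible.

```python
# Toy illustration of the maximality rule for imprecise credences
# (hypothetical numbers, purely for exposition).
utilities = {
    "donate":     [100, -50],   # good outcome vs. bad downstream effects
    "do_nothing": [0, 0],
}

def expected_utility(option, p_good):
    u = utilities[option]
    return p_good * u[0] + (1 - p_good) * u[1]

def permissible(option, credence_set):
    """Maximality: permissible iff no rival has strictly greater expected
    utility under every admissible probability function."""
    return not any(
        all(expected_utility(rival, p) > expected_utility(option, p)
            for p in credence_set)
        for rival in utilities if rival != option
    )

narrow = [0.8, 0.9]        # every admissible function favours donating
wide   = [0.2, 0.8, 0.9]   # one admissible function makes donating look worse

for credence_set in (narrow, wide):
    print({opt: permissible(opt, credence_set) for opt in utilities})
# With the narrow set, only donating is permissible; adding the single
# pessimistic function makes doing nothing permissible as well.
```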

Thanks, Michael. Yes, you're right - in the bit you quote from at the start I'm assuming the bursts have some kind of duration rather than being extensionless. I think that probably got mangled in trying to compress everything! 

The zero duration frame possibility is an interesting one - some of Vasco's comments below point in the same direction, I think. Is your thought that the problem is something like this: if you have these isolated points of experience which have zero duration, then since there's no experience there to which we can assign a non-zero objective duration, measuring duration objectively means counting those experiences as nothing, whereas intuitively that's a mistake - there's an experience of pain there, after all. It's got to count for something!

I think that's an interesting objection and one I'll have to think more about. My initial reaction is that perhaps it's bound up with a general weirdness that attaches to things that have zero measure but (in some sense) still aren't nothing? E.g., there's something weird about probability zero events that are nonetheless genuinely possible, and taking account of events like that can lead to some weird interactions with otherwise plausible normative principles: e.g., it suggests a possible conflict between dominance and expected utility maximization (see Hajek, "Unexpected Expectations," pp. 556-7 for discussion).

Thanks! I think that makes sense. I discuss something slightly similar on pp. 21-22 in the paper (following the page numbers at the bottom), albeit just the idea that you should count discrete pain experiences in measuring the extensive magnitude of a pain experience, without any attempt to anchor this in a deeper theory of how experience unfolds in time.

Maybe one thing I'm still a bit unsure of here is the following. We could have a view on which time is fundamentally discrete, rather than continuous. There are physical atoms of time, and how long something goes on for is a matter of how many such atoms it's made up of. But, on its face, those atoms needn't correspond to the 'frames' into which experiences are divided, since that kind of division among experiences may be understood as a high-level psychological fact. Similarly, the basic time atoms needn't correspond to discrete steps in any physical computation, except insofar as we imagine fundamental physics as computational. Thus, experiential frames could be composed of different numbers of fundamental temporal atoms, and varying the hardware clock speed could lead to the same physical computation being spread over more or fewer time atoms. This seems to give us some sense in which experiences and physical computations unfold in time, albeit in discrete time. However, I took it you wanted to rule that out, and so probably I've misunderstood something about how you're thinking about the relationship between the fundamental time atoms and computations/experiential frames, or I've just got totally the wrong picture?

Thanks for the question! There's a lot more about how I arrive at this conception of subjective indistinguishability in the paper itself (section 4.2), but in terms of the analogy with your parody principle, notice that your definition of mathematical indistinguishability just says that there has to be a one-to-one mapping, whereas the proposed account of subjective indistinguishability says that there has to be such a mapping and the mapped pairs must always be pairwise indistinguishable to the subject. If I said that two ranges of numbers are mathematically indistinguishable if there's a one-to-one mapping among them such that the numbers we map to one another are indistinguishable, that doesn't sound too implausible and presumably doesn't generate the counter-example you note? (Though it might turn on what we mean by saying that two numbers are 'indistinguishable'!) If that's right, then I don't think my principle is challenged by the analogy with the parody principle you note. 

Thanks, Vasco! It's possible that we're just reading different things into the idea that "conscious experience unfolds in time"? For example, there's a sense in which that's fully compatible with thinking that experience is discrete as opposed to continuous if by that we mean that the content of consciousness changes discontinuously or that consciousness proceeds in short-lived bursts against the backdrop of surrounding unconsciousness. Is the view you're proposing that our experiences have no location or extension in time? I think all I'm saying here is that that view is false, so there might otherwise be no disagreement. It's also worth noting that I take the falsity of that sort of view to be a presupposition of the argument I'm criticising in the paper, since it assumes that adjusting the clock speed of the simulation hardware results in experiences that fill different amounts of objective time. 

No, it's just the fact that the experience unfolds in time. It seems clear to me that that's important from the perspective of explaining consciousness as an empirical phenomenon. I obviously agree we might have our doubts about whether the way experience unfolds in clock time matters from an ethical perspective. 

Thanks, Michael - Sorry for the delay in replying to this!

What I was trying to argue in 4.3 is that the following is a bad reason to think that the different experiences have the same value in spite of lasting for very different amounts of clock time: a computational theory of consciousness is true and the time that a given computation needs in order to complete when physically instantiated ought to be irrelevant to the character of mind, since there's nothing in a Turing-machine model of computation corresponding to the amount of time the machine spends in a given configuration or requires when transitioning from one configuration to another. 
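
To illustrate the point about the formalism (this is just my own toy example, not anything in the paper): a Turing-machine program is in effect a table mapping (state, symbol) pairs to (new state, written symbol, head move) triples, and nothing in that table represents how long the machine dwells in a configuration or how long a transition takes.

```python
# Minimal Turing-machine-style transition table (hypothetical example).
# Note that nothing in the formalism records how much physical time a step
# takes or how long the machine remains in any configuration.
transitions = {
    ("start", 0): ("start", 1, +1),   # rewrite 0s as 1s, moving right
    ("start", 1): ("halt",  1,  0),   # stop at the first 1
}

def run(tape, state="start", head=0):
    while state != "halt":
        state, tape[head], move = transitions[(state, tape[head])]
        head += move
    return tape

print(run([0, 0, 0, 1]))  # -> [1, 1, 1, 1]
```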

I take it that that style of argument does depend on assuming that mind is Turing-style computation. I can see that you could perhaps have some other kind of theory of mind and say that, according to this theory, mental processing is to be modelled as this kind of state, followed by this kind of state, and we make no reference in our model to the amount of time the system is in each state or the amount of time required between transitions, although what is happening is not to be understood as computation. You might then argue in a somewhat similar way that because you model mental processing in this way and the model omits any time dimension, the time required for the physical instantiation of the modelled process ought to be irrelevant to the value of a given experience.

However, if you try to say something along those lines, then I think a very similar objection arises to the one I outline in the paper. Since what's described appears to be an atemporal model of mental processing, whereas experience in fact unfolds in time, the model has got to be incomplete and needs to be supplemented somehow if it's to properly describe the basis of experience. We would thus be drawing inferences about the phenomena we are trying to model that simply reflect gaps and abstractions in our models of them, i.e., ways in which our models fail to capture the reality of what's actually going on. That seems like a mistake. So I think that what I say in the paper about computationalism can be recast as applied to any similar way of drawing inferences from any essentially atemporal model of what realizes experience. In that sense, I don't think that retreating from the computational theory of mind helps.

That having been said, I want to emphasize that the argument in section 4.3 is not intended to show that it is false to judge that the clock time required for a physical process to complete is irrelevant to the value of the realized experience. It's merely intended to show that a particular argument for making that judgment isn't a very good one. In that sense, I am not giving any kind of positive argument against the claim that the two brains you describe realize experiences with the same hedonic value. In 4.3, I'm just trying to say that a particular argument that one might give for a view like that isn't a good one, and so if you think that the amount of clock time a person is in pain is irrelevant to the disvalue of their experience in this sort of case, you need a different reason for holding that view. In some sense, the rest of the paper might be taken as arguing that other reasons of that kind don't seem to be available. 

1. This one is very in the weeds, but I was very confused about some conflicting results Pinker and Braumoeller get when testing the hypothesis of a break in war incidence after 1945. Pinker (2011: 252) writes: "Taking the frequency of wars between great powers from 1495 to 1945 as a baseline, the chance that there would be a sixty-five year stretch with only a single great power war (the marginal case of the Korean War) is one in a thousand. Even if we take 1815 as our starting point, which biases the test against us by letting the peaceful post-Napoleonic 19th century dominate the base rate, we find that the probability that the postwar era would have at most four wars involving a great power is less than 0.004, and the probability that it would have at most one war between European states (the Soviet invasion of Hungary in 1956) is 0.0008." Braumoeller (2019: 27-8) gets different results by modelling the onset of great power war in each year as an independent draw with probability p = 0.02, based on the rate of great power war in the last five centuries: “the probability of observing seven continuous decades of peace …. is 24.3%.” (28) He also writes: “it would still take about 150 years of uninterrupted peace for us to reject conclusively the claim that the underlying probability of systemic war remains unchanged.” (28) Both Pinker and Braumoeller are relying primarily on Levy ([War in the Modern Great Power System] 1983) to estimate the rate of great power war, so I don’t understand why they get such radically different results. What's going on? (I sketch the relevant binomial arithmetic after question 3 below.)

2. Counts of battle deaths generally do not include civilians killed directly or indirectly as a result of military conflict. Apparently, it is extremely difficult to reliably measure total excess mortality due to war, and as a result battle deaths are used as the standard measure (Pinker 2011: 299-300; Braumoeller 2019: 101). At the same time, authors like Kaldor ([New and Old Wars] 1999) argue that civilian deaths have increased significantly as a share of all war deaths, with civilians now typically the majority of those killed as a result of war, and Roberts ['Lives and statistics'] records estimates that roughly 40% of casualties in Bosnia-Herzegovina, 1991-5, were civilians, and between 75% and 83% in the Second Gulf War. Given that we do not have reliable data for so important a part of the overall picture, are contemporary debates on trends in the severity/intensity/prevalence of battle deaths of the kind between Pinker and Braumoeller actually telling us very much at all about whether wars are getting better or worse as a 'public health problem'?

3. Braumoeller (2019: 179) asserts that “[t]he four decades following the Napoleonic Wars were, by a significant margin, the most peaceful period on record in Europe.” I didn't feel he said very much to explain the grounds of this assertion in the book. On what basis can it be said that the period between the Congress of Vienna and the outbreak of the Crimean War was significantly more peaceful than that between the end of World War II and Perestroika? I don't know the former period well, but just looking at the list of conflicts in Europe during these periods from Wikipedia, this didn't seem to me especially plausible. (Possibly I'm just misunderstanding what he's saying, and the claim is that the decades after the Napoleonic Wars were a lot more peaceful than any before then.)
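
On question 1, here is a quick back-of-the-envelope check (my own sketch, simply treating each year as an independent draw, as Braumoeller's model appears to): a 2% annual onset probability reproduces his 24.3% figure for seven war-free decades, but it makes "at most one great power war in 65 years" far more likely than one in a thousand, so the two results seem to presuppose quite different annual base rates (or different ways of counting wars).

```python
from math import comb

def prob_at_most_k_wars(p_annual, years, k):
    """P(at most k war onsets over `years` years), treating each year as an
    independent Bernoulli trial with onset probability p_annual."""
    return sum(comb(years, i) * p_annual**i * (1 - p_annual)**(years - i)
               for i in range(k + 1))

# Braumoeller's calculation as described above: p = 0.02, seven decades, no onsets.
print(prob_at_most_k_wars(0.02, 70, 0))   # ~0.243, matching the 24.3% figure

# Pinker's test concerns at most one great power war in a 65-year stretch.
# With the same 2% annual rate, that probability is nowhere near 1/1000:
for rate in (0.02, 0.05, 0.10, 0.15):     # hypothetical annual base rates
    print(rate, round(prob_at_most_k_wars(rate, 65, 1), 4))
```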

Great post! Like MichaelA, I'd be really interested in something systematic on the reversal of century-long trends in history.

With respect to the 'outside view' approach, I wondered what you would make of the rejoinder that, over the very long run, autocracy is actually the outlier - provided that we include hunter-gatherers?

On the view I take to be associated with Christopher Boehm's work, ancestral foragers are believed to have exhibited the fierce resistance to political hierarchy characteristic of mobile foragers in the ethnographic record, relying on consensus-seeking as a means of collective decision-making. In some sense, this could be taken to indicate that human beings have lived without autocracy, and with something that could be described as vaguely democratic, throughout virtually all of their history. Boehm writes: "before twelve thousand years ago, humans basically were egalitarian. They lived in what might be called societies of equals, with minimal political centralization and no social classes. Everyone participated in group decisions, and outside the family there were no dominators" (Hierarchy in the Forest, pp. 3-4).

Obviously, you can make the rejoinder that the relevant reference class should be 'states' and so shouldn't include acephalous hunter-gatherer bands, but by the same logic I take it someone could claim that the reference class should be narrowed further to 'industrialised states' when we make our outside view forecast about how long democracy will be popular. The difficulty of fixing the appropriate reference class here seems to me to raise doubts about how much epistemic value can be derived from base rates and seems to require predictions to be based more firmly in the sorts of causal questions that are focal later in your post: understanding why hunter-gatherer bands are egalitarian, agrarian states aren't, and industrialized economies have tended to be. 

David Thorstad and I are currently writing a paper on the tools of Robust Decision Making (RDM) developed by RAND and the recommendation to follow a norm of 'robust satisficing' when framing decisions using RDM. We're hoping to put up a working paper on the GPI website soon (probably within about a month). Like you, our general sense is that the DMDU community is generating a range of interesting ideas, and the fact that these appeal to those at (or nearer) the coalface is a strong reason to take them seriously. Nonetheless, we think more needs to be said on at least two critical issues.

Firstly, measures of robustness may seem to smuggle probabilities in via the backdoor. In the locus classicus for discussions of robustness as a decision criterion in operations research, Rosenhead, Elton, and Gupta note that using their robustness criterion is equivalent to maximizing expected utility with a uniform probability distribution given certain assumptions about the agent's utility function. Similarly, the norm of robust satisficing invoked by RDM is often identified as a descendant of Starr's domain criterion, which relies on a uniform (second-order) probability distribution. To our knowledge, the norm of robust satisficing appealed to in RDM has not been stated with the formal precision adopted by Starr or Rosenhead, Elton, and Gupta, but given its pedigree, it's natural to worry that it ultimately relies implicitly on a uniform probability measure of some kind. But this seems at odds with the skepticism toward (unique) probability assignments that we find in the DMDU literature, which you note. 

Secondly, we wonder why a concern for robustness in the face of deep uncertainty should lead to the adoption of a satisficing criterion of choice. In expositions of RDM, robustness and optimizing are frequently contrasted, and a desire for robustness is linked to satisficing choice. But it's not clear what the connection is here. Why satisfice? Why not seek out robustly optimal strategies? That's basically what Starr's domain criterion does - it looks for the act that maximizes expected utility relative to the largest share of the probability assignments consistent with our evidence.
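
To give a sense of what we have in mind by 'robustly optimal', here is a toy sketch of the domain criterion as just described (my own illustration with made-up utilities and a crude finite stand-in for the set of admissible probability assignments, not anything from Starr or from our paper): each act is scored by the share of admissible assignments under which it is expected-utility maximal, and the act with the largest share is chosen.

```python
# Toy sketch of Starr's domain criterion as described above (illustrative only).
# Hypothetical utilities: utilities[act][state], for a two-state problem.
utilities = {
    "A": [10, 0],
    "B": [6, 5],
    "C": [4, 6],
}

# Crude finite stand-in for the set of probability assignments consistent
# with our evidence: the probability of state 0 ranges over 0.10-0.90.
admissible = [i / 100 for i in range(10, 91)]

def expected_utility(act, p0):
    u = utilities[act]
    return p0 * u[0] + (1 - p0) * u[1]

def domain_score(act):
    """Share of admissible assignments under which `act` is EU-maximal."""
    wins = sum(
        1 for p0 in admissible
        if all(expected_utility(act, p0) >= expected_utility(other, p0)
               for other in utilities)
    )
    return wins / len(admissible)

scores = {act: round(domain_score(act), 2) for act in utilities}
print(scores, "->", max(scores, key=scores.get))
```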