“Just take the expected value” – a possible reply to concerns about cluelessness - Effective Altruism Forum
http://effective-altruism.com/
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/
Thu, 21 Dec 2017 19:37:07 +0000
Submitted by <a href="http://effective-altruism.com/user/Milan_Griffes">Milan_Griffes</a>
•
5 votes
•
<a href="http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/#comments">21 comments</a>
<div><p dir="ltr">This is the second in a series of posts exploring <a href="https://flightfromperfection.com/cluelessness-what-to-do.html">consequentialist cluelessness</a> and its implications for effective altruism:</p>
<ul>
<li>The <a href="/ea/1hh/what_consequences/">first post</a> describes cluelessness & its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.<br><br></li>
<li><strong>This post</strong> considers a potential reply to concerns about cluelessness – maybe when we are uncertain about a decision, we should just choose the option with the highest expected value.<br><br></li>
<li>Following posts discuss <a href="/ea/1j4/how_tractable_is_cluelessness/">how tractable cluelessness is</a>, and what <a href="/ea/1kv/doing_good_while_clueless/">being clueless implies about doing good</a>.</li>
</ul>
<p style="text-align: left;"><br>Consider reading the <a href="/ea/1hh/what_consequences/">first post</a> first.</p>
<p style="text-align: center;"><br>---</p>
<p style="text-align: center;"> </p>
<p dir="ltr">A rationalist’s reply to concerns about cluelessness could be as follows:</p>
<ul>
<li>Cluelessness is just a special case of empirical uncertainty.<sup>[1]</sup><br><br></li>
<li>We have a framework for dealing with empirical uncertainty – <a href="https://en.wikipedia.org/wiki/Expected_value">expected value</a>.<br><br></li>
<li>So for decisions where we are uncertain, we can determine the best course of action by multiplying our best-guess probability of each outcome by our best-guess utility for it, summing over each option’s outcomes, then choosing the option with the highest expected value.</li>
</ul>
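As a concrete sketch, the expected-value recipe in the bullets above might look like the following. Every option name, probability, and utility here is invented for illustration; none comes from a real cost-effectiveness estimate:

```python
# Hedged sketch of "just take the expected value": all numbers below
# are made-up best guesses, not real estimates.

def expected_value(outcomes):
    """Sum of probability * utility over an option's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical options, each a list of (probability, utility) pairs:
options = {
    "option_a": [(0.9, 10), (0.1, -5)],
    "option_b": [(0.5, 30), (0.5, -10)],
    "option_c": [(1.0, 8)],
}

# Pick the option whose expected value is largest:
best = max(options, key=lambda name: expected_value(options[name]))
# With these made-up numbers, "option_b" comes out on top.
```

The rest of the post argues that the hard part is not this arithmetic, but deciding which numbers to feed into it and how much to trust them.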
<p><br>While this approach makes sense in the abstract, it doesn’t work well in real-world cases. The difficulty is that it’s unclear what “best-guess” probabilities & utilities we should assign, and unclear to what extent we should believe our best guesses.</p>
<p>Consider this passage from <a href="https://flightfromperfection.com/files/post_attachments/cluelessness_greaves_2016.pdf">Greaves 2016</a> (“credence function” can be read roughly as “probability”):</p>
<blockquote>
<p>The alternative line I will explore here begins from the suggestion that in the situations we are considering, instead of having some single and completely precise (real-valued) credence function, agents are rationally required to have imprecise credences: that is, to be in a credal state that is represented by a many-membered set of probability functions (call this set the agent’s ‘representor’). Intuitively, the idea here is that when the evidence fails conclusively to recommend any particular credence function above certain others, agents are rationally required to remain neutral between the credence functions in question: to include all such equally-recommended credence functions in their representor.</p>
</blockquote>
<p dir="ltr"><br>To translate a little, Greaves is saying that real-world agents don’t assign a single precise probability to each outcome; instead they entertain multiple plausible probabilities for each outcome (taken together, these candidate probability functions make up the agent’s “representor”). Because an agent holds multiple probabilities for each outcome, and has no principled way to arbitrate among them, it cannot use a straightforward expected value calculation to determine the best option.</p>
<p dir="ltr">Intuitively, this makes sense. Probabilities can only be formally assigned when the <a href="https://en.wikipedia.org/wiki/Sample_space">sample space</a> is fully mapped out, and for most real-world decisions we can’t map the full sample space (in part because the world is very complicated, and in part because we can’t predict the long-run consequences of an action).<sup>[2]</sup> We can make subjective probability estimates, but if a probability estimate does not flow out of a clearly articulated model of the world, its believability is suspect.<sup>[3]</sup></p>
<p dir="ltr">Furthermore, because multiple probability estimates can seem sensible, agents can hold multiple estimates simultaneously (i.e. their representor). For decisions where the full sample space isn’t mapped out (i.e. most real-world decisions), the method by which human decision-makers convert their multi-value representor into a single-value, “best-guess” estimate is opaque.</p>
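One way to see why a representor blocks a straightforward calculation: if each credence function in the representor yields a different expected value, the agent ends up with an interval of expected values rather than a single number. A hedged sketch, with invented payoffs and credences:

```python
# Hedged sketch of an imprecise-credence ("representor") agent.
# The utilities and the three equally-recommended credences are
# invented for illustration.

def ev(p_success, u_success=100, u_failure=-50):
    """Expected value of acting, given one precise credence in success."""
    return p_success * u_success + (1 - p_success) * u_failure

representor = [0.2, 0.4, 0.6]  # equally defensible credences in success

evs = [ev(p) for p in representor]
ev_range = (min(evs), max(evs))
# The interval straddles zero: some credence functions say act,
# others say don't, and expected value alone can't break the tie.
```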
<p dir="ltr">The next time you encounter someone making a subjective probability estimate, ask “how did you arrive at that number?” The answer will frequently be along the lines of “it seems about right” or “I would be surprised if it were higher.” Answers like this indicate that the estimator doesn’t have visibility into the process by which they’re arriving at their estimate.</p>
<p dir="ltr">So we have believability problems on two levels:</p>
<ol>
<li dir="ltr">
<p dir="ltr">Whenever we make a probability estimate that doesn’t flow from a clear world-model, the believability of that estimate is questionable.</p>
</li>
<li dir="ltr">
<p dir="ltr">And if we attempt to reconcile multiple probability estimates into a single best-guess, the believability of that best-guess is questionable because our method of reconciling multiple estimates into a single value is opaque.<sup>[4]</sup></p>
</li>
</ol>
<p><br>By now it should be clear that simply following the expected value is not a sufficient response to concerns of cluelessness. However, it’s possible that cluelessness can be addressed by other routes – perhaps by diligent investigation, we can grow clueful enough to make believable decisions about how to do good. The <a href="/ea/1j4/how_tractable_is_cluelessness/">next post</a> will consider this further.<br><br></p>
<p dir="ltr"><em>Thanks to Jesse Clifton and an anonymous collaborator for thoughtful feedback on drafts of this post. Views expressed above are my own. Cross-posted to <a href="https://flightfromperfection.com/just-take-the-expected-value.html">my personal blog</a>.</em></p>
<p> </p>
<p style="text-align: center;" dir="ltr">---</p>
<p> </p>
<h2 style="line-height: 1.38; margin-top: 0pt; margin-bottom: 0pt;" dir="ltr">Footnotes</h2>
<p> </p>
<p>[1]: This is separate from normative uncertainty – uncertainty about what criterion of moral betterness to use when comparing options. Empirical uncertainty is uncertainty about the overall impact of an action, given a criterion of betterness. In general, cluelessness is a subset of empirical uncertainty. </p>
<p> </p>
<p>[2]: Leonard Savage, who worked out much of the foundations of Bayesian statistics, considered Bayesian decision theory to only apply in "small world" settings. See p. 16 & p. 82 of the second edition of his <a href="https://books.google.com/books/about/The_Foundations_of_Statistics.html?id=zSv6dBWneMEC">Foundations of Statistics</a> for further discussion of this point.</p>
<p><br>[3]: Thanks to Jesse Clifton for making this point.</p>
<p> </p>
<p>[4]: This problem persists even if each input estimate flows from a clear world-model.</p></div>
ThomasSittler on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/cz3
2017-12-30T14:13:48.945790+00:00
<div class="md"><blockquote>
<p>By now it should be clear that simply following the expected value is not a sufficient response to concerns of cluelessness.</p>
</blockquote>
<p>I don't think this follows. Even if we have no information, there are strong theoretical reasons to have sharp credences (ones that are represented by a single number).</p>
<p>There is an existing literature on this. See</p>
<p>R. White, “Evidential Symmetry and Mushy Credence”</p>
<p>S. Bradley, “Imprecise Probabilities.” <a href="https://plato.stanford.edu/entries/imprecise-probabilities/" rel="nofollow">https://plato.stanford.edu/entries/imprecise-probabilities/</a></p>
<p>A. Elga, “Subjective Probabilities Should Be Sharp”</p>
<p>Elga shows that agents who don't have perfectly sharp probabilities are vulnerable to a variant of Dutch Books.</p></div>
Owen_Cotton-Barratt on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/cxd
2017-12-21T19:52:45.341087+00:00
<div class="md"><blockquote>
<p>By now it should be clear that simply following the expected value is not a sufficient response to concerns of cluelessness.</p>
</blockquote>
<p>I was pretty surprised by this sentence. Maybe you could say more precisely what you mean?</p>
<p>I take the core concern of cluelessness to be that perhaps we have no information about what options are best. Expected value gives a theoretical out to that (with some unresolved issues around infinite expectations for actors with unbounded utility functions). Approximations to expected value that humans can implement are as you point out kind of messy and opaque, but that's a feature of human reasoning in general, and doesn't seem particularly tied to expected value. Is that what you're pointing at?</p></div>
JesseClifton on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/cxz
2017-12-22T17:20:48.164252+00:00
<div class="md"><p>I can’t speak for the author, but I don’t think the problem is the difficulty of “approximating” expected value. Indeed, in the context of subjective expected utility theory there is no “true” expected value that we are trying to approximate. There is just whatever falls out of your subjective probabilities and utilities.</p>
<p>I think the worry comes more from wanting subjective probabilities to <em>come</em> from somewhere — for instance, models of the world that have a track-record of predictive success. If your subjective probabilities are not grounded in such a model, as is arguably often the case with EAs trying to optimize complex systems or the long-run future, then it is reasonable to ask why they should carry much epistemic / decision-theoretic weight.</p>
<p>(People who hold this view might not find the usual Dutch book or representation theorem arguments compelling.)</p></div>
RomeoStevens on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/cy4
2017-12-24T07:48:19.143976+00:00
<div class="md"><p>I'll second this. In double cruxing EV calcs with others it is clear that they are often quite parameter sensitive and that awareness of such parameter sensitivity is rare/does not come for free. Just the opposite: trying to do sensitivity analysis on what are already fuzzy qualitative->quantitative heuristics is quite stressful/frustrating. Results from sufficiently complex EV calcs usually fall prey to ontology failures, i.e., key assumptions turned out wrong 25% of the time in studies on analyst performance in the intelligence community, and most scenarios have more than 4 key assumptions.</p></div>
kbog on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/cyj
2017-12-26T20:55:56.350450+00:00
<div class="md"><blockquote>
<p>I think the worry comes more from wanting subjective probabilities to come from somewhere — for instance, models of the world that have a track-record of predictive success. If your subjective probabilities are not grounded in such a model, as is arguably often the case with EAs trying to optimize complex systems or the long-run future, then it is reasonable to ask why they should carry much epistemic / decision-theoretic weight.</p>
</blockquote>
<p>But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer's curse. It doesn't imply that taking the expected value is not the right solution to the idea of cluelessness.</p></div>
JesseClifton on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/cyn
2017-12-27T07:46:00.190969+00:00
<div class="md"><blockquote>
<p>But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer's curse.</p>
</blockquote>
<p>I'm not sure what you mean. There is nothing being estimated and no concept of robustness when it comes to the notion of subjective probability in question.</p></div>
kbog on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/cyq
2017-12-27T17:20:30.098094+00:00
<div class="md"><p>The expected value of your actions is being estimated. Those estimates are based on subjective probabilities and can be well or poorly supported by evidence.</p></div>
JesseClifton on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/cyr
2017-12-27T20:38:33.155475+00:00
<div class="md"><p>For a Bayesian, there is no sense in which subjective probabilities are well or poorly supported by the evidence, unless you just mean that they result from calculating the Bayesian update correctly or incorrectly.</p>
<p>Likewise there is no true expected utility to estimate. It is a measure of an epistemic state, not a feature of the external world.</p>
<p>I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory. As does what you have said about robustness and being well or poorly supported by evidence.</p></div>
kbog on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/d1j
2018-01-07T07:38:10.939753+00:00
<div class="md"><blockquote>
<p>For a Bayesian, there is no sense in which subjective probabilities are well or poorly supported by the evidence</p>
</blockquote>
<p>Yes, whether you are Bayesian or not, it means that the estimate is robust to unknown information.</p>
<blockquote>
<p>I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory.</p>
</blockquote>
<p>No, subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models. I don't see why you would think otherwise.</p>
<blockquote>
<p>As does what you have said about robustness and being well or poorly supported by evidence.</p>
</blockquote>
<p>No, everything that has been written on the optimizer's curse is perfectly compatible with subjective expected utility theory.</p></div>
JesseClifton on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/d1q
2018-01-07T22:22:15.538915+00:00
<div class="md"><blockquote>
<p>whether you are Bayesian or not, it means that the estimate is robust to unknown information</p>
</blockquote>
<p>I’m having difficulty understanding what it means for a subjective probability to be robust to unknown information. Could you clarify?</p>
<blockquote>
<p>subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models.</p>
</blockquote>
<p>Could you give an example where two Bayesians have the same subjective probabilities, but SEUT tells us that one subjective probability is better than the other due to better robustness / resulting from a better model / etc.?</p></div>
kbog on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/d1u
2018-01-08T17:57:15.361676+00:00
<div class="md"><p>It means that your credence will change little (or a lot) depending on information which you don't have.</p>
<p>For instance, if I know nothing about Pepsi then I may have a 50% credence that their stock is going to beat the market next month. However, if I talk to a company insider who tells me why their company is better than the market thinks, I may update to 55% credence.</p>
<p>On the other hand, suppose I don't talk to that guy, but I did spend the last week talking to lots of people in the company and analyzing a lot of hidden information about them which is not available to the market. And I have found that there is no overall reason to expect them to beat the market or not - the info is good just as much as it is bad. So I again have a 50% credence. However, if I talk to that one guy who tells me why the company is great, I won't update to 55% credence, I'll update to 51% or not at all.</p>
<p>Both people here are being perfect Bayesians. Before talking to the one guy, they both have 50% credence. But the latter person has more reason to be surprised if Pepsi diverges from the mean expectation.</p></div>
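The Pepsi example above can be formalized in one possible way (an assumption of this sketch, not something the comment specifies) by giving both agents Beta priors with the same mean but different strengths; a single piece of favorable evidence then moves them by very different amounts:

```python
# Hedged sketch: a Beta-prior model of the Pepsi example. Modeling
# the two agents' credences as Beta(1, 1) vs Beta(50, 50) is an
# assumption of this sketch, not part of the original comment.

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

weak_prior = (1, 1)      # knows nothing about Pepsi; mean credence 0.5
strong_prior = (50, 50)  # a week of research; mean credence also 0.5

# One piece of favorable inside information, treated as one "success":
weak_post = beta_mean(weak_prior[0] + 1, weak_prior[1])        # jumps to ~0.67
strong_post = beta_mean(strong_prior[0] + 1, strong_prior[1])  # barely moves, ~0.505
```

Both agents start at the same credence, but the better-informed agent's credence is far more robust to the new information, matching the comment's 55% vs 51% contrast.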
JesseClifton on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/d1v
2018-01-08T22:21:40.799658+00:00
<div class="md"><p>It sounds to me like this scenario is about a difference in the variances of the respective subjective probability distributions over future stock values. The variance of a distribution of credences does not measure how “well or poorly supported by evidence” that distribution is.</p>
<p>My worry about statements of the form “My credences over the total future utility given intervention A are characterized by distribution P” does not have to do with the variance of the distribution P. It has to do with the fact that I do not know whether I should trust the procedures that generated P to track reality.</p></div>
Milan_Griffes on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/cyt
2017-12-27T22:36:00.535370+00:00
<div class="md"><p>I agree with Jesse's reply.</p></div>
MikeJohnson on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/d55
2018-01-20T01:02:44.307698+00:00
<div class="md"><p>I’m late to the party, but I’ve really enjoyed this series of posts. Thanks for writing.</p></div>
kbog on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/cyi
2017-12-26T20:49:20.625381+00:00
<div class="md"><blockquote>
<p>We can make subjective probability estimates, but if a probability estimate does not flow out of a clearly articulated model of the world, its believability is suspect</p>
</blockquote>
<p>I don't see how this implies that the expected value isn't the right answer. Also, what exactly do you mean by "believability"? It's a subjective probability estimate.</p>
<blockquote>
<p>Greaves is saying that real-world agents don’t assign precise probabilities to outcomes, they instead consider multiple possible probabilities for each outcome (taken together, these probabilities sum to the agent’s “representor”). Because an agent holds multiple probabilities for each outcome, and has no way by which to arbitrate between its multiple probabilities, it cannot use a straightforward expected value calculation to determine the best outcome.</p>
</blockquote>
<p>I don't hold multiple probabilities in this way. Sure some agents do, but presumably those agents aren't doing things correctly. Maybe the right answer here is "don't be confused about the nature of probability."</p>
<blockquote>
<p>The next time you encounter someone making a subjective probability estimate, ask “how did you arrive at that number?” The answer will frequently be along the lines of “it seems about right” or “I would be surprised if it were higher.” Answers like this indicate that the estimator doesn’t have visibility into the process by which they’re arriving at their estimate</p>
</blockquote>
<p>There are lots of claims we make on the basis of intuition. Do you believe that all such claims are poor, or is probability some kind of special case? It would help to be more clear about your point - what kind of visibility do we need and why is it important?</p>
<blockquote>
<p>Whenever we make a probability estimate that doesn’t flow from a clear world-model, the believability of that estimate is questionable</p>
</blockquote>
<p>This statement is kind of nonsensical with a subjective Bayesian model of probability; the estimate <em>is</em> your belief. If you don't have that model, then sure a probability estimate could be described as likely to be wrong, but it's still not clear why that would prevent us from saying that a probability estimate is the best we can do.</p>
<blockquote>
<p>And if we attempt to reconcile multiple probability estimates into a single best-guess, the believability of that best-guess is questionable because our method of reconciling multiple estimates into a single value is opaque.</p>
</blockquote>
<p>The way of reconciling multiple estimates is to treat them as evidence and update via Bayes' Theorem, or to weight them by their probability of being correct and average them using standard expected value calculation. If you simply take issue with the fact that real-world agents don't do this formally, I don't see what the argument is. We already have a philosophical answer, so naturally the right thing to do is for real-world agents to approximate it as well as they can.</p></div>
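The second reconciliation method mentioned above (weight each estimate by its probability of being correct and average) could be sketched as follows; the candidate estimates and the weights are invented for illustration:

```python
# Hedged sketch of weighting candidate probability estimates by one's
# credence that each is correct. All numbers are invented.

estimates = [0.10, 0.30, 0.50]  # candidate probabilities of some event
weights = [0.5, 0.3, 0.2]       # credence that each estimate is the right one

best_guess = sum(w * e for w, e in zip(weights, estimates))  # ≈ 0.24
```

The disagreement in the thread is precisely about where weights like these come from when no clear world-model supplies them.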
Milan_Griffes on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/cys
2017-12-27T22:35:03.593109+00:00
<div class="md"><blockquote>
<p>The way of reconciling multiple estimates is to treat them as evidence and update via Bayes' Theorem, or to weight them by their probability of being correct and average them using standard expected value calculation. If you simply take issue with the fact that real-world agents don't do this formally, I don't see what the argument is. We already have a philosophical answer, so naturally the right thing to do is for real-world agents to approximate it as well as they can.</p>
</blockquote>
<p>"Approximate it as well as they can" implies a standard beyond the subjective Bayesian framework by which subjective estimates are compared. Outside of the subjective Bayesian framework seems to be where the difficulty lies.</p>
<p>I agree with what Jesse stated above: "I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory. As does what you have said about robustness and being well or poorly supported by evidence."</p>
<p>A standard like "how accurately does this estimate predict the future state of the world?" is what we seem to use when comparing the quality (believability) of subjective estimates.</p>
<p>I think the difficulty is that it is very hard to assess the accuracy of subjective estimates about complicated real-world events, where many of the causal inputs of the event are unknown & the impacts of the event occur over a long time horizon.</p></div>
kbog on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/d1k
2018-01-07T07:42:37.131372+00:00
<div class="md"><blockquote>
<p>"Approximate it as well as they can" implies a standard beyond the subjective Bayesian framework by which subjective estimates are compared.</p>
</blockquote>
<p>How does it imply that? A Bayesian agent makes updates to their beliefs to approximate the real world as well as it can. That's just regular Bayesian updating, whether you are subjective or not.</p>
<blockquote>
<p>I think the difficulty is that it is very hard to assess the accuracy of subjective estimates about complicated real-world events, where many of the causal inputs of the event are unknown & the impacts of the event occur over a long time horizon.</p>
</blockquote>
<p>I don't see what this has to do with subjective estimates. If we talk about estimates in objective and/or frequentist terms, it's equally difficult to observe the long term unfolding of the scenario. Switching away from subjective estimates won't make you better at determining which estimates are correct or not.</p></div>
Milan_Griffes on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/d2a
2018-01-11T00:51:06.329320+00:00
<div class="md"><blockquote>
<p>How does it imply that?</p>
</blockquote>
<p>I don't have a fully articulated view here, but I think the problem lies with how the agent assesses how its approximations are doing (i.e. the procedure an agent uses to assess when an update is modeling the world more accurately or less).</p></div>
Milan_Griffes on “Just take the expected value” – a possible reply to concerns about cluelessness
http://effective-altruism.com/ea/1ix/just_take_the_expected_value_a_possible_reply_to/d28
2018-01-11T00:48:03.046343+00:00
<div class="md"><blockquote>
<p>If we talk about estimates in objective and/or frequentist terms, it's equally difficult to observe the long term unfolding of the scenario.</p>
</blockquote>
<p>Agreed. I think the difficulty applies to both types of estimates (sorry for being imprecise above).</p></div>