Comment author: JesseClifton 08 January 2018 10:21:40PM, 0 points

It sounds to me like this scenario is about a difference in the variances of the respective subjective probability distributions over future stock values. The variance of a distribution of credences does not measure how “well or poorly supported by evidence” that distribution is.

My worry about statements of the form “My credences over the total future utility given intervention A are characterized by distribution P” does not have to do with the variance of the distribution P. It has to do with the fact that I do not know whether I should trust the procedures that generated P to track reality.

Comment author: kbog 09 January 2018 08:17:20PM, 0 points

> It sounds to me like this scenario is about a difference in the variances of the respective subjective probability distributions over future stock values. The variance of a distribution of credences does not measure how “well or poorly supported by evidence” that distribution is.

Well, in this case at least, it is apparent that the differences are caused by how well or poorly supported people's beliefs are. It doesn't say anything about variance in general.

> My worry about statements of the form “My credences over the total future utility given intervention A are characterized by distribution P” does not have to do with the variance of the distribution P. It has to do with the fact that I do not know whether I should trust the procedures that generated P to track reality.

Distribution P is your credence. So you are saying "I am worried that my credences don't have to do with my credence." That doesn't make sense. And sure, we're uncertain about whether our beliefs are accurate, but I don't see what the problem with that is.

Comment author: JesseClifton 07 January 2018 10:22:15PM, 0 points

> whether you are Bayesian or not, it means that the estimate is robust to unknown information

I’m having difficulty understanding what it means for a subjective probability to be robust to unknown information. Could you clarify?

> subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models.

Could you give an example where two Bayesians have the same subjective probabilities, but SEUT tells us that one subjective probability is better than the other due to better robustness / resulting from a better model / etc.?

Comment author: kbog 08 January 2018 05:57:15PM, 0 points

It means that your credence would change little (or a lot) if you learned information that you don't currently have.

For instance, if I know nothing about Pepsi then I may have a 50% credence that their stock is going to beat the market next month. However, if I talk to a company insider who tells me why their company is better than the market thinks, I may update to 55% credence.

On the other hand, suppose I don't talk to that insider, but I did spend the last week talking to lots of people in the company and analyzing a lot of hidden information about it which is not available to the market. And I have found that there is no overall reason to expect it to beat the market or fall short - the good information balances out the bad. So I again have a 50% credence. However, if I then talk to that one insider who tells me why the company is great, I won't update to 55% credence; I'll update to 51%, or not at all.

Both people here are being perfect Bayesians. Before talking to the insider, they both have a 50% credence. But the latter person has more reason to be surprised if Pepsi diverges from the market's expectation.
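(A minimal sketch of this, assuming for illustration that each person's credence is a Beta distribution whose pseudo-counts stand in for how much evidence it rests on - the numbers are made up:)

```python
# Hypothetical illustration: both credences have mean 0.5, but they rest on
# different amounts of evidence, represented here by Beta pseudo-counts.

def posterior_mean(alpha, beta, favorable=1):
    """Mean credence after observing one additional piece of favorable evidence."""
    return (alpha + favorable) / (alpha + beta + favorable)

# Person 1 knows nothing about Pepsi: weak Beta(1, 1) prior, mean 0.5.
# Person 2 has a week of balanced research: strong Beta(50, 50) prior, mean 0.5.
print(posterior_mean(1, 1))    # ~0.67 - the uninformed credence moves a lot
print(posterior_mean(50, 50))  # ~0.50 - the well-researched credence barely moves
```

Both start at 50% and see the same new evidence, yet they update by very different amounts - which is the robustness difference described above.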

Comment author: kbog 07 January 2018 07:45:28AM, 0 points

This seems like a rather mundane point, and I would be surprised if much of the welfare economics literature overlooked it in relevant cases.

Comment author: Milan_Griffes 27 December 2017 10:35:03PM, 0 points

> The way of reconciling multiple estimates is to treat them as evidence and update via Bayes' Theorem, or to weight them by their probability of being correct and average them using standard expected value calculation. If you simply take issue with the fact that real-world agents don't do this formally, I don't see what the argument is. We already have a philosophical answer, so naturally the right thing to do is for real-world agents to approximate it as well as they can.

"Approximate it as well as they can" implies a standard beyond the subjective Bayesian framework by which subjective estimates are compared. Outside of the subjective Bayesian framework seems to be where the difficulty lies.

I agree with what Jesse stated above: "I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory. As does what you have said about robustness and being well or poorly supported by evidence."

A standard like "how accurately does this estimate predict the future state of the world?" is what we seem to use when comparing the quality (believability) of subjective estimates.

I think the difficulty is that it is very hard to assess the accuracy of subjective estimates about complicated real-world events, where many of the causal inputs of the event are unknown & the impacts of the event occur over a long time horizon.
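(One common way to operationalize that standard is a proper scoring rule. A minimal sketch using the Brier score, with made-up forecasts - this is an illustration, not something from the original discussion:)

```python
# Made-up forecasts: the probability assigned to each event, and whether it occurred.
forecasts = [0.9, 0.7, 0.2, 0.5]
outcomes = [1, 1, 0, 1]  # 1 = the event happened, 0 = it didn't

# Brier score: mean squared error of the probabilities; lower is better.
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(brier)  # ~0.0975
```

The catch, as noted above, is that for long-horizon, one-off events the outcomes needed to compute such a score may not arrive for decades, if ever.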

Comment author: kbog 07 January 2018 07:42:37AM, 0 points

"Approximate it as well as they can" implies a standard beyond the subjective Bayesian framework by which subjective estimates are compared.

How does it imply that? A Bayesian agent updates their beliefs to approximate the real world as well as they can. That's just regular Bayesian updating, whether you are a subjectivist or not.

> I think the difficulty is that it is very hard to assess the accuracy of subjective estimates about complicated real-world events, where many of the causal inputs of the event are unknown & the impacts of the event occur over a long time horizon.

I don't see what this has to do with subjective estimates. If we talk about estimates in objective and/or frequentist terms, it's equally difficult to observe the long-term unfolding of the scenario. Switching away from subjective estimates won't make you any better at determining which estimates are correct.

Comment author: JesseClifton 27 December 2017 08:38:33PM, 3 points

For a Bayesian, there is no sense in which subjective probabilities are well or poorly supported by the evidence, unless you just mean that they result from calculating the Bayesian update correctly or incorrectly.

Likewise there is no true expected utility to estimate. It is a measure of an epistemic state, not a feature of the external world.

I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory. As does what you have said about robustness and being well or poorly supported by evidence.

Comment author: kbog 07 January 2018 07:38:10AM, 0 points

> For a Bayesian, there is no sense in which subjective probabilities are well or poorly supported by the evidence

Yes, there is: whether you are Bayesian or not, it means that the estimate is robust to unknown information.

> I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory.

No, subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models. I don't see why you would think otherwise.

> As does what you have said about robustness and being well or poorly supported by evidence.

No, everything that has been written on the optimizer's curse is perfectly compatible with subjective expected utility theory.

Comment author: JesseClifton 27 December 2017 07:46:00AM, 0 points

> But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer's curse.

I'm not sure what you mean. There is nothing being estimated and no concept of robustness when it comes to the notion of subjective probability in question.

Comment author: kbog 27 December 2017 05:20:30PM, 0 points

The expected value of your actions is being estimated. Those estimates are based on subjective probabilities and can be well or poorly supported by evidence.

Comment author: JesseClifton 22 December 2017 05:20:48PM, 3 points

I can’t speak for the author, but I don’t think the problem is the difficulty of “approximating” expected value. Indeed, in the context of subjective expected utility theory there is no “true” expected value that we are trying to approximate. There is just whatever falls out of your subjective probabilities and utilities.

I think the worry comes more from wanting subjective probabilities to come from somewhere — for instance, models of the world that have a track-record of predictive success. If your subjective probabilities are not grounded in such a model, as is arguably often the case with EAs trying to optimize complex systems or the long-run future, then it is reasonable to ask why they should carry much epistemic / decision-theoretic weight.

(People who hold this view might not find the usual Dutch book or representation theorem arguments compelling.)

Comment author: kbog 26 December 2017 08:55:56PM, 0 points

> I think the worry comes more from wanting subjective probabilities to come from somewhere — for instance, models of the world that have a track-record of predictive success. If your subjective probabilities are not grounded in such a model, as is arguably often the case with EAs trying to optimize complex systems or the long-run future, then it is reasonable to ask why they should carry much epistemic / decision-theoretic weight.

But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer's curse. It doesn't imply that taking the expected value is the wrong response to cluelessness.
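(For concreteness, a quick Monte Carlo sketch of the optimizer's curse with made-up numbers: every option has the same true value and every estimate is unbiased, yet the option selected for having the highest estimate is systematically overrated:)

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_options = 10_000, 20

# All 20 options have true value 0; each estimate is unbiased but noisy.
estimates = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_options))

# Any single estimate averages ~0, but the *selected* (maximal) estimate
# averages roughly +1.87: the optimizer's curse.
print(estimates.mean())              # ~0.00
print(estimates.max(axis=1).mean())  # ~1.87
```

The less robust the estimates (here, the larger the noise scale), the larger the inflation of the winning estimate.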

Comment author: kbog 26 December 2017 08:49:20PM, 1 point

> We can make subjective probability estimates, but if a probability estimate does not flow out of a clearly articulated model of the world, its believability is suspect

I don't see how this implies that the expected value isn't the right answer. Also, what exactly do you mean by "believability"? It's a subjective probability estimate.

> Greaves is saying that real-world agents don’t assign precise probabilities to outcomes, they instead consider multiple possible probabilities for each outcome (taken together, these probabilities sum to the agent’s “representor”). Because an agent holds multiple probabilities for each outcome, and has no way by which to arbitrate between its multiple probabilities, it cannot use a straightforward expected value calculation to determine the best outcome.

I don't hold multiple probabilities in this way. Sure, some agents do, but presumably those agents aren't doing things correctly. Maybe the right answer here is "don't be confused about the nature of probability."
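(For concreteness, a minimal sketch of the representor idea under discussion, with made-up numbers, showing why it blocks a straightforward expected value calculation:)

```python
# Made-up numbers: a representor holds several candidate probabilities for
# "outcome X occurs" instead of one precise credence.
representor = [0.2, 0.4, 0.6]
utility_if_x, utility_if_not_x = 10.0, -5.0

# Each candidate probability implies a different expected utility, so the
# agent gets an interval of expected values rather than a single number.
evs = [p * utility_if_x + (1 - p) * utility_if_not_x for p in representor]
print(min(evs), max(evs))  # -2.0 4.0 - no unique expected value to maximize
```

A precise Bayesian, by contrast, collapses the representor to a single credence and gets a single expected value back.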

> The next time you encounter someone making a subjective probability estimate, ask “how did you arrive at that number?” The answer will frequently be along the lines of “it seems about right” or “I would be surprised if it were higher.” Answers like this indicate that the estimator doesn’t have visibility into the process by which they’re arriving at their estimate

There are lots of claims we make on the basis of intuition. Do you believe that all such claims are poor, or is probability some kind of special case? It would help to be clearer about your point - what kind of visibility do we need, and why is it important?

> Whenever we make a probability estimate that doesn’t flow from a clear world-model, the believability of that estimate is questionable

This statement is kind of nonsensical under a subjective Bayesian model of probability; the estimate is your belief. If you don't have that model, then, sure, a probability estimate could be described as likely to be wrong, but it's still not clear why that would prevent us from saying that a probability estimate is the best we can do.

> And if we attempt to reconcile multiple probability estimates into a single best-guess, the believability of that best-guess is questionable because our method of reconciling multiple estimates into a single value is opaque.

The way of reconciling multiple estimates is to treat them as evidence and update via Bayes' Theorem, or to weight them by their probability of being correct and average them using standard expected value calculation. If you simply take issue with the fact that real-world agents don't do this formally, I don't see what the argument is. We already have a philosophical answer, so naturally the right thing to do is for real-world agents to approximate it as well as they can.
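(A minimal sketch of the second option - weighting estimates by the probability that each is correct - with made-up numbers:)

```python
# Made-up numbers: three point estimates of the same quantity, and our
# credence that each estimator's model is the right one (summing to 1).
estimates = [120.0, 95.0, 200.0]
weights = [0.5, 0.3, 0.2]

# Probability-weighted average: a standard expected value calculation.
best_guess = sum(w * e for w, e in zip(weights, estimates))
print(best_guess)  # ~128.5
```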

Comment author: Henry_Stanley 06 December 2017 11:45:55PM, 1 point

> If the data for these surveys didn't come from Lisak ... then it's just nonsensical to presume that the data is skewed because it's feminist

Agreed - but I still think we should be concerned about the quality of the data. The linked article suggests that Lisak's study was assembled from other studies that he's apparently unable to cite, which weren't especially careful about the data they collected, and which probably aren't representative of most college campuses.

Comment author: kbog 10 December 2017 04:20:15AM, 1 point

Nothing in that article suggests that the data was low quality, just that some of the subjects might not have been traditional college students.

> probably aren't representative of most college campuses.

That's irrelevant here, because the number is being used as a representation of men in EA, not men on college campuses.

In response to the post "What consequences?"
Comment author: kbog 28 November 2017 06:10:14AM, 1 point

It's worth noting that focusing on long-run consequences doesn't necessarily mean just looking at x-risks. A fully fleshed-out long-run evaluation looks at many factors of civilizational quality and safety, and I think it is good enough to dominate other considerations. It's certainly better than allowing x-risk concerns alone to dominate.

> But this objection only highlights the difficulty presented by cluelessness. In a very literal sense, a physician in this position is clueless about what action would be best.

I don't think this is true. Killing a random baby on the off chance that it might become a dictator is a bad idea. You can do the math on that if you want, or just trust me that its expected consequences are harmful to society.
