Chris Smith
I largely agree with what you said in this comment, though I'd say the line between data collection and data processing is often blurred in real-world scenarios. 

I think we are talking past each other (not in a bad faith way though!), so I want to stop myself from digging us deeper into an unproductive rabbit hole.

Just saw this comment--I'm also super late to the party responding to you!

It actually seems to me it might have been worth emphasising more, as I think a casual reader could think this post was a critique of formal/explicit/quantitative models in particular.

Totally agree! Honestly, I had several goals with this post, and I almost completely failed on two of them:

  • Arguing why utilitarianism can't be the foundation of ethics.
  • Without talking much about AI, explaining why I don't think people in the EA community are being reasonable when they suggest there's a decent chance of an AGI being developed in the near future.

Instead, I think this post came off as primarily a criticism of certain kinds of models and a criticism of GiveWell's approach to prioritization (which is unfortunate since I think the Optimizer's Curse isn't as big an issue for GiveWell & global health as it is for many other EA orgs/cause areas).

--
On the second piece of your comment, I think we mostly agree. Informal/cluster-style thinking is probably helpful, but it definitely doesn't make the Optimizer's Curse a non-issue.
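
For readers who haven't run into the Optimizer's Curse before, here's a minimal simulation of the effect (all numbers here are made up for illustration):

```python
# Optimizer's Curse, minimal illustration (hypothetical numbers).
# Ten options are all exactly equally good, but we only see noisy,
# individually unbiased estimates. Choosing the option with the highest
# estimate systematically overstates the value of whatever we choose.
import random

random.seed(0)

n_options = 10      # number of interventions being compared (assumed)
true_value = 1.0    # every option is actually equally good (assumed)
noise_sd = 0.5      # standard deviation of estimation error (assumed)
n_trials = 10_000

total_overstatement = 0.0
for _ in range(n_trials):
    estimates = [random.gauss(true_value, noise_sd) for _ in range(n_options)]
    total_overstatement += max(estimates) - true_value

print(f"average overstatement of the chosen option: "
      f"{total_overstatement / n_trials:.2f}")
# Roughly +0.77 here, even though each estimate is unbiased on its own.
```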

Just found this post, coming in to comment a year late--thanks, Michael, for the thoughtful post and Ozzie for the thoughtful comments!

"I'm not saying that these are easy to solve, but rather, there is a mathematical strategy to generally fix them in ways that would make sense intuitively. There's no better approach than to try to approximate the mathematical approach, or go with an approach that in-expectation does a decent job at approximating the mathematical approach."

I might agree with you about what's (in some sense) mathematically possible (in principle). In practice, I don't think people trying to approximate the ideal mathematical approach are going to have a ton of success (for reasons discussed in my post and quoted in Michael's previous comment). 

I don't think searching for "an approach that in-expectation does a decent job at approximating the mathematical approach" is pragmatic.  

In most important scenarios, we're uncertain which approaches work well in-expectation. Our uncertainty about what works well in-expectation is the kind of uncertainty that's hard to hash out in probabilities. A strict Bayesian might say, "That's not a problem--with even more math, the uncertainty can be handled...."
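
To make the strict Bayesian's move concrete: with a normal prior and normal noise, the standard correction is to shrink each estimate toward the prior mean before choosing. A sketch, with all parameters assumed known (none of these numbers come from any real analysis):

```python
# Bayesian shrinkage as a fix for the Optimizer's Curse (a sketch with
# assumed parameters, not a recommendation or anyone's actual method).
import random

random.seed(0)

prior_mean, prior_sd = 1.0, 0.3   # assumed prior over true values
noise_sd = 0.5                    # assumed estimation error
n_options, n_trials = 10, 10_000

# For a normal prior and normal noise, the posterior mean is a
# precision-weighted average of the prior mean and the raw estimate.
w = prior_sd**2 / (prior_sd**2 + noise_sd**2)

raw_bias = shrunk_bias = 0.0
for _ in range(n_trials):
    truths = [random.gauss(prior_mean, prior_sd) for _ in range(n_options)]
    ests = [t + random.gauss(0, noise_sd) for t in truths]
    posts = [prior_mean + w * (e - prior_mean) for e in ests]
    i = max(range(n_options), key=lambda k: ests[k])  # pick the apparent best
    raw_bias += ests[i] - truths[i]       # curse using raw estimates
    shrunk_bias += posts[i] - truths[i]   # after shrinking toward the prior

print(f"raw estimate bias:    {raw_bias / n_trials:+.2f}")
print(f"posterior mean bias:  {shrunk_bias / n_trials:+.2f}")
# With a correctly specified prior, the shrunk estimate of the winner is
# close to unbiased; the raw estimate stays substantially over-optimistic.
```

The catch, as the next paragraph argues, is that this only helps when the prior and the noise model can themselves be trusted.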

While you can keep adding more math and technical patches to try to ground decision-making in Bayesianism, pragmatism eventually pushes me in other directions. I think David Chapman explains this idea a hell of a lot better than I can in Rationalism's Responses To Trouble.

Getting more concrete:
Trusting my gut or listening to domain experts might turn out to be approaches that work well in some situations. If one of these approaches works, I'm sure someone could argue in hindsight that it worked because it approximates an idealized mathematical approach. But I'm skeptical of the merits of work done in reverse (i.e., trying to discover non-math approaches by looking for things that will approximate idealized mathematical approaches).
 

(I used to work for GiveWell)

Hey Ben,

I'm sympathetic to a lot of the points you make in this post, but I think your conclusions are far more negative than is reasonable.

Here's the stuff I largely agree with you on:

- The opportunities to save lives with global health interventions probably aren't nearly as easy as Singer's thought experiment suggests.
- Entities other than GiveWell use GiveWell's estimates without the appropriate level of nuance and detail about where the estimates come from and how uncertain they are.
- There isn't anything close to a $50,000,000,000 funding gap for ultra cost-effective interventions to save lives.
- GiveWell's cost-effectiveness estimates are probably overly optimistic.

That said, I find a few of the things you say in this post frustrating:

"Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated."

I don't think anyone at GiveWell believes millions of lives could be saved today at an ultra-low cost. GiveWell regularly publishes its room-for-more-funding analyses, which indicate that the funding gaps for its recommended interventions amount to way, way less than $50 billion/year.

As far as I can tell, people at Good Ventures & Open Phil sincerely believe that funding in cause areas other than global health may be incredibly cost-effective. I think Good Ventures funds other stuff because they think each $5,000 of funding given to those causes may do more good than an additional $5,000 given to GiveWell's recommended charities. They might be dead wrong, but I don't think they rationalize their choices with, "Well, GiveWell's estimates are just BS so let's not take them seriously."

"They were worried that this would be an unfair way to save lives."

I find this way of describing GW's motivations awfully uncharitable.

"[The cost-effectiveness estimates are] marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process."

GiveWell puts a ton of effort into coming up with these numbers and drawing on them as they make decisions. None of that would happen if the numbers were just created for the purposes of marketing and manipulation. I have significant reservations about how GiveWell's estimates are created and used. I don't have significant reservations about GiveWell's sincerity when sharing the estimates.

That's interesting--and something I may not have considered enough. I think there's a real possibility that there's excessive quantification in some areas of EA but not enough of it in others.

For what it's worth, I may have made this post too broad. I wanted to point out a handful of issues that I felt all kind of fell under the umbrella of "having excessive faith in systematic or mathematical thinking styles." Maybe I should have written several posts on specific topics that get at areas of disagreement a bit more concretely. I might get around to those posts at some point in the future.

Again, none of this is to say that Bayesianism is fundamentally broken or that high-level Bayesian-ish things like "I have a very skeptical prior so I should not take this estimate of impact at face value" are crazy.

As a real world example:

Venture capitalists frequently fund things that they're extremely uncertain about. It's my impression that Bayesian calculations rarely play into these situations. Instead, smart VCs think hard and critically and come to conclusions based on processes that they probably don't fully understand themselves.

It could be that VCs have just failed to realize the amazingness of Bayesianism. However, given that they're smart & there's a ton of money on the table, I think the much more plausible explanation is that hardcore Bayesianism wouldn't lead to better results than whatever it is that successful VCs actually do.

It's always worth entertaining multiple models if you can do that at no cost. However, doing that often comes at some cost (money, time, etc). In situations with lots of uncertainty (where the optimizer's curse is liable to cause significant problems), it's worth paying much higher costs to entertain multiple models (or do other things I suggested) than it is in cases where the optimizer's curse is unlikely to cause serious problems.

Hey Kyle, I'd stopped responding since I felt like we were well beyond the point where we were likely to convince one another or say things that those reading the comments would find insightful.

I understand why you think "good prior" needs to be defined better.

As I try to communicate (but may not quite say explicitly) in my post, I think that in situations where uncertainty is poorly understood, it's hard to come up with priors that are good enough that choosing actions based on explicit Bayesian calculations will lead to better outcomes than choosing actions based on a combination of careful skepticism, information gathering, hunches, and critical thinking.
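
As a toy illustration of why the prior matters (all numbers assumed, reusing the shrinkage sketch from my earlier comment): center the prior in the wrong place and the "corrected" numbers look just as rigorous while being badly off.

```python
# Misspecified-prior version of the shrinkage sketch above (assumed numbers).
import random

random.seed(0)

actual_mean, actual_sd = 0.0, 0.3   # how options are really distributed (assumed)
assumed_mean = 1.0                  # analyst's overconfident prior mean (assumed)
noise_sd, n_options, n_trials = 0.5, 10, 10_000

w = actual_sd**2 / (actual_sd**2 + noise_sd**2)

bias = 0.0
for _ in range(n_trials):
    truths = [random.gauss(actual_mean, actual_sd) for _ in range(n_options)]
    ests = [t + random.gauss(0, noise_sd) for t in truths]
    posts = [assumed_mean + w * (e - assumed_mean) for e in ests]  # wrong center
    i = max(range(n_options), key=lambda k: posts[k])
    bias += posts[i] - truths[i]

print(f"bias of the 'corrected' estimate: {bias / n_trials:+.2f}")
# Substantially positive: the machinery runs and looks principled either
# way, but with a bad prior the output inherits the prior's error.
```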

I'd also be excited to see more people in the EA movement doing the sort of work that I think would put society in a good position for handling future problems when they arrive. E.g., I think a lot of people who associate with EA might be awfully good at pushing for progress in metascience/open science or promoting a free & open internet.
