Comment author: gworley3  (EA Profile) 25 October 2017 10:20:13PM 0 points [-]

I like this phrasing, but maybe not for the reason you propose it.

"Doing the most good" leaves implicit what is good, but still uses a referent ("good") that everyone thinks they know what it means. I think this issue is made even clearer if we talk about "optimizing Earth" instead since optimization must always be optimizing for something. That is, optimization is inherently measured and is about maximization/minimization of some measure. Even when we try to have a generic notion of optimal we still really mean something like effective or efficient as in optimizing for effectiveness or optimizing for efficiency.

But if EA is about optimizing Earth or doing the most good, we must still tackle the problem of what is worth optimizing for and what is good. You mention impact, which sounds to me like some combination of effectiveness and productivity multiplied by effect size, yet at that level of vagueness EA becomes more of a productivity movement and less of a good-doing movement, whatever we may think good is. The trouble is that this exposes the hollowness of the ethical content in the message: it becomes unclear what would not benefit from being part of EA.

To take a repugnant example, if I thought maximizing suffering were good, would I still be part of EA since I want to optimize the Earth (for suffering)?

For me, the best attempt at dealing with this issue has been Brian Tomasik's work on moral multiplicity and compromise.

In response to Open Thread #39
Comment author: casebash 23 October 2017 11:53:40PM *  5 points [-]

LW 2.0 now exists. It's still in beta, with a significant number of bugs left to fix and many features yet to be added, but at some point it will be stable enough that it would be reasonable to consider switching. I'm curious what people think about this; I just thought I'd flag it now.

In response to comment by casebash on Open Thread #39
Comment author: gworley3  (EA Profile) 25 October 2017 07:04:27PM 1 point [-]

My guess is that it probably makes sense to keep the brands separate even if they are intertwined. That does mean there may be a lot of cross-posting, or posting things in one location when they would have done better in the other, unless LW 2.0 has plans I'm unaware of to support branded sub-groups so EA could have its own identity on the LW platform.

In response to Open Thread #39
Comment author: gworley3  (EA Profile) 25 October 2017 07:01:47PM 1 point [-]

I recently wrote about whether generic feedback processes might produce suffering. I'm working on a follow-up post now, so I'm especially interested in things I didn't address that people would like to see addressed.

Comment author: astupple 20 September 2017 02:08:19AM 2 points [-]

While I completely see what you're saying, at the risk of sounding obtuse, I think the opposite of your opener may be true.

"People who do things are not, in general, idea constrained"

The contrary of this statement may be the fundamental point of EA (or at least a variant of it): people who do things in general (outside of EA) tend to act on bad ideas. In fact, EA is more about the ideas underlying what we do than about the doing itself. Millions of affluent people are doing things (going to school, working, upgrading their cars and homes, giving to charity) without examining the underlying ideas. EA's success is its ability to get doers to adopt its ideas. It's creating a pool of doers who use EA ideas instead of conventional wisdom.

Perhaps there are two classes of doers: those already in the EA community who "get it," and those outside who are just plugging away at life. When I think of filling talent gaps, I think that can be done by (A) EA community members developing skills, and (B) recruiting skilled people to join the community. Group A probably doesn't need better ideas because they've already accepted the ideas of our favorite thinkers, etc.; the marginal benefit of even better ideas is small. Instead, group A is better off simply getting down to the hard work of growing talent. But group B is laboring under bad ideas, and for many of them it might not take much at all to get them to swap their bad ideas for EA ideas. My guess is that, to grow talent, it is easier to convert doers from group B than to optimize doers in group A (which is certainly not to say group A shouldn't do the hard work of optimizing its talent).

There is an odd circularity here: I think I just argued myself out of my original stance. I seem to have concluded that we shouldn't focus on the ideas of the EA community (which was my original intention) and instead should focus on methods of recruiting.

Maybe I'm arguing that we should develop recruiting ideas?

Also- any suggestions for good formal discussions of the philosophy and sociology of ideas (beyond the slightly nauseating pop business literature)? "Where Good Ideas Come From" by Steven Johnson is excellent, but not philosophically rigorous.

Comment author: gworley3  (EA Profile) 20 September 2017 05:54:27PM 1 point [-]

Maybe I'm arguing that we should develop recruiting ideas?

Yep :-)

Also- any suggestions for good formal discussions of the philosophy and sociology of ideas (beyond the slightly nauseating pop business literature)? "Where Good Ideas Come From" by Steven Johnson is excellent, but not philosophically rigorous.

I don't, but I suspect some folks around here do. Talk to Malcolm Ocean, maybe?

Comment author: gworley3  (EA Profile) 19 September 2017 07:09:25PM 2 points [-]

I'm going to try to explain here why I am suspicious of the need for this.

People who do things are not, in general, idea-constrained from what I can tell. Lots of people have lots of ideas about what they could do, and there are already people making arguments for and against these ideas in public forums. People who choose to act do so based in part on how these discussions of ideas influence their thinking, but filtered through the lens of experience at making stuff happen.

Additionally, we already have a lot of ideas people recognize as worth implementing that no one is working on, or on which work has not yet come to fruition. It doesn't take long, relative to the effort that will be invested in doing something, to read and think enough to decide what to do, so it seems more likely to me that on the margin we need more desire to do than more curation of ideas about what to do.

All this said, if you want to do something, I think there is value in curating a list of ideas/projects you want people to know about and promoting the existence of that list, or in writing about specific ideas/projects you think people should work on and trying to convince folks to work on those. But an idea directory of the sort you propose sounds to me like a lot of make-work to see only slightly more clearly the landscape doers are already navigating.

In response to comment by gworley3  (EA Profile) on S-risk FAQ
Comment author: Brian_Tomasik 19 September 2017 12:11:11AM 3 points [-]

the sort of thing we were pointing at in the late 90s before we started talking about x-risk

I'd be interested to hear more about that if you want to take the time.

In response to comment by Brian_Tomasik on S-risk FAQ
Comment author: gworley3  (EA Profile) 19 September 2017 12:50:18AM 4 points [-]

My memory is somewhat fuzzy here because it was almost 20 years ago, but I seem to recall discussions on Extropians about far-future "bad" outcomes. In those early days much of the discussion focused on salient outcomes like "robots wipe out humans" that we picked up from fiction, or outcomes that let people grind their particular axes (capitalist dystopian future! ecoterrorist dystopian future! _ dystopian future!), but there was definitely more serious focus on some particular issues.

I remember we worried a lot about grey goo, AIs, extraterrestrial aliens, pandemics, nuclear weapons, etc. A lot of it was focused on getting wiped out (existential threats), but some of it was about undesirable outcomes we wouldn't want to live in. Some of this was about s-risks I'm sure, but I feel like a lot of it was really more about worries over value drift.

I'm not sure there's much else there, though. We knew bad outcomes were possible, but we were mostly optimistic and hadn't developed anything like the risk-avoidance mindset that's become relatively more prevalent today.

In response to S-risk FAQ
Comment author: gworley3  (EA Profile) 18 September 2017 07:30:39PM 4 points [-]

One thing I find meta-interesting about s-risk is that it is included in the sort of thing we were pointing at in the late 90s before we started talking about x-risk, so to my mind s-risk has always been part of the x-risk mitigation program; but, as you make clear, that's not how it's been communicated.

I wonder whether there are types of risks to the long-term future that we implicitly would like to avoid but have accidentally excluded from the explicit definitions of both x-risk and s-risk.

Comment author: gworley3  (EA Profile) 07 September 2017 10:15:03PM 3 points [-]

I think the challenge with a project like this is that it is not 'neutral' in the way most EA causes are.

Most EA causes I can think of are focused on some version of saving lives or reducing suffering. Although there may be disagreement about how to best save lives or reduce suffering (and what things suffer), there is almost no disagreement that we should save lives and reduce suffering. Although this is not a philosophically neutral position, it's 'neutral' in that you will find a vanishingly small number of people who disagree with the goal of saving lives and reducing suffering.

To put it another way, it's 'neutral' because everyone values saving lives and reducing suffering so everyone feels like EA promotes their values.

Specific books, unless they are completely milquetoast, are not neutral in this way and implicitly promote particular ideas. Much of the introductory EA literature, if nothing else, assumes positive act utilitarianism (although within the community there are many notable voices opposed to this position). And if we move away from EA books to other books we think are valuable, they are also going to drift further from 'neutral' values everyone can get behind.

This is not necessarily bad, but it is a project that doesn't seem to me to fit well with much of the EA brand, because whatever impact it has will have to be measured in terms of values not everyone agrees with.

For example, lots of people in the comments list HPMOR, The Sequences, or GEB. I like all of these a lot and would like to see more people read them, but that's because I value the ideas and behaviors they encourage. You don't have to look very far in EA though to find people who don't agree with the rationalist project and wouldn't like to see money spent on sending people copies of these books.

In a position like that, how do you rate the effectiveness of such a project? The impact will be measured in terms of transmitting values that not everyone agrees are worth spreading. Unless you limit yourself to books that just promote the idea that we can save lives, reduce suffering, and be a little smarter about how we go about that, I think you'll necessarily attract a lot of controversy when it comes to evaluation.

I'm not saying I'm against people taking on projects like this. I just want to make sure we're aware it's not a normal EA project, because the immediate outcome seems to be idea transmission, and it's going to be hard to evaluate which ideas are even worth spreading.

Comment author: gworley3  (EA Profile) 31 August 2017 05:07:02PM 2 points [-]

Reading between the lines here, are you saying ACE may not be living up to EA standards given this and other recommendations it has made?

In response to Open Thread #38
Comment author: Kaig 24 August 2017 11:57:07PM 6 points [-]

Hi All,

I wanted to make a post about this, but I just signed up and unfortunately do not yet have the reputation needed. So if anyone finds this worthy of a post, you are welcome to make one.

In short, I think it would be beneficial for EA to get its own Stack Exchange, and we can make this happen by voting on the existing proposal:

https://area51.stackexchange.com/proposals/109145/effective-altruism

The longer argument for this:

If you have done anything related to programming, you are probably familiar with Stack Exchange. It is a community website where peers can answer each other's questions, and both questions and answers can be voted on. This format might sound very similar to this forum and Reddit (which also has a quality EA page). However, I think it would be valuable for EA to get its own Stack Exchange site for the following reasons:

  • Its format makes it easy to ask short questions and preserves common general questions (this forum and Reddit seem more suited to lengthy discussions)
  • Older questions are easily found on Stack Exchange, and search engines index them quite well
  • Stack Exchange has a huge community, so EA could get some free promotion by having its own Stack Exchange site
  • EA organisation sites have good FAQs about general things for newcomers, but in my opinion there is nothing comparable to a crowd-sourced FAQ, for which Stack Exchange has the ideal format

I hope you find this convincing and that we will soon give birth to the EA Stack Exchange site.

In response to comment by Kaig on Open Thread #38
Comment author: gworley3  (EA Profile) 25 August 2017 06:47:30PM 1 point [-]

Would there be enough activity to justify an SE? Seems like an area where we might quickly run out of questions but want to spend a lot of time finding better answers to old questions, which I'm not sure fits the SE format.
