John_Maxwell
3249 karma · Joined Aug 2014 · 547 comments

I think critics see it as a "sharp left turn" in the AI Alignment sense, where the longtermist values were there all along but were much more dormant while EA was less powerful.

Not necessarily a deliberate strategy though -- my model is that EA started out fairly cause-neutral, people had lots of discussions about the best causes, and longtermist causes gradually emerged as the best.

E.g. in 2012 Holden Karnofsky wrote:

I consider the general cause of "looking for ways that philanthropic dollars can reduce direct threats of global catastrophic risks, particularly those that involve some risk of human extinction" to be a relatively high-potential cause. It is on the working agenda for GiveWell Labs and we will be writing more about it.

You said you were looking for "when the ideas started gathering people". I do suspect there's an interesting counterfactual where in-person gathering wasn't a major part of the EA movement. I can think of some other movements where in-person gathering is not focal. In any case, I'm not hung up on the distinction, it just seemed worth mentioning.

The "effective altruism" tag on LessWrong has lots of early EA discussion. E.g. here is a comment from Anna Salamon explaining Givewell to Eliezer Yudkowsky in early 2009.

My sense is that early EA summits were pretty important -- here are videos from the first EA Summit in 2013.

I think the "fulltime job as a scientist" situation could be addressed with an "apply for curation" process, as outlined in the second half of this comment.

Thanks a lot for writing this post!

Personal experience: When I tried a vegan diet, I experienced gradually decreasing energy levels and gradually increasing desire for iron-rich animal products (hamburgers). My energy levels went back to normal when I went ahead and ate the hamburgers.

So, I'm really excited about the potential of nutritional investigations to improve vegan diets!

For bivalvegans, note that some bivalves are rich in heme iron (heme iron, from animals, is more easily absorbed than the non-heme iron found in plants).

Again, personal experience: I've found that if I'm still feeling tired after multiple days of rest, eating food with significant heme iron usually solves the problem. (I think this was true at least once even when I didn't meet the technical criteria for iron deficiency.)

But, just to reinforce the point that too much iron can also be bad:

Donating blood can improve your cardiovascular health. Elevated levels of iron in the blood put men at increased risk of heart disease. Donating blood takes iron out of your system (it's gradually replenished by the foods you eat).

Source.

Thanks for all your hard work, Megan.

I'm reminded of this post from a few months ago: Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely.

And this point from a post Peter Wildeford wrote: "I think criticism of EA may be more discouraging than it is intended to be and we don't think about this enough."

In theory, the EA movement isn't about us as EAs. It's about doing good for others. But in practice, we're all humans, and I think it's human nature to expect recognition or gratitude when we've done an altruistic act. If, instead of gratitude, we get a punishment in the form of a bad outcome or sharp words, that feels like a bait & switch.

My hypothesis is that being surrounded by other do-gooders makes the situation worse. You feel like you're in a recognition deficit, many people around you feel the same way, and no one is injecting gratitude into the ecosystem to resolve the misery spiral. Internal debates exacerbate things, insofar as trying to understand someone else's perspective depletes the same emotional resource that altruism does.

Anyway, most of that wasn't very specific to your post -- I'm just wondering if emphasizing "other-care" in addition to "self-care" would help us weather ups & downs.

And, thanks to all the EAs reading this for all the good you are doing.

I wonder if a good standard rule for prizes is that you want a marketing budget that's at least 10-20% of the size of the prize pool, for buying ads on podcasts ML researchers listen to or subreddits they read or whatever. Another idea is to incentivize people to make their submissions publicly, so your contest promotes itself.

Title: Prizes for ML Safety Benchmark Ideas

Author: Joshc, Dan H

URL: https://forum.effectivealtruism.org/posts/jo7hmLrhy576zEyiL/prizes-for-ml-safety-benchmark-ideas

Why it's good: Benchmarks have been a big driver of progress in AI. Benchmarks for ML safety could be a great way to drive progress in AI alignment, and to get people to switch from capabilities-ish research to safety-ish research. The structure of the prize looks good: they're offering a lot of money, there are still over 6 months until the submission deadline, and all they're asking for is a brief write-up. Thinking up benchmarks also seems like the sort of problem that's a good fit for a prize. My only gripe with the competition is that it doesn't seem widely known, hence posting it here.

There are hundreds of startup incubators and accelerators -- is there a particular reason you like Entrepreneur First?

Interesting points.

I think we had a bunch of good shots at spotting what was going on at FTX before the rest of the world, and I think downplaying Sam's actual involvement in the community would have harmed that.

I could see this going the other way as well. Maybe EAs would've felt more free to criticize FTX if they didn't see it as associated with EA in the public mind. Also, insofar as FTX was part of the "EA ingroup", people might've been reluctant to criticize them due to tribalism.

I also think that CEA would very likely have approved any request by Sam to be affiliated with the movement. So your safeguard would, I think, just have differentially made it harder for higher-integrity people (who CEA sadly tends to want to be associated with less, since they by necessity also hold more controversial beliefs) to actually be affiliated with EA, without helping much with the Sam/FTX case.

Re: controversial beliefs, I think Sam was unusually willing to bite bullets in public even by EA standards -- see here.

Presumably any CEA approval process from here on would account for the lessons learned from Sam. And any approval process would hopefully improve over time as data comes in about bad actors.

In any case, I expect that paying for audits (or criticism contests, or whatever) is generally a better way to achieve scrutiny of one's organization than using EA in one's marketing.
