Building effective altruism
Growing, shaping, or otherwise improving effective altruism as a practical and intellectual project

Quick takes

With another EAG nearby, I thought now would be a good time to push out this draft-y note. I'm sure I'm missing a mountain of nuance, but I stand by the main messages:

"Keep Talking"

I think there are two things EAs could be doing more of, on the margin. They are cheap, easy, and have the potential to unlock value in unexpected ways.

Talk to more people

I say this 15 times a week. It's the most no-brainer thing I can think of, with a ridiculously low barrier to entry; it's usually net-positive for one party while often only drawing on unproductive hours of the other. Almost nobody would be where they are without the conversations they had. Some anecdotes:

- A conversation led to both parties discovering a good mentor-mentee fit, leading to one dropping out of a PhD, being mentored on a project, and becoming an alignment researcher.
- A first conversation led to more conversations, which led to more conversations, one of which illuminated a new route to impact that this person was a tremendously good fit for. They're now working as a congressional staffer.
- A chat with a former employee gave an applicant insight into a company they were interviewing with and helped them land the job (many, many such cases).
- A group that runs a valuable fellowship programme germinated from a conversation between three previously unacquainted people (the founders) (again, many such cases).

Make more introductions to others (or at least suggest who they should reach out to)

By hoarding our social capital we might leave ungodly amounts of value on the table. Develop your instincts and learn to trust them! Put people you speak with in touch with other people they should speak with -- especially if they're earlier in their discovery of using evidence and reason to do more good in the world. (By all means, be protective of those whose time is 2 OOMs more precious; but within +/- 1, let's get more people connected: exchanging ideas, improving our thinking,
(EA) Hotel dedicated to events, retreats, and bootcamps in Blackpool, UK?

I want to gauge what the demand for this might be. Would you be interested in holding or participating in events in such a place? Or in working to run them? Examples of hosted events could be: workshops, conferences, unconferences, retreats, summer schools, coding/data science bootcamps, EtG accelerators, EA charity accelerators, intro to EA bootcamps, AI Safety bootcamps, etc.

This would be next door to CEEALAR (the building is potentially coming on the market), but most likely run by a separate, but closely linked, limited company (which would charge for use, funnel profits to CEEALAR, and also subsidise use where needed). Note that being in Blackpool in a low-cost building means the rates charged by such a company would be significantly lower than elsewhere in the UK (e.g. £300/day for use of the building: 15 bedrooms and communal space downstairs to match that capacity). Maybe think of it as Wytham Abbey, but at the other end of the Monopoly board: only 1% of the cost! (A throwback to the humble beginnings of EA?)

From the early days of the EA Hotel (when we first started hosting unconferences and workshops), I have thought it would be good to have a building dedicated to events, bootcamps, and retreats, where everyone is in and out as a block, so as to minimise overcrowding during events and inefficient use of the building on either side of them (from needing it mostly empty for the events); CEEALAR still suffers from this with its event hosting. The yearly calendar could be filled with, e.g., four 10-12 week bootcamps/study programs, punctuated by four 1-3 week conferences or retreats in between.

This needn't happen straight away, but if I don't get the building now, the option will be lost for years. Having it next door in the terrace means the building can be effectively joined to CEEALAR, making logistics much easier (and another option for the building could be
New: “card view” for frontpage posts

We’re testing out a new “card view” for the main post list on the home page. You can toggle the layout by clicking the dropdown circled in red below. You can see more details on GitHub here. Let us know what you think! :)
Please advertise applications at least 4 weeks before closing! (more for fellowships!)

I've seen a lot of cool job postings, fellowships, and other opportunities announced on the Forum or on 80k only ~10 days before applications close. Because many EA roles and opportunities get cross-posted to other platforms or newsletters, and there's a built-in lag between the original post and the secondary platform, this is especially relevant to EA. For fellowships or similar training programs, where so much work has gone into planning and designing the program ahead of time, I would really encourage organizers to open applications ~2 months before closing.

Keep in mind that most Forum posts don't stay on the frontpage very long, so "posting something on the Forum" does not equal "the EA community has seen this". As someone who runs a local group and a newsletter, I find that opportunities with short application windows are almost always missed by my community: there's not enough turnaround time between when we see the original post, the next newsletter, and time for community members to apply.
Mini EA Forum Update

You can now subscribe to be notified when posts are added to a sequence. You can see more details on GitHub here. We’ve also made it a bit easier to create and edit sequences, including allowing users to delete sequences they’ve made.

I've been thinking a bit about how to improve sequences, so I'd be curious to hear:

1. How you use them
2. What you'd like to be able to do with them
3. Any other thoughts/feedback
My overall impression is that the CEA community health team (CHT from now on) is well-intentioned but sometimes understaffed and other times downright incompetent. It's hard for me to be impartial here, and I understand that their failures are more salient to me than their successes. Yet I endorse the need for change, at the very least including 1) removing people from the CHT who serve as advisors to any EA funds or hold other conflict-of-interest positions, 2) hiring HR and mental health specialists with credentials, and 3) publicly clarifying the team's role and mandate.

My impression is that the most valuable function the CHT provides is supporting community building teams across the world, from advising community builders to preventing problematic community builders from receiving support. If this is the case, I think it would be best to rebrand the CHT as a CEA HR department, and for CEA to properly hire the community builders it now supports as grantees, which one could argue is an employee misclassification.

I would not be comfortable discussing these issues openly out of concern for the people affected, but here are some horror stories:

1. A CHT staff member pressured a community builder to put up with and include a community member with whom they weren't comfortable interacting.
2. A CHT staff member pressured a community builder not to press charges against a community member by whom they felt harassed.
3. After the police put a restraining order in place in this last case, the CHT refused to liaise with the EA Global team to deny the restrained person access, even knowing that the affected community builder would be attending the event.
4. My overall sense is that the CHT is not very mindful of the needs of community builders in other contexts. Two very promising professionals I've mentored have dissociated from EA, and rejected a grant, in large part because of how they were treated by the CHT.
5. My impression is that the CHT staff underm
GET AMBITIOUS SLOWLY

Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst-case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems but you lack the tools to meaningfully follow up.

Faced with big dreams but unclear ability to enact them, people have a few options:

* try anyway and fail badly, probably too badly for it to even be an educational failure
* fake it, probably without knowing they're doing so
* learned helplessness, possibly systemic depression
* head towards failure, but too many people are counting on them, so someone steps in and rescues them. The rescuer considers this net negative and prefers the world where they'd never started to the one where they had to rescue them
* discover more skills than they knew they had; feel great, accomplish great things, learn a lot

The first three are all very costly, especially if you repeat the cycle a few times.

My preferred version is the ambition snowball, or "get ambitious slowly". Pick something big enough to feel challenging but not much more, accomplish it, and then use the skills and confidence you gained to tackle a marginally bigger challenge. This takes longer than immediately going for the brass ring and succeeding on the first try, but I claim it is ultimately faster and has higher EV than repeated failures. I claim EA's emphasis on doing The Most Important Thing has pushed people into premature ambition, and everyone is poorer for it. Certainly I would have been better off hearing this 10 years ago.

What size of challenge is the right size? I've thought about this a lot and don't have a great answer. You can check how things feel in your gut, or compare them to past projects. My few rules:

* stick to problems where failure will at least be informative. If you can't track reality well eno
My biggest takeaway from EA so far has been that the difference in expected moral value between the consensus choice and its alternative(s) can be vastly larger than I had previously thought. I used to think that "common sense" would get me far when it came to moral choices. I even thought that the difference in expected moral value between the "common sense" choice and any alternatives was negligible, so much so that I made a deliberate decision not to invest time in thinking about my own values or ethics.

EA radically changed my opinion. I now hold the view that the consensus view is frequently wrong, even when the stakes are high, and that it is possible to make dramatically better moral decisions by approaching them with rationality and a better-informed ethical framework.

Sometimes I come across people who are familiar with EA ideas but don't particularly engage with them or the community. I often feel surprised, and I think the above is a big part of why. Perhaps more emphasis could be placed on this expected moral value gap in EA outreach?