Comment author: kbog  (EA Profile) 12 January 2017 06:49:35AM *  9 points [-]

I like your thoughts and agree with reframing it as epistemic virtue generally instead of just lying. But I think EAs are too quick to think about behavior in terms of incentives and rational action, especially when talking about each other. Almost no one around here is rationally selfish, some people are rationally altruistic, and most are probably some combination of altruism, selfishness, and irrationality. Yet people here treat this as some really hard problem where rational people are likely to be dishonest, so we need to make it rational for people to be honest, and so on.

We should remember all the ways that people can be primed or nudged to be honest or dishonest. This might be a hard aspect of an organization to evaluate from the outside, but I would guess that it's at least as important internally as the desire to maximize growth metrics.

For one thing, culture is important. Who is leading? What is their leadership style? I'm not in the middle of all this meta stuff, but it's weird (coming from the Army) that I see so much talk about organizations but I don't think I've ever seen someone even mention the word "leadership."

Also, who is working at EA organizations? How many insiders and how many outsiders? I would suggest that ensuring a minority of an organization is composed of identifiable outsiders or skeptical people would compel people to be more transparent simply by making them feel like they are being watched. I know that some people have debated various reasons to have outsiders work for EA orgs - well, here's another thing to consider.

I don't have much else to contribute, but all you LessWrong people who have been reading behavioral econ literature since day one should be jumping all over this.

Comment author: Richard_Batty 17 January 2017 04:01:09PM 0 points [-]

What sort of discussion of leadership would you like to see? How was this done in the Army?

Comment author: Richard_Batty 05 January 2017 11:16:56PM *  8 points [-]

I know some effective altruists who see EAs like Holden Karnofsky or what not do incredible things, and feel a little bit of resentment at themselves and others; feeling inadequate that they can’t make such a large difference.

I think there's a belief that people often have when looking at successful people which is really harmful, the belief that "I am fundamentally not like them - not the type of person who can be successful." I've regularly had this thought, sometimes explicitly and sometimes as a hidden assumption behind other thoughts and behaviours.

It's easy to slip into believing it when you hear the bios of successful people. For example, William MacAskill's bio includes being one of the youngest associate professors of philosophy in the world, co-founder of CEA, co-founder of 80,000 Hours, and a published author. Or you can read profiles of Rhodes Scholars and come across lines like "built an electric car while in high school and an electric bicycle while in college".

When you hear these bios it's hard to imagine how these people achieved these things. Cal Newport calls this the failed simulation effect - we feel someone is impressive if we can't simulate the steps by which they achieved their success. But even if we can't immediately see the steps they're still there. They achieved their success through a series of non-magic practical actions, not because they're fundamentally a different sort of person.

So a couple of suggestions:

If you're feeling like you fundamentally can't be as successful as some of the people you admire, start by reading Cal Newport's blog post. It gives the backstory behind a particularly impressive student, showing the exact (non-magical) steps he took to achieve an impressive bio. Then, when you hear an impressive achievement, remind yourself that there is a messy practical backstory to this that you're not hearing. Maybe read full biographies of successful people to see their gradual rise. Then go work on the next little increment of your plan, because that's the only consistent way anyone gets success.

If you're a person others look up to as successful, start communicating some of the details of how you achieved what you did. Show the practicalities, not just the flashy bio-worthy outcomes.

Comment author: Telofy  (EA Profile) 03 January 2017 11:16:15AM 2 points [-]

Here is something I recently proposed in a .impact chat: Do we need to make it more clear how people can ask questions? When people are new to EA, they’ll have lots of questions, and Google might not always be able to point them to the best documents answering them.

(1) The international effective altruism group has a high bar for quality, so it's not a good fit for asking a random question; (2) the EA Forum is used for longer articles, so people who are sensitive to that will refrain from asking questions here; (3) in open threads, questions that arrive late are easily overlooked; and (4) not everyone has a meetup nearby or is curious enough about any particular answer to go to one. The EA subreddit is the best fit I can think of for asking questions, but it took me a while to remember that it exists, and I don't recognize any (nick)names there, so if it really is the best place to ask questions, it would need to be promoted more to newcomers and seasoned EAs. People who work in outreach could add the Reddit feed to their Feedly accounts, and the people operating the EA Forum could put up a link recommending it as a place to ask questions.

What do you think?

Comment author: Richard_Batty 03 January 2017 12:20:00PM 2 points [-]

An EA stackexchange would be good for this. There is one being proposed:

But it needs someone to take it on as a project and do everything necessary to make it a success. Oli Habryka has been thinking about how to do that, but he needs someone to run the project.

Comment author: Richard_Batty 28 November 2016 10:15:42AM 0 points [-]

Is it worth cross-posting this to LessWrong? Anna Salamon is leading an effort to get LessWrong used again as a locus of rationality conversation, and this would fit well there.

Comment author: Richard_Batty 24 September 2016 04:59:40PM 1 point [-]

Apart from 80k, do you know whether the other organisations have had few applicants to these jobs, or lots of applicants but no one good enough?

Comment author: Peter_Hurford  (EA Profile) 24 September 2016 02:08:23AM *  20 points [-]

For those who don't know, I work as a data scientist / software engineer, have more than two years experience, identify as an EA, and donate a considerable portion of my income (right now living on ~$40K and donating ~$50K).

This post resonates with me because I often get annoyed that my earning-to-give job has little to no direct social impact and takes a lot of time away from the volunteering I would otherwise like to do. However, when considering these direct work jobs, I usually end up not applying for some of the following reasons, ordered from most feared to least feared:

(a) For personal reasons, I need to work in Chicago right now, but none of the other organizations are in Chicago. I would definitely consider remote work, but that sounds like it would make me really lonely as I have few friends outside of work and my girlfriend is long distance. My thought is that someone else who can live in the Bay would be a better fit.

(b) I'm concerned the jobs won't be technically challenging enough. My perception is that these jobs often involve maintaining WordPress sites or chaining Google Sheets together and don't involve making great technology. This motivation of mine exists alongside my EA motivations and is what keeps me interested in my for-profit job, but I'm afraid I'd lose it in a direct work job and I wouldn't be fulfilled from EA drive alone. My thought is that someone else who is less experienced or not motivated by technical challenge would be a better fit. I'd definitely be interested in any EA org that had a strong need for data science though, whether it be in generating predictions, creating product recommendations, classifying objects, etc.

(c) I'm concerned I don't have the relevant skills. These jobs seem very front-end focused or focused on making mobile apps, and my experience is in data science and back-end engineering. I'm not good at designing apps and would think that someone else more skilled at front-end design would be a better fit.

(d) I'm concerned these jobs don't offer sufficient salary. While I definitely identify with EA and want to donate a lot of money, I'm not super into sacrifice and would like to do things that, in Chicago, require $40K or more. I'd also like to save to eventually raise a family, buy a house, send kids to college, etc. My thought is that someone who can take a lower salary would be a better fit.

(e) I think that doing for-profit work will build better career capital that could launch an even more impactful career in the future, perhaps in tech entrepreneurship.

I don't know if (a)-(e) are actually true, but they're fears that keep me from exploring much further. I also think (a)-(e) are based on the thought that there are many other EA software engineers who, given these limitations, could easily be a better fit, but maybe that's not true? I didn't know any EA org other than 80K was struggling to hire software engineers, so that definitely updates me in that direction.

Comment author: Richard_Batty 24 September 2016 04:55:48PM 2 points [-]

In response to (b), I think that's true for the 80k job. I decided not to apply for it because it was WordPress, which is horrible to work with and bad for career capital as a developer. Other developers I spoke to about it felt similarly.

But this isn't true of all of the jobs.

For example, the GiveDirectly advert says "GiveDirectly is looking for a full-stack developer who is ready to own, develop, and refine a broad portfolio of products, ranging from mobile and web applications to backend data integrations. As GiveDirectly’s only full-time technologist they will be responsible for developing solutions to the organization's most challenging technical problems, and owning the resolution from end to end."

When I unsuccessfully applied to Wave, it similarly sounded like a standard backend web development job, not WordPress or tying together Google Sheets.

Comment author: Habryka 22 September 2016 11:08:18PM *  2 points [-]

Regarding 3:

The EA Hub, the EA survey, the traffic numbers for the forum, and the locations of EAG attendees, as well as most other survey data we have, all agree quite well on the distribution of the EA community. They all look roughly like the EA Hub map:

For reference, here is the distribution of people who answered the EA survey (conditional on people who filled out the whole thing and gave additional information):

For reference, here is the distribution of traffic to the Forum:

And here is the distribution by country:

It would take me a while to make the origins of the participants for EAG 2015 into a nice map, but it generally follows a similar distribution, with the East Coast being naturally somewhat underrepresented (since we didn't have an event there).

In general, San Francisco is the biggest hub, the East Coast has a good number of people but is quite spread out, and London+Oxford is about half the size of the Bay Area, with a good number of people spread around the UK. London + Berlin is usually still only about 50%-60% of the size of the Bay Area. (For the Google Analytics data above, make sure to add up Oakland, Berkeley, and San Francisco to get an accurate number for the Bay Area, and probably add up Cambridge, Oxford, and London for a roughly comparable number for the London area.)

Comment author: Richard_Batty 23 September 2016 10:13:14PM 3 points [-]

In addition to AGB's point about the forum data, the EA Hub map in its default zoom state shows 746 in Europe, 669 in Eastern US, and 460 in Western US.

For the EA survey in its default zoom state, you get 298 in Europe, 377 in Eastern US, and 289 in Western US.

Comment author: RomeoStevens 01 September 2016 12:41:46AM *  2 points [-]

The meetings framing is one of the things I mean. The reference class of meetings is low value. What I'm proposing is that the current threshold of "a big enough chance of being useful to risk a 30-minute Skype call" is set too high, and that more video calls should happen on the margin. I don't think these should necessarily be thought of as meetings with clear agendas. Exploratory conversations are there to discover whether there is any value, not to exploit known value from the first minute.

I agree about the house party parameters. I'd be interested in efforts to host a regular virtual meetup. VR solutions are likely still a couple of years away from being reasonably pleasant to use casually (phones rather than dedicated headsets, since most people will not have dedicated hardware, but mid-range phones will be VR-capable soon).

Comment author: Richard_Batty 01 September 2016 08:44:59PM 0 points [-]

I agree that changing the framing away from meetings would be good, I'm just not sure how to do that.

Do you fancy running a virtual party?

Comment author: RomeoStevens 31 August 2016 08:20:28PM *  4 points [-]

Video calls have phenomenally lowered the cost of conversations, but most of us don't use them much. The remaining frictions seem to be mostly soft, related to etiquette and social anxiety. As such, I think developing some sort of generic protocol for reaching out and having video calls would be helpful. Scripts reduce anxiety about the proper thing to do.

Some ideas in that vein:

  • Calls should last no more than 90 minutes to avoid burnout and feelings that calls are a virtuous obligation more than a fun opportunity.

  • More generally, calls should end once someone runs out of energy, and the affordance to end the call without hurt feelings should exist. More total calls exist if everyone is really enjoying them! We care about this more than awkwardness for a few seconds!

  • Reaching out to people and scheduling. This can induce ugh fields when the video call is made to feel more like scheduling a meeting than a casual exploration of whether there is some value here. People in our community care a lot about being on time and not wasting another person's time. These tendencies both add friction to more exploratory interaction. Example: "Hey, I'd be interested in a brief chat about X with you whenever we're both free. You can ping me at these times; alternately, are there good times to ping you to check if you are free?"

  • People might feel tempted to stick to virtuous topics when reaching out, which is fine, but everything goes better if it's also a topic you genuinely care a lot about. This makes for better conversations, which reinforces the act of having conversations etc.

  • Variability tolerance. This whole thing works much better if you go into it being okay with many of your conversations not leading to large, actionable insights. Lots more cross-connections in the EA movement serve more than just immediate benefits. These conversations can lay the groundwork for further cross connections in the future as you build a richer map of who is interested in what and we can help each other make the most fruitful connections.

  • Why not just emails since asynchronous is even easier? The de facto state of email is that we impose a quality standard on them that makes them onerous to write. Conversations, in contrast, have a much easier time of staying in a casual, exploratory mode that is conducive to quickly homing in on the areas where you can have the most fruitful information transfer.

  • Lastly, the value of more video calls happening is likely higher than an intuitive guess would suggest. Many people I have spoken with, and I myself, have had the experience of a short call with someone else interested in the area unsticking a project in ways we didn't even realize it was stuck.

If you would like practice, please reach out to me on Facebook for a Skype call. :)

Comment author: Richard_Batty 01 September 2016 12:21:17AM 1 point [-]

Video calls could help overcome geographic splintering of EAs. For example, I've been involved in EA for 5 years and I still haven't met many bay area EAs because I've always been put off going by the cost of flights from the UK.

I've considered Skyping people, but here's what puts me off:

  • Many EAs defend their time against meetings because they're busy. I worry that I'd be imposing by asking for a Skype call
  • I feel bad asking for a Skype call without a clear purpose
  • Arranging and scheduling a meeting feels like work, not socialising

However, at house parties I've talked to the very same people I'd feel awkward about asking to skype with because house parties overcome these issues.

The ideal would be to somehow create the characteristics of a house party over the internet:

  • Several people available
  • You can join in with and peel off from groups
  • You can overhear other conversations and then join in
  • There are ways to easily and inoffensively end a conversation when it's run its course
  • You can join in with a group that contains some people you know and some people you don't
  • The start time and end time are fuzzy so you can join in when you want
  • You can randomly decide to go without much planning, and can back out without telling anyone

Some things that have come closer to this than a normal skype:

  • The Complice EA study hall: this has chat every Pomodoro break. It's informal, optional, doesn't require arranging, and involves several people. It's really nice, but it's only in Pomodoro breaks and is via chat rather than voice.
  • Phone calls and skypes with close friends and family where it's not seen as weird to randomly phone them

Maybe an MVP would be to set up a Google Hangouts party with multiple hangouts. Or I wonder if there's some better software out there designed for this purpose.

Comment author: Kerry_Vaughan 25 August 2016 03:37:22PM 1 point [-]

Thanks for this. Very helpful.

Comment author: Richard_Batty 28 August 2016 02:27:37PM *  2 points [-]

I'm not sure if this discussion has changed your view on using deceptive marketing for EA Global, but if it has, what do you plan to do to avoid it happening in future work by EA Outreach?

Also, it's easy for EAs with mainly consequentialist ethics to justify deception and non-transparency for the greater good, without considering consequences like the ones discussed here about trust and cooperation. Would it be worth EAO attempting to prevent future deception by promoting the idea that we should be honest and transparent in our communications?
