Comment author: Peter_Hurford (EA Profile) 09 September 2017 09:35:14PM * 1 point

What does "systemic change" actually refer to? I don't think I ever understood the term.

Comment author: WillPearson 09 September 2017 07:20:34PM 0 points

I'm not sure if there are many EAs interested in it, because of potential low tractability. But I am interested in "systemic change" as a cause area.

Comment author: WillPearson 09 September 2017 06:59:32PM * 0 points

Just a heads up: "technological risks" ignores all the non-anthropogenic catastrophic risks. "Global catastrophic risks" seems good.

Comment author: Peter_Hurford (EA Profile) 09 September 2017 04:51:50PM 0 points

Hi. This is not your fault. The EA Hub team had to take down a few sites because of a CPU overuse error. Until they work out what the problem is and find an alternative, some sites will have to stay offline. An archive has been retained.

Comment author: Gina_Stuessy (EA Profile) 09 September 2017 01:10:09PM 0 points

I get the same thing.

Comment author: mercivadeboncoeur 09 September 2017 10:40:25AM * 0 points

Thank you for sharing this with us here. Depression is a state in which a person tends to feel absolutely empty, anxious, and lost. It is the lowest point on the emotional scale and is marked by a variety of negative feelings, such as guilt, helplessness, anger, and irritability. There are many practical ways to deal with depression: prioritize your life, put a grin on your face, show kindness to others, remember you are human, exercise, spend time with the right people, and repair relationships. Antidepressants can also help. If the above doesn't work, one can consult a professional such as Voyante Sérieuse for help.

Comment author: itaibn 09 September 2017 10:16:28AM 0 points

While I couldn't quickly find the source for this, I'm pretty sure Eliezer read the Lectures on Physics as well. Again, I think Surely You're Joking is good, I just think the Lectures on Physics is better. Both are reasonable candidates for the list.

Comment author: Austen_Forrester 09 September 2017 04:36:08AM -1 points

Of course, I totally forgot about the "global catastrophic risk" term! I really like it, and it doesn't only suggest extinction risks. Even its acronym sounds pretty cool. I also really like your "technological risk" suggestion, Rob. Referring to GCR as "long-term future" is a pretty obvious branding tactic by those who prioritize GCRs. It is vague, misleading, and dishonest.

Comment author: Benito 08 September 2017 09:24:04PM 0 points

"Surely You're Joking Mr Feynman" still shows genuine curiosity, which is rare and valuable. But as I say, it's less about whether I can argue for it, and more about whether the top intellectual contributors in our community found it transformative in their youth. I think many may have read Feynman when young (e.g. it had a big impact on Eliezer).

Comment author: astupple 08 September 2017 07:46:57PM 4 points

I suspect many EAs, like me, do a lot of "micro advising" of friends and younger colleagues. (In medicine, this happens almost daily.) I know I'm an amateur, and I do my best to direct people to the available resources, but it seems like creating some basic pointers on how to give casual advice could be helpful.

Alternatively, I see the value in a higher activation energy for potentially reachable advisees: if they truly are considering adjusting their careers, then they'll take the time to look at the official EA material.

Nonetheless, it seems like even this advice to amateurs like me could be helpful: "Give your best casual advice. If things look promising, give them links to official EA content."

Comment author: kbog (EA Profile) 08 September 2017 05:03:32AM * 0 points

Accelerating the development of machine intelligence is not a net negative since it can make the world better and safer at least as much as it is a risk. The longer it takes for AGI algorithms to be developed, the more advanced hardware and datasets there will be to support an uncontrolled takeoff. Also, the longer it takes for AI leaders to develop AGI then the more time there is for other nations and organizations to catch up, sparking more dangerous competitive dynamics. Finally, even if it were a net negative, the marginal impact of one additional AI researcher is tiny whereas the marginal impact of one additional AI safety researcher is large, due to the latter community being much smaller.

Comment author: itaibn 08 September 2017 01:00:00AM 1 point

The article on machine learning doesn't discuss the possibility that more people pursuing machine learning jobs could have a net negative effect. It's true that this avenue will generally encourage people who are more considerate of the long-term and altruistic effects of their research, and who will therefore likely have a more positive effect than the average entrant to the field. But if accelerating the development of strong AI is a net negative, that could outweigh the benefit of the average researcher being more altruistic.

Comment author: itaibn 08 September 2017 12:38:24AM 0 points

What do you mean by Feynman? I endorse his Lectures on Physics as something that had a big effect on my own intellectual development, but I worry many people won't be able to get that much out of it. While his more accessible works are good, I don't rate them as highly.

Comment author: gworley3 (EA Profile) 07 September 2017 10:15:03PM 3 points

I think the challenge with a project like this is that it is not 'neutral' in the way most EA causes are.

Most EA causes I can think of are focused on some version of saving lives or reducing suffering. Although there may be disagreement about how to best save lives or reduce suffering (and what things suffer), there is almost no disagreement that we should save lives and reduce suffering. Although this is not a philosophically neutral position, it's 'neutral' in that you will find a vanishingly small number of people who disagree with the goal of saving lives and reducing suffering.

To put it another way, it's 'neutral' because everyone values saving lives and reducing suffering so everyone feels like EA promotes their values.

Specific books, unless they are complete milquetoast, are not neutral in this way and implicitly promote particular ideas. Much of introductory EA literature, if nothing else, assumes positive act utilitarianism (although within the community there are many notable voices opposed to this position). And if we move away from EA books to other books we think are valuable, they will drift even further from 'neutral' values everyone can get behind.

This is not necessarily bad, but it doesn't seem to me to fit well with much of the EA brand, because whatever impact it has will have to be measured in terms of values not everyone agrees with.

For example, lots of people in the comments list HPMOR, The Sequences, or GEB. I like all of these a lot and would like to see more people read them, but that's because I value the ideas and behaviors they encourage. You don't have to look very far in EA though to find people who don't agree with the rationalist project and wouldn't like to see money spent on sending people copies of these books.

In a position like that, how do you rate the effectiveness of such a project? The impact will be measured in terms of value transmission around values that not everyone agrees on spreading. Unless you limit yourself to books that just promote the idea that we can save lives, reduce suffering, and be a little smarter about how we go about that, I think you'll necessarily attract a lot of controversy in terms of evaluation.

I'm not saying I'm not in favor of people taking on projects like this. I just want to make sure we're aware it's not a normal EA project because the immediate outcome seems to be idea transmission and it's going to be hard to evaluate what ideas are even worth spreading.

Comment author: Robert_Wiblin 07 September 2017 09:32:30PM 0 points

Sorry I missed the endnote. I think in that case, where the change is most naturally interpreted as neutral or positive, it's worth giving it a stronger caveat than a footnote, for people just skimming the numbers. :)

Comment author: Robert_Wiblin 07 September 2017 08:18:15PM 1 point

Glad you like them! Tell your friends. ;)

Comment author: Tom_Davidson 07 September 2017 07:38:46PM 4 points

Great podcasts!

Comment author: weeatquince (EA Profile) 07 September 2017 11:33:52AM * 2 points

Hi. In case it's helpful for considering the additional Facebook information: I have a bunch of data on EA social media presence, which I use to compare growth in London to other locations, including a lot of Sociograph data downloaded in 2016.

For example, the EA Facebook group size over the last year:

03/06/2016: 10,263

13/01/2017: 12,070

10/06/2017: 12,953

Obviously you'd expect these numbers to grow even if the movement were shrinking, since people join and then tend not to leave (though they might just ignore the group).

Comment author: Wei_Dai 07 September 2017 08:10:51AM 3 points

This is a bit tangential, but do you know if anyone has done an assessment of the impact of HPMoR? Cousin_it (Vladimir Slepnev) recently wrote:

The question then becomes, how do we set up a status economy that will encourage research? Peer review is one way, because publications and citations are a status badge desired by many people. Participating in a forum like LW when it's "hot" and frequented by high status folks is another way, but unfortunately we don't have that anymore. From that perspective it's easy to see why the massively popular HPMOR didn't attract many new researchers to AI risk, but attracted people to HPMOR speculation and rational fic writing. People do follow their interests sometimes, but mostly they try to find venues to show off.

Taking this one step further, it seems to me that HPMoR may have done harm by directing people's attentions (including Eliezer's own) away from doing the hard work of making philosophical and practical progress in AI alignment and rationality, towards discussion/speculation of the book and rational fic writing, thereby contributing to the decline of LW. Of course it also helped bring new people into the rationalist/EA communities. What would be a fair assessment of its net impact?

Comment author: lukeprog 07 September 2017 04:00:23AM 3 points

We got close to doing this when I was at MIRI but just didn't have the outreach capacity to do it. The closest we got was to print a bunch of paperback copies of (the first 17 chapters of) just one book, HPMoR, and we shipped copies of that to contacts at various universities etc. I think we distributed 1000-2000 copies, not sure if more happened after I left.
