PeterSlattery

Research @ MIT FutureTech/Monash University/Ready Research
3102 karma · Joined Dec 2015 · Working (6-15 years) · Sydney NSW, Australia
www.pslattery.com/

Bio


Visiting Scientist at MIT FutureTech helping with research, communication and operations. Doing some 'fractional movement building'. 

On leave from roles as i) a behaviour change researcher at BehaviourWorks Australia at Monash University and ii) an EA course developer at the University of Queensland.

Founder and team member at Ready Research.

Former movement builder for the EA groups at i) UNSW (Sydney, Australia), ii) Sydney, Australia, and iii) Ireland.

Marketing Lead for the 2019 EAGx Australia conference.

Founder and former lead for the EA Behavioral Science Newsletter.

See my LinkedIn profile for more of my work.

Leave (anonymous) feedback here.

Sequences
1

A proposed approach for AI safety movement building

Comments
384


Also, someone messaged me about a recent controversy that Bryan was involved in. I thought he had been exonerated, but this person thought he had still done bad things.

See: https://www.dailymail.co.uk/news/article-11692609/Anti-aging-biotech-tycoon-accused-dumping-fianc-e-breast-cancer-diagnosis.html

And his response: https://twitter.com/bryan_johnson/status/1734257098119356900?t=DHcSxlZ5PkxhREVJkAdXag&s=19

Worth knowing about when judging his character.

Yeah, I think that's part of it. I also thought it was very interesting how he justified what he was doing as important for the long-term future, given the expected emergence of superhuman AI. E.g., he is running his life by an algorithm in the expectation that society might be run in a similar way.

I will definitely say that he does come across as hyper-rational and low-empathy in general, but there are also some touching moments here where he clearly cares a lot about his family and really doesn't want to lose them. It could all be an act, of course.

Thanks for sharing your opinion. What's your evidence for this claim?

Yeah, he could be planning to donate money once his attempt to reduce or overcome mortality is resolved.

He said several times that what he's doing now is only part one of the plan, so I guess there is an opportunity to withhold judgment and see what he does later.

Having said all that, I don't want to come across as trusting him. I just heard the interview and was really surprised by all the EA themes which emerged and the narrative he proposed for why what he's doing is important.

Thanks for the input!

I think of EA as a cluster of values and related actions that people can hold/practice to different extents. For instance, caring about social impact, seeking comparative advantage, thinking about long-term positive impacts, and being concerned about existential risks, including AI. He touched on all of those.

It's true that he doesn't mention donations. I don't think that discounts his alignment in other ways.

Useful to know he might not be genuine though.

Thanks for this, I appreciate that someone read everything in depth and responded. 

I feel I should say something because I defended Nonlinear (NL) in previous comments, and it feels like I am ignoring the updated evidence/debate if I don't.

I also really don’t want to get sucked in, so I will try to keep it short:

How I feel
I previously said that I was very concerned after Ben's post, then persuaded by the response from NL that they are not net negative. 

Since then, I have realized that more negative views have been expressed towards NL than I had appreciated. I have been somewhat influenced by the credibility of some of the people who disagree with me.

Having said that, the current evidence is still not enough to persuade me that NL will be net-negative if involved in EA in the future. They may have made some misjudgments, but they have paid a very high price, and it seems relatively easy to avoid similar mistakes in the future.

(I also feel frustrated that I have put so much time into this and wish it was not a public trial of sorts)

I agree with this point:

"An effective altruist organization needs to be actively good."

BUT I am not sure if you can reasonably conclude that NL is not actively good from the current balance of evidence because it is fuzzy and incomplete. Little effort has been invested in figuring out their positive impacts and weighing them against their negative impacts.

Contrary to the post, I expect that Kat and Emerson probably do agree with this now (as general principles) after their failed experiment here:

People should be paid for their work in money and not in ziplining trips.

A person should not simultaneously be your friend, your housemate, your mentee, an employee of the charity you founded, and the person you pay to clean your house.

Nonprofits should follow generally accepted accounting principles and file legally required paperwork.

Employers should not ask their employees to commit felonies unrelated to the job they were hired for.[16]

I disagree somewhat with Ozy/Jeff on this:
"Much of Kat and Emerson’s effective altruist work focuses on mentoring young effective altruists and incubating organizations. I believe the balance of the evidence—much of it from their own defense of their actions—shows that they are hilariously ill-suited for that job. Grants to Nonlinear to do mentoring or incubation are likely to backfire. Grantmakers should also consider the risk of subsidizing Nonlinear’s mentoring and incubation programs when they give grants to Nonlinear for other programs (e.g. its podcast)."

"NL's misjudgements here show a really bad fit for their chosen role in connecting and mentoring people."

Again, we are just hearing about the bad experiences here from a small minority of the people involved in a program. What about the various people who had good experiences? We might be considering less than a percent of the available feedback.

I think it is fairer to say that they made a mistake here that reflects badly on them and raises concern; that we should investigate more and perhaps be wary, but not that we should hold strong confidence either way.

I think I agree with Jeff on this:
"We need to do the best we have with the information we have, put things together, gather more information as needed, and make a coherent picture."

What I therefore think should happen next:
NL should acknowledge any mistakes, say what they will do differently in the future, and be able to continue being part of the community while continuing to be scrutinized accordingly (however that proceeds).

I'd like people to be more cautious than before in their engagements with them, but not to write them off completely or assume they are bad actors. (EA) organizations/people can and do make mistakes, then improve and avoid those mistakes in the future.

Thanks for the detailed response, I appreciate it!

Thanks for writing this, Joseph. 

Minor, but I don't really understand this claim:

Someone made a forum post about taking several months off work to hike, claiming that it was a great career decision and that they gained lots of transferable skills. I see this as LinkedIn-style clout-seeking behavior.

I am curious why you think this i) gains them clout or ii) was written with that intention? 

It seems very different from the other examples, which seem to be about claiming unfair competencies or levels of impact, etc.

I personally think that taking time off work to hike is more likely to cost you status than give you status in EA circles! I therefore read that post as an attempt to promote new community norms (around work-life balance, self-discovery, etc.) rather than to gain status.

One disclaimer here is that I think I know this person, so I am probably biased. I am genuinely curious though and not feeling defensive etc.

If you have time, can you provide some examples of what you saw as evidence of wrongdoing? 

I feel that much of what I saw from my limited engagement was a valid refutation of the claims made. For instance, see the examples given in the post above.

There were also responses to new claims, which I saw as attempts to make clear that other claims, made separately from Ben's post, were also false.

I did see some cases where a refutation and a claim didn't exactly match, but I didn't register that as wrongdoing (which might be due to bias, or to not realising the implications of the changes).

Also, are you sure it is fair to claim that most of the evidence they provided misstated the claims and provided evidence of wrongdoing? Was it really most of the evidence, or just (potentially) some of it?
