PeterSlattery

Research @ MIT FutureTech/Monash University/Ready Research
3093 karma · Joined Dec 2015 · Working (6-15 years) · Sydney NSW, Australia
www.pslattery.com/

Bio

Participation: 4

Visiting Scientist at MIT FutureTech helping with research, communication and operations. Doing some 'fractional movement building'. 

On leave from roles as i) a behaviour change researcher at BehaviourWorks Australia at Monash University and ii) an EA course developer at the University of Queensland.

Founder and team member at Ready Research.

Former movement builder for the EA groups at i) UNSW (Sydney, Australia), ii) Sydney, Australia, and iii) Ireland.

Marketing Lead for the 2019 EAGx Australia conference.

Founder and former lead for the EA Behavioral Science Newsletter.

See my LinkedIn profile for more of my work.

Leave (anonymous) feedback here.

Sequences: 1

A proposed approach for AI safety movement building

Comments: 379

Topic contributions: 3

Thanks for this, I appreciate that someone read everything in depth and responded. 

I feel I should say something because I defended Nonlinear (NL) in previous comments, and it feels like I am ignoring the updated evidence/debate if I don't.

I also really don’t want to get sucked in, so I will try to keep it short:

How I feel
I previously said that I was very concerned after Ben's post, and then was persuaded by NL's response that they are not net negative. 

Since then, I have realized that more negative views have been expressed towards NL than I had appreciated. I have been somewhat influenced by the credibility of some of the people who disagree with me.

Having said that, the current evidence is still not enough to persuade me that NL will be net-negative if involved in EA in the future. They may have made some misjudgments, but they have paid a very high price, and it seems relatively easy to avoid similar mistakes in the future.

(I also feel frustrated that I have put so much time into this and wish it was not a public trial of sorts)

I agree with this point:

"An effective altruist organization needs to be actively good."

BUT I am not sure you can reasonably conclude from the current balance of evidence that NL is not actively good, because that evidence is fuzzy and incomplete. Little effort has been invested in figuring out their positive impacts and weighing them against their negative impacts.

Contrary to the post, I expect that Kat and Emerson probably do agree with these now (as general principles) after their failed experiment here:

People should be paid for their work in money and not in ziplining trips.
A person should not simultaneously be your friend, your housemate, your mentee, an employee of the charity you founded, and the person you pay to clean your house.
Nonprofits should follow generally accepted accounting principles and file legally required paperwork.
Employers should not ask their employees to commit felonies unrelated to the job they were hired for.[16]

I disagree somewhat with Ozy/Jeff on this:
"Much of Kat and Emerson’s effective altruist work focuses on mentoring young effective altruists and incubating organizations. I believe the balance of the evidence—much of it from their own defense of their actions—shows that they are hilariously ill-suited for that job. Grants to Nonlinear to do mentoring or incubation are likely to backfire. Grantmakers should also consider the risk of subsidizing Nonlinear’s mentoring and incubation programs when they give grants to Nonlinear for other programs (e.g. its podcast)."

"NL's misjudgements here show a really bad fit for their chosen role in connecting and mentoring people."

Again, we are just hearing about the bad stuff here from a small minority of the people involved in a program. What about the various people who had good experiences? We might be considering less than a percent of the available feedback.

I think it is fairer to say that they made a mistake here that reflects badly on them and raises concern; that we should investigate more and perhaps be cautious, but not that we should hold strong confidence that they are a bad fit.

I think I agree with Jeff on this:
"We need to do the best we have with the information we have, put things together, gather more information as needed, and make a coherent picture."

What I therefore think should happen next:
NL should acknowledge any mistakes, say what they will do differently in the future, and be able to continue being part of the community while continuing to be scrutinized accordingly (however that proceeds).

I'd like people to be more cautious than previously in their engagements with them, but not to write them off completely or assume they are bad actors.  (EA) organizations/people can and do make mistakes and then improve and avoid those mistakes in the future. 

Thanks for the detailed response, I appreciate it!

Thanks for writing this, Joseph. 

Minor, but I don't really understand this claim:

Someone made a forum post about taking several months off work to hike, claiming that it was a great career decision and that they gained lots of transferable skills. I see this as LinkedIn-style clout-seeking behavior.

I am curious why you think this i) gains them clout or ii) was written with that intention? 

It seems very different to the other examples, which seem to be about claiming unwarranted competencies or levels of impact, etc. 

I personally think that taking time off work to hike is more likely to cost you status than give you status in EA circles! I therefore read that post more as an attempt to promote new community norms (around work-life balance, self-discovery, etc.) than as an attempt to gain status.

One disclaimer here is that I think I know this person, so I am probably biased. I am genuinely curious though and not feeling defensive etc.

If you have time, can you provide some examples of what you saw as evidence of wrongdoing? 

I feel that much of what I saw from my limited engagement was a valid refutation of the claims made. For instance, see the examples given in the post above.

There were also responses to new claims; I saw those as being about making it clear that other claims, which had been made separately from Ben's post, were also false.

I did see some cases where a refutation and claim didn't exactly match, but I didn't register that as wrongdoing (which might be due to bias or not realising the implications of the changes etc)

Also, are you sure it is fair to claim that most of the evidence they provided misstated the claims and provided evidence of wrongdoing? Was it really most of the evidence, or just (potentially) some of it?

Thank you for explaining all of this.

If it is ok, can you please clarify what you said here because I am not sure if I properly understand it: "in our direct messages about this post prior to publication, provided a snippet of a private conversation about the ACX meetup board decision where you took a maximally broad interpretation of while I had limited ways of verifying, pressured me to add it as context to this post in a way that would have led to a substantially false statement on my part, then admitted greater confusion to a board member while saying nothing to me about the same, after which I reconfirmed with the same board member that the wording I chose was accurate to his perception."

(I do also want to clarify that Me and Ben were somewhat ironically not shared on this post in advance, and I would have left comments with evidence falsifying at least one or two of the claims)

I agree that you probably should have had the chance to review the post (noting what TracingWoodgrain has said makes me a little less certain of this, though I still believe it)

I also think that most people, myself included, would be happy to read your comments now if you want to share them.

I'll just quickly say that my experience of this saga was more like this: 

Before BP post: NL are a sort of atypical, low-structure EA group, doing entrepreneurial and coordination-focused work that I think is probably positive impact.
After BP post: NL are actually pretty exploitative and probably net negative overall. I'll wait to hear their response, but I doubt it will change my mind very much.
After NL post: NL are probably not exploitative. They made some big mistakes (and had bad luck) with some risks they took in hiring and working unconventionally. I think they are still likely to have a positive impact in expectation. I think that they have been treated harshly.
After this post: I update towards feeling more confident that this wasn't a fair way to judge NL and that these sorts of posts/investigations shouldn't be a community norm. 

Thank you for putting so much effort into helping with this community issue. 

What do you think community members should do in situations similar to what Ben and Oliver believed themselves to be in: where a community member believes that some group is causing a lot of harm to the community, and it is important to raise awareness?

Should they do a similar investigation, but better or more fairly? Should they hire a professional? Should we elect a group (e.g., the CEA community health team or similar) to do these sorts of investigations? 

Here is what I eventually extracted and will share, just in case it's useful. 

**★★★ (RP DG) By what year will at least 15% of patents granted in the US be for designs generated primarily via AI? Reasons for inclusion: both an early sign that AI might be able to design dangerous technology and an indicator that AIs will be economically useful to deploy across diverse industries. Question resolves according to the best estimate by the [resolution council].

**★★★ (UF RP) How long will the gap be between the first creation of an AI which could automate 65% of current labour and the availability of an equivalently capable model as a free open-source program?

**★★★ (RP) Meta-capabilities question: by 2029 will there be a better way to assess the capabilities of models than testing their performance on question-and-answer benchmarks?

**★★★ (RP UF) How much money will the Chinese government cumulatively spend on training AI models between 2024 and 2040 as estimated by the [resolution council]?

**★★★ (UF, FE, RP) Consider the first AI model able to individually perform any cognitive labour that a human can. Then, what is the chance of a deliberately engineered pandemic which kills >20% of the world's population in the 50 years after the first such model is built?

**★★★ (UF, FE, RP) How does the probability of the previous question change if models are widely available to citizens and private businesses, compared to if only government and specified trusted private organizations are allowed to use them?

**★★★ (FE, RP) What is the total number of EAs in technical AI alignment? Across academia, industry, independent research organizations, ¿government?, etc. See "The academic contribution to AI safety seems large" for an estimate from 2020.

**★★★ (FE, RP) What is the total number of non-EAs in technical AI alignment? Across academia, industry, independent research organizations, ¿government?, etc.

**★★★ (RP) How likely is it that an AI could get nanomachines built just by making ordinary commercial purchases online, and obtaining the cooperation of <30 human beings without scientific skills above master's degrees in relevant subjects?

**★★★ (UF, RP) Take-off speed: after automating 15% of labour, how long will it take until 60% of labour is automated? Question note: 99%+ of labour has already been automated, since most humans don't work in agriculture any more. This question asks about automating 15% and 60% of labour of the type done in 2023; see "recurring terms".

**★★★ (FE, RP) How long does it take TSMC to manufacture 100k GPUs? Relevance: Not that high, but a neat Fermi estimate warm-up. Might just generally be good for having good models of the world, though.

**★★★ (UF, RP) What is the % chance that by 2025/2030/35/40 an AI will persuade a human to commit a crime in order to further the AI's purposes? If one wanted to make this question resolvable: Question resolves according to the [resolution council]'s probability that this has happened. This would require a platform that accepts probabilistic resolutions. See also below "When will the US SEC accuse someone of committing securities fraud substantially aided by AI systems?"

**★★★ (RP, FE) What fraction of labour will be automated between 2023 and 2028/2035/2040/2050/2100? Question operationalization: See the "recurring terms" section. For a reference on an adjacent question, see Phil Trammell's Economic growth under transformative AI.
