New & upvoted


Posts tagged community

Quick takes

Animal Justice Appreciation Note: Animal Justice et al. v. A.G. of Ontario, 2024 was recently decided and struck down large portions of Ontario's ag-gag law. A blog post is here. The suit was partially funded by ACE, which presumably means that many of the people reading this deserve partial credit for donating to support it. Thanks to Animal Justice (Andrea Gonsalves, Fredrick Schumann, Kaitlyn Mitchell, Scott Tinney), co-applicants Jessica Scott-Reid and Louise Jorgensen, and everyone who supported this work!
GiveWell and Open Philanthropy just made a $1.5M grant to Malengo! Congratulations to @Johannes Haushofer and the whole team. This seems like such a promising intervention from a wide variety of views.

Recent discussion

This is an anonymous account (Ávila is not a real person). I am posting on this account to avoid potentially negative effects on my future job prospects.

SUMMARY:

  • I've been rejected from 18 jobs or internships, 12 of which are "in EA."
  • I briefly spell out my background information
...

Paul Graham had a nice take on this:

The more arbitrary college admissions criteria become, the more the students at elite universities will simply be those who were most determined to get in.

I think "actually really want to apply" is not enough of a correlation to base decisions on. The fact is that even qualified+motivated applicants would need to apply to a dozen+ places, and often EA application questions require a lot of thought anyway.

To give an example, lots of EAs are from top unis, and I'm pretty sure the meta-strategy for applying to selective uni...

ZacharyRudolph · 5h
Hey, Zack from XLab here. I'd be happy to provide a couple of sentences of feedback on your application if you send me an email. The most common reasons for rejection before an interview were things like no indication of US citizenship or a student visa, ChatGPT-seeming responses, responses to the exercise that didn't clearly and compellingly indicate how it was relevant for global catastrophic risk mitigation, or lack of clarity on how mission-aligned the applicant was. We appreciate the feedback, though.
GraceAdams · 6h
Thanks for the kind feedback about our hiring process! I'll encourage the team to write up how we approached hiring for some roles where we think we ran a good process. [Edit: Actually, Michael Townsend wrote this in the past about our hiring process, which is worth reading.]

Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.


Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse

...
Deborah W.A. Foulkes · 4h
Not to be confused with The Macrostrategy Partnership: https://www.macrostrategy.co.uk/
Linch · 9h
Sure, social aggression is a rather subjective call. I do think decoupling/locality norms are relevant here. "Garden variety incompetence" may not have been the best choice of words on Sean's part,[1] but it did seem like a) a locally scoped comment specifically answering a question that people on the forum understandably had, b) much of it empirically checkable (other people formerly at FHI, particularly ops staff, could present their perspectives re: relationship management), and c) Bostrom's capacity as director is very much relevant to the discussion of the organization's operations or why the organization shut down.

Your comment first presents what I consider to be a core observation that is true and important, namely that FHI did a lot of good work, and that this type of magic might not be easy to replicate if you do everything with apparent garden-variety competence. But it also brought in a bunch of what I consider to be extraneous details on Sean's competency, judgment, and integrity. The points you raise are also more murkily defined and harder to check. So overall I think of your comment as more escalatory.

1. ^ Or maybe it was, under the circumstances. I don't know the details here; maybe the phrase was carefully chosen.

It wasn't carefully chosen. It was the term used by the commenter I was replying to. I was a little frustrated, because it was another example of a truth-seeking enquiry by Milena getting pushed down the track of only-considering-answers-in-which-all-the-agency/wrongness-is-on-the-university-side (including some pretty unpleasant options relating to people I'd worked with: 'parasitic egregore'/'siphon money').

>Did Oxford think it was a reputation risk? Were the other philosophers jealous of the attention and funding FHI got? Was a beaurocratic parasitic e... (read more)

Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk).

Introduction

I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a ...


I must admit I did not have time to re-read your post carefully, but I thought it worth pointing out that after reading it I am left a bit confused by the multiple "culture wars" references. Could you please expand on this a bit?

I guess my confusion is that "culture wars" seems to be an "attention-grabbing" phrase you used at the beginning of your post, but after reading the full post I feel it was never fully addressed. I would be keen to understand if you only meant it as a rhetorical device to make the reading more captivating, or if you have o...

Larks · 5h
I think the idea of a motivational shadow is a good one, and it can be useful to think about these sorts of filters on what sorts of evidence/argument/research people are willing to share, especially if people are afraid of social sanction. However, I am less convinced by this concrete application. You present a hierarchy of activities in order of the effort required to unlock them, and suggest that something like 'being paid full time to advocate for this' pushes people up multiple levels. I don't believe that the people currently doing high-quality Xrisk advocacy would counterfactually be writing nasty newspaper hit pieces (these just seem like totally different activities), or that Timnit would write more rigorously if people gave her more money. My impression is that high-quality work on both sides is done by people with a strong inherent dedication to truth-seeking and intellectual inquiry, and there is no need to first pass through a valley of vitriol before you achieve a motivational level-up to an ascended state of evidence. Indeed, historically a lot of Xrisk advocacy work was done by people for whom such an activity had negative financial and social payoff.

I also think you miss a major, often dominant motivation: people love to criticize, especially to criticize things that seem to threaten their moral superiority.
Linch · 7h
I genuinely don't know if this is an interesting/relevant question that's unique to EA. To me, the obvious follow-up question is whether EA is unique or special in attracting this (average) level of vitriol in critiques of us. Is the answer to "why is so much EA criticism hostile and lazy?" the same as the answer to "why is so much criticism, period, hostile and lazy?" Or are there specific factors of EA that are relevant here?

I haven't been sufficiently embedded in other intellectual or social movements. I was a bit involved in global development before and don't recall much serious vitriol; maybe Easterly or Moyo are the closest examples. I guess maybe MAGA implicitly doesn't like global dev? But on the other hand, I've heard people involved in, say, animal rights say that the "critiques" of EA are all really light and milquetoast by comparison. I'd really appreciate answers from people who have been more "around the block" than I have.
harfe posted a Quick Take 5h ago

Consider donating all or most of your Mana on Manifold to charity before May 1.

Manifold is making multiple changes to the way Manifold works. You can read their announcement here. The main reason for donating now is that Mana will be devalued from the current 1 USD:100 Mana to 1 USD:1000 Mana on May 1. Thankfully, the 10k USD/month charity cap will not be in place until then.
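
To make the devaluation concrete, here is a minimal arithmetic sketch; the exchange rates are from the announcement above, while the function name and the 10,000-Mana example balance are my own illustrative choices:

```python
# Donation value of a Mana balance at Manifold's stated exchange rates:
# 100 Mana per USD before May 1, 1000 Mana per USD after.

def donation_value_usd(mana: int, mana_per_usd: int) -> float:
    """Convert a Mana balance to its USD donation value."""
    return mana / mana_per_usd

balance = 10_000  # example Mana balance

before = donation_value_usd(balance, mana_per_usd=100)   # donate before May 1
after = donation_value_usd(balance, mana_per_usd=1000)   # donate after May 1

print(f"{balance} Mana -> ${before:.2f} before May 1, ${after:.2f} after")
# 10000 Mana -> $100.00 before May 1, $10.00 after
```

In other words, the same balance donated after May 1 is worth one tenth as much to charity, which is the whole case for donating before the deadline.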

yanni kyriacos posted a Quick Take 6h ago

I think it would be good if lots of EAs answered this Twitter poll, so we could get a better sense of the community's views on the topic of Enlightenment/Awakening: https://twitter.com/SpencrGreenberg/status/1782525718586413085?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet


Crosspost of my blog.  

You shouldn’t eat animals in normal circumstances. That much is, in my view, quite thoroughly obvious. Animals undergo cruel, hellish conditions that we’d confidently describe as torture if they were inflicted on a human (or even a dog). No hamburger is worth that kind of cruelty. However, not all animals are the same. Contra Napoleon in Animal Farm, all animals are not equal.

Cows are big. The average person eats 2400 chickens but only 11 cows in their life. That’s mostly because chickens are so many times smaller than cows, so you can only get so many chicken sandwiches out of a single chicken. But how much worse is chicken than cow?

Brian Tomasik devised a helpful suffering-calculator chart. It has various columns—one for how sentient you think the animals are compared to humans, one for how long the animal lives, etc. You can change the numbers around if you...
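
For a sense of the kind of arithmetic such a chart encodes, here is a minimal sketch. The 11-cow and 2,400-chicken figures are from the post above; the sentience weights, lifespans, and suffering intensities are placeholder assumptions for you to vary, not Tomasik's numbers:

```python
# Rough expected-suffering comparison in the spirit of Tomasik's calculator.
# All weights below are placeholder assumptions; substitute your own.

ANIMALS = {
    # sentience (vs human), days lived, suffering intensity per day (0-1),
    # and animals eaten per human lifetime
    "cow":     {"sentience": 0.3, "days": 550, "intensity": 0.3, "eaten": 11},
    "chicken": {"sentience": 0.2, "days": 42,  "intensity": 0.8, "eaten": 2400},
}

for name, a in ANIMALS.items():
    # expected suffering caused over one person's lifetime of eating this animal
    total = a["sentience"] * a["days"] * a["intensity"] * a["eaten"]
    print(f"{name}: {total:,.0f} suffering-units per lifetime")
```

Under these particular placeholders, chickens dominate by more than an order of magnitude, mostly because so many more of them are eaten; the point of the chart is that you can see how sensitive that conclusion is to each input.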


How much weight should we give the long-term future, given that nobody may be around to experience it? Both economists and philosophers see extinction risk as a rationale for discounting future costs and benefits. David Thorstad has recently claimed it poses a major challenge...


I've raised related points here, and also here with follow-up, about how exponential decay with a fixed decay rate is not a good model for estimating long-term survival probability.
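
To illustrate the modeling point with assumptions of my own (not the exact models in the linked posts): with a constant per-period extinction hazard, survival probability decays exponentially toward zero, but if the hazard rate itself falls fast enough over time, long-run survival probability stays bounded above zero:

```python
def survival_constant_hazard(periods: int, hazard: float = 0.01) -> float:
    """Survival probability with a fixed 1%-per-period extinction risk."""
    return (1 - hazard) ** periods

def survival_declining_hazard(periods: int, initial: float = 0.01) -> float:
    """Survival probability when the hazard falls off as 1/t^2 (an
    illustrative choice): the cumulative hazard converges, so survival
    never approaches zero."""
    p = 1.0
    for t in range(1, periods + 1):
        p *= 1 - initial / t**2
    return p

for n in (10, 100, 1000):
    print(n, round(survival_constant_hazard(n), 4),
          round(survival_declining_hazard(n), 4))
# constant hazard -> ~0.0000 by n=1000; declining hazard levels off near 0.98
```

The fixed-rate model therefore makes the far future look worthless almost by construction, while a declining-hazard model can leave substantial long-run value on the table.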

Cullen posted a Quick Take 9h ago

Quote from VC Josh Wolfe:

Biology. We will see an AWS moment where, instead of you having to be a biotech firm that opens your own wet lab or moves into Alexandria Real Estate (which, you know, specializes in hosting biotech companies in all these different regions proximate to academic research centers), you will be able to just take your experiment and upload it to the cloud, where there are cloud-based robotic labs. We funded some of these. There's one company called Stratios.

There's a ton that are gonna come in this wave, and this is exciting because you can be a scientist on the beach in the Bahamas, pull up your iPad, and run an experiment. The robots are performing 90% of the activity of pouring something from one beaker into another, running a centrifuge; and then there's the data that comes off of that.

And this is the really cool part: the robots and the machines will actually say to you, "Hey, do you want to run this experiment but change these 4 parameters or these variables?" And you just click a button "yes", as though it's reverse prompting you, and then you run another experiment. So the implication here is the boost in productivity for science, for generation of truth, of new information, of new knowledge. That to me is the most exciting thing. And the companies that capture that, forget about the societal dividend, I think are gonna make a lot of money.

https://overcast.fm/+5AWO95pnw/46:15
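
Wolfe is describing a submit-run-suggest loop. As a toy sketch only (every class, method, and number below is a hypothetical stand-in; no real cloud-lab API is implied):

```python
# Toy sketch of the "reverse prompting" loop Wolfe describes: submit an
# experiment, get results, and let the platform propose follow-up parameter
# sweeps. CloudLab and its methods are hypothetical stand-ins.

from dataclasses import dataclass, field

@dataclass
class Experiment:
    protocol: str
    params: dict

@dataclass
class CloudLab:
    """Hypothetical cloud robotic-lab client."""
    completed: list = field(default_factory=list)

    def run(self, exp: Experiment) -> dict:
        # in reality: robots pipette, centrifuge, and stream data back
        self.completed.append(exp)
        return {"params": exp.params, "signal": sum(exp.params.values())}

    def suggest_followup(self, result: dict) -> Experiment:
        # the "reverse prompt": same protocol, varied parameters
        tweaked = {k: v * 1.1 for k, v in result["params"].items()}
        return Experiment(protocol="assay-v2", params=tweaked)

lab = CloudLab()
exp = Experiment(protocol="assay-v2", params={"temp_c": 37.0, "ph": 7.4})
for _ in range(3):  # the scientist clicks "yes" three times
    result = lab.run(exp)
    exp = lab.suggest_followup(result)
print(f"ran {len(lab.completed)} experiments")
```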
