New & upvoted


Posts tagged community

Quick takes

harfe · 12h
Consider donating all or most of your Mana on Manifold to charity before May 1. Manifold is making multiple changes to how the platform works; you can read their announcement here. The main reason to donate now is that Mana will be devalued from the current rate of 1 USD : 100 Mana to 1 USD : 1000 Mana on May 1. Thankfully, the 10k USD/month charity cap will not be in place until then. This part might also be relevant for people with large positions they want to sell now: > One week may not be enough time for users with larger portfolios to liquidate and donate. We want to work individually with anyone who feels like they are stuck in this situation and honor their expected returns and agree on an amount they can donate at the original 100:1 rate past the one week deadline once the relevant markets have resolved.
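To make the devaluation concrete, here is the arithmetic as a short Python sketch. The exchange rates are from the announcement; the portfolio size is a hypothetical example:

```python
# Rates from Manifold's announcement:
OLD_RATE = 100    # Mana per USD before May 1
NEW_RATE = 1000   # Mana per USD after May 1

mana = 50_000  # hypothetical portfolio

value_now = mana / OLD_RATE    # USD of charity value if donated before May 1
value_later = mana / NEW_RATE  # USD of charity value after the devaluation

print(value_now, value_later)  # 500.0 50.0
```

The same portfolio buys ten times as much charity value before the deadline as after, which is the whole case for acting now.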
Animal Justice Appreciation Note

Animal Justice et al. v. A.G. of Ontario (2024) was recently decided and struck down large portions of Ontario's ag-gag law. A blog post is here. The suit was partially funded by ACE, which presumably means that many of the people reading this deserve partial credit for donating to support it. Thanks to Animal Justice (Andrea Gonsalves, Fredrick Schumann, Kaitlyn Mitchell, Scott Tinney), co-applicants Jessica Scott-Reid and Louise Jorgensen, and everyone who supported this work!
GiveWell and Open Philanthropy just made a $1.5M grant to Malengo! Congratulations to @Johannes Haushofer and the whole team; this seems like such a promising intervention from a wide variety of views.
Quote from VC Josh Wolfe: > Biology. We will see an AWS moment where instead of you having to be a biotech firm that opens your own wet lab or moves into Alexandria Real Estate, which, you know, specializes in hosting biotech companies in all these different regions proximate to academic research centers, you will be able to just take your experiment and upload it to the cloud where there are cloud-based robotic labs. We funded some of these. There's one company called Stratios. > > There's a ton that are gonna come in waves, and this is exciting because you can be a scientist on the beach in the Bahamas, pull up your iPad, run an experiment. The robots are performing 90% of the activity of pouring something from a beaker into another, running a centrifuge, and then the data comes off of that. > > And this is the really cool part. Then the robot and the machines will actually say to you, “Hey, do you want to run this experiment but change these 4 parameters or these variables?” And you just click a button “yes,” as though it's reverse prompting you, and then you run another experiment. So the implication here is the boost in productivity for science, for generation of truth, of new information, of new knowledge. That to me is the most exciting thing. And the companies that capture that, forget about the societal dividend, I think are gonna make a lot of money. https://overcast.fm/+5AWO95pnw/46:15
Otto · 7h
High-impact startup idea: build a decent carbon emissions model for flights. Current models simply attribute each flight's own emissions to its passengers, which makes direct flights look low-emission. But in reality, some of these flights wouldn't even exist if people could be spread more efficiently over existing indirect flights, which is also why indirect routes are cheaper. Emission models should be relative to the counterfactual. The startup can be for-profit. If you're lucky, better models already exist in the scientific literature. Ideal for the AI-for-good crowd. My guess is that a few man-years of work could have a big carbon emissions impact here.
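A minimal sketch of the counterfactual idea, assuming hypothetical numbers throughout (a real model would need route-level load factors, schedule elasticity, and fuel-burn data):

```python
def average_model(total_kg: float, passengers: int) -> float:
    """Standard calculators: split the flight's total emissions evenly."""
    return total_kg / passengers

def counterfactual_model(total_kg: float, p_flies_anyway: float) -> float:
    """Expected emissions caused by one extra ticket: the probability that
    the flight only operates because of marginal demand like yours, times
    the whole flight's emissions (extra-weight fuel burn ignored here)."""
    return (1.0 - p_flies_anyway) * total_kg

# Hypothetical comparison: a thin direct route that exists only because of
# marginal demand, vs. two busy indirect legs that would fly regardless.
direct = counterfactual_model(total_kg=120_000, p_flies_anyway=0.95)
indirect = (counterfactual_model(total_kg=60_000, p_flies_anyway=0.999)
            + counterfactual_model(total_kg=70_000, p_flies_anyway=0.999))
```

Under the average model the direct flight looks better (one takeoff instead of two), but the counterfactual model can reverse the ranking, which is the quick take's point.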

Popular comments

Recent discussion

Regulators should review the 2014 DeepMind acquisition. When Google bought DeepMind in 2014, no regulator (not the FTC, the EC's DG COMP, nor the CMA) scrutinized the impact. Why? AI startups have high value but low revenues, and so they avoid regulation...


Huh, fwiw I thought this proposal would increase AI risk, since it would increase competitive dynamics (and generally make coordinating on slowing down harder). I at least didn't read this post as x-risk motivated (though I admit I was confused about what its primary motivation was).

tlevin commented on AI Regulation is Unsafe 22m ago

Concerns over AI safety and calls for government control over the technology are highly correlated, but they should not be.

There are two major forms of AI risk: misuse and misalignment. Misuse risks come from humans using AIs as tools in dangerous ways. Misalignment risks...


This post correctly identifies some of the major obstacles to governing AI, but ultimately makes an argument for "by default, governments will not regulate AI well," rather than the claim implied by its title, which is that advocating for (specific) AI regulations is net negative -- a type of fallacious conflation I recognize all too well from my own libertarian past.

Mjreard · 2h
I think you've failed to think on the margin here. I agree that the broad classes of regulation you point to have *netted out* badly, but this says little about what the most thoughtful and determined actors in these spaces have achieved. Classically, Germany's early-2000s investments in solar R&D had enormous positive externalities on climate, and the people who pushed for those didn't have to support restricting nuclear power as well. The option space for them was not "the net-bad energy policy that emerged" vs. "libertarian paradise"; it was "the existing/inevitable bad policies with a bet on solar R&D" vs. "the existing/inevitable bad policies with no bet on solar R&D."

I believe most EAs treat their engagement with AI policy as researching and advocating for narrow policies tailored to mitigate catastrophic risk. In this sense, they're acting as an organized/expert interest group motivated by a good (even popular, per some polls) view of the public interest. They are competing with, rather than complementing, the more selfishly motivated interest groups seeking the kind of influence the oil & gas industry had in the climate context. On your model of regulation, this seems like a wise strategy, perhaps the only viable one. Again, the alternative is not no regulation, but regulation that leaves out the best, most prosocial ideas.

To the extent you're trying to warn EAs not to indiscriminately cheer any AI policy proposal on the assumption it will help with x-risk, I agree with you. I don't, however, agree that that's reflective of how they're treating the issue.

Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk).

Introduction

I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a ...


Strongly agree. I think there's also a motivation gap in knowledge acquisition. If you don't think there's much promise in an idea or a movement, it usually doesn't make sense to spend years learning about it. This leads to large numbers of very good academics writing poorly-informed criticisms. But this shouldn't be taken to indicate that there's nothing behind the criticisms. It's just that it doesn't pay off career-wise for these people to spend years learning enough to press the criticisms better.

Ulrik Horn · 8h
I must admit I did not have time to re-read your post carefully, but I thought it worth pointing out that after reading it I am left a bit confused by the multiple "culture wars" references. Could you please expand on this a bit? My confusion is that "culture wars" seems to be an attention-grabbing phrase you used at the beginning of your post, but I feel it was never fully addressed. I would be keen to understand whether you only meant it as a rhetorical device to make the reading more captivating, or whether you have opinions on the frequent "white boys" criticisms of EA. It is fine if it is the former; I just felt a bit left hanging after reading a post that I think otherwise did some good analysis of the financial motives for criticism, comparing AI to e.g. climate change.

I think others might be interested in this topic as well, especially as JEID concerns were raised by many EAs, and especially women and non-binary EAs. I also think some EAs might believe the "white boys"/culture-wars criticism of EA is actually criticism we should take seriously, although the tone in which it is made is often not optimal for engaging in fruitful dialogue (though I can understand that people with bad experiences may find it hard to suppress their anger, and perhaps sometimes anger is appropriate).
Larks · 12h
I think the idea of a motivational shadow is a good one, and it can be useful to think about these sorts of filters on what kinds of evidence/arguments/research people are willing to share, especially if people are afraid of social sanction. However, I am less convinced by this concrete application. You present a hierarchy of activities in order of the effort required to unlock them, and suggest that something like 'being paid full time to advocate for this' pushes people up multiple levels. I don't believe that the people currently doing high-quality x-risk advocacy would counterfactually be writing nasty newspaper hit pieces (these just seem like totally different activities), or that Timnit would write more rigorously if people gave her more money. My impression is that high-quality work on both sides is done by people with a strong inherent dedication to truth-seeking and intellectual inquiry, and there is no need to first pass through a valley of vitriol before you achieve a motivational level-up to an ascended state of evidence. Indeed, historically a lot of x-risk advocacy work was done by people for whom such an activity had negative financial and social payoff.

I also think you miss a major, often dominant motivation: people love to criticize, especially to criticize things that seem to threaten their moral superiority.

Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.


Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse

...
Rían O.M · 4h
This makes me sad, as I enjoy reading your comments and find them insightful. That said, I understand and support your reasoning. I feel as though some amount of "mistake mindset" has disappeared in the two years I've been reading the forum.
Sean_o_h · 3h
Thanks Rían, I appreciate it. And to be fair, this is from my perspective as much a me thing as it is an Oli thing. Like, I don't think the global optimal solution is an EA forum that's a cuddly little safe space for me. But we all have to make the tradeoffs that make most sense for us individually, and this kind of thing is costly for me.

> I don't think the global optimal solution is an EA forum that's a cuddly little safe space for me.

I agree with this, but I also think that the forum "not being cuddly for Sean" and "not driving contributors away" aren't mutually exclusive. Maybe I am not seeing all the tradeoffs, though.

Luke Moore commented on Priors and Prejudice 2h ago

This post is easily the weirdest thing I've ever written. I also consider it the best I've ever written - I hope you give it a chance. If you're not sold by the first section, you can safely skip the rest.

I

Imagine an alternate version of the Effective Altruism movement,...


Loved this post! 

Crossposted from my blog.

You shouldn’t eat animals in normal circumstances. That much is, in my view, quite thoroughly obvious. Animals undergo cruel, hellish conditions that we’d confidently describe as torture if they were inflicted on a human (or even a dog). No hamburger...

jackva · 3h
@Vasco Grillo would be well-placed to do the math here, but I have a strong intuition that, under most views that give some weight to animal welfare, the marginal climate damage from additional beef consumption will be outweighed by the reduction in animal suffering by a large margin.

Thanks for tagging me, Johannes! I have not read the post, but in my mind one should overwhelmingly focus on minimising animal suffering in the context of food consumption. I estimate the harm caused by the annual food consumption of a random person is 159 times that caused by their annual GHG emissions.

Fig. 4 of Kuruc 2023 is relevant to the question. A welfare weight of 0.05 means that one values 0.05 units of welfare in humans as much as 1 unit of welfare in animals, and it would still require a social cost of carbon of over 7 k$/t for prioritising beef...

Vidur Kapur · 4h
Even within the dairy and red meat categories, there are ways to reduce your greenhouse gas emissions. Milk is better than cheese, and lamb is better than beef. Also, mussels and oysters do well on climate and (probably) welfare grounds.

Many thanks to Andrew Snyder-Beattie and Joshua Monrad for their feedback during this project. This project was completed as part of contract work with Open Philanthropy, but the views and work expressed here do not represent those of Open Philanthropy. All thoughts are...


Thank you so much for flagging this! Very much agreed that this is an important correction; the update that the US doesn't dominate biosecurity spending in this way is indeed a welcome one. Will certainly amend.

Heramb Podar posted a Quick Take 4h ago

I see way too many people confusing movement with progress in the policy space. 

A lot of drafts become bills while still leaving significant room for regulatory capture in the specifics, which are decided later on. Take risk levels, for instance: they are subjective, leaving lots of legal leeway for companies to exploit.
