Cross-posted on LessWrong.
This article is the fourth in a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific...
Tl;dr: One of the biggest problems facing any kind of collective action today is the fracturing of the information landscape. I propose a collective, issue-agnostic observatory with a mix of algorithmic and human moderation for the purposes of aggregating information, separate...
I think overall this post plays into a few common negative stereotypes of EA: Enthusiastic well-meaning people (sometimes with a grandiose LoTR reference username) proposing grand plans to solve an enormously complex problem without really acknowledging or understanding the nuance.
Suggesting that we simply develop an algorithm to identify "high quality content", and that a combination of crowds and experts will reliably distinguish factual from non-factual information, seems to completely miss the point of the problem: both of these things are extremely difficult, and that is why we have a disinformation crisis.
Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. My experience at the last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly...
Thanks for sharing your thoughts!
It's a pity you don't feel comfortable inviting people to the conference - that's the last thing we want to hear!
So far our visual style for EAGxUtrecht hasn't been austere[1] so we'll think more about this. Normally, to avoid looking too fancy, I ask myself: would this be something the NHS would spend money on?
But I'm not sure how to balance the appearance of prudence with making things look attractive. Things that make me lean towards making things look attractive include:
For what it's worth, the total budget this year is about half of what was spent in 2022, and we have the capacity for almost the same number of attendees (700 instead of 750).
In case it's useful, here are some links that show the benefits of EAGx events. I admit they don't provide a slam-dunk case for cost-effectiveness, but they might be useful when talking to people about why we organise them:
Thanks again for sharing your thoughts! I hope your pseudonymous account is helping you use the forum, although I definitely don't think you need to worry about looking dumb :)
We're going for pink and fun instead. We're only going to spend a few hundred euros on graphic design.
Hi James, I feel quite guilty for prompting you to write such a long, detailed, and persuasive response! Striving to find a balance between prudence and appeal seems to be the ideal goal. Using the NHS's spending habits as a heuristic to avoid extravagance seems smart (although I would not say that this should apply to other events!). Most importantly, I am relieved to learn that this year's budget per person will likely be significantly lower.
I totally agree that these events are invaluable. EAGs and EAGxs have been crucial in expanding my network and enhancing my impact and agency. However, as mentioned, I am concerned about perceptions. Having heard this I feel reassured, and I will see who I can invite! Thank you!
If you've read Leif's WIRED article or Poverty is No Pond & have questions for him, I'd love to share them with him & in turn share his answers here.
Thank you, M, for sharing this with me & encouraging me to connect.
Your article concludes with an anecdote about your surfer friend Aaron who befriended a village and helped upgrade their water supply. Is this meant to be an alternative model of philanthropy? Would you really encourage people to do this on a large scale? How would you avoid this turning into voluntourism, where poor people in the third world have to pretend to befriend wannabe white saviours in exchange for money?
I’m Emma from the Communications team at the Centre for Effective Altruism (CEA). I want to flag a few media items related to EA that have come out recently or will be coming out soon, given they’ll touch on topics—like FTX—that I expect will be of interest to Forum readers...
I think you're missing some important ground in between "reflection process" and "PR exercise".
I can't speak for EV or other people then on the boards, but from my perspective the purpose of the legal investigation was primarily about helping to facilitate justified trust. Sam had been seen by many as a trusted EA leader, and had previously been on the board of CEA US. It wouldn't have been unreasonable if people in EA (or even within EV) started worrying that leadership were covering things up. Having an external investigation was, although not a cheap...
This post was partly inspired by, and shares some themes with, this Joe Carlsmith post. My post (unsurprisingly) expresses fewer concepts with less clarity and resonance, but is hopefully of some value regardless.
Content warning: description of animal death.
I live in a ...
My prior is that, unless you check extremely frequently, this involves a lot of suffering. But I'm not sure about the other options.
___________________________________________________
tldr;
___________________________________________________
Effective Altruism (EA) has embraced longtermism as one of its guiding principles. In What We Owe the Future, MacAskill lays out the foundational principles of longtermism, urging us to expand our ethical considerations to include the well-being and prospects of future generations.
Say, hypothetically, you have a coworker you work well with, but also get into heated political arguments with. This only happens maybe once a quarter, so the arguments rarely even register as hiccups in your working relationship.
Say, now, that during one of these arguments you recognize a cognitive bias in the coworker's argumentation. The likelihood is high that they are falling into the same bias in other contexts (you might argue they could compartmentalize, displaying the bias only when heated; so, for the sake of argument, say you can now recall an obvious example from daily work where you saw them lean on the bias).
Here's my dilemma: since you now have evidence they are employing a cognitive bias, do you have a moral (or even team-based or business) obligation to point out the bias to them? If yes: ...
TLDR: If you're an EA-minded animal funder donating $200K/year or more, we'd love to connect with you about several exciting initiatives that AIM is launching over the next several months.
AIM (formerly Charity Entrepreneurship) has a history of incubating and supporting...
Hi, I am the Director of Research at Charity Entrepreneurship (CE, now AIM). I wanted to quickly respond to this point.
– –
Quality of our reports
I would like to push back a bit on Joey's response here. I agree that our research is quicker and scrappier, and goes into less depth, than that of other orgs, but I am not convinced that our reports have more errors or worse reasoning than the reports of other organisations (thinking of non-peer-reviewed global health and animal welfare organisations like GiveWell, OpenPhil, Animal Charity Evaluators, Rethink Priorities, Founders Pl...
Executive summary: Current and proposed regulations require AI-generated content to be labeled and watermarked, but these lightweight methods have limitations in preventing misuse and ensuring accountability.
Key points:
- Labeling and watermarking AI-generated content informs users and enables tracing the source AI model.
- The US, China, and EU have proposed or enacted rules requiring conspicuous labeling and robust watermarking of AI content.
- Labeling and watermarking are lightweight methods with precedent, but compliance and effectiveness can vary.
- Labels and w
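To make the "lightweight" nature of labeling and watermarking concrete, here is a minimal sketch (my own illustration, not from the summarized post) of a toy scheme for AI-generated text: a visible label informs users, while a hidden payload of zero-width characters encodes a hypothetical model ID for tracing. The function names and the `model-x1` identifier are invented for illustration.

```python
# Toy "label + watermark" sketch for AI-generated text.
# Zero-width space / zero-width non-joiner encode bits 0 and 1.
ZW0, ZW1 = "\u200b", "\u200c"

def label_and_watermark(text: str, model_id: str) -> str:
    """Prepend a visible label and append an invisible model-ID payload."""
    visible = f"[AI-generated] {text}"
    bits = "".join(f"{byte:08b}" for byte in model_id.encode("utf-8"))
    hidden = "".join(ZW0 if b == "0" else ZW1 for b in bits)
    return visible + hidden

def extract_model_id(text: str) -> str:
    """Recover the hidden model ID from the zero-width characters."""
    bits = "".join("0" if ch == ZW0 else "1" for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

tagged = label_and_watermark("The sky is blue.", "model-x1")
assert extract_model_id(tagged) == "model-x1"
```

Note that stripping the zero-width characters destroys the watermark entirely, which illustrates the summary's point: these lightweight methods have real limitations in preventing misuse and ensuring accountability.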