In a recent announcement, Manifold Markets says it will change the exchange rate for its play money (called "Mana") from 1:100 to 1:1000, meaning each unit of Mana will be worth a tenth as much in dollar terms. Importantly, one of the ways to use this Mana is to make charitable donations.
TL;DR: The call to action here is to log in to your Manifold account ...
The speaker there was me. I think there's a ~70% chance we decide to end the charity program after this round of payments, tentatively as of May 15 or the end of May.
The primary reason is that real-money cash-outs should supersede it, and running the charity program is operationally somewhat annoying. The charity program is a core focus for neither Manifold nor Manifund, so we might not want to keep it up. I'll make a broader announcement if this ends up being the case.
I want to throw in a bit of my philosophy here.
Status note: This comment is written by me and reflects my views. I ran it past the other moderators, but they might have major disagreements with it.
I agree with a lot of Jason’s view here. The EA community is indeed much bigger than the EA Forum, and the Forum would serve its role as an online locus much less well if we used moderation action to police the epistemic practices of its participants.
I don’t actually think this is that bad. I think it is a strength of the EA community that it is large enough, and has sufficiently many worldviews, that any central discussion space is going to be a bit of a mishmash of epistemologies.[1]
Some corresponding ways this viewpoint causes me to be reluctant to apply Habryka’s philosophy:[2]
Something like a judicial process is much more important to me. We try much harder than LessWrong does (on my read) to apply rules consistently. Between the Forum Norms doc and our public history of cases, we have something much closer to a legal code plus case law than LW does. Obviously we’re far from what would meet a judicial standard, but I view much of my work through that lens. It is also notable that all nontrivial moderation decisions require one or two moderators to second the proposal.
Related both to the epistemic diversity and to the above, I am much more reluctant to rely on my personal judgement about whether someone is a positive contributor to the discussion. I still have those opinions, but I am much more likely to use my power as a regular user to karma-vote on the content.
Some points of agreement:
Old users are owed explanations, new users are (mostly) not
Agreed. We are much more likely to make judgement calls in the cases of new users, and much less likely to invest time in explaining the decision. We are still much less likely to ban new users than LessWrong is. (Which, to be clear, I don’t think would have been tenable on LessWrong when they instituted their current policies, which was after the launch of GPT-4 and a giant influx of low-quality content.)
I try really hard to not build an ideological echo chamber
Most of the work I do as a moderator is reading reports and recommending no official action. I have the internal experience of mostly fighting others to keep the Forum an open platform. Obviously that is a compatible experience with overmoderating the Forum into an echo chamber, but I will at least bring this up as a strong point of philosophical agreement.
Final points:
I do think we could potentially give out more “near-ban” rate limits, such as 1 comment per 3 days. The main benefit I see is that it allows the user to write content disagreeing with their ban.
Controversial point! Maybe if everyone adopted my own epistemic practices the community would be better off. It would certainly gain in the ability to communicate smoothly with itself, and would probably spend less effort pulling in opposite directions as a result, but I think the size constraints and/or deference to authority that would be required would not be worth it.
Note that Habryka has been a huge influence on me. These disagreements are what remains after his large influence on me.
> I do think we could potentially give out more “near-ban” rate limits, such as 1 comment per 3 days. The main benefit I see is that it allows the user to write content disagreeing with their ban.
I think the banned individual should almost always get at least one final statement to disagree with the ban after its pronouncement. Even the Romulans allowed (will allow?) that. Absent unusual circumstances, I think they -- and not the mods -- should get the last word, so I would also allow a single reply if the mods responded to the final statement.
More generally, I'd be interested in ~"civility probation," under which a problematic poster could be placed for ~three months as an option they could choose as an alternative to a 2-4 week outright ban. Under civility probation, any "probation officer" (trusted non-mod users) would be empowered to remove content too close to the civility line and optionally temp-ban the user for a cooling-off period of 48 hours. The theory of impact comes from the criminology literature, which tells us that speed and certainty of sanction are more effective than severity. If the mods later determined after full deliberation that the second comment actually violated the rules in a way that crossed the action threshold, then they could activate the withheld 2-4 week ban for the first offense and/or impose a new suspension for the new one.
We are seeing more of this in the criminal system -- swift but moderate "intermediate sanctions" for things like failing a drug test, as opposed to doing little about probation violations until things reach a certain threshold and then going to the judge to revoke probation and send the offender away for at least several months. As far as due process, the theory is that the offender received their due process (consideration by a judge, right to presumption of innocence overcome only by proof beyond a reasonable doubt) in the proceedings that led to the imposition of probation in the first place.
Crosspost of my blog.
You shouldn’t eat animals in normal circumstances. That much is, in my view, quite thoroughly obvious. Animals undergo cruel, hellish conditions that we’d confidently describe as torture if they were inflicted on a human (or even a dog). No hamburger...
Someone DM'd me asking for more information. See https://www.mostly-fat.com/eat-meat-not-too-little-mostly-fat/ and https://www.youtube.com/watch?v=UOQCKEoflPc
Ben West recently mentioned that he would be excited about a common application, which got me thinking a little about it. I don't have the technical/design skills to create such a system, but I want to let my mind wander a little on the topic. This is just musing and 'thinking out loud,' so don't take any of this too seriously.
What would the benefits be for some type of common application? For the applicant: send an application to a wider variety of organizations with less effort. For the organization: get a wider variety of applicants.
Why not just have the job openings posted to LinkedIn and let candidates use the Easy Apply function? Well, that would probably result in lots of low-quality applications. Maybe include a few questions to serve as a simple filter? Perhaps a question to reveal how familiar the candidate is with the ideas and principles of EA? Lots of low-quality applications aren't really a problem if you have an easy way to filter them out. As a simplistic example, if I am hiring for a job that requires fluent Spanish, and a dropdown prompt in the job application asks candidates to rate their Spanish, it is pretty easy to filter out people who selected "I don't speak any Spanish" or "I speak a little Spanish, but not much."
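To make the dropdown-filter idea concrete, here is a minimal sketch in Python. The field names, answer options, and acceptance bar are all hypothetical illustrations, not any real application system's schema:

```python
# Hypothetical applicant records, as a common-application backend might store
# them. The "spanish" field holds the candidate's dropdown self-rating.
applications = [
    {"name": "Ana", "spanish": "Fluent"},
    {"name": "Bob", "spanish": "I don't speak any Spanish"},
    {"name": "Cam", "spanish": "I speak a little Spanish, but not much"},
]

# Self-ratings that meet the (assumed) bar for a fluent-Spanish role.
ACCEPTED_LEVELS = {"Fluent", "Native"}

# The filter itself is trivial: keep only applicants whose self-rating
# is in the accepted set.
qualified = [a for a in applications if a["spanish"] in ACCEPTED_LEVELS]
print([a["name"] for a in qualified])  # → ['Ana']
```

The point is that even a one-question dropdown turns an unstructured pile of Easy Apply submissions into something mechanically filterable, without the applicant doing any extra work beyond one click.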
But the benefit of Easy Apply (from the candidate's perspective) is the ease: John Doe doesn't have to fill in a dozen different text boxes with information that is already on his resume. And that ease can be matched by an organization's own application form. An application form can literally be as simple as prompts for name, email address, and resume; that might be the most minimalistic an application form could be while still being functional. And there are plenty of organizations that have these types of applications...
Use this thread to share information about EA-related roles you are looking to fill!
We’d like to help applicants and hiring managers coordinate, so we’ve set up this thread, and another called Who wants to be hired? (we did this last in 2022[1]).
To add your ...
CEA is hiring someone to lead the EA Global program. CEA's three flagship EAG conferences facilitate tens of thousands of highly impactful connections each year, helping people build professional relationships, apply for jobs, and make other critical career decisions.
This role comes with a large amount of autonomy, and it plays a central part in shaping a key piece of the effective altruism community’s landscape.
See more details and apply here!
With the US presidential election coming up this year, some of y’all will probably want to discuss it.[1] I think it’s a good time to restate our politics policy. tl;dr Partisan politics content is allowed, but will be restricted to the Personal Blog category. On-topic policy discussions are still eligible as frontpage material.
Or the expected UK elections.
The last ten years have witnessed rapid advances in the science of animal cognition and behavior. Striking results have hinted at surprisingly rich inner lives in a wide range of animals, driving renewed debate about animal consciousness.
To highlight these advances...
Is it random that this appeared in the New York Times yesterday, or are the two related?
How Do We Know What Animals Are Really Feeling? - The New York Times (nytimes.com)
Regardless, it is great to see more realisation and communication around this topic. Most people just do not make any mental association between "food" and "animal suffering". One day this will all appear utterly barbaric, the way slavery appears barbaric to us today even though some highly reputed figures throughout history owned slaves.
The more communication we have around animal c...
This post is easily the weirdest thing I've ever written. I also consider it the best I've ever written - I hope you give it a chance. If you're not sold by the first section, you can safely skip the rest.
Imagine an alternate version of the Effective Altruism movement,...
Great post, and an interesting counterfactual history!
Hooray for moral trade.
Evolutionary debunking arguments feel relevant re the causal history of our beliefs.
For pandemics that aren’t ‘stealth’ pandemics (particularly globally catastrophic pandemics):
Thank you for writing this article! As a complete newcomer to pandemic preparedness at large, I found this extremely useful and a great example of work that surfaces and questions often unstated assumptions.
Although I don't have enough expertise to provide much meaningful feedback, I did want to bring up some thoughts I had regarding your arguments in Reason 2. Your 44 hospitalizations threshold in the numerical examples strikes me as reasonable, but it does also seem to me that the metagenomic sequencing of COVID-19 was related ― if not a critical precond...