New & upvoted

Quick takes

Marcus Daniell appreciation note: @Marcus Daniell, cofounder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he gets; he has raised $28,000 through this so far. It's cool to see this, and I'm wishing him luck for his final year of professional play!
harfe (3d):
FHI shut down yesterday: https://www.futureofhumanityinstitute.org/
An alternate stance on moderation (from @Habryka). This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here in that it (I guess) responds to individual posts less often, but more moderated in that (I guess) it rate-limits people more without giving a reason. I found it thought-provoking and I'd recommend reading it.

> Thanks for making this post!
>
> One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to participate in discussion on their own posts (so seeing a harsh rate-limit of something like "1 comment per 3 days" is not equivalent to a general ban from LessWrong, but should be more interpreted as "please comment primarily on your own posts", though of course it shares many important properties of a ban).

This is a pretty opposite approach to the EA Forum, which favours bans.

> Things that seem most important to bring up in terms of moderation philosophy:
>
> Moderation on LessWrong does not depend on effort
>
> "Another thing I've noticed is that almost all the users are trying. They are trying to use rationality, trying to understand what's been written here, trying to apply Bayes' rule or understand AI. Even some of the users with negative karma are trying, just having more difficulty."
>
> Just because someone is genuinely trying to contribute to LessWrong, does not mean LessWrong is a good place for them. LessWrong has a particular culture, with particular standards and particular interests, and I think many people, even if they are genuinely trying, don't fit well within that culture and those standards.
>
> In making rate-limiting decisions like this I don't pay much attention to whether the user in question is "genuinely trying" to contribute to LW, I am mostly just evaluating the effects I see their actions having on the quality of the discussions happening on the site, and the quality of the ideas they are contributing.
>
> Motivation and goals are of course a relevant component to model, but that mostly pushes in the opposite direction, in that if I have someone who seems to be making great contributions, and I learn they aren't even trying, then that makes me more excited, since there is upside if they do become more motivated in the future.

I sense this is quite different to the EA Forum too. I can't imagine a mod saying "I don't pay much attention to whether the user in question is 'genuinely trying'". I find this honesty pretty stark. Feels like a thing moderators aren't allowed to say: "We don't like the quality of your comments and we don't think you can improve."

> Signal to Noise ratio is important
>
> Thomas and Elizabeth pointed this out already, but just because someone's comments don't seem actively bad, doesn't mean I don't want to limit their ability to contribute. We do a lot of things on LW to improve the signal to noise ratio of content on the site, and one of those things is to reduce the amount of noise, even if the mean of what we remove looks not actively harmful.
>
> We of course also do other things than to remove some of the lower signal content to improve the signal to noise ratio. Voting does a lot, how we sort the frontpage does a lot, subscriptions and notification systems do a lot. But rate-limiting is also a tool I use for the same purpose.
>
> Old users are owed explanations, new users are (mostly) not
>
> I think if you've been around for a while on LessWrong, and I decide to rate-limit you, then I think it makes sense for me to make some time to argue with you about that, and give you the opportunity to convince me that I am wrong. But if you are new, and haven't invested a lot in the site, then I think I owe you relatively little.
>
> I think in doing the above rate-limits, we did not do enough to give established users the affordance to push back and argue with us about them. I do think most of these users are relatively recent or are users we've been very straightforward with since shortly after they started commenting that we don't think they are breaking even on their contributions to the site (like the OP Gerald Monroe, with whom we had 3 separate conversations over the past few months), and for those I don't think we owe them much of an explanation. LessWrong is a walled garden.
>
> You do not by default have the right to be here, and I don't want to, and cannot, accept the burden of explaining to everyone who wants to be here but who I don't want here, why I am making my decisions. As such a moderation principle that we've been aspiring to for quite a while is to let new users know as early as possible if we think them being on the site is unlikely to work out, so that if you have been around for a while you can feel stable, and also so that you don't invest in something that will end up being taken away from you.
>
> Feedback helps a bit, especially if you are young, but usually doesn't
>
> Maybe there are other people who are much better at giving feedback and helping people grow as commenters, but my personal experience is that giving users feedback, especially the second or third time, rarely tends to substantially improve things.
>
> I think this sucks. I would much rather be in a world where the usual reasons why I think someone isn't positively contributing to LessWrong were of the type that a short conversation could clear up and fix, but it alas does not appear so, and after having spent many hundreds of hours over the years giving people individualized feedback, I don't really think "give people specific and detailed feedback" is a viable moderation strategy, at least more than once or twice per user. I recognize that this can feel unfair on the receiving end, and I also feel sad about it.
>
> I do think the one exception here is if people are young or are non-native English speakers. Do let me know if you are in your teens or you are a non-native English speaker who is still learning the language. People do really get a lot better at communication between the ages of 14-22 and people's English does get substantially better over time, and this helps with all kinds of communication issues.

Again, this is very blunt, but I'm not sure it's wrong.

> We consider legibility, but it's only a relatively small input into our moderation decisions
>
> It is valuable and a precious public good to make it easy to know which actions you take will cause you to end up being removed from a space. However, that legibility also comes at great cost, especially in social contexts. Every clear and bright-line rule you outline will have people butting right up against it, and de facto, in my experience, moderation of social spaces like LessWrong is not the kind of thing you can do while being legible in the way that for example modern courts aim to be legible.
>
> As such, we don't have laws. If anything we have something like case-law which gets established as individual moderation disputes arise, which we then use as guidelines for future decisions, but also a huge fraction of our moderation decisions are downstream of complicated models we formed about what kind of conversations and interactions work on LessWrong, and what role we want LessWrong to play in the broader world, and those shift and change as new evidence comes in and the world changes.
>
> I do ultimately still try pretty hard to give people guidelines and to draw lines that help people feel secure in their relationship to LessWrong, and I care a lot about this, but at the end of the day I will still make many from-the-outside-arbitrary-seeming decisions in order to keep LessWrong the precious walled garden that it is.
>
> I try really hard to not build an ideological echo chamber
>
> When making moderation decisions, it's always at the top of my mind whether I am tempted to make a decision one way or another because they disagree with me on some object-level issue. I try pretty hard to not have that affect my decisions, and as a result have what feels to me a subjectively substantially higher standard for rate-limiting or banning people who disagree with me, than for people who agree with me. I think this is reflected in the decisions above.
>
> I do feel comfortable judging people on the methodologies and abstract principles that they seem to use to arrive at their conclusions. LessWrong has a specific epistemology, and I care about protecting that. If you are primarily trying to...
>
> * argue from authority,
> * don't like speaking in probabilistic terms,
> * aren't comfortable holding multiple conflicting models in your head at the same time,
> * or are averse to breaking things down into mechanistic and reductionist terms,
>
> then LW is probably not for you, and I feel fine with that. I feel comfortable reducing the visibility or volume of content on the site that is in conflict with these epistemological principles (of course this list isn't exhaustive, in general the LW sequences are the best pointer towards the epistemological foundations of the site).

It feels cringe to read that basically if I don't get the Sequences, LessWrong might rate-limit me. But it is good to be open about it. I don't think the EA Forum's core philosophy is as easily expressed.

> If you see me or other LW moderators fail to judge people on epistemological principles but instead see us directly rate-limiting or banning users on the basis of object-level opinions that even if they seem wrong seem to have been arrived at via relatively sane principles, then I do really think you should complain and push back at us. I see my mandate as head of LW to only extend towards enforcing what seems to me the shared epistemological foundation of LW, and to not have the mandate to enforce my own object-level beliefs on the participants of this site.
>
> Now some more comments on the object-level:
>
> I overall feel good about rate-limiting everyone on the above list. I think it will probably make the conversations on the site go better and make more people contribute to the site.
>
> Us doing more extensive rate-limiting is an experiment, and we will see how it goes. As kave said in the other response to this post, the rule that suggested these specific rate-limits does not seem like it has an amazing track record, though I currently endorse it as something that calls things to my attention (among many other heuristics).
>
> Also, if anyone reading this is worried about being rate-limited or banned in the future, feel free to reach out to me or other moderators on Intercom. I am generally happy to give people direct and frank feedback about their contributions to the site, as well as how likely I am to take future moderator actions. Uncertainty is costly, and I think it's worth a lot of my time to help people understand to what degree investing in LessWrong makes sense for them.
I am not confident that another FTX-level crisis is less likely to happen, other than that we might all say "oh, this feels a bit like FTX".

Changes:
* Board swaps. Yeah, maybe good, though many of the people who left were very experienced. And it's not clear whether there are due diligence people (which seems to be what was missing).
* Orgs being spun out of EV and EV being shuttered. I mean, maybe good, though it feels like it's swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
* More talking about honesty. Not really sure this was the problem. The issue wasn't the median EA, it was in the tails. Are the tails of EA more honest? Hard to say.
* We have now had a big crisis, so it's less costly to say "this might be like that big crisis". Though notably this might also be too cheap - we could flinch away from doing ambitious things.
* Large orgs seem slightly more beholden to comms/legal to avoid saying or doing the wrong thing.
* OpenPhil is hiring more internally.

Non-changes:
* Still very centralised. I'm pretty pro-elite, so I'm not sure this is a problem in and of itself, though I have come to think that elites in general are less competent than I thought before (see FTX and the OpenAI crisis).
* Little discussion of why or how the affiliation with SBF happened despite many well-connected EAs having a low opinion of him.
* Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future.
While AI value alignment is considered a serious problem, the algorithms we use every day do not seem to be subject to alignment. That sounds like a serious problem to me. Has no one ever tried to align the YouTube algorithm with our values? What about on other types of platforms?


Recent discussion

Reducing the influence of malevolent actors seems useful for reducing existential risks (x-risks) and risks of astronomical suffering (s-risks). One promising strategy for doing this is to develop manipulation-proof measures of malevolence.

I think better measures would...

Continue reading

I agree that most measures (including the ones that I mentioned being pessimistic about) could be used to update one’s estimated probability that an actor is malevolent, but like you, I’d be most interested in which measures give the highest value of information (relative to the costs and invasiveness of the measure). 

 

I could have done a better job of explaining why I think that pupillometry, and particularly the measurement of pupillary responses to specific stimuli, would be much more difficult to game (if it was possible at all) relative... (read more)

You can give me anonymous feedback about anything you want here.

Summary

  • Interventions in the effective altruism community are usually assessed under 2 different frameworks: existential risk mitigation and nearterm welfare improvement.
    • It looks like 2 distinct frameworks
...
Continue reading

I'm confused by some of the set-up here. When considering catastrophes, your "cost to save a life" represents the cost to save that life conditional on the catastrophe being due to occur? (I'm not saying "conditional on occurring" because presumably you're allowed interventions which try to avert the catastrophe.)

Understood this way, I find this assumption very questionable:

, since I feel like the effect of having more opportunities to save lives in catastrophes is roughly offset by the greater difficulty of preparing to take advantage of those opportu

... (read more)

One of the largest cryptocurrency exchanges, FTX, recently imploded after apparently transferring customer funds to cover losses at their affiliated hedge fund. Matt Levine has good coverage, especially his recent post on their balance sheet. Normally a crypto...

Continue reading

"At a certain point, we just have to trust the peer-review process"

Coming here late, found it an interesting comment overall, but just thought I'd say something re interpreting the peer-reviewed literature as an academic, as I think people often misunderstand what peer review does. It's pretty weak and you don't just trust what comes out! Instead, look for consistent results being produced by at least a few independent groups, without there being contradictory research (researchers will rarely publish replications of results, but if a set of results don't ... (read more)


Hi Everyone!

I'm currently a high school student in the United States. I've been casually following and supporting EA for about 1.5 years now, doing what I can with donating any extra money to effective causes. However, I have recently been getting a lot more interested ...

Continue reading
Answer by Bolun, Apr 20, 2024:

I second Nathan's answer, but besides that here are a few programs specifically for HS students you might be interested in.

I'd also recommend applying to those university programs! When I was talking with the organizers of those programs at EAG, n=3 seemed to believe it'd be perfectly appropriate for me to apply. I suspect other programs would also (major caveat: may only apply to non-residential programs for <18 for legal reasons).

(also, Yarrow B's answer is a joke, but, anecdotally, a major failu... (read more)

Answer by Nathan Young, 2h:
Attempt to bequeath to your future self a kinder, more joyful, more competent person. This might involve:
* Learning new skills - eg coding, managing some contractors, research, event organisation
* Getting into the practice of doing things - build a website that you think should exist, run an event you would attend
* Spend a portion of current resources effectively - set aside some resources, do some research, tell others what you did
* Learn what you like - what makes you smile? What do you really strongly endorse?
* Get some mentors - write to people you respect asking them for tips on how you could improve
* Get a good sleep and exercise schedule - many people struggle with this later, so it's good to get it locked down early.
* Get on top of your emails and social media time - many people struggle to wean themselves off email burnout or social media addiction. You could get ahead of that.
* Look into different project management systems - often these help people get more done
* Take time to empathise with people in different worlds to you - you could watch YouTube videos of people in poorer nations talking about their situation. GiveDirectly has a load of these, I think.
* Gather resources - your future self may have projects they want to work on. Earning money and putting it aside will give you more options.
* Build a close network of like-minded ambitious people
* Learn how you learn - you can probably get much more done at high school than you are. I wasted lots of time not realising this.
* Introspect about who you want to be - what are your key goals? How would you know if you were closer to them?
* Consider not being involved - often the first community one becomes involved in can be an unhealthy relationship. It's good to spend some time considering the alternative. You could work hard and have lots of nice things. And that would be okay. I don't say this to say EA is bad, but to say that if I were your friend I would want you to do things out of j...

TL;DR

Healthier Hens (HH) aims to improve cage-free hen welfare, focusing on key issues such as keel bone fractures (KBFs). In the last 6 months, we’ve conducted a vet training in Kenya, found a 42% KBF prevalence, and are exploring alternative promising interventions in...

Continue reading
MichaelStJules (6h):
For the most promising, limited access to feed (feeders), at 0.27 cents/hour of disabling pain, this is around 0.067 years of disabling pain/$. It's worth benchmarking against corporate campaigns for comparison. From Duffy, 2023, using disabling pain-equivalent:

At first, this looks much less cost-effective, 1.7/0.067 = 25. However, Emily Oehlsen from Open Phil said

And Duffy's estimate is based on the same analysis by Saulius. So, more like 5x less cost-effective. However, Duffy's estimate also included milder pains:

More than half of the equivalent hours of disabling pain is actually not from disabling pain at all, but instead hurtful pain. So a fairer comparison would either omit the hurtful pain for corporate campaigns or also include hurtful pain for this other intervention. This could bring us closer to around 2.5x, naively, which seems near enough to the funding bar. On the other hand, I picked the most promising of the interventions, and it's less well-studied and tested than corporate campaigns, so we might expect some optimizer's curse or regression towards being less cost-effective.
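A minimal sketch of the arithmetic in this comparison, using only the headline figures quoted in the comment (0.067 years of disabling pain/$ for the feeder intervention, 1.7 for corporate campaigns per Duffy, 2023); the two downward adjustments are the comment's own rough assumptions, not verified numbers:

```python
# Back-of-the-envelope comparison from the comment above.
# All figures are taken from the comment itself and are not independently verified.

feed_intervention = 0.067    # years of disabling pain averted per $ (limited feeder access)
corporate_campaigns = 1.7    # years of disabling-pain-equivalent per $ (Duffy, 2023)

naive_gap = corporate_campaigns / feed_intervention
print(f"naive gap: ~{naive_gap:.0f}x")      # ~25x less cost-effective

# Adjustment 1: the Open Phil point quoted above shrinks the gap to roughly a fifth.
gap = naive_gap / 5                          # ~5x
# Adjustment 2: over half of the campaign figure is merely hurtful pain;
# dropping that roughly halves the gap again.
gap = gap / 2
print(f"adjusted gap: ~{gap:.1f}x")          # ~2.5x
```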
MichaelStJules (6h):
Another benchmark is GiveWell-recommended charities, which save a life for around $5,000. Assuming that's 70 years of life saved (mostly children), that would be 70 years of human life/$5,000 = 0.014 years of human life/$. People spend about 1/3 of their time sleeping, so it's around 0.0093 years of waking human life/$. Then, taking ratios of cost-effectiveness, that's about 7 years of disabling chicken pain prevented per year of waking human life saved. Then, we could consider:
1. How bad disabling pain is in a human vs a chicken
2. How bad human disabling pain is vs how valuable additional waking human life is
3. Indirect effects (of the additional years of human life, influences on attitudes towards nonhuman animals, etc.)
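And a similarly hedged sketch of the GiveWell benchmark above, using the comment's round numbers:

```python
# GiveWell benchmark from the comment above (round numbers, not independently verified).
cost_per_life = 5_000      # $ per life saved by GiveWell-recommended charities
years_per_life = 70        # assumed years of life saved (mostly children)
waking_fraction = 2 / 3    # ~1/3 of time is spent asleep

human_years_per_dollar = years_per_life / cost_per_life              # 0.014
waking_years_per_dollar = human_years_per_dollar * waking_fraction   # ~0.0093

feed_intervention = 0.067  # years of disabling chicken pain averted per $ (earlier comment)
ratio = feed_intervention / waking_years_per_dollar
print(f"~{ratio:.0f} years of disabling chicken pain averted "
      f"per year of waking human life saved")                        # ~7
```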

Thanks for providing these external benchmarks and making it easier to compare! Do you mind if I update the text to include a reference to your comments?

Indeed, since these were initial estimates, we excluded reporting the other pain intensities to keep it brief. However, once we go through the follow-up data and have the second set of estimates, we'll make sure to include all of the ranges, so that more comprehensive comparisons could be made. But my understanding is that for water and feed, it could be ~1:5:7 (disabling:hurtful:annoying) and ~1:1:0.1 fo... (read more)

I have not researched longtermism deeply. However, what I have found out so far leaves me puzzled and skeptical. As I currently see it, you can divide what longtermism cares about into two categories:

1) Existential risk.

2) Common sense long-term priorities, such as:

  • economic
...
Continue reading
Yarrow B. (2h):
Does this apply to things other than existential risk?
Yarrow B. (2h):
This is an interesting point, and I guess it’s important to make, but it doesn’t exactly answer the question I asked in the OP. In 2013, Nick Bostrom gave a TEDx talk about existential risk where he argued that it’s so important to care about because of the 10^umpteen future lives at stake. In the talk, Bostrom referenced even older work by Derek Parfit. (From a quick Google, the Parfit stuff on existential risk was from his book Reasons and Persons, published in 1984.) I feel like people in the EA community only started talking about "longtermism" in the last few years, whereas they had been talking about existential risk many years prior to that.  Suppose I already bought into Bostrom’s argument about existential risk and future people in 2013. Does longtermism have anything new to tell me?  

I guess I think of caring about future people as the core of longtermism, so if you're already signed up to that, I would already call you a longtermist? I think most people aren't signed up for that, though.

Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.


Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse

...
Continue reading

A very interesting summary, thanks.

However I'd like to echo Richard Chappell's unease at the praising of the use of short-term contracts in the report. These likely cause a lot of mental health problems and will dissuade people who might have a lot to contribute but can't cope with worrying about whether they will need to find a new job or even career in a couple of years' time. It could be read as a way of avoiding dealing with university processes for firing people - but then the lesson for future organisations may be to set up outside a university structure, and have a sensible degree of job security.

Hamish McDoodles (1h):
Why was relationship management even necessary? Wasn't FHI bringing prestige and funding to the university? Aren't the incentives pretty well aligned?
Jelle Donders (3h):
FHI almost singlehandedly made salient so many obscure yet important research topics. To everyone that contributed over the years, thank you!

Open Philanthropy commissioned a report from Stefan Dercon on economic growth as the main driver of poverty reduction. In the report, Dercon highlights a set of overlooked policies that can help boost economic growth in developing countries, as well as key reasons ...

Continue reading

Here is the abstract:

Starting from the premise that growth is essential for some of the poorest countries, this note suggests some less obvious investments complementary to the usual approaches that encourage capital transfers and technical assistance in specific areas (e.g., by private foundations or the World Bank). It uses a framing that places a key reason for lagging growth in the agency of those with power and influence — the elite — and the coalition among them — their elite-bargain — that is not conducive to growth. Proposals are articulated that t

... (read more)

A case against focusing on tail-end nuclear war risks: Why I think that nuclear risk reduction efforts should prioritize preventing any kind of nuclear war over preventing or preparing for the worst case scenarios 

Summary

  • This is an essay written in the context of my
...
Continue reading

Hi Sarah,

I have just published a post somewhat related to yours, where I wonder whether saving lives in normal times is better for improving the longterm future than saving lives in catastrophes.