New & upvoted

Quick takes

Why are April Fools jokes still on the front page? On April 1st, you expect to see April Fools' posts and know you have to be extra cautious when reading strange things online. However, April 1st was 13 days ago and there are still two April Fools posts on the front page. I think they should be clearly labelled as April Fools jokes so people can differentiate EA weird stuff from EA weird stuff that's a joke more easily. Sure, if you check the details you'll see that things don't add up, but we all know most people just read the title or first few paragraphs.
Could it be more important to improve human values than to make sure AI is aligned? Consider the following (which is almost definitely oversimplified):

                            ALIGNED AI       MISALIGNED AI
  HUMANITY GOOD VALUES      UTOPIA           EXTINCTION
  HUMANITY NEUTRAL VALUES   NEUTRAL WORLD    EXTINCTION
  HUMANITY BAD VALUES       DYSTOPIA         EXTINCTION

For clarity, let's assume dystopia is worse than extinction. This could be a scenario where factory farming expands to an incredibly large scale with the aid of AI, or a bad AI-powered regime takes over the world. Let's also assume a neutral world is equivalent to extinction.

The above shows that aligning AI can be good, bad, or neutral: the value of alignment depends entirely on humanity's values. Improving humanity's values, however, is always good. The only clear case where aligning AI beats improving humanity's values is if there isn't scope to improve our values further. An ambiguous case is whenever humanity has positive values, in which case both improving values and aligning AI are good options and it isn't immediately clear to me which wins.

The key takeaway here is that improving values is robustly good whereas aligning AI isn't - alignment is bad if we have negative values. I would guess that we currently have pretty bad values given how we treat non-human animals, and alignment is therefore arguably undesirable. In this simple model, improving values would become the overwhelmingly important mission. Or perhaps ensuring that powerful AI doesn't end up in the hands of bad actors becomes overwhelmingly important (again, rather than alignment).

This analysis doesn't consider the moral value of AI itself. It also assumes that misaligned AI necessarily leads to extinction, which may not be accurate (perhaps it can also lead to dystopian outcomes?). I doubt this is a novel argument, but what do y'all think?
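To make the "robustly good" claim concrete, here is a rough back-of-the-envelope sketch in Python. None of it comes from the quick take itself: the utilities and probabilities are made-up placeholders, chosen only to show how the conclusion flips for alignment but not for value improvement.

```python
# Rough sketch with hypothetical numbers (not from the post): assign
# illustrative utilities to the outcome matrix and compare interventions
# under a guessed distribution over humanity's values.

UTILITY = {
    ("good", "aligned"): 100,      # utopia
    ("good", "misaligned"): 0,     # extinction
    ("neutral", "aligned"): 0,     # neutral world (assumed equivalent to extinction)
    ("neutral", "misaligned"): 0,  # extinction
    ("bad", "aligned"): -100,      # dystopia (assumed worse than extinction)
    ("bad", "misaligned"): 0,      # extinction
}

def expected_utility(p_values, p_aligned):
    """Expected utility given a distribution over humanity's values
    and a probability that AI ends up aligned."""
    return sum(
        p_v * (p_aligned * UTILITY[(v, "aligned")]
               + (1 - p_aligned) * UTILITY[(v, "misaligned")])
        for v, p_v in p_values.items()
    )

# Hypothetical starting point: values lean bad, alignment is a coin flip.
baseline_values = {"good": 0.2, "neutral": 0.3, "bad": 0.5}

baseline       = expected_utility(baseline_values, p_aligned=0.5)
more_alignment = expected_utility(baseline_values, p_aligned=0.7)
better_values  = expected_utility({"good": 0.4, "neutral": 0.3, "bad": 0.3}, p_aligned=0.5)

print(f"baseline:       {baseline:.1f}")        # -15.0
print(f"more alignment: {more_alignment:.1f}")  # -21.0 (worse, because bad values dominate)
print(f"better values:  {better_values:.1f}")   #  +5.0 (better regardless of alignment odds)
```

With these particular numbers, raising the probability of alignment lowers expected utility because the bad-values row dominates, while shifting probability mass from bad to good values helps whatever the alignment odds are; that is the quick take's claim restated numerically.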
The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents "analytic altruism", which uses an algorithm to determine "the most statistically optimal ways" of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.
I recently wrote a post on the EA forum about turning animal suffering to animal bliss using genetic enhancement. Titotal raised a thoughtful concern: "How do you check that your intervention is working? For example, suppose your original raccoons screech when you poke them, but the genetically engineered raccoons don't. Is that because they are experiencing less pain, or have they merely evolved not to screech?"

This is a very good point. I was recently considering how we could be sure not to just change the expressions of suffering, and I believe that I have determined a means of doing so. In psychology, it is common to use factor analysis to study latent variables--the variables that we cannot measure directly. It seems extremely reasonable to think that animal pain is real, but the trouble is measuring it. We could try to get at pain by collecting a huge array of behaviors and measures that are associated with pain (heart rate, cortisol levels, facial expressions, vocalizations, etc.) and finding a latent factor of suffering that accounts for some of these behaviors.

To determine if an intervention is successful at changing the latent factor of suffering for the better, we could test for measurement invariance, which is an important step in making a relevant comparison between two groups. This basically tests whether the nature of the factor loadings remains the same between groups. A genuine improvement would mean a reduction in all of the traits associated with suffering. This would seem relevant for environmental interventions as well.

As an illustration: imagine that I measure the welfare of a raccoon by the amount of screeching it does. A bad intervention would be taping the raccoon's mouth shut. This would reduce screeching, but there is no good reason to think that it would alleviate suffering. However, imagine I gave the raccoon a drug and it acted less stressed, screeched less, had less cortisol, and started acting much more friendly. This would be much better evidence of a true reduction in suffering.

There is much more to be defended in my thesis, but this felt like a thought worth sharing.
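As a loose illustration of the kind of check being proposed, here is a minimal Python sketch using scikit-learn's FactorAnalysis on simulated data. It is not the formal measurement-invariance test described above (that would normally be a multi-group confirmatory factor analysis in something like lavaan or semopy), and the indicator names, sample sizes, and loadings are all invented for the example.

```python
# Minimal sketch, not a validated pipeline: fit a one-factor model of
# "suffering" in a control group and a treated group, compare loading
# patterns, then compare factor scores from a pooled fit.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

def simulate_group(n, latent_mean, loadings, noise=0.5):
    """Simulate pain-related indicators driven by one latent suffering factor."""
    suffering = rng.normal(latent_mean, 1.0, size=n)
    eps = rng.normal(0.0, noise, size=(n, len(loadings)))
    return suffering[:, None] * np.asarray(loadings) + eps

# Hypothetical indicators: heart rate, cortisol, screech rate, grimace score.
loadings_true = [0.8, 0.7, 0.9, 0.6]
control = simulate_group(500, latent_mean=1.0, loadings=loadings_true)
treated = simulate_group(500, latent_mean=0.3, loadings=loadings_true)  # genuinely lower suffering

# 1) Crude invariance check: fit each group separately and compare loadings.
#    If one indicator's loading collapses in the treated group (e.g. screeching
#    after taping the mouth shut), comparing factor scores is not meaningful.
#    Note: the sign of a fitted factor is arbitrary, so compare patterns up to sign.
fa_c = FactorAnalysis(n_components=1).fit(control)
fa_t = FactorAnalysis(n_components=1).fit(treated)
print("control loadings:", np.round(fa_c.components_.ravel(), 2))
print("treated loadings:", np.round(fa_t.components_.ravel(), 2))

# 2) If the loading patterns look comparable, fit a pooled model and compare
#    mean factor scores between groups as a rough proxy for the latent difference.
pooled = np.vstack([control, treated])
scores = FactorAnalysis(n_components=1).fit(pooled).transform(pooled).ravel()
print("mean score, control:", round(scores[:500].mean(), 2))
print("mean score, treated:", round(scores[500:].mean(), 2))
```

The intended reading: if the loading pattern is similar across groups, a shift in the mean factor score of the treated group is at least some evidence that the latent factor itself has moved, rather than a single expression of it being suppressed.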

Recent discussion

Summary

  • In this post, I hope to inspire other Effective Altruists to focus more on donation and commiserate with those who have been disappointed in their ability to get an altruistic job.
  • First, I argue that the impact of having a job that helps others is complicated. In this section, I discuss annual donation statistics for people in the Effective Altruism community, which I find quite low.
  • In the rest of the post, I describe my recent job search, my experience substituting at public schools, and my expenses.

Having a job that helps others might be overemphasized

  • Doing a job that helps others seems like a good thing to do. Weirdly, it's not as simple as that.
    • While some job vacancies last for years, other fields are very competitive and have many qualified applicants for most position listings.
      • In the latter case, if you take the job offer, you may think you are doing good in the world.
...

Many thanks to Andrew Snyder-Beattie and Joshua Monrad for their feedback during this project. This project was completed as part of contract work with Open Philanthropy, but the views and work expressed here do not represent those of Open Philanthropy. All thoughts are...


Interesting work, and some smart/decisive decision-making in terms of methodology. There is a trade-off in the effort required to map and data mine funding streams exhaustively, and what you have done makes a lot of sense.

Have you considered spot comparisons of the data included here against existing R&D funding repositories like Policy Cures Research's G-FINDER or NIH World RePORT? (Disclosure: I used to work at PCR.) For purely product-related R&D expenditure, I think it supports your approach in that there are only a few funders not already...

Pat Myron commented on Ives Parr's quick take 2h ago


Thanks Pat. That is something good to consider. 

Great thoughts. I will need to think more deeply about how to make this possible cost-wise. We need a large sample to find the genes, but the brain imaging might make this challenging.


Wouldn't it be more efficient to create new programs within existing charities rather than starting new charities?

Has someone written about this?

I've particularly noticed that there are many small charities working on animal advocacy.

(I can imagine both options have pros...


I think we have the exact opposite problem. When I see the budget of many big EA orgs I throw up in my mouth, thinking about how many smaller charities the money could have funded.

Maybe I'm wrong. Got any evidence that larger EA orgs are more cost-effective than smaller ones?

Answer by NickLaing 10h
First, focus and drive to scale are very important. The "dream" as a small charity is to figure out how to do your one thing well (give mosquito nets, give money, incentives for vaccines etc.), then iterate, replicate and scale up. Bigger charities don't think that way and are comfortable just to "add" and "maintain" programs rather than drive hard for impact and scale. I doubt bigger charities would have the focus and drive to scale new initiatives compared to founders of small charities, who have more energy and for whom the sky is the limit.

Second, new small charities can be lean (a bit of a cliche). On the other hand, many big charities actually become more inefficient as they get bigger (I'm sure there are many exceptions) and have often already "locked in" many inefficiencies. My evidence for this is that the classic big charities which do lots of things (Oxfam, Save the Children, World Vision etc.) are some of the least efficient charities around.

Unfortunately, as charities scale up, it's often the opposite of business. You don't gain much from economies of scale; instead you add lots of middle management, and each "unit-of-good" can become more expensive than it was when you were a smaller charity. I'm even struggling with this situation a bit with our charity at the moment. I actually think the opposite should often be the case: bigger charities could split up or downsize, in order to focus on the one thing they are best at.

I'm talking mainly about the GHD space here by the way; I don't know anything about animal charities. I've written about this kind of thing a bit more here: https://ugandapanda.com/2021/02/04/ngos-should-only-do-one-thing/

Could it be more important to improve human values than to make sure AI is aligned?


I think a neutral world is much better than extinction, and most dystopias are also preferable to human extinction. The latter is debatable but the former seems clear? What do you imagine by a neutral world?

“I really needed to hear that”

His eyes were downcast, his normally jocular expression now solemn. I had really said something that had spoken to him, that had begun to assuage some hurt which had before remained unacknowledged.

It’s not your fault. Four words.

Later, I was...


I appreciate you helping others learn from your experiences, and I'm sorry they were difficult ones. And thank you for flagging the risks here.

In this new podcast episode, I discuss with Will MacAskill what the Effective Altruism community can learn from the FTX / SBF debacle, why Will has been limited in what he could say about this topic in the past, and what future directions for the Effective Altruism community...


Thanks very much to both of you for having this difficult conversation, and handling it with such professionalism.

Cards on the table, I agree with MacAskill about character vs structure/governance. So to me the 30 minutes of trying to get inside Bankman-Fried's head seemed a little fruitless. Though I guess there's something fascinating about trying to get into bad people's heads.

I would have liked more questions about due diligence. MacAskill says that he and Bankman-Fried chatted in early 2021 and then again with Beckstead after the FTX Foundation. That'...

Larks 7h
Thanks! Perhaps you or someone else with an iPhone could copy-paste it.

James Herbert 7h
Ah yes, that would be handy. I can't see a way of doing that, unfortunately.

Nathan Young posted a Quick Take 5h ago

I am not confident that another FTX level crisis is less likely to happen, other than that we might all say "oh this feels a bit like FTX".

Changes:

  • Board swaps. Yeah maybe good, though many of the people who left were very experienced. And it's not clear whether there are due diligence people (which seems to be what was missing).
  • Orgs being spun out of EV and EV being shuttered. I mean, maybe good, though it feels like it's swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
  • More talking about honesty. Not really sure this was the problem. The issue wasn't the median EA; it was in the tails. Are the tails of EA more honest? Hard to say.
  • We have now had a big crisis so it's less costly to say "this might be like that big crisis". Though notably this might also be too cheap - we could flinch away from doing ambitious things.
  • Large orgs seem slightly more beholden to comms/legal to avoid saying or doing the wrong thing.
  • OpenPhil is hiring more internally

Non-changes:

  • Still very centralised. I'm pretty pro-elite, so I'm not sure this is a problem in and of itself, though I have come to think that elites in general are less competent than I thought before (see the FTX and OpenAI crises).
  • Little discussion of why or how the affiliation with SBF happened despite many well connected EAs having a low opinion of him
  • Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future

Summary

  1. Many views, including even some person-affecting views, endorse the repugnant conclusion (and very repugnant conclusion) when set up as a choice between three options, with a benign addition option.
  2. Many consequentialist(-ish) views, including many person-affecting
...

It seems the relevant question is whether your original argument for A goes through. I think you pretty much agree that ethics requires persons to be affected, right? Then we have to rule out switching to Z from the start: Z would be actively bad for the initial people in S, and not switching to Z would not be bad for the new people in Z, since they don't exist.

Furthermore, it arguably isn't unfair when people are created (A+) if the alternative (A) would have been not to create them in the first place.[1] So choosing A+ wouldn't be unfair to anyone. A+ wo...
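For readers unfamiliar with the A / A+ / Z setup being referred to, here is a small illustration with textbook-style numbers; the post's own figures aren't shown in this preview, so these are purely hypothetical.

```python
# Hypothetical welfare numbers for the standard benign-addition argument.
A      = [100] * 10                # 10 existing people with very good lives
A_plus = [100] * 10 + [1] * 990    # same 10 people, unharmed, plus 990 lives barely worth living
Z      = [2] * 1000                # 1000 people, all at a low but positive level

for name, pop in [("A", A), ("A+", A_plus), ("Z", Z)]:
    total = sum(pop)
    print(f"{name:>2}: size={len(pop):4d}  total={total:5d}  average={total / len(pop):6.2f}")
```

Moving from A to A+ harms none of the original people, and Z beats A+ on both total and average welfare, yet Z looks worse than A. The comment above is about where a person-affecting view should block that chain, for instance by refusing the final switch to Z because it is actively bad for the initial people.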