New & upvoted


Posts tagged community

Quick takes

Why are April Fools' jokes still on the front page? On April 1st, you expect to see April Fools' posts and know to be extra cautious when reading strange things online. But April 1st was 13 days ago, and two April Fools' posts are still on the front page. I think they should be clearly labelled as April Fools' jokes, so people can more easily differentiate EA weird stuff from EA weird stuff that's a joke. Sure, if you check the details you'll see that things don't add up, but we all know most people just read the title or the first few paragraphs.
From a utilitarian perspective, there would seem to be substantial benefits to accurate measures of welfare. I was listening to Adam Mastroianni discuss the history of trying to measure happiness and life satisfaction, and it was interesting to see such stability in the measures across the decades. Could it really be that increases in material wealth do not produce large objective increases in human happiness and satisfaction? If so, efforts to increase GDP and improve standards of living beyond the basics may be misdirected. Furthermore, an objective unit like a util would seem extremely helpful for policy creation: we could compare human and animal welfare directly, and genetically engineer animals to increase their utils. Even if such efforts were not hugely successful, merely improving objective measures of wellbeing by, say, 10% would seem very important.
Could it be more important to improve human values than to make sure AI is aligned? Consider the following (which is almost definitely oversimplified):

|  | Aligned AI | Misaligned AI |
|---|---|---|
| Humanity: good values | Utopia | Extinction |
| Humanity: neutral values | Neutral world | Extinction |
| Humanity: bad values | Dystopia | Extinction |

For clarity, let's assume dystopia is worse than extinction. This could be a scenario where factory farming expands to an incredibly large scale with the aid of AI, or where a bad AI-powered regime takes over the world. Let's also assume a neutral world is equivalent to extinction. The table shows that aligning AI can be good, bad, or neutral: the value of alignment depends entirely on humanity's values. Improving humanity's values, however, is always good. The only clear case where aligning AI beats improving humanity's values is if there isn't scope to improve our values further. An ambiguous case is whenever humanity has positive values, in which case both improving values and aligning AI are good options, and it isn't immediately clear to me which wins. The key takeaway is that improving values is robustly good whereas aligning AI isn't: alignment is bad if we have negative values. I would guess that we currently have pretty bad values, given how we treat non-human animals, and alignment is therefore arguably undesirable. In this simple model, improving values would become the overwhelmingly important mission. Or perhaps ensuring that powerful AI doesn't end up in the hands of bad actors becomes overwhelmingly important (again, rather than alignment). This analysis doesn't consider the moral value of AI itself. It also assumes that misaligned AI necessarily leads to extinction, which may not be accurate (perhaps it can also lead to dystopian outcomes?). I doubt this is a novel argument, but what do y'all think?
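To make the dominance claim concrete, here is a toy sketch in Python of the matrix above. The utility numbers (+10, 0, -10) and the helper names are purely illustrative assumptions, not part of the original quick take:

```python
# Toy version of the values/alignment matrix (illustrative numbers only).
# Assumed utilities: utopia = +10, neutral world = 0, extinction = 0, dystopia = -10.
outcomes = {
    ("good", "aligned"): 10,       # utopia
    ("good", "misaligned"): 0,     # extinction
    ("neutral", "aligned"): 0,     # neutral world (stipulated equal to extinction)
    ("neutral", "misaligned"): 0,  # extinction
    ("bad", "aligned"): -10,       # dystopia (stipulated worse than extinction)
    ("bad", "misaligned"): 0,      # extinction
}

values_order = ["bad", "neutral", "good"]

def gain_from_aligning(values: str) -> int:
    """Utility gain from moving misaligned -> aligned, holding values fixed."""
    return outcomes[(values, "aligned")] - outcomes[(values, "misaligned")]

def gain_from_improving(values: str, alignment: str) -> int:
    """Utility gain from moving values up one step, holding alignment fixed."""
    i = values_order.index(values)
    if i == len(values_order) - 1:
        return 0  # no scope to improve values further
    return outcomes[(values_order[i + 1], alignment)] - outcomes[(values, alignment)]

for v in values_order:
    print(f"values={v:7}  aligning: {gain_from_aligning(v):+d}  "
          f"improving values (if aligned): {gain_from_improving(v, 'aligned'):+d}  "
          f"(if misaligned): {gain_from_improving(v, 'misaligned'):+d}")

# Aligning is negative under bad values, while improving values is never
# negative: improving values weakly dominates in this toy model.
```

Under these assumed numbers, aligning AI is worth -10, 0, or +10 depending on humanity's values, while improving values one step is always at least 0, which is exactly the "robustly good" claim in the quick take.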
The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents "analytic altruism", which uses an algorithm to determine "the most statistically optimal ways" of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.
Many organizations I respect are very risk-averse when hiring, and for good reasons. Making a bad hiring decision is extremely costly: it means running another hiring round, paying for work that isn't useful, and diverting organizational time and resources towards trouble-shooting and away from other projects. This leads many organizations to scale very slowly. However, there may be an imbalance between false positives (bad hires) and false negatives (passing over great candidates). In hiring, as in many other fields, reducing false positives often means increasing false negatives. Many successful people have stories of being passed over early in their careers. The costs of a bad hire are obvious, while the costs of passing over a great hire are counterfactual and never observed. I wonder whether, in my past hiring decisions, I've properly balanced the risk of rejecting a potentially great hire against the risk of making a bad hire. One reason to think we may be too risk-averse, in addition to the salience of the costs, is that the benefits of a great hire can grow to be very large, while the costs of a bad hire are somewhat bounded, since a bad hire can eventually be let go.
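A toy expected-value sketch of this asymmetry (every number below is an invented assumption, just to illustrate the bounded-downside, heavy-tailed-upside point):

```python
import random

random.seed(0)

# Assumption: a bad hire costs a bounded amount (severance, rehiring, lost
# time), because they can eventually be let go.
BAD_HIRE_COST = 100

def great_hire_value() -> float:
    # Assumption: great-hire value is heavy-tailed (lognormal), so most great
    # hires are solidly valuable and a few are transformative.
    return random.lognormvariate(4.0, 1.5)

N = 100_000
avg_great = sum(great_hire_value() for _ in range(N)) / N
print(f"bounded cost of a bad hire:    {BAD_HIRE_COST}")
print(f"average value of a great hire: {avg_great:,.0f}")

# With a bounded downside and a heavy-tailed upside, screening out too many
# candidates to avoid bad hires can be the costlier error on average.
```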

Popular comments

Recent discussion

1. Will all-gender bathrooms exist at EAG London?

As I understand it, the UK government has mandated that new buildings be constructed with bathrooms that are either multi-stall and single-gender or single-stall and all-gender. In the past, CEA has rented event spaces with...


not an answer but what the hell is going on over there

Answer by Saul Munn · 1h
Hi! I'd recommend reaching out directly to the organizing team. You can reach them here: hello@eaglobal.org.
Saul Munn · 1h
You can do this in parallel with keeping this post up publicly; in fact, you can even email them to let them know that this post exists! However, I wouldn't expect them to see this question on the Forum by default. There's a lot of content on the Forum, and the EAG team is extremely busy.
Ives Parr posted a Quick Take 2h ago

I recently wrote a post on the EA Forum about turning animal suffering to animal bliss using genetic enhancement. Titotal raised a thoughtful concern: "How do you check that your intervention is working? For example, suppose your original raccoons screech when you poke them, but the genetically engineered raccoons don't. Is that because they are experiencing less pain, or have they merely evolved not to screech?"

This is a very good point. I was recently considering how we could be sure we're not just changing the expressions of suffering, and I believe I have found a way to do so. In psychology, it is common to use factor analysis to study latent variables--variables that we cannot measure directly. It seems extremely reasonable to think that animal pain is real, but the trouble is measuring it. We could try to get at pain by collecting a huge array of behaviors and measures that are associated with pain (heart rate, cortisol levels, facial expressions, vocalizations, etc.) and finding a latent factor of suffering that accounts for some of these behaviors.

To determine whether an intervention has genuinely changed the latent factor of suffering for the better, we could test for measurement invariance, an important step in making a meaningful comparison between two groups. This basically tests whether the factor loadings remain the same across groups. If invariance holds, a real reduction in the latent factor should show up as a reduction across all of the traits associated with suffering, not just a change in one expression of it. This would seem relevant for environmental interventions as well.
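Here is a minimal simulated sketch of that logic in Python. It is not a real measurement-invariance test (those are typically run as multi-group confirmatory factor models in SEM software); the indicators, loadings, and group means are all invented assumptions:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

def simulate_group(n: int, loadings: list[float], mean_suffering: float) -> np.ndarray:
    """Each animal has a latent suffering level; each observed indicator is
    loading * latent + noise."""
    latent = rng.normal(mean_suffering, 1.0, size=n)
    noise = rng.normal(0.0, 0.5, size=(n, len(loadings)))
    return latent[:, None] * np.asarray(loadings) + noise

# Assumed indicators: heart rate, cortisol, grimace score, vocalization.
# Same loadings in both groups, i.e. invariance holds in this simulation.
loadings = [0.9, 0.8, 0.7, 0.6]
control = simulate_group(2000, loadings, mean_suffering=1.0)
treated = simulate_group(2000, loadings, mean_suffering=0.3)  # genuinely less pain

fa_control = FactorAnalysis(n_components=1).fit(control)
fa_treated = FactorAnalysis(n_components=1).fit(treated)

# 1) Rough invariance check: do the loading patterns match across groups?
#    (A factor's sign is arbitrary, so compare up to sign.)
lc, lt = fa_control.components_[0], fa_treated.components_[0]
print(f"loading-pattern similarity: {abs(np.corrcoef(lc, lt)[0, 1]):.3f}")

# 2) Only if the loadings are comparable does a broad drop in indicator means
#    suggest a real drop in latent suffering rather than a changed expression.
print("indicator means (control):", control.mean(axis=0).round(2))
print("indicator means (treated):", treated.mean(axis=0).round(2))
```

In the taped-mouth scenario from the illustration below, vocalization would drop while heart rate and cortisol stayed high, so the loading pattern would break and the invariance check would flag that the measurement, not just the suffering, had changed.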

As an illustration: imagine that I measure the welfare of a raccoon by the amount of screeching it does. A bad intervention would be taping the raccoon's mouth shut. This would reduce screeching, but there is no good reason to think it would alleviate...


Author’s Note: This post is part of a larger sequence on addiction and is sampled from an appendix post of mine. For more background on the appendix format I used, read this.

If you are in, suspect you are in, or have struggled in the past with some sort of addiction, feel free to join this Discord server. It is a recovery group I set up, focused on helping EAs who are struggling, in case they think they would benefit from having a space where they can discuss their more unique struggles with a group of people who are more likely to understand them. It is currently relatively inactive, but I am trying to change this. If you are uncomfortable with this for any reason but still want help, feel free to get in touch via DMs, and I can try to help you in some other way.

Image from the painting “Autumnal Cannibalism” by Salvador Dalí

While working on this other post, one of my favorite bloggers released a quite relevant...


Author’s Note: This post is part of a larger sequence on addiction and is sampled from an appendix post of mine. For more background on the appendix format I used, read this.

If you are in, suspect you are in, or have struggled in the past with some sort of addiction, feel free to join this Discord server. It is a recovery group I set up, focused on helping EAs who are struggling, in case they think they would benefit from having a space where they can discuss their more unique struggles with a group of people who are more likely to understand them. It is currently relatively inactive, but I am trying to change this. If you are uncomfortable with this for any reason but still want help, feel free to get in touch via DMs, and I can try to help you in some other way.

Image from the painting “The Poet Max Herrmann-Neisse” by George Grosz

In my first blog post, I gave my quick review of the idea that we should...


Three recent posts that may be of interest:

...

Richard, I really love your writing, but as a parent I find it so hard to just sit and read stuff. I get 95% of the Forum's content via the podcast feeds. Now, I don't expect everyone to go full Experimental History or Joe Carlsmith and audio-narrate each post, but unless you want to keep things on Substack turf, you might consider cross-posting the full thing here (like Bentham's Bulldog did for the critique of the Wired article). I don't ask this of everyone, so please consider this a compliment: I love your work and want it in my ears.[1]

  1. ^

    Tha



Author: Leonard Dung

Abstract: Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically came without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems capable of disempowering humanity by 2100. Second, due to incentives and coordination problems, if it is possible to build such AI, it will be built. Third, since it appears to be a hard technical problem to build AI which is aligned with the goals of its designers, and many actors might build powerful AI, misaligned powerful...


Use this thread to share information about EA-related roles you are looking to fill!  

We’d like to help applicants and hiring managers coordinate, so we’ve set up this thread, and another called Who wants to be hired? (we did this last in 2022[1]).

To add your ...


FutureSearch is hiring! We're seeking Full-Stack Software Engineers, Research Engineers, and Research Scientists to help us build an AI system that can answer hard questions, including the forecasting and research questions critical for effective allocation of resources. You can read more about our motivations and how it works.

Salary and benefits: $70k - $120k, depending on location and seniority. We aim to offer higher equity than startups of our size (6 people) typically do.

Location: Fully remote. We pay for travel every few months to work together around the US ...

This is mostly a linkpost to a Gdoc, which itself links to notes on 20 EA-relevant books (to be updated on an ongoing basis). I hope you'll find it useful! Here is the list, with links included for convenience:

Communication

Chip and Dan Heath (2007) Made to Stick: Why Some...


Hi Cam, I'm glad you found the notes useful! Most of these (The Precipice being an exception) were notes taken from audiobooks. As I listened, I'd write down brief notes (sometimes as short as a key word or phrase) in the Notes app on my iPhone. Then, once a day or every couple of days, I'd go back through the Notes app to jog my memory and write the longer item of information into a Gdoc. Then, when I'd finished the book, I'd organize and synthesize the Gdoc into a coherent set of notes with sections etc.

These days I follow a similar system, but use...

quinn commented on A High Decoupling Failure 6h ago

High-decoupling vs low-decoupling or decoupling vs contextualizing refers to two different cultural norms, cognitive skills, or personal dispositions that change the way people approach ideas.

High-decouplers isolate ideas from each other and the surrounding context. This

...

Decoupling is uncorrelated with the left-right political divide.

Say more? How do we know this?

SummaryBot · 17h
Executive summary: This post discusses the differences between "high-decoupling" and "low-decoupling" (or "contextualizing") approaches to ideas and claims, and argues that the high-decoupling approach, while valuable for science and causal inference, often fails to fully account for the interplay between legal changes and cultural attitudes.

Key points:

1. High-decouplers isolate ideas from their context, while low-decouplers or contextualizers treat ideas as inseparable from their narratives, associations, and histories.
2. High-decoupling is important for science and causal inference, but it underrates the feedback between legality and cultural approval.
3. Most voters are low-decouplers who conflate legal changes with cultural support, so political campaigns often intertwine legal and cultural prescriptions.
4. When evaluating policy changes like drug legalization or assisted dying, high-decouplers may neglect the cultural impact of the policy change.
5. While high-decoupling is often correct, democratic politics unavoidably attaches ideas to their associated narratives, groups, and histories.
6. High-decouplers can convince each other based on evidence, but low-decouplers know that policy pitches come paired with cultural narratives.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.