New & upvoted


Posts tagged community

Quick takes

In this "quick take", I want to summarize some my idiosyncratic views on AI risk.  My goal here is to list just a few ideas that cause me to approach the subject differently from how I perceive most other EAs view the topic. These ideas largely push me in the direction of making me more optimistic about AI, and less likely to support heavy regulations on AI. (Note that I won't spend a lot of time justifying each of these views here. I'm mostly stating these points without lengthy justifications, in case anyone is curious. These ideas can perhaps inform why I spend significant amounts of my time pushing back against AI risk arguments. Not all of these ideas are rare, and some of them may indeed be popular among EAs.) 1. Skepticism of the treacherous turn: The treacherous turn is the idea that (1) at some point there will be a very smart unaligned AI, (2) when weak, this AI will pretend to be nice, but (3) when sufficiently strong, this AI will turn on humanity by taking over the world by surprise, and then (4) optimize the universe without constraint, which would be very bad for humans. By comparison, I find it more likely that no individual AI will ever be strong enough to take over the world, in the sense of overthrowing the world's existing institutions and governments by surprise. Instead, I broadly expect unaligned AIs will integrate into society and try to accomplish their goals by advocating for their legal rights, rather than trying to overthrow our institutions by force. Upon attaining legal personhood, unaligned AIs can utilize their legal rights to achieve their objectives, for example by getting a job and trading their labor for property, within the already-existing institutions. Because the world is not zero sum, and there are economic benefits to scale and specialization, this argument implies that unaligned AIs may well have a net-positive effect on humans, as they could trade with us, producing value in exchange for our own property and services. Note that my claim here is not that AIs will never become smarter than humans. One way of seeing how these two claims are distinguished is to compare my scenario to the case of genetically engineered humans. By assumption, if we genetically engineered humans, they would presumably eventually surpass ordinary humans in intelligence (along with social persuasion ability, and ability to deceive etc.). However, by itself, the fact that genetically engineered humans will become smarter than non-engineered humans does not imply that genetically engineered humans would try to overthrow the government. Instead, as in the case of AIs, I expect genetically engineered humans would largely try to work within existing institutions, rather than violently overthrow them. 2. AI alignment will probably be somewhat easy: The most direct and strongest current empirical evidence we have about the difficulty of AI alignment, in my view, comes from existing frontier LLMs, such as GPT-4. Having spent dozens of hours testing GPT-4's abilities and moral reasoning, I think the system is already substantially more law-abiding, thoughtful and ethical than a large fraction of humans. Most importantly, this ethical reasoning extends (in my experience) to highly unusual thought experiments that almost certainly did not appear in its training data, demonstrating a fair degree of ethical generalization, beyond mere memorization. It is conceivable that GPT-4's apparently ethical nature is fake. 
Perhaps GPT-4 is lying about its motives to me and in fact desires something completely different than what it professes to care about. Maybe GPT-4 merely "understands" or "predicts" human morality without actually "caring" about human morality. But while these scenarios are logically possible, they seem less plausible to me than the simple alternative explanation that alignment—like many other properties of ML models—generalizes well, in the natural way that you might similarly expect from a human. Of course, the fact that GPT-4 is easily alignable does not immediately imply that smarter-than-human AIs will be easy to align. However, I think this current evidence is still significant, and aligns well with prior theoretical arguments that alignment would be easy. In particular, I am persuaded by the argument that, because evaluation is usually easier than generation, it should be feasible to accurately evaluate whether a slightly-smarter-than-human AI is taking bad actions, allowing us to shape its rewards during training accordingly. After we've aligned a model that's merely slightly smarter than humans, we can use it to help us align even smarter AIs, and so on, plausibly implying that alignment will scale to indefinitely higher levels of intelligence, without necessarily breaking down at any physically realistic point. 3. The default social response to AI will likely be strong: One reason to support heavy regulations on AI right now is if you think the natural "default" social response to AI will lean too heavily on the side of laissez faire than optimal, i.e., by default, we will have too little regulation rather than too much. In this case, you could believe that, by advocating for regulations now, you're making it more likely that we regulate AI a bit more than we otherwise would have, pushing us closer to the optimal level of regulation. I'm quite skeptical of this argument because I think that the default response to AI (in the absence of intervention from the EA community) will already be quite strong. My view here is informed by the base rate of technologies being overregulated, which I think is quite high. In fact, it is difficult for me to name even a single technology that I think is currently clearly underregulated by society. By pushing for more regulation on AI, I think it's likely that we will overshoot and over-constrain AI relative to the optimal level. In other words, my personal bias is towards thinking that society will regulate technologies too heavily, rather than too loosely. And I don't see a strong reason to think that AI will be any different from this general historical pattern. This makes me hesitant to push for more regulation on AI, since on my view, the marginal impact of my advocacy would likely be to push us even further in the direction of "too much regulation", overshooting the optimal level by even more than what I'd expect in the absence of my advocacy. 4. I view unaligned AIs as having comparable moral value to humans: This idea was explored in one of my most recent posts. The basic idea is that, under various physicalist views of consciousness, you should expect AIs to be conscious, even if they do not share human preferences. Moreover, it seems likely that AIs — even ones that don't share human preferences — will be pretrained on human data, and therefore largely share our social and moral concepts. 
Since unaligned AIs will likely be both conscious and share human social and moral concepts, I don't see much reason to think of them as less "deserving" of life and liberty, from a cosmopolitan moral perspective. They will likely think similarly to the way we do across a variety of relevant axes, even if their neural structures are quite different from our own. As a consequence, I am pretty happy to incorporate unaligned AIs into the legal system and grant them some control of the future, just as I'd be happy to grant some control of the future to human children, even if they don't share my exact values. Put another way, I view (what I perceive as) the EA attempt to privilege "human values" over "AI values" as being largely arbitrary and baseless, from an impartial moral perspective. There are many humans whose values I vehemently disagree with, but I nonetheless respect their autonomy, and do not wish to deny these humans their legal rights. Likewise, even if I strongly disagreed with the values of an advanced AI, I would still see value in their preferences being satisfied for their own sake, and I would try to respect the AI's autonomy and legal rights. I don't have a lot of faith in the inherent kindness of human nature relative to a "default unaligned" AI alternative. 5. I'm not fully committed to longtermism: I think AI has an enormous potential to benefit the lives of people who currently exist. I predict that AIs can eventually substitute for human researchers, and thereby accelerate technological progress, including in medicine. In combination with my other beliefs (such as my belief that AI alignment will probably be somewhat easy), this view leads me to think that AI development will likely be net-positive for people who exist at the time of alignment. In other words, if we allow AI development, it is likely that we can use AI to reduce human mortality, and dramatically raise human well-being for the people who already exist. I think these benefits are large and important, and commensurate with the downside potential of existential risks. While a fully committed strong longtermist might scoff at the idea that curing aging might be important — as it would largely only have short-term effects, rather than long-term effects that reverberate for billions of years — by contrast, I think it's really important to try to improve the lives of people who currently exist. Many people view this perspective as a form of moral partiality that we should discard for being arbitrary. However, I think morality is itself arbitrary: it can be anything we want it to be. And I choose to value currently existing humans, to a substantial (though not overwhelming) degree. This doesn't mean I'm a fully committed near-termist. I sympathize with many of the intuitions behind longtermism. For example, if curing aging required raising the probability of human extinction by 40 percentage points, or something like that, I don't think I'd do it. But in more realistic scenarios that we are likely to actually encounter, I think it's plausibly a lot better to accelerate AI, rather than delay AI, on current margins. This view simply makes sense to me given the enormously positive effects I expect AI will likely have on the people I currently know and love, if we allow development to continue.
First in-ovo sexing in the US

Egg Innovations announced that they are "on track to adopt the technology in early 2025." Approximately 300 million male chicks are ground up alive in the US each year (since only female chicks are valuable) and in-ovo sexing would prevent this.

UEP originally promised to eliminate male chick culling by 2020; needless to say, they didn't keep that commitment. But better late than never!

Congrats to everyone working on this, including @Robert - Innovate Animal Ag, who founded an organization devoted to pushing this technology.[1]

1. ^ Egg Innovations says they can't disclose details about who they are working with for NDA reasons; if anyone has more information about who deserves credit for this, please comment!
harfe
4d
Consider donating all or most of your Mana on Manifold to charity before May 1. Manifold is making multiple changes to the way the site works; you can read their announcement here. The main reason for donating now is that Mana will be devalued from the current 1 USD:100 Mana to 1 USD:1000 Mana on May 1. Thankfully, the 10k USD/month charity cap will not be in place until then.

Also, this part might be relevant for people with large positions they want to sell now:

> One week may not be enough time for users with larger portfolios to liquidate and donate. We want to work individually with anyone who feels like they are stuck in this situation and honor their expected returns and agree on an amount they can donate at the original 100:1 rate past the one week deadline once the relevant markets have resolved.
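For concreteness, here's a minimal sketch of the devaluation arithmetic (the 50,000-Mana balance is hypothetical; the rates are from the announcement above):

```python
# Rough illustration (my own sketch, not Manifold's code): the same Mana
# balance converts to 10x less USD for charity after the May 1 devaluation.

def mana_to_usd(mana: int, mana_per_usd: int) -> float:
    """USD value of donating a given Mana balance at a given rate."""
    return mana / mana_per_usd

balance = 50_000  # hypothetical user balance in Mana

print(f"Donate before May 1: ${mana_to_usd(balance, 100):,.2f}")   # $500.00
print(f"Donate after May 1:  ${mana_to_usd(balance, 1000):,.2f}")  # $50.00
```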
Dustin Moskovitz claims "Tesla has committed consumer fraud on a massive scale", and "people are going to jail at the end" https://www.threads.net/@moskov/post/C6KW_Odvky0/

Not super EA relevant, but I guess relevant inasmuch as Moskovitz funds us and Musk has in the past too. If this were just some random commentator I wouldn't take it seriously at all, but I'm a bit more inclined to believe Dustin will take some concrete action. I'm not sure I've read everything he's said about it, as I'm not used to how Threads works.
With the US presidential election coming up this year, some of y'all will probably want to discuss it.[1] I think it's a good time to restate our politics policy.

tl;dr: Partisan politics content is allowed, but will be restricted to the Personal Blog category. On-topic policy discussions are still eligible as frontpage material.

1. ^ Or the expected UK elections.

Popular comments

Recent discussion

The last ten years have witnessed rapid advances in the science of animal cognition and behavior. Striking results have hinted at surprisingly rich inner lives in a wide range of animals, driving renewed debate about animal consciousness. 

To highlight these advances...


Could someone please explain how much extra value this adds given that we already have the Cambridge declaration?

Per the discussion in my last advertising post, I'm currently aiming to write a post every 3 months advertising EA infrastructure projects that would otherwise struggle to get and maintain awareness.

Please let me know if there's a project I should add (see inclusion criteria...


I've put them all in a sequence, whose link is at the very top, but I guess they need something more visible?

David_Moss commented on Priors and Prejudice 1h ago

This post is easily the weirdest thing I've ever written. I also consider it the best I've ever written - I hope you give it a chance. If you're not sold by the first section, you can safely skip the rest.

I

Imagine an alternate version of the Effective Altruism movement,...

saulius
1h
I like making a distinction between superficial beliefs and deeply held beliefs, which are often entirely subconscious. You have a superficial belief that Starcraft is balanced but a deeply held belief that your faction is the weakest.

For another example, my dad lived all his life in a world where alcohol was socially acceptable, while everyone agreed that all other drugs were the worst thing ever, quickly leading to addiction, etc. He once even remarked how if alcohol were invented today, it would surely be illegal because it has so many negative consequences, even compared to some other drugs. But it's just a funny thought to him. He offers me a drink whenever I come to visit him, but he got very concerned when I mentioned that I've tried cannabis. He can't just suddenly rewire his brain to change the associations he has with something like cannabis. Even if I tell him about studies finding that cannabis isn't that harmful, especially when used rarely, in his subconscious there might barely be a difference between cannabis and drugs like heroin. Maybe he could rewire his subconscious reaction by going through all his memories where he was told something bad about drugs and reinterpreting them in the face of the new evidence. But ain't nobody got time for that.

Well, it's worth trying to rewire yourself about deeply held beliefs that really harm you, like "I am unlovable", "I don't deserve happiness", "I can't trust anyone", etc. This is a big part of what therapy does, I think. But for most topics, like Starcraft factions, we just have to accept that there will always be a mismatch between superficial beliefs and deeply held beliefs.

Sounds broadly like the belief vs alief distinction.


My definition of “capitalism” is:

An economy with capital markets (in addition to markets in goods and services).

Most of my friends and acquaintances generally don’t have a precise definition of “capitalism”, but use the word to mean something like:

The

...

I think of publicly traded firms as "publicly" (collectively) owned in the sense that many members of the public own shares of them directly or indirectly through things like ETFs and mutual funds. It gets complicated by the fact that ownership of most publicly traded companies is concentrated among a few stockholders.

Bob Jacobs
13h
Because governments can trade. E.g., if the governments of the Netherlands and Germany are looking to sell some firms they own, and the governments of Belgium and Luxembourg are giving competing offers to buy those firms, we have a market without the firms being privately owned.
Yarrow B.
15h
Unfortunately, I believe there’s more terminological confusion between us than I currently have the energy to try to clear up.

The Centre for Exploratory Altruism Research (CEARCH) is an EA organization working on cause prioritization research as well as grantmaking and donor advisory. This project was commissioned by the leadership of the Meta Charity Funders (MCF) – also known as the Meta Charity...

Arden Koehler
16h
Hey, Arden from 80,000 Hours here – I haven't read the full report, but given the time sensitivity with commenting on forum posts, I wanted to quickly provide some information relevant to some of the 80k mentions in the qualitative comments, which were flagged to me.

Regarding whether we have public measures of our impact & what they show:

It is indeed hard to measure how much our programmes counterfactually help move talent to high impact causes in a way that increases global welfare, but we do try to do this. From the 2022 report, the relevant section is here. Copying it in as there are a bunch of links.

Some elaboration:

  • DIPY estimates are our measure of counterfactual career plan shifts we think will be positive for the world. Unfortunately it's hard to get an accurate read on counterfactuals and response rates, so these are only very rough estimates & we don't put that much weight on them.
  • We report on things like engagement time & job board clicks as *lead metrics* because we think they tend to flow through to counterfactual high impact plan changes, & we're able to measure them much more readily.
  • Headlines from some of the links above:
    • From our own survey (2,138 respondents), on the overall social impact that 80,000 Hours had on their career or career plans:
      • 1,021 (50%) said 80,000 Hours increased their impact.
        • Within this we identified 266 who reported a >30% chance of 80,000 Hours causing them to take a new job or graduate course (a "criteria-based plan change").
      • 26 (1%) said 80,000 Hours reduced their impact.
        • Themes in answers were demoralisation and causing career choices that were a poor fit.
    • Open Philanthropy's EA/LT survey asked their respondents "What was important in your journey towards longtermist priority work?" – it has a lot of different results and feels hard to summarise, but it showed a big chunk of people considered 80k a factor in ending up working where the

The 2020 EA survey link says "More than half (50.7%) of respondents cited 80,000 Hours as important for them getting involved in EA". (2022 says something similar.)

I would also add these results, which I think are, if anything, even more relevant to assessing impact:

...

I wanted to reflect on my first year as a full-time community builder at EA Switzerland. The lessons I share here might be more useful for people who are more or less involved in community building or field building / coordination, but I think some of them are not only work-useful but also life-useful (at least to me). I don't think they are specific to the Swiss context either.

So here is a pile of things I (re)learned:

On People:

1. Sometimes unstructured conversations are the most productive conversations.

I tend to prepare for meetings and think about the best ways to make the conversation time as useful as possible. Most of the time, this is also what's expected of me, especially when the person I'm meeting with is very busy and their time is more valuable than mine. And I might project that need for time optimization onto all my meetings.

This involves a lot of guesswork and anticipation about...


Written by Claude, and very lightly edited.

In a recent episode of The Diary of a CEO podcast, guest Bryan Johnson, founder of Kernel and the Blueprint project, laid out a thought-provoking perspective on what he sees as the most important challenge and opportunity of our...

Linch
8h

I thought this summary by TracingWoodgrains was good (in terms of being a summary; I don't know enough about the object level to know whether it's accurate). If roughly accurate, it paints an extremely unflattering picture of Johnson.


Originally posted on my blog

A very interesting discussion I came across online between Cosmicskeptic (Alex) and Earthlings Ed (Ed Winters) brought forth several points that I have wondered about in the past.  In one segment, Alex poses the following question: ...


Thanks for the comment. I suspect a couple of distinct elements have been conflated in your arguments, which I will try to disentangle.

As far as practical considerations in the context of personal changes to limit harm towards animals go, I not only agree with you that first-order veganism is sensible, it is also one of the key reasons why I am a 99% first-order vegan. Forget animals, I am just being kind to myself and eliminating decision fatigue by following a simple rule that says: animal products, no go. It just makes things s...

We recently published a new core career advice series. It provides a concise, accessible intro to some of the most important ideas for planning an impactful career. Check it out on our site!

What is the core advice series?

The core advice series distills the most important...


Hey Jamie, thanks for the comment!

80K and Probably Good have the same goal: get more people into impactful careers. Where we differ is mostly in emphasis and approach. 

At a high level Probably Good differs in a few significant ways:

  • While 80K focuses more on longtermism, x-risk, and AI risk, we aim to provide impact-focused career advice for people in a wide range of high-impact careers, across many cause areas (more cause areas still coming :)).
  • Correspondingly, we aim to give (relatively) more weight to worldview diversification, moral uncertainty, an
...