markov_user
399 karma · Joined May 2018
Comments: 94

Note that "A or B" decisions are often false dichotomies, and you may be overlooking alternative options that combine the advantages of both. So narrowing in on the given options too soon can be a mistake, and it's often useful to try to come up with more alternatives first.

Also, in my experience many of the decisions I get stuck on fall somewhere between 2 and 3: I know their implications and have most of the information, but the outcomes differ along several dimensions. E.g. option 1 is safe and somewhat impactful, while option 2 is potentially higher impact but much riskier and comes at the cost of disappointing somebody I care about. I'm not sure to what degree a decision doc is suitable for these types of problems in particular - but I've at least had a few cases where friends came up with a helpful way to reframe the situation that led to a valuable insight.

(That said, I definitely see your point that many EAs may be overthinking some of their decisions - though even then, I personally wouldn't feel comfortable just flipping a coin when values conflict. But in many other cases I agree that getting to any decision quickly, rather than getting stuck in decision paralysis, is a good approach.)

Would you say that, almost 4 years later, we've made progress on that front?

Köln does have a somewhat active local group currently (see https://forum.effectivealtruism.org/groups/6BpGMKtfmC2XLeih8) - I think they mostly coordinate via Signal, which, interestingly, is hidden behind the "Join us on Slack" button on the forum page. I don't think this had much to do with this post, though.

I'm not aware of anything having happened in Dortmund or the general Ruhrgebiet in the last year or so, with the exception of the Doing Good Together Düsseldorf group.

why restarting your device works to solve problems, but it does (yes, I did look it up, so no need to explain it

I'm now stuck in "I think I know a decent metaphor but you don't want me to share it" land... but then maybe I'll just share it for other people. :P

Basically it's less about how computers work on any technical level, and more about which state they're in. Imagine you want to walk to your favorite store. If you're at home, you probably know the way by heart and can navigate there reliably. But now imagine you've been up for a while and have been walking around for hours, following semi-random commands from different people. By following all these unrelated commands, you've ended up doing a handstand on a hill next to a lake on the opposite end of town, somewhere you've never been before. From that weird state, trying to get to the store near your home can easily fail, and you get stuck somewhere along the way. Restarting the computer is basically the same as teleporting home: it's in a well-defined, clean, predictable state again, one from which you know most of the usual day-to-day actions can be performed reliably. And the longer it runs without a restart, the more chances it has to get, in one way or another, into a state that makes it fail at certain tasks you want it to do.
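To make the metaphor slightly more concrete, here's a toy sketch (purely illustrative, with made-up names - not how an operating system actually works): a long-running process accumulates state that later tasks don't anticipate, and a "restart" resets it to a known-good state.

```python
# Toy illustration of "state drift": every command mutates shared state,
# and a routine that was only ever tested from the clean boot state fails.

class Device:
    def __init__(self):
        # Freshly booted: a clean, predictable state ("at home").
        self.state = {"location": "home", "posture": "standing"}

    def run_command(self, effects):
        # Each semi-random command mutates state in ways later tasks don't expect.
        self.state.update(effects)

    def walk_to_store(self):
        # This routine only works from the well-defined starting state.
        if self.state != {"location": "home", "posture": "standing"}:
            raise RuntimeError(f"stuck in unexpected state: {self.state}")
        return "arrived at the store"

device = Device()
device.run_command({"location": "hill by the lake", "posture": "handstand"})
# device.walk_to_store()  # would raise: the accumulated state breaks the task

device = Device()  # "restart": teleport home to the clean, predictable state
print(device.walk_to_store())  # works reliably again
```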

Productivity, perfectionism, and self-leadership increased in the correspondingly themed groups.

I guess "increased" here should be "improved"? Unless perfectionism actually did increase as well, which would seem like a surprising outcome. :)

On the one hand yes, but on the other hand it seems crucial to at least mention these observer-selection effects (the anthropic principle). There's a somewhat thin line between asking "why haven't we been wiped out?" and using the fact that we haven't been wiped out yet as evidence that this kind of scenario is generally unlikely. Of course it makes sense to discuss the question, but the "real" answer could well be "random chance", without any further implications for the likelihood of power-seeking AGI.
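One rough way to make that precise (my own sketch, not anything from the post): let O be the observation "we haven't been wiped out yet". Since observers only exist in worlds that survived, O is close to guaranteed under both hypotheses, so it barely updates the risk estimate:

```latex
P(O \mid \text{high risk}) \approx P(O \mid \text{low risk}) \approx 1
\;\;\Longrightarrow\;\;
\frac{P(\text{high risk} \mid O)}{P(\text{low risk} \mid O)}
\approx \frac{P(\text{high risk})}{P(\text{low risk})}
```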

Highly agree with the post. I discussed almost the same thing with a friend during the conference. Basically, the typical "don't attend talks, most of them are recorded and you can just watch them online later" advice isn't great imho - it seems like a fake alternative to me, in the sense that you skip a talk telling yourself "ah, I'll just watch it later", but in probably >90% of cases that never happens. So the actual alternative you're choosing is not "watch online later" but "don't watch at all". By the time the talk is online you'll have forgotten about it, and even if you remember it, deciding to watch a 30-minute talk at home on your own requires far more activation energy than just walking to another room while you're already at the conference.

Indeed, 1-1s will be more valuable most of the time for most people, and it's important to make this point to first-timers who might otherwise fill their schedule with talks and workshops. But if a talk or two (or three) is especially relevant to you, there's a strong case for attending. Even if you're sure you really would watch a talk online later, it may still be worth attending just to hear it a few months sooner than you otherwise would.

Plus, you can also schedule virtual 1-1s with people after the conference, so you're not necessarily missing out on anything. (And I'd argue that a "hey, my schedule is pretty packed, would you be available for a Zoom call some time next week?" message to an actual person is much more likely to actually happen than a vague "yeah, I'll probably watch this talk online at some point!" intention.)

Side note: I read the post on Pocket first, and it simply omitted section 7 without any hint of its existence. I wonder if that happens more frequently.

As for the post itself, I do agree with most of it. I think, though, that it (particularly point 1) risks reinforcing some people's perception of reaching out to well-known people as a potential status violation, which I think is already quite common in EA (although I know some people who would disagree with me on this). I would guess most people already have a tendency to "not waste important people's time" (whatever "important" means to them) and rather err on the side of avoiding these people, e.g. not asking for a 1-1 at a conference even though they might benefit greatly from it. In short, I agree quite strongly with your point 7, but not so much with (the general vibe of) point 1.

The recent push for productization is making everyone realize that alignment is a capability. A gaslighting chatbot is a bad chatbot compared to a harmless helpful one. As you can see currently, the world is phasing out AI deployment, fixing the bugs, then iterating.

While that's one way to look at it, another is to notice the arms-race dynamics and how every major tech company is now throwing LLMs at the public head over heels, even when they still have some severe flaws. Another observation is that e.g. OpenAI's safety efforts are not very popular among end users, given that in their eyes these safety measures make the systems less capable/interesting/useful. People tend to get irritated when their prompt is answered with "As a language model trained by OpenAI, I am not able to <X>", rather than feeling relief at being saved from a dangerous output.

As for your final paragraph, it is easy to say "<outcome X> is just one out of infinite possibilities", but you're equating trajectories with outcomes. The existence of infinitely many possible trajectories doesn't really help when there's a systematic reason for many or most of them to end in human extinction. Whether this is actually the case is of course an open and hotly debated question, but simply claiming "it's just a single point on the x axis, so the probability mass must be 0" is surely not how you get closer to an actual answer.
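To spell out the trajectories-vs-outcomes point a bit more formally (again my own sketch of the argument): each single trajectory may indeed carry zero probability, but an outcome corresponds to a whole set of trajectories, and that set can carry almost all of the probability mass:

```latex
P(\{\omega\}) = 0 \;\text{ for each individual trajectory } \omega,
\quad\text{yet}\quad
P\big(\{\omega : \mathrm{outcome}(\omega) = \text{extinction}\}\big)
\;\text{ can still be close to } 1.
```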

why the overemphasis/obsession on doom scenario?

Because it is extremely important that we do what we can to avoid such a scenario. I'm glad that e.g. airlines still invest a lot in improving flight safety and preventing accidents even though flying is already the safest way of traveling. Humanity is basically at this very moment boarding a giant AI-rplane that is about to take off for the very first time, and I'm rather happy there's a number of people out there looking at the possible worst case and doing their best to figure out how we can get this plane safely off the ground rather than saying "why are people so obsessed with the doom scenario? A plane crash is just one out of infinite possibilities, we're gonna be fine!".

it's not AI, more code completion with crowd-sourced code

Copilot is based on GPT-3, so imho it is just as much AI (or not) as ChatGPT is. And given that it's pretty much at the forefront of currently available ML technology, I'd be very inclined to call it AI, even if it's (superficially) limited to the use case of completing code.
