Consider donating to the Malaria Consortium, or the Against Malaria Foundation.
I recently interviewed Sofya Lebedeva (currently pursuing a PhD in Clinical Medicine at Oxford University & co-founder of ARMOR - Alliance for Reducing Microbial Resistance).
We discussed:
▹ How she has forged a career path in biosecurity
▹ Skills and mindsets essential for success
▹ Her thoughts on PhDs and entrepreneurship
... and more!
This was such an energising, inspiring conversation! I'd love to conduct more career interviews with ambitious, mission-driven women working on mitigating risks from emerging technologies (like AI, biotech, WMDs) — so if you would like to be interviewed or have ideas for who I should talk to next, please let me know! 🙏
As you may have noticed, 80k After Hours has been releasing a new show where I and some other 80k staff sit down with a guest for a very free-form, informal, video(!) discussion that sometimes touches on topical themes around EA and sometimes… strays a bit further afield...
Dustin Moskovitz claims "Tesla has committed consumer fraud on a massive scale", and "people are going to jail at the end"
https://www.threads.net/@moskov/post/C6KW_Odvky0/
Not super EA relevant, but I guess relevant inasmuch as Moskovitz funds us and Musk has in the past too. If this were just some random commentator I wouldn't take it seriously at all, but I'm a bit more inclined to believe Dustin will take some concrete action. I'm not sure I've read everything he's said about it, as I'm not used to how Threads works.
Consider donating all or most of your Mana on Manifold to charity before May 1.
Manifold is making multiple changes to the way the platform works. You can read their announcement here. The main reason for donating now is that Mana will be devalued from the current 1 USD:100 ...
Forum post saying the same thing, with some discussion: https://forum.effectivealtruism.org/posts/SM3YzTsXmQ6BaFcsL/you-probably-want-to-donate-any-manifold-currency-this-week
Any help is appreciated: I am looking (so far unsuccessfully) for a number that nicely illustrates that people prefer their donations to help locally rather than internationally.
P.S.: One number I found was TLYCS's claim that only 10% of individual donations go to international causes. However, this seems misleading. The claim is based on the Giving USA report, which has international affairs as one cause area. But the alternative cause areas are not necessarily domestic, since they include, for example, giving to foundations (which encompasses international development as well).
Malaria is massive. Our World in Data writes: “Over half a million people died from the disease each year in the 2010s. Most were children, and the disease is one of the leading causes of child mortality.” Or, as Rob Mather, CEO of the Against Malaria Foundation (AMF) phrases...
These questions came up as I read for this post, and I'd love to hear answers from more knowledgeable people:
In this "quick take", I want to summarize some of my idiosyncratic views on AI risk.
My goal here is to list just a few ideas that cause me to approach the subject differently from how I perceive most other EAs view the topic. These ideas largely push me in the direction...
I want to say thank you for holding the pole of these perspectives and keeping them in the dialogue. I think that they are important and it's underappreciated in EA circles how plausible they are.
(I definitely don't agree with everything you have here, but typically my view is somewhere between what you've expressed and what is commonly expressed in x-risk focused spaces. Often I'm also drawn to say "yeah, but ..." -- e.g. I agree that a treacherous turn is not so likely at global scale, but I don't think it's completely out of the question, and given that, I think safeguarding against it is worth serious attention.)
The obvious example would be synthetic biology, gain-of-function research, and similar.
Can you explain why you suspect these things should be more regulated than they currently are?
In particular, I am persuaded by the argument that, because evaluation is usually easier than generation, it should be feasible to accurately evaluate whether a slightly-smarter-than-human AI is taking unethical actions, allowing us to shape its rewards during training accordingly. After we've aligned a model that's merely slightly smarter than humans, we can use it to help us align even smarter AIs, and so on, plausibly implying that alignment will scale to indefinitely higher levels of intelligence, without necessarily breaking down at any physically realistic point.
This reasoning seems to imply that you could use GPT-2 to oversee GPT-4 by bootstrapping through a chain of models at scales between GPT-2 and GPT-4. However, this isn't true: the weak-to-strong generalization paper finds that this doesn't work, and indeed that bootstrapping like this doesn't help at all for ChatGPT reward modeling (it helps on chess puzzles but, I believe, on nothing else they investigate).
I think this sort of bootstrapping argument might work if we could ensure that each model in the chain was sufficiently aligned and capable of reasoning that it would carefully reason about what humans would want if they were more knowledgeable, and then rate outputs based on this. However, I don't think GPT-4 is either aligned enough or capable enough that we see this behavior. And I still think it's unlikely to work even under these generous assumptions (though I won't argue for this here).
From the Table of Contents, you might have guessed that this post offers different things to different people investing in their personal growth.
It's built like a playlist for your interests. Jumping to the sections that match your interests is a great way to navigate...
Glad it was helpful! Happy to see that you used the 'playlist'-type function of this to kick off these thoughts.
This sounds like a nice process you've carved out for yourself. Always pleased to see when people are at such an advanced position in being conscientious about their growth.
Similar to what it sounds like your process is, my sense is that the best frequency for working with most coaches/therapists follows an 'organic cadence' tied to particular phases and occasions. It seems like, in most cases, consistent indefinite sessions are m...
Most things we use, and particularly the food we consume, rely on an intact global supply chain. Without trade, essential resources such as fertilizers would become inaccessible, making food production much harder. This post aims to provide an overview of the current trade system, highlighting its potential vulnerabilities and exploring the factors that have contributed to this state. The focus is on food trade, given its importance and vulnerability.
A good overview of the state of the trade system is given in D’Odorico et al. (2014), which tracks global flows of food via trade data from the Food and Agriculture Organization of the United Nations (FAO). They find that around a quarter of the food we produce is traded and that this share has increased in recent decades. The amount of food we trade is also growing faster than the amount we produce. This...
I don't think my comment is likely to be all that useful, but putting it here anyway.
I personally find it difficult to pay attention to podcasts with more than 2 people. I tried to listen to the first episode for about 30 minutes and this one for about 5 minutes, and I couldn't comfortably follow them while paying attention to other tasks (walking around, cleaning, cooking etc.).
I think it's likely that more diversity in the space is good though, as many of the most popular podcasts I see on e.g. Youtube tend to be more than two people. I suspe...