Ways the world is getting better
Click the banner to add a piece of good news


Quick takes

FAQ: “Ways the world is getting better” banner

The banner will only be visible on desktop. If you can't see it, try expanding your window. It'll be up for a week.

How do I use the banner?
1. Click on an empty space to add an emoji.
2. Choose your emoji.
3. Write a one-sentence description of the good news you want to share.
4. Link an article or forum post that gives more information.

If you’d like to delete your entry, click the cross that appears when you hover over it. It will be deleted for everyone.

What kind of stuff should I write?
Anything that qualifies as good news relevant to the world's most important problems. For example, Ben West’s recent quick takes (1, 2, 3). Avoid posting partisan political news, but the passage of relevant bills and policies is on topic.

Will my entry be anonymous?
All submissions are displayed without your Forum name, so they are ~anonymous to other users. However, usual moderation norms still apply, and we may remove duplicates or borderline trollish submissions. This is an experiment, so we reserve the right to moderate heavily if necessary.

Ask any other questions you have in the comments below. Feel free to DM me with feedback or comments.
Linch
Introducing Ulysses*, a new app for grantseekers.

We (Austin Chen, Caleb Parikh, and I) built an app! You can test the app out if you’re writing a grant application! You can put in sections of your grant application** and the app will try to give constructive feedback on your application. Right now we're focused on the "Track Record" and "Project Goals" sections of the application. (The main hope is to save back-and-forth time between applicants and grantmakers by asking you questions that grantmakers might want to ask.)

Austin, Caleb, and I hacked together a quick app as a fun experiment in coworking and LLM apps. We wanted a short project that we could complete in ~a day. Working on it was really fun! We mostly did it for our own edification, but we’d love it if the product is actually useful for at least a few people in the community!

As grantmakers in AI Safety, we’re often thinking about how LLMs will shape the future; the idea for this app came out of brainstorming, “How might we apply LLMs to our own work?”. We reflected on common pitfalls we see in grant applications, and I wrote a very rough checklist/rubric and graded some Manifund/synthetic applications against the rubric. Caleb then generated a small number of few-shot prompts by hand and used LLMs to generate further prompts for different criteria (e.g., concreteness, honesty, and information on past projects) using a “meta-prompting” scheme. Austin set up a simple interface in Streamlit to let grantees paste in parts of their grant proposals. All of our code is open source on GitHub (but not open weight 😛).***

This is very much a prototype, and everything is very rough, but please let us know what you think! If there’s sufficient interest, we’d be excited about improving it (e.g., by adding other sections or putting more effort into prompt engineering).

To be clear, the actual LLM feedback isn’t necessarily good or endorsed by us, especially at this very early stage. As usual, use your own best judgment before incorporating the feedback.

*Credit to Saul for the name, who originally got the Ulysses S. Grant pun from Scott Alexander.

** Note: Our app will not be saving your data locally. We are using the OpenAI API for our LLM feedback. OpenAI says that it won’t use your data to train models, but you may still wish to be cautious with highly sensitive data anyway.

*** Linch led a discussion on the potential capabilities insights of our work, but we ultimately decided that it was asymmetrically good for safety; if you work on a capabilities team at a lab, we ask that you pay $20 to LTFF before you look at the repo.
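As a rough illustration of the pattern described above (not the actual code in the repo; see GitHub for that), a minimal Streamlit page that sends one application section to the OpenAI API might look like the sketch below. The criteria, prompt wording, and model name are illustrative assumptions.

```python
# Minimal sketch (not the real Ulysses code): a Streamlit page that sends one
# section of a grant application to the OpenAI API and displays the feedback.
# Criteria, prompt wording, and model name are illustrative assumptions.
import streamlit as st
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITERIA = ["concreteness", "honesty", "information on past projects"]  # example rubric items

def feedback_prompt(section_name: str, text: str) -> str:
    """Build a reviewer-style prompt asking the questions a grantmaker might ask."""
    criteria = ", ".join(CRITERIA)
    return (
        f"You are reviewing the '{section_name}' section of a grant application.\n"
        f"Evaluate it against these criteria: {criteria}.\n"
        "List the clarifying questions a grantmaker would likely ask, "
        "and suggest concrete improvements.\n\n"
        f"Section text:\n{text}"
    )

st.title("Grant application feedback (sketch)")
section = st.selectbox("Section", ["Track Record", "Project Goals"])
text = st.text_area("Paste the section here")

if st.button("Get feedback") and text:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": feedback_prompt(section, text)}],
    )
    st.markdown(response.choices[0].message.content)
```

Something like this runs with `streamlit run app.py`. The “meta-prompting” step sits one level above this: the same chat API can be used to generate and refine per-criterion prompt variants before they’re baked into the app.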
Common prevalence estimates are often wrong. Examples: snakebites and my experience reading the Long Covid literature. Both institutions like the WHO and the academic literature appear to be incentivized to exaggerate. I think the Global Burden of Disease might be a more reliable source, but I have not looked into it. I advise everyone using prevalence estimates to treat them with some skepticism and to look up the source.
Inside Wytham Abbey, the £15 Million Castle Effective Altruism Must Sell [Bloomberg]

From the article:

> Effective Ventures has since come to a settlement with the FTX estate and paid back the $26.8 million given to it by FTX Foundation. [...] It’s amid such turmoil that Wytham Abbey is being listed on the open market for £15 million [...]
>
> Adjusted for inflation, the purchase price of the house two years ago now equals £16.2 million. [...] The listing comes as homes on the UK’s once-hot country market are taking longer to sell, forcing some owners to offer discounts.

I still think the intangible reputational damage is worse, but a loss of a million pounds (that could've been spent on malaria bed nets) would be nothing to sneeze at either. (archive link)
EAGxUtrecht (July 5–7) is now inviting applicants from the UK (alongside other Western European regions that don't currently have an upcoming EAGx).[1] Apply here!

Ticket discounts are available and we have limited travel support.

Utrecht is very easy to get to. You can fly/Eurostar to Amsterdam and then every 15 mins there's a direct train to Utrecht, which only takes 35 mins (and costs €10.20).

1. ^ Applicants from elsewhere are encouraged to apply, but the bar for getting in is much higher.


Recent discussion

Linch posted a Quick Take: Introducing Ulysses*, a new app for grantseekers (see the full quick take above).

Co-Authors: @Rocket, @Ryan Kidd, @LauraVaughan, @McKennaFitzgerald, @Christian Smith, @Juan Gil, @Henry Sleight

The ML Alignment & Theory Scholars program (MATS) is an education and research mentorship program for researchers entering the field of AI safety. This winter, we held the fifth iteration of the MATS program, in which 63 scholars received mentorship from 20 research mentors. In this post, we motivate and explain the elements of the program, evaluate our impact, and identify areas for improving future programs.

Summary

Key details about the Winter Program:

...

This post is a mix of

  • shameless explicit self-promotion
  • trying to explain some subtleties about what kind of thing "LessOnline" actually is.

Shameless self-promotion bit:

LessOnline is in 3 weeks (May 31 – June 2). It's "a festival of writers who are wrong on the internet (but try to be less so)", celebrating truthseeking and blogging.

Early ticket prices end this Monday (aka 3 days from now as of me posting this).

Writers attending include Scott Alexander, Zvi Mowshowitz, Eliezer Yudkowsky, Patrick McKenzie, Agnes Callard, Katja Grace, Kevin Simler, Andy Matuschak, Cremieux Recueil, Duncan Sabien, Joe Carlsmith, Aella, Clara Collier, Alexander Wales, Sarah Constantin, and more.

It'll be a weekend filled with talks, workshops, puzzle-hunts, dance parties, and late-night conversations around the fireside. There's also on-site

...
Peter Wildeford posted a Quick Take

This could be a long slog, but I think it could be valuable to identify the top ~100 open-source libraries and assess their level of resourcing, to avoid future attacks like the XZ attack. In general, I think work on hardening systems is an underrated aspect of defending against future highly capable autonomous AI agents.
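As a very rough sketch of what a first pass might look like (my own illustration, not something proposed in the quick take), one could pull contributor counts for a candidate list of repositories from the GitHub REST API as a crude proxy for how thinly maintained each library is. The repo list below is a placeholder.

```python
# Rough sketch: use contributor counts from the GitHub REST API as a crude
# proxy for maintainer resourcing. The repo list is a placeholder; a real
# effort would also look at funding, bus factor, commit recency, etc.
import requests

REPOS = ["tukaani-project/xz", "madler/zlib", "openssl/openssl"]  # illustrative only

def contributor_stats(repo: str, top_n: int = 5) -> dict:
    url = f"https://api.github.com/repos/{repo}/contributors"
    resp = requests.get(url, params={"per_page": 100}, timeout=30)
    resp.raise_for_status()
    contributors = resp.json()
    total = sum(c["contributions"] for c in contributors)
    top = sum(c["contributions"] for c in contributors[:top_n])
    return {
        "repo": repo,
        "contributors": len(contributors),
        # Share of commits from the top few people: higher = more fragile.
        "top_share": round(top / total, 2) if total else None,
    }

for repo in REPOS:
    print(contributor_stats(repo))
```

A serious version would layer on funding data, downstream dependency counts, and release cadence, and would need an authenticated token to avoid GitHub's unauthenticated rate limits.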


Cross-posted from LessWrong

In this Rational Animations video, we discuss s-risks (risks from astronomical suffering), which involve an astronomical number of beings suffering terribly. Researchers on this topic argue that s-risks have a significant chance of occurring and...


I'm surprised the video doesn't mention cooperative AI and avoiding conflict among transformative AI systems, as this is (apparently) a priority of the Center on Long-Term Risk, one of the main s-risk organizations. See Cooperation, Conflict, and Transformative Artificial Intelligence: A Research Agenda for more details.

mic
I wouldn't consider factory farming to be an instance of astronomical suffering, as bad as the practice is, since I don't think the suffering from one century of factory farming exceeds hundreds of millions of years of wild animal suffering. However, perhaps it could be an s-risk if factory farming somehow continues for a billion years. For reference, here is the definition of s-risk from a talk by CLR in 2017:

Animal Ethics has recently launched Senti, an Ethical AI assistant designed to answer questions related to animal ethics, wild animal suffering, and longtermism. We at Animal Ethics believe that while AI technologies could potentially pose significant risks to animals, ...


Great to see this! One quick piece of feedback: It takes a while to see a response from the chatbot. Are you planning on streaming text responses in the future?
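For what it's worth, if the backend happens to use the OpenAI API (an assumption on my part), streaming is mostly a matter of setting a flag and iterating over chunks, roughly like this:

```python
# Hedged sketch: token-by-token streaming with the OpenAI Python client, so the
# user sees text as it is generated rather than waiting for the full response.
# Model name and question are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

stream = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": "What is wild animal suffering?"}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

Streaming doesn't reduce total generation time, but it makes the wait feel much shorter.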

Thank you to James Ozden for feedback on this post.

Edit: I added "relatively" to the title to more precisely capture my claim. To be clear, I think CWRs are still underfunded in absolute terms.
 

In this post I argue that corporate welfare reforms (CWRs)* are relatively...


This is indeed a legitimate concern. We do not have accurate information on the distribution of BCC-approved breeds used in the commitments made so far, but I believe that organizations working on and monitoring the commitments (possibly The Humane League and CIWF, which publishes the Chicken Track) are likely to have this information. From statements of companies' representatives, it seems that the Hubbard breeds are prevailing in Europe; see e.g. this statement: "In Europe, where the issue of breed is more advanced than in the U.S., the Hubbard JA757, ...

Just as the 2022 crypto crash had many downstream effects for effective altruism, so could a future crash in AI stocks have several negative (though hopefully less severe) effects on AI safety.

Why might AI stocks crash?

The most obvious reason AI stocks might crash is that...

Greg_Colbourn
Well, the bottom line is extinction, for all of us. If the COIs block enough people from taking sufficient action before it's too late, then that's what happens. The billions of EA money left in the bank as foom-doom hits will be useless; it might as well never have been accumulated in the first place.

I'll also note that there are plenty of other potential good investments out there. Crypto has gone up about as much as AI stocks in general over the last year, and some of them (e.g. SOL) have gone up much more than NVDA. There are promising start-ups in many non-AI areas. (Join this group to see more.[1])

To answer your bottom two questions:

1. I think avoiding stock-market-wide index funds is probably going too far (as they are neutral about AI: if AI starts doing badly, e.g. because of regulation, then the composition of the index fund will change to reflect this).
2. I wouldn't recommend this as a strategy, unless they are already on their way down and heavy regulation looks imminent.

1. ^ But note that people are still pitching the likes of Anthropic in there! I don't approve of that.

Bitcoin is only up around 20% from its peaks in March and November 2021. It seems far riskier in general than just Nvidia (or SMH) when you look over longer time frames. Nvidia has been hit hard in the past, but not as often, and usually not as hard.

Smaller cap cryptocurrencies are even riskier.

I also think the case for outperformance of crypto in general is much weaker than for AI stocks, and it has gotten weaker as institutional investment has increased, which should increase market efficiency. I think the case for crypto has mostly been greater fool theory (a...

Vasco Grilo
Thanks for the post, Ben! I like that Founders Pledge's Patient Philanthropy Fund (PPF) invests in "a low-fee Global Stock Index Fund". I also have all my investments in global stocks (Vanguard FTSE All-World UCITS ETF USD Acc).

According to some highly authoritative anecdotal accounts, when a lone crab is placed in a bucket it will crawl out of its own accord, but put a pile of crabs in a bucket and they will pull each other down in an attempt to escape, dooming them all. This is a classic illustration...


It's odd that you say the reviewer provides no support for his assertions. It seems to me like the reviewer presents quite a bit of evidence.

For example, in responding to Bregman's claim that male control over female sexuality (and gender inequality more generally) began with the rise of agriculture, Buckner (the reviewer) mentions arranged marriages among the !Kung, a hunter-gatherer society. Buckner also references husbands beating their wives for infidelity among the Kaska, a nomadic foraging society. He also references the Ache, a hunter-gatherer socie...