Healthier Hens (HH) aims to improve cage-free hen welfare, focusing on key issues such as keel bone fractures (KBFs). In the last 6 months, we’ve conducted a vet training in Kenya, found a 42% KBF prevalence, and are exploring promising alternative interventions in collaboration with the Welfare Footprint Project, publishing transferable findings along the way. Our staff satisfaction remains high, but concerns about operational capacity are on the rise. We're deciding on future strategies, considering funding and organizational changes. Our budget for Y3 (Sep ’23 to Sep ’24) is $65k-$135k, with a $0-70k funding gap.
In this post, we share key updates, lessons learned and our plans for immediate next steps. We hope others can benefit from what we’re observing and our attempts to identify promising pathways towards improved hen health and welfare. We welcome feedback from the community...
TL;DR: Participate online or in person over the weekend of 3rd to 5th May in an exciting and fun AI safety research hackathon focused on demonstrating and extrapolating risks to democracy from real-life threat models. We invite researchers, cybersecurity professionals, and governance experts to join, but it is open to everyone, and we will provide starter code templates to help you kickstart your team's projects. Join here.
Despite some of the largest potential risks from AI being related to our democratic institutions and the fragility of society, there is surprisingly little work demonstrating and extrapolating concrete risks from AI to democracy.
By putting together actual demonstrations of potential dangers and mindfully extrapolating these risks into the late 2020s, we can raise awareness among key decision-makers and stakeholders, thus driving the development...
Welcome! Use this thread to introduce yourself or ask questions about anything that confuses you.
PS: This thread is usually entitled "Open thread", but I'm experimenting with a more descriptive title this time.
The "Guide to norms on...
I have not researched longtermism deeply. However, what I have found out so far leaves me puzzled and skeptical. As I currently see it, you can divide what longtermism cares about into two categories:
1) Existential risk.
2) Common sense long-term priorities, such as:
Longtermism suggests a different focus within existential risks, because it feels very differently about "99% of humanity is destroyed, but the remaining 1% are able to rebuild civilisation" and "100% of humanity is destroyed, civilisation ends", even though from the perspective of people alive today these outcomes are very similar.
I think relative to neartermist intuitions about catastrophic risk, the particular focus on extinction increases the threat from AI and engineered biorisks relative to e.g. climate change and natural pandemics. Basically, total ...
The topics of working for an EA org and altruist careers are discussed occasionally in our local group.
I wanted to share my rough thoughts and some relevant Forum posts that I've compiled in this Google Doc. The main thesis is that, as far as I can tell, it's really difficult to get a job at an EA org, and most people will have messier career paths.
Some of the posts I link in the doc, specifically around alternate career paths:
The career and the community
Consider a wider range of jobs, paths and problems if you want to improve the long-term future
My current impressions on career choice for longtermists
Epistemic status: 97.75% certain.
For most of history, physical prowess and stature have been important. If you lived during pre-agricultural times, being taller and more agile would have made you a more capable hunter and gatherer. In confrontations with other...
One potential downside: shorter people would mean less physical capacity, as somewhat mentioned in the above area. I agree that most resources would be [doubled*], but resources that require physical labor would be produced comparatively twice as slowly, perhaps having a negative impact on the economy. I'm not sure if "the doubling" offsets this. Epistemic status: "Idk tho"
Four days ago I posted a question, Why are you reluctant to write on the EA Forum?, with a link to a Google Form. I received 20 responses.
This post is in three parts:
Use this thread to share information about EA-related roles you are looking to fill!
We’d like to help applicants and hiring managers coordinate, so we’ve set up this thread, and another called Who wants to be hired? (we did this last in 2022[1]).
To add your ...
Princeton University’s Research Program in Development Economics is looking for a Research and Policy Manager to provide high-level research support to Professor Pascaline Dupas and Prof. Seema Jayachandran, plus their colleagues. The role is similar to being a “chief of staff.”
We are looking for an exceptionally strong analytical thinker who has good writing and people skills and is dependable and competent, i.e., gets things done regardless of the task.
Salary: $85,000-$100,000, depending on seniority
Locati...
This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished!
Commenting and feedback guidelines
do you have a rough guess at what % this is a deal breaker for?
It's less a question of "%" and more of "who will this intimidate?".
Many of your top candidates will (1) currently be working somewhere, and (2) be looking at many EA-aligned jobs, and if many of those jobs require a work trial, that could be a problem.
(I just hired someone who was working full time, and I assume if we required a work trial then he just wouldn't be able to do it without quitting)
Easy ways to make this better:
Hi Ale, great to have you here!
Let me know (here or via DM) if you have any questions about the Forum, or if you want any content recommendations.
- Toby, Content Manager for the EA Forum.