All posts


Today, 28 April 2024

Personal Blogposts

Quick takes

A lot of policy research seems to be written with an agenda in mind, to shape the narrative. This undermines the point of policy research, which is supposed to inform stakeholders rather than actively convince or nudge them. It might cause polarization on some topics and probably drains legitimacy from the space. I have seen concerning parallels in the non-profit space, where some third-sector actors endorse or do things they see as good but which destroy trust in the sector. This gives me scary unilateralist's curse vibes.
In case you're interested in supporting my EA-aligned YouTube channel A Happier World: I've lowered the minimum funding goal from $10,000 to $2,500 to give donors confidence that their money will directly support the project. If the minimum funding goal isn't reached, you don't get your money back; instead it goes back into your Manifund balance for you to spend on a different project. I understand this may have been a barrier for some, which is why I lowered the goal. Links: Manifund fundraising page | EA Forum post announcement
How to communicate EA to the common-sense Christian: has it been done before? I'm considering writing a series of posts exploring the connection between EA and the common-sense Christianity you might encounter on the street if you were to ask someone about their 'faith.' I've looked into EA for Christians a bit, but haven't done a deep dive into their articles yet. I'm wondering what the consensus is on this group, and if anyone involved can give me a synopsis of how that's been going. Has it been effective? I'm posting this quick take as a means of feeling out the idea. This mini-series would probably consist of exploring EA from a common-sense place, considering how the use of Church-language can allow one to communicate more effectively and bypass being seen as a member of the out-group, and hopefully enable more Christians to see this movement as something they may want to be a part of even if they don't share the same first premises. I don't want to put more time into work that has already been deeply covered in the community, but I feel this is an area I can provide some insight into, as I have my motivations for reconciliation beyond academic interest. What are your thoughts?

Saturday, 27 April 2024

Personal Blogposts

Quick takes

I can't find a better place to ask this, but I was wondering whether/where there is a good explanation of the scepticism of leading rationalists about animal consciousness/moral patienthood. I am thinking in particular of Zvi and Yudkowsky. In the recent podcast with Zvi Mowshowitz on 80K, the question came up a bit, and I know he is also very sceptical of interventions for non-human animals on his blog, but I had a hard time finding a clear explanation of where this belief comes from. I really like Zvi's work, and he has been right about a lot of things I was initially on the other side of, so I would be curious to read more of his or similar people's thoughts on this. This seems like a place where there is potentially a motivation gap: people who don't work on animal welfare have little incentive to explain why they think the things I work on are not that useful.

Friday, 26 April 2024

Frontpage Posts

Quick takes

American Philosophical Association (APA) announces two $10,000 AI2050 Prizes for philosophical work related to AI, with June 23, 2024 deadline:  https://dailynous.com/2024/04/25/apa-creates-new-prizes-for-philosophical-research-on-ai/ https://www.apaonline.org/page/ai2050 https://ai2050.schmidtsciences.org/hard-problems/
Paul Graham about getting good at technology (bold is mine): > How do you get good at technology? And how do you choose which technology to get good at? Both of those questions turn out to have the same answer: work on your own projects. Don't try to guess whether gene editing or LLMs or rockets will turn out to be the most valuable technology to know about. No one can predict that. Just work on whatever interests you the most. You'll work much harder on something you're interested in than something you're doing because you think you're supposed to. > > If you're not sure what technology to get good at, get good at programming. That has been the source of the median startup for the last 30 years, and this is probably not going to change in the next 10. From "HOW TO START GOOGLE", March 2024. It's a talk for ~15 year olds, and it has more about "how to get good at technology" in it.
A corporation exhibits emergent behavior, over which no individual employee has full control. Because the unregulated market selects for profit and nothing else, any successful corporation becomes a kind of "financial paperclip optimizer". To prevent this, the economic system must change.
Everyone writing policy papers or doing technical work seems to be keeping generative AI at the back of their mind when framing their work or impact. This narrow focus on gen AI may well be net-negative for us: it unknowingly or unintentionally ignores ripple effects of the gen AI boom in other fields (like robotics companies getting more funding, leading to more capabilities, which leads to new types of risk). And guess who benefits if we do end up getting good evals and standards in place for gen AI? It seems to me companies and investors are the clear winners, because we then have to go back to the drawing board and advocate for the same kind of thing for robotics or a different kind of AI use-case, all while the development and capability cycles keep maturing. We seem to be in whack-a-mole territory now because of the Overton window shifting for investors.
This WHO press release was a good reminder of the power of immunization – a new study forthcoming in The Lancet reports that (liberally quoting / paraphrasing the release; a quick arithmetic check follows the list)
* global immunization efforts have saved an estimated 154 million lives over the past 50 years, 146 million of them children under 5 and 101 million of them infants
* for each life saved through immunization, an average of 66 years of full health were gained – with a total of 10.2 billion full-health years gained over the five decades
* measles vaccination accounted for 60% of the lives saved due to immunization, and will likely remain the top contributor in the future
* vaccination against 14 diseases has directly contributed to reducing infant deaths by 40% globally, and by more than 50% in the African Region
* the 14 diseases: diphtheria, Haemophilus influenzae type B, hepatitis B, Japanese encephalitis, measles, meningitis A, pertussis, invasive pneumococcal disease, polio, rotavirus, rubella, tetanus, tuberculosis, and yellow fever
* fewer than 5% of infants globally had access to routine immunization when the Expanded Programme on Immunization (EPI) was launched 50 years ago in 1974 by the World Health Assembly; today 84% of infants are protected with 3 doses of the vaccine against diphtheria, tetanus and pertussis (DTP) – the global marker for immunization coverage
* there's still a lot to be done – for instance, 67 million children missed out on one or more vaccines during the pandemic years
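As a rough sanity check on how the headline figures above fit together (pure back-of-the-envelope arithmetic on the numbers quoted from the release, not a reconstruction of the study's model):

```python
# Back-of-the-envelope check of the WHO / Lancet headline figures quoted above.
lives_saved = 154_000_000        # lives saved by immunization over 50 years
healthy_years_per_life = 66      # average years of full health gained per life saved
children_under_5 = 146_000_000   # of those lives, children under five

total_healthy_years = lives_saved * healthy_years_per_life
print(f"Implied full-health years gained: {total_healthy_years / 1e9:.1f} billion")  # ~10.2 billion, matching the release
print(f"Share of saved lives that were children under 5: {children_under_5 / lives_saved:.0%}")  # ~95%
```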

Thursday, 25 April 2024

Frontpage Posts

Quick takes

In this "quick take", I want to summarize some my idiosyncratic views on AI risk.  My goal here is to list just a few ideas that cause me to approach the subject differently from how I perceive most other EAs view the topic. These ideas largely push me in the direction of making me more optimistic about AI, and less likely to support heavy regulations on AI. (Note that I won't spend a lot of time justifying each of these views here. I'm mostly stating these points without lengthy justifications, in case anyone is curious. These ideas can perhaps inform why I spend significant amounts of my time pushing back against AI risk arguments. Not all of these ideas are rare, and some of them may indeed be popular among EAs.) 1. Skepticism of the treacherous turn: The treacherous turn is the idea that (1) at some point there will be a very smart unaligned AI, (2) when weak, this AI will pretend to be nice, but (3) when sufficiently strong, this AI will turn on humanity by taking over the world by surprise, and then (4) optimize the universe without constraint, which would be very bad for humans. By comparison, I find it more likely that no individual AI will ever be strong enough to take over the world, in the sense of overthrowing the world's existing institutions and governments by surprise. Instead, I broadly expect unaligned AIs will integrate into society and try to accomplish their goals by advocating for their legal rights, rather than trying to overthrow our institutions by force. Upon attaining legal personhood, unaligned AIs can utilize their legal rights to achieve their objectives, for example by getting a job and trading their labor for property, within the already-existing institutions. Because the world is not zero sum, and there are economic benefits to scale and specialization, this argument implies that unaligned AIs may well have a net-positive effect on humans, as they could trade with us, producing value in exchange for our own property and services. Note that my claim here is not that AIs will never become smarter than humans. One way of seeing how these two claims are distinguished is to compare my scenario to the case of genetically engineered humans. By assumption, if we genetically engineered humans, they would presumably eventually surpass ordinary humans in intelligence (along with social persuasion ability, and ability to deceive etc.). However, by itself, the fact that genetically engineered humans will become smarter than non-engineered humans does not imply that genetically engineered humans would try to overthrow the government. Instead, as in the case of AIs, I expect genetically engineered humans would largely try to work within existing institutions, rather than violently overthrow them. 2. AI alignment will probably be somewhat easy: The most direct and strongest current empirical evidence we have about the difficulty of AI alignment, in my view, comes from existing frontier LLMs, such as GPT-4. Having spent dozens of hours testing GPT-4's abilities and moral reasoning, I think the system is already substantially more law-abiding, thoughtful and ethical than a large fraction of humans. Most importantly, this ethical reasoning extends (in my experience) to highly unusual thought experiments that almost certainly did not appear in its training data, demonstrating a fair degree of ethical generalization, beyond mere memorization. It is conceivable that GPT-4's apparently ethical nature is fake. 
Perhaps GPT-4 is lying about its motives to me and in fact desires something completely different than what it professes to care about. Maybe GPT-4 merely "understands" or "predicts" human morality without actually "caring" about human morality. But while these scenarios are logically possible, they seem less plausible to me than the simple alternative explanation that alignment—like many other properties of ML models—generalizes well, in the natural way that you might similarly expect from a human. Of course, the fact that GPT-4 is easily alignable does not immediately imply that smarter-than-human AIs will be easy to align. However, I think this current evidence is still significant, and aligns well with prior theoretical arguments that alignment would be easy. In particular, I am persuaded by the argument that, because evaluation is usually easier than generation, it should be feasible to accurately evaluate whether a slightly-smarter-than-human AI is taking bad actions, allowing us to shape its rewards during training accordingly. After we've aligned a model that's merely slightly smarter than humans, we can use it to help us align even smarter AIs, and so on, plausibly implying that alignment will scale to indefinitely higher levels of intelligence, without necessarily breaking down at any physically realistic point.

3. The default social response to AI will likely be strong: One reason to support heavy regulations on AI right now is if you think the natural "default" social response to AI will lean more heavily towards laissez faire than is optimal, i.e., by default, we will have too little regulation rather than too much. In this case, you could believe that, by advocating for regulations now, you're making it more likely that we regulate AI a bit more than we otherwise would have, pushing us closer to the optimal level of regulation. I'm quite skeptical of this argument because I think that the default response to AI (in the absence of intervention from the EA community) will already be quite strong. My view here is informed by the base rate of technologies being overregulated, which I think is quite high. In fact, it is difficult for me to name even a single technology that I think is currently clearly underregulated by society. By pushing for more regulation on AI, I think it's likely that we will overshoot and over-constrain AI relative to the optimal level. In other words, my personal bias is towards thinking that society will regulate technologies too heavily, rather than too loosely. And I don't see a strong reason to think that AI will be any different from this general historical pattern. This makes me hesitant to push for more regulation on AI, since on my view, the marginal impact of my advocacy would likely be to push us even further in the direction of "too much regulation", overshooting the optimal level by even more than what I'd expect in the absence of my advocacy.

4. I view unaligned AIs as having comparable moral value to humans: This idea was explored in one of my most recent posts. The basic idea is that, under various physicalist views of consciousness, you should expect AIs to be conscious, even if they do not share human preferences. Moreover, it seems likely that AIs — even ones that don't share human preferences — will be pretrained on human data, and therefore largely share our social and moral concepts.
Since unaligned AIs will likely both be conscious and share human social and moral concepts, I don't see much reason to think of them as less "deserving" of life and liberty, from a cosmopolitan moral perspective. They will likely think similarly to the way we do across a variety of relevant axes, even if their neural structures are quite different from our own. As a consequence, I am pretty happy to incorporate unaligned AIs into the legal system and grant them some control of the future, just as I'd be happy to grant some control of the future to human children, even if they don't share my exact values. Put another way, I view (what I perceive as) the EA attempt to privilege "human values" over "AI values" as being largely arbitrary and baseless, from an impartial moral perspective. There are many humans whose values I vehemently disagree with, but I nonetheless respect their autonomy, and do not wish to deny these humans their legal rights. Likewise, even if I strongly disagreed with the values of an advanced AI, I would still see value in their preferences being satisfied for their own sake, and I would try to respect the AI's autonomy and legal rights. I don't have a lot of faith in the inherent kindness of human nature relative to a "default unaligned" AI alternative.

5. I'm not fully committed to longtermism: I think AI has an enormous potential to benefit the lives of people who currently exist. I predict that AIs can eventually substitute for human researchers, and thereby accelerate technological progress, including in medicine. In combination with my other beliefs (such as my belief that AI alignment will probably be somewhat easy), this view leads me to think that AI development will likely be net-positive for people who exist at the time of alignment. In other words, if we allow AI development, it is likely that we can use AI to reduce human mortality, and dramatically raise human well-being for the people who already exist. I think these benefits are large and important, and commensurate with the downside potential of existential risks. While a fully committed strong longtermist might scoff at the idea that curing aging might be important — as it would largely only have short-term effects, rather than long-term effects that reverberate for billions of years — by contrast, I think it's really important to try to improve the lives of people who currently exist. Many people view this perspective as a form of moral partiality that we should discard for being arbitrary. However, I think morality is itself arbitrary: it can be anything we want it to be. And I choose to value currently existing humans, to a substantial (though not overwhelming) degree. This doesn't mean I'm a fully committed near-termist. I sympathize with many of the intuitions behind longtermism. For example, if curing aging required raising the probability of human extinction by 40 percentage points, or something like that, I don't think I'd do it. But in more realistic scenarios that we are likely to actually encounter, I think it's plausibly a lot better to accelerate AI, rather than delay AI, on current margins. This view simply makes sense to me given the enormously positive effects I expect AI will likely have on the people I currently know and love, if we allow development to continue.
Dustin Moskovitz claims "Tesla has committed consumer fraud on a massive scale" and that "people are going to jail at the end": https://www.threads.net/@moskov/post/C6KW_Odvky0/ Not super EA-relevant, but I guess relevant inasmuch as Moskovitz funds us and Musk has in the past too. If this were just some random commentator I wouldn't take it seriously at all, but I'm a bit more inclined to believe Dustin will take some concrete action. Not sure I've read everything he's said about it, as I'm not used to how Threads works.
My recommended readings/resources for community builders/organisers
* CEA's groups resource centre, naturally
* This handbook on community organising
* High Output Management by Andrew Grove
* How to Launch a High-Impact Nonprofit
* LifeLabs's coaching questions (great for 1-1s with organisers you're supporting/career coachees)
* The 2-Hour Cocktail Party
* Centola's work on social change, e.g., the book Change: How to Make Big Things Happen
* Han's work on organising, e.g., How Organisations Develop Activists (I wrote up some notes here)
* This 80k article on community coordination
* @Michael Noetel's forum post - 'We all teach: here's how to do it better'
* Theory of change in ten steps
* Rumelt's Good Strategy Bad Strategy
* IDinsight's Impact Measurement Guide
To be able to communicate about malaria from a fundraising perspective, it would be amazing if there were a documentary about malaria: personal, compelling stories that anyone can relate to, not about the science behind the disease, as that probably wouldn't work. Just like "An Inconvenient Truth", but about malaria. I am truly baffled that I can't find anything close to what I was hoping would already exist. Does anyone know why this is? Or am I googling wrong?
Given how bird flu is progressing (spread in many cows, virologists believing rumors that humans are getting infected but no human-to-human spread yet), this would be a good time to start a protest movement for biosafety/against factory farming in the US.

Topic Page Edits and Discussion

Wednesday, 24 April 2024

Quick takes

First in-ovo sexing in the US Egg Innovations announced that they are "on track to adopt the technology in early 2025." Approximately 300 million male chicks are ground up alive in the US each year (since only female chicks are valuable) and in-ovo sexing would prevent this.  UEP originally promised to eliminate male chick culling by 2020; needless to say, they didn't keep that commitment. But better late than never!  Congrats to everyone working on this, including @Robert - Innovate Animal Ag, who founded an organization devoted to pushing this technology.[1] 1. ^ Egg Innovations says they can't disclose details about who they are working with for NDA reasons; if anyone has more information about who deserves credit for this, please comment!
With the US presidential election coming up this year, some of y’all will probably want to discuss it.[1] I think it’s a good time to restate our politics policy. tl;dr Partisan politics content is allowed, but will be restricted to the Personal Blog category. On-topic policy discussions are still eligible as frontpage material. 1. ^ Or the expected UK elections.
Ben West recently mentioned that he would be excited about a common application. It got me thinking a little about it. I don't have the technical/design skills to create such a system, but I want to let my mind wander a little bit on the topic. This is just musings and 'thinking out loud,' so don't take any of this too seriously. What would the benefits be for some type of common application? For the applicant: send an application to a wider variety of organizations with less effort. For the organization: get a wider variety of applicants. Why not just have the job openings posted to LinkedIn and allow candidates to use the Easy Apply function? Well, that would probably result in lots of low-quality applications. Maybe include a few questions to serve as a simple filter? Perhaps a question to reveal how familiar the candidate is with the ideas and principles of EA? Lots of low-quality applications aren't really an issue if you have an easy way to filter them out. As a simplistic example, if I am hiring for a job that requires fluent Spanish, and a dropdown prompt in the job application asks candidates to evaluate their Spanish, it is pretty easy to filter out people that selected "I don't speak any Spanish" or "I speak a little Spanish, but not much" (see the sketch below). But the benefit of Easy Apply (from the candidate's perspective) is the ease. John Doe candidate doesn't have to fill in a dozen different text boxes with information that is already on his resume. And that ease can be gained in an organization's own application form. An application form literally can be as simple as prompts for name, email address, and resume. That might be the most minimalistic that an application form could be while still being functional. And there are plenty of organizations that have these types of applications: companies that use Lever or Ashby often have very simple and easy job application forms (example 1, example 2). Conversely, the more that organizations prompt candidates to explain "Why do you want to work for us" or "tell us about your most impressive accomplishment", the more burdensome it is for candidates. Of course, maybe making it burdensome for candidates is intentional, and the organization believes that this will lead to higher quality candidates. There are some things that you can't really get information about by prompting candidates to select an item from a list.
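To make the "simple filter" idea concrete, here is a minimal sketch of how a single dropdown answer could screen common-application submissions automatically. The field names, levels, and threshold are all hypothetical; nothing here reflects any actual org's process or applicant-tracking system:

```python
# Minimal sketch: screening applications by a self-reported dropdown answer.
# All field names and levels are hypothetical, for illustration only.

SPANISH_LEVELS = ["none", "a little", "conversational", "fluent", "native"]

def meets_language_bar(application: dict, minimum: str = "fluent") -> bool:
    """True if the candidate's self-reported Spanish meets the role's bar."""
    level = application.get("spanish_level", "none")
    return SPANISH_LEVELS.index(level) >= SPANISH_LEVELS.index(minimum)

applications = [
    {"name": "Candidate A", "spanish_level": "none"},
    {"name": "Candidate B", "spanish_level": "fluent"},
    {"name": "Candidate C", "spanish_level": "a little"},
]

shortlist = [a for a in applications if meets_language_bar(a)]
print([a["name"] for a in shortlist])  # ['Candidate B'] (cheap to run even with many low-effort applications)
```

The point is just that a flood of low-quality applications is manageable when the filter is a structured field rather than free text.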
Maybe EA philanthropists should invest more conservatively, actually

The pros and cons of unusually high risk tolerance in EA philanthropy have been discussed a lot, e.g. here. One factor that may weigh in favor of higher risk aversion is that nonprofits benefit from a stable stream of donations, rather than one that goes up and down a lot with the general economy. This is for a few reasons:
* Funding stability in a cause area makes it easier for employees to advance their careers because they can count on stable employment. It also makes it easier for nonprofits to hire, retain, and develop talent. This allows both nonprofits and their employees to have greater impact in the long run, whereas a higher but more volatile stream of funding might not lead to as much impact.
* It becomes more politically difficult to make progress in some causes during a recession. For example, politicians may have lower appetite for farm animal welfare regulations and might even be more willing to repeal existing regulations if they believe the regulations stifle economic growth. This makes it especially important for animal welfare orgs to retain funding.
I don't think we have a good answer to what happens after we do auditing of an AI model and find something wrong. Given that our current understanding of AI's internal workings is at least a generation behind, it's not exactly like we can isolate which mechanism is causing certain behaviours. (Would really appreciate any input here - I see very little to no discussion of this in governance papers; it's almost as if policy folks are oblivious to the technical hurdles which await working groups.)

Tuesday, 23 April 2024

Quick takes

Consider donating all or most of your Mana on Manifold to charity before May 1. Manifold is making multiple changes to the way Manifold works. You can read their announcement here. The main reason for donating now is that Mana will be devalued from the current 1 USD:100 Mana to 1 USD:1000 Mana on May 1. Thankfully, the 10k USD/month charity cap will not be in place until then. Also this part might be relevant for people with large positions they want to sell now: > One week may not be enough time for users with larger portfolios to liquidate and donate. We want to work individually with anyone who feels like they are stuck in this situation and honor their expected returns and agree on an amount they can donate at the original 100:1 rate past the one week deadline once the relevant markets have resolved.
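To make the arithmetic of the devaluation concrete (the exchange rates are from the announcement above; the balance is a made-up example):

```python
# Charity value of a Mana balance before and after the May 1 devaluation.
# Rates are from Manifold's announcement; the balance is illustrative.
mana_balance = 20_000

value_before = mana_balance / 100    # 1 USD : 100 Mana until May 1
value_after = mana_balance / 1_000   # 1 USD : 1000 Mana from May 1

print(f"Donated before May 1: ${value_before:,.0f}")  # $200
print(f"Donated after May 1:  ${value_after:,.0f}")   # $20 (a 10x reduction in charity value)
```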
This is an extremely "EA" request from me but I feel like we need a word for people (i.e. me) who are Vegans but will eat animal products if they're about to be thrown out. OpportuVegan? UtilaVegan?
I see way too many people confusing movement with progress in the policy space.  There can be a lot of drafts becoming bills with still significant room for regulatory capture in the specifics, which will be decided later on. Take risk levels, for instance, which are subjective - lots of legal leeway for companies to exploit. 
High-impact startup idea: make a decent carbon emissions model for flights. Current ones simply use the flight's own emissions, which makes direct flights look low-emission. But in reality, some of these flights wouldn't even exist if people could be spread over existing indirect flights more efficiently, which is why indirect routes are cheaper too. Emission models should be relative to the counterfactual. The startup can be for-profit. If you're lucky, better models already exist in the scientific literature. Ideal for the AI-for-good crowd. My guess is that a few man-years of work could have a big carbon emissions impact here.
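A minimal sketch of the counterfactual framing described above. Every number and the allocation rule are made up for illustration; a real model would need actual route, load-factor, and aircraft data:

```python
# Toy comparison of per-passenger flight emissions: naive vs counterfactual.
# All numbers are illustrative placeholders, not real emissions data.

def naive_per_passenger(total_flight_kg_co2: float, seats: int) -> float:
    """Conventional estimate: the flight's total emissions split across its seats."""
    return total_flight_kg_co2 / seats

def counterfactual_per_passenger(extra_fuel_kg_co2: float,
                                 prob_extra_flight: float,
                                 total_flight_kg_co2: float) -> float:
    """Marginal estimate: extra fuel from carrying one more passenger, plus the
    probability that the added demand causes an additional flight to be scheduled."""
    return extra_fuel_kg_co2 + prob_extra_flight * total_flight_kg_co2

naive = naive_per_passenger(total_flight_kg_co2=80_000, seats=200)
direct = counterfactual_per_passenger(extra_fuel_kg_co2=40, prob_extra_flight=0.005,
                                      total_flight_kg_co2=80_000)
indirect = counterfactual_per_passenger(extra_fuel_kg_co2=55, prob_extra_flight=0.001,
                                        total_flight_kg_co2=80_000)

print(f"Naive per-seat estimate: {naive:.0f} kg CO2")                       # 400
print(f"Counterfactual, direct flight: {direct:.0f} kg CO2")                # 440 (may induce extra flights)
print(f"Counterfactual, filling an indirect route: {indirect:.0f} kg CO2")  # 135 (uses spare capacity)
```

Under these made-up numbers the direct flight looks worse on the counterfactual view, which is the effect the quick take is pointing at.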
I think it would be good if lots of EAs answered this Twitter poll, so we could get a better sense of the community's views on the topic of Enlightenment / Awakening: https://twitter.com/SpencrGreenberg/status/1782525718586413085?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet

Monday, 22 April 2024

Frontpage Posts

Quick takes

CEA is hiring for someone to lead the EA Global program. CEA's three flagship EAG conferences facilitate tens of thousands of highly impactful connections each year that help people build professional relationships, apply for jobs, and make other critical career decisions. This role comes with a large amount of autonomy and plays a key part in shaping an important piece of the effective altruism community's landscape. See more details and apply here!
Quote from VC Josh Wolfe: > Biology. We will see an AWS moment where instead of you having to be a biotech firm that opens your own wet lab or moves into Alexandria Real Estate, which, you know, specializes in hosting biotech companies in all these different regions proximate to academic research centers, you will be able to just take your experiment and upload it to the cloud where there are cloud-based robotic labs. We funded some of these. There's one company called Stratios. > > There's a ton that are gonna come on wave, and this is exciting because you can be a scientist on the beach in the Bahamas, pull up your iPad, run an experiment. The robots are performing 90% of the activity of pouring something from a beaker into another, running a centrifuge, and then the data that comes off of that. > > And this is the really cool part. Then the robot and the machines will actually say to you, "Hey, do you want to run this experiment but change these 4 parameters or these variables?" And you just click a button "yes" as though it's reverse prompting you, and then you run another experiment. So the implication here is the boost in productivity for science, for generation of truth, of new information, of new knowledge. That to me is the most exciting thing. And the companies that capture that, forget about the societal dividend, I think are gonna make a lot of money. https://overcast.fm/+5AWO95pnw/46:15
I noticed that many people write a lot not only on forums but also on personal blogs and Substack. This is sad: competent and passionate people are writing in places that get very few views. I too am one of those people. But honestly, magazines and articles are stressful and difficult, and forums are so huge that even if they have a messaging function, it is difficult to reach a transparent state where each person can fully recognize their own epistemic status. I'm interested in collaborative blogs, similar to the early Overcoming Bias. I believe that many bloggers and writers need help and that we can help each other. Is there anyone who wants to join me?
Has anyone seen an analysis that takes seriously the idea that people should eat some fruits, vegetables and legumes over others based on how much animal suffering they each cause? I.e. don't eat fruit X, eat fruit Y instead, because fruit X is [e.g.] harvested in Z way, which kills more [insert plausibly sentient creature].
The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: Creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (*we already have AGI). I thought it might be useful to spell that out.

Sunday, 21 April 2024

Quick takes

GiveWell and Open Philanthropy just made a $1.5M grant to Malengo! Congratulations to @Johannes Haushofer and the whole team; this seems like such a promising intervention from a wide variety of views.
I'm going to make a quick take thread of EA-relevant software projects I could work on. Agree / disagree vote if you think I should/ should not do some particular project.
I recently discovered the idea of driving all blames into oneself, which immediately resonated with me. It is relatively hardcore; the kind of thing that would turn David Goggins into a Buddhist. Gemini did a good job of summarising it:

This quote by Pema Chödron, a renowned Buddhist teacher, represents a core principle in some Buddhist traditions, particularly within Tibetan Buddhism. It's called "taking full responsibility" or "taking self-blame" and can be a bit challenging to understand at first. Here's a breakdown:

What it Doesn't Mean:
* Self-Flagellation: This practice isn't about beating yourself up or dwelling on guilt.
* Ignoring External Factors: It doesn't deny the role of external circumstances in a situation.

What it Does Mean:
* Owning Your Reaction: It's about acknowledging how a situation makes you feel and taking responsibility for your own emotional response.
* Shifting Focus: Instead of blaming others or dwelling on what you can't control, you direct your attention to your own thoughts and reactions.
* Breaking Negative Cycles: By understanding your own reactions, you can break free from negative thought patterns and choose a more skillful response.

Analogy: Imagine a pebble thrown into a still pond. The pebble represents the external situation, and the ripples represent your emotional response. While you can't control the pebble (the external situation), you can control the ripples (your reaction).

Benefits:
* Reduced Suffering: By taking responsibility for your own reactions, you become less dependent on external circumstances for your happiness.
* Increased Self-Awareness: It helps you understand your triggers and cultivate a more mindful response to situations.
* Greater Personal Growth: By taking responsibility, you empower yourself to learn and grow from experiences.

Here are some additional points to consider:
* This practice doesn't mean excusing bad behavior. You can still hold others accountable while taking responsibility for your own reactions.
* It's a gradual process. Be patient with yourself as you learn to practice this approach.

Topic Page Edits and Discussion

Saturday, 20 April 2024

Quick takes

Animal Justice Appreciation Note Animal Justice et al. v A.G of Ontario 2024 was recently decided and struck down large portions of Ontario's ag-gag law. A blog post is here. The suit was partially funded by ACE, which presumably means that many of the people reading this deserve partial credit for donating to support it. Thanks to Animal Justice (Andrea Gonsalves, Fredrick Schumann, Kaitlyn Mitchell, Scott Tinney), co-applicants Jessica Scott-Reid and Louise Jorgensen, and everyone who supported this work!
Be the meme you want to see in the world (screenshot).  

Friday, 19 April 2024

Frontpage Posts

Quick takes

While AI value alignment is considered a serious problem, the algorithms we use every day do not seem to be subject to alignment. That sounds like a serious problem to me. Has no one ever tried to align the YouTube algorithm with our values? What about on other types of platforms?
The topics of working for an EA org and altruist careers are discussed occasionally in our local group. I wanted to share my rough thoughts and some relevant forum posts that I've compiled in this google doc. The main thesis is that it's really difficult to get a job at an EA org, as far as I know, and most people will have messier career paths. Some of the posts I link in the doc, specifically around alternate career paths:
* The career and the community
* Consider a wider range of jobs, paths and problems if you want to improve the long-term future
* My current impressions on career choice for longtermists
The New York Declaration on Animal Consciousness and an article about it: https://sites.google.com/nyu.edu/nydeclaration/declaration https://www.nbcnews.com/science/science-news/animal-consciousness-scientists-push-new-paradigm-rcna148213
