
Martijn, your comment points to something I've noticed when communicating 'systems thinking' and a complexity mindset to some EAs. Gideon points to a more fundamental ontological difference between those who tend to focus on what is predictable (measurable and quantifiable) and those who pay attention to shifting patterns that seem contextual and more nebulous.

I read your comment as an invitation to translate across different ontologies - to explain the nebulous concretely, to explain the unpredictable in predictable terms. I personally haven't found success in my attempts, and I'd love to hear more about how you communicate around complexity.

I've most often found success by pointing out parts of a person's experience that feel unknown, then getting mutually curious about the strategies they might use to navigate - inviting them into a place where their existing tools no longer work and there is real curiosity to try a different approach. When I've tried speaking about complexity in the abstract, or as applied to something people see as 'potentially predictable', the deeper sense of complexity tends to be missed - often getting translated into "that's a cool tool, but aren't you just describing a more accurate way of modeling?"

The comment below about embracing a pluralistic approach seems to provide a path forward that doesn't rely on translation though... lots of interesting ideas in this comment section already.

I've observed that some folks (in EA or other disciplines) are skeptical of the idea of a polycrisis, while others view it as obviously correct, and others (like myself) see it as plausible and worth exploring further. I suspect the differences in reaction have something to do with how we each make sense of nebulosity (as Jackson's comment suggests).

Part of what makes the polycrisis framing so challenging to grapple with is that it is so big and so multifaceted that individual attempts to 'hold the whole concept in our heads' are often not helpful. I'm quite interested in how we might collectively become more capable of working with this kind of incredibly complex challenge - and how we might coordinate on challenges that each of us can only grasp a part of.

Karthik, I'd love to hear if you have more to share about your thinking on this thread overall.  Cheers!

Answer by Naryan, Jul 11, 2022

I enjoyed this prompt to add a few thoughts to this thread - it inspired me to think in a new direction. These thoughts aren't a direct response to the prompt, but rather some thinking around the edges.

I already see cause areas as interconnected areas of interest, and as EAs 'define' a cause area, we create a distinction saying that this area/theory of change is meaningfully different from others. This kind of categorical thinking feels less useful to me when describing overlapping fields that are strongly connected through a myriad of relationships. The 'cause areas' of climate change, wild animal welfare, and biodiversity loss feel to me like meaningfully different lenses on the same territory.

So I mentally rephrase the question into something like: If one cares about life on Earth, what do we see through the lens of biodiversity?

What I'm seeing (as I sit on my front porch in downtown Toronto) is a vast web of inter-related individuals and species, each playing a role in an ecosystem that is more or less in balance. I can imagine, over time, more and more of the nodes disappearing, taking many relationships with them. From network theory, this loss of redundancy may increase the fragility of the overall system, shifting the network dynamics in non-linear and unpredictable ways.

How many nodes can be removed before the autopoiesis* fails?
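
For anyone who wants to poke at that question numerically, here's a toy sketch in Python (my own illustration using the networkx library - an Erdős–Rényi random graph is only a crude stand-in for a real food web). It removes random "species" nodes and tracks the largest connected component, which tends to shrink gradually and then collapse:

```python
# Toy illustration of network fragility under node removal.
# Not a claim about real ecosystems - just the shape of the dynamic.
import random
import networkx as nx

random.seed(0)
G = nx.erdos_renyi_graph(n=200, p=0.02, seed=0)  # crude stand-in for a food web

nodes = list(G.nodes)
random.shuffle(nodes)

for removed in range(0, 161, 40):
    H = G.copy()
    H.remove_nodes_from(nodes[:removed])  # random "extinctions"
    largest = max((len(c) for c in nx.connected_components(H)), default=0)
    print(f"removed {removed:3d} nodes -> largest connected component: {largest}")
```

Real ecosystems have far more structure than a random graph, but even this toy shows the threshold behaviour: connectivity degrades slowly at first, then falls off a cliff.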

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

What else do EAs see if we look at planetary health through the lens of biodiversity? What are the blind spots inherent in viewing 'planetary health' through other "certified EA cause area" lenses?

Shameless plug: I'm interested in helping parts of EA to talk to one another, and in having EA interact with other movements also making the world a better place.  If you like looking at the interconnections between things, and integrating different models into meta-models, feel free to reach out to me!

*Autopoiesis refers to a system capable of producing and maintaining itself by creating its own parts

Wanting to leave a breadcrumb here for other EAs interested in the leading edge of network thinking.

What this article describes is a beautiful introduction to something quite complex and powerful. We do this work naturally - it is indeed something that emerges in our systems by itself. By increasing our understanding of how it works, we can become more aware of the networks we participate in and intentionally shift our behaviours to change the properties of those networks.

In my experience, the theory is great, yet studying the theory doesn't really give one the skill to actually do the thing.  It's a bit like learning to ride a bike, or learning how to have really good conversations, or learning how to show up with both warmth and competence at the same time - it takes a good learning container to practice in and coaching/support from someone with greater awareness and skill.

Here's the breadcrumb: there is a network of folks with a ton of capacity in this area, already playing within EA and communities beyond.  In several communities (though not directly within EA yet), we actually offer training on developing these skills.  If you find this note and are interested in connecting, I'd love to hear from you!  If we aren't connected directly, I'm sure you could use your network to find me :)

Thanks so much for posting about your experience - I anticipate your tips will help me improve my strategy for introducing EA concepts at the large corporation where I work. I'll chime in with my own experiences in case they're helpful to others.

I work at a Canadian company with 50k employees. In early 2019, I reached out to our company's charitable giving team, expressing an interest in helping run events to increase charitable-giving engagement within my part of the business. This offer was met with enthusiasm and support over two conference calls and several emails. I didn't explicitly mention EA - just that I was well connected and had a fun, systematic way of looking at charitable giving.

As we approached the giving campaign period in the fall, I reached out again with an exciting proposal to run a Giving Game and asked whether it could be included as an official charitable giving campaign event. This didn't end up working out (the reasons are opaque to me - a few emails went unanswered), but I'm hopeful for 2020. Instead, I invited folks from my own network, and we had a really good 10-person Giving Game.

This was the first one I've run, and it seemed to land really well with the attendees. One key aspect was showing employees how they could donate to effective charities through RC Forward directly from their paychecks. I hope to leverage their testimonials to support whatever proposal I come up with this year.

I think there is a lot of potential to incorporate EA concepts into a greater conversation at my company, and see two paths forward:

1. Grow a grass-roots conversation by finding people who are enthusiastic enough about EA to form a core team. Currently it's just me, and this seems like a work-intensive, long-term goal.

2. Shortcut the process by building a stronger relationship with our charitable giving team. Changes made by this team could be very high leverage - anything from expanding the company matching program to include high-impact charities (it currently doesn't), to tweaking default donation options and search functionality, to a paradigm shift in how folks view doing good.

I'm also playing with system-wide influence through a new role that I've taken on, which could transform the company culture. It's still early days, but I'm making meta-moves to create a community that increases empathy, connection, and systematic (rational) thinking.

Hoping my story is helpful for folks here. I'm interested in hearing more anecdotes from anyone else who's looking at EA from the context of a corporation.

Very cool summary, I've sent this to a few groups I'm a part of. I'm selfishly hoping it will lead to even better gatherings in my circles in the future!

Hi Michael and team,

Thanks for thinking about this topic - I agree that this is an important update for the community, and I think you gave it the treatment it required.

I think the puzzle of wealth/income vs SWB is an interesting one. The finding that relative wealth plays a role in SWB made sense, and it led me to hypothesize that countries with lower inequality would be happier.

I found a meta-analysis on the topic which couldn't find a strong correlation. "The association between income inequality and SWB is weak, complex and moderated by the country economic development." - https://www.ncbi.nlm.nih.gov/pubmed/29067589

It is interesting to think about the reduction in happiness due to a neighbour getting a cash transfer (the spillover effect mentioned in source 21).

  • Could this be due to jealousy decreasing one's happiness? Do we need secret cash transfers?

  • Does the reverse also hold - if one's neighbours become poorer, does that make one happier? It seems dangerous to generalize these findings, but this area of research would be quite applicable to conversations about basic income.

It's a bit of a rabbit-hole, but wondering if you've seen any research that speaks to this?

Great to see this being looked at. Do you have any examples of this method in use? I'd be interested to see various animals and situations ranked using this method - as it could provide a baseline to quantify the benefits of various interventions.

I also attempted to create my own method of comparing animal suffering while I was calculating the value of going vegetarian. I'll provide a quick summary here, and would love to hear if anyone else has tried something similar.

The approach was to create an internally consistent model based on my naive intuitions and what data I could find. I spent a while tuning the model so that various trade-offs made sense and didn't lead to incoherent preferences. It's super rough, but it was a first step in my self-examination of ethics.

  1. I created a scale of the value of [human/animal] experience from torture (-1000) to self-actualization (+5) with neutral at 0.
  2. I guessed where various animal experiences fell on the scale, averaged over a lifetime. This is a very weak part of the model - and where Joey's method could really come in handy.
  3. I then multiplied the experience by the lifespan of the animal (as a percentage of human life).
  4. Finally, I added a 'cognitive/subjectivity' multiplier based on the animal's intelligence. This is contentious, but it keeps me from valuing a long-lived insect like the cicada the same as a human. This follows from other ethical considerations in my model, though some people prefer to remove this step.

The output of this rough model is the value of various animal lives as a percentage of a human life - a more salient, comparable measure for me.
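
For concreteness, here's a minimal Python sketch of those four steps. Every number in it (the human baseline, the chicken's inputs) is an illustrative placeholder I've invented for the example, not a figure from my actual model:

```python
# Minimal sketch of the four-step model above.
# All numbers are illustrative placeholders, not real estimates.

HUMAN_LIFESPAN_YEARS = 80.0
HUMAN_AVG_EXPERIENCE = 4.0   # guess on the -1000 (torture) .. +5 (self-actualization) scale

def life_value_vs_human(avg_experience: float,
                        lifespan_years: float,
                        cognitive_multiplier: float) -> float:
    """Steps 2-4: average experience * lifespan fraction * cognitive weight,
    expressed as a fraction of the same calculation for a human."""
    animal = avg_experience * (lifespan_years / HUMAN_LIFESPAN_YEARS) * cognitive_multiplier
    human = HUMAN_AVG_EXPERIENCE * 1.0 * 1.0  # full lifespan, multiplier of 1
    return animal / human

# Hypothetical inputs: a broiler chicken with a mildly negative average
# experience, a ~6-week lifespan, and a low cognitive weight.
print(f"{life_value_vs_human(-3.0, 0.12, 0.05):+.6f}")  # ≈ -0.000056
```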

This model was built over about 5 hours and is still updating as I have more conversations about animal suffering. I'd love to hear if anyone else has tried a different strategy!

Pretty cool idea - since I'm new to EA, I hope this will become a neat snapshot for me to look back on in a few years to see how far I've come.

Growing up, I believe I was raised to be a decent member of society - be kind to others, don't litter, help those in need. I never really thought explicitly about ethics or engaged deeply with any causes. Sure, I'd raise money for cancer research at "Relay for Life", but it wasn't because I thought the $100 would make a difference - more because it would be fun to have a camp-out with friends.

In my twenties, my goal was primarily to make money to retire early so I could travel, and maybe volunteer my time to help increase financial literacy, or apply my career experience in a not-for-profit. Fairly ephemeral goals though - I also considered becoming a full-time music producer.

Rationality

When I was 28, I found Less Wrong from a link my friend posted on Facebook. Over the next two years I read every essay in the Rationality sequences, supplemented by a healthy amount of psychology/economics/math/self-help audiobooks. Reading that material was an enjoyable journey and led to a few minor epiphanies.

  • I love improving my thinking, and upgrading my effectiveness
  • I thought deeply about my ethics for the first time
  • I have a responsibility to improve the world in the biggest and best way possible

Seriously - the last book, "Becoming Stronger", and the sequence "Challenging the Difficult" really motivated me to think much larger than I had before. Discovering 80,000 Hours around the same time gave me a great template to follow.

Effective Altruism

In May 2018 I attended my first EA meet-up. I recall thinking, "Wow! There are actually other rationalists out there". Up until that point, I'd never really met others who thought or spoke similarly, let alone a whole room full of them. I'm currently enjoying the learning curve, finding more questions than answers.

  • Attending weekly meet-ups at the Fox & Fiddle with EA Toronto
  • Hosting games nights, going on hikes, watching debates
  • Independently tackling cause prioritization, clarifying my ethics and their implications for where I should dedicate my effort
  • Excited to attend the EA Summit 2018!

I'm currently working with a team of amazing EAs towards my top cause priority, and hope to launch this autumn.
