This text has many, many hyperlinks; it is useful to at least glance at the front page of the linked material to get it. It is an expression of my thinking, so it uses many community jargon terms. Thanks to Oliver Habryka, Daniel Kokotajlo and James Norris for comments. No, really, check the front pages of the hyperlinks.
  • Why I Grew Skeptical of Transhumanism
  • Why I Grew Skeptical of Immortalism
  • Why I Grew Skeptical of Effective Altruism
  • Only Game in Town

 

Wonderland’s rabbit said it best: The hurrier I go, the behinder I get.

 

We approach 2016, and the more I see light, the more I see brilliance popping up everywhere, the Effective Altruism movement growing, TEDs and Elons spreading the word, the more we switch our heroes in the right direction, the behinder I get. But why?, you say.

Clarity, precision, I am tempted to reply. I have left the intellectual suburbs of Brazil and gone straight into the strongest hub of production of things that matter, the Bay Area, via Oxford’s FHI office; I now split my time between UC Berkeley and the CFAR/MIRI office. In the process, I have navigated an ocean of information: read hundreds of books and papers, watched thousands of classes, and become proficient in a handful of languages and a handful of intellectual disciplines. I’ve visited Olympus and met our living demigods in person as well.

Against the overwhelming force of an extremely upbeat personality surfing a hyper base-level happiness, these three forces (approaching the center, learning voraciously, and meeting the so-called heroes) have brought me to my current state of pessimism.

I was a transhumanist, an immortalist, and an effective altruist.

 

Why I Grew Skeptical of Transhumanism

The transhumanist in me is skeptical that technological development will be fast enough for improving the human condition to be worth it now; he sees most technologies as fancy toys that don’t get us there. Our technologies can’t, and won’t for a while, lead our minds to peaks anywhere near the peaks we found by simply introducing weirdly shaped molecules into our brains. The strangeness of Salvia, the beauty of LSD, the love of MDMA are orders and orders of magnitude beyond what we know how to change from an engineering perspective. We can induce a rainbow, but we don’t even have the concept of force yet. Our knowledge about the brain, given our goals for the brain, is at the level of the physics knowledge of someone who found out that spraying water on a sunny day causes a rainbow. It’s not even physics yet.

Believe me, I have read thousands of pages of papers on the most advanced topics in cognitive neuroscience. My advisor spent his entire career, from Harvard to tenure, doing neuroscience, and was the first person to implant neurons that actually healed a brain to the point of recovering functionality, using non-human neurons. As Marvin Minsky, who nearly invented AI and the multi-agent computational theory of mind, told me: “I don’t recommend entering a field where every four years all knowledge is obsolete; they just don’t know it yet.”

 

Why I Grew Skeptical of Immortalism

The immortalist in me is skeptical because he understands the complexity of biology from conversations with the centimillionaires and with the chief scientists of anti-aging research facilities worldwide. He has met the bio-startup founders, and he gets that the structure of incentives does not look good for bio-startups anyway. So although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars were directly invested in it, he is now, with billions pledged against ageing, confident that the problem is substantially harder to surmount than the number of man-hours left to be invested in it allows, at least during my lifetime, or before the Intelligence Explosion.

Believe me, I was the first cryonicist among the 200 million people of my country, won a prize for anti-ageing research at the bright young age of 17, and hang out on a regular basis with all the people in this world who want to beat death and still share in our privilege of living, just in case some new insight comes along that changes the tides. But none has come in the last ten years, as our friend Aubrey will be keen to tell you in detail.

 

Why I Grew Skeptical of Effective Altruism

The Effective Altruist in me is skeptical too, although less so: I’m still founding an EA research institute, keeping a loving eye on the one I left behind, living with EAs, working at EA offices, and mostly broadcasting ideas and researching with EAs. Here are some problems with EA which make me skeptical after being shaken around by the three forces:

  1. The Status Games: Signalling, countersignalling, going one more meta-level up, outsmarting your opponent, seeing others as opponents, my cause is the only true cause, zero-sum mating scarcity, pretending that poly eliminates mating scarcity, founders versus joiners, researchers versus executives, us institutions versus them institutions, cheap individuals versus expensive institutional salaries; it's gore all the way up and down.

  2. Reasoning by Analogy: Few EAs are able to do, and are actually doing, their due intellectual diligence. I don’t blame them; the space of Crucial Considerations is not only very large, but extremely uncomfortable to look at. Who wants to know that our species has not even found the stepping stones to make sure that what matters is preserved and guaranteed at the end of the day? It is a hefty ordeal. Nevertheless, it is problematic that fewer than 20 EAs (one in 300?) are actually reasoning from first principles, thinking all things through from the very beginning. Most of us are looking away from at least some philosophical assumption or technological prediction. Most of us are cooks and not yet chefs. Some of us have not even woken up yet.

  3. Babies with a Detonator: Most EAs still carry their transitional objects around, clinging desperately to an idea or a person they think is more guaranteed to be true, be it hardcore patternism about philosophy of mind, global aggregative utilitarianism, veganism, or the expectation of immortality.

  4. The Size of the Problem: No matter if you are fighting suffering, Nature, Chronos (death), Azathoth (evolutionary forces) or Moloch (deranged emergent structures of incentives), the size of the problem is just tremendous. One completely ordinary reason to not want to face the problem, or to be in denial, is the problem’s enormity.

  5. The Complexity of The Solution: Let me spell this out: the nature of the solution is not simple in the least. It’s possible that we luck out and it turns out that the Orthogonality Thesis and the Doomsday Argument and Mind Crime are just philosophical curiosities with no practical bearing on our earthly engineering efforts, that the AGI or Emulation will by default fall into an attractor basin which implements some form of MaxiPok with details that it only grasps after CEV or the Crypto, and we will be okay. That is possible, and it is more likely than the scenario in which our efforts end up being the decisive factor. We need to focus our actions on the branches where they matter, though.

  6. The Nature of the Solution: So let’s sit down side by side and stare at the void together for a bit. The nature of the solution is getting a group of apes who just invented the internet, from everywhere around the world, to coordinate an effort that fills in the entire box of Crucial Considerations yet unknown (this is the goal of Convergence Analysis, by the way), finding every single last one of them to the point where the box is filled. Then, once we have all the Crucial Considerations available, we must develop, faster than anyone else trying, a translation scheme that translates our values to a machine or emulation, in a physically sound and technically robust way (that’s if we don’t find a Crucial Consideration which, say, steers our course towards Mars instead). Then we need to develop the engineering prerequisites to implement a thinking being smarter than all our scientists together, one that can reflect philosophically better than the last two thousand years of effort while becoming the most powerful entity in the universe’s history, and that will fall into the right attractor basin within mindspace. That’s if Superintelligences are even technically possible. Add to that that we, or it, have to guess correctly all the philosophical problems that are (a) relevant and (b) unsolvable within physics (if any) or by computers. And all of this has to happen while the most powerful corporations, states, armies and individuals attempt to seize control of the smart systems themselves, without being curtailed by the counter-incentive of not destroying the world, either because they don’t realize the danger, or because the first-mover advantage seems worth the risk, or because they are about to die anyway so there’s not much to lose.

  7. How Large an Uncertainty: Our uncertainties loom large. We have some technical but not much philosophical understanding of suffering, and our technical understanding is insufficient to confidently assign moral status to other entities, especially if they diverge in more dimensions than brain size and architecture. We’ve barely scratched the surface of a technical understanding of how to increase happiness, and our philosophical understanding is also taking its first steps.

  8. Macrostrategy is Hard: A chess grandmaster usually takes many years to acquire sufficient strategic skill to command the title. It takes a deep and profound understanding of unfolding structures to grasp how to beam a message or a change into the future. We are attempting to beam a complete value lock-in into the right basin, which is proportionally harder.

  9. Probabilistic Reasoning = Reasoning by Analogy: We need a community that at once understands probability theory, doesn’t play reference class tennis, and doesn’t lose motivation by considering the base rates of other people trying to do something, because the other people were cooks, not chefs, and also because sometimes you actually need to try a one in ten thousand chance. But people are too proud of their command of Bayes to let go of the easy chance of showing off their ability to find mathematically sound reasons not to try.

  10. Excessive Trust in Institutions: Very often people go through a simplifying set of assumptions that collapses a brilliant idea into an awful donation, when they reason:
    I have concluded that cause X is the most relevant
    Institution A is an EA organization fighting for cause X
    Therefore I donate to institution A to fight for cause X.
    To begin with, this is very expensive compared to donating to any of the three P’s: projects, people or prizes. Furthermore, the crucial points at which to fund institutions are when they are about to die, when they are just starting, or when they are building a kind of momentum with a narrow window of opportunity where the derivative gains are particularly large, or when you have private information about their current value. That an institution agrees with you about a cause being important is far from sufficient to assess the expected value of your donation.

  11. Delusional Optimism: Everyone who, like past-me, moves forward with delusional optimism will always have a blind spot in whichever feature of reality they are in denial about. It is not a problem to have some individuals with a blind spot, as long as the rate doesn’t surpass some group sanity threshold; yet, on an individual level, it is often the case that those who can gaze into the void a little longer than the rest end up being the ones who accomplish things. Staring into the void makes people show up.

  12. Convergence of opinions may strengthen separation within EA: Thus far, the longer someone has been an EA, the more likely they are to transition to an opinion in the subsequent boxes in this flowchart, from whichever box they are in at the time. There are still people in all the opinion boxes, but the trend has been to move in that flow. Institutions, however, have a harder time escaping being locked into a specific opinion. As FHI moves deeper into AI, GWWC into poverty, 80k into career selection, etc., they become more congealed. People’s opinions are still changing, and some of the money follows, but institutions are crystallizing around particular opinions, and in the future they might prevent transition between opinion clusters and the free mobility of individuals, as national frontiers already do. Once institutions, which in theory are commanded by people who agree with institutional values, notice that their rate of loss towards the EA movement is higher than their rate of gain, they will have incentives to prevent the flow of talent, ideas and resources that has so far been a hallmark of Effective Altruism and why many of us find it impressive: its being an intensional movement. Any part that congeals or becomes extensional will drift off behind, and this may create insurmountable separation between groups that want to claim ‘EA’ for themselves.

 

Only Game in Town

The reasons above have transformed a pathological optimist into a wary skeptic about our future, and about the value of our plans to get there. And yet, I don’t see any option other than to continue the battle. I wake up in the morning and consider my alternatives. Hedonism? Well, that is fun for a while, and I could try a quantitative approach to guarantee maximal happiness over the course of the 300,000 hours I have left. But all things considered, anyone reading this is already too close to the epicenter of something that can become extremely important and change the world to have the affordance to wander off indefinitely. I look at my high base-happiness and don’t feel justified in maximizing it up to the point of no marginal return; there clearly is value elsewhere than here (points inwards), and the self of which I am made clearly has strong altruistic urges anyway, so at least above a threshold of happiness it has reason to purchase the extremely good deals in expected happiness for others that seem to be on the market. Other alternatives? Existentialism? Well, yes, we always have a fundamental choice, and I feel the thrownness into this world as much as any Kierkegaard does. Power? When we read Nietzsche it gives that fantasy impression that power is really interesting and worth fighting for, but at the end of the day we still live in a universe where the wealthy are often reduced to spending their power in pathetic signalling games and zero-sum disputes, or coercing minds to act against their will. Nihilism and Moral Fictionalism, like Existentialism, all collapse into having a choice, and if I have a choice my choice is always going to be the choice to, most of the time, care, try and do.

Ideally, I am still a transhumanist and an immortalist. But in practice, I have abandoned those noble ideals, and pragmatically only continue to be an EA.

It is the only game in town.


Most comments are on LessWrong.

Comments

For the record, I dislike the downvotes. I realize that some of this piece seems like an attack on existing effective altruists, but on the whole it seems reasonable to me.

More importantly, having negative votes is pretty harsh. I've had that before on LessWrong and it greatly discouraged me from posting more. I think that content like this is probably positive, and more of it should be encouraged.

I think some (rare!) pieces deserve to be downvoted, and am not a huge fan of this one. But it'd be nice if people who downvote could leave a brief comment precisely explaining their reasons. I realise this norm would make downvoting more onerous, but that seems an acceptable cost.

My experience on LessWrong indicates that, though well intentioned, this would be a terrible policy. The best predictor on LessWrong of whether a text of mine would be upvoted or downvoted was whether someone, in particular the username Shminux, would give reasons for their downvote.

There is nothing I dislike or fear more, when I write on LessWrong, than Shminux giving reasons why he's downvoting this time.

Don't get me wrong, write a whole dissertation about what in the content is wrong, or bad, or unformatted, do anything else, but don't say, for instance "Downvoted because X"

It is a nightmare.

Having reasons for downvotes visible induces people to downvote, and we seldom do the mirror thing, writing down reasons for the upvote.

I have no doubt, none, after posting dozens of texts about all sorts of things on LessWrong and the EA Forum, that having a policy of explaining downvotes is the worst possible thing from the writer's perspective.

Remember that death is the second biggest fear people have; the first is speaking in public. Now think about the role of posts explaining downvotes: it would be a total emotional shutdown, especially for new posters who are still getting the gist of it.

Hmm, that's a good point and I don't know what to think any more.

I think you consistently make yourself seem like a douchebag in your writing, and that's why your articles get a lot of downvotes. Maybe it's not fair (though I did downvote you just for writing like a douchebag, cuz no one ever said life was fair), but I'm pretty confident that's how other people are perceiving you, even if subconsciously. Maybe you're not actually a douchebag, I have no idea. I just think you should understand why people are downvoting you, even if you don't change your writing style at all. Like I'm completely aware that people are downvoting me for being an asshole, so I give that some weight in my writing. But I'm going to be an asshole right now.

Your writing contains:

  • pompous verbosity that makes every article overly long

  • narcissism and humblebragging, e.g. name-dropping, talking about a prize you won as a teenager

  • apparently unironic declarations of your love for TED talks and the Bay Area. Like, you might as well talk about how Malcolm Gladwell changed your life. Or talk about how you're "such a Carrie".

  • talking about yourself in the third person. Enough said.

I think there's a decent, if slightly unoriginal article under here. But you're going to need some brutal editing to get there.

That sounds about right :)

I like your sincerity. The verbosity is something I actually like, and it was quite praised in the human sciences I was raised in; I don't aim for the condensed-information writing style. The narcissism I dislike and have tried to fix before, but it's hard: it's a mix of a rigid personality trait with a discomfort from having been in the EA movement since long before it was an actual thing, having spent many years giving time, resources and attention, and seeing new EAs who don't have knowledge or competence being rewarded (especially financially) by EA organizations when they clearly don't deserve it. It also bugs me that people don't distinguish the much higher value of EAs who are not taking money from the EA sphere from those who have a salary and, to some extent, are just part of the economic engine, like anyone with a normal NGO job that is just instantiating the economy.

I don't actually see any problem with people talking about what changed their lives or whether they are more like Ross than like Chandler. I usually like hearing about transformative experiences of others because it enlarges my possibility scope. Don't you?

This particular text was written for myself, but I think the editing tips also hold for the ones I write for others, so thanks! And yes, you do write like an asshole sometimes on Facebook. But so what? If that is your thing, good for you; life isn't fair.

1-3 and 9-11 seem to be criticisms of EAs, not EA.

To 4-8, I want to say, of course, the biggest problems in the universe are extremely hard ones. Are we really surprised?

Number 12 is easily the most important criticism. The more we professionalize and institutionalize this movement, the more fractured, intransigent, and immobile it will become.

On the side of optimism, the Open Philanthropy Project shows signs of one important institution rigorously looking into broad cause effectiveness.

It sounds like you’ve been aiming very high. That’s awesome, but it is important to continually remind ourselves that we’re doing it. One great advantage of aiming high is of course that we can win big, but another is that when we fail, it feels just sort of neutral, because that was the most likely outcome all along. But for that it is important for us to continually remind ourselves that we’re aiming high.

What I also like to do is Nate’s care-o-meter frying in reverse. My System 2 is ecstatic about having a high expected impact spread thin across hundreds of thousands of people in the future (adapt the numbers for your case), but my System 1 wants to see and feel something, so I let it have that by imagining a fraction of my impact small enough and condensed enough to be feelable. And when that tiny fraction already feels so fulfilling, then, System 2 concludes, the entirety of the impact must have been worth every minute of my work, even though it in turn is only a tiny fraction of a solution to the enormity of the problem.

Why I Grew Skeptical of Transhumanism

I don't see how this is a criticism of transhumanism. Transhumanism is about far future technologies that actually will change the human condition in ways unlike current technologies. See David Pearce's essays, e.g. The Hedonistic Imperative. The experiences of recreational drugs provide evidence against your brand of skepticism, rather than for it. Surely you didn't believe that transhumanism is merely about building better kinds of consumer goods, did you?

The Status Games:

This is a vague complaint and I don't see how there's any more "status gaining" in EA than in any other movement or sector of society. You might know better than me, but many of us don't see this as a problem, so it would be more helpful to explain how bad the problem is and what the consequences of the problem are.

Reasoning by Analogy:

This again is vague and seems to require reading a number of prior blog posts to understand it. Honestly I have a hard time understanding what you are even trying to say, and I haven't yet read the posts you've linked, but it would be helpful to at least summarize the point. I understand you're used to using these kinds of metaphors and analogies, but not all of us can understand the jargon, even though we're perfectly well involved with effective altruism.

Babies with a Detonator

If I understand you correctly... you're complaining that EAs strongly believe certain things (utilitarianism, veganism) to be true? Since when is it a problem to hold beliefs about these things? Surely, EAs are not badly dogmatic or irrational about any of these things, and have reasons for believing them. Could you perhaps justify your complaint? The way you're phrasing this, you're only saying "X bothers me" but you haven't given us a good picture of what needs to be changed and why.

The Size of the Problem:

This doesn't provide any rational reason for doubt or skepticism. Also, the blog posts you linked don't seem relevant to this.

The Complexity of The Solution:

Well, sure, but that's not much of a reason to not do anything. If anything, it's a reason to be more concerned about finding solutions, as long as the problems are tractable to some extent (which they are).

The Nature of the Solution:

Well, yes. No one said it would be easy. But again, that's not a reason to give up or not care.

How Large an Uncertainty:

There's been plenty of philosophical work on what suffering is and why it's bad. Honestly, that's not a hugely controversial or difficult topic in philosophy, so I'm not sure why you're bringing it out as a topic of uncertainty. There is more philosophical dispute regarding the nature of a good life, but it's not something that really stands in the way of most transhuman goals. Furthermore, as I've pointed out already, uncertainty in general doesn't provide any reason to not care about or not try to change things.

Macrostrategy is Hard:

This seems to be basically the same thing you've said earlier about uncertain and difficult problems, and my response will therefore be the same.

Probabilistic Reasoning = Reasoning by Analogy:

I'm not sure exactly what problems you're referring to, forgive me for not understanding all the jargon. It would be helpful if you actually discussed what sorts of problems there are in EA decision making and communities - you've merely given a description of some kind of problem, but with no substantiation of how badly it's being committed and what the consequences of this error are.

Excessive Trust in Institutions:

I'm quite sure that EAs in general do more due diligence on funding projects than you seem to give credit for. Moreover, institutions also accomplish much more long term work than individual projects, people and prizes. I'm afraid I can't really answer this any better because it depends on specific examples of institutions being effective or not, and it's not apparent that the institutions supported by EA are in general less effective than whatever alternatives you may have in mind.

Delusional Optimism:

It's not at all obvious that this is present, and you've given no reason to believe that it is. I haven't seen any of this.

Convergence of opinions may strengthen separation within EA:

This is badly speculative and neglects countless countervailing possibilities.

Honestly, I'd love to be able to discuss and figure out the concerns you raise, but this is so unspecific and simple a set of complaints that I'm not really sure what to make of it. I don't know if you intended the linked blogs to back up your ideas, but they don't seem to make things any clearer.

I'll bite:

1) Transhumanism: The evidence is for the paucity of our knowledge.

2) Status: People are being valued not for the expected value they produce, but for the position they occupy.

3) Analogy: Jargon from Musk, meaning copying and tweaking someone else's idea instead of thinking of a rocket, for instance, from the ground up - follow the chef and cook link.

4) Detonator: Key word was "cling to"; they stick with one they had to begin with, demonstrating lack of malleability.

5) Size: The size gives reason to doubt the value of action because to the extent you are moved by other forces (ethical or internal) the opportunity cost rises.

6) Nature: Same as five.

7) Uncertainty: Same here, more uncertainty, more opportunity cost.

8) Macrostrategy: As with the items before, if you value anything else but aggregative consequentialism indifferent to risk, the opportunity cost rises.

9) Probabilistic reasoning: No short summary; you'd have to search for reference class tennis, reference class, Bayes' theorem, and reasoning by analogy to get it. I agree it is unfortunate that terms sometimes stand for whole articles, books, papers, or ideologies; I said this was a direct reflection of my thinking with myself.

10) Trust in Institutions: link.

11) Delusional: Some cling to ideas, some to heroes, some cling to optimistic expectations; all of them are not letting the truth destroy what can be destroyed by it.

12) I'd be curious to hear the countervailing possibilities. Many people who are examining the movement going forward seem to agree this is a crucial issue.

Re. 10), it's worth saying that in the previous post Paul Christiano and I noted that it wasn't at all obvious to us why you thought individuals were generally cheaper than institutions, with tax treatment versus administrative overhead leading to an unclear and context-specific conclusion. You never replied, so I still don't know why you think this. I would definitely be curious to know why you think this.

I will write that post once I am financially secure with some institutional attachment. I think it is too important for me to write while I expect to receive funding as an individual, and I don't want people to think "he's saying that because he is not financed by an institution." Also see this.

1) Transhumanism: The evidence is for the paucity of our knowledge.

Transhumanists don't claim that we already know exactly how to improve the human condition, just that we can figure it out.

2) Status: People are being valued not for the expected value they produce, but for the position they occupy.

I'm not sure what kind of valuing you're referring to, but it doesn't sound like the same thing as the status seeking you first talked about, which is a question of behavior. Really I don't see where the problem is at all: we don't have any need to "value" people as if we were sacrificing them; it sounds like you're just getting worked up about who's getting the most praise and adoration! Praise and blame cannot be, and never will be, meted out on the basis of pure contribution. They will always be subject to various human considerations, and we'll always have to accept that.

Analogy: Jargon from Musk, meaning copying and tweaking someone else's idea instead of thinking of a rocket, for instance, from the ground up - follow the chef and cook link.

There's always a balance to be had between too many and too few original ideas. I see plenty of EAs who try to follow their own ideas instead of contributing to what we already know. Again, can you provide any evidence that this is an actual problem which is playing out in the movement, rather than your own personal opinion?

Detonator: Key word was "cling to"; they stick with one they had to begin with, demonstrating lack of malleability.

So you expect people to be changing their moral opinions more? Why? Would you complain that civil rights activists cling too much to beliefs in equality? How is this a problem with negative ramifications for the movement, and why does it give you grounds to be "skeptical"? People should be rational, but they shouldn't change their opinions for the sake of changing them. Sometimes they just have good reasons to believe that they are right.

Size: The size gives reason to doubt the value of action because to the extent you are moved by other forces (ethical or internal) the opportunity cost rises

I'm sorry but I don't understand how this backs up your point. It seems to actually contradict what you were saying, because if the problems are large then there is a huge cost to ignoring them.

6) Nature: Same as five.

I don't even think this makes sense. Opportunity cost and the complexity of the issue are orthogonal.

7) Uncertainty: Same here, more uncertainty, more opportunity cost.

No... opportunity cost is a function of expected value, not certainty.

8) Macrostrategy: As with the items before, if you value anything else but aggregative consequentialism indifferent to risk, the opportunity cost rises.

I can't tell what it is you would value that forces this dilemma.

9) Probabilistic reasoning: No short summary; you'd have to search for reference class tennis, reference class, Bayes' theorem, and reasoning by analogy to get it. I agree it is unfortunate that terms sometimes stand for whole articles, books, papers, or ideologies; I said this was a direct reflection of my thinking with myself.

If you can't figure out how to explain your idea quickly and simply, that should serve as a warning to double check your assumptions and understanding.

10) Trust in Institutions: link.

All I see is that you believe individual EA workers can be more productive than when they work in an institution, but this depends on all sorts of things. You're not going to find someone who will take $25,000 to fight global poverty as well as an institution is going to with the same money, and you're not going to recruit lone people to do AI safety research. You need an actual organization to be accountable for producing results. Honestly, I can't even imagine what this system would look like for many of the biggest EA cause areas. I also think the whole premise looks flawed: if a worker is willing to work for $25,000 when employed by an individual, then they will be willing to work for $25,000 when employed by an institution.

11) Delusional: Some cling to ideas, some to heroes, some cling to optimistic expectations; all of them are not letting the truth destroy what can be destroyed by it.

Okay, but like I said before - I can't take assumption after assumption on faith. We seem to be a pretty reasonable group of people as far as I can tell.

12) I'd be curious to hear the countervailing possibilities.

How about new people entering the EA movement? How about organizations branching out to new areas? How about people continuing to network and form social connections across the movement? How about EAs having a positive and cooperative mindset? Finally, I don't see any reason to expect ill outcomes to actually take place because I can't imagine what they would even be. I have a hard time picturing a dystopian scenario where nonprofits are at each other's throats as feasible and likely.

Furthermore, GWWC was always about poverty, and 80k was always about career selection - there doesn't seem to have been any of the congealing you mentioned.

~

Honestly, I don't know what to tell you. You seem to want a movement that is not only filled with perfectly rational people, but also aligns with all your values and ways of thinking. And that's just not reasonable to expect out of any movement. There are things about the EA movement that I would like to change, but they don't make me stop caring about the movement, and I don't stay involved merely out of dissatisfaction with the alternatives. When there's something I dislike, I just see what I can do to fix it, because that's the most rational and constructive thing I can do. You've clearly got a lot of experience with some of these issues, which is great and a valuable resource, but the best way to leverage that is to start a meaningful conversation with others rather than expecting us to simply agree with everything that you say.

I think we are falling prey to the transparency fallacy (https://en.wikipedia.org/wiki/Illusion_of_transparency) and the double transparency fallacy (http://lesswrong.com/lw/ki/double_illusion_of_transparency/), and that there are large inferential gaps in our conversation in both directions.

We could try to close the gaps by writing to one another here, but then both of us would end up sometimes taking a defensive stance, which could hinder the discussion's progress. My suggestion is that we do one of these:

1) We talk via Skype or Hangouts to understand each other's minds.
2) We wait organically for the inferential gaps to be filled and for both of us to grow as rationalists, and assume that we will converge more in the future.
3) The third alternative - something I didn't think of, but that you think might be a good idea.

Hi Diego. You don't need to lose hope just because the EA movement is drifting. You can still try to do your best in your own way. Thanks for sharing your experience :)
