konrad

419 karma · Joined Apr 2015 · 65 comments
Disclaimer: I have aphantasia and it seems that my subjective conscious experience is far from usual.[1] I don't have deep meditation experience; I have meditated a cumulative 300+ hours since 2015. I have never meditated for more than 2 hours a day.

I've found Headspace's mindfulness material unhelpful, albeit pleasant. It was just not what I needed, but I only figured that out after a year or so. Metta (loving-kindness) is the practice I consistently benefit from most, including for my attention and focus. It's the best "un-clencher" when I'm by myself, and it can get me into ecstatic states or cessation. "Open-monitoring" metta, especially, works well for me.

Related writing that has resonated with me and shaped my perspective and practice:

  1. Charlie Rogers on self-love.
  2. Nick Cammarata on attachment/"clenching", meditation as a laxative and how that affects happiness, clarity and motivation.
  3. A lot of the Qualia Research Institute's work is tangentially fascinating. They summarize the neuroscience on meditation quite well (afaict) and their scales of pain and pleasure map onto my experiences, too.

Maybe some of this can help you identify your own path forward?

I have a friend who did a personalized three-day retreat with a teacher and made major breakthroughs, e.g. overcoming multi-year depression and reaching the 6th jhana. The usual retreats are probably inefficient; it seems better to have a close relationship and tighter feedback loops with a teacher.

  1. ^

    I don't have an inner voice, I don't have mental imagery, my long-term memory is purely semantic (not sensory or episodic) and I have little active recall capacity. Essentially, my conscious mind is exceptionally empty as soon as I reduce external input. That doesn't mean I'm naturally great at focusing (there's so much input!). I'm just not fussed about things for longer than a few minutes because my attention is "stuck" in the present. I forget most things unless I build good scaffolds. I don't think this architecture is better or worse - there are tradeoffs everywhere. Happy to talk more about this if it seems relevant to anyone.

Thanks for writing this up, excited for the next!

One major bottleneck to the adoption of software and digital services is that the infrastructure doesn't exist: more than 50% of people don't have access to the bandwidth that makes our lives on the internet possible. https://www.brookings.edu/articles/fixing-the-global-digital-divide-and-digital-access-gap/ (That's also not solved by Starlink, because it's too expensive.)

For export of services to benefit the workers, you'd need local governance infrastructure that effectively maintains public goods, which also currently doesn't exist for most people.

As you hint at, access to the digital economy helps more developed areas at best; the worst off don't benefit. The poverty trap many are in is unfortunately harder to crack and requires substantial upfront investment, not trickle-down approaches. But most countries cannot get loans for such efforts, and companies have little incentive to advance/maintain such large public goods.

I haven't thought about this enough and would appreciate reading reactions to the following: For lasting poverty alleviation, I'd guess it's better to focus on scalable education, governance and infrastructure initiatives, powered by locals to enable integration into the culture. Does it seem correct that the development of self-determination creates positive feedback loops that also aid cooperation?

Also, this can all be aided by AI, but focusing on AI, as some suggest in the comments, seems unlikely to succeed at solving economic & governance development in the poorest areas. Would you agree that AI deployment can't obviously reduce the drivers of coordination failures at higher levels of governance, as those are questions of inter-human trust?

I didn't say Duncan can't judge OP. I'm questioning the judgment.

FWIW, this sounds pretty wrongheaded to me: anonymization protects OP from more distant (mis)judgment, while their entourage is aware that they posted this. That seems like fair game to me, and not at all what you're implying.

We didn't evolve to operate at these scales, so this seems like a good solution.

Dear Nuño, thank you very much for the very reasonable critiques! I had intended to respond in depth, but it has repeatedly turned out not to be the best use of my time. I hope you understand. Your effort is thoroughly appreciated and continues to inform our communications with the EA community.

We have now secured around two years of funding and are ramping up our capacity. Until we can bridge the inferential gap more broadly, our blog offers insight into what we're up to. However, it is written for a UN audience and is non-exhaustive, so you may understandably remain on the fence.

Maybe a helpful reframe that avoids some of the complications of "interesting vs important" by being a bit more concrete is "pushing the knowledge frontier vs applied work"?

Many of us get into EA because we're excited about crucial-considerations-type things, and too many get stuck there, because you can currently think about them ~forever while contributing practically nothing to securing posterity. Most problems I see beyond AGI safety aren't bottlenecked by new intellectual insights (though sometimes those can still help). And even AGI safety might turn out in practice to come down to a leadership and governance problem.

This sounds great. It feels like a more EA-accessible reframe of the core value proposition of Nora's and my post on tribes.

tl;dr please write that post

I'm very strongly in favor of this level of transparency. My co-founder Max has been doing some work along those lines in coordination with CEA's community health team. But if I understand correctly, they're not that up front about why they're reaching out. Being more "on the nose" about it, paired with a clear signal of support, would be great, because these people are usually well-meaning and can struggle to parse ambiguous signals. Of course, that's a question of qualified manpower - arguably our most limited resource - but we shouldn't let our limited capacity for immediate implementation stand in the way of inching ever closer to our ideal norms.

Thanks very much for highlighting this so clearly, yes indeed. We are currently in touch with one potential such grantmaker. If you know of others we could talk to, that would be great.

The amount isn't trivial at ~600k. Max's salary also guarantees my financial stability beyond the ~6 months of runway I have. It's what has allowed us to make mid-term plans and me to quit my CBG.
