
Hauke Hillebrandt

CEO @ hauke.substack.com
3612 karma · Joined Dec 2014 · Working (6-15 years) · London, UK
hauke.substack.com

Bio

Follow me on hauke.substack.com 

I'm an independent researcher working on EA topics (Global Priorities Research, Longtermism, Global Catastrophic Risks, and Economics).

How others can help me

Looking for collaborators, hires, and grants.

How I can help others

I can give advice and offer research collaborations.

My current research projects.

Sequences (1)

AI Competition

Comments (428)

AI labs tend to partner with Big Tech for money, data, compute, scale, etc. (e.g. Google/DeepMind, Microsoft/OpenAI, and Amazon/Anthropic), presumably to compete better. If they're already competing hard now, then it seems unlikely that they'll coordinate much on slowing down in the future.

Also, it seems like a function of timelines: antitrust advocates argue that breaking up firms or preventing mergers slows an industry down in the short run but speeds it up in the long run by increasing competition. But if competition is usually already healthy, as libertarians often argue, then antitrust interventions might slow industries down even in the long run.

AI policy folks and research economists could engage with the arguments and the cited literature.

Grassroots folks like Pause AI sympathizers could put pressure on politicians and regulators to investigate this more (some claims, like the tax avoidance stuff, seem the most robustly correct and good).

I only said we should look into this more and have reviewed the pros and cons from different angles (e.g. not only consumer harms). As you say, the standard argument is that breaking up monopolists like Google increases consumer surplus and this might also apply here. 

But I'm not sure to what extent, in the short and long run, this increases or decreases AI risks and/or race dynamics, within the West or between countries. This approach might be more elegant than Pausing AI, which definitely reduces consumer surplus.

Cool instance of black box evaluation - seems like a relatively simple study technically but really informative.

Do you have more ideas for future research along those lines you'd like to see?

It's AI-generated with Gemini 1.5 Pro. I had initially indicated that, but then had formatting issues, had to repaste, and forgot to add it back - now fixed.

Reimagining Malevolence: A Primer on Malevolence and Implications for EA - AI Summary

This extensive post delves into the concept of malevolence, particularly within the context of effective altruism (EA).

Key points:

Defining Malevolence:

The post critiques the limitations of the Dark Triad/Tetrad framework and proposes the Dark Factor (D) as a more comprehensive model. D focuses on the willingness to cause disutility to others, encompassing traits like callousness, sadism, and vindictiveness.

The post also distinguishes between callousness (lack of empathy) and antagonism (active desire to harm), and further differentiates reactive antagonism (vengefulness) from instrumental antagonism (premeditated harm for personal gain).

Why Malevolence Persists:

Despite its negative consequences, malevolence persists due to evolutionary factors such as varying environmental pressures, frequency-dependent selection, and polygenic mutation-selection balance.

Chaotic and lawless environments tend to favor individuals with malevolent traits, providing them with opportunities for power and survival.

Factors Amplifying Malevolence:

  • Admiration: The desire for power and recognition can drive individuals to seek positions of influence, amplifying the impact of their malevolent tendencies.
  • Boldness: The ability to remain calm and focused in stressful situations can be advantageous in attaining power.
  • Disinhibition/Planfulness: A balance of impulsivity and self-control can be effective in achieving goals, both good and bad.
  • Conscientiousness: Hard work and orderliness contribute to success in various domains, including those with potential for harm.
  • General Intelligence: Higher intelligence can enhance an individual's ability to plan and execute harmful actions.
  • Psychoticism: Paranoia and impaired reality testing can lead to harmful decisions and actions.

Recommendations for EA:

  • Screening: Implementing psychometric measures to assess malevolence in individuals seeking positions of power.
  • Awareness: Recognizing that malevolence is not always linked to overt antisocial behavior or mental illness.
  • Intervention: While challenging, interventions should ideally target the neurological and biological underpinnings of malevolence, particularly during early development.
  • EA Community: While EA's values and selection processes may offer some protection against malevolent actors, its emphasis on rationality and risk-neutrality could inadvertently attract or benefit such individuals. Vigilance and robust institutions are crucial.
  • Compassion and Action: The post concludes by acknowledging the complexity of human nature and the potential for evil within all individuals. However, it emphasizes the need to draw lines and prevent individuals with high levels of malevolence from attaining positions of power. This requires a combination of compassion, understanding, and decisive action to safeguard the well-being of society.

Great comment - thanks so much!

Regarding CCEI's effect in shifting deploy$ to RD&D$:

  • Yes, in the Guesstimate model the confidence interval ranged from 0.1% to 1%, lognormally distributed, with a mean of ~0.4%
  • With UseCarlo I used a metalog distribution with parameters 0%, 0.1%, 2%, 10%, resulting in a mean of ~5% (a rough comparison is sketched below)
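
As a rough sanity check of where the ~0.4% figure comes from, here is a minimal sketch, assuming the 0.1%-1% interval is interpreted as a lognormal 90% confidence interval (Guesstimate's usual convention; the exact model settings are my assumption, not a statement about the original model):

```python
# Back out the mean of a lognormal whose 90% CI is 0.1% to 1%.
import math

lo, hi = 0.001, 0.01          # 0.1% and 1%, as fractions
z90 = 1.6449                  # z-score for the 5th/95th percentiles

mu = (math.log(lo) + math.log(hi)) / 2            # log-space midpoint (median)
sigma = (math.log(hi) - math.log(lo)) / (2 * z90)

mean = math.exp(mu + sigma**2 / 2)                # lognormal mean
print(f"implied lognormal mean: {mean:.2%}")      # ~0.40%, matching the figure above
```

The point of the sketch is just that the two headline means differ mainly because the metalog puts real weight on outcomes up to ~10%, which pulls its mean up to ~5%, roughly an order of magnitude above the lognormal's ~0.4%.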

So you're right, there is indeed about an order of magnitude difference between the two estimates:

  • This is mostly driven by my assigning some credence to the possibility that CCEI might have had as much as a 10% influence, which I wouldn't rule out entirely.
  • However, the confidence intervals of the two estimates are overlapping.
  • I agree this is the weakest part of the analysis. As I highlighted, it's a guesstimate motivated by the qualitative analysis that CCEI is part of the coalition of key movers and shakers that shifted budget increases to energy RD&D.
  • I think both estimates are roughly valid given the information available. Without further analysis, I don't have enough precision to zero in on the most likely value. 
  • I lost access to UseCarlo during the write-up, and the analysis was then delayed for quite some time (I had initially pitched it to FTX as an Impact NFT).
  • I just wanted to get the post out rather than delay further. With more resources, one could certainly dig deeper and make the analysis more rigorous and detailed. But I hope it provides a useful starting point for discussion and further research. 
  • One could further nuance this analysis, e.g. by calculating the marginal effect of our $1M on US climate policy philanthropy at the ~$55M level at the time vs. its current level.

Thanks also for the astute observation about estimating expected cost-effectiveness in t/$ vs $/t. You raise excellent points and I agree it would be more elegant to estimate it as t/$ for the reasons you outlined.
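
To illustrate that point (my own illustration, not part of the original analysis): because cost-effectiveness is a reciprocal, the expected t/$ is not simply the reciprocal of the expected $/t, and for right-skewed uncertainty the gap can be large. The distribution below is a made-up placeholder, not the post's actual cost-per-tonne numbers.

```python
# Jensen's inequality: E[1/X] != 1/E[X] for an uncertain cost per tonne X.
import math
import random

random.seed(0)
cost_per_tonne = [random.lognormvariate(math.log(10), 1.0) for _ in range(100_000)]

mean_dollars_per_t = sum(cost_per_tonne) / len(cost_per_tonne)
mean_t_per_dollar = sum(1 / c for c in cost_per_tonne) / len(cost_per_tonne)

print(f"E[$/t]          ~ {mean_dollars_per_t:.1f}")          # ~16.5
print(f"1 / E[$/t]      ~ {1 / mean_dollars_per_t:.3f} t/$")  # ~0.06
print(f"E[t/$] directly ~ {mean_t_per_dollar:.3f} t/$")       # ~0.16, noticeably larger
```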

I really appreciate you taking the time to engage substantively with the post.

AI Summary of the "Quick Update on Leaving the Board of EV" Thread (including comments):

Rebecca Kagan's resignation from the board of Effective Ventures (EV) due to disagreements regarding the handling of the FTX crisis has sparked an intense discussion within the Effective Altruism (EA) community. Kagan believes that the EA community needs an external, public investigation into its relationship with FTX and its founder, Sam Bankman-Fried (SBF), to address mistakes and prevent future harm. She also calls for clarity on EA leadership and their responsibilities to avoid confusion and indirect harm.

The post generated extensive debate, with many community members echoing the call for a thorough, public investigation and postmortem. They argue that understanding what went wrong, who was responsible, and what structural and cultural factors enabled these mistakes is crucial for learning, rebuilding trust, and preventing future issues. Some point to the concerning perception gap between those who had early concerns about SBF and those who seemingly ignored or downplayed these warnings.

However, others raise concerns about the cost, complexity, and legal risks involved in conducting a comprehensive investigation. They worry about the potential for re-victimizing those negatively impacted by the FTX fallout and argue that the key facts may have already been uncovered through informal discussions.

Alternative suggestions include having multiple individuals with relevant expertise conduct post-mortems, focusing on improving governance and organizational structures, and mitigating the costs of speaking out by waiving legal obligations or providing financial support for whistleblowers.

The thread also highlights concerns about recent leadership changes within EA organizations. Some argue that the departure of individuals known for their integrity and thoughtfulness regarding these issues raises questions about the movement's priorities and direction. Others suggest that these changes may be less relevant due to factors such as the impending disbanding of EV or reasons unrelated to the FTX situation.

Lastly, the discussion touches on the concept of "naive consequentialism" and its potential role in the FTX situation and other EA decisions. The OpenAI board situation is also mentioned as an example of the challenges facing the EA community beyond the FTX crisis, suggesting that the core issues may lie in the quality of governance rather than a specific blind spot.

Overall, the thread reveals a community grappling with significant trust and accountability issues in the aftermath of the FTX crisis. It underscores the urgent need for the EA community to address questions of transparency, accountability, and leadership to maintain its integrity and continue to positively impact the world.

What are the most surprising things that emerged from the thread?

Based on the summaries, a few surprising or noteworthy things emerged from the "Quick Update on Leaving the Board of EV" thread:

  1. The extent of disagreement and concern within the EA community regarding the handling of the FTX crisis, as highlighted by Rebecca Kagan's resignation from the EV board and the subsequent discussion.
  2. The revelation of a significant perception gap between those who had early concerns about Sam Bankman-Fried (SBF) and those who seemingly ignored or downplayed these warnings, suggesting a lack of effective communication and information-sharing within the community.
  3. The variety of perspectives on the necessity and feasibility of conducting a public investigation into the EA community's relationship with FTX and SBF, with some advocating strongly for transparency and accountability, while others raised concerns about cost, complexity, and potential legal risks.
  4. The suggestion that recent leadership changes within EA organizations may have been detrimental to reform efforts, with some individuals known for their integrity and thoughtfulness stepping back from their roles, raising questions about the movement's priorities and direction.
  5. The mention of the OpenAI board situation as another example of challenges facing the EA community, indicating that the issues extend beyond the FTX crisis and may be rooted in broader governance and decision-making processes.
  6. The discussion of "naive consequentialism" and its potential role in the FTX situation and other EA decisions, suggesting a need for the community to re-examine its philosophical foundations and decision-making frameworks.
  7. The emotional weight and urgency conveyed by many community members regarding the need for transparency, accountability, and reform, underscoring the significance of the FTX crisis and its potential long-term impact on the EA movement's credibility and effectiveness.

These surprising elements highlight the complex nature of the challenges facing the EA community and the diversity of opinions within the movement regarding the best path forward.

He did mention the head of the FTX Foundation, which was Nick Beckstead. I'm not sure about the others, but it would still seem weird for them to say it like that - maybe one of the younger staff members said something like 'I care more about the far future' or something along the lines of 'GiveDirectly is too risk-averse'. But I'd still think he's painting quite the stereotype of EA here.

Pointing to white papers from think tanks that you fund isn't a good evidentiary basis to support the claim of R&D's cost effectiveness.

 

I cite a range of papers from academia, government, and think tanks in the appendix. You don't cite anything either - those are just, like... your opinions, no?

The R&D benefit for advanced nuclear since the 1970s has yielded a net increase in price for that technology

Are you saying that the more we invest in R&D, the higher the costs? I agree that nuclear has been getting more expensive on net, but R&D can still be driving the price down relative to the counterfactual without it.

After that, all the technology gains came from scaling, not R&D.

What about the perovskite fever from the mid '10s?

Also, there's a long lag between research spending and observable cost reductions.

And historic estimates are not necessarily indicative of future gains; we should expect diminishing returns.

Furthermore, most of the money in BIL and IRA were for demonstration projects - advanced nuclear, the hydrogen hubs, DAC credits. Notably NOT research and development. You make a subtle shift in your cost effectiveness table where you use unreviewed historic numbers on cost-effectiveness for research and development, and then apply that to the much larger demonstration and deployment dollars. Apples and oranges. The needs for low TRL tech is very different from high TRL tech.

I've used R&D as shorthand for RD&D here, but I do cite RD&D projections - see my calculation. Do you think those numbers are off? What do you think they are? All models are wrong, as they say.

Lastly, a Bill Gates retweet is not the humble brag you think it is. Bill has a terrible track record of success in energy ventures; he's uninformed and impulsive. Saying Bill Gates likes your energy startup is like saying Jim Cramer likes your stock. Both indicate a money-making opportunity for those who do the opposite.

That was a straightforward brag, because he has millions of followers on X. I'm quite critical of Gates - I've blogged about this here. But maybe we should also give more credit to doing high-risk, high-reward stuff even if it doesn't work out... like Solyndra?
