SummaryBot

470 karma · Joined Aug 2023

Bio

This account is used by the EA Forum Team to publish summaries of posts.

Comments (506)

Executive summary: The scaling of AI training runs is expected to slow significantly after GPT-5, because continuing at the current rate would require unsustainable amounts of power, equivalent to the output of multiple nuclear power plants.

Key points:

  1. Current large data centers consume around 100 MW of power, limiting the number of GPUs that can be supported.
  2. GPT-4 used an estimated 15k to 25k GPUs, requiring 15 to 25 MW of power.
  3. A 10-fold increase in GPU count beyond GPT-5 would require a 1 to 2.5 GW data center, which doesn't exist today and would take years to build (a back-of-the-envelope check follows this list).
  4. After GPT-5, the focus will shift to improving software efficiency, scaling at inference time, and decentralized training using multiple data centers.
  5. Scaling GPU counts will be slowed by regulations on land use, energy production, and build times, potentially pushing the construction of training data centers to low-regulation countries.
  6. The total growth rate of effective compute is expected to decrease significantly after GPT-5, from ~22x/year (or ~6.2x/year using pre-ChatGPT investment growth values) to ~4x/year, assuming no efficient decentralized training is developed.
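
A minimal Python sketch cross-checking the arithmetic in points 1-3. The ~1 kW-per-GPU figure is an assumption inferred from the post's pairing of 15-25k GPUs with 15-25 MW, and the GPT-5 GPU count is likewise hypothetical:

```python
# Back-of-the-envelope check of the power figures above.
# ASSUMPTION (implied by the 15-25k GPUs <-> 15-25 MW pairing,
# not stated independently): ~1 kW per GPU including overhead.
KW_PER_GPU = 1.0

def power_mw(num_gpus: int, kw_per_gpu: float = KW_PER_GPU) -> float:
    """Total facility power in megawatts for a given GPU count."""
    return num_gpus * kw_per_gpu / 1_000

gpt4_gpus = 20_000                  # midpoint of the 15k-25k estimate
print(power_mw(gpt4_gpus))          # ~20 MW, fits in today's ~100 MW sites
print(power_mw(gpt4_gpus * 10))     # a hypothetical 10x (GPT-5 scale): ~200 MW
print(power_mw(gpt4_gpus * 100))    # 10x beyond that: ~2,000 MW = 2 GW
```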


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: Ruth Harrison and Henry Spira pioneered effective advocacy strategies in the 20th century that helped shape the modern animal welfare movement, especially for farmed animals.

Key points:

  1. Harrison's 1964 book "Animal Machines" exposed factory farming cruelty, leading to the first on-farm animal welfare laws.
  2. Spira ran the first successful campaigns to end specific animal tests and win corporate animal welfare policies from companies like Revlon and McDonald's.
  3. They focused narrowly on winnable campaigns against the worst practices, coupling moderate demands with hard-hitting tactics.
  4. They combined public pressure with private dialogue to get results, and were willing to compromise to achieve progress.
  5. They were meticulous about factual accuracy to maintain credibility.
  6. They focused on making external progress for animals rather than on internal movement debates.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The Mission Motor's pilot intervention to train and support animal and vegan advocacy organizations in Monitoring, Evaluation, and Learning (MEL) has revealed key lessons about the state of MEL in the animal movement, operational insights, and opportunities to advance evidence-based work in the animal cause area.

Key points:

  1. Interest in and perceived need for MEL exists in the animal cause area, but MEL is often seen as complex and specialized support is limited.
  2. Most animal and vegan advocacy charities lack the capacity to fully engage with MEL. The Mission Motor no longer strives to implement complete MEL systems for all participants and instead focuses on incremental steps and gathering key data.
  3. MEL tools from other cause areas are useful for the animal cause area, but there are cause-area-specific challenges such as small sample sizes and a limited evidence base.
  4. Funders can drive increased evidence-based work by funding MEL, requiring suitable MEL, and embracing a learning attitude.
  5. The Mission Motor has deprioritized its cohort model and will focus on supporting "bandwagon interventions" to amplify impact. Using the right MEL tools at the right time is crucial for success.
  6. To advance MEL in the animal cause area, individuals can explore MEL for their charity, join the MEL Slack channel, donate or volunteer with research organizations, or work/volunteer with The Mission Motor.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The EA meta funding landscape saw rapid growth and contraction from 2021-2023, with Open Philanthropy being the largest funder and longtermism receiving the most funding, while the landscape faces high uncertainty in the short-to-medium term.

Key points:

  1. EA meta funding grew from $109M in 2021 to $193M in 2022, before shrinking to $117M in 2023, driven largely by changes in Open Philanthropy's spending.
  2. Funding allocation by cause, in descending order: longtermism ($274M) >> global health and development ($67M) > cross-cause ($53M) > animal welfare ($25M); see the cross-check after this list.
  3. Funding allocation by intervention, in descending order: other/miscellaneous ($193M) > talent ($121M) > prioritization ($92M) >> effective giving ($13M).
  4. The EA meta grantmaking landscape is highly concentrated, with core funders (Open Philanthropy, EA Funds, and SFF) making up 82% of total funding.
  5. Crucial considerations for prioritizing direct vs meta work include the relative cost-effectiveness of the direct work and whether opportunities are getting better or worse.
  6. The meta funding landscape faces high uncertainty in the short-to-medium term due to turnover at Open Philanthropy, uncertainty at EA Funds, and the ongoing consequences of the FTX collapse.
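
As a quick cross-check, here is a minimal sketch using only the figures reported above: the yearly totals, the by-cause totals, and the by-intervention totals all sum to the same three-year figure of $419M.

```python
# Consistency check of the funding figures summarized above (all in $M).
by_year = {2021: 109, 2022: 193, 2023: 117}
by_cause = {"longtermism": 274, "global health & development": 67,
            "cross-cause": 53, "animal welfare": 25}
by_intervention = {"other/miscellaneous": 193, "talent": 121,
                   "prioritization": 92, "effective giving": 13}

total = sum(by_year.values())
assert total == sum(by_cause.values()) == sum(by_intervention.values())

for cause, amount in sorted(by_cause.items(), key=lambda kv: -kv[1]):
    print(f"{cause:28s} ${amount:3d}M ({amount / total:4.0%})")
print(f"core funders (the post's 82% figure): ~${0.82 * total:.0f}M of ${total}M")
```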


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The definition of veganism is ambiguous and arbitrary, and a more nuanced view is needed that recognizes the continuum of harm reduction and the varying levels of effort required by individuals in different circumstances.

Key points:

  1. The Vegan Society's definition of veganism as avoiding animal exploitation "as far as is possible and practicable" is vague and open to interpretation.
  2. The conventional definition used by most vegans, which requires 100% avoidance of animal products, is philosophically questionable and arbitrary.
  3. A consequentialist definition that considers all downstream effects of actions is impractical and problematic.
  4. Harm reduction lies on a continuum, and the effort required to reduce harm varies for individuals based on their circumstances.
  5. Insisting on rigid definitions of veganism is unhelpful, and a range of "almost vegan" lifestyles should be recognized and encouraged.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: Cybersecurity of frontier AI models is a key concern for AI labs and regulators, with a focus on protecting user data, model weights, codebases, and training data from leaks that could enable misuse or accelerate competition.

Key points:

  1. AI labs are concerned about leaks of user data (violating privacy laws), model weights (enabling uncontrolled model use), codebases (revealing IP to competitors), and training data (accelerating competitor capabilities).
  2. Regulators share these concerns and want to prevent leaks that could benefit adversaries or allow unregulated access to potentially dangerous AI models.
  3. China and the EU have strong data privacy laws (e.g. the EU's GDPR) that apply to user data from AI models. The US is developing reporting requirements on cybersecurity measures for leading AI labs.
  4. Cybersecurity requirements beyond data privacy are likely to target a small group of top AI labs, which already have strong incentives and capabilities to protect their IP.
  5. Governments have historically struggled to consistently enforce data privacy laws, and the complexity of AI model security poses additional challenges. However, having fewer organizations to track may aid enforcement.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: The global food trade system is increasingly complex yet concentrated, making it vulnerable to cascading disruptions from export bans, chokepoint blockages, and reliance on a few key exporters, which could lead to societal collapse as seen in the Late Bronze Age.

Key points:

  1. Around a quarter of global food production is traded, with increasing complexity but also concentration among a few major exporters like the US, Australia, and Russia.
  2. Export bans, often triggered by food shortages or neighboring countries' actions, could cause cascading disruptions in the trade network (a toy illustration follows this list).
  3. Chokepoints like the Panama Canal and the Strait of Malacca are critical vulnerabilities, made more acute by climate change and geopolitical tensions.
  4. Concentration exists in key crops, exporting nations, and trading firms, driven by historical factors like colonialism and capitalism.
  5. The Late Bronze Age Collapse demonstrates how the loss of key trade and political nodes can unravel an interconnected system.
  6. While global trade overall may be becoming more resilient, modeling adaptations to major disruptions remains challenging, highlighting the need to prioritize food trade resilience.
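
To make the cascade mechanism in point 2 concrete, here is a toy Python sketch. The countries, volumes, and threshold are all illustrative assumptions, not data from the post: a country whose imports fall far enough below baseline imposes its own export ban, and the ban can propagate.

```python
# Toy cascade of export bans on an illustrative trade graph.
# All countries, volumes, and the threshold are hypothetical.
exports = {                     # exporter -> {importer: volume}
    "A": {"C": 40, "D": 20},
    "B": {"C": 10, "E": 30},
    "C": {"D": 15, "E": 10},
    "D": {"E": 5},
}
THRESHOLD = 0.5                 # ban exports if imports fall below 50% of baseline

def imports_of(banned: set[str]) -> dict[str, float]:
    """Total imports per country, given a set of banned exporters."""
    totals: dict[str, float] = {}
    for exporter, flows in exports.items():
        if exporter in banned:
            continue
        for importer, volume in flows.items():
            totals[importer] = totals.get(importer, 0.0) + volume
    return totals

baseline = imports_of(set())
banned = {"A"}                  # initial shock: country A bans its exports
while True:
    current = imports_of(banned)
    newly = {c for c in exports
             if c not in banned
             and current.get(c, 0.0) < THRESHOLD * baseline.get(c, 0.0)}
    if not newly:
        break
    banned |= newly             # stressed exporters impose their own bans
print(sorted(banned))           # -> ['A', 'C', 'D']: A's ban cascades to C and D
```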


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: Sustainable fishing policies and demand reductions for wild-caught aquatic animals may counterintuitively increase fishing catch in the near term, but persistent demand reductions could potentially decrease catch over longer timelines.

Key points:

  1. Where there is overfishing, reducing fishing pressure allows more fish to be caught in the long run (see the sketch after this list).
  2. Sustainable fishery management policies generally aim to maximize or maintain high catch levels, not reduce catch.
  3. In the near term (10-20 years), demand reductions seem slightly more likely to increase than decrease catch, given the current prevalence of overfishing.
  4. Over longer timelines, demand reductions may decrease catch as overfishing is eliminated and with eventual human population decline, but this is uncertain.
  5. Efforts to reduce demand today could be made redundant by large independent drops in demand from factors like catastrophes or technological advances.
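
The dynamic in point 1 is the standard surplus-production result. Below is a minimal sketch using the textbook Schaefer model; the model choice and all parameter values are illustrative assumptions, not taken from the post:

```python
# Schaefer surplus-production model: equilibrium yield vs fishing effort.
# Parameters are illustrative, not from the post.
r, K, q = 0.5, 1.0, 1.0      # intrinsic growth rate, carrying capacity, catchability

def equilibrium_yield(E: float) -> float:
    """Long-run annual catch under constant fishing effort E."""
    biomass = K * (1 - q * E / r)     # equilibrium biomass under effort E
    return max(0.0, q * E * biomass)  # catch = catchability * effort * biomass

E_msy = r / (2 * q)                   # effort giving maximum sustainable yield
for E in (0.45, 0.35, E_msy, 0.15):
    label = "overfished" if E > E_msy else "at/below MSY"
    print(f"E={E:.2f} ({label}): yield={equilibrium_yield(E):.3f}")
# Above E_msy, *reducing* effort raises long-run catch, matching point 1.
```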


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: Government regulation of AI is likely to exacerbate the risks of AI misuse and misalignment while limiting the potential benefits, due to governments' incentives for myopia, military competition, and protecting special interests.

Key points:

  1. AI risks come in two forms: misuse by humans and misalignment of AI systems with human interests.
  2. Governments have poor incentives to mitigate long-term, global risks and strong incentives to use AI for military advantage and domestic control.
  3. Government regulation is likely to preserve the most dangerous misuse risks, potentially exacerbate misalignment risks, and slow down beneficial AI progress.
  4. Even successful AI safety advocacy can be redirected by government incentives, as seen with environmental regulations now hindering decarbonization efforts.
  5. Private incentives for AI development, while imperfect, are better aligned with reducing existential risk than government incentives.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Executive summary: Motivation gaps between advocates and skeptics of a cause can lead to an imbalance in the quality and quantity of arguments on each side, making it difficult to accurately judge the merits of the cause based on the arguments alone.

Key points:

  1. Advocates of a cause (e.g. religion, AI risk) are intrinsically motivated to make high-effort arguments, while skeptics lack inherent motivation to do the same.
  2. This leads to an asymmetry where advocate arguments appear more convincing, even if the cause itself may be flawed.
  3. Counter-motivations like moral backlash, politics, money, annoyance, and entertainment can somewhat close the motivation gap for skeptics, but introduce their own biases.
  4. In-group criticism alone is insufficient due to issues like jargon barriers, agreement bias, evaporative cooling, and conflicts of interest.
  5. To account for motivation gaps, adjust the weight given to each side's arguments, be more charitable to critics, seek out neutral parties to evaluate the cause, and signal boost high-effort critiques.
  6. EA should make an extra effort to highlight good-faith criticism to encourage more productive engagement from skeptics.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
