Carla Cremer in Vox (a). 

JWS

Reflecting on the piece as a whole, I think there are some very legitimate concerns being brought up, and I think that Cremer mostly comes across well (as she has consistently imo, regardless of whether you agree or disagree with her specific ideas for reform) - with the exception of a few potshots laced into the piece[1]. I think it would be a shame if she were to completely disavow engaging with the community, so I hope that where there is disagreement we can be constructive, and where there is agreement we can actually act rather than just talk about it.

Some specific points from the article:

  • She does not think that longtermism or utilitarianism was the prime driver behind SBF's actions, so please update towards her not hating longtermism. Where she is against it, it's because it's difficult to have good epistemic feedback loops for deciding whether our ideas and actions actually are doing the most good (or even just doing good better):

"Futurism gives rationalization air to breathe because it decouples arguments from verification."[2]

  • Another underlying approach is to be wary of the risks of optimisation, which shouldn't be too controversial? It reminds me of Galef's 'Straw Vulcan' - relentlessly optimising towards your current idea of The Good doesn't seem like a plausibly optimal strategy to me. It also seems very consistent with the 'Moral Uncertainty' approach.

"a small error between a measure of that which is good to do and that which is actually good to do suddenly makes a big difference fast if you’re encouraged to optimize for the proxy. It’s the difference between recklessly sprinting or cautiously stepping in the wrong direction. Going slow is a feature, not a bug."

  • One main thrust of the piece is her concern with the institutional design of the EA space: 

"Institutional designs must shepherd safe collective risk-taking and help navigate decision-making under uncertainty..."

In what direction would she like EA to move? In her own words:

"EA should offer itself as the testing ground for real innovation in institutional decision-making."

We have a whole cause area about that! My prior is that it hasn't had as much sunlight as other EA cause areas though.

There are some fairly upsetting quotes from people who have contacted her because they don't feel like they can voice their doubts openly. I wish we could find a way to remove that fear asap.

"It increasingly looks like a weird ideological cartel where, if you don’t agree with the power holders, you’re wasting your time trying to get anything done.”

Summary:

On a second reading, there were a few more potshots than I initially remembered, but I suppose this is a Vox article and not an actual set of reform proposals - something more like that can probably be found in the Democratising Risk article itself.

But I genuinely think that there's a lot of value here for us to learn from. And I hope that we can operationalise some ways to improve our own community's institutions, so that the EA community at the end of 2023 looks much healthier than the one we have right now.

  1. ^

    In particular, the shot at Cold Takes being "incomprehensible" didn't sit right with me - Holden's blog is a really clear presentation of the idea that misaligned AI can have significant effects on the long-run future, regardless of whether you agree with it or not.

  2. ^

    I think this is similar to the criticism that Vaden Masrani made of the philosophy underlying longtermism.

In particular, the shot at Cold Takes being "incomprehensible" didn't sit right with me - Holden's blog is a really clear presentation of the idea that misaligned AI can have significant effects on the long-run future, regardless of whether you agree with it or not.

Agree that her description of Holden's thing is uncharitable, though she might be describing the fact that he self-describes his vision of the future as 'radically unfamiliar... a future galaxy-wide civilization... seem[ing] too "wild" to take seriously... we live in a wild time, and should be ready for anything... This thesis has a wacky, sci-fi feel.'

(Cremer points to this as an example of an 'often-incomprehensible fantasy about the future')

The quality of reasoning in the text seems somewhat troublesome. Using two paragraphs as an example:

On Halloween this past year, I was hanging out with a few EAs. Half in jest, someone declared that the best EA Halloween costume would clearly be a crypto-crash — and everyone laughed wholeheartedly. Most of them didn’t know what they were dealing with or what was coming. I often call this epistemic risk: the risk that stems from ignorance and obliviousness, the catastrophe that could have been avoided, the damage that could have been abated, by simply knowing more. Epistemic risks contribute ubiquitously to our lives: We risk missing the bus if we don’t know the time, we risk infecting granny if we don’t know we carry a virus. Epistemic risk is why we fight coordinated disinformation campaigns and is the reason countries spy on each other.

Still, it is a bit ironic for EAs to have chosen ignorance over due diligence. Here are people who (smugly at times) advocated for precaution and preparedness, who made it their obsession to think about tail risks, and who doggedly try to predict the future with mathematical precision. And yet, here they were, sharing a bed with a gambler against whom it was apparently easy to find allegations of shady conduct. The affiliation was a gamble that ended up putting their beloved brand and philosophy at risk of extinction.


It appears that a chunk of Zoe's epistemic risk bears a striking resemblance to financial risk. For instance, if one simply knew more about tomorrow's stock prices, they could sidestep all stock market losses and potentially become stupendously rich.

This highlights the fact that gaining knowledge in certain domains can be a difficult task, with big hedge funds splashing billions and hiring some of the brightest minds just to gain a slight edge in knowing a bit more about asset prices. The same applies to having more info about which companies may go belly up or engage in fraud.

Acquiring more knowledge comes at a cost. Processing knowledge comes at a cost. Choosing ignorance is mostly not a result of recklessness or EA institutional design but a practical choice given the resources required to process information. It's actually rational for everyone to ignore most information most of the time (this is standard econ; see rational inattention and the extensive literature on the topic).

One real question in this space is whether EAs have allocated their attention wisely. The answer seems to be "mostly yes." In the case of FTX, heavyweights like Temasek, Sequoia Capital, and SoftBank with billions on the line did their due diligence but still missed what was happening. Expecting EAs to be better evaluators of FTX's health than established hedge funds is somewhat odd. EAs, like everyone else, face the challenge of allocating attention, and their expertise lies in "using money for good" rather than "evaluating the health of big financial institutions". For the typical FTX grant recipient to assume they need to be smarter than Sequoia or SoftBank about FTX would likely not be a sound decision.
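To make that attention trade-off concrete, here is a toy value-of-information calculation; every number below is invented purely for illustration and is not an estimate of the actual FTX situation.

```python
# Toy "rational inattention" style calculation for a hypothetical grant recipient.
# All numbers are made up for illustration only.

p_detect = 0.01          # assumed chance their own digging uncovers the problem
loss_avoided = 200_000   # assumed value protected by acting on what they find
hours_needed = 100       # assumed hours of extra due diligence required
value_per_hour = 500     # assumed opportunity cost of each of those hours

expected_benefit = p_detect * loss_avoided   # = 2,000
cost = hours_needed * value_per_hour         # = 50,000

# Under these (made-up) numbers, staying "ignorant" is the rational choice.
print(expected_benefit, cost, expected_benefit > cost)  # 2000.0 50000 False
```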

One real question in this space is whether EAs have allocated their attention wisely. The answer seems to be "mostly yes." In the case of FTX, heavyweights like Temasek, Sequoia Capital, and SoftBank with billions on the line did their due diligence but still missed what was happening. Expecting EAs to be better evaluators of FTX's health than established hedge funds is somewhat odd.

Two things: 

  1. Sequoia et al. isn't a good benchmark – 

    (i) those funds were doing diligence in a very hot investing environment where there was a substantial tradeoff between depth of diligence and likelihood of closing the deal. Because EAs largely engaged FTX on the philanthropic side, they didn't face this pressure. 

    (ii) SBF was inspired and mentored by prominent EAs, and FTX was incubated by EA over the course of many years. So EAs had built relationships with FTX staff much deeper than what funds would have been able to establish over the course of a months-long diligence process. 
     
  2. The entire EA project is premised on the idea that it can do better at figuring things out than legacy institutions. 

  1. a. Sequoia led FTX's Series B round in Jul 2021 and had notably more time to notice any irregularities than grant recipients.

     b. I would expect the funds to have much better expertise in something like "evaluating the financial health of a company".

     Also, it seems you are somewhat shifting the goalposts: Zoe's paragraph starts with "On Halloween this past year, I was hanging out with a few EAs." It is reasonable to assume the reader will interpret it as hanging out with basically random/typical EAs, and the argument should hold for these people. Your argument would work better if she had been hanging out with "EAs working at FTX" or "EAs advising SBF", who could probably have done better than the funds at evaluating things like how the specific people involved work.

  2. The EA project is clearly not premised on the idea that it should, for example, "figure out stuff like stock prices better than legacy institutions". Quite the contrary - the claim is that while humanity actually invests a decent amount of competent effort in stocks, it comparatively neglects problems like poverty or x-risk.


     

It seems like we're talking past each other here, in part because as you note we're referring to different EA subpopulations: 

  1. Elite EAs who mentored SBF & incubated FTX
  2. Random/typical EAs who Cremer would hang out with at parties 
  3. EA grant recipients 

I don't really know who knew what when; most of my critical feeling is directed at folks in category (1). Out of everyone we've mentioned here (EA or not), they had the most exposure to and knowledge about (or at least opportunity to learn about) SBF & FTX's operations. 

I think we should expect elite EAs to have done better than Sequoia et al. at noticing red flags (e.g. the reports of SBF being shitty at Alameda in 2017; e.g. no ring-fence around money earmarked for the Future Fund) and acting on what they noticed. 

I think your comment would've been a lot stronger if you had left it at 1. Your second point seems a bit snarky. 

I don't think snark cuts against quality, and we come from a long lineage of it.

Which quality? I really liked the first part of your comment and even weakly upvoted it on both votes for that reason, but I feel like the second point has no substance. (Longtermist EA is about doing things that existing institutions are neglecting; not doing the work of existing institutions better.) 

I read Cremer as gesturing in these passages to the point Tyler Cowen made here (a): 

Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.

I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.  And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated).  When it comes to existential risk, I generally prefer to invest in talent and good institutions, rather than trying to fine-tune predictions about existential risk itself.

If EA is going to do some lesson-taking, I would not want this point to be neglected. 

I previously addressed this here.

Thanks. I think Cowen's point is a mix of your (a) & (b). 

I think this mixture is concerning and should prompt reflection about some foundational issues.

Some questions for CEA:

But this changed fast. In 2019, I was leaked a document circulating at the Centre for Effective Altruism, the central coordinating body of the EA movement. Some people in leadership positions were testing a new measure of value to apply to people: a metric called PELTIV, which stood for “Potential Expected Long-Term Instrumental Value.” It was to be used by CEA staff to score attendees of EA conferences, to generate a “database for tracking leads” and identify individuals who were likely to develop high “dedication” to EA — a list that was to be shared across CEA and the career consultancy 80,000 Hours. There were two separate tables, one to assess people who might donate money and one for people who might directly work for EA.

Individuals were to be assessed along dimensions such as “integrity” or “strategic judgment” and “acting on own direction,” but also on “being value-aligned,” “IQ,” and “conscientiousness.” Real names, people I knew, were listed as test cases, and attached to them was a dollar sign (with an exchange rate of 13 PELTIV points = 1,000 “pledge equivalents” = 3 million “aligned dollars”).

What I saw was clearly a draft. Under a table titled “crappy uncalibrated talent table,” someone had tried to assign relative scores to these dimensions. For example, a candidate with a normal IQ of 100 would be subtracted PELTIV points, because points could only be earned above an IQ of 120. Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.

The list showed just how much what it means to be “a good EA” has changed over the years. Early EAs were competing for status by counting the number of mosquito nets they had funded out of their own pocket; later EAs competed on the number of machine learning papers they co-authored at big AI labs.

When I confronted the instigator of PELTIV, I was told the measure was ultimately discarded. Upon my request for transparency and a public apology, he agreed the EA community should be informed about the experiment. They never were. Other metrics such as “highly engaged EA” appear to have taken its place.

 

  1. On CEA gathering information from EA conference attendees:
    1. Can someone from CEA clarify what information, if any, is currently being gathered on EA members,
    2. which of these, if any, is being used for assessing individuals,
    3. for what purpose (e.g. for EAGs, other CEA opportunities, "identifying individuals who were likely to develop high dedication to EA"), and
    4. which organizations these are shared with, if relevant?
  2. Given CEA had a leadership change in 2019, the same year the leaked document was reportedly circulating, can someone from CEA clarify the timing of this measure of value (i.e. was this under Larissa Hesketh-Rowe or Max Dalton as CEO)?
  3. Can someone from CEA also explain the reasoning behind these two claims in particular, and the extent to which they represent the views of CEA leadership at present?

For example, a candidate with a normal IQ of 100 would be subtracted PELTIV points, because points could only be earned above an IQ of 120.

 

Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.

I'll also note the apparent consistency with the previous case of CEA over-emphasizing longtermism and AI over global health and animal welfare in earlier versions of the handbook, despite claiming not to take an organizational stance on any cause areas specifically.

Relevant info: this is essentially a CRM (Customer Relationship Management) database, which is very commonly used by companies and non-profits. Your name is likely on hundreds of different CRM databases.

Let's imagine, for example, my interaction with Greenpeace. I signed a petition for Greenpeace when I was a teenager, which put my phone number, email, and name into a Greenpeace CRM. Greenpeace then might have some partners who match names and email addresses with age and earning potential. They categorise me as a student, with low earning potential but with potential to give later, so they flag me for a yearly call to try to get me to sign up to be a member. If I was flagged as being a particularly large earner, I imagine more research would have been done on me, and I would receive more intensive contact from Greenpeace.


CRMs are by design pretty "creepy". For example, if you use HubSpot for newsletters, it shows de-anonymised data on who viewed what, and for how long. I imagine CRMs that have access to browser cookies are 100x more "creepy" than this.
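To illustrate the mechanics, here is a minimal sketch of the kind of lead-scoring logic a CRM might run. The field names, thresholds, and follow-up rules are all hypothetical; they're not taken from Greenpeace's, HubSpot's, or CEA's actual systems.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    signed_petition: bool
    estimated_income: int   # e.g. inferred by matching against third-party data
    is_student: bool

def score(lead: Lead) -> int:
    """Toy scoring rule: higher scores trigger more intensive follow-up."""
    points = 0
    if lead.signed_petition:
        points += 10          # showed some engagement
    if lead.estimated_income > 100_000:
        points += 30          # prioritise likely large donors
    if lead.is_student:
        points += 5           # low value now, potential to give later
    return points

def follow_up(lead: Lead) -> str:
    s = score(lead)
    if s >= 30:
        return "assign to a fundraiser for further research and personal contact"
    if s >= 10:
        return "schedule a yearly membership call"
    return "keep on the newsletter list only"

# The teenage petition-signer from the example above:
teen = Lead("petition signer", signed_petition=True,
            estimated_income=5_000, is_student=True)
print(follow_up(teen))  # -> "schedule a yearly membership call"
```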

I'm not well-versed on how CRMs work, so this is useful information, thanks. Though my guess is that CRMs probably don't typically include assessments of IQ?

I am still interested in the answers to the above questions though, and potentially other follow-up Qs, like how CEA staff were planning on actually measuring EAG participants or members on these axes, the justifications behind the inputs in the draft, and what the proposed ideas may reflect in terms of the values and views held by CEA leadership.

Why is including an assessment of IQ when tracking potential future hires morally bad? Or do you think it's just a useless thing to estimate?

I'm not claiming measuring IQ is morally bad (I don't think I've made any moral claims in this comment thread?), but based just on "It was to be used by CEA staff to score attendees of EA conferences", I think there is a range of possible executions, from ones that would make me think "this is a ridiculous thing to even consider trying, how on earth is this going to be reliable" to ones that "might be plausibly net positive", and it's hard to know what is actually going on just by reading the Vox article.

Would you be happy if a CEA staff member had a quick chat with you at EAG, wrote down "IQ 100" based on that conversation on an excel sheet, and this cost you opportunities in the EA space as a result?

Would you be happy if a CEA staff member had a quick chat with you at EAG, wrote down "IQ 100" based on that conversation on an excel sheet, and this cost you opportunities in the EA space as a result?

Yes. I'm in EA to give money/opportunities, not to get money/opportunities.

Edit: I do think some people (in and outside of EA) overvalue quick chats when hiring, and I'm happy that in EA everyone uses extensive work trials instead of those.

I'm glad that this will not affect you in this case, but folks interested in the EA space because it provides an avenue for a more impactful career may disagree, and for a movement that is at least partly about using evidence and reason to create more positive impact, I'd be surprised if people genuinely believed that the operationalization described above is a good reflection of those ideals.

Yeah I think measuring IQ is a stupid idea but suppose you were to do it anyway -- surely you'd want to measure IQ through an actual test and not just through guessing, right?
