
Tessa

2764 karma · Joined Jan 2017 · Working (6-15 years) · Berkeley, CA, USA
tessa.fyi

Bio

Let's make nice things with biology. Working on biosecurity at iGEM. Also into lab automation, event production, donating to global health. From Toronto, lived in Paris, currently in the SF Bay. Website: tessa.fyi

Comments: 195
Topic contributions: 30

I was part of a youth delegation to the BWC (Biological Weapons Convention) in 2017, and I think the greatest benefit I got was that it raised my aspirations. I'm not sure I'd previously conceived of myself as the sort of person who could speak at the UN. I also overheard an expert bow out of dinner early because they had to go finish their slides for the next day, and realized there isn't some upper echelon of governance and society where everyone is hypercompetent and on top of things; even at the friggin' United Nations, people are making their slides the night before.

I don't know how much of an effect this had on my decision to start a biosecurity meetup the next year and eventually transition to full-time biosecurity work, but I think it played a role. There are other benefits too: Schelling-point NGO networking, collecting lived-experience stories that make your understanding of diplomacy more vivid, and creating a pressure of prior consistency that increases the chance that a delegate will continue to work on biosecurity (YMMV on whether that last item is a benefit).

Thanks for this comment, and thanks to Nadia for writing the post, I'm really happy to see it up on the forum!

Chris and I wrote the guidance for reading groups and early entrants to the field, partly because we felt that new folks are the most likely to feel stuck/intimidated/forced into deference, and partly because that's where we most often found ourselves repeating the same advice over and over.

I think there are people whose opinions I respect who would disagree with the guidance in a few ways:

  • We recommend a few kinds of interpersonal interventions, and some people think this is a poor way to manage information hazards and that the community should instead aim for much more explicit/regimented policies
  • We recommend quite a bit of caution about information hazards, which more conservative people might consider an attention hazard in and of itself (drawing attention to the fact that information that would enable harm could be generated)
  • We recommend quite a bit of caution about information hazards, which less conservative people might consider too encouraging of deference or secrecy (e.g. people who have run into more trouble while doing advocacy or recruiting/fostering talent, people who have different models of infohazard dynamics, people who are worried that a lack of transparency worsens the community's prioritization)
  • We don't cover a lot of common scenarios, as Nadia noted in her comment

(Side note: it's always both flattering and confusing to be considered a "senior member" of this community. I suppose it's true, because EA is very young, but I have many collaborators and colleagues who have decade(s) of experience working full-time on biorisk reduction, which I most certainly do not.)

This is more a response to "it is easy to build an intuitive case for biohazards not being very important or an existential risk" than to your proposals...

My feeling is that it is fairly difficult to make the case that biological hazards present an existential (as opposed to catastrophic) risk, and that while this matters for some EA types selecting their career paths, it doesn't matter as much on the grand scale of advocacy. The set of philosophical assumptions under which "not an existential risk" can be rounded to "not very important" seems common in the EA community, but extremely uncommon outside of it.

My best guess is that any existential biorisk scenarios probably route through civilisational collapse, and that those large-scale risks are most likely a result of deliberate misuse, rather than accidents. This seems importantly different from AI risk (though I do think you might run into trouble with reckless or careless actors in bio as well).

I think a focus on global catastrophic biological risks already puts one's focus in a pretty different (and fairly neglected) place from many people working on reducing pandemic risks, and that the benefit of trying to get into the details of whether a specific threat is existential or catastrophic doesn’t really outweigh the costs of potentially generating infohazards.

My guess is that (2) will be fairly hard to achieve, because the sorts of threat models that are detailed enough to be credible to people doing hardcore existential-risk-motivated cause prioritization have a dubious cost-benefit ratio from an infohazard perspective.

Happy to pitch in with a few stories of rejection!

  • 2010: I applied to MIT and Princeton for undergraduate studies and wasn't accepted to either. Not trying harder to get into those schools was a major regret of mine for about 5 years (I barely studied for the SATs, in part because I was the only person I knew who took them... it's uncommon for Canadians to attend university in the States). I later ended up working on teams with people who had gone to fancy US schools, such that I no longer believe this had a clearly negative impact on my trajectory.
  • 2018: Rejected for LTFF funding for the biosecurity conference that eventually became Catalyst. We re-applied in a subsequent round and were funded.
  • 2018: I applied to be a Research Analyst at Open Phil in their big 2018 recruitment round, and got through two rounds of work tests before being rejected after an interview. The interview really didn't go well; I felt like a total idiot. This was maybe the roughest rejection; I felt like I had wasted basically all of my non-work time for a month on work tests, at a time when I was already feeling pretty bad about how effectively I was spending my time.
  • 2018: Rejected from the SynBioBeta conference fellowship run by Johns Hopkins, which at the time felt like it could have been an entry point into a biosecurity career transition. I definitely had some angst about whether such a transition was even possible.
  • 2019: I was rejected from a really cool engineering role at Culture Biosciences after a phone screen interview. I got so distressed after this ("I'm not technical enough for a real hardware-y engineering job any more! augh!!") that I did some electronics projects I really didn't have time for, largely out of angst. They later reached out to me again when they had a role closer to my (more software-specialized) skillset, and I completed a full round of interviews and received an offer, though I ultimately decided not to leave my job in order to have more time to focus on my part-time biosecurity projects.

These were all pretty painful for me at the time... and I'm realizing I've since come up with stories where the rejections were okay, or part of a fine trajectory. I guess one message here is "just because you were rejected once doesn't mean you will be if you apply again"?

"Maybe there's a huge illusion in EA of 'someone else has probably worked out these big assumptions we are making'. This goes all the way up to the person at Open Phil thinking 'Holden has probably worked these out' but actually no one has."

I just wanted to highlight this in particular; I have heard people at Open Phil say things along the lines of "... but we could be completely wrong about this!" about large strategic questions. A few examples related to my work:

  • Is it net positive to have a dedicated community of EAs working on reducing GCBRs, or would it be better for people to be more fully integrated into the broader biosecurity field?
  • If we want to have this community, should we try to increase its size? How quickly?
  • Is it good to emphasize concerns about dual-use and information hazards when people are getting started in biosecurity, or does that end up either stymieing them (or worse, inspiring them to produce more harmful ideas)?

These are big questions, and I have spent dozens (though not hundreds) of hours thinking about them... which has led to me feeling like I have "working hypotheses" in response to each. A working hypothesis is not a robust, confident answer based on well-worked-out assumptions. I could be wrong, but I suspect this is also true in many other areas of community building and cause prioritisation, even "all the way up".

I recall meeting Karolina M. Sulich, the VP of Osmocosm, at EAGxBerlin last year, and thinking some of her machine olfaction x biosecurity ideas were really cool! I'd be stoked for more people to look into this.

A few more you might share:

This is great! I think project-based learning is a far more effective way to learn about a cause area than going through a reading list (I know you've written about this before). Cold Takes has quite a lot of writing about how just reading stuff is probably not the best way to form a view and robustly retain things.

It's also super generous of you to offer to review people's fit-test projects :)

Another poem about loss that moves me, this one specifically about grieving a dear friend:

It's what others do, not us, die, even the closest
on a vainglorious, glorious morning, as the song goes,
the yellow or golden palms glorious and all the rest
a sparkling splendour, die. They're practising calypsos,
they're putting up and pulling down tents, vendors are slicing
the heads of coconuts around the Savannah, men
are leaning on, then leaping into pirogues, a moon will be rising
tonight in the same place over Morne Coco, then
the full grief will hit me and my heart will toss
like a horse's head or a threshing bamboo grove
that even you could be part of the increasing loss
that is the daily dial of the revolving shade. Love
lies underneath it all though, the more surprising
the death, the deeper the love, the tougher the life.
The pain is over, feathers close your eyelids, Oliver.
What a happy friend and what a fine wife!
Your death is like our friendship beginning over.

for Oliver Jackman (Derek Walcott)
