Brian Wang

Panoplia Laboratories is developing broad-spectrum antivirals to fight future pandemics, and we are looking to hire one researcher to join our team.

Job description: Responsibilities would include:

  • Designing and cloning new antiviral candidates
  • Designing and executing in vitro assays to characterize antiviral candidates
  • Assisting with the design and execution of in vivo studies for the characterization of antiviral candidates
  • Analyzing data from in vitro and in vivo studies
  • Actively communicating results with the rest of the team

As an early member of our team, you would also have the ability to learn new skills, receive technical mentorship, and play a central role in our path to making an impact.

Salary/benefits: $75,000/year, medical/dental/vision insurance, unlimited paid time off

Location: Cambridge, MA

Suggested skills/attributes: We encourage everyone who is interested to apply! But some of the skills/attributes we are looking for include:

  • Mammalian cell culture experience
  • Experience working with viruses in vitro
  • Familiarity with general molecular biology techniques: flow cytometry, PCR, gel electrophoresis, qPCR, ELISAs
  • Experience working with mice
  • Working well with a team
  • Flexibility in taking on different tasks
  • Enthusiasm about working towards a shared mission

The application process: Please fill out the Google Form here – applications are reviewed on a rolling basis. We will follow up with individual applicants by e-mail to conduct a short work test and schedule a 1-hour interview with Executive Director Brian Wang. Finally, we’ll invite you for a tour of the lab/office and for lunch with the rest of the team (in person or virtually).

More about Panoplia Laboratories: We are a nonprofit organization conducting research on broad-spectrum antivirals until they are sufficiently technically de-risked to be further developed through the clinic by biotech or pharmaceutical companies. Our current research program focuses on the development of inhalable, DNA-encoded broad-spectrum antiviral proteins we call “virus-targeting chimeras” (VIRTACs), inspired by the innate immune system. You can find more information about us in a public funding proposal we put out last year here.

If you have any questions, please reach out to info@panoplialabs.org!

Besides the 3-month-duration broadly effective antiviral prophylactics that Josh mentioned, I think that daily broadly effective antiviral prophylactics could also be promising if they could eventually become widespread consumer products. However, the science is still pretty nascent – at least for prophylaxis, I don't believe there is much human data at all, and nothing I've seen reaches a true 24-hour duration of efficacy (which I'd see as a major barrier to consumer uptake).

Here are some links:

"PCANS", and its commercial product Profi nasal spray

INNA-051 ferret study, Phase 2 influenza challenge study (press release and more negative take)

"SHIELD"

I think transmission reduction hasn't been tested specifically, but anything that's localized to the upper respiratory tract and is prophylactic should theoretically reduce transmission as well.
 

I think that if the broadly effective antiviral prophylactic were truly effective on an individual level, then there could be a reasonable market for it. But the market value would be based on its efficacy at protecting individuals, not on transmission reduction.

Which I think is fine – in the absence of specific incentives to make drugs that reduce transmission, a strategy that brings transmission reduction "along for the ride" on otherwise already-valuable drugs makes sense to me.

How does this change affect the eligibility of near-term applicants to LTFF/EAIF (e.g., those who apply in the next 6 months) who have received OpenPhil funds in the past or may receive funds from OpenPhil in the future? Currently my understanding is that these applicants are ineligible for LTFF/EAIF by default – does this change if EA Funds and Open Philanthropy are more independent?

Estimates of the mortality rate vary, but one media source says, "While the single figures of deaths in early January seemed reassuring, the death toll has now climbed to above 3 percent." This would put it roughly on par with the mortality rate of the 1918 flu pandemic.

It should be noted that the oft-cited case-fatality ratio of 2.5% for the 1918 flu might be inaccurate, and the true CFR could be closer to 10%: https://rybicki.blog/2018/04/11/1918-influenza-pandemic-case-fatality-rate/?fbclid=IwAR3SYYuiERormJxeFZ5Mx2X_00QRP9xkdBktfmzJmc8KR-iqpbK8tGlNqtQ

EDIT: Also see this twitter thread: https://twitter.com/ferrisjabr/status/1232052631826100224
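For reference on what these percentages measure (a quick note of my own, not a claim about either source's methodology): the case-fatality ratio is

$$\mathrm{CFR} = \frac{\text{deaths attributed to the disease}}{\text{identified cases of the disease}},$$

so undercounting mild or undetected cases pushes the estimate up, while the lag between case reports and deaths pushes a naive estimate down early in an outbreak – part of why figures like 2.5%, 3%, and 10% can all be in play for the "same" quantity.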

It seems that there are two factors here leading to a loss in altruistic belief:

1. Your realization that others are more selfish than you expected, leading you to feel a loss of support as you realize that your beliefs are less common than you thought.

2. Your uncertainty about the logical soundness of altruistic beliefs.

Regarding the first, realize that you're not alone, that there are thousands of us around the world also engaged in the project of effective altruism – including potentially in your city. I would investigate to see if there are local effective altruism meetups in your area, or a university group if you are already at university. You could even start one if there isn't one already. Getting to know other effective altruists on a personal level is a great way to maintain your desire to help others.

Regarding the second, what are the actual reasons for people answering "100 strangers" to your question? I suspect that the rationale isn't on strong ground – that it is mostly born of a survival instinct cultivated in us by evolution. Of course, for evolutionary reasons, we care more about ourselves than we care about others, because those that cared too much about others at the expense of themselves died out. But evolution is blind to morality; all it cares about is reproductive fitness. And we care about so, so much more. Everything that gives our lives value – the laughter, love, joy, etc. – is not optimized for by evolution, so why trust the answer "100 strangers" if it is just evolution talking?

I believe that others' lives have an intrinsic value on par with my own life, since others are just as capable of all the experiences that give our lives value. If I experience a moment of joy, vs. if Alice-on-the-other-side-of-the-world-whom-I've-never-met experiences a moment of joy, what's the difference from "the point of view of the universe"? A moment of joy is a moment of joy, and it's valuable in and of itself, regardless of who experiences it.

Finally, if I may make a comment on your career plan – I would consider applying for career coaching from 80,000 Hours. Spending 10 years doing something you don't enjoy sounds like a great recipe for burnout. If you truly don't think that you'll be happy getting a machine learning PhD, there might be better options for you that would still allow you to have a huge impact on the world.

I think the central "drawing balls from an urn" metaphor implies a more deterministic situation than the one we are actually in – that is, it implies that if technological progress continues, if we keep drawing balls from the urn, then at some point we will draw a black ball, and so civilizational devastation is basically inevitable. (Note that Nick Bostrom isn't actually saying this, but it's an easy conclusion to draw from the simplified metaphor.) I'm worried that taking this metaphor at face value will turn people towards broadly restricting scientific development more than is warranted.

I offer a modification of the metaphor that relates to differential technological development. (In the middle of the paper, Bostrom already proposes a few modifications of the metaphor based on differential technological development, but not the following one.) Whenever we draw a ball out of the urn, it affects the color of the other balls remaining in the urn. Importantly, some of the white balls we draw out of the urn (e.g., defensive technologies) lighten the color of any grey/black balls left in the urn. A concrete example of this would be the cumulative advances in medicine over the past century, which have lowered the risk of a human-caused global pandemic. Therefore, continuing to draw balls out of the urn doesn't inevitably lead to civilizational disaster – as long as we are sufficiently discriminating in favor of those white balls which have a risk-lowering effect.
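To make this intuition concrete, here's a toy simulation of the modified urn (every number in it – the darkness scores, the fraction of defensive technologies, the size of the lightening effect – is a made-up illustrative assumption on my part, not anything from Bostrom's paper). A draw causes catastrophe if the ball's "darkness" exceeds a threshold at the moment it's drawn, and drawing a defensive white ball lightens everything still in the urn:

```python
import random

# Toy model of the modified urn: balls have a "darkness" in [0, 1]; drawing a
# ball whose darkness exceeds CATASTROPHE_THRESHOLD is a catastrophe; drawing a
# defensive (white) ball lightens every ball still left in the urn.
# All parameters are made-up illustrative assumptions.

N_BALLS = 200             # technologies left to discover
DEFENSIVE_FRACTION = 0.2  # fraction of balls that are defensive technologies
LIGHTEN_AMOUNT = 0.02     # how much one defensive draw lightens the rest
CATASTROPHE_THRESHOLD = 0.95
N_TRIALS = 2_000

def run_trial(lightening_on: bool) -> bool:
    """Return True if a catastrophe occurs before the urn is empty."""
    urn = [
        {"darkness": random.random(), "defensive": random.random() < DEFENSIVE_FRACTION}
        for _ in range(N_BALLS)
    ]
    random.shuffle(urn)
    while urn:
        ball = urn.pop()
        if ball["darkness"] > CATASTROPHE_THRESHOLD:
            return True  # we drew a black ball
        if lightening_on and ball["defensive"]:
            for other in urn:  # defensive tech lightens what remains in the urn
                other["darkness"] = max(0.0, other["darkness"] - LIGHTEN_AMOUNT)
    return False

for lightening_on in (False, True):
    p = sum(run_trial(lightening_on) for _ in range(N_TRIALS)) / N_TRIALS
    print(f"lightening={lightening_on}: P(catastrophe) ~ {p:.2f}")
```

With the lightening effect switched off, drawing long enough from an urn containing any sufficiently dark balls makes catastrophe essentially certain; with it switched on, the eventual probability depends on how common and how strong the defensive draws are – which is exactly the point of the modification.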

Interesting idea. This may be worth trying to develop more fully?

Yeah. I'll have to think about it more.

I'm still coming at this from a lens of "actionable advice for people not in EA". It might be that the person doesn't know many other trusted individuals – what should the advice be then?

Yeah, for people outside EA I think structures could be set up such that reaching consensus (or at least a majority vote) becomes a standard policy or an established norm. E.g., if a journal is considering a manuscript with potential info hazards, then perhaps it should be standard policy for this manuscript to be referred to some sort of special group consisting of journal editors from a number of different journals to deliberate. I don't think people need to be taught the mathematical modeling behind the unilateralist's curse for these kinds of policies to be set up, as I think people have an intuitive notion of "it only takes one person/group with bad judgment to fuck up the world; decisions this important really need to be discussed in a larger group."

One important distinction is that people who are facing info hazards will be in very different situations when they are within EA vs. when they are out of EA. For people within EA, I think it is much more likely to be the case that a random individual has an idea that they'd like to share in a blog post or something, which may have info hazard-y content. In these situations the advice "talk to a few trusted individuals first" seems to be appropriate.

For people outside of EA, I think those who are in possession of info hazard-y content are much more likely to be embedded in some sort of larger institution (e.g., a research scientist or a journal editor looking to publish something), where perhaps the best leverage is setting up certain policies, rather than trying to teach everyone the unilateralist's curse.

As I understand it, you shouldn't wait for consensus, or else you have the unilateralist's curse in reverse. Someone pessimistic about an intervention can block the deployment of an intervention needed to avoid disaster.

You're right, strict consensus is the wrong prescription. A vote is probably better. I wonder if there's mathematical modeling you could do to determine what fraction of votes is optimal, in order to minimize the harms of both the standard unilateralist's curse and the curse in reverse. Is it a majority vote? A two-thirds vote? I suspect this will depend on what the "true sign" of releasing the potentially dangerous info is likely to be; the more likely it is to be negative, the higher the bar you should be expected to clear before releasing.
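As a very rough illustration of what that modeling could look like, here's a toy Monte Carlo sketch (all of the assumptions – the group size, independent Gaussian noise on each person's estimate, payoff realized only if released – are mine for illustration, not an established model):

```python
import random

# Toy model: N_ACTORS people each observe the true value of releasing some
# information plus independent Gaussian noise, and the information is released
# only if at least k of them judge the release to be net positive.
# All parameters are illustrative assumptions.

N_ACTORS = 5
NOISE_SD = 1.0
N_TRIALS = 20_000

def expected_payoff(true_value: float, k: int) -> float:
    """Average realized value when release requires at least k of N_ACTORS approvals."""
    total = 0.0
    for _ in range(N_TRIALS):
        approvals = sum(
            (true_value + random.gauss(0, NOISE_SD)) > 0 for _ in range(N_ACTORS)
        )
        if approvals >= k:
            total += true_value  # the value (or harm) is realized only on release
    return total / N_TRIALS

# Compare approval thresholds for a mildly harmful and a mildly beneficial release.
for true_value in (-0.5, 0.5):
    results = ", ".join(
        f"k={k}: {expected_payoff(true_value, k):+.3f}" for k in range(1, N_ACTORS + 1)
    )
    print(f"true value {true_value:+.1f} -> {results}")
```

In this kind of setup, a higher threshold protects against the standard curse when the true value is negative but throws away value when it's positive, so the optimal fraction of votes does shift with how likely the release is to be net negative – consistent with the intuition above.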

If there is a single person with the knowledge of how to create safe, efficient nuclear fusion, they cannot expect other people to release it on their behalf.

Ah right. I suppose the unilateralist's curse is only a problem insofar as there are a number of other actors also capable of releasing the information; if you are a single actor, then the curse doesn't really apply. One wrinkle might be considering the unilateralist's curse with regard to different actors through time (i.e., erring on the side of caution with the expectation that other actors in the future will gain access to the information and might release it), though coordination in this case might be more challenging.

What the researcher can do is try to build consensus and lobby for a collective decision-making body on the internal climate heating (ICH) problem, planning to release the information only when they are satisfied that there will be a solution in time to fix the problem when it occurs.

Thanks, this concrete example definitely helps.

I think I am also objecting to the expected payoff being thought of as a fixed quantity. You can either learn more about the world to alter your knowledge of the payoff, or try to introduce things/institutions into the world to alter the expected payoff. Building useful institutions may rely on releasing some knowledge, which is where things become more hairy.

This makes sense. "Release because the expected benefit is above the expected risk" or "don't release because the reverse is true" is a bit of a false dichotomy, and you're right that we should be thinking more about options that could maximize the benefit while minimizing the risk when faced with info hazards.

Also, as the unilateralist's curse suggests, discussing with other people so that they can undertake the information release sometimes increases the expectation of a bad outcome. How should consensus be reached in those situations?

This can certainly be a problem, and is a reason not to go too public when discussing it. It's probably best to discuss privately with a number of other trusted individuals first, who also understand the unilateralist's curse, and who ideally don't have the means/authority to release the information themselves (e.g., if you have a written-up blog post you're thinking of posting that might contain info hazards, then maybe you could discuss it in vague terms with other individuals first, without sharing the entire post with them?).
