
jeberts

Comms director @ 1Day Sooner
510 karma · Joined · Working (0-5 years) · Washington, DC, USA

Bio

My name is Jake. I live in DC. I used to work in foreign affairs, primarily China-watching, and then as an investigative researcher (think due diligence, political mudslinging, corporate accountability, etc.). Then, I got dysentery as part of a human challenge trial. I tweeted about it like a maniac, went viral, and now I'm here. Life is funny.

Feel free to reach out via Twitter DM or LinkedIn, or email me at jake dot eberts @ 1daysooner dot org.

Unless it's very obviously about 1Day Sooner stuff, assume what I post here is my own personal opinion.

Comments (16)

Topic contributions (2)

Sorry, that was ambiguous on my part. There's a delineation between research ethics issues (how trials are run, etc.) and clinical ethics (medical aid in dying, selling one's organs, accessing unapproved treatments, etc.). My work focuses on the former, not the latter, so I can't speak much to that. I meant "conservative" in the sense of hesitance to adjust existing norms or systems in research ethics oversight and, for example, a very strong default orientation towards any measures that reduce risk (or seem to reduce risk) for research participants. 

Yes, the studies should not have used disabled children at all, because disabled children cannot meaningfully provide consent and were not absolutely necessary to achieve the studies' aims. They were simply the easiest targets: they could not understand what was being done to them and their parents were coercible through misleading information and promises of better care, which should have been provided regardless. (More generally, I do not believe proxy consent from guardians is acceptable for any research that involves deliberate harm and no prospect of net benefit to children.) 

The conditions of the facility are also materially relevant. If it were true that children inevitably would contract hepatitis, then a human challenge would not be truly necessary. More importantly, though, I am comfortable calling Krugman's behavior evil because he spent 15 years running experiments at an institution that was managed with heinously little regard for its residents and evidently did not feel compelled to raise the issue with the public or authorities. Rather, he saw the immense suffering and neglect as perhaps unfortunate, but ultimately convenient leverage to acquire test subjects.

I strongly agree with this comment. I think it's important to have a theory of mind for why people think like this. As a non-bioethicist, my impression is that a lot of it has to do with the history of the field of bioethics itself, which emerged in response to the horrid abuses in medical research. One major overarching goal imbued in bioethics training, research, and writing is the prevention of medical abuse, which leads to small-c conservative views that tend to favor, wherever possible, protection of human subjects/patients and an aversion to calculations that sound like they might single out the groups that historically bore the brunt of such abuse.

Like, we've all heard of the Tuskegee Syphilis Experiment, but a lot more really awful things were done in the last century, with lasting effects to this day. At 1Day, we're working to bring about safe, efficient human challenge studies to realize a hepatitis C vaccine. We've made great progress, and it looks like they will begin within the next year! But the last time people did viral hepatitis human challenge studies, they did them on mentally disabled children! Just heinously evil. So I will not be surprised if some on the ethics boards are quite skeptical at first when they review the proposed studies! (Note: this doesn't mean the current IRB system is optimal, or even anywhere near so; I view it sort of like zoning and building codes: good in theory — I don't want toxic waste dumps built near elementary schools — but the devil is in the details and in how protections are operationalized.)

All of which is to say: like others here, I very strongly disagree with many prevalent views in bioethics. But as I've interacted more and more with this field as an outsider, my opinion has evolved from "wow, bioethics/research ethics is populated exclusively with morons" to "this is mostly a bunch of reasonable people whose frames of reference are very different." The latter view lets me engage more productively to try to change some of the more problematic/wrongheaded views when they come up in my work, and it has let me learn a lot, too!

As someone who is not a bioethicist but interacts with many through work (though certainly not as many as Leah), I think this position, for many, likely derives from a general opposition to treating people differently based on their intrinsic characteristics. In other words, if I know it's bad to be ageist, I might interpret the thought experiment that nudges someone to save a younger life as ageist (I've heard this argument from one person in bioethics before, but, y'know, n=1) and reject the premise of the question. So for that subset of bioethicists, it may not be a serious argument in favor of the proposition but rather a strong preference against making moral judgments that touch on people's intrinsic characteristics.

Chiming in to note a tangentially related experience that somewhat lowered my opinion of IHME/GBD, though I'm not a health economist or anything. I interacted with several analysts after requesting information related to IHME's estimates of the global hepatitis C burden (which differed substantially from the WHO's). After a meeting and some emails promising to follow up, we were ghosted. I have heard from one other organization that they've had a really hard time getting similar information out of IHME as well. This may be more of an organizational/operational problem than a methodological one, but it wasn't very confidence-inspiring.

Whoops, link fixed (here it is again). That article is part of a supplement dedicated to HCV challenge/CHIM.

Speaking in my personal capacity, I agree — I'd love for insurance/that sort of compensation to be the norm. That does not happen enough in medical research, challenge or otherwise. 

I can see why an insurer would be very wary. Establishing causation for cancer is hard in general. Even if someone were screened and in perfect liver health during the CHIM, that doesn't mean they won't later adopt common habits (e.g., smoking or excessive drinking) that are risk factors for liver cancer.

Relatedly, another article in Clinical Infectious Diseases reviewed liver cancer risks due to CHIM, concluding that "[a]lthough it is difficult to precisely estimate HCC risk from an HCV CHIM, the data suggest the risk to be very low or negligible." This was based on analysis of three separate cohorts/datasets of people who had previously been infected with hepatitis C in other contexts. Still, the risk cannot be discounted entirely, and there are risks other than liver cancer that our FAQ document discusses, too.

Perhaps a workaround could be to establish some sort of trust that pays out to any former CHIM participant who develops liver cancer not obviously traceable to something like alcohol use disorder, with the fund liquidating its assets after a certain number of decades. That would be very novel, expensive, and probably legally complicated, and I don't think it's been raised before.

Thanks for reading!

The donation-equivalent aspect is pretty interesting. A study probably would not allow a participant to decline payment, so in practice it might just be however much of the study compensation one chooses to donate to effective causes (minus taxes; trial income is usually treated as taxable income, which is probably bad policy). I might be misunderstanding your point, though.

I'll reiterate (this probably should've been worded more clearly in the post): one of the arguments we make here is that, assuming all participants who make it into the study are about equally useful, we think EAs are also more likely to be effective as pre-participants. This is because the study is still under consideration: there are decisions about the study's design that could make it go faster, and informed advocacy from earnest pre-participants could be very persuasive to regulators and ethicists who might otherwise reject certain design choices on paternalistic grounds. The community and shared worldview of EA make us think EAs will, on average, be more engaged when it comes to voicing their views on study design.

This interactive model app, based on the paper we mention in footnote 4, lets you tinker with a bunch of variables related to challenge model development and vaccine deployment. Based on that, and after a conversation with the lead author, we get about 200 years of life saved for every day sooner the model is developed. (The app isn't that granular/to-the-day yet, but it is supposed to be updated soon.) So pushing for study decisions that condense things even by a month or two could be huge.

Part of our work has included pushing for higher compensation in general, both because we believe it can make recruitment easier (and faster) and because we think pay should be more commensurate with the social value generated. I and a few other former human challenge volunteers wrote this paper, published in Clinical Infectious Diseases, calling for US$20,000 in compensation as a baseline. That's far higher than the norm for challenge studies; the highest I've seen is under $8,000.

Re: why EAs specifically, we delve into that a bit in footnote 9. In short, the study is still at a stage where it can be modified to substantially increase the potential QALYs/DALYs saved. The voices of prospective participants could be very, very persuasive to researchers, regulators, and ethicists when considering study design. Non-EAs are certainly capable of advocating for and supporting changes as well, but we think EAs are much more likely to a) grasp the case for certain changes and b) be willing to advocate for them.

No one should feel obligated to be in a study as an EA (or as a "normie," though I dislike that dichotomy). There are certainly people whose time is better spent elsewhere, EA or not. But not everyone on the forum works for an EA organization, and there are certainly people who feel they have spare capacity and time they'd like to commit to this sort of thing.
