Jc_Mourrat

220 karma, joined Sep 2018

Comments (27)

Concerning the scepticism about whether the AstraZeneca vaccine works on the over-65s, I think it's useful to keep in mind that the purpose of a clinical trial is not only to test for efficacy, but also to test for safety. Maybe some experts were concerned that older people would have more difficulty dealing with side effects, but chose to keep quiet about these possibly legitimate concerns and to only talk openly about efficacy questions. If the world were utilitarian, then I think this would probably not be a very strong point. But as it stands, I think that a handful of deaths caused by a vaccine would cause a major backlash. (And, if you ask me, I would prefer a transparent communication strategy, but I'm not surprised if it turns out that they prefer something else.)

I think it would also be worth keeping in mind how hard it is to make progress on each front. Given that there seems to be widespread non-therapeutic use of antibiotics in farmed animals, and that (I believe) most people have accepted that antibiotics should be used sparingly, I would be surprised if there were no "low-hanging fruit" there. This is not meant as a complete solution, but rather as an attempt to identify our "next best move". I would also guess that cheap on-farm diagnostic tools are now within reach, if they don't already exist.

Separately from this, I admit I am confused about the nature of the question regarding the importance of dealing with the over-use of antibiotics in farmed animals. My understanding is that we know that inter-species and horizontal gene transfers can occur, so is the question about knowing how much time they take? I just don't have a clear model of how I'm supposed to think about it. Should I think that we are in a sort of "race against bacteria" to innovate faster than they evolve? Why would a delay in transfer to humans be a crucial consideration? Is it that antibiotic innovation is mostly focused on humans? Is there such a thing as a human-tailored antibiotic vs. a farmed-animal antibiotic? I suppose not? I cannot wrap my head around the idea that this delay in transmission to humans is important. So I guess I'm not thinking about it right?

[Added later:] Maybe the point is that if some very resistant strain emerges in a farmed animal species, we have time to develop a counter-measure before it jumps to humans?

Thanks a lot, this is super helpful! I particularly appreciated that you took the time to explain the internal workings of a typical think tank, this was not at all clear to me.

The impression I get from an (admittedly relatively casual) look is that you are saying something along the following lines:

1) there is a big mystery concerning the fact that the rate of growth has been accelerating,

2) you introduce a novel tool to explain this fact, namely stochastic calculus,

3) using this tool, you arrive at the conclusion that infinite explosion will occur before 2047 with 50% probability.

For starters, as you point out if we read you sufficiently carefully, there is no big mystery in the fact that the rate of growth of humanity has been super-exponential. This can be explained simply by assuming that innovation is an important component of the growth rate, and that the amount of innovation effort is itself not constant, but grows with the size of the population, perhaps in proportion to it. So if you decide that this is your model of the world, and that the growth rate is proportional to innovation effort, then you write down some simple math and conclude that infinite explosion will occur at some point in the near future. This has been pointed out numerous times. For instance, as you note (if we read you carefully), Michael Kremer (1993) checked that, going back as far as a million years, the idea that the population growth rate is roughly proportional to (some positive power of) the population size gives a good fit with the data up to maybe a couple of centuries ago.

And then we know that the model stops working, because for some reason, at some level of income, people stop transforming economic advancement into having more children. I don't think we should ponder for long the fact that a model that fit past data well eventually stopped working. This seems to me to be the natural fate of models of the early growth of anything. So instead of speculating about this, Kremer adjusts his model to make it more realistic.
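To spell out the "simple math" here, a minimal sketch in my own notation (not Kremer's or the paper's): if the growth rate scales with a positive power of the population size, a finite-time blowup is forced.

```latex
% Minimal sketch, my notation: growth rate proportional to a positive
% power of the population size P,
\[
  \frac{\mathrm{d}P}{\mathrm{d}t} = c\,P^{1+\varepsilon},
  \qquad c > 0,\ \varepsilon > 0.
\]
% Separating variables and integrating yields
\[
  P(t) = P(0)\,\bigl(1 - \varepsilon\,c\,P(0)^{\varepsilon}\,t\bigr)^{-1/\varepsilon},
\]
% which diverges ("infinite explosion") at the finite time
\[
  t_* = \frac{1}{\varepsilon\,c\,P(0)^{\varepsilon}}.
\]
```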

It is of course legitimate to argue that human progress over recent times is not best captured by population size, and that gross world product may be a better measure. For this measure, we have less direct evidence that a slowdown of the "naive model" is coming (by "naive model", I mean the model in which you just fit growth with a power law, without any further adjustment). Although I do find works such as this or this quite convincing that future trends will be slower than what the "naive" model would say.
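For concreteness, here is a sketch of what fitting the naive model can look like in practice. The data points below are invented purely for illustration (they are not real gross-world-product figures), and the functional form is the blowup solution from the sketch above:

```python
# Sketch: fit the naive model's blowup solution, P(t) = A * (t_star - t)**(-b),
# to illustrative data. The numbers below are made up, NOT real GWP figures.
import numpy as np
from scipy.optimize import curve_fit

def naive_model(t, A, t_star, b):
    # Solution of dP/dt = c * P**(1+eps); diverges as t approaches t_star.
    return A * (t_star - t) ** (-b)

years = np.array([1800.0, 1850.0, 1900.0, 1950.0, 1980.0, 2000.0])
output = np.array([0.8, 1.1, 2.0, 5.0, 18.0, 45.0])  # hypothetical values

# Bounds keep t_star past the last data point so the power stays defined.
params, _ = curve_fit(naive_model, years, output,
                      p0=(1.0, 2050.0, 1.0),
                      bounds=([0.0, 2001.0, 0.01], [np.inf, 2500.0, 10.0]))
A, t_star, b = params
print(f"fitted blowup year t* ~ {t_star:.0f}")
```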

After reading a (very small) bit of your technical paper, my sense is that your main contribution is to fix a small inconsistency in how we go about estimating the parameters of the "naive model". I don't deny that this is a useful technical contribution, but I believe that this is what it is: a technical contribution. I don't think it brings any new insight into questions such as whether or not there will indeed be a near-infinite explosion of human development in the near future.

I am not comfortable with the fact that, in order to convey the idea of introducing randomness into the "naive model", you invoke "E = mc²", the introduction of calculus by Newton and Leibniz, the work of Nobel prize winners, or the fact that "you experienced something like what [this Nobel prize winner] experienced, except for the bits about winning a Nobel". Introducing some randomness into a model is, in my opinion, a relatively common thing to do, once we have a deterministic model that we find relatively plausible and that we want to refine somewhat.
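To illustrate how ordinary this step is, here is a toy sketch of the move from the deterministic naive model to a stochastic one. The noise term and all parameter values are my own arbitrary choices, not the paper's actual specification or calibration:

```python
# Toy version of the naive model with noise added,
#   dY = c * Y**(1+eps) dt + sigma * Y dW,
# simulated with the Euler-Maruyama scheme. All parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
c, eps, sigma = 0.02, 0.5, 0.3          # arbitrary, uncalibrated parameters
dt, horizon, n_paths = 0.01, 50.0, 5_000
n_steps = int(horizon / dt)

y = np.ones(n_paths)
blowup_time = np.full(n_paths, np.inf)  # first time a path crosses the threshold
alive = np.ones(n_paths, dtype=bool)

for k in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    y[alive] += c * y[alive] ** (1 + eps) * dt + sigma * y[alive] * dw[alive]
    exploded = alive & (y > 1e9)        # treat crossing 1e9 as "explosion"
    blowup_time[exploded] = (k + 1) * dt
    alive &= ~exploded & (y > 0)        # freeze exploded and extinct paths

frac = np.isfinite(blowup_time).mean()
print(f"fraction of paths exploding before t={horizon}: {frac:.2f}")
```

The only effect of the noise is to turn the single deterministic blowup date into a distribution of blowup dates across paths, which is what allows statements of the form "explosion before year X with probability p".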

Thanks for the link, that's very interesting! I've seen that you direct donations to the Clean Energy Innovation program of the Information Technology and Innovation Foundation. How confident are you that the funds are actually fully used for that purpose? I understand that their accounting will show that all of your funds go there, but how confident are you that they will not reduce their discretionary spending on this program as a consequence? (I glanced at some of their recent work, and they have some pieces that are fairly confrontational towards China. While this may make sense from a short-term US perspective, it might even be net harmful if one takes a broader view and/or takes into account the possibility of a military escalation between the US and China.) Did you consider the Clean Air Task Force when looking for giving opportunities?

As a side comment, I think it can make perfect sense to work on some area and donate to another one. The questions "what can I do with my money to have maximal impact" and "how do I use my skill set to have maximal impact" are very different, and I think it's totally fine if the answers land on different "cause areas" (whatever that means).

Concerning x-risks, my personal point of disagreement with the community is that I feel more skeptical than seems to be the norm about our chances of optimizing our influence on the long-term future "in the dark". By "in the dark", I mean in the absence of concrete short-term feedback loops. For instance, when I see the sort of things that MIRI is doing, my instinctive reaction is to want to roll my eyes (I'm not an AI specialist, but I work as a researcher in an academic field that is not too distant). The funny thing is that I can totally see myself from 10 years ago siding with "the optimists", but with time I came to appreciate more the difficulty of making anything really happen. Because of this, I feel more sympathetic to causes in which you can measure incremental progress, such as (but not restricted to) climate change.

Climate change is often dismissed on the basis that there is already a lot of money going into it. But it's not clear to me that this settles the question. For instance, it may well be that the large resources being deployed are poorly directed, so that some effort to reallocate them could have a tremendously large effect. (E.g. supporting the Clean Air Task Force, as suggested by the Founders Pledge, may be of very high impact, especially in these times of heavy state intervention and of coming elections in the US.) We should apply the "Importance-Neglectedness-Tractability" framework with caution. In the final analysis, what matters is the impact of our best possible action, which may not be small just because "there is already a lot of money going into this". (And, for the record, I would personally rate AI safety technical research as having very low tractability, but I think it's good that some people are working on it.)

I've just read the results of an interesting new study on the effect of red-flagging some information on social media, with flags such as "Multiple fact-checking journalists dispute the credibility of this news", and variations in which "Multiple fact-checking journalists" is replaced by, alternatively, "Major news outlets", "A majority of Americans", or "Computer algorithms using AI". The researchers tested the effect this had on people's propensity to share the content. The effect of the "fact-checking" phrasing was the most pronounced, and very significant (a reduction of about 40% in the probability of sharing the content, which jumps to 60% for people who identify as Democrats). Overall, the effect of the "AI" phrasing was also very significant, but quite counterintuitively it had the effect of increasing the probability of sharing the content for people who identify as Republicans! (By about 8%; it decreased that same probability by 40% for people who identify as Democrats.)
https://engineering.nyu.edu/news/researchers-find-red-flagging-misinformation-could-slow-spread-fake-news-social-media

I think it's worth noting that, for predictions concerning the next few decades, accelerating growth and "the god of straight lines" with 2% growth are not the only possibilities. There is for instance this piece by Tyler Cowen and Ben Southwood on the slowing down of scientific progress, which I find very good. Also, in Chapter 18 of Gordon's book The Rise and Fall of American Growth, he predicts (under assumptions that I find reasonable) that the median disposable income per person in the US will grow by about 0.3% per year on average over the period 2015–2040. (This does not affect your argument though, as far as I understand it.)

Thanks for this interesting post! I particularly like your point that instant-runoff voting "has a track record in competitive elections and is much more in line with conventional notions of “majority”". Paraphrasing, the point is of course not to debate whether or not IRV produces a theoretically valid notion of majority; rather, it is about the psychological perception of the voting process and the perceived legitimacy of the winner. I think these psychological aspects are very important, and are essentially impossible to capture with any theory.
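For readers who haven't seen the mechanics, here is a minimal sketch of an instant-runoff count (a generic description of the algorithm with simplistic tie-breaking, not any jurisdiction's exact rules). It shows where the conventional notion of majority comes from: the winner ends up with more than half of the ballots still in play.

```python
# Minimal instant-runoff count: repeatedly eliminate the candidate with the
# fewest first-choice votes until someone holds a majority of active ballots.
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of ranked preference lists, most preferred first."""
    active = {c for b in ballots for c in b}
    while True:
        tally = Counter()
        for b in ballots:
            for choice in b:
                if choice in active:   # ballot counts for its top remaining pick
                    tally[choice] += 1
                    break
        total = sum(tally.values())    # exhausted ballots no longer count
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:          # strict majority of active ballots
            return leader
        active.remove(min(tally, key=tally.get))  # simplistic tie-breaking

ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))  # B is eliminated first; C then wins 3-2
```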

Relatedly, I found this paragraph, which I find worrisome, in Wikipedia's article on approval voting: "Approval voting was used for Dartmouth Alumni Association elections for seats on the College Board of Trustees, but after some controversy, it was replaced with traditional runoff elections by an alumni vote of 82% to 18% in 2009." My understanding is that voters in approval voting most often choose to support only one candidate, despite being given a much broader range of options; in elections with more than 2–3 candidates, this often leads to a winner who collected only a small fraction of the votes, and who is then perceived as lacking legitimacy.

I have not studied the question in detail, but as of now my guess would be that instant-runoff voting should be preferred over approval voting.
