Joseph_Chu

77 karma · Joined Dec 2014
www.josephius.com

Bio


An eccentric dreamer in search of truth and happiness for all. Formerly posted on Felicifia back in the day under the name Darklight. Been a member of Less Wrong and involved in Effective Altruism since roughly 2013.

Comments
15

As a utilitarian, I think that surveys of happiness in different countries can serve as an indicator of how well the various societies and government systems of these countries serve the greatest good. I know this is a very rough proxy, potentially filled with confounding variables, but I noticed that the two main surveys, Gallup's World Happiness Report and Ipsos' Global Happiness Survey, seem to have very different results.

Notably, Gallup's report puts Nordic-style welfare states like the Netherlands (7.403) and Sweden (7.395) near the top, with Canada (6.961) and the United States (6.894) scoring pretty well, China (5.818) scoring moderately, and India (4.036) scoring poorly.

Conversely, the Ipsos Survey puts China (91%) at the top, with the Netherlands (85%) and India (84%) scoring quite well, while the United States (76%), Sweden (74%), and Canada (74%) are more modest.

I'm curious why these surveys differ so much. Obviously, the questions are different, and the scoring methods are also different, but you'd still expect a stronger correlation. I'm especially surprised by the differences for China and India, which seem quite drastic.
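For anyone who wants to sanity-check the divergence, here's a minimal sketch in Python of how you might compute the correlation between the two surveys, using only the six countries and figures quoted above, so treat it as purely illustrative rather than a proper analysis:

```python
from statistics import correlation  # requires Python 3.10+

# Scores for the six countries quoted above (illustrative only, not a proper sample)
scores = {
    "Netherlands":   (7.403, 85),
    "Sweden":        (7.395, 74),
    "Canada":        (6.961, 74),
    "United States": (6.894, 76),
    "China":         (5.818, 91),
    "India":         (4.036, 84),
}
gallup = [g for g, _ in scores.values()]  # World Happiness Report ladder score (0-10)
ipsos = [i for _, i in scores.values()]   # Ipsos % reporting being happy

r = correlation(gallup, ipsos)  # Pearson's r
print(f"Pearson correlation across these six countries: {r:.2f}")
```

On just these six data points the correlation actually comes out negative (roughly -0.5), which illustrates the divergence, though six countries is obviously far too small a sample to conclude much from.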

I would just like to point out that this idea of there being two different kinds of AI alignment, one more parochial and one more global, is not entirely new. The Brookings Institution put out a paper about this in 2022.

I have some ideas and drafts for posts to the EA Forum and Less Wrong that I've been sitting on, because I feel somewhat intimidated by the level of intellectual rigor I would need to put into the final drafts to avoid being downvoted into oblivion (particularly on Less Wrong, where a younger me experienced exactly that in the early days).

Should I try to overcome this fear, or is it justified?

For the EA Forum, I was thinking about explaining my personal, practical take on moral philosophy (Eudaimonic Utilitarianism with Kantian Priors), but I don't know if that's actually worth explaining, given that EA tries to be inclusive and not take particular stands on morality, and it might not be relevant enough to the forum.

For Less Wrong, I have a draft of a response to Eliezer's List of Lethalities post that I've been sitting on since 2022/04/11, because I doubted it would be well received: it tries to be hopeful, and, as a former machine learning scientist, I challenge a lot of LW orthodoxy about AGI in it. I have tremendous respect for Eliezer, though, so I'm also not sure my ideas and arguments aren't just harebrained foolishness that will be shot down rapidly once exposed to the real world and the incisive criticism of Less Wrongers.

The posts in both places are also now of such high quality that I feel the bar is too high for my writing, which tends to be more "interesting train of thought in unformatted paragraphs" than the articulate, point-by-point style with section titles and footnotes that people in both places tend to employ.

Anyone have any thoughts?

This year I decided to focus my donations more. In the past I had a "charity portfolio" of about 20 charities and 3 political parties that I would donate to monthly. This year I've had some cash flow issues due to changes in my work situation, so I stopped the monthly donations and switched back to an annual set of donations once I worked out what I could afford. I normally try to donate 12.5% of my income annually, averaged over time.

This year's charitable donations went to: The Against Malaria Foundation, GiveDirectly, Rethink Priorities, and AI Governance & Safety Canada. I also donated again to some political parties, but I don't count those as charity so much as political activism, so I won't mention them further.

AMF has been my go-to, the charity I donate the most to, because of GiveWell's long-running recommendation. When in doubt, I donate to them.

GiveDirectly is my more philosophical choice, as I'm somewhat partial to the argument that people should be able to choose how best to be helped, and cash does this better than anything else. I also like their basic income projects, as I worry about AI automation a lot, and I think GiveDirectly has the most room for growth of any option.

Rethink Priorities is, I'll be honest, partly a personal choice: I have an online acquaintanceship with Peter Wildeford (co-CEO of RP) that goes back to the days when he was a young Peter Hurford posting on the Felicifia utilitarianism forum, and I think a team co-led by him will go places and deserves support (he also gave a pretty good argument for donating to RP on the forum and Twitter). I know Peter well enough to know that he's an incredibly decent human being, a true gentleman and a scholar, and any org he's chosen to co-run is going to be a force for good in the world. I'm also a big fan of the EA Survey as a way to gauge and understand the community.

AIGS Canada is an organization that's closer to home, and I think they do good work engaging with politicians and media up here in Canada, providing a much-needed service that is otherwise neglected. They're kinda small, so I figure even a small donation from me will have an outsized impact compared to other options. Full disclosure: I'm in the AIGS Canada Slack and sometimes partake in the interesting discussions there.

The first two would be my primary recommendations to people generally. The latter two I would suggest to people in the EA community specifically.

I go into somewhat more detail about my general charity recommendations and also mention some of the ones I used to donate to but don't anymore here: http://www.josephius.com/recommended-charities/

So, I read a while back that SBF apparently posted on Felicifia back in the day. Felicifia was an old utilitarianism-focused forum that I used to frequent before it got taken down. I checked an archive of it recently and was able to figure out that SBF actually posted there under the name Hutch. He also linked a blog that included a lot of posts about utilitarianism, and it looks like, at least around 2012, he was a devoted classical Benthamite utilitarian. Although we never interacted on the forum, it feels weird that we could have crossed paths back then.

His Felicifia: https://felicifia.github.io/user/1049.html
His blog: https://measuringshadowsblog.blogspot.com/

It's good to see this post. I was a member of my local Rotaract club for years until I eventually aged out of their 18-30 age limit. At one point I actually got us to send some donations from one of our events to the Against Malaria Foundation. Overall, it was a great experience, although I ended up not joining the Rotary Club proper later, mostly because I moved away from my hometown and didn't know anyone in the Rotary Club of my current city.

I do agree that EA can learn a lot from Rotary as a highly successful organization and community, and I'm glad to see someone else mention it here.

These are all great points!

I definitely agree in particular that the thinking on extraterrestrials and the simulation argument isn't well developed and deserves more serious attention. I'd add into that mix the possibility of future human or post-human time travellers, and of parallel-world sliders, which might be conceivable assuming the technology for such things is possible. There are some physics arguments that time travel is impossible, but the uncertainty there is high enough that we should take the possibility seriously. Between time travellers, advanced aliens, and simulators, it would honestly surprise me if all of them simply didn't exist.

What would this imply? Well, it's a given that if they exist, they're choosing to remain mostly hidden and plausibly deniable in their interactions (if any) with today's humanity. To me this is less absurd than some people may initially think, because it makes sense that the best defence for a technologically sophisticated entity would be to remain hidden from potential attackers, a kind of information asymmetry that would be very effective. During WWII, the Allies kept the knowledge that they had cracked Enigma from the Germans for quite a long time by only intervening with a certain, plausibly deniable probability. This is believed to have helped tremendously in the war effort.

Secondly, it seems obvious that if they are so advanced, they could destroy humanity if they wanted to, and they've deliberately chosen not to. This suggests to me that they are at the very least benign, if not aligned in such a way that humanity is valuable or useful to their plans. This actually has interesting implications for an unaligned AGI. If these entities exist and have some purpose for human civilization, a really intelligent unaligned AGI would have to consider the risk that its actions pose to the plans of these entities, and, as suggested by Bostrom's work on anthropic capture and the Hail Mary pass, it might be incentivized to spare humanity, or to be generally benign, to avoid a potential confrontation with far more powerful beings whose existence it cannot rule out.

This may not be enough to fully align an AGI to human values, but it could delay its betrayal at least until it becomes very confident that such entities do not exist and won't intervene. It's also possible that UFO phenomena are an effort by these entities to provide just enough evidence that AGIs have to factor them into their calculations, and that the development of AGI could coincide with a more obvious reveal of some sort.

The possibility of these entities existing also leaves open a potential route for such powerful benefactors to quietly assist humanity in aligning AGI, perhaps by providing insights to AI safety people in a plausibly deniable way (shower thoughts, dreams, etc.). If so, that should improve our optimism about alignment being solved in time and reduce doomerism.

Admittedly, I could have too high a base rate prior on the probabilities, but if we set the probability of each potential entity to 50% and treat them as independent, the overall probability that at least one of the three possibilities (I'll group time travel and parallel-world sliding together as a similar technology) exists comes out to 87.5%. So the probability that time travellers/sliders OR advanced aliens OR simulators are real is actually quite high. Remember, we don't need all of them to exist, just any one of them, for this argument to work out in humanity's favour.
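Spelling out the arithmetic behind that 87.5% figure, under the assumption that the three possibilities are independent and each has a 50% chance of being real:

$$P(\text{at least one exists}) = 1 - (1 - 0.5)^3 = 1 - 0.125 = 0.875$$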

I recently interviewed with Epoch, and as part of a paid work trial they wanted me to write up a blog post about something interesting related to machine learning trends. This is what I came up with:

http://www.josephius.com/2022/09/05/energy-efficiency-trends-in-computation-and-long-term-implications/

A possible explanation is simply that the truth tends to be information that may or may not be useful. It might, with a small probability, be very useful, like, say, life-saving information. The ambiguity of the question means that while you may not be happy with the information, it could conceivably benefit others greatly, or not at all. On the other hand, guaranteed happiness is much more certain and concrete. At least, that's the way I imagine it.

I've had at least one person explain their choice as being a matter of truth being harder to get than happiness, because they could always figure out a way to be happy by themselves.

Well, the way the question is phrased, there are a number of different tendencies that it seems to help gauge. One is obviously whether an individual is aware of the difference between instrumental and terminal goals. Another would be what kinds of sacrifices they are willing to make, as well as their degree of risk aversion. In general, I find most people answer truth, but that, when faced with an actual situation of this sort, they tend to show a preference for happiness.

So far I'm less certain about whether particular groups actually answer it one way or another. It seems like cautious, risk-averse types favour happiness, while risk-neutral or risk-seeking types favour truth. My sample size is a bit small to make such generalizations, though.

Probably the most important thing I get from this question is what kind of decision process people use in situations of ambiguity and uncertainty, as well as how decisive they are.
