QubitSwarm99

293 karma · Joined · Working (0-5 years)

Participation
2

  • Attended more than three meetings with a local EA group
  • Attended an EAGx conference

Comments
54

The dormant periods occurred between applying and getting referred for the position, and between getting referred and receiving an email about an interview. These periods were unexpectedly long, and I wish there had been more communication, or at least some statement about how long I should expect to wait. However, once I had the interview, I only had to wait a week (if I am remembering correctly) to learn whether I would be given a test task. After completing the test task, it was around another week before I learned I had performed competently enough to be hired.

I should have chosen a clearer phrase than "not through formal channels". What I meant was that much of my forecasting work and experience came about through my participation on Metaculus, which is "outside" of academia; this participation did not manifest as forecasting publications or assistantships (as it would through a Masters or PhD program), but rather as my track record (my CV links to my Metaculus profile) and my GitHub repositories. There was also a forecasting tournament I won, which I linked on the CV as well.

I agree with this.

"Number of publications" and "Impact per publication" are separate axes, and leaving the latter out produces a poorer landscape of X-risk research. 

Glad to hear that the links were useful!

Going by Holden's timeline sounds good, and I agree that AGI > HLMI in terms of recognizability. I hope the quiz goes well once it is officially released!

I am not the best person to ask this question (@so8res, @katja_grace, or @holdenkarnofsky would be better suited), but I will try to offer some points.

  • These links should be quite useful: 
  • I don't know of any recent AI expert surveys on transformative AI timelines specifically, but I have pointed you to very recent ones on human-level machine intelligence and AGI.
  • For comprehensiveness, I think you should cover both transformative AI (AI that precipitates a change of equal or greater magnitude to the agricultural or industrial revolution) and HLMI. I have yet to read Holden's AI Timelines post, but I believe it is likely a good resource to defer to, given Holden's epistemic track record, so I think you should use it for the transformative AI timelines. For the HLMI timelines, I think you should use the 2022 expert survey (the first link). Additionally, if you trust that a techno-optimist-leaning crowd's forecasting accuracy generalizes to AI timelines, it might be worth checking out Metaculus as well.
  • Lastly, I think it might be useful to ask, under the existential risk section, what percentage of ML/AI researchers think AI safety research should be prioritized (from the survey: "The median respondent believes society should prioritize AI safety research “more” than it is currently prioritized. Respondents chose from “much less,” “less,” “about the same,” “more,” and “much more.” 69% of respondents chose “more” or “much more,” up from 49% in 2016.").

I completed the three quizzes and enjoyed them thoroughly.

Even without any further improvements, I think these quizzes would be quite effective. It would be nice to have a completion counter (e.g., "X/Total questions complete") at the bottom of the quizzes, but I don't know whether this is possible on quizmanity.

I got through about 25% of the essay and can confirm it's pretty good so far.

Strong upvote for introducing me to the authors and the site. Thank you for posting. 

Every time I think about how I can do the most good, I am burdened by questions like the following:

  • How should value be measured? 
  • How should well-being be measured? 
  • How might my actions engender unintended, harmful outcomes? 
  • How can my impact be measured? 

I do not have good answers to these questions, but I would bet on some actions being positively impactful on net.

For example:

  • Promoting vegetarianism or veganism
  • Providing medicine and resources to those in poverty
  • Building robust political institutions in developing countries
  • Promoting policy to monitor developments in AI

As for the single most positively impactful action, my intuition is that it would take the form of safeguarding humanity's future or protecting life on Earth.

Some possible actions that might fit this bill:  

  1. Work that robustly illustrates the theoretical limits of the capabilities of, and dangers from, superintelligence
  2. Work that accurately encodes human values digitally
  3. A global surveillance system for human and machine threats
  4. A system that protects Earth from solar weather and NEOs

The problem here is that some of these actions, particularly (2) and (3), might themselves cause harm.

Thoughts and Notes: October 5th 0012022 (1) 

As mentioned in my last shortform, over the next couple of weeks I will be moving my brief profiles of different catastrophes from my draft existential risk frameworks post into shortform posts, to make the existential risk frameworks post lighter and simpler.

In my last shortform I included the profile for the use of nuclear weapons; today I will include the profile for climate change.

Climate change 

Does anyone have a good list of books related to existential and global catastrophic risk? This doesn't have to just include books on X-risk / GCRs in general, but can also include books on individual catastrophic events, such as nuclear war. 

Here is my current resource landscape. These are books that I have personally looked at and can vouch for, though I have not read some of them in full; the entries came to mind as I wrote them, since I do not keep a list of GCR / X-risk books at the moment:

General:

AI Safety 

Nuclear risk

General / space

Biosecurity 
