yanni kyriacos

Co-Founder & Director @ AI Safety ANZ, GWWC Advisory Board Member (Growth)
951 karma · Joined · Working (15+ years)

Bio

Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (n.b. we already have AGI).

Posts: 22

Comments: 190

In the absence of a poll feature, please use the agree/disagree function and the "changed my mind" emoji in this quick take to help me get a sense of EA's views on a statement:

"Working on capabilities within a leading AI Lab makes someone a bad person"

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Agree = strongly agree or somewhat agree

Disagree = strongly disagree or somewhat disagree

▲ reaction emoji = unsure / neither agree nor disagree

downvote = ~ this is a bad and divisive question

upvote = ~ this is a good question to be asking

Very quick thoughts on setting time aside for strategy, planning, and implementation, since I'm into my 4th week of strategy development and experiencing intrusive thoughts about needing to hurry up on implementation:

  • I have a 52 week LTFF grant to do movement building in Australia (AI Safety)
  • I have set aside 4.5 weeks for research (interviews + landscape review + maybe a survey) and strategy development (segmentation, targeting, positioning).
  • Then 1.5 weeks for planning (content, events, educational programs), during which I will get feedback from others on the plan and then iterate it. 
  • This leaves me with 46/52 weeks to implement ruthlessly.

In conclusion, 6 weeks on strategy and planning seems about right. 2 weeks would have been too short, 10 weeks would have been too long; this porridge is juuuussttt rightttt.

Keen for feedback from people in similar positions.

Relatedly, when I worked in ad agencies I'd always tell my clients to avoid putting their products on discount. The research I've read shows that discounts:

1. don't impact long-term sales

2. tend to just attract people who would've bought anyway (but at a lower price)

3. get people used to buying at a lower price

"a brand's normal-price buyers are a major source of its volume from price promotions" https://www.researchgate.net/publication/321247500_Buying_Brands_at_Both_Regular_Price_and_on_Promotion_over_Time

I think matching could be like discounts / price promotions (I'd guess there's a ~50% chance this is happening); you're just moving money around between different timelines and creating other negative effects on your brand.

Thanks for taking the time to point this out Michael :) I appreciate it.

Some quick thoughts:

  • By "psychological condition" I don't mean "mentally ill" in the way that term would be used to describe people with mental-ill health. 
  • Basically, I think there is a really good chance (e.g. > 10%) that he is a psychopath or sociopath, and that is really worth worrying about.
  • I think it is fine to speculate about the mental health conditions of people with extreme amounts of power (Altman, Putin, Trump, Biden), if it is with purpose.
  • I don't think that this single statement (i.e. his quote) proves that, and if that is how my post is commonly read then that's MY fault!
  • Even upon reflection, and considering everyone disagreeing with my original comment, I stand by my feeling that it is an extremely weird thing to say. That said, my concern about him potentially being a psychopath or sociopath is mostly informed by other things, not just what was captured in my photo.
  • I'm not speculating that he is mentally ill because I dislike him. But I do feel negatively toward him, because he is trying to build a thing most people don't want and that could kill everyone.

Anyway, thanks again for taking the time!

I'd be extremely interested to learn why people dislike this comment so much :)

Hello titotal. I'd prefer you didn't refer to small protests as "pathetically small". LMK if it isn't obvious why.

"Is it true that OpenAI has claimed that they aren't making anything dangerous and aren't likely to do so in the future? Where have they said this?"

Relatedly, AFAICT they've also never said "We're aiming to make the thing that has a substantial chance of causing the end of humanity." I think that is a far more important point.

There are two obvious ways to be dishonest: telling a lie or not telling the truth. This falls into the latter category.

Two jobs in AI Safety Advocacy that AFAICT don't exist but should, and probably will very soon. Will EAs be the first to create them, though? There is a strong first-mover advantage waiting for someone:

1. Volunteer Coordinator - there will soon be a groundswell from the general population of people wanting to have a positive impact in AI, and most won't know how. A volunteer coordinator will help capture and direct their efforts positively, for example by having them write emails to politicians.

2. Partnerships Manager - the President of the Voice Actors guild reached out to me recently, and we had a surprising amount of overlap in concerns and potential solutions. Voice actors are the canary in the coal mine; more unions (etc.) will follow very shortly. I imagine that within a year there will be a formalised group of these orgs advocating together.
