Scarlett Johansson makes a statement about the "Sky" voice, a voice for GPT-4o that OpenAI recently pulled after less than a week of prime time.
tl;dr: OpenAI made Johansson an offer last September; she refused. They made another offer two days before the public demo. Scarlett...
I'm prepping a new upper-level undergraduate/graduate seminar on 'AI and Psychology', which I'm aiming to start teaching in Jan 2025. I'd appreciate any suggestions that people might have for readings and videos that address the overlap of current AI research (both capabilities...
This course sounds cool! Unfortunately there doesn't seem to be too much relevant material out there.
This is a stretch, but I think there's probably some cool computational modeling to be done with human value datasets (e.g., 70,000 responses to variations on the trolley problem). What kinds of universal human values can we uncover? https://www.pnas.org/doi/10.1073/pnas.1911517117
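As a toy illustration of the kind of modeling this could involve, one simple starting point is asking which situational factors most shift aggregate judgments across dilemma variants. Everything below is invented for the sketch (the factor names, the endorsement rates, the four-variant setup); the real dataset linked above is far larger and richer.

```python
# Hypothetical sketch: which situational factors most shift moral judgments?
# The dilemma data below is made up for illustration only; the real dataset
# has tens of thousands of responses across many variants.
from collections import defaultdict

# Each record: (factors describing the dilemma variant,
#               fraction of respondents endorsing intervention)
responses = [
    ({"personal_force": 0, "lives_saved": 5}, 0.81),
    ({"personal_force": 1, "lives_saved": 5}, 0.42),
    ({"personal_force": 0, "lives_saved": 1}, 0.55),
    ({"personal_force": 1, "lives_saved": 1}, 0.20),
]

def factor_effects(data):
    """For each factor, average endorsement by factor level and report the
    spread between the highest- and lowest-endorsement levels."""
    effects = {}
    for factor in data[0][0]:
        by_level = defaultdict(list)
        for variant, rate in data:
            by_level[variant[factor]].append(rate)
        means = {lvl: sum(v) / len(v) for lvl, v in by_level.items()}
        effects[factor] = max(means.values()) - min(means.values())
    return effects

print(factor_effects(responses))
```

On this toy data, "personal force" shifts endorsement more than the number of lives saved, echoing the dual-process findings the dataset is built around. A real analysis would of course use regression or hierarchical modeling rather than raw level means, but even this crude contrast conveys what "uncovering universal values" might mean operationally.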
For digestible content on technical AI safety, Robert Miles makes good videos. https://www.youtube.com/c/robertmilesai
Two jobs in AI Safety Advocacy that, AFAICT, don't exist but should, and probably will very soon. Will EAs be the first to create them, though? There is a strong first-mover advantage waiting for someone:
1. Volunteer Coordinator - there will soon be a groundswell of people from the general population wanting to have a positive impact on AI. Most won't know how. A volunteer coordinator will help capture and direct their efforts positively, for example by having them write emails to politicians.
2. Partnerships Manager - the President of the Voice Actors guild reached out to me recently. We found a surprising amount of crossover in our concerns and potential solutions. Voice actors are the canary in the coal mine; more unions (etc.) will follow very shortly. I imagine that within a year there will be a formalised group of these different orgs advocating together.
After Sam Bankman-Fried proved to be a sociopathic fraudster and a massive embarrassment to EA, we did much soul-searching about what EAs did wrong, in failing to detect and denounce his sociopathic traits. We spent, collectively, thousands of hours ruminating about what...
I wonder how the recent turn for the worse at OpenAI should make us feel about e.g. Anthropic and Conjecture and other organizations with a similar structure, or whether we should change our behaviour towards those orgs.
I'm very convinced about the Importance and Neglectedness of AI risks.
What are the best resources to get convinced about the Tractability?
I'm not concerned about many AI Safety projects having ~0 impact; I'm concerned about projects having negative impact (e.g., Thoughts ...
I'm also concerned about many projects having negative impact, but think there are some with robustly positive impact:
As Shakeel noted on Twitter/X, this is "the closest thing we've got to an IPCC report for AI".
Below I've pasted info from the link.
The report was commissioned by the UK government and chaired by Yoshua Bengio, a Turing Award-winning AI academic and member of the UN's Scientific Advisory Board. The work was overseen by an international Expert Advisory Panel comprising nominees from 30 countries, including the UK and the nations invited to the AI Safety Summit at Bletchley Park in 2023, as well as representatives of the European Union and the United Nations.
The report’s aim is to drive a shared, science-based, up-to-date understanding of the safety of advanced AI systems, and to develop that understanding over time. To do so, the report brings together world-leading AI countries and the best global AI expertise to analyse the best existing scientific research...
This post was written by Peli Grietzer, inspired by internal writings by TJ (tushant jha), for AOI[1]. The original post, published on Feb 5, 2024, can be found here: https://ai.objectives.institute/blog/the-problem-with-alignment.
The purpose of our work at the AI Objectives Institute (AOI) is to direct the impact of AI towards human autonomy and human flourishing. In the course of articulating our mission and positioning ourselves, a young organization, in the landscape of AI risk orgs, we’ve come to notice what we think are serious conceptual problems with the prevalent vocabulary of ‘AI alignment.’ This essay will discuss some of the major ways in which we think the concept of ‘alignment’ creates bias and confusion, as well as our own search for clarifying concepts.
At AOI, we try to think about AI within the context of humanity’s contemporary institutional structures: How do...
Maybe I already had a pretty dim view, but this incident did not update me about his character personally (whereas "sign a lifetime nondisparagement agreement within 60 days or lose all of your previously earned equity" did surprise me a bit).
I did update negatively on his competency/PR skills though.