Prometheus

"GPT-5 training is probably starting around now." I'm pretty sure it's already been trained, and they're now running evals.

Judging from all the comments in agreement (from people who probably have no political power to actually implement these things, but who might have been useful toward actually solving the problem technically), this pivot is probably a net negative. You will probably fail to have much political influence, but succeed at dissuading people from doing technical research.

Wouldn't Sam selling large amounts of chips to OAI's direct competitors constitute a conflict of interest? It also doesn't seem like something he would want to do, since he seems very devoted to OAI's success, for better or worse. Why would he want to increase decentralization?

I imagine Sam's mental model is the bigger lead OpenAI has over others, the more control they can have at pivotal moments, and (in his mind) the safer things will be. Everyone else is quickly catching up in terms of capability, but if OpenAI has special chips their competitors don't have access to, then they have an edge. Obviously, this can't really be distinguished from Sam just trying to maximize his own ambitions, but it doesn't necessarily undercut safety goals either.

This is useful, but shouldn't there be projects to break down the pipeline that could enable engineered pandemics? This seems like the highest risk among all the possibilities.

Should the US start mass-producing hazmat suits? That way, in the event of an engineered pandemic, the spread of the disease could be limited while still maintaining critical infrastructure and the delivery of basic necessities.

Is there anything that can be done to get New START fully reinstated?

(crossposted from LessWrong)

I created a simple Google Doc for anyone interested in joining/creating a new org to put down their name, contact info, what research they're interested in pursuing, and what skills they currently have. Over time, I think a network can be fostered, where relevant people start pursuing their own research and then begin building their own orgs/getting funding. https://docs.google.com/document/d/1MdECuhLLq5_lffC45uO17bhI3gqe3OzCqO_59BMMbKE/edit?usp=sharing

Out of the four major AI companies, three seem to be actively trying to build God-level AGI as fast as possible. And none of them is Meta. To paraphrase Connor Leahy, watch the hands, not the mouth. Three of them talk about safety concerns but actively pursue a reckless agenda. One of them dismisses safety concerns, but it lags behind the others and is not currently moving at breakneck speed. I think the general anti-Meta narrative in EA exists because the three other AI companies have used EAs for their own benefit (poaching talent, resources, etc.). I do not think Meta has yet warranted being a target.

I'm curious what you think of this, and whether it impedes the effectiveness of what you're describing: https://arxiv.org/abs/2309.05463