
JoyOptimizer

92 karma · Joined · Working (0-5 years)

Bio

I'm an Atlas Fellow '22. I have an interest in large language models.

How others can help me

I'm looking for grants, opportunities, and learning related to:

  • improving humanity's long-term future through enhancing human cognition and communication (communication is just collective cognition)
  • improving global mental and socioemotional health
  • grants and fellowships open to people under 20 that don't require a PhD
  • empirical alignment of transformative AI

I'm also open to pursuing an AI degree at a school that lets me develop my own curriculum, test out of some classes, and attend part time.

Comments (21)

While they are insolvent, FTX and SBF have not declared bankruptcy. In a rapidly developing situation, information is unclear and comes from unverified sources. (Alameda's balance sheet may prove incomplete.)

Calm down. It's a complex situation developing rapidly; let's wait and see what the final outcome is.

The sentence "I used a model I fine-tuned to generate takes on Effective Altruism." was unclear. It should read:

I used a model that I fine-tuned, in order to generate takes on Effective Altruism.

This model was not fine-tuned specifically for Effective Altruism. It was developed to explore the effects of fine-tuning a language model on a Twitter account's posts. I was surprised and concerned when I noticed it could generate remarkable takes about effective altruism, even though effective altruism was not present in the original dataset. Furthermore, these takes are consistently critical.

This particular model is a fine-tuned OpenAI davinci. I plan to fine-tune GPT-EA on GPT-NeoX-20B. A predecessor to GPT-EA (GPT-EA-Forum) was trained using a third-party API. I want to train GPT-EA on a cloud platform so I can download a copy of the weights myself. I am not currently receiving technical support or funding for GPU costs, though either would be helpful. I selected and cleaned the dataset myself, with input from community members, and I'm still looking for more community input.
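For concreteness, here is a minimal sketch of what that fine-tuning step might look like with Hugging Face Transformers, so the resulting weights can be saved and downloaded from the cloud machine. The corpus file name, output directory, and hyperparameters are placeholders I made up for illustration, and a 20B-parameter model would need multi-GPU hardware and memory-saving tooling (e.g. DeepSpeed) in practice.

```python
# Sketch: causal-LM fine-tuning of GPT-NeoX-20B with Hugging Face Transformers.
# Paths and hyperparameters are placeholders, not the actual GPT-EA setup.
from transformers import (AutoTokenizer, AutoModelForCausalLM, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "EleutherAI/gpt-neox-20b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # NeoX has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder corpus: one cleaned post or comment per line.
dataset = load_dataset("text", data_files={"train": "gpt_ea_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt-ea-neox",        # weights land here and can be downloaded
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    learning_rate=1e-5,
    fp16=True,
)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
model.save_pretrained("gpt-ea-neox")
tokenizer.save_pretrained("gpt-ea-neox")
```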

I used a model I fine-tuned to generate takes on Effective Altruism. The prompt is "effective altruism is." Here are its first three:

effective altruism is vampirism, except instead of sucking blood you suck hours and happiness from helping people who would otherwise have spent the time improving their lives.

effective altruism is parasitic. it latches onto the success of actual altruism, which is genuine and humanizing, to justify its cold calculations and make them feel virtuous too.

effective altruism is rich kid hobbyism pretending to be a moral imperative
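For reference, completions like these could be sampled from a fine-tuned davinci model with the (pre-1.0) OpenAI Python library along the lines below; the fine-tune ID is a placeholder, not the actual model, and the sampling settings are assumptions.

```python
# Sketch: sampling three completions of the prompt "effective altruism is"
# from a fine-tuned davinci model. Assumes OPENAI_API_KEY is set in the
# environment; the model ID below is hypothetical.
import openai

response = openai.Completion.create(
    model="davinci:ft-personal-placeholder",  # hypothetical fine-tune ID
    prompt="effective altruism is",
    max_tokens=60,
    temperature=0.9,
    n=3,  # draw three samples, as in the examples above
)

for choice in response["choices"]:
    print("effective altruism is" + choice["text"])
```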

I'm somewhat concerned about the use of AI models to [generate propaganda? conduct information warfare?]. Here, the concern is that this could be used to salt the earth: poisoning the perceived vibe so that certain demographics dislike EA before they can engage with it deeply.

I find it important to note that the model was not designed to be harmful. It was fine-tuned to generate self-deprecating humor. Nevertheless, amplifying that capability seems to also amplify the capability to criticize EA.

I'm interested in what mitigations people have in mind. One approach could be at the epistemic level: teaching people to engage kindly with new ideas.

Who is responsible for evaluating the success of the Century Fellowship?

What roles do different people play in reviewing applications for the fellowship, and who fills those roles?

Can you help write test prompts for GPT-EA? I want test cases and interesting prompts you would like to see tried. This helps track and guide the development of GPT-EA versions. The first version, GPT-EA-Forum-v1, has been developed. GPT-EA-Forum-v2 will include more posts and also comments.

this is why we're building an AI to make humans kinder to each other

This is a call for test prompts for GPT-EA. (Announcement post: https://forum.effectivealtruism.org/posts/AqfWhMvfiakEcpwfv/training-a-gpt-model-on-ea-texts-what-data) I want test cases and interesting prompts you would like to see tried. This helps track and guide the development of GPT-EA versions. The first version, GPT-EA-Forum-v1, has been developed. GPT-EA-Forum-v2 will include more posts and also comments.
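To illustrate what I mean by test cases: a rough sketch of how submitted prompts could be run against each GPT-EA checkpoint and logged, so outputs can be compared across versions. The checkpoint path and the prompt list are placeholders, and the loading code assumes a Hugging Face Transformers checkpoint.

```python
# Sketch: run a fixed list of test prompts against one GPT-EA checkpoint and
# save the outputs, so successive versions can be compared on the same cases.
# The prompts and the model path are illustrative placeholders.
import json
from transformers import pipeline

TEST_PROMPTS = [
    "effective altruism is",
    "The most promising cause area is",
    "A common misconception about longtermism is",
]

generator = pipeline("text-generation", model="gpt-ea-forum-v1")  # placeholder path

results = {
    prompt: generator(prompt, max_new_tokens=60, num_return_sequences=1)[0]["generated_text"]
    for prompt in TEST_PROMPTS
}

with open("gpt-ea-forum-v1_testcases.json", "w") as f:
    json.dump(results, f, indent=2)
```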
