ABishop

Data scientist
17 karma · Joined Feb 2024

Posts (5)


Comments (14)

I've noticed that many people write a lot not only on forums but also on personal blogs and Substack. This is sad: competent and passionate people are writing in places that get very few views. I am one of those people. But honestly, magazines and articles are stressful and difficult, and forums are so huge that even with a messaging function it is hard to reach a transparent state where each person can fully recognize their own epistemic status. I'm interested in a collaborative blog, similar to the early Overcoming Bias. I believe that many bloggers and writers need help, and that we can help each other. Is there anyone who wants to join me?

Is there any research on the gap between AI safety research and reality? I wanted to read Eric Drexler's report on R&D automation in AI development, but it was too long, so I put it on hold.
It is very doubtful whether such things are within the controllable area:
(1) the OpenAI incident;
(2) open-source projects such as Stockfish make their development process public, yet it remains very unclear and opaque (despite their best efforts).
Overall, I feel strongly that AI safety research is disconnected from reality.

Do you believe that altruism actually makes people happy? Peter Singer's book argues that people become happier by behaving altruistically, and psychoanalysis classifies altruism as a mature defense mechanism. However, there are also concerns about pathological altruism and people-pleasers. In-depth research on this is desperately needed.

I'm not very confident on this topic. I have also been described as a very weak-hearted, sensitive person, and I don't think it's for me to say whether HSPs exist or not. But it's very difficult, because the HSP label serves as a shield for many people. I have observed something close to “covert narcissism”: they tend to describe themselves as "competent yet in need of protection," and they want to be overly privileged.

While AI value alignment is treated as a serious problem, the algorithms we use every day do not seem to be subject to alignment. That sounds like a serious problem to me. Has anyone ever tried to align the YouTube algorithm with our values? What about other kinds of platforms?

It is like a seed: basic trust and support are provided. It is doubtful whether long-term, indefinite provision is necessary; wouldn't it be similar to UBI? I don't know, because there is no research. I believe you are begging the question. I can neither agree nor disagree with the claim that it will soon return to its initial state without any long-term effects. As for the estimate, I'm not sure; I can't think of a good measure yet, and I might need a psychologist's help. Perhaps an estimate of mental health or well-being, though I have doubts about QALYs and DALYs; still, as an initial estimate they seem like a good measure. Alternatively, it could be expressed as pain relief or social support. I confess I had no intention of doing serious research; I was simply asking for ideas. It's more a question of whether it's worth it.

Hmm, I'm a little confused. If I cook a meal for someone, it doesn't seem to mean much. But if no one is cooking for someone, that is a serious problem and we need to help. Of course, I'm not sure whether we are suffering from that kind of "skin hunger."

I would like to estimate how effective free hugs are. Can anyone help me?

It's convincing. Couldn't it be improved or modified? Does this seem like an idea worth abandoning completely? I can't think of anything at the moment.

I am planning to write a post about happiness guilt. I think many EAs would have it. Can you share resources or personal experiences?
