Philosophy
Investigation of the abstract features of the world, including morals, ethics, and systems of value

Quick takes

5 · 1d
The American Philosophical Association (APA) announces two $10,000 AI2050 Prizes for philosophical work related to AI, with a June 23, 2024 deadline: https://dailynous.com/2024/04/25/apa-creates-new-prizes-for-philosophical-research-on-ai/ https://www.apaonline.org/page/ai2050 https://ai2050.schmidtsciences.org/hard-problems/
1 · 15h
A corporation exhibits emergent behavior, over which no individual employee has full control. Because the unregulated market selects for profit and nothing else, any successful corporation becomes a kind of "financial paperclip optimizer". To prevent this, the economic system must change.
10 · 2mo
Okay, so one thing I don't get about "common sense ethics" discourse in EA is: which common sense ethical norms prevail? Different people, even in the same society, have different attitudes about what's common sense. For example, pretty much everyone agrees that theft and fraud in the service of a good cause, as in the FTX case, are immoral. But what about cases where the governing norms are ambiguous or changing? For example, in the United States it's customary to tip at restaurants and for deliveries, but there isn't much consensus on when and how much to tip, especially with digital point-of-sale systems encouraging people to tip in more situations. (Just as an example of how conceptions of "common sense ethics" can differ: I just learned that you're apparently supposed to tip the courier before you get a delivery now, otherwise they might refuse to take your order at all. I grew up believing that you tip after you get service, but many drivers expect you to tip beforehand.)

You're never required to tip as a condition of service, so what if you just never tipped and always donated the equivalent amount to highly effective charities instead? That sounds unethical to me, but technically it's legal and not a breach of contract. Going further, what if you started a company, like a food delivery app, that hired contractors to do the important work and paid them subminimum wages[1], forcing them to rely on users' generosity (i.e., tips) to make a living? And then made a 40% profit margin and donated the profits to GiveWell? That also sounds unethical: you're taking with one hand and giving with the other. But in a capitalist society like the U.S., it's just business as usual.

1. ^ Under federal law and in most U.S. states, employers can pay tipped workers less than the minimum wage as long as their wages and tips add up to at least the minimum wage. However, many employers get away with not ensuring that tipped workers earn that much.
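To make the footnote's "tip credit" arithmetic concrete, here is a rough illustration using the current federal figures (a $7.25 standard minimum wage and a $2.13 tipped minimum cash wage; state rules vary, and this is added for illustration, not part of the original post):

\text{cash wage} + \text{tips} \geq \$7.25/\text{hr}, \quad \text{so tips must cover at least } \$7.25 - \$2.13 = \$5.12/\text{hr}

If tips fall short of $5.12/hour, the employer is legally required to make up the difference, which is exactly the obligation the footnote says often goes unenforced.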
15 · 7mo
In his recent interview on the 80,000 Hours Podcast, Toby Ord discussed how nonstandard analysis and its notion of hyperreals may help resolve some apparent issues arising from infinite ethics (link to transcript). For those interested in learning more about nonstandard analysis, there are various books and online resources. Many involve fairly high-level math, as they aim to put what was originally an intuitive but imprecise idea onto rigorous footing. Instead of those, you might want to check out a book like H. Jerome Keisler's Elementary Calculus: An Infinitesimal Approach, which is freely available online. It aims to be an introductory calculus textbook for college students, using hyperreals instead of limits and epsilon-delta proofs to teach the essential ideas of calculus, such as derivatives and integrals. I haven't actually read the book, but I believe it is the best-known book of this sort. Here's another similar-seeming book by Dan Sloughter.
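For a taste of the approach (this is the standard nonstandard-analysis definition of the derivative, not something specific to Keisler's book or the podcast): instead of taking a limit, one picks a nonzero infinitesimal \varepsilon and takes the standard part (the nearest real number) of the difference quotient:

f'(x) = \operatorname{st}\!\left(\frac{f(x+\varepsilon) - f(x)}{\varepsilon}\right)

For example, with f(x) = x^2 the quotient is \frac{(x+\varepsilon)^2 - x^2}{\varepsilon} = 2x + \varepsilon, whose standard part is 2x, recovering the familiar derivative with no limits involved.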
3 · 1mo
I am wondering whether people view EA community building and cause-specific field building differently, especially with respect to the Scout Mindset. My general thoughts are:

EA - Focuses on providing knowledge and evidence so that individuals can self-determine: rationally weighing the evidence provided, deciding whether to update their beliefs, and letting those beliefs inform their actions wherever they may go. The Scout Mindset is intrinsically valuable here: it provides the flexibility to keep updating and working on the beliefs that individuals hold.

Field building - Focuses on convincing people that a particular cause area is worth working on and will have a significant impact; there is less focus on individuals' own assessment of the strength of the arguments and evidence that field builders already possess. The Scout Mindset is instrumentally valuable here: it helps field builders update and work on their own beliefs.

Argument for instrumental value: It is much easier to ask someone to understand one thing and act on it than to understand many things and struggle to act on any of them, which may be counterfactually more impactful.

Argument for intrinsic value: Focusing on intrinsic value means attending to the internal process of belief change within EA, understanding the reasons behind different cultural shifts over time, with particular emphasis on the potential for value drift.

The core difference between the two, as I see it, is whether the community builder focuses on promoting the individual or the cause. However, this may be an oversimplification or an unfair misrepresentation, and I am keen to hear the community's views.
10 · 7mo
Julia Nefsky is giving a research seminar at the Institute for Futures Studies titled "Expected utility, the pond analogy and imperfect duties", which sounds interesting for the community. It will be on September 27 at 10:00-11:45 (CEST) and can be attended for free, in person or online (via Zoom). You can find the abstract here and register here. I don't know Julia or her work and I'm not a philosopher, so I cannot directly assess the expected quality of the seminar, but I've seen several seminars from the Institute for Futures Studies that were very good (e.g. from Olle Häggström; Anders Sandberg gives one on September 20 as well). I hope this is useful information.
11 · 1y
Steelmanning is typically described as responding to the "strongest" version of an argument you can think of. Recently, I heard someone describe it a slightly different way: as responding to the argument that you "agree with the most."

I like this framing because it signals an extra layer of epistemic humility: I am not a perfect judge of what the best possible argument is for a claim. In fact, reasonable people often disagree on what constitutes a strong argument for a given claim.

This framing also helps avoid a tone of condescension that sometimes comes with steelmanning. I've been in a few conversations in which someone says they are "steelmanning" some claim X, but says it in a tone of voice that communicates two things:

* The speaker thinks that X is crazy.
* The speaker thinks that those who believe X need help coming up with a sane justification for X, because X-believers are either stupid or crazy.

It's probably fine to have this tone of voice if you're talking about flat earthers or young earth creationists, and are only "steelmanning" X as a silly intellectual exercise. But if you're in a serious discussion, framing "steelmanning" as being about the argument you "agree with the most" rather than the "strongest" argument might help signal that you take the other side seriously.

Anyone have thoughts on this? Has this been discussed before?
4 · 4mo
I think this isn't mentioned enough in EA, and I feel the need to point out this quote from William_MacAskill_when-should-an-effective-altruist-donate.pdf (globalprioritiesinstitute.org): " " (p. 7). In other words,

v(A) = P(\text{Utilitarianism is correct}) \cdot v(A \mid \text{Utilitarianism is correct}) + P(\text{Rationalism is correct}) \cdot v(A \mid \text{Rationalism is correct}) + \cdots

where v(A) is the value of some action A, P(B) is the probability that B is true, and v(A \mid B) is the value of A given that B is true.
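A toy worked example of this expected-value formula (the probabilities and values below are invented purely for illustration, not from MacAskill's paper): suppose P(\text{Utilitarianism is correct}) = 0.6 and P(\text{Rationalism is correct}) = 0.4, and some action A has v(A \mid \text{Utilitarianism}) = 10 but v(A \mid \text{Rationalism}) = -5. Then

v(A) = 0.6 \cdot 10 + 0.4 \cdot (-5) = 6 - 2 = 4

so A comes out positive in expectation under moral uncertainty even though one theory disfavors it.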