
marswalker

Entrepreneur
147 karma · Joined · Working (6-15 years) · Pursuing an undergraduate degree · Huntsville, AL 35816, USA

Bio

Currently starting a charity in the rocket city! 

How others can help me

Start a community organization in Huntsville! (I'm obviously open and excited to help with this.)

How I can help others

The couch is open to those visiting obviously, I have a house right in the middle of downtown Huntsville so feel free to drop in! 

I also have experience founding and seeing a charity through the first phases of existence. 

Comments (21)

TL;DR, because I got long-winded: if you ever find yourself planning to commit some morally horrible act in the name of a good outcome, stop. Those kinds of choices aren't made in the real world; they're a thought exercise (and usually a really stupid one at that).

Long version: 

Sorry that you got downvoted so hard; keep in mind that knee-jerk reactions are probably pretty strong right now. While the disagree votes are justified, the downvotes probably are not (I'm assuming this is a legitimate question).

I'm constantly looking to learn more about ethics, philosophy, etc., and I was recently introduced to this website: What is Utilitarianism? | Utilitarianism.net, which I really liked. There are a few things I disagree with or feel could have been explored more, but I think it's good overall.

To restate and make sure I understand where you're coming from: I think you're framing the current objections like a trolley problem, or its more advanced version, the transplant case. (Addressed in 8. Objections to Utilitarianism and Responses – Utilitarianism.net, second paragraph under "General Ways of Responding to Objections to Utilitarianism".) If I were going to reword it, I would put it something like this:

"When considered in large enough situations, the ideal of precommitment would be swamped by the potential utility gains for defecting." 

This is the second response commonly used in defense of the utilitarian framework: "debunk the moral intuition" (paragraph 5 in the same chapter and section).

I believe, and I think most of us believe, that this isn't the appropriate response to this situation, because here the moral intuition is correct. Any misbehavior on this scale weakens the economic system, harms thousands if not millions of people, and erodes trust in society itself.

A response you might offer would be something like, "but what if the stakes were even higher?"

And I agree: it would be pretty ridiculous if, after the Avengers saved NYC from a Chitauri invasion, someone tried to sue the Hulk for using his car to crush an alien or something. We would all agree with you there; the illegal action (crushing a car) is justified by the alternative (aliens killing us all).

The problem with that kind of scale, however, is that if you ever find yourself in a situation where you think, "I'm the only one who can save everyone; all it takes is [insert thing that no one else wants me to do]," stop what you're doing and do what the people around you tell you to do.

If you think you're Jesus, you're probably not Jesus (or, in this case, the Hulk).

That's why the discussions of corrupted hardware and the unilateralist's curse (links provided by OP) are so important. 

For more discussion on this, see Elements and Types of Utilitarianism – Utilitarianism.net, "Multi-level Utilitarianism Versus Single-level Utilitarianism."

One must-read section says that "In contrast, to our knowledge no one has ever defended single-level utilitarianism, including the classical utilitarians.26 Deliberately calculating the expected consequences of our actions is error-prone and risks falling into decision paralysis." 

I would encourage you to read that whole section (and the one that follows it if you think much of rule utilitarianism) as I think one of the most common problems with most people's understanding of utilitarianism is the single-level vs multi-level distinction.

I appreciate the links, these are exactly what I was looking for!  I'll be browsing through them as I get some time! 

It seems like you're at the "expert-master" end of the scale to my "novice-apprentice" level. Philosophy ultimately won't ever be much more than a fun hobby of mine, but I've always loved diving into some of the deeper stuff. Would you be open to me reaching out and talking with you as I comb through this and come up with questions?

I understand you're probably busy, so if you have recommendations for some other resources or places to engage people with ideas like this (even if just to read what they write), I would appreciate those too!

I'm afraid that despite professing to be a utilitarian, I'm far from an expert. If you've got a moment, could you help me poke a little more into a niche section of this?

Is there some overlap between Hare's two-level utilitarian framework and what is being proposed in this article? It doesn't seem like they're arguing directly for a framework; it's more an explanation of why and how they chose their virtues.

I've always found virtue ethics interesting; my first foray into reading philosophy on my own focused on it, and I wouldn't really have described myself as a utilitarian until my later teens.

When I stumbled across Hare's arguments, I began to think about ways to reconcile his "archangel and prole" analogy with the way we tend to primarily communicate (at least in my view) via intuitions and stories regarding character virtues. 

I've done some basic searching, nothing too in-depth, and haven't really found much engagement. Do you have any ideas for further reading? I'd be interested in other examples of what people consider utilitarian virtues!

Not here to weigh in on the pro/anti nuclear arguments.

I just wanted to thank you for posting and engaging with the forum about your thoughts! I think that this style of post is one of the most useful because it leads to a better understanding for all involved. 

I'm sure you've all seen the EA Hub post that was put up about a month ago, but it's worth restating that it can be hard to find a specific person in EA.

I sometimes use the forum when I'm trying to get in contact with people, primarily by searching their name! 

I also filled out the form, so apologies if this is a double entry! 

Cotton Bot 

Economic growth

Problem: In 2021, a mere 30% of the world's cotton harvest was gathered by machinery. This means that roughly 70% of the 2021 worldwide supply of cotton was harvested using the same methods as American slaves in the 1850s. A significant amount of the hand harvesting involves forced labor.

Solution: The integration of existing technologies can provide a modular, robust, swarming team of
small-scale, low-cost harvesters. Thoughtful system design will ensure the harvesters are simple to
operate and maintain while still containing leading edge technical capability.

How to: The project is focused on developing a single-row robotic harvester that meets key performance parameters and system attributes, allowing operation in the most technologically remote areas of the world with little or no logistics tail. The single-row harvesters can intuitively communicate to swarm-harvest in teams of two to two hundred independent systems.
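As a toy illustration of the swarm idea (everything here is my own assumption, not part of the proposal), the simplest coordination task is splitting a field's rows evenly across a team of harvesters:

```python
# Hypothetical sketch: round-robin assignment of field rows to a swarm
# of 2..200 single-row harvesters. Names and logic are illustrative only.
def assign_rows(num_rows, num_harvesters):
    """Return a list where element i holds the row indices for harvester i."""
    if not (2 <= num_harvesters <= 200):
        raise ValueError("swarm size must be between 2 and 200")
    assignment = [[] for _ in range(num_harvesters)]
    for row in range(num_rows):
        # Round-robin keeps each harvester's load within one row of the others.
        assignment[row % num_harvesters].append(row)
    return assignment

plan = assign_rows(num_rows=10, num_harvesters=3)
# plan[0] is [0, 3, 6, 9]; plan[1] and plan[2] get three rows each.
```

A real system would replace this static split with dynamic reallocation as machines break down or finish early, but the even-split baseline shows why the team size can scale from two to two hundred without changing the operator's workflow.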

Background: My father has been the REDACTED for a few years now. We have been talking for years about how much cotton gets wasted in the field near our house, and this grant strikes me as a perfect opportunity to see if a prototype could be built.

Pluses:

1. He is not an EA (though he is EA-adjacent, mostly from my prodding), so it's an opportunity to bring a non-EA in to work on our projects.

2. He has no desire to develop the business after building a prototype and proving the use case, so the patent would come back to the FTX Future Fund as investors.

3. He has a lot of experience doing exactly this, so he will most likely be able to execute. 

Cons: 

1. It's expensive, because he intends to hire employees to work on it full-time.

2. He isn't an EA, so he may not perfectly represent EA interests in this (somewhat mitigated by the fact that I will also be working on it).

3. He has no desire to develop the business after building the prototype, so we'll need someone to do that (or give the tech away for free).

His name is REDACTED, and he works at the REDACTED in case anyone wants to look him up! 

I had a similar idea, and I think that a few more things need to be included in the discussion of this. 

There are multiple levels of ideas in EA, and I think that a red team becomes much more valuable when they are engaging with issues that are applicable to the whole of EA. 

I think ideas like the institutional critique of EA, "the other heavy tail," and others are often not read and internalized by EAs. It would be worth having a team that makes arguments like these, then breaks them down and provides methods for avoiding the pitfalls they point out.

Things brought up in critique of EA should be specifically recognized and talked about as good. These ideas should be recognized, held up to be examined, then passed out to our community so that we can grow and overcome the objections. 

I'm almost always lurking on the forum, and I don't often see posts talking about EA critiques. 

That should change. 

This led to personal lifestyle changes: I bought an air purifier and gave purifiers as gifts to friends and family.

Glad I'm not the only one who sees it! I'm a low-risk investor, but I've sold everything I have and I'm doing cash-covered put spreads. We'll see how it all turns out.
