BEA

Brian E Adams

Pricing and Marketing Manager @ Hudson Homes
29 karma · Working (6-15 years) · Hooquest.com

Bio

Married 35-year-old Texan with 5 young children. USMA '09, US Army with one Afghanistan deployment, licensed Realtor, middle manager setting rents at an institutional landlord.

I identify as a Republican, rationalist, fusionist conservative, Austrian, classical liberal, Evangelical Christian, and neoconservative.

I meet the broad definition of an effective altruist (do the max good), but am more precisely an EA skeptic, particularly wary of EA's technocratic inclinations and overconfidence in empiricism. I'm here for good-faith persuasion as to why I'm wrong!

Favorite concept is Auftragstaktik. Favorite thinker is Hayek.

How others can help me

I want to have my views challenged.

How I can help others

I have real estate industry experience including pricing (forecasting) with iBuyers, institutional investors, real estate technology, and MLS institutions.

Comments (21)

Given that power comes from choices, it makes sense that power would correlate with available choices. But I think it matters which comes first.

It may seem like a distinction without a difference, but it isn't in my experience.

An example: lots of managers talk about empowerment in the workplace, but then remove control over choices from employees in the name of "consistency", "guardrails", "scalability", etc. Those choices are where power comes from, so elevating them up the hierarchy is disempowering, and organizations usually suffer for it.

So I think we probably agree? I just think choice precedes power, not the other way around.

I also like Nassim Taleb's idea of optionality, in which choices, even inferior ones, have value in that they give us options.

Yes, if AGI removes choices, that would strike me as bad.

I think the best way to stop a bad guy with an AI is a good guy with an AI.

And hamstringing the well-intentioned people willing to play by the rules will only give the edge to the bad guys.

Your designs could backfire spectacularly, were they even workable.

I think power is the product of choice. Power can be reduced to "having choices".

I believe marginal utility simply means that automation will reduce the cost of many things to near zero, freeing our resources to spend in other domains that are, by definition, not automated and still labor-intensive.

At the point where no such job exists, we'll have, also by definition, achieved radical abundance, at which point being jobless doesn't matter.

Wouldn't a UBI then artificially prop up the current economy to the detriment of achieving radical abundance? It would be paid for via a tax of some kind on these "so abundant it's free" goods, which would keep them from becoming... so abundant they're free, no?

Of all the things about AGI that concern me, losing my job is by far the least of my worries.

My half-baked theory is that there will always be jobs short of radical abundance, and once we reach radical abundance, jobs won't be necessary.

If AI automated all knowledge work WITHOUT delivering radical abundance, then there would still be jobs delivering goods/services that AI is, by definition, not delivering.

And if so, we have nothing to fear.

I'm loath to use this, but let's use QALYs and assume, as I believe, that a life's value can never be less than 0 (i.e., that it is never better to die than to live).

There is nothing worse than death. A death has no benefits unless it unlocks life.

I don't think the (likely nonexistent) positive effects of "generational replacement" will mean literally fewer deaths, and certainly not on a scale that would justify discounting the deaths of entire generations of individuals.

I don't think "personal beliefs" should be included in an "all known factors" analysis of how we invest our resources. Should I value Muslim lives less because they may disagree with me on gay rights? Or capital punishment? Why not, in your framework?

I also don't think there's a "but" after "all lives are equal". That can be true AND we have to make judgment calls about how we invest our resources. My external action is not a reflection of your intrinsic worth as a human but merely my actions given constraints. Women and children may be first on the lifeboat, but that does not mean they are intrinsically worth more morally than men. I think it's a subtle but extremely important distinction, lest we get to the kind of reasoning that permits explicitly morally elevating some subgroups over others.

I do agree that there is private-sector incentive for anti-aging, but I think that's true of a lot of EA initiatives. I'm personally unsure how diverting funds away from Really Important Stuff is a good thing just because the RIS happens to be profitable. I could perhaps make the case that it's even MORE important to invest there, if you're inclined to be skeptical of the profit motive (though I'm not, so I won't).

I think I understand and that makes sense to me.

If I understand what you're saying correctly, this is another reason I don't identify as EA.

You're basically saying people dying is advantageous because their influence is replaced by that of people you deem to have superior virtues?

It's not obvious to me that "replacement" generations have superior values to those they replace merely on account of being younger/newer, etc.

But even accepting that's the case, how is discounting someone's life because they have the wrong opinions not morally demented?
