Unintended consequences are harder for campaigns to avoid than even governments, from where I'm sitting. But yes, worth looking at more, and yes, I'm interested. Nice post.
I agree this is really strange. Many AI people supposedly into safety don't seem to give much thought to the more obvious policies, at least publicly (unless someone can signpost some).
Why not move national security research funding from AI development and application to safety research?
Why not call out the risks and bring more skepticism to (a) the hope of ever achieving aligned AI, and (b) the idea that aligned AI would really improve the human condition anyway, while reminding people of the risks?
Why not ask all companies or industry researchers to apply for a permit, with some prior training in risks or safety, before they work on anything more advanced than basic statistical algorithms? Or even professional registration? Just slow it down and make it more expensive. These bodies can be set up internationally without having to be passed into law.
Why not tempt coders and researchers who are making particularly good progress to work on something else? This could be done around the world, like counter-recruitment in espionage or competitive industries.
That seems reasonable. The advice we've had both specifically and generally from legal people is that a will which appears not to take into account your life circumstances is open to challenge. Certainly in the UK, charitable legacies have been successfully challenged for not taking children into account (even when that appears to have been deliberate).
I guess my observation is that almost all people would expect to be in that more complex position before they die, and I expect that will have a large effect on the potential for ROI of these wills.
Yep. Would also be keen on the more comprehensive one 😊 Well done, though.
Why not test this? I'm probably only suggesting this because I'm reluctant to trust one or two papers on this alone. It would be cheap to do, e.g.:
-Write two similar tests of the key dimensions of performance you care about.
-Recruit a number of participants
-Put each test in an envelope marked 1 or 2 (for first or second), then put the two envelopes in a bigger envelope, making sure that the envelopes marked 1 and 2 don't contain the same test.
-Assign people to two rooms. In one, a friend has raised the CO2 to 1200ppm; in the other, it's 600ppm. You don't know which is which and you don't tell the participants. They do test one first in one of the rooms, and test two second in the other.
-Look at results
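The envelope-and-rooms procedure above is essentially a counterbalanced crossover design. As a rough sketch of how the assignment and a basic analysis could work (all names here are hypothetical, and the analysis is just a paired mean difference, not a full significance test):

```python
import itertools
import random
import statistics

# The four possible (room order, test order) combinations; cycling through
# them counterbalances both order effects across participants.
ORDERS = list(itertools.product(
    [("high_co2", "low_co2"), ("low_co2", "high_co2")],
    [("test_1", "test_2"), ("test_2", "test_1")],
))

def assign(participants, seed=0):
    """Shuffle participants, then cycle through the counterbalanced orders."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    plan = []
    for i, person in enumerate(shuffled):
        rooms, tests = ORDERS[i % len(ORDERS)]
        # Each session pairs a room (CO2 level) with the test taken in it.
        plan.append({"participant": person, "sessions": list(zip(rooms, tests))})
    return plan

def mean_paired_difference(results):
    """results: one (score_in_low_co2, score_in_high_co2) pair per participant.
    Returns the mean within-participant advantage of the low-CO2 room."""
    return statistics.mean(low - high for low, high in results)
```

Because every participant takes both tests and sits in both rooms, each person serves as their own control, which is what keeps a small, cheap study like this informative.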
This was once controversial, but I now think that economists have settled into thinking that corruption is bad overall:
"Does corruption sand or grease the wheels of economic growth? This column reviews recent research that uses meta-analysis techniques to try to provide more concrete answers to this age-old question. From a unique, comprehensive database of 460 estimates of the impact of corruption on growth from 41 studies, the main conclusion that emerges is that there is little support for the “greasing the wheels” hypothesis."
I don't think that addresses my comment. I'm not talking about corruption as a general phenomenon being correlated with higher growth. I'm talking about corruption being a political phenomenon and anti-corruption being a cause-blind political intervention. Without local knowledge you don't know if you're improving things or not. Political economy doesn't equal economics. But thanks, useful article!
You didn't mention policing or accountability campaigning, which the politics/development literature suggests is often a necessary step for a country to come out of poverty, depending on which country you're in.
I think you need to think a bit more deeply about the corruption thing. A political economist's view might be that there isn't that much harm in corruption per se, but there is a lot of harm in certain types of corruption. Sometimes corruption is a means of achieving fantastic policy goals, anti-corruption among them. The key thing is to keep an eye on what matters and on the effects of your actions, and to make sure you're completely honest with those you love. Imagine telling someone they should go into consultancy but never wear a suit: in some environments it's a signal, and that signal can't meaningfully be changed from below. But there are always counter-examples, like Dora Akunyili, though she wouldn't have done what she did without being a first-class pharmacist with a fiery personality in the right place at the right time.
I sympathise with the point you make with this post.
However, isn't it antithetical to consequentialism, rather than EA? EAs can have prohibitions against causing harms to groups of people.
How does this speak to people who use rule-based ethics that obliges them to investigate the benefit of their charitable gifts?
I think they're consistent with a Kantian perspective. Also with a risk-averse consequentialist, and with someone who likes to take responsibility for the consequences of their actions in a like-for-like manner, for ethical-aesthetic reasons.
I don't think ethical offsetting is antithetical to EA. I think it's orthogonal to EA.
We face questions in our lives of whether we should do things that harm others. Two examples are taking a long plane flight (which may take us somewhere we really want to go, but also releases a lot of carbon and causes global warming) or whether we should eat meat (which might taste good but also contributes to animal suffering). EA and the principles of EA don't give us a good guide on whether we should do these things or not. Yes, the EA ethos is to do good, but there's also an understanding that none of us are perfect. A friend of a friend used to take cold showers, because the energy that would have heated her shower would be made by a polluting coal plant. I think that's taking ethical behavior in your personal life too far. But I also think that it's possible to take ethical behavior in your personal life not far enough, and counterproductively shrug it off with "Well, I'm an EA, who cares?" But nobody knows exactly how far is too far vs. not far enough, and EA doesn't help us figure that out.
Ethical offsetting is a way of helping figure this out. It can be either a metaphorical way, e.g. "I just realized that it would only take 0.01 cents to offset the damage from this shower, so forget about it", or a literal way: "I am actually going to pay 0.01 cents to offset the costs of this shower."
As such, I think all of your objections to offsetting fall short:
The reference class doesn't particularly matter. The point is that you worried you were doing vast harm to the world by taking a hot shower, but in fact you're only doing 0.01 cents of harm to the world. You can pay that back to whoever it most soothes your conscience to pay it back to.
Nobody is a perfectly effective altruist who donates 100% of their money to charity. If you choose to donate 10% of your money to charity, that remaining 90% is yours to do whatever you want with. If what you want is to offset your actions, you have just as much right to do that as you have to spend it on booze and hookers.
Ethical offsetting isn't an "anti-EA meme" any more than "be vegetarian" or "tip the waiter" are "anti-EA memes". Both involve having some sort of moral code other than buying bednets, but EA isn't about limiting your morality to buying bednets, it's about that being a bare minimum. Once you've done that, you can consider what other moral interests you might have.
People who become vegetarian feel that, along with their charitable donations, they are morally pushed toward being vegetarian. That's okay. People who want to offset meat-eating feel that, along with their charitable donations, they are morally pushed to offset not being vegetarian. That's also okay. As long as they're not taking it out of the money they've pledged to effective charity, it's not EA's business whether they want to do that or not, just as it's not EA's business whether they become vegetarian or tip the waiter or behave respectfully to their parents or refuse to take hot showers. Other forms of morality aren't in competition with EA and don't subvert EA. If anything they contribute to the general desire to build a more moral world.
An important point. Failing to take this into account comes across as morally narrow.
Nice post, thanks. This is fun.
Class of people: modern slaves
Intervention: advocacy to bring about a decent evidence base, good law, and effective policing in the countries most amenable to change with the largest such populations.
Class of people: chickens
Intervention: experiments to find out what you can do to a factory-farming environment to promote relaxation and prosocial behaviour (sounds, lights, temperature, etc.) among barn chickens.
© 2017 Effective Altruism Forum