
I’ve been wondering whether EA couldn’t find some strategic benefits in a) a peer-to-peer trust economy, or b) rational coordination towards various goals. These seem like simple ideas, but I haven’t seen them publicly discussed. 

I’ll start from the related and oversimplifying assumptions that 

a) there’s a wholly fungible pool of EA money (for want of a better name, let’s call it Gringotts) shared among EAs and EA organisations, and 

b) all EAs trust all other EAs as much as they trust themselves such that we form a megamind (the Hive), and 

c) all EAs consider all EA goals to be worthwhile and high value, even if they see some as substantially less so than others, such that we all have basically the same goal (collecting all the Pokemon).

In some cases these assumptions are so flawed as to be potentially fatal, but I think they’re an interesting starting point for some thought experiments - and we can focus on relevant problems with them as we go. But the EA movement is getting large enough that even if these assumptions were only to hold for microcosms of it, we might still be able to get some big wins. So here are some ideas for exploiting our Hivery, in two broad categories:

 

Building an EA social safety net

 

1) Intra-Hive insurance

Normal insurance is both inherently wasteful (insurance companies have to spend ages assessing risk to ensure that they make a profit on their rates) and negative expected value for the insured party (who pays for the waste, plus the insurer’s profits). In a well-functioning Hive seeking all the Pokemon, with a sufficiently sizable Gringotts, each EA could just register things of irreplaceable value to them personally, and if they ever broke/lost/accidentally swallowed the item, job, existential status etc., they would get some commensurate amount of money with few questions asked. That would save Gringotts from the negative expected value (EV) of almost all insurance, give everyone peace of mind, and avoid a lot of time and angst spent dealing with potentially unscrupulous or opaque insurers.
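To spell the expected-value point out (a toy decomposition with my own labels, not anything from actuarial practice): an insurer has to charge a premium that covers expected payouts plus its overheads and profit, so

$$\mathrm{EV}_{\text{insured}} \;=\; \mathbb{E}[\text{payout}] - \text{premium} \;=\; -(\text{overhead} + \text{profit}) \;<\; 0.$$

A pooled Gringotts keeps the risk-spreading benefit but drops the overhead and profit terms, so the expected loss per member falls to roughly zero (administration aside).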

In practice this is only as good an idea as our simplifying assumptions combined, and creates some dubious incentives, so might be a pipe dream. Still, if the EA community really is or could be a lot closer to the assumed ideal than society at large, it seems like there could be room for some small-scale operations like this - for example EA organisations offering such pseudo-insurance to their staff, and large-scale donors offering it to the EA organisations.

One way to potentially strengthen the trust requirement would be to develop an opt-in EA reputation system on an EA app or website, much like the ratings for Uber drivers. If it felt uncomfortable, obviously you wouldn’t have to get involved, but it could allow a fairly straightforward tier-based system determining what you were eligible for based on your rating (probably weighted by other factors, like how many people had voted) - a rough sketch follows below. You could also give some extra weight to people currently working for EA organisations, though it might be too limiting to make that a strong prerequisite (Earn-to-Givers might want to insure themselves so they could safely give a higher proportion, for example). As with normal insurance it would create moral hazard problems, but hopefully with some intelligent but low-cost reputation management this could still be a big net positive for Gringotts.
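As a very rough sketch of what a tier-based system might look like - all of the names, weights and thresholds here are invented for illustration, not a proposal for actual parameters:

```python
# Hypothetical sketch of a tiered EA reputation score.
# Ratings are 1-5 stars; we shrink towards a prior so that a user
# with only a handful of votes can't immediately reach the top tier.

from dataclasses import dataclass

PRIOR_MEAN = 3.0     # assumed average rating
PRIOR_WEIGHT = 10    # how many "virtual" votes the prior counts for
ORG_BONUS = 0.25     # small optional boost for current EA-org employees

@dataclass
class Member:
    ratings: list[float]          # peer ratings, 1-5
    works_at_ea_org: bool = False

def reputation_score(m: Member) -> float:
    """Bayesian-shrunk average rating, so vote count matters as well as the mean."""
    n = len(m.ratings)
    shrunk = (PRIOR_MEAN * PRIOR_WEIGHT + sum(m.ratings)) / (PRIOR_WEIGHT + n)
    return shrunk + (ORG_BONUS if m.works_at_ea_org else 0.0)

def coverage_tier(score: float) -> str:
    """Map a score onto pseudo-insurance eligibility tiers (thresholds invented)."""
    if score >= 4.5:
        return "full cover"
    if score >= 4.0:
        return "partial cover"
    if score >= 3.5:
        return "small claims only"
    return "not yet eligible"

print(coverage_tier(reputation_score(Member(ratings=[5, 5, 4, 5, 4] * 6))))
# -> "partial cover"
```

The shrinkage towards a prior is just one way of weighting by vote count; any scheme that stops a brand-new account with two five-star ratings from qualifying for full cover would do.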

Personally, I think the reputation app is a really cool idea even if it never got used for anything substantial, but I’m prepared to be alone in that.

 

1.1) Guaranteed income pool for entrepreneurial EAs

This is much like insurance, with similar limitations and much the same potential for mitigating them - except that here it’s based on the idea that entrepreneurialism is one of the highest-EV earning pathways, but because of the ultra-high risks it’s out of reach for anyone who can’t get insta-VCed or support themselves for several months. As with insurance, Gringotts is big enough that it doesn’t really suffer from risks that affect a single person. In this case, though, a further factor is that the EA would need to demonstrate some degree of competence to ensure that they were actually doing something positive-EV - and to the extent that they could do so, they might be able to get funding through regular pathways anyway. 

Something similar might also be useful for people interested in starting EA charities, before the stage where they might be eligible for a substantial grant from, eg, GiveWell or Open Phil. Again, I’m not sure such a gap exists, but it seems worth looking at, especially for people from poorer backgrounds.

 

2) Low-interest loans

Loans have all the waste and negative EV of insurance, except that you get the money straight away - and there’s no question about whether you get it. This maybe makes them a stronger candidate for Gringotts coverage, since it removes one risk factor. Relatedly, they also avoid the incentive-distorting effects of insurance, removing another.

In the real world, loans also require a credit check, which can be based on some quite arbitrary criteria - such as not being able to guarantee repayments because you’re poor, whether you use a credit card or a debit card, or even whether you’re registered to vote. And given the relatively small number of factors a credit rating relies on, there would probably be a lot of random noise in it even if those factors were all sensible.

Lastly, with a normal loan, something has necessarily gone wrong for the creditor if a repayment is missed. Gringotts, on the other hand, might sometimes be content for a debtor to miss repayments if the money nonetheless went towards gathering a lot of Pokemon, or even if it had been wisely spent on an ultimately doomed venture.

 

3) A Hive existential space network

Couchsurfing may already be A Thing, but there might be some opportunities for making it a smoother experience given a robust trust network. Also, since sleeping isn’t the only mode of existence, living spaces aren’t the only kind of existential space; given how many EAs work remotely, there’s probably also a lot of demand for working spaces. EAs with more modest accommodation could also offer solo or duo (etc.) working spaces - in the solo case, presumably when they would normally work elsewhere themselves. It might even be helpful to have co-working spaces fairly close to one another with explicitly differing cultures (eg one being mostly silent, the other having music or ambient sound, or freer conversation, or people working on similar project types, people with similar - or deliberately disparate - skills, people bringing children, etc.).

Given the psychological benefits for some of us of having a separate space for living and working combined with the emotional benefits of having a short commute, EAs who live near each other might even benefit from just swapping homes for the working day.

 

4) EA for-profits offering discounts on VAT-liable goods and services

At the moment there are few EA for-profits, and many of those mainly offer services to disadvantaged subgroups rather than to other EAs. Nonetheless, in future we might see a proliferation of EA startups, even if the only sense in which they’re EA is a strong effective-giving culture among their founders. In such a case, if the goods/services they offer are subject to VAT (or a similar tax), there would be an incentive for them to offer heavy discounts to other EA organisations and/or EAs - since the lower the price, the less Gringotts would leak in tax.

Gringotts could incentivise this with one of the strategies above, though there might be legal implications. Nonetheless, the Hive would surely benefit from finding out exactly what the legal limits are and exploring the possibilities of going right up to them. 

 

Maximising the value of EA employees

 

5) EA organisations partially substituting salaries with benefits

Every time an EA working at an EA org buys something the org could have bought for them, Gringotts loses the income tax paid on the money used to buy it. In the UK at least, there’s a tax-free threshold of £11,500, but in an ideal world everything EA employees wanted to spend money on above that threshold would be bought for them by EA organisations. More realistically, to keep things relatively egalitarian and maintain sensible incentives, the ideal might be to pay for anything EA employees need to maintain a healthy lifestyle. An initial laundry list of candidates I put together: 

  • accommodation (not necessarily just for employees - we might ultimately be able to build peer-to-org or org-to-org existential space networks, per point 3 above)
  • bills
  • travel to and from work
  • a gym membership (or some equivalent physical activity for people who find the gym too sterile)
  • out-of-work education
  • electronic equipment for unrestricted (within reason) personal use
  • clothes
  • pension contributions
  • toiletries
  • food
  • medical supplies

  

I know some of these are already offered by some EA organisations (and many for-profits, come to that), and there will surely be legal restrictions on how much money you can spend on employees like this without it getting taxed. But the potential savings are so big that again the Hive should surely explore and share knowledge of the exact legal boundaries.

 

6) Employees of EA organisations not donating 

Since every such donation is made out of an EA’s taxed income, the same considerations as in 5) apply: every time an EA donates, Gringotts loses the tax paid on the donated amount. The simplest way to avoid this would be for EAs to just ask for a 10% lower salary (or whatever proportion they imagine they would have donated) than they would have asked for in a comparable job elsewhere. 

This would potentially redistribute money among causes, since EAs working at one org might not think it’s actually the best one. But unless the proportion they would redistribute is more than the average income tax on an EA salary (somewhere in the vicinity of 20% seems like a plausible estimate), this is an iterated prisoner’s dilemma: any individual could move more money to their cause of choice by requesting a higher income, but the fewer of us did so, the more money would end up with the causes overall. And it feels like a Hive of cooperating altruists should be able to deal with one little wafer-thin prisoner’s dilemma… 
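A toy comparison of the two options, assuming a flat 20% marginal tax rate and ignoring Gift Aid, pensions and other real-world complications (the figures are purely illustrative):

```python
TAX_RATE = 0.20  # assumed flat marginal income tax rate

def donate_from_salary(gross_salary: float, donation: float):
    """Option A: take the full salary, donate out of post-tax income."""
    tax = gross_salary * TAX_RATE
    kept = gross_salary - tax - donation
    return {"to causes": donation, "to tax": tax, "kept by EA": kept}

def take_lower_salary(gross_salary: float, reduction: float):
    """Option B: ask for a lower salary; the org redirects the difference."""
    tax = (gross_salary - reduction) * TAX_RATE
    kept = (gross_salary - reduction) - tax
    return {"to causes": reduction, "to tax": tax, "kept by EA": kept}

print(donate_from_salary(30_000, 3_000))
# {'to causes': 3000, 'to tax': 6000.0, 'kept by EA': 21000.0}
print(take_lower_salary(30_000, 3_000))
# {'to causes': 3000, 'to tax': 5400.0, 'kept by EA': 21600.0}
```

The causes get the same £3,000 either way, but under Option B £600 less leaks to tax; equivalently, every £1 redirected via a salary reduction costs the employee only £(1 − t) of net income.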

In cases where individuals are working for an EA org but feel that other organisations are substantially more than 20% more effective than their own, it feels like they should often prefer just earning to give. There are numerous possible exceptions - for example, if you think the other org’s multiplier is more than enough to cover the tax loss but you wouldn’t earn enough elsewhere for earning to give to come out ahead, or you’re planning to move to another EA organisation and are working in your current job to gain skills and reputation. Such motivations would seem to have intra-EA signalling costs, though, since they imply both that you’re defecting in the prisoner’s dilemma and that you don’t value the work of the people around you that highly. Ironically, it might actually look bad for an EA employee to admit to making charitable donations.

Even so, the extra-EA signalling costs of not giving could conceivably outweigh both the intra-EA signals and the tax savings. If we believe this, an alternative approach would be for EA orgs to explicitly run donation-directing schemes. Each org could contribute to a pool of money it planned to redirect, its size depending on the number of staff and their salaries. Each employee could then direct some proportion of the pool to the cause of their choice; the weight of their direction could either be proportional to the difference between their salary and the maximum salary they could have asked for or, more diplomatically, just equal for each employee. That way the money would still be distributed in much the same proportions as it currently is, but without being taxed - and EAs could still be said to be donating in some sense at least (and would still have an incentive to keep abreast of what’s going on elsewhere in the EA world).
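A minimal sketch of how the ‘salary forgone’ weighting rule could be computed, with the equal-weights rule as a fallback (the names and numbers are made up):

```python
def directed_shares(pool: float, salary_forgone: dict[str, float]) -> dict[str, float]:
    """Split a redirected pool in proportion to how much salary each employee forwent."""
    total = sum(salary_forgone.values())
    if total == 0:
        # Fall back to the 'more diplomatic' equal-weights rule.
        n = len(salary_forgone)
        return {name: pool / n for name in salary_forgone}
    return {name: pool * forgone / total for name, forgone in salary_forgone.items()}

pool = 20_000  # hypothetical amount the org sets aside to redirect this year
print(directed_shares(pool, {"Alice": 5_000, "Bob": 3_000, "Carol": 2_000}))
# {'Alice': 10000.0, 'Bob': 6000.0, 'Carol': 4000.0}
```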

 

7) Basing salary on expected career trajectory

Similarly to the previous idea: if I’m working at an EA organisation but expect that in the near future I’ll end up working in the private sector - because I’m earning to give, because I’m trying to build career capital, or for any number of other reasons - it doesn’t make sense for me to take substantially more than I need to live on at the EA org and then give a lot of money away after I transition. Better to earn less now and give slightly less later.

Again, this follows from taxation - whether I later pay back the tax on the extra money I earned at the EA org or not, Gringotts will be that much poorer (because it partly comprises me). It also compounds to the extent that you agree with the haste consideration - the money saved now could be worth substantially more than the money you give later.

If you’re moving in the other direction - from the private sector into an EA org - the same strategy would probably make sense in reverse: keep more now and ask for a commensurately lower salary from the EA org, though the effect would be less clear-cut and less pronounced because of the haste consideration. The haste consideration also suggests that if you never expect to work at an EA organisation, it might be better to donate a declining proportion of your income (or rather, to donate in such a way as to increase the amount you keep for yourself over time, holding the net lifetime amount you expect to donate constant). Since this front-loads your donations, it also has the side benefit of making future burnout less costly to Gringotts, and perhaps also less tempting for future-you.
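A sketch of what a front-loaded (declining) schedule might look like while holding the nominal lifetime total fixed; the decline rate is arbitrary, and the haste-consideration ‘discount’ is only gestured at, not modelled:

```python
def declining_schedule(lifetime_total: float, years: int, decline: float = 0.9) -> list[float]:
    """Yearly donations that shrink geometrically but sum to lifetime_total."""
    weights = [decline ** t for t in range(years)]
    scale = lifetime_total / sum(weights)
    return [round(scale * w, 2) for w in weights]

# e.g. £100,000 of lifetime giving over 10 years, shrinking 10% a year:
print(declining_schedule(100_000, 10))
# roughly [15354, 13818, 12436, ...] - more given early, when (per the haste
# consideration) each pound is plausibly worth more.
```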

This strategy is fairly high risk for an individual, in that if you suddenly need to pay for urgent medical assistance or some other emergency expenditure in your younger life, you might find yourself unable to afford it - but that’s just the sort of issue that could be mitigated or even resolved by Hive insurance.

It would also ‘lose’ the interest you’d have earned on the money you kept earlier, but you can account for that when calculating future donations. The effect will be dominated by the tax savings, and in any case the money will still have been earning a (greater) return on investment through its EA use elsewhere in the Hive.

One complicating factor is that commercial employers will sometimes offer a salary based on the size of your current one, so taking a low salary from an EA org might harm your future earning prospects. A possible remedy, if it wasn’t perceived as dishonest, and assuming the EA is leaving their organisation openly and on good terms with it, would be to briefly take a higher salary just as they started hunting for their next job. Personally I think this would be a poetic antidote to an obnoxious practice, but wider public opinion might disagree with me.

 

8) Offering clear financial security to all EA employees

Seemingly contrariwise to the above, but bear with me… 

EA employees will be more productive if they aren’t dealing with financial insecurity, since such insecurity has high costs in both time and mental health. 

According to 80K’s talent gap survey, even a junior hire is worth about $83,000 per year to their EA organisation (that’s the median; the mean is much higher). If we take this literally, then a) EA organisations could comfortably test the effect of doubling (or more) their offered salaries on the number and quality of applications, and, perhaps more realistically, b) they could afford to offer even their most junior employees rates high enough that money isn’t a substantial limiting factor in their lives. 

What ‘isn’t a substantial limiting factor’ means is obviously fairly vague, but it seems like if any EA is, eg, spending a lot of time commuting, waiting for dated hardware to run, eating a lot of cheap unhealthy food, skipping healthy hobbies, or otherwise losing time or health to save money, then that will impede their productivity. Again, taking the above survey at an admittedly naive face value, it would be worth the average EA org spending up to $830 more per year to increase a junior employee’s productivity by just 1% (perhaps more if the productivity increase would compound over their career).
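Spelled out, the naive break-even arithmetic is just:

$$\text{worthwhile extra spend per year} \;\le\; \text{annual value of the hire} \times \text{productivity gain} \;=\; \$83{,}000 \times 1\% \;=\; \$830.$$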

We should probably be sceptical of such striking survey results - nonetheless, there’s room to be more conservative and still see the potential for gain here. In an ideal world, the financial security offered could mostly come from the benefits and insurance discussed above - ie at a ~20% discount.

Lastly, this reasoning argues only for the option of higher salaries/benefits - many EAs on very low salaries seem perfectly able and willing to get by on them - and only for people who would otherwise be below whatever financial threshold would allow them to stop feeling constrained or anxious in daily life.

 


 

I’m aware that some EA organisations are already implementing some form of these strategies, but they’re far from universally adopted. Perhaps this is because they’re bad ideas - this was quite an off-the-cuff post - but I haven’t really heard substantial discussion of any of them, so let’s have it now. And if there’s any mileage in the core assumptions, I’d hope such discussion will reveal several more ways we can use our almighty collective will.

 



Full disclosure - I work for an EA organisation (Founders Pledge), so some of these strategies would potentially benefit me. But hopefully they’d benefit FP still more. 

Thanks to Kirsten Horton and John Halstead for some great feedback on this post.

 

 

Comments (21)

Something along these lines that I've been looking into is providing a cheap hotel for EAs to live in for free whilst they independently study/research/work on start-ups. More information in the following facebook group: https://www.facebook.com/groups/1624791014242988/ [EA Hotel, free accommodation and board for up to 2 years, Blackpool, UK] Hoping to post more detail here soon.

For this group to make an effective social safety net for EAs having a bad time, more is needed than just money. When a real problem actually does arise, people tend to spam the person affected with uninformed suggestions that won't work. They're trying to help, but due to the "what you see is all there is" bias and others, they can't see that they are uninformed and spamming. The result is that the problem doesn't seem real to anyone.

So, the person who has a problem, who may not have any time or emotional energy or even intellectual capacity left over, must explain why dozens of spitball suggestions won't work.

How spitballing can totally sabotage people in need of help:

Imagine that to obtain help, you have to patiently and rigorously evaluate dozens of ill-conceived suggestions, support your points, meet standards of evidence, seem to have a positive attitude about each suggestion, and try not to be too frustrated with the process and your life.

The task of convincing people your problem is real while a bunch of friends are accidentally spamming you with clever but uninformed suggestions might be the persuasive challenge of a lifetime. If any of the ill-conceived options still seem potentially workable to your friends, you will not be helped. To succeed at this challenge, you have to make sure that every spitball you receive from friends is thoroughly addressed to their satisfaction.

A person with a real problem will be doing this challenge when they're stressed out, time poor and emotionally drained. They are at their worst.

A person at their worst shouldn't need to take on the largest persuasive challenge of their lives at that time. To assume that they can do this is about as helpful as "Let them eat cake".

There's an additional risk that people will sour on helping you if they see that lots of solution ideas are being rejected. This is despite the fact that the same friends will tell you "most ideas will fail" in other circumstances. They know that ideas are often useless, but instead of realizing that the specific set of ideas in question are uninformed or not helpful, some people will jump to the conclusion that the problem is your attitude.

Just the act of evaluating a bunch of uninformed spitball suggestions can get you rejected!

Distinguishing between a problem that is too hard for the person to solve and a person who has a bad attitude about solving their problem is itself a challenge. It's hard for both sides to communicate well enough to figure this out. Often a huge amount of information has to be exchanged.

The default assumption seems to be that a person with a problem should talk to a bunch of friends about it to see if anyone has ideas. If you count up the number of hours it actually takes to discuss dozens of suggestions in detail, multiplied by dozens of people, it's not pretty. For many people who are already burdened by a serious problem, that sort of time investment just isn't viable. In some cases the entire problem is insufficient time, so it can be unfair to demand that they do this.

In the event that potential helpers are not convinced the problem is real, or aren't convinced to take the actions that would actually work, the person in need of help could easily waste 100 hours or more with nothing to show for it. This will cause them to pass up other opportunities and possibly make their situation far worse due to things like opportunity costs and burnout.

Solution: well-informed advocates.

For this reason, people who are experiencing a problem need an advocate. The advocate can take on the burden of evaluating solution ideas and advocating in favor of a particular solution.

Given that it often requires a huge amount of information to predict which solution ideas will work and which solution ideas will fail, it is probably the case that an advocate needs to be well-informed about the type of problem involved, or at least knows what it is like to go through some sort of difficult time due to past experience.

Another framing of that solution: EA needs a full-time counselor who works with EAs gratis. I expect that paying the salary of such a person would have positive ROI.

I would be interested in funding this.

Some thoughts on a few of these. I think EA social safety nets already exist for many people, but they're not formal in the way you laid out - they're based more on specific connections and accomplishments. More or less each individual organization and donor has an implicit reputation system based on complex, organization- or donor-specific criteria. Some people will fit multiple positive reputation systems, which gives them more safety nets than someone who fits a few or none. The system is of course dynamic, so if you did fall into a category but then your reputation dropped (say by not doing anything that impressive over a long period of time) you could lose a safety net. Additional factors that affect how many safety nets you have in EA also relate to cost: it's easier to provide a $10k safety net than a $100k one. You can imagine why donors/orgs would generally prefer this system to funding any EA who scores high enough on community-agreed criteria - they can directly fund/support/loan to someone who does well on their own criteria.

To put this into more practical perspective, I would expect that an EA with a strong enough reputation would be able to get support for a while doing entrepreneurship without getting insta-VCed. Likewise, the bar would be lower for a low- or no-interest loan. A case could be made that the system is ineffective or in-groupy and misses people it should not. However, I think it's worth acknowledging that informal systems like this definitely exist, so it's more about the marginal cases who would not get supported by an informal system but would by a formal one. For folks who want to do high-risk projects but do not currently have informal connections/safety nets, it seems worth considering just building up your reputation. This can be done with pretty low-risk things like volunteering part-time for an organization.

I agree with the career trajectory points and know that some but definitely not all EAs take this into consideration when determining salary.

The main reason I'm not looking for a full-time EA job right now is that I don't have enough runway and financial security. I estimate that it will take around two years to build the financial security and runway I need. If you succeed in building a safety net, this might result in a surge of people going into EA jobs. I'm not sure how many people are building up runway right now, or how many hours of EA work you could grab by liberating them from that, but it could be a lot!

This is a big part of why I find the 'EA is talent-constrained, not funding-constrained' meme to be a bit silly. The obvious counter is to spend money learning how to convert money into talent. I haven't heard of anyone focusing on this problem as a core area, but if it's an ongoing bottleneck then it 'should' be scoring high on effective actions.

There is a lot of outside view research on this that could be collected and analyzed.

The obvious counter is to spend money learning how to convert money into talent. I haven't heard of anyone focusing on this problem as a core area, but if it's an ongoing bottleneck then it 'should' be scoring high on effective actions.

This is what many of the core organisations are focused on :) You could see it as 80k's whole purpose. It's also why CEA is doing things like EA Grants, and Open Phil is doing the AI Fellowship.

It's also a central internal challenge for any org that has funding and is trying to scale. But it's not easy to solve: https://blog.givewell.org/2013/08/29/we-cant-simply-buy-capacity/

Is 80k trying to figure out how to interview the very best recruiters and do some judgmental bootstrapping?

I pretty much agree with this - though I would add that you could also spend the money on just attracting existing talent. I doubt the Venn diagram of 'people who would plausibly be the best employee for any given EA job' and 'people who would seriously be interested in it given a relatively low EA wage' always forms a perfect circle.

Even if we had funds, the problem of who to fund is a hard one - perhaps the money would be better spent simply hiring more staff for EA orgs? The best way to know that you can trust someone is to know them personally, but distributing funds in that manner creates concerns about favouritism, etc.

I strongly disagree, on the grounds that these sorts of generators are - while not fully general counterarguments - sufficiently general that I think they are partially responsible for EAs having a bias towards inaction. Also, it seems worth trying more cheap experiments when money is supposedly not the bottleneck.

Nice! I really like the idea of EAs getting ahead by coordinating in unconventional ways.

The ideas in "Building an EA social safety net" could be indirectly encouraged by just making EA a tighter community with more close friendships. I'm pretty happy giving an EA friend a 0-interest loan, but I'd be hesitant to do that for a random EA. By e.g. organizing social events where close friendships could form, more stuff like that would happen naturally. Letting these things happen naturally also makes them harder to exploit.

One issue with this sort of thinking is that, in practice, setting up lots of events sometimes doesn't lead to people becoming this close. Some local effective altruism communities have members being roommates, working at the same organizations, and doing all the social stuff together. That waxes and wanes with how well organized it all is. Lots of EA community organizers move from their home city to another, e.g. Berkeley, and on both ends the switch in who takes on the role of de facto event organizer means organization stagnates while someone else gets used to doing it all. That this apparently happens often means sustaining a space in which close friendships are likely to form is hard, and doing it consistently over multiple years appears, in hindsight, to be hard too. I don't know how good the evidence is for optimal methods of doing this.

Regarding the potentially tax-deductible items mentioned in section 5: usually accommodation, or anything regarded as being for personal use, is not included. It would be treated as payment in kind and therefore taxable (and would also make the tax reporting more complicated!). This is in the UK at least - e.g. https://www.gov.uk/expenses-and-benefits-accommodation/whats-exempt

I had a feeling that might be the case. That page still leaves some possible alternatives, though, eg this exemption:

an employer is usually expected to provide accommodation for people doing that type of work (for example a manager living above a pub, or a vicar looking after a parish)

It seems unlikely, but it's worth looking at whether developing a sufficient culture of EA orgs offering accommodation might satisfy the 'usually expected' criterion.

It also seems a bit vague about what would happen if the EA org actually owned the accommodation rather than reimbursing rent as an expense, or if a wealthy EA would-be donor did, and let employees (potentially of multiple EA orgs) stay in it for little or no money (and, in the latter case, whether 'wealthy would-be donor' could potentially be a conglomerate a la EA Funds).

There seems to be at least some precedent for this in the UK, in that some schools and universities offer free accommodation to their staff, which doesn't seem to come under any of the exemptions listed on the page.

Obviously other countries with an EA presence might have more/less flexibility around this sort of thing. But if you have an organisation giving accommodation to 10 employees in a major developed world city, it seems like you'd be saving (in the UK) 20% tax on something in the order of £800 per month per employee, ie about £7600 per year, which seems like a far better return than investing the money would get (not to mention, if it's offered as a benefit for the job, being essentially doubly invested - once on the tax savings, once on the normal value of owning a property).

So while I'm far from confident that it would be ultimately workable, it seems like there would be high EV in an EA with tax law experience looking into it in each country with an EA org.

Have you got any examples of schools and universities offering free accommodation to staff? I've only heard of subsidised accommodation, and here - https://www.citizensadvice.org.uk/debt-and-money/tax/what-is-taxable-income/tax-on-benefits-in-kind/ - it says that the difference between the rent paid and market rate is taxable.

Of the three Black Mirror episodes I've seen, this reminds me of Nosedive: https://youtu.be/YrpK90bHO2U

I'm not saying such a program would succumb to such a weird state in our culture - just a fun little aside. Regardless, I think if the EA insurance program happened, it would be awesome! That said, there are a lot of different ideas in this article. I don't think our emerging movement is big enough... even for an insurance program. What do I know about starting an insurance firm? Nada.

Although, I do not trust people solely concerned about AI safety ;)

Keep in mind such insurance can happen at pretty much any scale - from richer EAs just providing some support for poorer friends, per Joey's description above (even if the poorer friends are actually quite wealthy and E2G), to organisations supporting their employees, donors supporting their organisations (in the sense of giving them licence to take risks of financial loss that have positive EV), or EA collectives (such as the EA Funds) backing any type of smaller entity.

I'm interpreting what you're saying as someone going without insurance, and instead having an arrangement with a much wealthier individual (friend) to cover them in case of an accident or medical procedure. If so, I believe that's ineffective altruism - even if the benefactor is E2G - and too idealistic.

Now, I assume most university students have their parents pay for their insurance (or get it significantly reduced through a state or university program). And I assume most E2G professionals are working for a company with a health insurance plan.

With that in mind, I think it wouldn't be worth it to start an EA insurance program. There wouldn't be enough people. And I don't believe the wealthier individuals would be inclined to dole out routine medical tests and high-cost surgery to the less wealthy participants just because they claim to be EA.

I am speaking as someone who does not have an EA meetup/club nearby. I assume you're talking as if one does have comfy surroundings and support of nearby EAs (read: close EA friends).

Lastly, if I became a highly paid CEO or whatever, I wouldn't be supporting friends in place of them having an insurance program. To assume that other EAs would is unrealistic. Why do their wealthy lives matter more than someone else's in a different place? Each dollar of benevolence towards such a friend is a dollar not going to help someone at the other end. Money is mutually exclusive.
