
Since the FTX failure deeply affected the EA community, there is a high chance that EA will add more and more regulation and become quite a bureaucracy over time. Implementing new rules and oversight is the usual go-to way of solving problems (as in finance, medicine, and aviation). But established regulation is expensive to manage, very hard to change, and greatly slows down innovation. I am in favor of some regulation, but since this is only the beginning, maybe it would be wise not to entangle ourselves in it too quickly?

Could more effective measures be found instead of ever more bureaucracy? For example, could normalizing whistleblowing be an answer? In particular, as a thought experiment, I propose extreme whistleblowing of tiny wrongdoings. Done right, it could reveal existing issues or prevent new shady behavior from slowly emerging in the future.

How could a whistleblowing system work?

  • It may be a tool or process.
  • It would be well advertised and frequently used by everyone to report tiny wrongdoings of community members.
  • Tiny reports would accumulate to create a clearer picture of an individual's consistent behavior.
  • Reports could be highly simplified to encourage people to use the system. For example, in many cases interactions could be rated with basic categories or feelings (rude, intolerant, hateful, risky...).
  • Reports would not be anonymous in order to be verifiable and accurately countable.
  • Reports would be accessible only to the most trusted organizations, such as CEA, which would also need to become more trusted. For example, it might have to strengthen its data protection considerably, which I would guess is needed anyway (as for all organizations).
  • Individuals should have the right to receive all anonymized data gathered about them, to give them the option of peace of mind.
  • Reports would have an automatic expiration date (scheduled removal after some years), giving individuals room to change their behavior.
  • As usual, it would have to be decided what counts as a non-issue, so the system would not hamper the expression of ideas or create other side effects, which I discuss below, after a rough sketch of what such a report record might look like.
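To make the above more concrete, here is a minimal sketch (in Python) of what a report record and its lifecycle could look like under the assumptions in the list: simple categories, a named reporter, anonymized self-access, and automatic expiration. All names, categories, and the retention period here are hypothetical illustrations, not a proposed implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical, simplified categories to keep reporting low-effort.
CATEGORIES = {"rude", "intolerant", "hateful", "risky"}

RETENTION_YEARS = 3  # assumption: reports are removed after a few years


@dataclass
class Report:
    reporter: str       # not anonymous, so reports are verifiable and countable
    subject: str        # the community member the report is about
    category: str       # one of CATEGORIES
    note: str           # optional short free-text context
    created: datetime

    def expired(self, now: datetime) -> bool:
        """Reports are scheduled for removal so individuals can change their behavior."""
        return now >= self.created + timedelta(days=365 * RETENTION_YEARS)

    def anonymized_view(self) -> dict:
        """What the subject could receive about themselves: no reporter identity."""
        return {"category": self.category, "created": self.created.isoformat()}


def active_reports(reports: list[Report], subject: str, now: datetime) -> list[dict]:
    """Anonymized, non-expired reports about one person, e.g. for self-access."""
    return [r.anonymized_view()
            for r in reports
            if r.subject == subject and not r.expired(now)]


if __name__ == "__main__":
    now = datetime(2023, 1, 1)
    reports = [
        Report("alice", "bob", "rude", "interrupted repeatedly", datetime(2022, 6, 1)),
        Report("carol", "bob", "risky", "ignored a safety concern", datetime(2019, 1, 1)),
    ]
    # Only the 2022 report is returned; the 2019 one has already expired.
    print(active_reports(reports, "bob", now))
```

Even in this toy form, the design choices mirror the bullets above: the reporter is recorded (so reports can be verified and counted), but the subject only ever sees an anonymized view, and old reports drop out automatically.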

Benefits:

  • The system would deter people from poor actions. And if someone is unable to abstain from shady behavior, they may feel deterred from being part of the community at all.
  • People who create issues would be spotted faster and their activities stopped before they cause significant negative impact. Earlier reporting may prevent bigger things from escalating.
  • If it works, this might prevent the establishment of other, less effective (more resource-intensive) measures.
  • It would help in examining applications for roles, events, and grants.

Counterarguments:

  • People's behavior is very complex, so tiny mishaps may not be representative of a person's character. But if the evaluation instructions are well known, agreed upon by the community, and sensitive to nuance, we could expect higher evaluation quality.
  • Reporting is often frowned upon in society (though culture varies across countries), so it would be unpleasant to report other people, especially about tiny matters. But if it becomes common knowledge that reporting is good for the community, the culture might change?
  • If tiny wrongdoings are common in the community, then this idea would face a lot of resistance. On the other hand, the more resistance there is, the more such a system might be needed. At the end of the day, the idea is not to punish but to bring issues to light. If issues are known, they can be fixed. Fixing is the end goal.
  • Tiny wrongdoings are impossible to prevent entirely, and it is sometimes hard even to agree on what they are. So the goal is not to pursue tiny things, but to gather enough clues to assemble a larger picture, if there is anything larger to be assembled.
  • EA already has similar processes, but they could be improved as the number of actors in the community grows.
  • I am unsure whether it would create an environment of greater trust (desirable) or of fear (undesirable). Maybe it is a question of how far and how well this is implemented.
  • What other reasons might there be for this not to work?

For people who enjoy movies, The Whistleblower (2010) is a fitting example, depicting very disturbing corruption on a massive scale in the United Nations mission in Bosnia, where almost everyone turns a blind eye, either because it does not fit their or their organization's interests, or because the corruption slowly grew to levels too hard to admit or manage (the movie is based on real events).

Comments

I don't see how, if this system had been popularised five years ago, this would have actually prevented the recent problems. At best, we might have gotten a few reports of slightly alarming behaviour. Maybe one or two people would have thought "Hmm, maybe we should think about that", and then everyone would have been blindsided just as hard as we actually were.

Also...have you ever actually been in a system that operated like this? Let's go over a story of how this might go.

You're a socially anxious 20-year-old who's gone to an EA meeting or two. You're nervous, you want people to like you, but things are mostly going well. Maybe you're a bit awkward, but who's not? You hear about this EA reporting thing, and being a decent and conscientious person, you ask to receive all anonymized data about you, so you can see if there are any problems.

Turns out, there is! It's only a vague report - after all, we wanted it to be simplified, so people can use the system. Someone reported you under the category "intolerant". Why? What did you say? Did you say something offensive? Did someone overhear half a conversation? You have no idea what you did, who reported you, or how you can improve. Nobody's told you that it's not a big deal to get one or two reports, and besides, you're an anxious person at the best of times, you'd never believe them anyway. Given this problem, what should you do? Well, you have no idea what behaviour of yours caused the report, so you don't know. Your only solution is to be guarded at all times and very carefully watch what you say. This does not make it easy to enjoy yourself and make friends, and you always feel somewhat out of place. Eventually, you make excuses to yourself and just stop showing up for meetings.

This is definitely a made up story, but almost exactly this happened to me in my first year at my first job - I had an anonymous, non-specific complaint given to me by my manager, the only one I've ever received. I asked what I was supposed to do about that, and my manager had no good answer. Somewhat annoyed, I said maybe the best solution would be to just not make friends at work, and my manager actually agreed with me. Needless to say, I had much more cordial relationships with most colleagues after that. I was also older than 20 and I didn't actually care about being liked at my job much. I wanted to do well because it was my first job, but they were never my people. Eventually I grew up, got over it, and realised shit happens, but that takes time. I can imagine that if I were younger and amongst people whose ideology I admired, it would have stung far worse.

And...let's remember the first paragraph here. Why would such a system have actually worked? SBF gets some complaints about being rude or demanding in his job, and what? EA stops taking his money and refuses to take grants from the FTX Future Fund? I don't think such a system would ever have led to the kind of actions that would have discovered this ahead of time or significantly mitigated its effects on us.

If we're going to propose a system that encourages people to worry about any minor interaction being recorded as a black mark on them for several years within the community, imposing high costs on the type of socially anxious people who are highly unlikely to be predatory in the first place...well, let's at least make sure such a system solves the problem.

Nice points as always.

One of the main issues with FTX was taking extremely high risks, which was unacceptable long ago. If reporting had been the norm, it seems likely that someone who saw the decision-making process (and the decisions made) would have made private disclosures to EA management (reported many times, for many decisions). Would this information have prevented EA management from still taking a lot of money, or led them to take it seriously? I lean towards 'yes', because internal information is more valuable than public rumors. Action will surely be taken from this point onwards, after having been burned by this already. Your point about them being reported as "rude" is not the best example for this situation :)

And the personal stories you shared are important; I will take time to think more about such situations.

Strong agreement from me that EA orgs would improve if we had some whistleblowing mechanism, preferably also for things smaller than fraud, like treating candidates badly during an interview process or advertising a job in a misleading way.

I have been thinking about something similar, but had come to a few conclusions different from yours. Now I'm wondering if we just need multiple complementary approaches:

  • I was thinking less about deliberate bad-faith acts than about people being bad at their jobs*
  • I would want something that isn't only visible to the 'most trusted organisations', since a) that assumes we've partially solved the problem we're addressing, b) there are ongoing questions about the level of their responsibility for the current hurricane, and c) the more people who see it, the more chances there are of spotting patterns
  • That means it would probably need to be open to everyone
  • That means it would have to be anonymous by default, though individuals could obviously identify themselves if they chose
  • That means it would need to apply some fairly strict epistemic standards, defined in advance, so it didn't just become a cesspool of slander
  • It would generally mean more of an org-level focus rather than targeting individuals.
  • My instinct is a policy of 'it's ok to name top managers of EA orgs (including retrospectively), but anyone further down the rung should be discussed anonymously'. It might make sense to specify the department of the org, so that the people running it take some responsibility

* Outside FTX I suspect this is more responsible for any culpability EA collectively has than any specific bad faith.

I think the forum is a good place for what you described.

The forum is a generally bad place for pooling information in an easily retrievable way that gives equal emphasis to all of it, which is what we need for such information to be useful.

Sorry for being brief in my last answer. You made good, reasonable points, which I don't have much to add to.

I stick to my last answer that the forum is a good place for that, because it is very hard, and often close to impossible, to create a new service whose functionality greatly overlaps with an existing one. Think about Google+, which tried to compete with Facebook, and what happened: people use the established service and forget to use the similar one.

The forum is not perfect for it, yes, but for practical reasons I see it as the way to implement the epistemic standards and other things described in your comment. The forum is an established, central place for everything public like this.

Reports would be accessible only to the most trusted organizations, such as CEA

Are you suggesting reports should be non-public?

I am suggesting that tiny matters be non-public, to achieve the goals described in the article. Discussions and disclosures can be public as well, as they always are on the forum.

Which route is better? Or which one solves all the problems? Neither solves every layer, so multiple good solutions are needed.
