
Paolo Bova

Economics Researcher @ Modeling Cooperation
58 karma · Pursuing a doctoral degree (e.g. PhD)

Bio

I'm an early career independent researcher who graduated in Economics from the University of Cambridge in 2019. I'm part of Modeling Cooperation, a team of independent researchers who build computational models and software tools for understanding the consequences of competition in transformative AI. We've previously investigated the consequences of a Windfall Clause in a model of AI Existential Safety (under review, see preprint on arXiv at: https://arxiv.org/abs/2108.09404). My current work focuses on building a model to explore policies that promote more resources for AI Safety research.

 

In October I'm starting a PhD at Teesside University on “Understanding dynamics of AI Safety development through behavioural and network modelling”.

Comments (6)

Thanks for pushing the fix for Windows. The share buttons work on my device now.

Thanks for sharing this critique, Cullen.

I was curious about who would be the firm's opponent in this scenario, i.e. the actor trying to legally enforce the Windfall Clause.

In a world where a Windfall of this order of magnitude is possible, I would anticipate a number of additional actors of somewhat comparable magnitude. I'd also expect states to have more wealth (even if the AI company didn't pay tax, an AI advanced enough to generate Windfall profits is likely to grow the economy dramatically). If this were true, I might expect there to be incentives (or the possibility of providing incentives) for sufficiently wealthy states or other actors to use their resources to keep the legal offence-defence ratio more manageable.

That being said, I'm very uncertain about the above. There is certainly precedent for companies becoming dramatically richer than some states. Moreover, states benefiting considerably from transformative AI may not necessarily see defending a Windfall Clause as a priority. Nevertheless, I do think there's merit in thinking carefully about what kind of actors might exist in a world where the Windfall Clause looks like it will soon trigger.

Great to see the Predict feature. I might have missed this when you first added it, but I've seen it now. It looks great and the tool is easy to use! I also like the additional changes you've made to make the site more polished. A friend and I had some issues when clicking the 'share' button, which I'll post as an issue on GitHub later.

I'm very glad to see a paper on this topic. This paper is precisely what the field of AI Ethics has been missing!

Congratulations on the first publication, Fai!

A few highlights from the paper:

"It is significant that philosophers who disagree strongly with the view that animals have rights or are entitled to equal consideration of interests nevertheless accept that factory farming is indefensible."

  • In general, the piece appears to do a great job of preempting arguments against animal ethics. Here's hoping a lot of people see this!

 

"Companies that contribute to making the factory farming industry more resilient and better able to resist replacement by less cruel and more sustainable alternatives are acting unethically."

  • This makes a very clear statement.

 

"So, instead of self-driving cars creating a new ethical problem with regard to hitting animals, we will have, when self-driving cars become common, a potential solution to an old ethical problem, and with the new solution, new responsibilities."

  • This section on self-driving cars is brilliantly practical. Hopefully AI ethics scholars take note, as this seems like a tractable and high-profile case study.

 

"While we appreciate Delphi’s developers’ effort ... we are yet to see any efort to make it less speciesist. Until that happens, we agree with Delphi’s developers that Delphi’s output, or outputs from any similar models, should “not be used for advice for humans,” nor should it be used as a model to build ethics in AI."

  • I agree with this point and I suspect it applies more widely to research advancing AI capabilities.

Fantastic summary, Nicholas, Andrew, and Robert. I'm looking forward to reading the paper.

A few quick thoughts on the summary:

  1. It's reassuring to hear that information hazards are unlikely for lower values of the decisiveness parameter. One relevant follow-up question is how AI developers might form an opinion on what value the decisiveness parameter takes. Is this something we can hope to influence?
  2. It's not quite as reassuring to hear that framing AI Safety as a group effort might discourage safety investments due to moral hazard. I do find your proposal to share safety knowledge with the leader to be promising. We might also want policymakers to have some way to ensure that those sharing this safety knowledge were well compensated. Doing so might give a preemptive motive for companies to invest in safety, as they might be able to sell it to the leader if they fall behind in the race.
  3. I really like that you caution against updating on the basis of a model alone. It encourages me to think about how we might empirically test these claims concerning moral hazard and decisiveness.

Beautifully made! I love the visuals and my first impressions are that it communicates x-risk in a more hopeful way. The app looks great on mobile too.

Some quick thoughts:

- I anticipated that clicking on a node would either give me a tooltip explaining what that particular node represents, or take me to another page/section of the site that explains these scenarios in more detail.
- I initially found it strange that all of the green nodes appear to link to the same prediction about population decline. I vaguely understood that this was a source of evidence for the number of green nodes, but the connection is not very clear. I think the app might benefit from a short explanation of why a user might want to click on these nodes. It might also help if hovering over one node highlighted all nodes which send you to the same place.
- I feel that the text on the graph is sufficient for me to understand the different clusters in the graph. Yet, I wonder if it might look better to use icons to represent these different clusters, and have the longer text appear on hover instead. Of course, I'd keep it as it is if user testing suggested that this change increased confusion.
- I will cast a vote for being able to input my own data; it would also be fun to share the resulting graphs.
- I don't think I have any ideas for a better title. I do feel that another title should aim to be of a similar length.
- A few ideas for promoting the app to other EAs: it might be nice to give a talk about the web app, or for someone whose work is closely related to predictions for x-risk to show it off in a talk. Also, perhaps you could reach out to one of the university EA groups to see if they'd be interested in having a visual like this to show in some of their introductory talks.

Lastly, I'd like to congratulate you on launching the site. I'm sure you've put in a lot of work to get it to this point, and as a result it looks fantastic!