
DavidRooke

57 karma · Joined Sep 2014

Posts
1


Comments
22

A lifetime spent learning to become a 9th dan master at Go, perhaps? Building on the back of thousands of years of human knowledge and wisdom? Demolished in hours... I still look at the game and it looks incredibly abstract!

Don't get me wrong, I am really concerned; I just consider the danger much closer than others do, but also more soluble if we look at the right problem and ask the right questions.

"Skeptical about the framework" I do not agree with. Indeed it seems a useful model for how we as humans are. We become expert to varying degrees at a range of tasks or services through training - as we get in a car we turn on our "driving services" module (and sub modules) for example. And then underlying and separately we have our unconscious which drives the majority of our motivations as a "free agent" - our mammalian brain - which drives our socialising and norming actions, and then underneath that our limbic brain which deals with emotions like fear and status which in my experience are the things that "move the money" if they are encouraged.

It does not seem to me that we are particularly "generally intelligent". Put in a completely unfamiliar setting, without all the tools that now prop us up, we would struggle far more than a species already familiar with that environment.

The intelligent-agent approach, to me, takes the debate in the wrong direction and, most concerningly, dramatically understates the near and present danger of utility-maximising services ("this is not superintelligence"), such as the example discussed by Yuval Noah Harari and Tristan Harris here:

https://www.youtube.com/watch?v=v0sWeLZ8PXg

Yes - we already have increasingly powerful utility maximisers, and in many applications they are increasingly dangerous.

Reason 6 describes the AGI as a "species" - services are not a species; agents are a species. Reasons 4 and 5 as written describe the AGI as an agent - surely once the AGI is described as an "it" that is doing something, it is being treated as an independent agent. A service and an agent are fundamentally different in nature, not just different views of the same thing, as the outcome would depend on the objectives of the instructing agent.

Hi Richard, really interesting! However, I think all six of your reasons still treat AGI as an independent agent. What do you think of this reframing by Drexler - https://www.fhi.ox.ac.uk/reframing/ - AGI as a comprehensive set of services? To me this makes the problem much more tractable and aligns better with how we see things actually progressing.

If we avoid this dystopian near future of "superintelligent multi-level marketing", I hope the future will look more like the one suggested by Steven Strogatz in https://www.nytimes.com/2018/12/26/science/chess-artificial-intelligence.html, which would leave the key remaining challenge as creating a mechanism for ensuring value alignment:

"But envisage a day, perhaps in the not too distant future, when AlphaZero has evolved into a more general problem-solving algorithm; call it AlphaInfinity. Like its ancestor, it would have supreme insight: it could come up with beautiful proofs, as elegant as the chess games that AlphaZero played against Stockfish. And each proof would reveal why a theorem was true; AlphaInfinity wouldn’t merely bludgeon you into accepting it with some ugly, difficult argument.

For human mathematicians and scientists, this day would mark the dawn of a new era of insight. But it may not last. As machines become ever faster, and humans stay put with their neurons running at sluggish millisecond time scales, another day will follow when we can no longer keep up. The dawn of human insight may quickly turn to dusk.

Suppose that deeper patterns exist to be discovered — in the ways genes are regulated or cancer progresses; in the orchestration of the immune system; in the dance of subatomic particles. And suppose that these patterns can be predicted, but only by an intelligence far superior to ours. If AlphaInfinity could identify and understand them, it would seem to us like an oracle.

We would sit at its feet and listen intently. We would not understand why the oracle was always right, but we could check its calculations and predictions against experiments and observations, and confirm its revelations. Science, that signal human endeavor, would reduce our role to that of spectators, gaping in wonder and confusion.

Maybe eventually our lack of insight would no longer bother us. After all, AlphaInfinity could cure all our diseases, solve all our scientific problems and make all our other intellectual trains run on time. We did pretty well without much insight for the first 300,000 years or so of our existence as Homo sapiens. And we’ll have no shortage of memory: we will recall with pride the golden era of human insight, this glorious interlude, a few thousand years long, between our uncomprehending past and our incomprehensible future."

Hi Kit - Happy New Year!

Thanks for that - yes, I hope a more digestible summary will be produced. I am not intending to be hostile at all; I am just very worried about the AI issue. I simply see it as a different issue from the one highlighted by the EA community, and much more like the one highlighted in the paper, hence my purpose in raising it.

I think humans are not particularly generally intelligent; rather, they become programmed/conditioned to be relatively good at the tasks necessary to survive in their environment (e.g. a baby chucked into the rainforest will not survive as long as all of the much less intelligent animals that live there). Indeed, my worry is that as a broad group we are surprisingly stupid and manipulable - as a species our driving motivators (fear, status) generally create the narrative in our cognitive consciousness, and our "blind spot" is the belief that we are much smarter than we are. In the US and the UK the political process has become paralysed as seemingly logical statements, apparently addressed to our conscious brain, actually play to our deep subconscious motivators, creating a ridiculous tribalism far removed from any form of logic.

We perhaps "feel" intelligent as we create complex intellectual frameworks that explain things in detail, but this is really a process of "mapping the territory". Its hollowness is shown in eg. Shogi, Chess and Go by Alpha Zero, since despite the many thousands of years of academic study poured into these subjects that has repeatedly mapped the territory this was blown aside by a self improving algorithm working out "what fits". Maps in the real world might be good talking points but are simply nowhere near accurate enough at a human level of intelligibility.

As an investment banker I never had much interest in mapping the territory (despite being logical), but I was interested in "the best way to get from here to there avoiding the obstacles" (I did not care how, as long as it worked). And this is how life in general (outside of academia) is: "how can I profit-maximise doing x without breaking any laws (better still if I can find a clever way around the laws)?". With increasingly powerful self-improving algorithms this ends up in the kind of dystopia shown in this video from Yuval Noah Harari and Tristan Harris - "supercomputers" (superintelligence) pointed at our brains. https://www.youtube.com/watch?v=v0sWeLZ8PXg

In all of this I know it is hard for EAs to properly engage: status is gained in any community (and status is a powerful deep motivator) by largely agreeing with the norms of that community, and I know my views are far from normal here, so status is gained by rejecting what I say. But we share the same deep values - we want the world to be the best place it can be (which is something very much other than making the most money possible for already-rich shareholders) - and I have huge belief in the potential of, and need for, the EA community, so you will forgive me if I keep trying.

It seems unfortunate that nobody can be bothered to read the underlying paper, written by Eric Drexler, a senior Oxford Martin fellow, which completely reframes the AI debate into something far from paperclips and much closer to reality. If a human sat down at Chess, Go and Shogi and, simply by playing the games, became far better than any other human within a couple of weeks, we would all see this as superintelligent. That this achievement is so easily dismissed shows, to me, a complete unwillingness to deal with reality as it is.

How about "rational altruists?" This to me is actually a better descriptor of the head and the heart than effective altruist, as a person could be really effective (on a QALY basis) without using the head at all - Live Aid was essentially emotionally driven, and drove a huge groundswell of support for tackling extreme poverty. The thing that sets effective altruism as currently named apart is the very high level of rational thinking that goes on in deciding what to do. Whether that is more or less effective than other approaches is probably an unhelpful starting point when it comes to outreach, as it can indeed sound arrogant, and ignores the fact that most people are emotionally driven in deciding how to give.

Wow this is a great post - thanks Katja!

My answer to the narrow question is that the idea has only recently emerged because of the recent emergence of social networks, which allow communities with values outside the societal norm to form. As you describe it, you thought deeply on your own about why people behaved the way they did; but on your own, in a society with very particular values effectively reinforced by marketing and group pressure, you would most likely have simply conformed to your community's norms over time (thinking about the meaning of life is not widely encouraged past university). Given that it requires an unusual level of "why"-type thinking to focus on the issue, this kind of intellectual reflection was unlikely to reach critical mass in any one physical location. That the movement is struggling to engage older mainstream groups demonstrates how deep conformance to social norms becomes - it only deepens, I think, the longer you are exposed to it - and makes the movement's emergence online, in a young intellectual community (the first place it could reach critical mass), only to be expected.

Why the societal norm is to give so little, and relatively so ineffectively, despite the minimal "happiness" utility of the incremental dollar once incomes rise above a modest level in Western society (and despite the prevalence of Christian values requiring a particular focus on caring for those in extreme poverty) is a deeper question. In my view it is probably due to the slow, steady evolution of society, with no particular shock to the system to cause a widespread re-evaluation.

Until perhaps the 1960s, rising incomes in developed economies were generally well aligned with rising happiness in a very real sense - cars, washing machines, TVs and central heating all improved people's lives in very measurable ways. At that point it would have been very hard to argue coherently that it was better to give to those you had never seen and knew nothing about (if you could meaningfully give to them at all) than to those you knew and loved close at hand.

The rising incomes that increased happiness created a strong, almost Pavlovian link between efficiency, profit and "a good thing". With the introduction of television, powerful mass marketing became possible, increasingly playing at a subconscious level on our deep desires and motivations (fear, status, happiness) to create a need to consume more, allowing profits to continue to grow, as any capitalist organisation requires if it is to flourish. That US GNP per capita has tripled since 1960 whilst happiness has remained relatively unchanged, according to the World Happiness Report (page 3, http://www.earth.columbia.edu/sitefiles/file/Sachs%20Writing/2012/World%20Happiness%20Report.pdf), indicates how tenuous the link between growth and life improvement has become in developed economies - but it is exactly this gap that has created the "life improvement arbitrage" now accurately observed and acted on by the effective altruism movement.

Now that the concept has been created, and is to some extent obvious, it can be relatively easily understood by a very large group of people who, with persuasion, will wish to invest effectively in creating a better world. In selling the concept to them, though, and given the level of norming to societal benchmarks that has occurred, I believe it will be necessary to use the same powerful marketing, focussed on deep motivations, to shift people's behaviour as was needed to create the life improvement arbitrage in the first place. There is no reason this cannot be carried out with the same rigour of comparison that effective altruism would bring to any other activity - a commercial enterprise is rigorous in assessing the return on its marketing dollar, and there is no reason for the EA movement to be any different. There are many more things EAs can examine now that the concept has been created.
