
I was inspired to write this essay for the Future Fund Worldview Prize. I spent over two months writing it. So please read it before passing judgment.
 

Intro

This worldview combines the teachings of Karl Popper and David Deutsch with my own understanding. I attempt to explain the areas where we need to improve our knowledge before a genuine AGI can be programmed to think, comprehend, create explanations and create knowledge. My take is that the fear surrounding AGI is unnecessary. The common pessimistic themes rest on empiricism, Bayesian induction and prophecy, not on an image of what will happen in reality.

 

I cannot include everything in this article. There are too many places that need deeper explanations. Rather, this will be an introduction to my worldview; the remainder can follow in the comments section. My goal is to convince the Future Fund that this worldview should be seriously contemplated unless it can be rationally refuted. (Downvotes don't count as a scientific refutation unless accompanied by a good explanation.) Is this a popularity contest or one of rationality?

 

Arguments presented in this worldview:

 

1. Empiricism, Bayesianism and Inductivism vs. Fallibilism

2. This worldview contest is a paradox in itself. Are other worldviews allowed to compete?

3. Probabilities are not useful when referring to the nature of reality.

4. AGI won't be as dangerous as one may believe.

5. AGI will be a lot harder to achieve than one may believe.

6. Adopting a Fallibilist worldview over a Bayesian, empirical, inductive worldview will improve your investment objectives.

 

Worldviews

Worldview: A conception of the world.

 

This Future Fund worldview contest is about paradigms, worldviews, how our perspectives of reality differ and where we might be wrong in our perception of reality. Fundamentally, we examine and re-examine the way we think in an effort to converge upon the truth. It's a very important exercise.

 

To begin, I notice a conflict in the game itself. Is the Future Fund Worldview Prize organized using Bayesian inference as its measuring stick, leaving out other options? This is a conflict even before the game begins. The contest invites discussion about existing worldviews, but within the confines of an empirical, Bayesian, inductive worldview. What if the measuring stick is limited in the first place?

 

In this essay I am including two popular worldviews. One is based on empiricism; the other on inherent human (and creative!) fallibility. These are conflicting approaches to understanding how knowledge evolves and, subsequently, how we can expand our understanding of reality.

 

Bayesian Epistemology

Empiricism: The belief that we acquire knowledge through observation, through the direct experience of our senses.

 

Bayesianism: The idea that confirming evidence makes probabilities go up and disconfirming evidence makes them go down. With seemingly higher probabilities, one justifies the belief that one's theories are closer to the truth.
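
For concreteness, here is a minimal sketch of a single Bayesian update, with made-up numbers of my own (purely illustrative, not part of any particular argument): confirming evidence raises the probability assigned to a theory.

```python
# A minimal, illustrative Bayesian update with made-up numbers.
prior = 0.5                   # initial credence that the theory is true
p_evidence_if_true = 0.9      # how likely the evidence is if the theory is true
p_evidence_if_false = 0.3     # how likely the evidence is if the theory is false

# Bayes' rule: P(theory | evidence) = P(evidence | theory) * P(theory) / P(evidence)
p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
posterior = p_evidence_if_true * prior / p_evidence

print(posterior)  # 0.75 -- confirming evidence pushes the probability up
```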

 

Inductivism: The idea that, by confirming theories using past and current data as evidence, one can extrapolate that data into the future.

 

Today these are accepted as the common-sense methodology for how science works. Together they can be regarded as Bayesian Epistemology, or a Bayesian theory of knowledge.

 

The Bayesian approach to learning supports one's own chosen theory. When evidence confirms our theories, we feel confident we are right. With a growing list of confirming evidence, we humans tend to make our theories resistant to change. Such approaches become hierarchical and authoritative.

 

Popperian epistemology/Fallibilism

 

Popperian Epistemology, or Fallibilism, is the opposite of Bayesian Epistemology. Fallibilism is non-hierarchical, non-authoritative, and works by admitting that theories are inherently wrong but evolve to become less wrong. We aim towards truth by continuously correcting errors. Using this epistemology, one creates explanations through conjectures and rational criticism. We make guesses (theories), then eliminate errors from those guesses. We find something to be more true by eliminating untruths.

 

Bayesianism is a belief system

 

Bayesianism purports that if we find enough confirming evidence we can at some point believe we have found "truth". What is the process for filling the gaps in our knowledge about truths under Bayesianism? Do we fill the gaps with evidence? For how long do we collect evidence before it becomes the ultimate truth? What if there is an infinite amount of confirming and disconfirming evidence we don't know about?

 

Bayesianism uses today's evidence, extrapolates to the future, then assigns probabilities: more true, or less true. Meanwhile it disregards the infinite potential of everything unknowable. It is impossible to comprehend all the influences, all the dynamics, involved.

 

Fallibilism is not a belief system

 

All our ideas are conjectural, but our conjectures become less wrong as we remove errors. At any moment our current knowledge can be replaced with better knowledge. Practicing Fallibilism, we have no final answers, only better and better explanations.

 

Is there an end to scientific discovery? Will we eventually know everything there is to know, with no new problems arising? In such a reality, Bayesianism would work. But our reality isn't finite: there will be no end to discovery, no end to problems or progress. Every problem solved introduces new and better problems. Reality includes infinite possibilities and is unpredictable.

 

The rest of this article will challenge our current cultural belief system. It is time to calibrate to a lens that doesn't rely on a belief system but acknowledges the limitations of a Bayesian approach to science. To contemplate the future of AGI, this is imperative.

 

What if current ideas about how scientific/technological progress should work are limited, or just wrong?

 

Identifying Problems

We want to relieve suffering, make life more enjoyable, and understand more about our universe. Rather than beginning with complex mathematical problems, we need to focus first on the physical world: identifying, understanding, and resolving conflicting ideas, our problems.

 

Problems are people problems. Without people, problems aren't recognized. Dinosaurs didn't know they had a problem before they went extinct. Nor can any other entity that we know of understand problems. To be genuine, an AGI must be able to identify problems. For example: does an AI get bored? No. People do. So we invented games as a temporary solution to boredom. Games solved a problem for people. AI wouldn't invent games because it wouldn't know it had a problem, unless we told it it did.

 

Reality has no boundary; it is infinite. Remembering this, people and AGI have the potential to solve an infinite number of problems. AI, however, is limited: it can solve only a finite set of problems. On its own, AI cannot solve problems that do not yet exist.

 

AI vs AGI vs People

AI (Artificial Intelligence): A mindless machine, covering things we can explain and program into computers. It is capable of solving finite problems. (Bayesianism works here.)

 

AGI (Artificial General Intelligence): Capable of infinite problem solving.

To achieve AGI we will need to program the following:  

  • knowledge creating processes
  • emotions
  • creativity
  • free will
  • consciousness.

 

People and AGI will be similar beings in different packages. Each will be a universal explainer, with the potential to comprehend anything in the universe. At that point, AGI will be like another race of people. At this level of emergence it's not a case of man vs. machine, or people vs. AGI; it is about people/AGI and a never-ending stream of problem solving.

 

AI can play chess better than humans, memorize better than we can, even be a better dancer; it can outdo us in many things . . . but not everything. AGI, on the other hand, will be better at everything and will have infinite potential. But to get an AGI, we first have challenging problems to solve. There's a huge gap from AI to AGI.

 

Probabilities

We cannot calculate the probability of future events as if people didn't exist. Recognizing this to be true, is it relevant to assign probabilities to the development of an AGI within a specified timeframe?

No.

People are abstract knowledge creators. We cannot guess what knowledge people will create in the future. There exist infinite possibilities. What will we come up with next? Probabilities and Bayesian inference work within finite sets, like in a game of chess or poker. But probabilities don't apply in the realm of actual reality, which is always expanding toward infinite possibility. Imagine a game of poker with an infinite number of cards. Imagine also new types of cards showing up regularly, perhaps a 7 of mushrooms or a 14 of squares. Probabilities work if nothing truly new is introduced to the game.
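
As a toy illustration of this point (my own example numbers, not from any source): probabilities are well defined while the deck is fixed, and stop being well defined once unknown kinds of cards can keep arriving.

```python
# With a fixed, known deck the probability is well defined:
known_deck_size = 52
aces = 4
print(aces / known_deck_size)        # ~0.077: chance of drawing an ace

# But if genuinely new kinds of cards keep being added ("7 of mushrooms",
# "14 of squares", ...), the sample space itself is unknown in advance:
def p_ace_with_unknown_cards(cards_added_so_far):
    # the denominator depends on additions we cannot predict,
    # so no fixed probability can be assigned ahead of time
    return aces / (known_deck_size + cards_added_so_far)
```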

 

Bayesianism provides good explanations within a finite reality, a reality where it's possible to count the known things involved. But human knowledge can grow without bound. So, realizing humans have the capacity to solve any problem in the real world, probabilities become irrelevant.

 

Imagine trying to explain the metaverse to someone from 1901. Or 1922? Now, keep that in mind for the following…

 

We can’t predict the knowledge we will have in the future.  If we could, we would implement that knowledge today. No one who lived 100 years ago imagined how we would communicate easily around the globe through devices carried in our pockets, or that ordinary people could travel to the other side of the world to spend Christmas with family.  In the same vein, we can’t imagine most of our future tech.

 

Interestingly, people seem to find it easy to predict ways in which our tech could harm us. Less often we hear about how it could help us.  Pessimistic, dystopian Sci-fi movies are more common than optimistic Sci-fi movies – movies that solve problems.

 

Using predictions, we can guess only so far. If we could predict the outcomes of experiments more than one step at a time, why wouldn't we just jump past the first or second step? Each outcome introduces new possibilities that were not possible before.

 

Assigning probabilities for a genuine AGI before a specific time is prophetic. Prophecy is not an accurate gauge of potential future events. Our future inventions will grow from inventions that have gone before. They can happen only after previous ones have been invented. If we could prophesy, we would just skip the middle steps (the creative process) and jump to the final product. We can't do that. We have to create our way toward the future with no idea what we will come up with. Our ideas lead us to unpredictable frontiers and the process is infinitely creative.

 

The Knowledge Clock

To make progress, we have to think not in terms of dates on a calendar or revolutions around the sun but rather about the speed at which knowledge grows. Assigning an arbitrary due date to the likelihood of AGI happening distracts from what is actually possible. The speed of our knowledge growth is our best metric, and this can't be predicted; we can only measure it historically, after it has happened.

If we develop the necessary knowledge regarding the concepts I’ve listed in this worldview, then, sooner or later, we will have AGI.  But the timing is dependent upon the speed of our knowledge growth.

 

Is AGI Possible Or Impossible?

The first question one should ask is: “Is AGI possible?”  Not “Is it probable?”

 

There is no law of physics that makes AGI impossible to create. For example: human consciousness already exists, and it runs on a wetware computer. A human mind (software) runs on a human brain (hardware).

 

The Church–Turing–Deutsch principle states that a universal computing device can simulate every physical process. This is also known as the universality of computation, and it explains that the laws of physics allow all physical objects to be rendered by a program on a general-purpose computer. Therefore we can deduce that, once we have developed the required knowledge, it is possible to program an AGI.

 

To Program An AGI We Need More Knowledge About Knowledge

Knowledge = information with influence.

 

Knowledge is information that has causal power.   Our genes have causal power.  Our ideas have causal power.

 

Knowledge develops through our best guesses. For genes, the best guesses are those that don't die before they replicate. For people, our best guesses are explanations that are hard to vary.

 

The knowledge creation process:
 

  • Problem  —>  Theory  (best guesses) —>  Error Correction (experiment)  —>  New Better Problem  —>  Repeat ( ∞ )…
     

If this problem-solving system were a factory, knowledge would be the resulting product. This is demonstrated in life by consistent improvement.
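
Here is a toy sketch of that loop in code. The two callables stand in for exactly the processes we do not yet know how to program (creative conjecture and rational criticism), so this only illustrates the shape of the loop, not a recipe for creating knowledge.

```python
# Problem -> Theory (guesses) -> Error correction -> New, better problem -> repeat.
def knowledge_loop(problem, conjecture, criticize, steps=10):
    history = []
    for _ in range(steps):                      # in reality this loop never ends
        guesses = conjecture(problem)           # theory: creative best guesses
        survivors = [g for g in guesses if not criticize(problem, g)]  # error correction
        if not survivors:
            continue                            # every guess refuted: guess again
        best = survivors[0]
        history.append((problem, best))
        problem = "new problem raised by: " + str(best)  # solutions raise new problems
    return history
```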

 

Knowledge is not something you can get pre-assembled off a shelf. For every piece of knowledge there has been a building process. Let’s identify the two types of knowledge building that we know of:

 

1.     Genes demonstrate the first knowledge-creating process that we know of. It is a mindless process. Genes develop knowledge by adapting to an environment, using replication, variation and selection (a minimal sketch of this loop follows after this list). Genes embody knowledge. Theirs is a slow knowledge-creating process.

2.     Our minds demonstrate an intentional and much faster process. Our knowledge evolves as we recognize problems and creatively guess, developing ideas to solve each problem (adapting to an environment). We guess, then we criticize our guesses. This process is happening right now in you. I am not uploading knowledge into your brain. You are guessing what I'm writing about, comparing and criticizing those guesses using your own background knowledge. You are trying to understand the meaning of what I'm trying to share with you. Then that idea competes with your current knowledge of the subject matter. There's a battle of ideas in your mind. If you are able to criticize your own ideas as well as the competing idea, the idea containing more errors can be discarded, leaving you with the better idea and thereby expanding your knowledge. Transferring the meaning of ideas (replicating them) is difficult. People are the only entity that we know of who can do this, and we do it imperfectly, by variation and selection.
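
As promised above, here is a minimal sketch of the mindless, gene-style process: replication with variation, then selection by the environment. Everything here is illustrative; the fitness function stands in for the environment.

```python
import random

def evolve(population, fitness, generations=100, mutation=1.0):
    for _ in range(generations):
        # replication with variation: each genome leaves two slightly varied copies
        offspring = [g + random.gauss(0, mutation) for g in population for _ in range(2)]
        # selection: the environment keeps only the best-adapted variants
        offspring.sort(key=fitness, reverse=True)
        population = offspring[:len(population)]
    return population

# "adapting" numbers toward 42 without any mind or creativity involved
best = evolve([0.0] * 20, fitness=lambda g: -abs(g - 42))[0]
print(round(best, 2))   # close to 42 after enough generations
```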

 

Perhaps we have the necessary technology today to program an AGI. What we lack is the necessary knowledge about how to program it.

 

Computers today are not creating new knowledge. Machines like AlphaZero look like they are creating new knowledge. Actually, they are exploiting inefficiencies in a finite environment. We could learn a lot from AI. Machine learning will uncover new inefficiencies, and we can learn from that, but it is people who will find creative ways to utilize the information to evolve greater knowledge. Computers are trapped within the knowledge which people have already created.

 

Creativity

Creativity sets us apart from all other life forms that we know of. It releases us from the confines of our genes. Creativity has enabled people to solve endless streams of problems, opening us to boundless knowledge. It appears that creativity has evolved only once, in us. It must be a very rare and difficult thing to achieve. And it is a necessary element if we wish to achieve AGI. Why should we assume it will happen spontaneously in our machines?

 

Creativity is a transfer process. It requires communication between people. Through communication (people talking with people, conversing, exchanging ideas) knowledge is restructured in our minds. Some ideas replicate; they become part of our culture and are shared as "memes".

 

Creativity isn’t just about mimicking. It is more than trial and error. It isn’t limited to random image generation.  Mimicking, trial and error and random image generation are mechanical processes which AI can do today.

 

For our AI to make the leap to AGI, programmers must understand more clearly the human element of creativity.

 

AI and Machine Learning

There are claims today that machine learning has developed or is showing signs of creativity. This is a misconception. We are after genuine creativity, not the illusion of creativity.

 

Dall-E

A high-level example of how the image generator DALL-E works today (a schematic sketch follows after the list):

 

1.     First, a text prompt is input into a text encoder that is trained to map the prompt to a representation space.

2.     Next, a model called “the prior” maps the text encoding to a corresponding image encoding that captures the semantic information of the prompt contained in the text encoding.

3.     Finally, an image decoder stochastically generates an image that is a visual manifestation of this semantic information, learned from existing human-made images.
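
Here is the schematic sketch referred to above. All three functions are hypothetical stubs standing in for large trained models (they are not a real API); the point is only that each step is a fixed, mechanical mapping learned from existing human-made material.

```python
def encode_text(prompt):       # stub for the trained text encoder
    return f"text-encoding({prompt})"

def prior_model(text_enc):     # stub for "the prior"
    return f"image-encoding({text_enc})"

def decode_image(image_enc):   # stub for the image decoder
    return f"image-sampled-from({image_enc})"

def generate_image(prompt):
    text_encoding = encode_text(prompt)          # 1. prompt -> representation space
    image_encoding = prior_model(text_encoding)  # 2. text encoding -> image encoding
    return decode_image(image_encoding)          # 3. decoder samples an image

print(generate_image("an astronaut riding a horse"))
```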

 

This doesn't come close to the genuine creativity I discussed in the previous section.

 

Alpha Zero

A machine learning system like AlphaZero is given the basic rules of the game. People invented those rules. It then plays a game with finite moves on a finite board, using trial and error to find the most efficient ways to win. No creativity is needed. (In this situation, Bayesian induction does work.)

Now... superimpose that game over actual reality, which is a board with infinite squares. Straight away, infinite new sets of problems arise. New pieces show up repeatedly and the rules for them are unknown. How would machine learning solve these new problems?

 

It can't. Machines don't have the problem-solving capabilities people have. People identify problems and solve them using creative conjectures and refutations. (Once the rules are in place, the algorithm can take over.)
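
To make the "trial and error within human-given rules" point concrete, here is a toy example of my own: choosing a move in the game of Nim purely by random playouts. This is not how AlphaZero actually works (AlphaZero adds neural networks and self-play), but it illustrates searching inside a fixed, finite rule space with no creativity involved.

```python
import random

def legal_moves(stones):             # human-supplied rule: take 1, 2 or 3 stones
    return [m for m in (1, 2, 3) if m <= stones]

def random_playout(stones, my_turn):
    # play random moves to the end; the player who takes the last stone wins
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        my_turn = not my_turn
    return not my_turn               # True if "I" took the last stone

def choose_move(stones, playouts=2000):
    # trial and error: pick the move whose random playouts win most often
    def win_rate(move):
        wins = sum(random_playout(stones - move, my_turn=False) for _ in range(playouts))
        return wins / playouts
    return max(legal_moves(stones), key=win_rate)

print(choose_move(10))               # finds a strong move with no understanding of "why"
```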

 

Lastly, it is people that interpret the results and come up with explanations to make any of this useful.

 

Consciousness

Most of us would acknowledge that we don’t yet understand “consciousness”.  Consciousness seems to be the subjective experience of the mind. It would seem to emerge from physical processes in our brains. Once we have understood consciousness, we can program it. David Deutsch (one of the Godfathers of quantum computing) has a rule of thumb: “If you can’t program it, you haven’t understood it”.  When we understand human consciousness well enough to program it into the software running on our computers, we will have a real AGI.

 

Abstractions.  Why are they important regarding AGI?

Abstractions are real, complex systems that have effects on the physical world. But they are not physical. They emerge from physical entities that are tangible in our universe. Abstractions are powered by the physical. "Mind" is an abstraction. Our brains (physical) carry knowledge (non-physical). The knowledge is encoded in our physical brains, like a program in a computer. But there is another "layer" above the physical, which we call "mind". Another way I've come to understand abstractions: when a system is more than the sum of its parts, the "more" is the abstraction. And abstractions are objectively real.

 

Today's computer programs contain abstractions. Computers are made of atoms, but they contain abstractions which can affect the world. Example: if you are playing chess against a computer and it wins, what beat you? What beat you is the abstract knowledge embodied in that computer program. People put that knowledge there.

Our minds (like computer programs) are abstract, non-physical, but they influence physical entities.

Understanding abstractions is a necessary step to achieving AGI.  We need good explanations about how the abstract layers of mind work if we are to get closer to programming AGI. To create an AGI, we must program abstract knowledge into physical software.

 

Can AGI Evolve Artificially?

For AI to evolve into AGI, to think like humans but faster, we first have to fill the gaps in our understanding of how life emerged. We don't yet know how inorganic material became organic; we don't know how life forms became self-replicating. We need to fill in huge gaps before the process can be understood, then programmed.

 

There is another option for AGI to evolve: We could try to recreate the universe in an artificial simulator. For this we would have to be familiar with all the laws of physics.  We could then recreate our universe in a computer simulation according to those laws.

 

We cannot forget that, for evolution to happen, an entity must be able to solve problems. By solving problems it adapts to its environment. Taking this into account, modelling our own universe seems logical. Even if there is a way to produce life in a simulation, it may or may not be feasible given the amount of physical material we would need for the computations and the time available before the end of the universe.

 

If we keep filling AI with human knowledge, and if we increase speed and memory, will AI become a person spontaneously? No. Such a theory would be similar to the old theory of Lamarckism, which Darwin replaced with a better theory: evolution by natural selection.

 

Let’s fill in the gaps in our knowledge of how human intelligence has come into being before we think it can “just happen”.

 

AGI Progress So Far?

There is no fundamental difference between today's computers and our original computers. Today's computers are faster, have more memory and are less error prone, but they still follow the same philosophy.

Today's AI cannot genuinely pass the Turing test, a test in which the AI tries to fool a human judge into believing it is human. The AI is given questions to test whether it can understand concepts as humans do, but as yet there is no understanding happening.

We can't expect Siri to be a go-to companion any time soon, nor can we expect AlphaZero to become a creative knowledge creator. We can't expect any machine learning program to spontaneously evolve out of its finite realm to join us in our infinite realm. We must first expand the scope of our knowledge.

 

Conclusion

There are benefits to having a real AGI that we can imagine, and more beyond our imagination. Immortality comes to mind. Populating the rest of the universe does as well.

People and AGI together are a mind with an infinite repertoire of problem-solving potential. Once we understand the mind and program an AGI with this potential, it will be, by all definitions, a person. AGI will be able to understand concepts and create new knowledge independently. It could use that knowledge to help discover new, better problems, problems that, when solved, carry expanded potential.

 

AGIs will be universal knowledge creators and explainers, like us. And we will treat them like people.

 

I hope our motivation to program AGI doesn't fade. It would be detrimental to us if we passed laws delaying AGI. Imagine what we might have discovered by now if the Inquisition hadn't constrained science as it did hundreds of years ago.

 

Today's computers don't have ideas, but people do. Computers don't comprehend meaning from words. They can't interpret implications or subtle nuances in tone of voice. People can. It is possible to program an AGI, but "probably" having an AGI before a certain time is prophecy. Only after we have more fully understood creativity and consciousness can we begin the process of programming them. Realizing that we have the ability to solve the challenging problems involved in programming an AGI, we can deal with our unpredictable future.

 

But . . . I may be wrong.

 


* A message to the downvoters

 

This Worldview can be refined, but it needs your interaction. Therefore, the comment section is as important as the main paper. It will also give me a chance to correct errors and explain sections in more detail. This worldview is fallible as are all worldviews. Criticism can make it stronger.  What resists criticism will die.

The point of the Future Fund contest is to introduce the hosts to ideas they may not have otherwise thought through. Downvotes without explanation make it unlikely the Future Fund will notice worldviews "beyond the chess board", which defeats the purpose. Downvotes could mean I'm wrong, but they could also mean I'm writing something you just don't agree with, or that you are part of an authoritarian group which doesn't allow conflicting worldviews (in other words, doesn't allow error correction).

So if you downvote, bring an explanation. This will help distinguish rational criticism from the irrational.

 



Comments

Why is this post being downvoted while no one comments to explain why they disagree and/or feel it's a bad post? This is why I dislike the voting system: it takes away from people having to actually engage with others they disagree with (which is how we will make progress)!! It would also be helpful for someone like me (who is not in the AI space) to understand what it is people are disagreeing with about this post. I understand some people don't feel that certain posts are worth engaging with (which is fine), but at least don't downvote then?

I understand some people don’t feel that certain posts are worth engaging with (which is fine) but at least don’t downvote then?

I disagree, I think it's perfectly fine for people to downvote posts without commenting. The key function of the karma system is to control how many people see a given piece of content, so I think up-/downvotes should reflect "should this content be seen by more people here?" If I think a post clearly isn't worth reading (and in particular not engaging with), then IMO it makes complete sense to downvote so that fewer other people spend time on it. In contrast, if I disagree with a post but think it's well-argued and worth engaging with I would not downvote, and would engage in the comments instead.

When I see a net negative karma post, one of the first things I do is check the comments to see why people are downvoting it. Comments are much better than votes as a signal of the usefulness of a post. Note also that:

  1. I might disagree with the comment, giving me evidence to ignore the downvotes and read the post.
  2. I'm especially interested in reading worthwhile posts with downvotes, because they might contain counterarguments to trendy ideas that people endorse without sufficient scrutiny.
  3. Without comments, downvotes are anonymous. For all I know, the downvoters might have acted after reading a few sentences. Or they might be angry at the poster for personal reasons unrelated to the post. Or they might hold a lot of beliefs that I think are incorrect.
  4. Not sure how the EA Forum algorithm works, but it might be the case that fewer people see a post with downvotes, leading to a feedback loop that can bury a good idea before anyone credible reads it.
  5. In the best case, a comment summarizes the main ideas of the post. Even if the main ideas are clearly wrong, I'd rather hear about them so I can go "ah right, another argument of that form, those tend to be flawed" or "wait a minute, why is that flawed again? Let me think about it."
  6. At the very least, a comment tells me why the post got downvotes. Without any comments, I have to either (a) blindly trust the downvoters or (b) read some of the (possibly low quality) post.
  7. Comments can save time for everyone else. See (5) and (6).
  8. Comments are easy! I don't think anyone should downvote without having some reason for downvoting. If you have a reason for downvoting, you can probably spell this reason out with a short comment. This should take a minute or less.

All that being said, I can't remember any downvoted posts that I enjoyed reading. However, I rarely read downvoted posts because (a) I don't see many of them and (b) they often have comments.

Oh, I agree that comment + downvote is more useful for others than only downvote, my main claim was that only downvote is more useful than nothing. So I don't want there to be a norm that you need to comment when downvoting, if that leads to fewer people voting (which I think would be likely). See Well-Kept Gardens Die By Pacifism for some background on why I think that would be really bad.

Tbc, I don't want to discourage commenting to explain votes, I just think the decision of whether that is worth your time should be up to you.

I hope I'm wrong, but I suspect people are downvoting this post for not being highly familiar with insider EA jargon and arguments around AI, and pattern matching to 101 level objections like "machines can't have souls". 

I do disagree with a lot of the arguments made in the post. For example, I think machine learning is fundamentally different to regular programming, in that it's a hill-climbing trial and error machine, not just a set of commands. 

However I think a large part of the post is actually correct. Major conceptual breakthroughs will be required to turn the current AI tech into anything resembling AGI, and it's very hard to know when or if those breakthroughs will occur. That last sentence was basically a paraphrase of Stuart Russell, btw, so it's not just AI-risk skeptics that are saying it. It is entirely possible that we get stuck again, and it'll take a greater understanding of the human brain to get out of it. 

Thanks, that was sort of my sense. I appreciate that you are both sharing what you agree and disagree with too. It’s refreshing!

I'd say a crux is I believe the reason AI is at all useful in real tasks is something like 50% compute, 45% data, and 5% conceptual. In other words, compute and data were the big bottlenecks to usefulness, and I give an 80% chance that in 10 years' time they will still be bottlenecks.

I'd say the only conceptual idea was the idea of a neural network at all, and everything since is just scaling at work.

Yet I think there are simpler reasons why this post is downvoted; there are 3 of them:

  1. It's way longer than necessary.

  2. Even compared to the unfortunately speculative evidence base of the average AGI post, this is one of the worst. It basically merely asserts that AGI won't come, and makes statements, but 0 evidence is there.

  3. Some portions of his argument don't really relate to his main thesis. This is especially so for the Bayesian section, where that section is a derail from his main point.

Overall, I'd strongly downvote the post for these reasons.

Please see my responses below each quoted point.

 

  1. It's way longer than necessary.

I understand, I struggled to find a way to make it shorter. I actually thought it needed to be longer, to make each section more explicit. I thought that if I could explain in detail, a lot more of this worldview would make sense to more people. If you could give me an example of how to condense a section and keep the message intact, please let me know. It's a challenge that I'm working on.
 

2. Even compared to the unfortunately speculative evidence base of the average AGI post, this is one of the worst. It basically merely asserts that AGI won't come, and makes statements, but 0 evidence is there.

On the contrary, I believe AGI will come. I wrote this in my essay. AGI is possible. But I don't think it will come spontaneously. We will need the required knowledge to program it.


3. Some portions of his argument don't really relate to his main thesis. This is especially so for the Bayesian section, where that section is a derail from his main point.

I can see how I leaned very heavily on the Bayesian section (my wife had the same critique) but I felt it important to stress the differing approaches to scientific understandings, between Bayesianism and Fallibilism. I’m under the impression many people don’t know the differences.

and pattern matching to 101 level objections like "machines can't have souls".

I don't think the readers are pattern matching to "machines can't have souls", but some of the readers probably pattern match to the "humans need to figure out free will and consciousness before they can build AGI" claim. Imo they would not be completely wrong to perform this pattern matching if they give the post a brief skim.

I find the article odd in that it seems to be going on and on about how it's impossible to predict the date when people will invent AGI, yet the article title is "AGI isn't close", which is, umm, a prediction about when people will invent AGI, right?

If the article had said "technological forecasting is extremely hard, therefore we should just say we don't know when we'll get AGI, and we should make contingency-plans for AGI arriving tomorrow or in 10 years or in 100 years or 1000 etc.", I would have been somewhat more sympathetic.

(Although I still think numerical forecasts are a valuable way to communicate beliefs even in fraught domains where we have very little to go on -- I strongly recommend the book "Superforecasting".)

(Relatedly, the title of this post uses the word "close" without defining it, I think. Is 500 years "close"? 50 years? 5 years? If you're absolutely confident that "AGI isn't close" as in we won't have AGI in the next 30 years (or whatever), which part of the article explains why you believe that 30 years (or whatever) is insufficient?)

As written, the article actually strikes me as doing the crazy thing where people sometimes say "we don't know 100% for sure that we'll definitely have AGI in the next 30 years, therefore we should act as if we know 100% for sure that we definitely won't have AGI in the next 30 years". If that's not your argument, good.

Nice catch. Yes the title could use some refining, but it does catch more attention.

The point that I am trying to make in the essay is: AGI is possible, but putting a date on when we will have an AGI is just fooling ourselves.
 

Thanks for taking the time to comment.

AGI is possible but putting a date on when we will have an AGI is just fooling ourselves. 

So if someone says to you “I’m absolutely sure that there will NOT be AGI before 2035”, you would disagree, and respond that they’re being unreasonable and overconfident, correct?

My phrasing below is more blunt and rude than I endorse, sorry. I’m writing quickly on my phone. I strong downvoted this post after reading the first 25% of it. Here are some reasons:

“Bayesianism purports that if we find enough confirming evidence we can at some point believe to have found “truth”.” Seems like a mischaracterization, given that sufficient new evidence should be able to change a Bayesian’s mind (tho I don’t know much about the topic).

“We cannot guess what knowledge people will create into the future” This is literally false, we can guess at this and we can have a significant degree of accuracy. E.g. I predict that there will be a winner in the 2020 US presidential election, even though I don’t know who it will be. I can guess that there will be computer chips which utilize energy more efficiently than the current state of the art, even though I do not know what such chips will look like (heck I don’t understand current chips).

“We can’t predict the knowledge we will have in the future. If we could, we would implement that knowledge today” Still obviously false. Engineers often know approximately what the final product will look like without figuring out all the details along the way.

“To achieve AGI we will need to program the following:

knowledge creating processes emotions creativity free will consciousness” This is a strong claim which is not obviously true and which you do not defend. I think it is false, as do many readers. I don’t know how to define free will, but it doesn’t seem necessary as you can get the important behavior from just following complex decision processes. Consciousness, likewise, seems hard to define but not necessary for any particular behavior (besides maybe complex introspection which you could define as part of consciousness).

“Reality has no boundary, it is infinite. Remembering this, people and AGI have the potential to solve an infinite number of problems” This doesn’t make much sense to me. There is no rule that says people can solve an infinite number of problems. Again, the claim is not obviously true but is undefended.

Maybe you won't care about my disagreements given that I didn't finish reading. I had a hard time parsing the arguments (I'm confused about the distinction between Bayesian reasoning and fallibilism, and it doesn't line up with my prior understanding of Bayesianism), and many of the claims I could understand seem false or at least debatable and you assume they're true.

This post is quite long and doesn’t feature a summary, making it difficult to critique without significant time investment.

Please see my replies below each quoted point.
 

My phrasing below is more blunt and rude than I endorse, sorry. I’m writing quickly on my phone. I strong downvoted this post after reading the first 25% of it. Here are some reasons:

“Bayesianism purports that if we find enough confirming evidence we can at some point believe to have found “truth”.” Seems like a mischaracterization, given that sufficient new evidence should be able to change a Bayesian’s mind (tho I don’t know much about the topic).

 

Yes, that is how Bayesianism works. A Bayesian will change their mind based on either the confirming or disconfirming evidence.

 

“We cannot guess what knowledge people will create into the future” This is literally false, we can guess at this and we can have a significant degree of accuracy. E.g. I predict that there will be a winner in the 2020 US presidential election, even though I don’t know who it will be.

 

I agree with you, but you can’t predict this for things that will happen 100 years from now. We may find better ways to govern by then.

 

I can guess that there will be computer chips which utilize energy more efficiently than the current state of the art, even though I do not know what such chips will look like (heck I don’t understand current chips).
 

Yes, this is the case if progress continues. But it isn't inevitable. There are groups attempting to create a society that inhibits progress. If our growth culture changes, our "chip"-producing progress could stop. Then any prediction about more efficient chips would be an error.

 

“We can’t predict the knowledge we will have in the future. If we could, we would implement that knowledge today” Still obviously false. Engineers often know approximately what the final product will look like without figuring out all the details along the way.

 

Yes, that works incrementally. But what the engineers can't predict is what their next, next renditions of their product will be. And those subsequent steps are what I'm referring to in my essay.

 

“To achieve AGI we will need to program the following:

knowledge creating processes emotions creativity free will consciousness” This is a strong claim which is not obviously true and which you do not defend. I think it is false, as do many readers. I don’t know how to define free will, but it doesn’t seem necessary as you can get the important behavior from just following complex decision processes. Consciousness, likewise, seems hard to define but not necessary for any particular behavior (besides maybe complex introspection which you could define as part of consciousness).

 

This is a large topic and requires much explanation. But in short, what makes a person are those things listed. And an AGI will, by definition, be a person.

 

“Reality has no boundary, it is infinite. Remembering this, people and AGI have the potential to solve an infinite number of problems” This doesn’t make much sense to me. There is no rule that says people can solve an infinite number of problems. Again, the claim is not obviously true but is undefended.

 

I agree, the claims in my essay depend on progress and the universe being infinite.

If you are truly interested in going deep on infinity, have a look at the book “The Beginning of Infinity”.
 

Maybe you won't care about my disagreements given that I didn't finish reading. I had a hard time parsing the arguments (I'm confused about the distinction between Bayesian reasoning and fallibilism, and it doesn't line up with my prior understanding of Bayesianism), and many of the claims I could understand seem false or at least debatable and you assume they're true.

 

Yes, each of my claims are debatable and contain errors. They are fallible as are all our ideas. And I appreciate you stress testing them, you brought up many important points.
 

This post is quite long and doesn’t feature a summary, making it difficult to critique without significant time investment.

 

This is a challenge; most of these theories need much more detailed explanation, not less. I wish I could find a way to summarize and keep the knowledge intact.


 

Thank you for taking the time to make your  comments.

Hey, I just want to say that this post is too long for me, I would be interested to read it if it was short or summarized or something like that (said in a friendly way, allowing others to agree/disagree so that you'll get a broader picture of what people think about this opinion)

I agree, I wish I could have found a way to make it shorter and keep the knowledge intact. That’s something I will continue to work on.

I appreciate the comment.

Probabilities or Bayesian inference work within finite sets, like in a game of chess or poker.

Mathematically, probabilities can also be used for infinite sets. For example, there is the uniform probability distribution over the real numbers between 0 and 1 (of which there are infinitely many).

To achieve AGI we will need to program the following:

  • knowledge creating processes
  • emotions
  • creativity
  • free will
  • consciousness.

With the exception of knowledge creating processes, these are just wrong in my opinion. As a counterexample, AIXI can be formulated without any knowledge of emotions, creativity, free will, consciousness. Approximations to AIXI can be programmed without any knowledge of these. And AIXI is (at least) AGI level (of course, AIXI is not real, non-approximated AIXI is impossible to build, and it is doubtful that approximating AIXI will be useful for building AGI; this is just an example that vast intelligence is possible without explicit programming of emotions, etc.).

Example: If you are playing chess against a computer and it wins, what beat you? What beat you is the abstract knowledge which was embodied in that computer program. People put that knowledge there.

This is just wrong in the case of AlphaZero, where the knowledge was learned by training on self-played chess games, and not explicitly programmed in.

Mathematical problems (infinities) don't need to reference the physical world. Math claims certainties; science doesn't. Science must reference the physical world.

AI like AlphaZero will reveal inefficiencies and show us better ways to do many things. But it's people who will find creative ways to utilize the information to create even better knowledge. AlphaZero did not create knowledge; rather, it uncovered new efficiencies, and people can learn from that, but it takes a human to use what was uncovered to create new knowledge.

Great questions, I’m still putting some more thought into these. 
Thanks

A quick comment after reading about 50% of this article: it seems to focus on statements instead of arguments, e.g. "we cannot calculate the probability of future events as if people didn't exist." or "We are after genuine creativity, not the illusion of creativity." At the same time, it doesn't really engage with the literature on AI risk or even explain why the definitions adopted, e.g. the knowledge definition, are the most appropriate ones. There might be some interesting thoughts in there, but it'd be better for the author to develop them in shorter articles and make the arguments more clear.

I agree, my essay was tackling a lot. A series of short articles would be a better approach. But this was for the Future Fund Worldview Prize and they required one essay introducing a worldview. I may choose your approach in the future.
 

Regards

I also wrote a commentary which was downvoted without comments. Later it turned out that the reason was that many people didn't like my title, which was negative on AI and, I admit, a little flamboyant. I changed the title and some people withdrew their downvotes. 

This makes the voting system rather dubious in my opinion.

I disagree, primarily because I think clickbait is actually a big problem, because attention matters.

So the voting system punishes clickbaity titles, which is exactly a good use of the voting system.

"The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" is one of the classic papers of Cognitive Psychology. Is it clickbait?
What about 

The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology.

Fodor’s Guide to Mental Representation: The Intelligent Auntie’s Vade-Mecum.

What Darwin Got Wrong.

Tom Swift and His Procedural Grandmother.

 

Clever titles aren't always clickbait. 

To achieve AGI we will need to program the following:  

  • knowledge creating processes
  • emotions
  • creativity
  • free will
  • consciousness.

I suspect a large part of the crux is the definition of AGI itself. I don't know many people who think that an agent / system must fulfill all of the above criteria to qualify as 'AGI'. I personally use the term AGI to refer to systems that have at least human-level capabilities at all tasks that a human is capable of performing, regardless of whether the system possesses other properties like consciousness and free will.

On a separate matter, I think it might be a good idea if there is a dual voting system for posts, just like comments, where people can upvote/downvote and also agree/disagree vote. This is a post that I would upvote but strong disagree on. In the meantime I gave it an upvote anyway, since I like posts that at least attempt to constructively challenge prevailing worldviews, and also to balance out all the downvotes. 
