
A Conversation with Doug Lenat – Gigaom

About this Episode

Episode 89 of Voices in AI features Byron speaking with Cycorp CEO Douglas Lenat on creating AI and the very nature of intelligence.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. I couldn't be more excited today. My guest is Douglas Lenat. He is the CEO of Cycorp of Austin, Texas, where GigaOm is based, and he's been a prominent researcher in AI for a long time. He was awarded the biennial IJCAI Computers and Thought Award in 1976. He created the machine learning program AM. He worked on (symbolic, not statistical) machine learning with his AM and Eurisko programs, knowledge representation, cognitive economy, blackboard systems, and what he dubbed in 1984 "ontological engineering."

He's worked on military simulations and numerous government intelligence projects, as well as with scientific organizations. In 1980 he published a critique of conventional random-mutation Darwinism. He authored a series of articles in the Journal of Artificial Intelligence exploring the nature of heuristic rules. But that's not all: he was one of the original Fellows of the AAAI. And he's the only person to have served on the scientific advisory boards of both Apple and Microsoft. He's a Fellow of the AAAI and the Cognitive Science Society, and one of the original founders of TTI/Vanguard in 1991. And on and on and on… and he was named one of the WIRED 25. Welcome to the show!

Douglas Lenat: Thanks very much Byron, my pleasure.

I have been so looking forward to our chat, and I would simply love, I mean I always start off asking what artificial intelligence is and what intelligence is. And I would just like to sort of jump straight into it with you and ask you to explain, to bring my listeners up to speed with what you're trying to do with the question of common sense and artificial intelligence.

I think that the first thing to say about intelligence is that it's one of those things that you recognize when you see it, or you recognize it in hindsight. So intelligence to me isn't just knowing things, not just having facts and knowledge, but understanding when and how to apply it, and actually applying it effectively in those cases. And what that means is that it's all well and good to store millions or billions of facts.

But intelligence really involves knowing the rules of thumb, the rules of good judgment, the rules of good guessing that all of us pretty much take for granted in our everyday life as common sense, and that we might learn painfully and slowly in some field where we've studied and practiced professionally, like petroleum engineering or cardiothoracic surgery or something like that. And so common sense rules like: bigger things can't fit into smaller things. And if you think about it, every time that we say anything or write anything to other people, we are always injecting into our sentences pronouns and ambiguous words and metaphors and so on. We expect the reader or the listener to have that knowledge, that intelligence, that common sense to decode, to disambiguate what we're saying.

So if I say something like "Fred couldn't put the gift in the suitcase because it was too big," I don't mean the suitcase was too big, I must mean that the gift was too big. In fact, if I had said "Fred couldn't put the gift in the suitcase because it's too small," then clearly it would be referring to the suitcase. And there are millions, actually tens of millions of very general rules about how the world works, like big things can't fit into smaller things, that all of us assume that everybody has and uses all the time. And it's the absence of that layer of knowledge which has made artificial intelligence programs so brittle for the last 40 or 50 years.
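To make that concrete, here is a minimal, purely illustrative sketch in Python, with invented names and none of Cyc's actual machinery, showing how a single common sense rule about containers and contents can resolve the ambiguous "it" in the gift and suitcase sentence:

```python
# Purely illustrative sketch (invented names, not Cyc): one common sense rule,
# "a container must be bigger than its contents", resolves the ambiguous "it"
# in "Fred couldn't put the gift in the suitcase because it was too big/small."

def resolve_it(roles, adjective):
    """roles maps each candidate referent to 'contents' or 'container'."""
    if adjective == "big":
        # Only the contents being too big explains the failure to fit.
        wanted = "contents"
    elif adjective == "small":
        # Only the container being too small explains the failure to fit.
        wanted = "container"
    else:
        return None
    return next(name for name, role in roles.items() if role == wanted)

roles = {"gift": "contents", "suitcase": "container"}
print(resolve_it(roles, "big"))    # gift
print(resolve_it(roles, "small"))  # suitcase
```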

My number one question I ask each [AI is a] Turing test sort of thing, [which] is: what's bigger, a nickel or the sun? And there's never been one that's been able to answer it. And that's the problem you're trying to solve.

Right. And I think that there are really two types of phenomena going on here. One is understanding the question and understanding the sense in which you're talking about 'bigger.' One in the sense of perception, if you're holding up a nickel in front of your eye and so on, and the other, of course, is objectively knowing that the sun is actually quite a bit larger than a typical nickel and so forth.

And so one of the things that we have to bring to bear, in addition to everything I already said, are Grice's rules of communication between human beings, where we have to assume that the person is asking us something which is meaningful. And so we have to figure out what meaningful question they would actually likely have in mind, like if someone says "Do you know what time it is?" It's pretty juvenile and jerky to say "yes," because obviously what they mean is: please tell me the time, and so on. And so in the case of the nickel and the sun, you have to disambiguate whether the person is talking about a perceptual phenomenon or an actual unstated physical reality.

So I wrote an article that I put a lot of time and effort into, and I really enjoyed it. I ran it on GigaOm, and it was 10 questions that Alexa and Google Home answered differently, but objectively they should have been identical, and in each one I sort of tried to dissect what went wrong.

And so I'm going to give you two of them, and my guess is you'll probably be able to intuit in both of them what the answer, what the problem was. The first one was: who designed the American flag? And they gave me different answers. One said "Betsy Ross," and one said "Robert Heft," so why do you think that happened?

All right, so in some sense, both of them are doing what you might call an 'animal level intelligence' of not really understanding what you're asking at all. But in fact doing the equivalent of (I won't even call it natural language processing), let's call it 'string processing,' looking at processed web pages, searching for the confluence, and preferably in the same order, of some of the words and phrases that were in your question, and looking for essentially sentences of the form: X designed the U.S. flag, or something.

And it's really no different than if you ask, "How tall is the Eiffel Tower?" and you get two different answers: one based on answering about the one in Paris and one based on the one in Las Vegas. And so it's all well and good to have that kind of superficial understanding of what it is you're actually asking, as long as the person who's interacting with the system realizes that the system isn't really understanding them.

It's kind of like your dog fetching a newspaper for you. It's something which is, you know, wagging its tail and getting things to put in front of you, and then you, as the one who has intelligence, have to look at it and disambiguate: what does this answer actually imply about what it thought the question was, as it were, or what question is it actually answering, and so on.

But this is one of the problems that we experienced about 40 years ago in artificial intelligence, in the 1970s. We built AI systems using what today would very clearly be recognized as neural net technology. Perhaps there's been one small tweak in that area that's worth mentioning, involving additional hidden layers and convolution, and we built AIs using symbolic reasoning that used logic very much like our Cyc system does today.

And again, the actual representation looks very much like what it does today, and there had to be a bunch of engineering breakthroughs along the way to make that happen. But essentially in the 1970s we built AIs that were powered by the same two sources of power you find today, but they were extremely brittle, and they were brittle because they didn't have common sense. They didn't have the kind of knowledge that was necessary in order to understand the context in which things were said, in order to understand the full meaning of what was said. They were just superficially reasoning. They had the veneer of intelligence.

We'd have a system which was the world's expert at deciding what kind of meningitis a patient might be suffering from. But if you told it about your rusted-out old car, or you told it about someone who's dead, the system would blithely tell you what kind of meningitis they probably were suffering from, because it simply didn't understand things like inanimate objects don't get human diseases, and so on.

And so it was clear that somehow we had to pull the mattress out of the road in order to let traffic toward real AI proceed. Somebody had to codify the tens of millions of general rules like non-humans don't get human diseases, and causes don't happen after their effects, and large things don't fit into smaller things, and so forth, and it was essential that someone do that project.

We thought we were actually going to have a chance to do it with Alan Kay at the Atari research lab, and he assembled a great team. I was a professor at Stanford in computer science at the time, so I was consulting on that, but that was about the time that Atari peaked and then essentially had financial troubles, as did everybody in the video game business at that time, and so that project splintered into several pieces. But that was the core of the idea: somehow someone needed to collect all this common sense, represent it, and make it available to make our AIs less brittle.

And then an interesting thing happened right at that point in time, when I was beating my chest and saying 'hey, someone please do this,' which was that America was frightened to hear that the Japanese had announced something they called the 'fifth generation computing effort.' Japan basically threatened to do in computing hardware and software and AI what they had just finished doing in consumer electronics and in the automotive industry: namely, wresting control away from the West. And so America was very scared.

Congress passed something, and that's how you can tell it was many years ago: Congress quickly passed something, which was called the National Cooperative Research Act, which basically said 'hey, all you large American companies: normally if you colluded on R&D, we would prosecute you for antitrust violations, but for the next 10 years, we promise we won't do this.' And so around 1981 a number of research consortia sprang up in the United States for the first time in computing and hardware and artificial intelligence, and the first of those was right here in Austin. It was called MCC, the Microelectronics and Computer Technology Corporation. Twenty-five large American companies each contributed a small number of millions of dollars a year to fund high-risk, high-payoff, long-term R&D projects, projects which might take 10 or 20 or 30 or 40 years to reach fruition, but which, if they succeeded, could help keep America competitive.

And Admiral Bob Inman, who's also an Austin resident, one of my favorite people, one of the smartest and nicest people I've ever met, was the head of MCC, and he came and visited me at Stanford and said, "Hey look, Professor, you're making all this noise about what somebody ought to do. You have six or seven graduate students. If you do that here, it's going to take you a few thousand person-years. That means it's going to take you a few hundred years to do that project. If you move to the wilds of Austin, Texas, and we put in ten times that effort, then you'll just barely live to see the end of it a few decades from now."

And that was a pretty convincing argument, and in some sense that is the summary of what I've been doing for the last 35 years here: taking time off from research to do an engineering project, an enormous engineering project called Cyc, which is amassing that knowledge and representing it formally, putting it all in one place for the first time.

And the good news, since you've waited thirty-five years to talk to me, Byron, is that we're nearing completion, which is a very exciting phase to be in. And so most of our funding these days at Cycorp doesn't come from the government anymore, doesn't come from just a few companies anymore; it comes from numerous very large companies that are actually putting our technology into practice, not just funding it for research reasons.

So that's big news. So when you have all of it, and to be clear, just to summarize all of that: you've spent the last 35 years working on a system of getting all of these rules of thumb, like 'big things can't go in small things,' and listing them all out, every one of them (dark things are darker than light things). And then not just listing them like in an Excel spreadsheet, but figuring out how to express them all in ways in which they can be programmatically used.

So what do you have in the end when you have all of that? Like if you turn it on, will it tell me which is bigger: a nickel or the sun?

Sure. And actually most of the questions that you might ask, that you might think of as 'anybody ought to be able to answer this question,' Cyc is actually able to do a pretty good job of. It doesn't understand unrestricted natural language, so sometimes we'll have to encode the question in logic, in a formal language, but the language is pretty big. In fact, the language has about a million and a half words, and of those, about 43,000 are what you might think of as relationship-type words, like 'bigger than' and so on. And so by representing all the knowledge in that logical language, instead of, say, just accumulating all of that in English, what you're able to do is have the system do automated mechanical inference, logical deduction, so that if there's something which logically follows from one or two or 2,000 statements, then Cyc (our system) will grind through automatically and mechanically come up with that entailment.
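As a rough picture of what mechanical inference over logical assertions means, here is a toy Python sketch. The predicates, values, and rule machinery are invented for illustration; this is not Cyc's formal language or its inference engine:

```python
# Toy sketch of mechanical inference over stored logical assertions.
# Predicates and values are invented for illustration only.

facts = {
    ("diameterInMeters", "Nickel"): 0.021,
    ("diameterInMeters", "TheSun"): 1.39e9,
}

def infer_bigger_than(facts):
    """Derive every ('biggerThan', x, y) entailment that follows from the
    stored diameter facts, the way a forward-chaining rule would."""
    derived = set()
    for (pred_a, a), size_a in facts.items():
        for (pred_b, b), size_b in facts.items():
            if pred_a == pred_b == "diameterInMeters" and a != b and size_a > size_b:
                derived.add(("biggerThan", a, b))
    return derived

entailments = infer_bigger_than(facts)
print(("biggerThan", "TheSun", "Nickel") in entailments)  # True
```

The point is only that once facts live in a formal representation, conclusions that follow from them can be cranked out mechanically rather than looked up.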

And so this is really the place where we diverge from everyone else in AI, who are either content with machine learning representation, which is very shallow, almost stimulus-response pair-type representation of knowledge; or people who are working in knowledge graphs and triple and quad stores and what people call ontologies these days, and so on, which really are, you can think of them like three or four word English sentences. And there are an awful lot of problems you can solve just with machine learning.

There's an even bigger set of problems you can solve with machine learning plus that kind of taxonomic knowledge representation and reasoning. But in order to really capture the full meaning, you really need an expressive logic: something that's as expressive as English. And think in terms of taking one of your podcasts and forcing it to be rewritten as a series of three-word sentences. It would be a nightmare. Or think about taking something like Shakespeare's Romeo and Juliet, and trying to rewrite that as a set of three or four word sentences. It probably could theoretically be done, but it wouldn't be any fun to do, and it certainly wouldn't be any fun to read or listen to if people did that. And yet that's the tradeoff that people are making. The tradeoff is that if you use that limited a logical representation, then it's very easy and well understood to efficiently, very efficiently, do the mechanical inference that's needed.

So if you represent a set of 'is a kind of' relationships, you can combine them and chain them together and conclude that a nickel is a kind of coin, or something like that. But there really is this distinction between the expressive logics which have been understood by philosophers for over 100 years, starting with Frege, and Whitehead and Russell and others, and the limited logics that others in AI are using today.
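That kind of chaining is easy to picture as a transitive walk over 'is a kind of' links. The following is a hypothetical sketch over a made-up taxonomy, not Cyc's actual ontology, and it illustrates the simple case that even the limited taxonomic logics handle well:

```python
# Toy sketch: chaining "is a kind of" links to reach conclusions like
# "a nickel is a kind of coin". Invented taxonomy for illustration only.

is_a_kind_of = {
    "Nickel": "Coin",
    "Coin": "Money",
    "Money": "Artifact",
}

def kinds_of(term):
    """Follow 'is a kind of' links transitively from a term."""
    ancestors = []
    while term in is_a_kind_of:
        term = is_a_kind_of[term]
        ancestors.append(term)
    return ancestors

print(kinds_of("Nickel"))  # ['Coin', 'Money', 'Artifact']
```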

And so we essentially began digging this tunnel from the other side and said, "We're going to be as expressive as we have to be, and we'll find ways to make it efficient," and that's what we've done. That's really the secret of what we've done: not just the large-scale codification and formalization of all of that common sense knowledge, but finding what turned out to be about 1,100 tricks and techniques for speeding up the inferring, the deducing process, so that we could get answers in real time instead of requiring thousands of years of computation.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
