Top 53 Quotes & Sayings by Geoffrey Hinton

Explore popular quotes and sayings by British-Canadian cognitive psychologist Geoffrey Hinton.
Last updated on September 17, 2024.
Geoffrey Hinton

Geoffrey Everest Hinton is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. Since 2013, he has divided his time between Google and the University of Toronto. In 2017, he co-founded the Vector Institute in Toronto and became its Chief Scientific Advisor.

Early AI was mainly based on logic. You're trying to make computers that reason like people. The second route is from biology: You're trying to make computers that can perceive and act and adapt like animals.
The brain sure as hell doesn't work by somebody programming in rules.
I am scared that if you make the technology work better, you help the NSA misuse it more. I'd be more worried about that than about autonomous killer robots.
The pooling operation used in convolutional neural networks is a big mistake, and the fact that it works so well is a disaster.
Most people in AI, particularly the younger ones, now believe that if you want a system that has a lot of knowledge in, like an amount of knowledge that would take millions of bits to quantify, the only way to get a good system with all that knowledge in it is to make it learn it. You are not going to be able to put it in by hand.
Take any old classification problem where you have a lot of data, and it's going to be solved by deep learning. There's going to be thousands of applications of deep learning.
Any new technology, if it's used by evil people, bad things can happen. But that's more a question of the politics of the technology.
I had a stormy graduate career, where every week we would have a shouting match. I kept doing deals where I would say, 'Okay, let me do neural nets for another six months, and I will prove to you they work.' At the end of the six months, I would say, 'Yeah, but I am almost there. Give me another six months.'
Now that neural nets work, industry and government have started calling neural nets AI. And the people in AI who spent all their life mocking neural nets and saying they'd never do anything are now happy to call them AI and try and get some of the money.
Irony is going to be hard to get. You have to be master of the literal first. But then, Americans don't get irony either. Computers are going to reach the level of Americans before Brits.
In A.I., the holy grail was how do you generate internal representations.
Everybody right now, they look at the current technology, and they think, 'OK, that's what artificial neural nets are.' And they don't realize how arbitrary it is. We just made it up! And there's no reason why we shouldn't make up something else.
Most people at CMU thought it was perfectly reasonable for the U.S. to invade Nicaragua. They somehow thought they owned it.
In the long run, curiosity-driven research just works better... Real breakthroughs come from people focusing on what they're excited about.
My father was an entomologist who believed in continental drift. In the early '50s, that was regarded as nonsense. It was in the mid-'50s that it came back. Someone named Alfred Wegener had thought of it 30 or 40 years earlier, and he never got to see it come back.
My view is we should be doing everything we can to come up with ways of exploiting the current technology effectively.
I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.
We now think of internal representation as great big vectors, and we do not think of logic as the paradigm for how to get things to work. We just think you can have these great big neural nets that learn, and so, instead of programming, you are just going to get them to learn everything.
My main interest is in trying to find radically different kinds of neural nets.
Humans are still much better than computers at recognizing speech.
Machines can do things cheaper and better. We're very used to that in banking, for example. ATM machines are better than tellers if you want a simple transaction. They're faster, they're less trouble, they're more reliable, so they put tellers out of work.
In science, you can say things that seem crazy, but in the long run, they can turn out to be right. We can get really good evidence, and in the end, the community will come around.
In a sensibly organised society, if you improve productivity, there is room for everybody to benefit.
The role of radiologists will evolve from doing perceptual things that could probably be done by a highly trained pigeon to doing far more cognitive things.
I have a Reagan-like ability to believe in my own data.
I think it's very clear now that we will have self-driving cars.
Making everything more efficient should make everybody happier.
Computers will understand sarcasm before Americans do.
We want to take AI and CIFAR to wonderful new places, where no person, no student, no program has gone before.
I am betting on Google's team to be the epicenter of future breakthroughs.
I got fed up with academia and decided I would rather be a carpenter.
The question is, can we make neural networks that are 1,000 times bigger? And how can we do that with existing computation?
The brain has about ten thousand parameters for every second of experience. We do not really have much experience about how systems like that work or how to make them be so good at finding structure in data.
The paradigm for intelligence was logical reasoning, and the idea of what an internal representation would look like was it would be some kind of symbolic structure. That has completely changed with these big neural nets.
I feel slightly embarrassed by being called 'the godfather.'
A deep-learning system doesn't have any explanatory power.
I get very excited when we discover a way of making neural networks better - and when that's closely related to how the brain works.
All you need is lots and lots of data and lots of information about what the right answer is, and you'll be able to train a big neural net to do what you want.
Deep learning is already working in Google search and in image search; it allows you to image-search a term like 'hug.' It's used to get you Smart Replies in your Gmail. It's in speech and vision. It will soon be used in machine translation, I believe.
I think the way we're doing computer vision is just wrong.
The NSA is already bugging everything that everybody does. Each time there's a new revelation from Snowden, you realise the extent of it.
Once your computer is pretending to be a neural net, you get it to be able to do a particular task by just showing it a whole lot of examples.
I think people need to understand that deep learning is making a lot of things, behind-the-scenes, much better. Deep learning is already working in Google search, and in image search; it allows you to image search a term like "hug."
In the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses. You have about 1,000 trillion synapses - 10 to the 15; it's a very big number.
I refuse to say anything beyond five years because I don't think we can see much beyond five years.
As soon as you have good mechanical technology, you can make things like backhoes that can dig holes in the road. But of course a backhoe can knock your head off. But you don't want to not develop a backhoe because it can knock your head off; that would be regarded as silly.
To deal with a 14-dimensional space, visualize a 3-D space and say 'fourteen' to yourself very loudly. Everyone does it.
You look at these past predictions like there's only a market in the world for five computers [as allegedly said by IBM founder Thomas Watson] and you realize it's not a good idea to predict too far into the future.
In deep learning, the algorithms we use now are versions of the algorithms we were developing in the 1980s, the 1990s. People were very optimistic about them, but it turns out they didn't work too well.
I think we should think of AI as the intellectual equivalent of a backhoe. It will be much better than us at a lot of things.
Backhoes can save us a lot of digging. But of course, you can misuse it.