Health care - the ability of neural networks to ingest lots of data and make predictions is very well suited to this area, and potentially will have a huge societal impact.
There's a lot of work in machine learning systems that is not actually machine learning.
The healthcare space is a very complicated one for a variety of reasons: it's much more regulated than many other industries, for good reason.
We want to build systems that can generalize to a new task. Being able to do things with much less data and with much less computation is going to be interesting and important.
Very simple techniques, when you have a lot of data, work incredibly well.
I've always liked code that runs fast.
I am concerned in general about carbon emissions and machine learning.
It's important to engage with governments around the world in how they're thinking about AI - to help inform them.
We have a lot of work to do to get really important useful capabilities into people's hands - self-driving cars are going to save an enormous number of lives.
As a society I think we are going to be much better off by having machines that can work in conjunction with humans to do things more efficiently and even better in some cases. That will enable humans to do things that they do better than machines.
I think one of the things about reinforcement learning is that it tends to require exploration. So using it in the context of physical systems is somewhat hard.
I think there are sometimes issues with - no matter where you put a conference, there's always going to be constraints on that. For example, sometimes students studying in the U.S. have trouble leaving the U.S. to go to a conference. So if you hold it outside the U.S. in a particular place, that sometimes creates complications.
Computers don't usually have a sense of what is in an image. And if we can do a good job of understanding what is in an image, that can bring along a lot of new things you can do in applications.
It's pretty clear that machine learning is going to be a big part of science and engineering.
Supervised learning works so well when you have the right data set, but ultimately unsupervised learning is going to be a really important component in building really intelligent systems - if you look at how humans learn, it's almost entirely unsupervised.
As devices continue to shrink and voice recognition and other kinds of alternative user-interfaces become more practical, it is going to change how we interact with computing devices. They may fade into the background and just be around, allowing us to talk to them just as we would some other trusted companion.
People in my organization were very outspoken about what we should be doing with the Department of Defense. One example is work on autonomous weapons. That, to me, is something I don't want to work on or have anything to do with.
I worry policymakers are not putting enough attention on what we should be planning for 10 years down the road. In general, governments aren't necessarily that good at looking down the road when it is a difficult issue.
One thing I think is true is that if you have someone who's really good in one or a few areas, they can pick up something new pretty quickly, and that's kind of a hallmark of someone you really want to hire, because they can be very useful in a whole bunch of different areas.
I think really what cloud customers care about is, can they get their problem solved on any particular provider's cloud products?
In Google data centers, our energy usage throughout the year for all our computing needs is 100 percent renewable.
Machine learning is a fundamentally new approach to problem solving.
I think robotics is a really hard problem - to make robots that operate in sort of arbitrary environments, like a big conference room with chairs and stuff.
AI can help solve some of the most difficult social and environmental challenges in areas like healthcare, disaster prediction, environmental conservation, agriculture, or cultural preservation.
I do kind of think there's a bit of an overemphasis in the community on achieving ever-so-slightly better state-of-the-art results on particular problems, and a little underappreciation of completely different approaches that maybe don't reach state of the art because the area is actually super hard and pretty unexplored.
Reinforcement learning is the idea of being able to assign credit or blame to all the actions you took along the way while you were getting that reward signal.
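The credit-assignment idea described above can be sketched with a discounted return-to-go: each action along a trajectory receives a share of the eventual reward, shrunk by how far before the reward it occurred. This is a minimal illustration, not any particular production system; the function name and discount value are invented.

```python
def discounted_credits(rewards, gamma=0.9):
    """Assign credit to each step as the discounted sum of future rewards."""
    credits = [0.0] * len(rewards)
    running = 0.0
    # Walk backwards so each step's credit folds in everything that came after.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        credits[t] = running
    return credits

# A trajectory where only the final action yields a reward:
print([round(c, 3) for c in discounted_credits([0, 0, 0, 1.0])])
# → [0.729, 0.81, 0.9, 1.0]  (earlier actions get smaller, discounted credit)
```

The discount factor `gamma` controls how far back blame or credit propagates: closer to 1 spreads credit to early actions, closer to 0 concentrates it on the actions just before the reward.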
In order to reason, you need a network to be able to bring in knowledge from several different areas, such as math, science, and philosophy, to reach reasonable conclusions on what it's been tasked with.
I spend a fair amount of time dealing with email, mostly deleting them or skimming them to get a sense of what is going on.
I think true artificial general intelligence would be a system that is able to perform human-level reasoning, understanding, and accomplishing of complicated tasks.
I like working in small teams where people on the team have very different skills than what I have and that banter back and forth, and the ability to build something collectively that none of you could do individually is actually a really useful and valuable thing.
You need to find someone that you're gonna pair-program with who's compatible with your way of thinking, so that the two of you together are a complementary force.
In a lot of these areas, from machine translation to search quality, you're always trying to balance what you can do computationally with each query.
The things that I really enjoy doing are finding interesting problems and working together with colleagues to figure out how we can solve them.
We're training Google Street View to recognize street numbers.
Some things are easier to parallelize than others. It's pretty easy to train up 100 models and pick the best one. If you want to train one big model but do it on hundreds of machines, that's a lot harder to parallelize.
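The easy case mentioned above, training many independent models and keeping the best, is "embarrassingly parallel": the runs never need to talk to each other. Here is a toy sketch under made-up assumptions (a one-parameter "model" with a random slope, scored on synthetic data with true slope 1.5), using a thread pool to run all 100 candidates concurrently.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def train_and_score(seed):
    """Toy 'training run': guess a random slope, score it on fixed data."""
    rng = random.Random(seed)
    slope = rng.uniform(0.0, 2.0)
    data = [(x, 1.5 * x) for x in range(10)]          # true slope is 1.5
    error = sum((slope * x - y) ** 2 for x, y in data)
    return error, slope

# Train 100 independent candidate models concurrently; no coordination needed.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(train_and_score, range(100)))

best_error, best_slope = min(results)   # pick the best by validation error
print(round(best_slope, 2))             # close to 1.5
```

The hard case Dean contrasts this with, splitting one big model across hundreds of machines, requires sharing parameters or gradients between workers, which is exactly the coordination this pattern avoids.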
There's nothing like the necessity of needing to do something to cause you to come up with abstractions that help you break through the problem.
A lot of human learning comes from unsupervised learning where you're just sort of observing the world around you and understanding how things behave.
If you pass a lot of data through a teeny network, like 20 neurons, it'll do what it can, but it's not going to be very good.
We have way more unsupervised data in the world than supervised data.
Traditionally, computers have not been very good at interacting with people in ways that feel natural to them.
Definitely there's growing use of machine learning across Google products, both data-center-based services, but also much more of our stuff is running on device on the phone.
Some people are happy to work in a particular domain or some field of computer science for years and years. I personally like to kind of move around every few years, just to learn about new areas.
I tend to be very impatient, thinking about all the ways we can do something, my mind and hands spinning at a very fast rate.
Health care has a lot of interesting machine-learning problems - outpatient outcomes, or when you have x-ray images and you want to predict things.
Deep neural networks are responsible for some of the greatest advances in modern computer science.
I think multimodal kinds of models are pretty interesting - like can you combine text with imagery or audio or video in interesting ways?
Vision I think is going to be an important input. Like, if you're using Google Glass, it's going to be able to look around and read all the text on signs and do background lookups on additional information and serve that. That will be pretty exciting.
One of the things that inspires me about working for Google is that when we solve a problem here, we can get that used by one million or even a billion people. That is very motivating as a computer scientist.
With TensorFlow, when we started to develop it, we kind of looked at ourselves and said: 'Hey, maybe we should open source this.'
I think that is one of the main goals of pushing forward in machine learning: having computers provide the wisdom that a human companion would be able to provide in offering advice, looking up more information when necessary and those kinds of things.
It would be great to have every engineer have at least some amount of knowledge of machine learning.
The speech recognition is now good enough that I dictate emails on my phone rather than type them in. It's not perfect, but it's good enough that it changes how I interact with my phone.
We're happy to work with military or other government agencies in ways that are consistent with our principles. So if we want to help improve the safety of Coast Guard personnel, that's the kind of thing we'd be happy to work on.
It's nice to have short-term to medium-term things that we can apply and see real change in our products, but also have longer-term, five to 10 year goals that we're working toward.
Previously, we might use machine learning in a few sub-components of a system. Now we actually use machine learning to replace entire sets of systems, rather than trying to make a better machine learning model for each of the pieces.
The idea behind reinforcement learning is you don't necessarily know the actions you might take, so you explore the sequence of actions you should take by taking one that you think is a good idea and then observing how the world reacts. Like in a board game where you can react to how your opponent plays.
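The try-something-then-observe loop described above can be sketched with epsilon-greedy action selection on a toy two-armed bandit. The payout probabilities, epsilon value, and step count here are all invented for illustration; this is a minimal sketch of the exploration idea, not any specific system.

```python
import random

rng = random.Random(0)
true_payout = [0.3, 0.7]   # hidden reward probability of each action
estimates = [0.0, 0.0]     # the agent's current belief about each action
counts = [0, 0]
epsilon = 0.1              # fraction of steps spent exploring

for step in range(5000):
    if rng.random() < epsilon:
        action = rng.randrange(2)                    # explore: try something random
    else:
        action = estimates.index(max(estimates))     # exploit current best guess
    # Observe how the world reacts to the chosen action.
    reward = 1.0 if rng.random() < true_payout[action] else 0.0
    counts[action] += 1
    # Incremental average: fold the observation into the belief.
    estimates[action] += (reward - estimates[action]) / counts[action]

print([round(e, 2) for e in estimates])   # roughly [0.3, 0.7]
```

Even though the agent starts knowing nothing about either action, occasionally sampling the non-preferred one is enough to discover that the second action pays off more, after which exploitation takes over.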
Microsoft is in a lot of the same businesses that Google is in.
Understanding language is core to a lot of Google products such as Gmail.
If you only have 10 examples of something, it's going to be hard to make deep learning work. If you have 100,000 things you care about, records or whatever, that's the kind of scale where you should really start thinking about these kinds of techniques.
Deep learning is a really powerful metaphor for learning about the world.