Top 70 Quotes & Sayings by Oren Etzioni

Explore popular quotes and sayings by American entrepreneur Oren Etzioni.
Last updated on December 23, 2024.

Oren Etzioni is an American entrepreneur, Professor Emeritus of computer science, and founding CEO of the Allen Institute for Artificial Intelligence (AI2). On June 15, 2022, he announced that he would step down as CEO of AI2 effective September 30, 2022, continuing thereafter as a board member and advisor and taking the position of Technical Director of the AI2 Incubator. Etzioni joined the University of Washington faculty in 1991, where he became the Washington Research Foundation Entrepreneurship Professor in the Department of Computer Science and Engineering. In May 2005, he founded and became the director of the university's Turing Center, which investigated problems in data mining, natural language processing, the Semantic Web, and other web search topics. Etzioni coined the term "machine reading" and helped create the first commercial comparison-shopping agent.

I'm trying to use AI to make the world a better place. To help scientists. To help us communicate more effectively with machines and collaborate with them.
A.I. should not be weaponized, and any A.I. must have an impregnable 'off switch.'
We don't want A.I. to engage in cyberbullying, stock manipulation, or terrorist threats; we don't want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don't want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.
One of my favorite sayings is, 'Much have I learned from my teachers, but even more from my friends and even more from my students.'
Automation has emerged as a bigger threat to American jobs than globalization or immigration combined.
A calculator is a tool for humans to do math more quickly and accurately than they could ever do by hand; similarly, AI computers are tools for us to perform tasks too difficult or expensive for us to do on our own, such as analyzing large data sets or keeping up to date on medical research.
There are many valid concerns about AI, from its impact on jobs to its uses in autonomous weapons systems and even to the potential risk of superintelligence.
Infrastructure investment in science is an investment in jobs, in health, in economic growth and environmental solutions.
Understanding of natural language is what sometimes is called 'AI complete,' meaning if you can really do that, you can probably solve artificial intelligence.
The Turing Test was a brilliant idea, but it's evolved into a competition of chatbots.
Driverless cars are a great thing.
If you believe everything you read, you are probably quite worried about the prospect of a superintelligent, killer AI.
Israel is a wonderful place to grow up.
I like to say I've been working on big data for so long, it used to be small data when I started working on it.
I'm not so worried about super-intelligence and 'Terminator' scenarios. Frankly I think those are quite farfetched.
The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals and have its own will and will use its faster processing abilities and deep databases to beat humans at their own game.
It's paradoxical that things that are hard for people are easy for the computer, and things that are hard for the computer, any child can understand.
A lot of people are scared that machines will take over the world, machines will turn evil: the Hollywood 'Terminator' scenario.
When there are hiring decisions and promotion decisions to be made, people are hungry for data.
Taking new technology and incorporating it into how people work and live is not easy.
An AI utopia is a place where people have income guaranteed because their machines are working for them. Instead, they focus on activities that they want to do, that are personally meaningful like art or, where human creativity still shines, in science.
The biggest reason we want autonomous cars is to prevent accidents.
It's hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I'd have to guess that talking about black holes gets boring after a while - it's a slowly developing topic.
To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations.
Life is short. Don't do the same thing everyone else is doing - that's such a herd mentality. And don't do something that's two percent better than the other person. Do something that changes the world.
AI is a tool. The choice about how it gets deployed is ours.
We have an obligation to figure out how to help people cope with the rapidly changing nature of technology.
At least inside the city of Seattle, driving is going to be a hobby in 2035. It's not going to be a mode of commuting the same way hunting is a hobby for some people, but it's not how most of us get our food.
Cloud computing, smartphones, social media platforms, and Internet of Things devices have already transformed how we communicate, work, shop, and socialize. These technologies gather unprecedented data streams leading to formidable challenges around privacy, profiling, manipulation, and personal safety.
It's much more likely that an asteroid will strike the Earth and annihilate life as we know it than AI will turn evil.
The mechanical loom and the calculator have shown us that technology is both disruptive and filled with opportunities. But it would be hard to find a decent argument that we would have been better off without these inventions.
Because of their exceptional ability to automatically elicit, record, and analyze information, A.I. systems are in a prime position to acquire confidential information.
A universal basic income doesn't give people dignity or protect them from boredom and vice.
Ultimately, to me, the computer is just a big pencil. What can we sketch using this pencil that makes a positive difference to society and advances the state of the art, hopefully in an outsized way?
Even seemingly innocuous housecleaning robots create maps of your home. That is information you want to make sure you control.
I could do a whole talk on the question of 'Is AI dangerous?' My response is that AI is not going to exterminate us. It's a tool that's going to empower us.
Machine learning is looking for patterns in data. If you start with racist data, you will end up with even more racist models. This is a real problem.
If you step back a little and say we want to do A.I., then you will realize that A.I. needs knowledge, reasoning, and explanation.
I think it's important for us to have a rule that if a system is really an AI bot, it ought to be labeled as such. 'AI inside.' It shouldn't pretend to be a person. It's bad enough to have a person calling you and harassing you, or emailing you. What if they're bots? An army of bots constantly haranguing you - that's terrible.
Scientists need the infrastructure for scientific search to aid their research, and they need it to offer relevancy and ways to separate the wheat from the chaff - the useful from the noise - via AI-enabled algorithms. With AI, such an infrastructure would be able to identify the exact study a scientist needs from the tens of thousands on a topic.
I'd like to make a fundamental impact on one of the most exciting, intelligent questions of all time. Can we use software and hardware to build intelligence into a machine? Can that machine help us solve cancer? Can that machine help us solve climate change?
The truth is that behind any AI program that works is a huge amount of, A, human ingenuity and, B, blood, sweat and tears. It's not the kind of thing that suddenly takes off like 'Her' or in 'Ex Machina.'
My dream is to achieve AI for the common good.
Deep learning is a subfield of machine learning, which is a vibrant research area in artificial intelligence, or AI.
Sooner or later, the U.S. will face mounting job losses due to advances in automation, artificial intelligence, and robotics.
People thrive on genuine connections - not with machines, but with each other. You don't want a robot taking care of your baby; an ailing elder needs to be loved, to be listened to, fed, and sung to. This is one job category that people are - and will continue to be - best at.
Our highways and our roads are underutilized because of the allowances we have to make for human drivers.
I think that there are so many problems that we have as a society that AI can help us address.
Some people have proposed universal basic income, UBI, basically making sure that everybody gets a certain amount of money to live off of. I think that's a wonderful idea. The problem is, we haven't been able to guarantee universal healthcare in this country.
Science is going to be revolutionized by AI assistants.
I don't think that all the coal miners - or even more realistically, say, the truck drivers whose jobs may be put out by self-driving cars and trucks - they're all going to go and become web designers and programmers.
Just as our roads and bridges are overdue for investment, so is the infrastructure for scientific research; that is, the body of scientific thought and the tools for searching through it.
Netbot was the first comparison shopping company. We realized comparison shopping can be quite tedious if you are driving from one furniture store to another. On the Internet, you can automatically look at a bunch of different stores and see where you can get the best price on a computer or some such thing, so that was the motivation.
In the past, much power and responsibility over life and death was concentrated in the hands of doctors. Now, this ethical burden is increasingly shared by the builders of AI software.
What are we going to do as automation increases, as computers get more sophisticated? One thing that people say is we'll retrain people, right? We'll take coal miners and turn them into data miners. Of course, we do need to retrain people technically. We need to increase technical literacy, but that's not going to work for everybody.
AI is neither good nor evil. It's a tool. It's a technology for us to use.
Things that are so hard for people, like playing championship-level Go and poker, have turned out to be relatively easy for the machines. Yet at the same time, the things that are easiest for a person - like making sense of what they see in front of them, speaking in their mother tongue - the machines really struggle with.
I became interested in AI in high school because I read 'Goedel, Escher, Bach,' a book by Douglas Hofstadter. He showed how all their work in some ways fit together, and he talked about artificial intelligence. I thought 'Wow, this is what I want to be doing.'
Instead of expecting truck drivers and warehouse workers to rapidly retrain so they can compete with tireless, increasingly capable machines, let's play to their human strengths and create opportunities for workers as companions and caregivers for our elders, our children, and our special-needs population.
All these things that we've contemplated, whether it's space travel or solutions to diseases that plague us, Ebola virus, all of these things would be a lot more tractable if the machines are trying to solve these problems.