A Quote by Jamie Dimon

Let's look at lending, where they're using big data for the credit side. And it's just credit data enhanced, by the way, which we do, too. It's nothing mystical. But they're very good at reducing the pain points. They can underwrite it quicker using - I'm just going to call it big data, for lack of a better term: "Why does it take two weeks? Why can't you do it in 15 minutes?"
I'm going to say something rather controversial. Big data, as people understand it today, is just a bigger version of small data. Fundamentally, what we're doing with data has not changed; there's just more of it.
Biases and blind spots exist in big data as much as they do in individual perceptions and experiences. Yet there is a problematic belief that bigger data is always better data and that correlation is as good as causation.
People think 'big data' avoids the problem of discrimination because you are dealing with big data sets, but, in fact, big data is being used for more and more precise forms of discrimination - a form of data redlining.
We use nearly 5,000 different data points about you to craft and target a message. The data points are not just a representative model of you. The data points are about you, specifically.
By using big data, it will also be possible to predict adverse weather conditions, rerouting ships to avoid delays, and monitor fuel data, thereby allowing companies to optimize their supply chains and the way they drive their business.
There's a whole company called Palantir that does nothing but derive and create algorithms to search through big data. We're not using their capabilities. For heaven's sake, some of this is just ineptitude.
Errors using inadequate data are much less than those using no data at all.
The biggest mistake is an over-reliance on data. Managers will say if there are no data they can take no action. However, data only exist about the past. By the time data become conclusive, it is too late to take actions based on those conclusions.
The problem with data is that it says a lot, but it also says nothing. 'Big data' is terrific, but it's usually thin. To understand why something is happening, we have to engage in both forensics and guess work.
MapReduce has become the assembly language for big data processing, and SnapReduce employs sophisticated techniques to compile SnapLogic data integration pipelines into this new big data target language. Applying everything we know about the two worlds of integration and Hadoop, we built our technology to directly fit MapReduce, making the process of connectivity and large scale data integration seamless and simple.
As befits Silicon Valley, 'big data' is mostly big hype, but there is one possibility with genuine potential: that it might one day bring loans - and credit histories - to millions of people who currently lack access to them.
With too little data, you won't be able to make any conclusions that you trust. With loads of data you will find relationships that aren't real... Big data isn't about bits, it's about talent.
Big data has been used by human beings for a long time - just in bricks-and-mortar applications. Insurance and standardized tests are both examples of big data from before the Internet.
Go out and collect data and, instead of having the answer, just look at the data and see if the data tells you anything. When we're allowed to do this with companies, it's almost magical.
There are two sources of error: Either you lack sufficient data, or you fail to take advantage of the data that you have.
Big data is great when you want to verify and quantify small data - as big data is all about seeking a correlation - small data about seeking the causation.