A Quote by Kate Crawford

Big data sets are never complete. — © Kate Crawford
People think 'big data' avoids the problem of discrimination because you are dealing with big data sets, but, in fact, big data is being used for more and more precise forms of discrimination - a form of data redlining.
While many big-data providers do their best to de-identify individuals from human-subject data sets, the risk of re-identification is very real.
Big data will never give you big ideas... Big data doesn't facilitate big leaps of the imagination. It will never conjure up a PC revolution or any kind of paradigm shift. And while it might tell you what to aim for, it can't tell you how to get there.
As we move into an era in which personal devices are seen as proxies for public needs, we run the risk that already-existing inequities will be further entrenched. Thus, with every big data set, we need to ask which people are excluded. Which places are less visible? What happens if you live in the shadow of big data sets?
When dealing with data, scientists have often struggled to account for the risks and harms using it might inflict. One primary concern has been privacy - the disclosure of sensitive data about individuals, either directly to the public or indirectly from anonymised data sets through computational processes of re-identification.
Big data is great when you want to verify and quantify small data - as big data is all about seeking correlation, small data about seeking causation.
The religion of Big Data sets itself the goal of fulfilling man's unattainable desires, but for that very reason ignores his attainable needs.
Data and data sets are not objective; they are creations of human design. We give numbers their voice, draw inferences from them, and define their meaning through our interpretations.
We don't use the term 'big data' - not on our website, not with customers. Saying it sets up expectations, the wrong expectations.
MapReduce has become the assembly language for big data processing, and SnapReduce employs sophisticated techniques to compile SnapLogic data integration pipelines into this new big data target language. Applying everything we know about the two worlds of integration and Hadoop, we built our technology to directly fit MapReduce, making the process of connectivity and large scale data integration seamless and simple.
We get more data about people than any other data company gets about people, about anything - and it's not even close. We're looking at what you know, what you don't know, how you learn best. The big difference between us and other big data companies is that we're not ever marketing your data to a third party for any reason.
One [Big Data] challenge is how we can understand and use big data when it comes in an unstructured format.
I will talk about two sets of things. One is how productivity and collaboration are reinventing the nature of work, and how this will be very important for the global economy. And two, data. In other words, the profound impact of digital technology that stems from data and the data feedback loop.
Biases and blind spots exist in big data as much as they do in individual perceptions and experiences. Yet there is a problematic belief that bigger data is always better data and that correlation is as good as causation.
Data will always bear the marks of its history. That is human history held in those data sets.
Big data has been used by human beings for a long time - just in bricks-and-mortar applications. Insurance and standardized tests are both examples of big data from before the Internet.