A Quote by Robert Love

The key to a solid foundation in data structures and algorithms is not an exhaustive survey of every conceivable data structure and its subforms, with memorization of each one's Big-O value and amortized cost.
Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
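A minimal sketch of that point in Python (the word-frequency task here is invented purely for illustration): once a counting map is chosen as the central data structure, the remaining algorithm is almost self-evident.

```python
from collections import defaultdict

def word_frequencies(text: str) -> dict:
    """Count how often each word occurs in text.

    With a hash map keyed by word as the data structure, the
    "algorithm" reduces to: split the text, increment a counter.
    """
    counts = defaultdict(int)
    for word in text.lower().split():
        counts[word] += 1
    return dict(counts)

print(word_frequencies("data dominates data structures dominate algorithms"))
# {'data': 2, 'dominates': 1, 'structures': 1, 'dominate': 1, 'algorithms': 1}
```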
In C there are no data structures: there are pointers and pointer arithmetic. So you have a pointer into a data structure.
People think 'big data' avoids the problem of discrimination because you are dealing with big data sets, but, in fact, big data is being used for more and more precise forms of discrimination - a form of data redlining.
One of the myths about the Internet of Things is that companies have all the data they need, but their real challenge is making sense of it. In reality, the cost of collecting some kinds of data remains too high, the quality of the data isn't always good enough, and it remains difficult to integrate multiple data sources.
We all say data is the next white oil. Owning the oil field is not as important as owning the refinery, because what will make the big money is refining the oil. The same goes for data: what matters is making sure you extract the real value out of it.
As a digital technology writer, I have had more than one former student and colleague tell me about digital switchers they have serviced through which calls and data are diverted to government servers or the big data algorithms they've written to be used on our e-mails by intelligence agencies.
It is better to have 100 functions operate on one data structure than to have 10 functions operate on 10 data structures.
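A hedged illustration of that idea (the record layout below is invented): when everything lives in one generic structure, a list of plain dictionaries, a handful of small functions composes into many different queries, instead of each bespoke type needing its own filter, sort, and sum.

```python
# One generic data structure: a list of plain dicts.
records = [
    {"name": "alice", "team": "infra", "commits": 42},
    {"name": "bob",   "team": "data",  "commits": 17},
    {"name": "carol", "team": "infra", "commits": 58},
]

def where(rows, key, value):
    """Keep only the rows whose `key` equals `value`."""
    return [r for r in rows if r.get(key) == value]

def order_by(rows, key, reverse=False):
    """Return rows sorted by `key`."""
    return sorted(rows, key=lambda r: r[key], reverse=reverse)

def total(rows, key):
    """Sum a numeric column across rows."""
    return sum(r[key] for r in rows)

# The same few functions combine freely into many queries.
infra = where(records, "team", "infra")
print(order_by(infra, "commits", reverse=True)[0]["name"])  # carol
print(total(infra, "commits"))                              # 100
```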
Biases and blind spots exist in big data as much as they do in individual perceptions and experiences. Yet there is a problematic belief that bigger data is always better data and that correlation is as good as causation.
AIs are only as good as the data they are trained on. And while many of the tech giants working on AI, like Google and Facebook, have open-sourced some of their algorithms, they hold back most of their data.
Big data is great when you want to verify and quantify small data: big data is all about seeking correlation, small data about seeking causation.
There is a calculus, it turns out, for mastering our subconscious urges. For companies like Target, the exhaustive rendering of our conscious and unconscious patterns into data sets and algorithms has revolutionized what they know about us and, therefore, how precisely they can sell.
More data beats clever algorithms, but better data beats more data.
Learn when and how to use different data structures and their algorithms in your own code. This is harder as a student, as the problem assignments you'll work through just won't impart this knowledge. That's fine.
MapReduce has become the assembly language for big data processing, and SnapReduce employs sophisticated techniques to compile SnapLogic data integration pipelines into this new big data target language. Applying everything we know about the two worlds of integration and Hadoop, we built our technology to directly fit MapReduce, making the process of connectivity and large-scale data integration seamless and simple.
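As a rough sketch of what expressing a job in the "assembly language" of MapReduce looks like (a toy in-memory version, not Hadoop's or SnapLogic's actual API): any pipeline stated as a map step that emits key-value pairs and a reduce step that folds the values for each key can run on that substrate.

```python
from collections import defaultdict

def run_mapreduce(inputs, mapper, reducer):
    """Toy single-process MapReduce: map, shuffle by key, reduce per key."""
    shuffled = defaultdict(list)
    for item in inputs:
        for key, value in mapper(item):   # map phase: emit (key, value) pairs
            shuffled[key].append(value)   # shuffle phase: group values by key
    return {key: reducer(key, values)     # reduce phase: fold each group
            for key, values in shuffled.items()}

# Word count, the canonical job "compiled" into a map step and a reduce step.
lines = ["big data big pipelines", "big integration"]
result = run_mapreduce(
    lines,
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=lambda word, ones: sum(ones),
)
print(result)  # {'big': 3, 'data': 1, 'pipelines': 1, 'integration': 1}
```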
With too little data, you won't be able to draw any conclusions that you trust. With loads of data, you will find relationships that aren't real... Big data isn't about bits; it's about talent.
I'm going to say something rather controversial. Big data, as people understand it today, is just a bigger version of small data. Fundamentally, what we're doing with data has not changed; there's just more of it.