The starting point of the journey to build analytical big data products is usually assessing a range of data science ideas. A key question is how to vet these ideas before committing to building a product around them. A closely related question: how do you make data science exploration work in an Agile environment? […]
Wasted data science? Data scientists are expensive. Good data scientists are hard to find, and they are often the bottleneck resource in analytics projects. Reducing the amount of time they spend on unnecessary work by, say, 50% would be amazing, right? It would be amazing because it would not only speed up strategically vital […]
Do you remember one of the biggest product research failures in history (which has a happy ending, but that’s a different story)? The introduction of “New Coke” in 1985. Even though The Coca-Cola Company conducted taste tests with 200,000 consumers that confirmed a superior taste experience compared to the traditional recipe, the product […]
Using a GPU for computationally intensive, embarrassingly parallel tasks is nothing new, neither in science nor in general IT. We have been experimenting with (CUDA-based) general-purpose GPU computing (GPGPU) for several years at GfK and have to admit that GPUs are now finally suitable for general-purpose computing. Why? Because the setup […]
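To make the “embarrassingly parallel” idea concrete: a task qualifies when each output element depends only on its own input, so the work splits across workers with no coordination. On a GPU (e.g. via CUDA), each element would typically map to one thread; the sketch below illustrates the same structure on the CPU with a process pool. The function names and toy workload are purely illustrative assumptions, not GfK’s actual code.

```python
# Embarrassingly parallel: each output depends only on one input element,
# so the computation partitions freely across workers (or GPU threads).
from concurrent.futures import ProcessPoolExecutor

def score(x: float) -> float:
    """Toy per-element computation, independent of all other elements."""
    return x * x + 1.0

def parallel_map(values):
    # chunksize trades per-task scheduling overhead against load balance
    with ProcessPoolExecutor() as pool:
        return list(pool.map(score, values, chunksize=1024))

if __name__ == "__main__":
    data = [float(i) for i in range(10)]
    # The parallel result matches the sequential one exactly, because
    # no element's computation depends on any other element.
    assert parallel_map(data) == [score(x) for x in data]
```

On a GPU the same pattern is expressed as a kernel launched with one thread per element; the absence of inter-element dependencies is what makes the mapping trivial.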
Data science is all around us these days. It’s the new big thing and is considered the solution to almost any challenge that companies face. Being a data scientist is supposedly the sexiest job of the 21st century, and faith in the power of analytics seems to be the mantra of […]
Today’s public perception of Artificial Intelligence (AI) is massively influenced by movies like Her or Ex Machina. ‘Super’-intelligent systems easily outperform human intelligence on almost all fronts and speak to ‘inferior’ humans in the voice of Scarlett Johansson – at least in the film Her.
The pressure on big data projects to deliver is huge. On the one hand, there is a high demand for data analytics tools for better decision making. On the other hand, projects often struggle with technical issues when implementing and operating the large variety of software and hardware in the big data stack. As a result, the time to value is often too long.
It may be surprising to some readers that the first few articles of our “Big Data, Broken Promise?” series have not covered many Technology topics at all. Is it really that easy to set up a suitable IT infrastructure and all the required tools for Data Scientists?
“Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of ‘intelligent agents’: any device that perceives its environment and takes actions that maximize its chance of success at some goal.”
Some consider data enrichment cheating; others consider it magic. Actually, it is neither. Data enrichment is an umbrella term for a set of methods that range from engineering to science, plus a pinch of experience. In this blog post, I would like to shed some light on different data enrichment methods, their concepts, and their requirements. No worries, I will not dive into the math.
In the latest blog post of our “Big Data, Broken Promise?” series, we shared our number one observation on why numerous Big Data projects fail: they start without a clear understanding of the use case(s) to be addressed and without sufficient interaction with the final end users or target group.
As announced in last week’s blog post, in this series we aim to share our learnings and experiences on how to approach big data projects successfully and continuously transform into a data-driven company.