Design challenges Imagine your company has just reinvented itself with a completely new brand identity. The colours, logo, etc. will need to be updated across all your digital products, perhaps on a global scale if you’re a large company. With no system in place, this would cost a lot of time and resources to achieve. […]
In Parts 1 and 2, we focused on the Product Design Phase of Building Data Products Using Agile. In Part 3 we will focus on the Product Build phase, the right-hand side of the diagram below. Once you have passed the ‘RAT’ test of the Product Design Phase, it is time to move to […]
There are many components to successful Data Analytics products: data science, business analytics, software development, product management, design and marketing. They all come together with the goal of creating a product that meets customers’ requirements and is pleasant to use. But perhaps it is the role of the designer that takes […]
Prediction does not only mean forecasting a future value. It can also be used to evaluate a hypothetical situation, a what-if scenario, or serve as a prerequisite for making an optimal decision. Prediction means that a trained model is fed an unknown situation to produce the expected result. Predictive modelling is often seen as the […]
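The idea above — a trained model fed an unseen, hypothetical input — can be sketched in a few lines. This is a minimal illustration with a hand-rolled least-squares fit; the data and the what-if scenario are invented for the example, not taken from the post.

```python
# A "prediction" is just feeding a trained model an unseen input --
# here the model is a hand-rolled ordinary-least-squares line fit.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Invented historical observations: ad spend (k EUR) vs. units sold.
spend = [10, 20, 30, 40]
sales = [120, 210, 330, 410]

a, b = fit_line(spend, sales)

# What-if scenario: expected sales if spend were raised to 80k EUR --
# an input the model has never seen.
predicted = a * 80 + b
```

The same pattern holds for any model: training produces the parameters (`a`, `b` here), and prediction applies them to a situation outside the training data.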
As explained in Part 1, an effective Product Design Scrum setup (see the diagram below) involves time-boxing the work that the team will do in exploring key research questions. Recall that the Product Design Scrum team consists of a mix of Data Scientists and Software Developers, DevOps and QA, along with a Technical Product Owner, […]
Did you know that from the time you log in to the Amazon website until the time you complete your order, dozens of patches are deployed to the website without any disruption to service? Top Internet companies like Amazon, Google, Netflix and Facebook have embraced continuous deployment practices, whereby incremental software updates are moved into production […]
The starting point of the journey to build analytical big data products is usually assessing a range of Data Science ideas. The key question: how do you vet these data science ideas before committing to building a product around them? An important related question: how do you make data science exploration work in an Agile environment? […]
Wasted data science? Data scientists are expensive. Good data scientists are hard to find, and they are often the bottleneck resource in analytics projects. Reducing the amount of time they spend on unnecessary work by, say, 50% would be amazing, right? It would be amazing because it would not only speed up strategically vital […]
Do you remember one of the biggest product research fails in history (which has a happy ending, but that’s a different story)? The introduction of “New Coke” in 1985. Even though The Coca-Cola Company conducted taste tests among 200,000 consumers that confirmed a superior taste experience compared with the traditional recipe, the product […]
Using a GPU for computationally intensive and embarrassingly parallel tasks is nothing new, neither in science nor in general IT. We have been experimenting with (CUDA-based) general-purpose GPU computing (GPGPU) for several years at GfK and have to admit that GPUs are now finally suitable for general-purpose computing. Why? Because the setup […]
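“Embarrassingly parallel” simply means each output element depends only on its own input, so a GPU can assign one thread per element. A minimal sketch in plain Python (the per-element “kernel” below is a made-up example, not code from the post):

```python
# Embarrassingly parallel work: every output element is computed from
# its own input element alone, with no cross-element dependency.

def kernel(x):
    # A hypothetical per-element computation.
    return x * x + 1.0

data = [0.0, 1.0, 2.0, 3.0]

# On a CPU this loop runs one element at a time; on a GPU the same
# kernel would run for thousands of elements simultaneously (e.g. via
# CUDA, or a library such as CuPy that mirrors the NumPy API).
result = [kernel(x) for x in data]
```

Because no element needs the result of any other, the speed-up from a GPU scales almost linearly with the number of elements, which is exactly why such tasks were the first to move to GPGPU.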
Data science is all around us these days. It’s the new big thing and is considered the solution to almost any challenge companies face. Being a data scientist is supposed to be the sexiest job of the 21st century, and faith in the power of analytics seems to be the mantra of […]
Today’s perception of Artificial Intelligence (AI) in the general public is massively influenced by movies like Her or Ex-Machina. ‘Super’ intelligent systems easily outperform human intelligence on almost all fronts and speak to ‘inferior’ humans in the voice of Scarlett Johansson – at least in the film Her.
The pressure on big data projects to deliver is huge. On the one hand, there is high demand for data analytics tools that enable better decision making. On the other hand, projects often struggle with technical issues when implementing and operating the large variety of software and hardware in the big data stack. As a result, the time to value is often too long.
It may be surprising to some readers that the first few articles of our “Big Data, Broken Promise?” series have not covered many Technology topics at all. Is it really that easy to set up a suitable IT infrastructure and all the required tools for Data Scientists?
“Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of ‘intelligent agents’: any device that perceives its environment and takes actions that maximize its chance of success at some goal.”
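The quoted definition can be made concrete with a toy agent: something that perceives a state and picks the action with the highest estimated chance of reaching its goal. The states, actions and probabilities below are entirely invented for illustration.

```python
# A toy "intelligent agent" in the sense of the definition above:
# perceive a state, then choose the action that maximizes the
# estimated chance of success. All values here are made up.

def act(perceived_state, action_values):
    """Return the action with the highest estimated success probability."""
    options = action_values[perceived_state]
    return max(options, key=options.get)

# Hypothetical estimates: P(success | state, action).
action_values = {
    "low_battery": {"recharge": 0.9, "explore": 0.2},
    "charged":     {"recharge": 0.1, "explore": 0.8},
}
```

Real AI systems differ mainly in how these success estimates are obtained (learned rather than hard-coded), not in the perceive-then-maximize loop itself.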
Some consider data enrichment cheating, others magic. Actually, it is neither. Data enrichment is an umbrella term for a range of methods spanning engineering and science, plus a pinch of experience. In this blog post, I would like to shed some light on different methods of data enrichment, their concepts and requirements. No worries, I will not dive into the math.
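One of the simplest enrichment methods is joining an external reference source onto your own records via a shared key. A minimal sketch in plain Python; the records, key name and attributes are all invented for the example.

```python
# Enrichment by lookup join: attach attributes from an external
# reference source to your own records via a shared key.

purchases = [
    {"device_id": "A1", "units": 3},
    {"device_id": "B2", "units": 1},
]

# Hypothetical external reference data, keyed by the same identifier.
device_specs = {
    "A1": {"brand": "Acme", "screen_inches": 5.5},
    "B2": {"brand": "Brix", "screen_inches": 6.1},
}

# Merge the reference attributes into each purchase record;
# records with no match are kept unchanged.
enriched = [
    {**row, **device_specs.get(row["device_id"], {})}
    for row in purchases
]
```

Other enrichment methods replace the exact-key lookup with something statistical (imputation, modelled attributes), but the shape of the operation — widening your records with information you did not collect yourself — stays the same.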
In the latest blog post of our “Big Data, Broken Promise?” series, we shared our number one observation of why numerous Big Data projects fail: they start without a clear understanding of the use case(s) to be addressed and without sufficient interaction with the final end user or target group.
Just recently I read that 2017 is the year of Artificial Intelligence. The number of AI start-ups is growing rapidly, there are more and more conferences dedicated to the topic, and we hear a lot about big tech companies investing huge amounts of money in AI-related developments.
As announced in last week’s blog post, in this series we aim to share our lessons and experiences on how to successfully approach big data projects and continuously transform into a data-driven company.