Max Huttunen

How to Make a Data Grid Scale

By Max Huttunen on August 3, 2017

Our users have diverse, sometimes very demanding needs when it comes to tables for displaying and manipulating data. We wanted to cover all of the possible use cases with our new campaign management data grid implementation, where expandable rows, sticky columns and good performance are just some of the most important requirements.

In this blog post, we go through how we found the right tool for the job and overcame the most critical challenges to build a truly performant and scalable data grid.

Lauri Kovanen

Statistical Significance for Humans — Automated Statistical Significance Calculator for A/B Testing

By Lauri Kovanen on July 19, 2017

As online marketing grows more complex, it’s difficult to get all the details right on the first try. With dozens of decisions to make for each ad, it is no surprise that there’s often room for improvement. Fortunately there’s a way to consistently make better decisions: A/B testing. However, running randomized controlled trials typically requires a good understanding of statistics, and most of our customers are not statisticians. We saw a clear need for a more understandable, automated solution especially for statistical significance calculations—so that’s what we set out to build.
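The core of such a calculator is a standard significance test on two conversion rates. As a rough illustration of the kind of math being automated (the method name and interface below are made up for this sketch, not taken from the post), a two-proportion z-test in plain Ruby looks like this:

```ruby
# A minimal two-proportion z-test, the kind of calculation a
# statistical significance calculator for A/B tests automates.
# Illustrative sketch only; names are assumptions.
def significance(conversions_a, visitors_a, conversions_b, visitors_b)
  p_a = conversions_a.to_f / visitors_a
  p_b = conversions_b.to_f / visitors_b
  # Pooled conversion rate under the null hypothesis of no difference
  pooled = (conversions_a + conversions_b).to_f / (visitors_a + visitors_b)
  se = Math.sqrt(pooled * (1 - pooled) * (1.0 / visitors_a + 1.0 / visitors_b))
  z = (p_b - p_a) / se
  # Two-sided p-value from the standard normal distribution,
  # computed via the complementary error function
  { z: z, p_value: Math.erfc(z.abs / Math.sqrt(2)) }
end

result = significance(100, 1000, 150, 1000)
puts format("z = %.2f, p = %.4f", result[:z], result[:p_value])
```

A small p-value (conventionally below 0.05) suggests the difference between the variants is unlikely to be pure chance; an automated tool wraps this kind of calculation in language non-statisticians can act on.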

Ville Lautanala

Streaming Data with Ruby Enumerators

By Ville Lautanala on July 6, 2017

Streaming is an efficient method of handling large collections of data. Working with streaming data in Ruby using blocks is clunky compared to the Node.js Stream API, where streams can be easily composed. In this blog post we share how we combined ideas from Node.js Streams with Ruby enumerables to build composable streams in Ruby. This has helped us scale our feed processing to a whopping pace of over 1 million products processed each minute.
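As a rough illustration of the composability idea (the stage names below are invented for this sketch, not the post's actual API), Ruby's lazy enumerators already let processing stages be chained without buffering the whole collection in memory:

```ruby
require "stringio"

# Composable streaming with lazy enumerators: each stage takes an
# enumerable and returns a lazy one, so stages chain together much
# like Node.js streams piped into one another. Names are illustrative.
def parse_lines(io)
  io.each_line.lazy.map(&:strip).reject(&:empty?)
end

def to_products(lines)
  lines.map { |line| line.split(",") }
end

feed = StringIO.new("id1,Sneaker,59.90\n\nid2,Shirt,19.90\n")
products = to_products(parse_lines(feed))

# Nothing has been read yet; the pipeline only runs when consumed,
# one element at a time, so memory use stays flat for huge feeds.
products.each { |id, name, price| puts "#{id}: #{name} (#{price})" }
```

Because each stage returns a lazy enumerable, new stages (filtering, validation, batching) can be slotted in without any stage knowing about the others, which is the composability property the post borrows from Node.js streams.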

Markus Ojala

Tutorial: How We Productized Bayesian Revenue Estimation with Stan

By Markus Ojala on June 21, 2017

Online advertisers are moving to optimizing total revenue on ad spend instead of just pumping up the number of conversions or clicks. Maximizing revenue is tricky, as there is huge random variation in the revenue amounts brought in by individual users. If this variation isn't taken into account, it's easy to react to the wrong signals and waste money on less successful ad campaigns. Luckily, Bayesian inference allows us to make justified decisions on a granular level by modeling the variation in the observed data.

Joel Mertanen

How to Migrate from Angular to React Without a Massive Rewrite

By Joel Mertanen on February 8, 2017

The user interface of our SaaS application started out as an Angular-based single-page application back in 2013. We chose Angular because it was an all-in-one solution and had a seemingly straightforward data flow, which allowed rapid development and experimentation. Lately, we've been seeking ways to renew our front-end stack without compromising our development speed or customer satisfaction.

Jukka Heinonen

A Day of Learning and Teaching for Developers

By Jukka Heinonen on December 8, 2016

Recently, we packed our 30-person developer team into a bus and drove them into the middle of nowhere for a one-day offsite. We wanted to take ourselves out of the day-to-day working environment and focus on learning—and teaching—new skills and knowhow. Our first ever developer offsite was a success.

Juuso Mäyränen

How We Scaled Our Architecture to Handle Thousands of Image Rendering Requests per Second

By Juuso Mäyränen on October 26, 2016

Automation is becoming more and more prevalent in the world of online advertising. Online shopping sites may have tens of millions of products in their product catalogs, and making ads for these products manually is simply not feasible. A key part of automation is feed-based advertising, where ads get created automatically based on files containing product information such as names, descriptions, prices, pictures and so on. This blog post describes how we built the infrastructure to handle the volume of image rendering requests our advertising automation requires.

Markus Ojala

Experiences in Using R and Python in Production

By Markus Ojala on May 12, 2016

Python and R are some of the best open-source tools for data science. They can easily be used for scripting and custom analysis, but running them automatically as part of online software requires more consideration. We've been using both of them extensively. In this blog post, I'll share some of our experiences of integrating them in production.

Markus Ojala

Optimizing Conversions with Predictive Budget Allocation

By Markus Ojala on April 8, 2016

Finding the best way to allocate a campaign's budget between multiple ad sets can be difficult and time-consuming. Our Predictive Budget Allocation, initially released last summer, uses machine learning to automate this work for you. In this blog post we'll look at recent improvements that make it even better. Using Predictive Budget Allocation remains as easy as it's always been: you only have to choose the goal it should optimize towards.

Okko Hakola

A Look Behind the Scenes - 24h Support

By Okko Hakola on March 3, 2016

Editor’s note: The original blog post was published in June 2015. We’ve since opened an office in Singapore, which made it possible for us to launch 24/5 customer service.