How Smartly.io designs for complex data experiences. This blog post has also been featured on the UX Collective.
One of the biggest challenges our design team faces day-to-day over at Smartly.io is designing for large amounts of data. Our users process, interpret and act upon a myriad of different variables on a daily basis. As a designer, finding the right way to visualize these varying sets of data can be an arduous and complicated process.
Recent projects have required us to think of innovative ways to go about handling and presenting these sometimes exorbitantly large datasets. In this article we’ll walk through some of the key learnings we’ve picked up along the way, and share some guidelines we’ve established to make things a little easier and to provide a better, more user-friendly data experience.
Data experiences differ slightly from the more conventional user experience or data visualization. The primary goal of data experience design is not only to simplify complex workflows, but to cover the entirety of the data-consumption process. This requires taking into consideration not just the graphs, charts and tables, but shifting the focus to the real-world, day-to-day situations in which these insights provide users with an edge.
When designing these experiences, a few areas that are nowadays associated with Product Design go out the window. Users who leverage their data to gain an edge in their daily workflows do not care as much about smooth transitions or pixel-perfect line icons. These users care more about loading times, exportability, and data density.
This mandates that we not only consider the ways data is visualized and presented to the user, but also think about the way this data is collected and processed, what the internal logic and structure of these sets is, and to what extent the user needs to be able to act on this information. Users are looking at these interfaces for leverage in their work, and it is our job to provide them that leverage.
One of the biggest challenges for designers in general lies in reducing complexity. The ability to make the complicated simple is a defining characteristic that sets many successful consumer-focused products apart from the crowd.
When designing for large-scale, data-dense products, however, this isn’t always a possibility. Simply reducing the number of features, or ignoring complex corner cases altogether, is not an option.
Imagine we were designing an airplane with reduced complexity in mind. To achieve that, the entire cockpit only features two things: a steering wheel and a lever for accelerating and decelerating. Would it take off? Most likely. Would the pilot’s experience be optimal? Probably not.
Data experiences require us to imagine a similar scenario. Our users want to not only be able to see their data, but also access numerous ways of exploring, plotting and acting upon said data. The number of use cases scales exponentially with the number of users. To this end, designers have to adopt a counter-intuitive attitude of embracing complexity rather than shying away from it.
Rather than hitting the whiteboard, notepad or Sketch artboard at the very first chance we get, we’ve found it more worthwhile to forget about the actual interface or layout for a moment. Instead of staring blankly at rows of data in a spreadsheet, hoping a vision will form in the back of our minds, we try to immerse ourselves in the data. Thus, before getting into the nitty-gritty of grids, buttons and paddings, we take a deep dive into the actual datasets that our users will be looking at.
While raw, unedited rows upon rows of data might not be the sexiest thing to look at, it is often the best place to start. It allows us to start thinking about what variables exist within this maze of numbers, and how the various data points relate to one another. We might find ourselves already trying to set up basic hierarchies or mapping out charts to try and get a better overview.
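This first pass over a raw dataset can be as simple as profiling each column before sketching anything. The snippet below is a minimal sketch of that step; the campaign data, column names and values are invented for illustration and are not Smartly.io’s actual schema.

```python
import csv
import io
from collections import Counter

# A hypothetical slice of raw campaign data, standing in for a real export.
RAW = """campaign,impressions,clicks,spend_eur
summer_sale,120000,3400,950.20
summer_sale,98000,2100,720.00
brand_q3,45000,600,310.50
brand_q3,52000,880,365.75
retarget_a,15000,720,88.10
"""

def profile(rows, numeric_fields):
    """Summarise each column: min/max for numbers, distinct count and
    most common value for labels."""
    summary = {}
    for field in rows[0]:
        values = [r[field] for r in rows]
        if field in numeric_fields:
            nums = [float(v) for v in values]
            summary[field] = {"min": min(nums), "max": max(nums)}
        else:
            summary[field] = {
                "distinct": len(set(values)),
                "top": Counter(values).most_common(1)[0][0],
            }
    return summary

rows = list(csv.DictReader(io.StringIO(RAW)))
report = profile(rows, numeric_fields={"impressions", "clicks", "spend_eur"})
for field, stats in report.items():
    print(field, stats)
```

A profile like this makes the ranges, outliers and hierarchies visible before any layout work begins, and already hints at which variables deserve prominence in the eventual interface.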
As designers, our skills and knowledge might be vastly different from those of our users. As such, we take the extra step to get close to the user, reach out to them directly and figure out what their daily workflow looks like, to try and envision where our solution might fit in.
We ask them questions to figure out what data they look at on a daily basis, what their goals are and which questions they’re trying to find an answer to. This helps us establish the context of their issues and identify their current pain points. Once we have an initial solution in place, one of the best possible ways to validate its effectiveness is to simply observe. Watching the user navigate our product and examining their recurring workflows often provides us with a better overview of the remaining issues, as well as ideas on how to iterate on the solution.
Real-world problems, real-world data
The complexity of the data that our users work with every day cannot be overstated. The sheer number of different variables that they want, need or have at their disposal can be absolutely perplexing. What this entails for the design process is that we can’t really get by with a ‘fake it ‘till you make it’ mantra: we can’t fully grasp the reality of the data experience with fake data.
As users are actively solving real-world problems using their own data, we strive to do that very same thing during the design process. This means we end up geeking out over this data, getting into the nitty-gritty of the different variables, and really digging into what each data point represents and how it affects the overall structure. Each of these points is undoubtedly linked to, limited by, and affected by a multitude of other parts of the dataset. We have to be able to understand these possibilities and limitations, as well as the nuances between them.
Building mockups and prototypes with real data thus becomes a necessity. It would be almost impossible to understand the interwoven complexity of the data experience without using real-world data. This becomes especially apparent when demonstrating proposed design solutions to the user. When looking at fake data, they will inevitably gloss over parts of the design. Using real data however, they start looking for the underlying patterns, imagining how this solution would solve their own issues.
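One practical way to get real data into a mockup, without exposing exact figures, is to sample and lightly jitter an actual export. The helper below is a hypothetical sketch of that idea; the column names and jitter range are assumptions for illustration, not an actual Smartly.io tool.

```python
import csv
import io
import random

# Hypothetical raw export used to build prototype fixtures.
RAW = """campaign,clicks,spend_eur
summer_sale,3400,950.20
brand_q3,600,310.50
retarget_a,720,88.10
winter_push,1500,420.00
"""

def sample_for_prototype(raw_csv, k, seed=0):
    """Pick k real rows and jitter the spend column by up to 10%, so the
    prototype keeps realistic shapes while obscuring exact figures."""
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    rng = random.Random(seed)
    picked = rng.sample(rows, k)
    for row in picked:
        row["spend_eur"] = round(float(row["spend_eur"]) * rng.uniform(0.9, 1.1), 2)
    return picked

fixture = sample_for_prototype(RAW, k=2)
```

Fixtures built this way preserve the distributions, magnitudes and edge cases of the real dataset, which is exactly what invites users to look for the underlying patterns instead of glossing over the design.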
Iterate, iterate, iterate
A data experience is never complete, never finished and certainly never perfect. As such, the work of a designer is never done. As methods for tracking, collecting, and processing data continue to evolve over time, so will the methods for representing and visualizing that data. We keep building on the foundations we’ve established, adding new features as we go and refining existing features along the way.
The challenge here is to resist. Resisting the temptation to add layer upon layer of functionality that lets users explore, distinguish and act upon the most minute patterns within their data. Resisting the need to keep increasing the product’s feature set to the point where it obscures its original purpose. That said, we should not resist the opportunity to try new things, simplify existing workflows and create a better data experience.
Continue to iterate, re-iterate, prototype, and let the data speak for itself.
Though data experience design is still in its initial stages over at Smartly.io, it has already shown tremendous value in making the lives of our users easier, simplifying their day-to-day efforts and providing them with a tangible edge. It helps us draw out feedback that is instrumental to bettering our product, and establish consistent, user-friendly workflows across our platform.
If you’re working on something similar and want to share the tools, processes and thoughts you use to create a better data experience, we’d love to hear all about it.