
Designing Data Science Tools at Spotify: Part 2

December 2021


Article credits
Sabrina Siu
Product Designer
Hui Yuan
Product Designer
Simon Child
Illustrator

When you’re working at a massive scale, like Spotify does, you accumulate huge amounts of raw data. Great product ideas can be derived from all this data, but only once it has been processed, managed, and distilled into explainable insights. To make that workflow possible and easy to execute, our data scientists need usable, well-designed tools. That’s where my team comes in.

I’m a product designer in the R&D Community at Spotify, and I’ve been working in the data tools space for a few years. I was brought in to pair up with engineering squads working on platforms and experiences for data scientists. 

Last winter, I wrote about my journey designing data science tools at Spotify. I covered the landscape of data tools at Spotify, the assumptions I had that were disproven when I started, and lessons learned throughout the process. Over the past year, I’ve continued working with data science teams to iterate and refine ScienceBox Cloud, Spotify’s internal data science tool. 

This time last year, the data design team was just getting started. Twelve short months later, we have more than doubled our initial designer count, embedded ourselves into many teams, and rethought a lot of existing practices.

Existing landscape

The landscape of tools our data scientists use is pretty much the same as last year: we still rely on the same tools to write queries, run code, and organize files.

If you need a refresher on our tool suite, here’s a quick breakdown of the key players (a short sketch of how they fit together follows the list):

  • BigQuery: where users store datasets and write queries.

  • Jupyter Notebooks: where users run code in blocks mixed with prose. 

  • ScienceBox: where users organize files into projects, work with pre-installed data science libraries, and follow a standardized, reproducible data analysis workflow.
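To give a concrete sense of how these pieces fit together, here’s a minimal sketch of a notebook cell that sends a query to BigQuery and pulls the result back as a DataFrame. The project, dataset, and table names are hypothetical placeholders, not real Spotify resources, and it assumes the google-cloud-bigquery client library is installed.

    # Query BigQuery from a Jupyter notebook cell and load the result
    # into a pandas DataFrame. Project/dataset/table names are made up.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-analytics-project")  # hypothetical project ID

    sql = """
        SELECT country, COUNT(*) AS stream_count
        FROM `my-analytics-project.listening.streams`  -- hypothetical table
        WHERE event_date = DATE "2021-12-01"
        GROUP BY country
        ORDER BY stream_count DESC
        LIMIT 10
    """

    # Run the query and bring the results back for further analysis.
    df = client.query(sql).to_dataframe()
    df.head()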

Last year, we focused on expediting data science work by providing an easy way to add resources to notebook projects. That effort was highly successful, saving up to 50% of the time spent analyzing data.

This year, we set out to make the experience as intuitive as possible, prioritizing feature tweaks coupled with holistic system improvements.

What we learned — and what we improved

Designing for the users’ mental models

In the last article, I wrote about optimizing for speed by allowing Spotifiers to choose powerful virtual machines. They now use virtual machines (VMs), emulations of separate computer systems, to reduce the time it takes to run code. These VMs range from standard sizes (standard speed) to large sizes (extra-high speed and memory). With these VMs, data scientists can run multiple jobs at once and run each job faster.
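To make “run multiple jobs at once” concrete, here’s a minimal sketch of fanning independent analysis jobs out across a VM’s CPUs using Python’s standard library; analyze_partition is a hypothetical stand-in for real analysis code, not part of ScienceBox.

    # Run independent analysis jobs in parallel, one worker per available CPU.
    # analyze_partition is a hypothetical placeholder for real analysis work.
    import os
    from concurrent.futures import ProcessPoolExecutor

    def analyze_partition(partition_id: int) -> int:
        # Stand-in for real work, e.g. loading and aggregating one data partition.
        return sum(i * i for i in range(1_000_000)) % (partition_id + 1)

    if __name__ == "__main__":
        partitions = range(8)
        # A larger VM with more vCPUs lets more of these workers run truly in parallel.
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
            results = list(pool.map(analyze_partition, partitions))
        print(results)

A larger VM doesn’t change this code; it simply raises the number of workers that can run at the same time and the amount of data each one can hold in memory.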

In our first iteration of the supporting interface, we focused the visual hierarchy on launching the notebook tool. We started with the assumption that the main user need for VMs was backend support to smoothly launch notebooks and quickly analyze data. We designed the main call to action on the homepage to be a large “open” button and lowered the visual hierarchy of the add, pause, and start controls. 

After gathering user feedback, we saw that our assumptions were slightly incorrect. Spotifiers did benefit from an “Open” quick action, but they actually mainly used ScienceBox as a VM administration system once their notebooks were running. They opened ScienceBox to restart, change size, delete, or rebuild their machines, but otherwise simply worked in their notebooks.

Illustration of the assumed user flow versus the actual flow

Mapping user feedback showed us that this discrepancy stemmed from a mental model (what users initially believe about a system) of ScienceBox and data analysis that differed from what we expected. I noticed a trend in which users emphasized the importance of keeping track of their machine state when describing their analysis process. It turned out that users’ mental models centered on the virtual machine, because a properly running VM was critical for reliable JupyterLab notebook use. Once we realized that, I flattened the product’s information architecture and placed all the VM controls in a dropdown accessible from the main tab.

Now, the controls users come looking for are right where they expect them to be.

Example view of VM controls on the main screen

Memory: RAM vs disk space

Before I continue, let’s refamiliarize ourselves with a few computer hardware terms (a short sketch after this list shows how to check each of these on a running machine):

  • Memory: Computer memory is any physical device capable of storing information temporarily or permanently.

  • RAM: Random Access Memory, also referred to as main memory or system memory; a temporary storage location for data in active use. When a program, such as your internet browser, is open, it is loaded from your hard drive and placed into RAM.

  • Disk space: Anything you save to your computer, such as a file or a video, is written to your hard drive and uses disk space. Disk space is the maximum amount of data a drive (in this case, our VM’s disk) can hold. As information is saved to the VM, disk usage increases until the disk cannot hold any more; in our case, users saving large JupyterLab notebook files will eventually run out of disk space for them.

  • Central Processing Unit (CPU): The processor provides the instructions and processing power the computer needs to do its work. The more powerful and up to date your processor, the faster your computer can complete its tasks.
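Here’s the sketch mentioned above: a quick way to check all three resources from a notebook cell. shutil and os are part of the Python standard library; psutil is a common third-party package that I’m assuming is available in the environment.

    # Inspect CPU count, RAM, and disk space on the machine running this code.
    import os
    import shutil

    import psutil  # third-party; assumed to be installed in the environment

    ram = psutil.virtual_memory()
    disk = shutil.disk_usage("/")

    print(f"CPUs:       {os.cpu_count()}")
    print(f"RAM total:  {ram.total / 1e9:.1f} GB (available: {ram.available / 1e9:.1f} GB)")
    print(f"Disk total: {disk.total / 1e9:.1f} GB (free: {disk.free / 1e9:.1f} GB)")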

In the first few product iterations, we deliberately targeted reducing time to task completion as our main success metric. We designed a simple interface that showed only essential copy; for VMs, that meant showing how many CPUs and how much RAM were available to the user. Since then, we’ve gathered quite a bit of user feedback that has pointed us to a more critical type of memory: disk space.

We started noticing users complaining about machine failures. After investigating, the engineers realized that users were switching to larger machine sizes on the assumption that larger machines also had more disk space, letting them save larger and larger files. We learned that a side effect of increasing analysis speed was enabling data scientists to load those larger files, which would ultimately overload the machine.

This was a key finding because, in reality, all four machine sizes had exactly the same amount of disk space allocated and could hold the same amount of data.

The simplest solution was to display, in text, the amount of disk space each VM option had. This way, we exposed the information users were really looking for upfront, without them having to guess. Now our users can make fully informed decisions about the size of the data files they can process, resulting in fewer machine failures due to large file sizes.

Example dropdown with decision making information displayed
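To make that concrete, here’s a minimal sketch of how the dropdown copy could be assembled so disk space appears alongside CPU and RAM. The tier names, specs, and the shared 100 GB disk figure are all hypothetical; the point is simply that the disk number reads the same on every row.

    # Hypothetical machine options; disk space is deliberately identical across
    # all four sizes, mirroring the finding described above.
    MACHINE_OPTIONS = [
        {"name": "standard",          "vcpus": 4,  "ram_gb": 16,  "disk_gb": 100},
        {"name": "large",             "vcpus": 8,  "ram_gb": 32,  "disk_gb": 100},
        {"name": "high-memory",       "vcpus": 8,  "ram_gb": 64,  "disk_gb": 100},
        {"name": "extra-high-memory", "vcpus": 16, "ram_gb": 128, "disk_gb": 100},
    ]

    def dropdown_label(option: dict) -> str:
        """Build the copy shown for one VM size in the selection dropdown."""
        return (f"{option['name']}: {option['vcpus']} vCPUs, "
                f"{option['ram_gb']} GB RAM, {option['disk_gb']} GB disk")

    for option in MACHINE_OPTIONS:
        print(dropdown_label(option))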

Design for the holistic system 

In the last article, I focused on the design of the product itself. This time around, I want to emphasize the importance of designing for the entire system. A data scientist’s life at Spotify doesn’t revolve around this one product; they are exposed to many different products, information sources, and processes in their day-to-day work.

Our data science team is constantly growing and is made up of a diverse group of people with different backgrounds, ways of working, code specializations, and more (plug: come work for us!). This means that the team and I must also keep our documentation, tutorials, data science onboarding lessons, and various external dependencies in mind as part of the broader scope of the user experience we design.

Visual of the many other products that interact with ScienceBox Cloud

From 0 to 1 and beyond

For the first few iterations of this product, I focused on creating the initial ScienceBox Cloud experience. The team and I needed to validate our hypothesis that a cloud product would help Spotifiers run their code up to 50% faster. Our hypothesis turned out to be correct, which gave us the space to make the product experience even better.

After quite a few product iterations, it’s been really fulfilling to approach this from a systems design perspective. Not only does the product need to work, but it also needs to be performant, flexible, reliable, and scalable. 

In the early iterations, I assumed a hierarchical information structure was the best fit for the product’s information architecture; having since learned more about the users’ mental models, I re-architected a flatter structure to reflect what users expect. Testing out assumptions, such as the one that memory (RAM) on the VM was the most important information for the user, taught me that disk space was just as valuable. Finally, I started designing ScienceBox Cloud assuming I needed to focus primarily on the in-product experience, but I’ve now realized that designing the entire workflow is a better way to guarantee a smooth, holistic user experience.

I’ve learned so much more about how data scientists collect, process, understand, and analyze data to drive Spotify decision-making. Through this process, I’ve sharpened my instincts for designing with impact, and I’m excited to continue along this journey with you. Stay tuned!

Credits

Sabrina Siu

Product Designer

Sabrina’s work focuses on the intersection of data, product design, and technical infrastructure. Originally from Northern California, she now lives in New York City.


Hui Yuan

Product Designer

Hui is a designer dedicated to simplifying complex data problems into elegant design solutions. She’s been focusing on data analytics and data visualization tools design for years.


Simon Child

Illustrator

Simon is an all-round designer / brand creative / casual illustrator and ex-world traveler.

