
Designing for Tomorrow: A Discussion on Ethical Design

January 2019


Article credits
Lu Han

Ethical design is something we’re spending more and more time thinking about at Spotify. How can we build trust? Encourage meaningful consumption? And make sure we’re using our voice responsibly? One of our product designers, Lu Han, shares her take on the burning issues in this first piece of our three-part series on ethical design. Enjoy our companion playlist for this article.

Anyone who works in tech knows the industry has taken a beating in recent years for its perceived lack of morals and careless culture of “move fast, break things.” Companies are starting to realize how much damage has been done to labor markets, privacy, and mental health – and to look at how to do things differently going forward.

As designers, we have a huge role to play in bringing about this cultural change. And to see why, just look at the evolution of design as a profession. Digital designers started out as artistically inclined, CSS-writing engineers. But increasingly, designers have come to embody the voice of the user and strive for what Don Norman dubbed “user-centered design.”

Virtual assistants then and now.

Since 1988, so much has happened. Steve Jobs popularized the idea that “Design is not just what it looks like and feels like. Design is how it works.” The growth of voice interaction and AI has taken design beyond aesthetics into the territory of complex decision-making. Today, designers spend more time than ever tying our work to human values.

Another important change has also taken place: in getting to know people’s behavior and motivations better, we’ve learned that we’re largely an irrational species, prone to cognitive miscalculations like loss aversion or the sunk cost fallacy. Designers have exploited these psychological vulnerabilities to get users to forget what they want and click what businesses would like them to want.

Couple that with A/B testing – which allows us to run thousands of controlled experiments on users every day at little cost – and you have an internet that’s frighteningly good at manipulating people and causing long-term harm.

These unethical decisions aren’t usually the result of bad intentions. Rather, the problem is systemic, stemming from our focus on short-term business goals, like engagement and revenue, often at the expense of user trust and wellbeing.

Trust is hard to measure in the short term, and even harder to win back once you’ve lost it. Designers need to earn and maintain user trust if we’re to build sustainable products that have a positive impact on the people who use them.

Unethical design puts short-term business goals ahead of earning (and keeping) user trust.

The way unethical design hurts people

User trust and wellbeing get compromised when our design and product decisions cause harm. That harm falls broadly into three overlapping categories:

1. Physical harm – including:

  • Inactivity and sleep deprivation, enabled by infinite-scroll feeds, auto-queued videos, and other hallmarks of the attention economy.

  • Financial strain, resulting from features that eat into data plans or make it incredibly difficult to cancel renewing subscriptions.

  • Exploitation of workers in the tech-driven gig economy, which uses behavioral economics under the guise of “persuasive design” to get people to work longer hours against their own interests.

  • Exposure of personally identifiable data – for instance, when features share someone’s exact location with others.

  • Accidents due to distraction, especially when people are driving.

2. Emotional harm – including:

  • Betrayal of trust or privacy, when people are exploited, exposed, or discriminated against using personal information they thought was private.

  • Negative self-image, anxiety & depression – especially amongst young people, whose minds, bodies, and identities are still developing and who tend to crave social acceptance.

3. Societal harm – including:

  • Political polarization – algorithms flatten the landscape of journalism, drive news agencies to compete through sensationalism, and contribute to a divided society with polarized views and a tenuous grasp on reality.

  • Exclusion – for instance, when designers fail to develop features sensitive to the experiences of LGBTQ+ users, consider accessibility for those with mental and physical disabilities, and recognize the importance of legible text to older users.

  • Reinforcing stereotypes and structural oppression – due to a growing dependence on algorithms and biased data to classify and make predictions about people.

Why harmful experiences get built

Design is fundamentally about putting the user first. But if it were always easy to do that, we wouldn’t be facing the problems we do today.

Unfortunately, user needs often come into conflict with a few very tempting incentives. These are the business goals we often tunnel-vision towards in tech: engagement and revenue, science, automation and scale, neutrality, and reckless speed.

The incentives

  • Engagement – these are the big numbers, like daily active users (DAU), that get used as shorthand for success in tech teams.

  • Science – it’s not hard to see how unethical A/B tests get run when we become too greedy for behavioral data. We often forget to consider how “just testing” something can have material effects on users.

  • Automation and scale – we often try to find that one-size-fits-all solution, whether it’s an algorithm or a design flow, that forces people to adapt to it, rather than the other way around.

  • Neutrality – so many decisions get made by not deciding at all. We should try to fight the instinct to avoid difficult conversations, because passive choices are choices nonetheless.

  • Reckless speed – how many questionable decisions get brushed under the rug in service of “getting it out the door” quicker? Now that we know design can unintentionally cause harm, we need to make time for addressing that with the same rigor we bring to shiny new projects.

Looking back at the examples of harm, it’s plain to see that each decision was made in service of one of these incentives – as outlined below…

How our work can harm users—and to what end.

Changing the way we work 

Below are a few tips and tricks that some of our teams at Spotify have found useful…

Beware the language of trade-offs

One way to recognize when we’re trading off user trust and wellbeing for other business goals is simply to pay attention. Here are some phrases that often get thrown around in questionable situations and should prompt us to re-examine our motives:

Words that warrant a bit of reflection.
  • “edge case” / “most of our users”

Dismissing certain groups of people as “edge cases,” or making assumptions about “most” of our users is a judgment call on who’s worthy of our consideration. Rather than drawing assumed boundaries and keeping some people on the margins, ask yourself how you can build empathy for those people.

  • “it’s just for the MVP” / “it’s only a small-scale test”

These phrases can shut down a conversation on values by making a solution seem temporary. But features often stay in the MVP stage longer than expected, and even a 1% test can mean a million Spotify users. So when discussing tests within our teams, we should speak in terms of the number of users affected rather than the percentage, and never be persuaded to roll out anything so harmful we wouldn’t want it to live in the app long-term.

  • “no one will notice anyway” / “just add it to the Terms & Conditions”

These phrases sound like the start of a bad press story. Firstly, because people always notice. And secondly, because it’s just irresponsible to assume everyone reads the Terms & Conditions.

  • “everyone does it” / “if we don’t, someone else will”

Phrases like these have been used to justify some pretty terrible things. But we should see ourselves as people who can bring about positive change.

Recognize conflicts of interest

Another way to recognize unethical decision-making is to notice when you’re using people’s cognitive biases in your design – for instance, by playing on people’s natural inertia to make them watch hours of auto-queued videos. This could indicate that your incentives are no longer aligned with the user’s. A good rule of thumb is to ask yourself: if the user knew what was actually going on, how would they feel about it? Is it what they’d want for themselves?

Choose the right metrics

Choosing the right metrics is critical to truly serving users, because a team’s metrics guide all its most important decisions. We need to use several signals instead of one, balance engagement with sentiment, and not rely on quantitative measures alone. Read more on metrics-setting in part two of this series on ethical design, “A better measure of success,” which is coming to this site soon!
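To make “use several signals” concrete, here’s a minimal sketch of a check that refuses to call a test a win on engagement alone. The metric names and thresholds are hypothetical, not real Spotify metrics – the point is only that an engagement lift shouldn’t count if a sentiment or trust guardrail regresses.

```python
# A minimal sketch: never judge a test on engagement alone.
# All metric names and thresholds are hypothetical, for illustration only.

from dataclasses import dataclass


@dataclass
class TestReadout:
    engagement_lift: float        # e.g. relative change in listening time
    sentiment_delta: float        # e.g. change in an in-app survey score
    complaint_rate_delta: float   # e.g. change in support-contact rate


def should_ship(readout: TestReadout,
                min_engagement_lift: float = 0.01,
                max_sentiment_drop: float = 0.0,
                max_complaint_increase: float = 0.0) -> bool:
    """Count a result as a win only if engagement improves AND no guardrail regresses."""
    return (
        readout.engagement_lift >= min_engagement_lift
        and readout.sentiment_delta >= -max_sentiment_drop
        and readout.complaint_rate_delta <= max_complaint_increase
    )


# Engagement is up, but sentiment dropped – the check says don't ship.
print(should_ship(TestReadout(engagement_lift=0.05,
                              sentiment_delta=-0.2,
                              complaint_rate_delta=0.0)))  # False
```

A check like this only covers the quantitative side, which is exactly why the qualitative signals discussed below still matter.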

Use storytelling & framing

Storytelling is another part of the designer’s skillset that can help promote more humane developments in tech. And we’ve seen that so much of storytelling depends on the framing – in setting out our design principles at the start of a project and making sure we stick to them, no matter how tempting it is to deviate.

One way to do this is to use the Harm/Incentive framework above to highlight any physical, emotional, and societal damage that could result from a project. Read more on framing an opportunity with ethical foresight in part three of this series, “Storytelling – and its ethical impact,” which is coming to this site soon!

Evaluate for ethics along the way

Ethics aren’t something to deal with at the beginning of a project and then forget. We should come back to them in post-ideation conversations, when we’re weighing different solutions. And we should also bring ethics into our user research plans – conducting interviews to dig deeper into ideas like trust, distraction, and privacy, understand where users draw the line, and discover how we can make our design more respectful and accessible in the future.

Testing in different contexts – using car simulators, eye-tracking glasses, and diary studies – can uncover how well a design works in real situations.

Carry out after-testing

Once we have some test results, we should work with data scientists to understand whether certain populations are especially impacted by our test. We should avoid looking only at the metrics for the average user since, as Todd Rose points out in his book The End of Average, “the moment you need to make a decision about any individual—the average is useless. Worse than useless, in fact, because it creates the illusion of knowledge, when in fact the average disguises what is most important about an individual.” And we should make friends with our Customer Support team – they have their ears closest to the ground and see a side of the experience that often gets lost in the numbers.
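To make the point about averages concrete, here’s a minimal, made-up illustration – the segment labels and numbers below are invented – of the pattern worth hunting for: an overall average that reads as a win while one group of users is clearly worse off.

```python
# Invented data for illustration only: five users across two segments.
import pandas as pd

results = pd.DataFrame({
    "segment": ["sighted", "sighted", "sighted", "sighted", "low_vision"],
    "satisfaction_delta": [0.4, 0.3, 0.5, 0.6, -0.9],
})

# The overall average looks like a modest win...
print(results["satisfaction_delta"].mean())                     # ≈ 0.18

# ...but a per-segment breakdown tells a different story.
print(results.groupby("segment")["satisfaction_delta"].mean())
# low_vision   ≈ -0.90
# sighted      ≈  0.45
```

Breakdowns like this are the quantitative counterpart to what Customer Support hears every day.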

Here at Spotify, the Customer Support team shares their knowledge by having product teams visit their HQ every few months and sit with the people responding to incoming emails and tweets. Teams get to hear the most common complaints about our features and bring back interesting insights. For example, we changed the shuffle and repeat active states from a subtle color shift to a dot indicating the button is active. A small change like this makes the experience more accessible not just for colorblind users, but for everyone.

By talking to our Customer Support team, we learned how one small change could make Spotify more accessible for everyone.

And finally…

We need to unpack our relationship with “unintended consequences.” The damage tech causes is almost always unintentional. And this is so important to bear in mind because it reminds us we can’t avoid creating harmful work through good intentions alone.

What we do know is this: once we notice or even predict that our work may harm others, we're responsible for fixing it or abandoning that specific solution. We can no longer stick our heads in the sand and opt for neutrality or plausible deniability. We need to have an opinion on ethics. Because being informed is what gives us the confidence to take a stand on things we care about.

Credits

Lu Han

Product Manager

Lu Han is a Product Manager on a machine learning team and co-leads the Ethics Guild. In her spare time, she likes to paint, go to the movies, and forage for mushrooms.
