Playable systems: 3 principles for ethical product design

Photo: Jorge Royan / Wikimedia

One of the reasons UX design is such a compelling practice to me is that, rather than designing static artifacts, we design systems that shape the possibilities, expectations, and constraints for how people engage with the world.

That work — shaping how people engage with the world around them — carries a lot of power. And as we all know from Spider-Man, with great power comes great responsibility. Increasingly, we are surrounded by digital products and experiences that abdicate that responsibility, focusing on short-term profitability over creating products that work well for the people (and societies) that use them.

So, what kinds of systems should we be creating?

I’ve been working with a framework that I call “playable systems”. Playable systems are ones that empower the people who use them. I use the term “playable” because I think that empowering products afford virtuosity, in the way that a musical instrument does. They can be easily approached by beginners, but can be mastered and played in highly complex ways.

How do you design a playable system?

The three principles of “playable systems” (this is what I’ve got so far, but there may be more!):

1. A playable system keeps the human in the loop

When we design with technology, we are often designing ways to automate tasks or decisions. However, it is critical that we don’t automate agency away from the user at the moments when they need it most. For example, Fitbit came under fire last year when it released a period tracker that didn’t allow women to enter irregular periods outside of its assumed “normal” range. My favorite extreme anti-pattern is a video of a person unable to turn off his Nest Protect alarms even though there was no smoke in his house (spoiler: he eventually shoves them all into coolers in a desperate attempt to muffle the noise). Whenever we automate a decision or make an assumption about what a user will want, it’s important to allow for human override.

2. A playable system is a legible system

In order to truly allow for virtuosity, one needs to be able to understand how the system operates. How can we design experiences where people can deeply understand how a platform works, especially when some interactions may be algorithmically determined? How do we allow for systems to be interrogated? These questions become more complicated and essential as the technologies we use — like neural networks — become harder for humans to read. There is also an interesting interplay between transparency and legibility. We want users to be able to see how things work, but sometimes too much transparency can actually reduce legibility. The challenge is finding the balance that makes a system’s inner workings clear and accessible without overwhelming the people using it.

3. A playable system can evolve in creative ways

Playable systems should be open enough that they allow space for emergent behavior and can grow in ways that extend the experience beyond its initial design. They are ideally extensible and flexible, with clear pathways to build on top of the foundation. One of the reasons for Twitter’s popularity is that it made space (at least in its early years) for a multitude of emergent behaviors. Some of the core features of the service today were user-invented hacks, like hashtags, @ replies, and threads.

I was recently reading Ursula Franklin’s The Real World of Technology, and her framing of “holistic technologies” is akin to this concept of playable systems. She describes holistic technologies as ones that “leave the individual worker in control of a particular process of creating or doing something.” She contrasts them with “prescriptive technologies”, which are rigid and enforce a particular process.

The web as a playable system

I recently collaborated with Caresse Haaser on an animated meditation on the open web. One of the reasons I think it’s important to talk about the open web now is because it is a playable system. It’s the reason that the web used to be more diverse, idiosyncratic, and delightfully weird. You can read it, write it, and make it your own.

As closed social platforms have come to dominate over the past decade, our experiences have become more constrained, more homogeneous, and less self-directed. That is largely because many of these closed platforms are explicitly not playable systems. They are the epitome of Ursula Franklin’s “prescriptive technologies” in that they rigidly prescribe how we can express ourselves.

Constraints as a starting point aren’t inherently bad, but only if the playable principles are in place as well. These platforms aren’t legible, however; they are explicitly black boxes. They also don’t allow for much emergent behavior, so they don’t evolve. Instead, their growth is prescribed by their owners, not shaped by their users.

The third wave of connected experiences

I’m curious as to how we can build new kinds of experiences that are explicitly designed as playable systems. What does a “third wave” of the web look like that affords some of the ease and connectivity of social platforms, but in a way that is designed to empower the people using it rather than exploiting their behavior or personal data? How do we create incentives or constraints for experiences that are ethical and benefit our societies? As designers, can we move away from the principles of addiction and virality to ones that support a better human, connected experience?
