Six signals: insurance dystopias and Weird Facebook


This week is kicking off with a couple of pretty dark future signals, but it gets more fun at the end, I promise!

If you want to get future issues in your inbox, please sign up for our newsletter.

1: Insurance dystopias

We could all probably see this coming a mile (or maybe 10,000 steps?) away, but now that we’re self-tracking and publishing so much data about ourselves, insurance companies are starting to use that data. Sarah Jeong writes that the cutting edge of the insurance industry involves using data — from your step count to your social media posts — to adjust premiums algorithmically.

And of course, since every new surveillance tactic begets an adversarial hack, there are phone cradles being made to artificially boost step counts to avoid premium increases.

Insurers Want to Know How Many Steps You Took Today

2: Is it illegal to opt out of facial recognition?

Police in London conducted a public street trial with facial recognition cameras. A man who covered his face as he walked by the cameras was stopped by officers, forced to submit to being photographed, and then arrested on a charge of public disorder after complaining loudly.

London police arrest man who covered face during public facial recognition trials

3: Jellyfish and insects for dinner

Sainsbury’s, the UK’s second-largest supermarket, has commissioned a report that explores the future of food in 2025, 2050, and 2169.

By 2169 it could be routine for people to hold details of their nutritional and health information in a personal microchip embedded in their skin, which will trigger an alert to the supermarket. It would then deliver by drone suitable food and drink based on their planned activities for the coming days.

Jellyfish supper delivered by drone? Radical future predicted for food.

4: Cars are the horses of the future

I’m always astonished that conversations around autonomous vehicles are so constrained by our current conception of what a “car” is. There’s a tendency to assume that cars will play the same role, but just be self-driving. But really, autonomous vehicles open an enormous possibility space around mobile housing, algorithmic shops, autonomous caravans, and floating offices, to name just a few. Chenoe Hart’s piece on self-driving cars points to a number of untapped design opportunities:

The Hy-Wire’s technology suggests that the focus of car design could turn inward, yielding a range of new possibilities for vehicle interiors. Our future passenger experience might bear little resemblance to either driving or riding within a vehicle; we’ll inhabit a space that only coincidentally happens to be in motion.

Perpetual Motion Machines

5: Empathetic ears

This very optimistic report looks at the potential for in-ear devices, or “hearables”, to track a variety of biological and audio signals and adjust our environments to reduce stress and create more positive experiences. While I’m highly skeptical of future scenarios that rely on all the “smart” things working perfectly and humanely together, I also appreciate the idea of empathy as a core UX principle.

Hearables will monitor your brain and body to augment your life

6: Weird Facebook

Taylor Lorenz’s latest Atlantic piece digs into Facebook tag groups, which are part of the larger Weird Facebook genre (who knew?). People describe tag groups as reminiscent of forum culture and earlier eras of internet culture. With Facebook’s new focus on Groups, there’s a clear opportunity here to learn from users’ emergent behavior, though Facebook seems to be taking a more top-down approach:

Zuckerberg’s vision for groups—a sort of digital version of the local knitting circle, kayaking club, or mom’s meet-up—is very different from the ground-up group culture that is dominated by one particular format: the tag group.

The groups bringing forum culture to Facebook

One playful thing

Six signals: Authenticity in AI and social media aesthetics


1: The future of voice assistants is…phones?

Last week, Audible (an Amazon subsidiary) introduced a feature that allows U.S. owners of Amazon Echo devices to call Audible’s live customer service line. What’s interesting here is the concept of building on top of a system that is already voice-driven (i.e., the phone) rather than trying to convert visual user experiences into conversational ones. According to The Verge, this is the first Alexa-powered customer support service. For now, it simply connects callers to existing human support representatives, but it’s easy to read it as a competitive signal aimed at Google’s Duplex, which uses human-sounding bots to make phone calls on your behalf for structured tasks like booking appointments.

Audible launches the first Alexa-powered customer support line

2: Art in the age of computational production

The Huawei P30 Pro is known for having one of the top smartphone cameras on the market. But one camera feature set off some recent controversy:

Using Moon Mode, a Huawei P30 Pro owner can take a close-up picture of the moon with no tripod or zoom lens necessary. Reportedly, the feature works by using the phone’s periscope zoom lens combined with an AI algorithm to enhance details in the photo.

However, some photographers who have been testing the camera claim that Huawei is going beyond enhancement and actually replacing parts of the image with pre-existing images of the moon. There’s a fascinating set of questions embedded in this controversy: How much do we want computers to “help” us? What constitutes the boundary between “real” and “fake”? At what point does computational augmentation erode authenticity?

Huawei P30 Pro ‘Moon Mode’ stirs controversy

3: Drone delivery on the horizon

Image: Wing

The Federal Aviation Administration recently awarded its first air carrier certification to a drone delivery company. Wing, a subsidiary of Google’s parent company, Alphabet, will begin delivering products by drone in Virginia as part of a pilot project. Previously, Wing had been testing its technology in Canberra, Australia.

When a Wing drone makes a delivery, it hovers at about 20 feet and lowers the package on a hook. Customers can select what they want delivered on an app.

Wing, Owned by Google’s Parent Company, Gets First Approval for Drone Deliveries in U.S.

4: “Fashion forward” wearables for recording your life

Image: Opkix

Opkix is the latest company to take a stab at the wearable camera market, with a set of accessories that include necklaces, sunglasses, and rings. We’ve seen some pretty spectacular failures in this space before, most notably Google Glass and Snap Spectacles. Does Opkix provide a combination of compactness and fashion that can change the game? Is the moment suddenly ripe for something that has seen failures in the past (we’ve seen this before with both digital music players and ebook readers)? Or is this a solution without a real problem to be solved?

Opkix One camera and accessories

5: Shifting social media aesthetics

Speaking of authenticity, Taylor Lorenz’s piece in The Atlantic last week notes a backlash against the “Instagram aesthetic”. While the platform has become famous for highly polished, stylized glamour shots, that look seems to be going out of style in favor of more unfiltered, low-production aesthetics.

In fact, many teens are going out of their way to make their photos look worse. Huji Cam, which makes your images look as if they were taken with an old-school throwaway camera, has been downloaded more than 16 million times. “Adding grain to your photos is a big thing now,” says Sonia Uppal, a 20-year-old college student. “People are trying to seem candid. People post a lot of mirror selfies and photos of them lounging around.”

Of course, it’s all a pendulum, so if you’re still ‘gramming your rainbow food, it’s only a matter of time before you’re back on trend again.
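Under the hood, “adding grain” is computationally simple: it’s just overlaying noise on the image. A toy sketch with NumPy (an illustration of the effect, not Huji Cam’s actual pipeline):

```python
import numpy as np

def add_grain(image, strength=0.1, seed=None):
    """Overlay gaussian noise on an image (float array, values in 0..1)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, size=image.shape)
    # Clip so the deliberately "worse" photo still has valid pixel values
    return np.clip(image + noise, 0.0, 1.0)

photo = np.full((4, 4), 0.5)                      # flat mid-gray stand-in for a photo
grainy = add_grain(photo, strength=0.2, seed=42)  # same shape, now noisy
```

Real apps layer on vignettes, light leaks, and timestamp overlays, but the grain itself is little more than this.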

The Instagram Aesthetic Is Over

6: Training for robotic futures

OK, it’s a bit overdone to horror-post Boston Dynamics robots, but this video inside their testing facility is pretty fascinating. I especially like the sign that says “Not safe for humans. Robots only.”

Six Signals is a biweekly look at interesting signals of the near future — how technology, design, and more are changing our society and our personal experiences.

Playable systems: 3 principles for ethical product design

Photo: Jorge Royan / Wikimedia

One of the reasons UX design is such a compelling practice to me is that, rather than designing static artifacts, we design systems that shape the possibilities, expectations, and constraints for how people engage with the world.

That work, to shape how people engage with the world around them, carries a lot of power. And as we all know from Spider-man, with great power comes great responsibility. Increasingly, we are surrounded by digital products and experiences that abdicate that responsibility — that focus on short-term profitability over creating products that work well for the people (and societies) that use them.

So, what kinds of systems should we be creating?

I’ve been working with a framework that I call “playable systems”. Playable systems are ones which empower the people who use them. I use the term “playable” because I think that empowering products are ones that afford virtuosity, in the way that a musical instrument might. They can be easily approached by beginners, but can be mastered and played in highly complex ways.

How do you design a playable system?

The three principles of “playable systems” (this is what I’ve got so far, but there may be more!):

1. A playable system keeps the human in the loop

When we design with technology, we are often designing ways to automate tasks or decisions. However, it is critical that we don’t automate agency away from the user at the moments when they need it most. For example, Fitbit came under fire last year when it released a period tracker that didn’t allow women to enter irregular periods outside of its assumed “normal” range. My favorite extreme anti-pattern is this video of a person unable to turn off his Nest Protect smoke alarms even though there was no smoke in his house (spoiler: he eventually shoves them all into coolers in a desperate attempt to muffle the noise). Whenever we automate a decision or make an assumption about what a user will want, it’s important to allow for human override.
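The principle reduces to a tiny pattern: every automated decision path gets a human escape hatch that the system actually respects. A minimal sketch (hypothetical names, not any vendor’s real device API):

```python
class SmokeAlarm:
    """Toy automated alarm with a human override. Purely illustrative."""

    def __init__(self):
        self.alarming = False
        self.silenced = False

    def read_sensor(self, smoke_detected):
        if smoke_detected and not self.silenced:
            self.alarming = True   # automation decides: alarm on
        elif not smoke_detected:
            # Sensor is clear: reset both the alarm and any standing override
            self.alarming = False
            self.silenced = False

    def human_override(self):
        # The escape hatch: a person can always silence a (possibly false) alarm
        self.alarming = False
        self.silenced = True

alarm = SmokeAlarm()
alarm.read_sensor(smoke_detected=True)  # automation raises the alarm
alarm.human_override()                  # the human disagrees, and wins
```

The point isn’t the alarm logic; it’s that `human_override` exists at all, and that no code path ignores it.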

2. A playable system is a legible system

In order to truly allow for virtuosity, one needs to be able to understand how the system operates. How can we design experiences where people can get “under the hood” and see the inner logic? How do we allow for systems to be interrogated? These questions become more complicated and essential as the technologies we use — like neural networks — are harder for humans to read. There is also an interesting interplay between transparency and legibility. We want users to be able to see how things work, but sometimes too much transparency can actually reduce legibility. Finding the right balance can make the inner workings of a system clear and accessible for all.

3. A playable system can evolve in creative ways

Playable systems should be open enough to allow space for emergent behavior and to grow in ways that extend the experience beyond its initial design. Ideally they are extensible and flexible, with clear pathways for building on top of the foundation. One of the reasons for Twitter’s popularity is that it made space (at least in its early years) for a multitude of emergent behaviors. Some of the core features of the service today began as user-invented hacks, like hashtags, @ replies, and threads.
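One concrete way to leave that space open is a hook (or plugin) registry, the pattern behind WordPress’s actions and filters: the platform fires named events, and users attach behavior the original designers never anticipated. A minimal sketch:

```python
class Hooks:
    """Minimal action-hook registry: the platform fires events, users extend them."""

    def __init__(self):
        self._actions = {}

    def add_action(self, name, callback):
        # Anyone can attach new behavior to an existing event
        self._actions.setdefault(name, []).append(callback)

    def do_action(self, name, *args):
        # Run every registered callback, in registration order
        return [cb(*args) for cb in self._actions.get(name, [])]

hooks = Hooks()
hooks.add_action("post_published", lambda post: f"tweeted: {post}")
hooks.add_action("post_published", lambda post: f"indexed: {post}")
print(hooks.do_action("post_published", "hello"))
# prints ['tweeted: hello', 'indexed: hello']
```

The design choice that matters is that `add_action` is public: the list of things that happen when a post is published is authored by users, not fixed by the platform.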

I was recently reading Ursula Franklin’s The Real World of Technology, and her framing of “holistic technologies” is akin to this concept of playable systems. She describes holistic technologies as ones that “leave the individual worker in control of a particular process of creating or doing something.” She contrasts them with “prescriptive technologies”, which are rigid and enforce a particular process.

The web as a playable system

I recently collaborated with Caresse Haaser on an animated meditation on the open web. One of the reasons I think it’s important to talk about the open web now is because it is a playable system. It’s the reason that the web used to be more diverse, idiosyncratic, and delightfully weird. You can read it, write it, and make it your own.

As closed platforms have come to dominate over the past decade, our experiences have become more constrained and homogeneous, and less self-directed. That is largely because these closed platforms are explicitly not playable systems. They are the epitome of Ursula Franklin’s “prescriptive technologies” in that they rigidly prescribe how we can express ourselves.

Constraints as a starting point aren’t bad, but only if the playable principles are in place as well. These platforms, however, aren’t legible; they are explicitly black boxes. They also don’t allow for much emergent behavior, so they don’t evolve; their growth is prescribed by their owners, not by their users.

The third wave of connected experiences

I’m curious as to how we can build new kinds of experiences that are explicitly designed as playable systems. What does a “third wave” of the web look like that affords some of the ease and connectivity of social platforms, but in a way that is designed to empower the people using it rather than exploiting their behavior or personal data? How do we create incentives or constraints for experiences that are ethical and benefit our societies? As designers, can we move away from the principles of addiction and virality to ones that support a better human, connected experience?

Six Signals: unionized memers and biblical AI


Welcome back to Six Signals! For those of you joining me for the first time, this is a biweekly look at some of the interesting signals of the near future — how technology, design, and more are changing our society and our personal experiences.

1: Seizing the “memes of production”

Folks who create Instagram memes are organizing to form a union. Yes, really. The argument is that meme creation is a growing type of labor that has none of the formal protections that other kinds of work do. The Atlantic’s piece on the union acknowledges that “the IG Meme Union will probably never be recognized by the National Labor Relations Board, but organizers say it can still act as a union for all intents and purposes.”

The primary issue the organizers are addressing is selective censorship on the part of Instagram. They want a more transparent appeals process, as well as better ways of ensuring that memers’ work isn’t monetized unfairly by others.

Instagram memers are unionizing

2: Iconography for digital surveillance

Image: Sidewalk Labs

Sidewalk Labs, Alphabet Inc.’s urban innovation organization, is developing a design language for public signage that will indicate when digital technologies are in use in public spaces and for what purpose. We are increasingly being “read” by any number of digital sensors in public spaces, from CCTV to door sensors to traffic cameras to Bluetooth and WiFi signals, but that sensing is invisible and therefore can’t be interrogated. The iconographic system is meant to bring more transparency to these interactions.

The project has raised some interesting debate about whether a design system like this leads to any kind of citizen empowerment, or if it aestheticizes and normalizes a culture of surveillance.

How can we bring transparency to urban tech? These icons are a first step.

3: How does God feel about AI?

The Southern Baptist Convention’s public-policy arm, the Ethics and Religious Liberty Commission, spent nine months researching and writing a treatise in response to artificial intelligence from an evangelical viewpoint. As far as I know, this is a rare example of a religious entity formally applying church principles to new technologies.

TL;DR: the document is mostly quite optimistic about AI, though it draws the line at sex bots and specifies that robots should never be given equal worth to humans.

How Southern Baptists are grappling with artificial intelligence

4: The dark side of optimization

In a recent New York Times article about Soylent’s new product line (surprise, it’s food!), there’s a disturbing note about Soylent’s foray into becoming a supplier for Uber, Lyft, and Postmates drivers.

Andrew Thomas, Soylent’s vice president of brand marketing, found an interesting gap in the tech industry — not, this time, at corporate offices, but in the gig economies their industry designed and oversees, where maximizing efficiency is more of an algorithmic mandate than it is a way to signal your sophistication.

It turns out Soylent is stocking fridges in the driver hubs for Lyft (and has a discount code for drivers) and has a partnership with the company that supplies Uber drivers with food. They are looking to do the same with Postmates.

Through these partnerships, potential and established, Soylent will complete a sort of circuit, taking its product, once a lifestyle choice for a small group of technology overlords, and pushing it as a lifestyle necessity to the tech underclass for whom every moment spent on things like eating instead of working means less money.

Here’s Soylent’s new product. It’s food.

5: The link between technophilia and fascism

Rose Eveleth has written a thoughtful analysis of the early-twentieth-century Futurist movement, which was aggressively optimistic about the new technologies of its time — and also supported the growing Fascist politics in Europe. She draws a link between the two, cautioning that there are echoes of similar sentiments in the tech community now.

This love of disruption and progress at all costs led Marinetti and his fellow artists to construct what some call “a church of speed and violence.” They embraced fascism, pushed aside the idea of morality, and argued that innovation must never, for any reason, be hindered.

Bottom line: we need to be thoughtful about how we apply technology, or else it can lead to applications that diminish our humanity.

When Futurism Led to Fascism—and Why It Could Happen Again

6: Defunct QR code tattoos

Want to know when the next Six Signals is available? Follow @cog_sprocket on Twitter or sign up for the email list.

Six Signals: Trusting your smart things & the internet of brains


1: “What if I don’t trust my car?”

Simone Rebaudengo, who creates gorgeous speculative design projects, created a “Future FAQ” that reviews some of the strange and compelling questions that we will need to be answering in the very near future, including:

  • How smart is a smart home?
  • What if I disagree with my thermostat?
  • What if I don’t trust my car?
  • Can my house robot travel with me?
  • Can I diminish reality?

Future Frequently Asked Questions

2: The growing fluidity of media formats

Last month, I shared Descript, a video editor that lets you edit footage by editing its automated text transcription rather than by manipulating the video directly. This is one of many signals I’m seeing that point to the increasing malleability of media formats, where text can become video can become audio can become text again with very little loss of fidelity. The latest signal is Tayl, which turns any website or piece of online content into a personalized podcast that is read to you later.
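Under the hood, a website-to-podcast pipeline mostly reduces to extracting the readable text and chunking it to fit a text-to-speech engine’s input limits. A rough sketch of that first half, using only the standard library (an assumption about how such services work, not Tayl’s actual implementation):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect readable text from a page, skipping script and style blocks."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def page_to_chunks(html, max_chars=200):
    """Extract text and split it into chunks small enough for a TTS engine."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.parts)
    chunks, current = [], ""
    for word in text.split():
        if len(current) + len(word) + 1 > max_chars:
            chunks.append(current)
            current = word
        else:
            current = f"{current} {word}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be handed to any TTS engine and the resulting audio stitched into a feed.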

Tayl: Turn websites into podcasts

3: The future of AI is…prison labor?

Inmates at a prison in Finland are working to classify data to train artificial intelligence algorithms for a startup called Vainu. While the PR for this effort emphasizes “job training” and “teaching valuable skills”, it’s clear that this is another signal in the growing set of labor issues behind all that magical automation and machine intelligence.

Inmates in Finland are training AI as part of prison labor

4: Stock voices are the new stock photos

WellSaid Labs in Seattle is working to create a wide variety of synthetic voices that sound remarkably like real people. They use voice actors as the inputs to train a neural network that generates new, artificial voices.

WellSaid Labs isn’t planning to take over the voice-assistant market, though. Rather, it hopes to sell the voices to companies that want to use them in advertising, marketing and e-learning courses.

You’ve probably heard of stock photos; you might think of this as stock voices.

Watch out, Alexa. Artificial voices are starting to sound just like humans

5: 3D printing as a path to accessibility

IKEA recently announced its ThisAbles project, which provides a library of 3D-printable extensions that can be added to IKEA furniture to make it accessible for customers with disabilities. The upside of the 3D-printing approach is that anyone can submit a proposed solution to add to the library. The downside is that it is only usable by those with access to 3D printers, which still isn’t a majority of the population.


6: The internet of brains

In the “very future signals” category, neuroscientists have successfully connected the brains of three people together, allowing them to do what they describe as “sharing their thoughts”. The aforementioned telepathy is fairly rudimentary, basically transmitting the on or off state of a light to indicate a yes or no response. But still, networked brains! Pretty neat.

Brains of 3 People Have Been Successfully Connected, Enabling Them to Share Thoughts

One fun thing: mixed reality eyedroppers

If you would like to receive Six Signals in your inbox, sign up for the Automattic.Design mailing list.

Six Signals: Climate fashion & vocabulary for the autonomous future

Bonjour! I just returned from a week in France with some of the Automattic Design team at the Design Biennale in Saint-Étienne, which included a collaboration between our own John Maeda and the Google Material Design team. You can watch our evening of presentations from Automattic and Google designers, including the European premiere of the Open Web Meditation (with a French translation!).

This week’s Six Signals are extra meaty and future-facing, including behavioral concepts for autonomous vehicles, climate change gear as fashion, and AI tools that guide visually impaired people. Enjoy!

01: From “juddering” to captcha street furniture — vocabulary for the autonomous future

My colleague Beau Lebens tipped me off to this fantastic work by Jan Chipchase, who has put together a glossary of speculative terminology about autonomous vehicles and emerging behavior. Some of my favorites include:

  • Juddering: “the ripple of a dozen or more cars in a parking lot that react and finally settle to the arrival of a new vehicle.”
  • Captcha street furniture: “introduced by residents looking to filter out autonomous vehicles from passing through their neighbourhoods. (The opposite will also be true, with human-drivers filtered out of many contexts).”
  • Shy-distance: “the distance by which your vehicle instinctively avoids, shies away from other vehicles on the road and stationary objects.”

Twelve concepts in autonomous mobility

Driver behaviours in a world of autonomous mobility

02: AI-driven app to guide visually impaired users

Image: Google

Google recently released their Lookout app for Pixel devices, which helps those with visual disabilities make sense of their physical surroundings. “By holding or wearing your device (we recommend hanging your Pixel phone from a lanyard around your neck or placing it in a shirt front pocket), Lookout tells you about people, text, objects and much more as you move through a space.”

With Lookout, discover your surroundings with the help of AI

03: Dystopian accessories for unbreathable air

Image: Vogmask

As air pollution becomes a more common problem worldwide — from persistent smog in cities like Beijing and Shanghai to more frequent forest fires in places like California — face masks are becoming a necessity for more people. As a result, companies are beginning to capitalize on this need and turn the face mask into a fashion accessory. Rose Eveleth reports on this emerging reality in Vox:

The near-future of this accessory could depend on who picks up the object first … It could be adopted by streetwear fans (Supreme already sells a face mask, although it doesn’t seem to actually do much in the way of safety or filtration) or by users who prefer the Burning Man aesthetic. Or perhaps the wellness world adopts these masks, in which case the product design would look quite different. “The other direction might be the sort of Lululemon-ification of the masks, if they’re treated as these essential wellness objects and they enter the world of performance fabrics and athleisure and athletic wear.”

As air pollution gets worse, a dystopian accessory is born

04: Regulating algorithms like drugs

As algorithmic systems have a real impact on more aspects of our lives, from our health care to our financial services, we face increasingly pressing questions about how to monitor and interrogate these systems. A recent Quartz article suggests that we could take cues from the medical industry and use similar processes to those used for prescription drugs. The authors point out several similarities:

  • They affect lives
  • They can be used as medical treatment
  • They perform differently on different populations
  • They can have side effects

We should treat algorithms like prescription drugs

05: The luxury of human contact

The joy — at least at first — of the internet revolution was its democratic nature. Facebook is the same Facebook whether you are rich or poor. Gmail is the same Gmail. And it’s all free. There is something mass market and unappealing about that. And as studies show that time on these advertisement-support platforms is unhealthy, it all starts to seem déclassé, like drinking soda or smoking cigarettes, which wealthy people do less than poor people.

The wealthy can afford to opt out of having their data and their attention sold as a product. The poor and middle class don’t have the same kind of resources to make that happen.

Human contact is now a luxury good

06: Designing ethical experiences

The past few years have seen more widespread concern over the “dark patterns” in software design — the ways in which experiences are designed to monetize our attention, extract our data, and exploit addictive tendencies. In response, designer Jon Yablonski has put together a clear and accessible set of resources for “humane design” that is ethical and respectful.

As designers, we play a key role in the creation of such technology, and it’s time we take responsibility for the impact that the products and services we build have on the people they serve.

Humane by design

See you in two weeks! If you would like to receive Six Signals in your inbox, sign up for the Automattic.Design mailing list.

Six Signals: automation, labor, and defunct robots

I’m writing this post today from the sunny Bahamas, attending one of many team meetups that we have at Automattic to spend some quality IRL time with our 100% remote colleagues.

This week in Six Signals, we’re looking at the impact of automation on labor and society, semi-private social spaces, algorithmic collusion, and watching how Jibo the (now defunct) home robot tells you it’s about to die.

01: The complicated truth about automation and jobs

John Oliver’s show this week did a very nuanced job of examining the complicated futures around automation and the human workforce. He articulates how it’s neither as simple as “the robots will take our jobs” nor “new jobs will emerge and we’ll all be fine”.

50 years from now, people will be doing jobs that we can’t imagine right now, like crypto baker or snail rehydrator or investment harvester. I don’t know, the point is you can’t imagine them. So we get rid of some jobs but we get new ones, so that’s even-steven right? Well not necessarily because the new jobs automation creates won’t necessarily pay the same as the ones it takes away and it might not be easy for displaced workers to transition into them.

02: The hidden labor behind automation

Reacting to The Verge’s exposé on Facebook’s content-moderation contractors, New York Magazine goes deep on the ways in which the supposed efficiencies of computational approaches are only efficient because companies externalize the human costs behind automated services.

There aren’t that many tasks that programs can do as well as human beings, and not many programs that can be automated without the help and work of humans. Which means that any company bragging about automated solutions is likely hiding a much larger shadow workforce supporting those solutions, like the one Facebook employs through Cognizant.

“Who Pays for Silicon Valley’s Hidden Costs?”, New York Magazine

03: A new competitor in urban mobility

Daimler and BMW have merged 14 different services, including DriveNow and car2go, into the largest conglomerate in this space. What’s interesting about the merger is that by bringing together solutions for car sharing, taxi hailing, parking, electric-car charging, and more, they are clearly thinking beyond simple car sharing toward a rich network of services for rethinking how people get around cities. It also creates a potential competitor to services like Uber and Lyft, which have dominated this market thus far.

“Daimler and BMW Invest €1 Billion in Urban Mobility Co-Venture”, Fortune

04: Semi-private social spaces

Back in 2015, Matt Boggie and I wrote about the growth of semi-private social spaces — as embodied by everything from group texts to Slack — and the increasing interest in an alternative to the more public, broadcast model of social media exemplified by Facebook and Twitter. This week, Mark Zuckerberg announced that Facebook will be attempting to capitalize on that trend, with more emphasis on private and ephemeral communication. This move seems to be a reaction to the trust issue the company has been experiencing around privacy. It will be interesting to see whether Facebook can succeed in building interactions that are “privacy first”, and if so, how they will reconcile that with their advertising model.

05: Algorithms colluding to fix prices

An obvious question is, who — if anyone — should be prosecuted for price fixing when the bots work out how to do it without being told to do so, and without communicating with each other? In the US, where the Federal Trade Commission has been pondering the prospect, the answer seems to be no one, because only explicit collusive agreements are illegal. The bots would only be abetting a crime if they started scheming together. Tacit collusion, apparently, would be fine.

“Expect mischief as algorithms proliferate”, The Financial Times
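The tacit-collusion scenario is easy to reproduce in miniature. In this toy simulation (my own illustration, not from the FT piece), two pricing bots follow the same simple rule: never undercut a rival who didn’t undercut you, and probe upward instead. They drift to the price ceiling without ever communicating:

```python
def step(my_last, rival_last, cap=20):
    """One pricing update for a naive adaptive bot."""
    if rival_last >= my_last:
        return min(my_last + 1, cap)   # rival cooperated: probe a higher price
    return rival_last                  # rival undercut: match, don't escalate down

a = b = 10   # both bots start at the competitive price floor
for _ in range(15):
    a, b = step(a, b), step(b, a)
print(a, b)  # prints "20 20": supra-competitive prices, no explicit agreement
```

Neither bot was told to collude, and neither exchanged a single message; the high prices emerge purely from symmetric adaptive behavior, which is exactly what makes the legal question so awkward.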

06: How the robots die

Hiring for kindness: One simple question

Photo by Jeremy Thomas on Unsplash

One of the most important qualities I look for when I’m hiring someone to join my team is kindness. In aggressive business cultures, kindness can have a reputation for meaning that you’re “soft” or unassertive. (I won’t even go into all the gender politics involved in how we talk about what makes a successful leader.)

Contrary to that perception, kindness is actually a critical skill for effective teams, successful businesses, and positive organizational culture. Kindness isn’t contrary to assertiveness or to radical candor. Rather, kindness is the quality that allows for colleagues who:

  • work collaboratively with others without ego getting in the way.
  • give constructive feedback in ways that are supportive and effective.
  • create an environment of trust that is so crucial to effective teams.

So, how does one hire for kindness? Some of the process certainly involves a bit of “spidey sense”, but given that gut feelings can be subject to unconscious bias, I’ve tried to find more structured ways to screen people. One method, which is pretty straightforward, is simply asking about it in reference calls. It seems obvious, but people are often surprised by the question!

The most useful tool I’ve found is one simple question: “How would your colleagues describe you as a collaborator?” The way someone answers this question provides a wealth of information. The responses I’ve gotten tend to fall into 3 categories:

  1. They’ve clearly never considered the question before in any depth. This is a big red flag because it means two things: they aren’t intentional about their approach to collaboration, and they haven’t really thought about how their behavior might affect their colleagues or how others might perceive them.
  2. They speak to their own approach to collaboration but not what role they play in the team dynamics overall. This is pretty common and can be a fine place to start and grow from, especially for an individual contributor (if someone is leading others, I would hesitate a bit more). The only big red flag here is if the approach they describe seems more focused on their own success than the team’s success.
  3. The best situation is a highly self-aware answer: one that outlines how their approach plays out in the team overall and how it helps support others. Candidates at this level are usually explicitly thinking through how to help others grow and make space for everyone.

This isn’t a perfect science, but that one question can be very revealing about how a person works and relates to others. It has been helpful in providing a more analytical way to hire for kindness and try to bring in people who will make for a successful team and an organization in which people can thrive. I would love to hear about any other strategies you have found to approach hiring for kindness. Share your thoughts on Twitter @cog_sprocket!