Six Signals: Every atom is a bit and every bit is an atom

Six signals logo.

This week’s signals look at the continuing collapse of the boundaries between digital and physical space, the real and the unreal, human and machine. There are prognostications of a future “mirrorworld”, emerging interfaces for interacting in the spaces between digital and physical, and growing uncertainty about what is real and what is generated.

I also share Six Signals as a biweekly newsletter on Automattic.design. Sign up here.

1: Manipulating AR objects

Image of the litho device with the text "Litho is the input device for the real world."
Image: Litho.cc

The Litho controller is “like a set of miniature brass knuckles” — a hand-worn motion controller with an embedded trackpad, so it can support a combination of gesture, swipe, point, and tap. It primarily works with Apple’s ARKit, though it was designed with the HoloLens in mind. Like some of the gestural controllers that have come before it (Leap Motion, Myo), this may be a solution ahead of its time, but it does point to the potential need for new ways of interacting with digital objects if and when those objects become co-present in your physical space. My bet is that this kind of controller won’t really take off unless AR moves beyond the phone screen into some form of heads-up display.

The Litho controller is sci-fi jewelry for your iPhone’s AR apps

2: Hearables and augmented audio

Photo of wireless earbuds.
Photo by Howard Lawrence B on Unsplash

The growth of voice assistants (Siri, Alexa, etc.), the continuing trend of “more sensors everywhere”, and the increasing popularity of wearable tech mean that our ears are one of the next frontiers in wearable computing. “Hearables” are in-ear devices that can incorporate everything from augmented audio to voice assistants to biometric tracking. We can see this technology emerging from multiple types of manufacturers with different audiences in mind: Massive tech companies like Google, Apple, and Amazon see the opportunity to embed some of their computing prowess into new kinds of devices. Headphone and audio manufacturers see the opportunity to provide new features to their audiophile audiences. And hearing aid companies see the potential for evolving assistive devices into augmenting devices.

The future is ear: Why “hearables” are finally tech’s next big thing

3: When our space contains multitudes

All of these emerging technologies point to the potential growth of what Kevin Kelly talks about in this week’s Wired as the “Mirrorworld”:

Everything connected to the internet will be connected to the mirrorworld. And anything connected to the mirrorworld will see and be seen by everything else in this interconnected environment. Watches will detect chairs; chairs will detect spreadsheets; glasses will detect watches, even under a sleeve; tablets will see the inside of a turbine; turbines will see workers around them.


This piece paints a sweeping picture of a future where the mirrorworld has come to fruition. Kelly is utopian and optimistic about it in the way that only someone who feels in control of technology, rather than at the mercy of it, can be. I don’t doubt that this is the ideal that people working on the requisite AR, AI, and computer vision technology are aiming for. But just as social media didn’t exactly deliver the connected society that tech founders touted, we also have to imagine how this kind of future mirrorworld will break down or be used in problematic and exploitative ways.

AR Will Spark the Next Big Tech Platform—Call It Mirrorworld

4: The co-evolution of humanity and technology

Speaking of which, BBC Future has a great long read on how humans and technology evolve alongside each other, and on our responsibility as those paths potentially diverge.

My belief is that, like most myths, the least interesting thing we can do with this story (the singularity) is take it literally. Instead, its force lies in the expression of a truth we are already living: the fact that clock and calendar time have less and less relevance to the events that matter in our world. The present influence of our technology upon the planet is almost obscenely consequential – and what’s potentially tragic is the scale of the mismatch between the impact of our creations and our capacity to control them.

Technology in deep time

5: The creativity of context collapse

My colleague Megs Fulton recently pointed me to this excellent article on the Big Flat Now, which speaks to the ways in which the growing fluidity between digital and physical, past and present, and low and high culture has created a new kind of creative space in which to operate.

Product design has become a form of DJing — and DJing has become a form of product design. Contemporary art and luxury fashion have come to operate according to the same logic, sharing practitioners who glide freely between each field. Film, music, fashion, visual art and the marketing machines that support them have been compressed into a unified slime called “content.”

Welcome to the Big Flat Now

6: Playing with the boundary between the real and generated

Work with deep learning and neural networks in recent months has led to some astonishing leaps forward: we now have models that can generate images, text, and video that are nearly indistinguishable from the real thing. Text generation got a new wrinkle this week with the study that OpenAI published; the researchers were so concerned about potential misuse that they released only a partial model alongside the study results.

While there are many real reasons to be alarmed by these advances, this week has seen a number of projects that play with those increasingly blurry boundaries, including Which Face is Real?, This Person Does Not Exist, and of course (because it’s the internet), This Cat Does Not Exist.
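To get a feel for how accessible this kind of generation has become, here’s a minimal sketch that samples text from the small GPT-2 checkpoint OpenAI did release publicly. I’m assuming the Hugging Face transformers library as a convenient way to load it; this is not the code from the study itself.

```python
# Sample text from the publicly released small GPT-2 model.
# Assumes: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered"
for result in generator(prompt, max_length=60, do_sample=True, num_return_sequences=2):
    print(result["generated_text"])
    print("---")
```

Even this small released model produces text that is plausible enough at a glance to make the researchers’ caution understandable.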


Six signals: Malware in your DNA and insurance in your Instagram

Six signals logo.

Every two weeks, I’ll be sharing links to six things that feel like signals of the near future, in ways big and small. These signals might be scientific advancements, art projects, codebases, or news articles, but will all have some flavor of where things might be heading. Enjoy!

1: Cascading futures

NESTA has its annual Tech Trends report out, which begins with this great observation:

If a prediction doesn’t have a hint of outlandishness, which means it feels foreign to us now, then it isn’t serving its purpose, which is to generate alternative visions of the future.

Click through to read more, but here’s the TL;DR list:

  • RoboLawyers make legal services cheaper
  • Randomly-allocated research funding
  • Personalised nutrition based on profiling our gut microbiome
  • Supercharging the accessibility revolution
  • The future of algorithmic legibility
  • Weaponized deepfakes
  • AI for grading essays and exams
  • The age of the superbug
  • The rise of the “city brain”
  • The evolution of work

2: Digital identity leakage

Sometimes my bleakest predictions come true faster than expected. More insurers are using people’s digital traces as a factor in health and life insurance pricing and coverage. Here’s a depressing set of tips from the Wall Street Journal on how to use social media defensively.

3: Games as virtual concert halls

Fortnite continues its growth as “more than just a game”, with the first live virtual concert taking place on the platform. This brings back memories of Second Life…

If you want a deeper dive on why Fortnite is capturing a lot of interest, see this piece: Fortnite Is the Future, but Probably Not for the Reasons You Think

“Fortnite likely represents the largest persistent media event in human history. The game has had more than 6 consecutive months with at least 1 million concurrent active users – all of whom are participating in a largely shared and consistent experience.”

4: Malware in your DNA

This article is from a little while back but was making the rounds on Twitter again this week. Researchers figured out how to encode malware in strands of DNA, making our bodies potential future sites of all kinds of digital communication, encoding, and steganography.
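The underlying encoding idea is surprisingly approachable. Here’s a rough sketch of the general principle: map every two bits of a payload to one of the four nucleotides, which turns any binary blob into a synthesizable sequence. This is a generic base-4 encoding of my own, not the scheme from the paper, and the researchers’ actual exploit involved far more than the encoding step.

```python
# Toy base-4 encoding of arbitrary bytes as a DNA sequence (two bits per base).
PAIR_TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BASE_TO_PAIR = {base: pair for pair, base in PAIR_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Encode bytes as a nucleotide string, most significant bits first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(PAIR_TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(dna: str) -> bytes:
    """Recover the original bytes from a nucleotide string."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i : i + 4]:
            byte = (byte << 2) | BASE_TO_PAIR[base]
        out.append(byte)
    return bytes(out)

payload = b"hello, genome"          # stand-in for any binary payload
assert decode(encode(payload)) == payload
print(encode(b"hi"))                # -> CGGACGGC
```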

5: The future is accessible


Google announced two new Android apps to make audio more accessible — Live Transcribe for real-time conversation transcription and Sound Amplifier to enhance the sound in your environment.

6: Transmedia editing

Descript is an app that lets you edit audio and video by editing the text of the recording. I love the media fluidity that this points to, and wonder what other experiences might be made possible with these kinds of translations.
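To make the idea a bit more concrete, here’s a rough sketch of how transcript-driven editing can work once a speech-to-text service hands you word-level timestamps: deleting words from the text becomes cutting the corresponding spans of audio. This is my own toy illustration (using pydub and a made-up timestamp list), not how Descript is actually built.

```python
# Edit audio by editing its transcript: drop words from the word list,
# then rebuild the audio from the spans that remain.
# Assumes: pip install pydub (plus ffmpeg), and a hypothetical "interview.wav".
from pydub import AudioSegment

audio = AudioSegment.from_file("interview.wav")

# Word-level timestamps in milliseconds, as returned by most ASR services.
words = [
    {"word": "so",   "start": 0,   "end": 220},
    {"word": "um",   "start": 220, "end": 600},
    {"word": "the",  "start": 600, "end": 750},
    {"word": "idea", "start": 750, "end": 1200},
]

def render(kept_words, source: AudioSegment) -> AudioSegment:
    """Rebuild the audio from only the words that survive the text edit."""
    edited = AudioSegment.empty()
    for w in kept_words:
        edited += source[w["start"]:w["end"]]
    return edited

# "Editing the text": delete the filler word, keep everything else.
kept = [w for w in words if w["word"] != "um"]
render(kept, audio).export("interview_edited.wav", format="wav")
```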



The computational gaze

Image: Tim Ellis. Flickr

I’ve written and spoken before about what I call mechanomorphism — a word that I developed to describe the concept of machine intelligence as a companion species. This framing of AI is distinct from anthropomorphism, where we try (and inevitably fail) to make machines approximate human behavior. Instead, I envision a future where we appreciate computers for the ways in which they’re innately “other”.

Another way to put it is that I’m fascinated by the computational gaze — how machines see, know, and articulate the world in a totally alien manner. I’ve been talking a lot with my boss, John Maeda, about computational literacy and how to help people understand foundational concepts of computing. But computational literacy posits the machine as a tool (which it often is!). The computational gaze, on the other hand, suggests the machine as a collaborator or companion intelligence.

Collaborating with machine intelligence means being able to leverage that particular, idiosyncratic way of seeing and incorporate it into creative processes. This is why we universally love the “I trained a neural net on [x] and here’s what it came up with” memes. They have a delightful “almost-but-not-quite-ness” that lets us savor the strangeness of that unfamiliar gaze, while also helping us see hidden patterns and truths in our human artifacts.

The increasing accessibility of tools for working with machine learning means that I’m seeing more examples of artists, writers and others treating the machine as collaborator — working with the computational gaze to create work that is beautiful, funny, and strange. Here are some folks who are doing particularly interesting work in this arena:


Visual feedback loops

In the visual arts, Ronan Barrot and Robbie Barrat have a show in Paris where they collaborate with a GAN to paint skulls. “It’s about having a neural network in a feedback loop with a painter, influencing each other’s work repeatedly — and the infinitude of generative systems.”

Mario Klingemann has also been playing with GANs in his “Neural Glitch” series:

“Neural Glitch” is a technique I started exploring in April 2018 in which I manipulate fully trained GANs by randomly altering, deleting or exchanging their trained weights. Due to the complex structure of the neural architectures the glitches introduced this way occur on texture as well as on semantic levels which causes the models to misinterpret the input data in interesting ways, some of which could be interpreted as glimpses of autonomous creativity. 

— Mario Klingemann
Mario Klingemann, Neural Glitch
http://underdestruction.com/2018/10/28/neural-glitch/
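As a rough sketch of what “randomly altering, deleting or exchanging trained weights” can look like in code, here’s a toy version of the idea. The little generator below is just an untrained stand-in (Klingemann works with fully trained GANs), and this is my own simplification, not his pipeline.

```python
# Glitch a generator by corrupting random entries in its state_dict,
# then sample from the damaged model.
import random
import torch
import torch.nn as nn

# Stand-in generator; in practice you would load a fully trained GAN generator.
generator = nn.Sequential(
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
    nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
    nn.Tanh(),
)

def glitch(model: nn.Module, p: float = 0.25) -> nn.Module:
    """Randomly alter, delete, or exchange a fraction p of the weight tensors."""
    state = model.state_dict()
    keys = [k for k in state if state[k].dtype.is_floating_point]
    for key in keys:
        if random.random() > p:
            continue
        op = random.choice(["delete", "alter", "exchange"])
        if op == "delete":
            state[key] = torch.zeros_like(state[key])
        elif op == "alter":
            state[key] = state[key] + 0.5 * torch.randn_like(state[key])
        else:
            other = state[random.choice(keys)]
            if other.shape == state[key].shape:
                state[key] = other.clone()
    model.load_state_dict(state)
    return model

glitched = glitch(generator)
with torch.no_grad():
    image = glitched(torch.randn(1, 64, 8, 8))   # latent input -> 3x32x32 image
print(image.shape)
```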

Writing with machines

Allison Parrish does wonderful creative writing work in collaboration with generative systems. Some of her highlighted work is here, and many projects have open-source code or tutorials. Here’s an example of Allison’s Semantic Similarity Chatbot, which she describes as “uncannily faithful to whatever source material you give it while still being amusingly bizarre”.

Allison Parrish, Semantic Similarity Chatbot
https://gist.github.com/aparrish/114dd7018134c5da80bae0a101866581
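For a sense of the mechanics, here’s a toy sketch of how a semantic-similarity chatbot can work: embed every line of a source conversation, then answer a new message with whatever line followed the most similar one in the corpus. This assumes spaCy’s medium English word vectors and is my own simplification, not Parrish’s implementation (her gist is linked above).

```python
# Answer a message with the line that followed its nearest neighbor in a corpus.
# Assumes: pip install spacy numpy && python -m spacy download en_core_web_md
import numpy as np
import spacy

nlp = spacy.load("en_core_web_md")

# Toy corpus of consecutive dialogue turns; swap in any source material.
corpus = [
    "Where are you going?",
    "To the observatory, to look at the rings of Saturn.",
    "Is it cold outside?",
    "Freezing. Bring the big coat.",
]

# Embed each turn once (averaged word vectors) and normalize for cosine similarity.
vectors = np.array([nlp(line).vector for line in corpus])
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

def reply(message: str) -> str:
    """Return the turn that followed the corpus line most similar to `message`."""
    v = nlp(message).vector
    v = v / np.linalg.norm(v)
    scores = vectors @ v
    best = int(np.argmax(scores[:-1]))   # skip the final line: it has no follow-up
    return corpus[best + 1]

print(reply("are you heading somewhere?"))
# likely: "To the observatory, to look at the rings of Saturn."
```

With a big enough corpus, this is where the “uncannily faithful while amusingly bizarre” quality comes from: every reply is real source text, just retrieved by a machine’s sense of similarity.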

I also often come back to Robin Sloan’s “Writing with the Machine” project from a couple of years ago, where he trained an RNN on a corpus of old sci-fi stories and used it to auto-suggest sentence completions in his text editor.

Robin Sloan, Writing with the Machine
https://www.robinsloan.com/notes/writing-with-the-machine/
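As a toy illustration of that loop (train a character-level model on a corpus, then ask it to finish whatever sentence you’ve started), here’s a small PyTorch sketch. It uses an inline placeholder corpus and a tiny GRU so it runs end to end; Sloan’s actual project trained on a real corpus of old sci-fi stories and wired the suggestions into his text editor.

```python
# Train a tiny character-level RNN and use it to propose sentence completions.
import torch
import torch.nn as nn

corpus = (
    "the ship drifted past the dead star and the crew slept in silver tanks "
    "while the machines whispered to each other about the shape of time "
) * 50  # placeholder text; swap in real source material

chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in corpus])

class CharRNN(nn.Module):
    def __init__(self, vocab: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

# Quick-and-dirty training on random slices of the corpus.
for step in range(300):
    i = torch.randint(0, len(data) - 65, (1,)).item()
    chunk = data[i : i + 65]
    x, y = chunk[:-1].unsqueeze(0), chunk[1:].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

def complete(prompt: str, n: int = 60, temperature: float = 0.8) -> str:
    """Suggest a continuation of `prompt`, sampling one character at a time."""
    model.eval()
    x = torch.tensor([[stoi[c] for c in prompt if c in stoi]])
    with torch.no_grad():
        out, h = model(x)
        text = prompt
        for _ in range(n):
            probs = torch.softmax(out[0, -1] / temperature, dim=-1)
            idx = torch.multinomial(probs, 1)
            text += chars[idx.item()]
            out, h = model(idx.view(1, 1), h)
    return text

print(complete("the crew "))
```

The real version trains far longer on far more text, but the interaction is the same: you type the beginning, and the machine proposes the rest.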

Enjoying the weirdness

From a more playful perspective, I particularly love the work that Janelle Shane has been doing, documented on her site AI Weirdness:

I train neural networks, a type of machine learning algorithm, to write unintentional humor as they struggle to imitate human datasets. Well, I intend the humor. The neural networks are just doing their best to understand what’s going on. 

— Janelle Shane

Here’s her illustration of some of the cookies her neural net came up with when trained on cookie recipes:

Janelle Shane’s neural net-generated cookies
http://aiweirdness.com/

Machines cheat in bizarre ways

One of my favorite things is seeing how machine learning systems will find bizarre ways to “cheat” in order to fulfill the goals that are set for them. Recently, there was a lot of discussion around this AI that steganographically encoded invisible data into maps in order to achieve the stated goal of recreating aerial imagery from said map. There’s also a fantastic Google sheet that describes all the ways various AI systems have found unexpected and strange workarounds!

Indolent cannibals

In an artificial life simulation where survival required energy but giving birth had no energy cost, one species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children).


The literal computational gaze

This last piece is not about machines as collaborators, but it remains one of my favorites because it so powerfully evokes the sense of the machine’s alien gaze. It’s a 2012 video by Timo Arnall called Robot Readable World.

I find this kind of work delightful and meaty, and I hope to see more of it. As soon as I learned to code, I started making generative things — fake ad generators, chatbots, etc. I loved making work that, even though I had shaped it, continued to surprise me. I felt warmth and curiosity towards my strange mechanical collaborators. In a moment where the computational gaze is being used in so many exploitative and questionable ways, I hope that there is also space for work that allows us to explore all that is delightful and creative about our computational companions.

Four methods for good critique

Markers and a whiteboard as a symbol of design critique.

Mitch Goldstein at RIT has put together a lovely little site called How to Crit that reviews the value of design critique and how to give and receive criticism. I love this kind of knowledge sharing — good critique is invaluable in improving one’s work and process. However, it’s a hard skill, and many people give feedback in ways that are unconstructive, vague, or unkind.

So, Mitch inspired me to share what I’ve learned over the years about giving good feedback.

Start by acknowledging the intent

It’s easy to dive right into criticism, but doing so can lead to the creator feeling deflated or defensive, and therefore less receptive. I always try to reflect what the person’s intent was with the work, and to acknowledge what is positive about that intent. This has the benefit of making them feel understood and appreciated, and frames the conversation as a supportive one.

Frame constructive critique in terms of goals

Especially when it comes to design, it’s easy to have reactions that are based on personal aesthetic taste. To avoid this pitfall, I try to identify goals that we all agree on and give my analysis of where the design does or doesn’t achieve those goals. So for example, “The muted color scheme doesn’t convey the sense of youth and liveliness that we want to associate with the brand” is far more useful than “I don’t like the pastels”.

Present problems, not solutions

One of the most common pitfalls in critique is giving prescriptive feedback. Again, this is an easy trap — you see a design and you think, “Aha! If it were only like this, it would work better!” It feels faster and easier to simply share that idea.

However, this approach has two problems. The first is that by being prescriptive, you don’t leave space for someone to explore or to come up with an idea you haven’t considered. Design is a process of problem solving, so by jumping ahead to a solution, you deprive the designer of the opportunity to work through it. The second is that a prescriptive suggestion can hide the underlying problem; if you present the problem instead, you ensure that everyone is aligned on what the problems are and agrees that they are the right ones to solve. “Make the headers bigger” is less useful than “Help the user understand what’s most important on the page”.

Take your time

This one is always hard for me, as it’s easy to feel pressure to respond immediately, especially when work is presented in person for the first time. It often takes me time to process my response and give the most useful feedback. Some strategies I’ve developed for this:

  • Hang back. If you’re in a crit with multiple people, let others talk first and take time to collect your thoughts. Other people’s feedback can also spur your own ideas.
  • Make space for follow up. Take the pressure off of one meeting and make sure there are spaces for conversations to continue more fluidly after a crit or presentation session.
  • Get a sneak peek. If possible, have folks send you visuals before you review them in person. You’ll have less context than you would in a live conversation or presentation, but you can start to develop some initial reactions. (Note: this is really only feasible with teammates you have a lot of trust in. I would not advise sharing in advance with a client or partner who isn’t close to the process. Thanks to Stewart Bailey for the prompt on this!)

Design process for the messy in-between

I tweeted this last week, and figured I should put my keyboard where my mouth is and take a stab at talking about design process for the real world. First, a caveat: I do think it’s valuable to frame ideal processes so that we know what we’re aspiring to. But writing about design process often has an all-or-nothing tone to it: it makes you feel that if you’re not doing it the “right” way, you’re not doing good work and won’t end up with a good product.

So first: there’s no one “right” way to do things. But there is a set of approaches that are generally good practice for user experience and product design: things like talking to your users, making sure to do divergent exploration, getting feedback, and iterating continually. However, it’s rare that I see a designer in a situation where they can execute a design process exactly as they would like to.

Instead, we all end up working in the messy in-between — a place where we need to make trade-offs in our process due to real-world constraints. Those constraints tend to be things like:

  • Limited time: Deadlines won’t always accommodate a perfect process.
  • Skeptical stakeholders: People with authority over the project may not believe in the value of a thorough design process and see it as something that slows down the project or adds to cost.
  • The way things have been done before: If you’re trying to grow a design practice in an organization that hasn’t had a strong design or product culture, change doesn’t happen overnight. 
  • Personnel constraints: Sometimes you don’t have enough people or the right people to execute on all the pieces of the design process thoroughly.
  • Budget: This one is self-explanatory 🙂
  • And much more…

So, given those constraints, how do you decide where to cut corners and where to push for more? What’s a good design process for your design process? 

In my experience, here are a few rubrics for making these decisions:

1. Know your strengths and focus resources on your weaknesses.

What are your core abilities as an individual or a team? If you’re really familiar with your intended users, perhaps you don’t need to go as deep on user research and can instead focus intensively on design exploration. On the other hand, if your team has strong UX/UI design experience and instincts, you might be able to spend less time exploring and iterating and more time talking to users.

This piece of the puzzle requires the ability to accurately self-assess. Be honest with yourself about your strengths and weaknesses, and design your process to support you where you need it most. If you have deep experience in one area, don’t be afraid to trust your instincts.

It can feel like sacrilege to say “we don’t need [x] because we’re really good at [y]”, but remember that ideal design processes are designed to check you — to make sure you’re considering options and needs that you might not immediately think of. Deep experience and skill can also help provide some of those checks and balances.

2. Learn to identify the immovable objects

In looking at your constraints, know which ones are fixed and which can be budged. This is a bit easier with things like budget, time, and people — for example, if you don’t have a budget for extensive user research, it’s clear you will have to work with some guerrilla research tools and approaches. But it’s more challenging to know which cultural pieces are immovable.

For example, you may have a stakeholder who just doesn’t buy the value of a strong design process. Most designers will find themselves in this position at some point, especially if you’re working in-house. Know when not to waste your time on unwinnable arguments. In those situations, there are two paths forward. One is to find small ways to inject better process and show how those approaches led to better outcomes. Seeing tangible proof of the utility of a good design process can lead to more investment and trust in that process for future projects. The second path is — unfortunately — that some stakeholders just won’t be convinced, and this will prove to be a serious constraint on your ability to do deep design work.

It takes time to figure out which situation you are in, but in either case, knowing how fixed your constraints are helps you identify where to focus your efforts.

3. What has to be perfect now and what can be fixed later?

As designers, it’s always crucial to understand the overall product and business strategy for the experiences we’re designing. One of the reasons for this is that it can help to prioritize where to focus resources in our “messy in-between” processes. What features or users are most critical to the success of the product?

Constraints mean that we almost always have to pick things that aren’t going to get as much love and attention as we would ideally like. Can a feature be removed for launch, or is there a scaled-down MVP of that feature that will suffice for now? Which user group has to have their needs deeply met for success? Can other groups’ needs come later? It’s hard not to want everything to be perfect, but knowing what truly has to be perfect can help in focusing limited resources on the right things.

These are by no means exhaustive, but they are a few key rubrics that I frequently use. Most importantly, I hope that we can all share more about how we navigate design in situations that rarely meet the platonic ideal. In doing so, I believe we can alleviate a lot of the guilt and impostor syndrome that seems to be common amongst designers who are worried that they aren’t “doing it right”. Let’s embrace the imperfections of design process in real organizations and projects, and share tools for creating the best work within the constraints of those situations.

Before you make a thing

For his course on Technology & Society, Jentery Sayers has created a document entitled “Before you make a thing” that is a fantastic overview of how to critically approach designing and making with technology. The guide is divided into three sections: Theories and Concepts, Practices, and Prototyping Techniques. Here are a few of my favorite bits:

Examine the “default settings” of technologies; doing so asks for whom, by whom, and under what assumptions they are designed, and who they may exclude and enable. All projects have intended audiences, even if those intentions are not always conscious or deliberate.

Remember that data are produced, not given or captured; doing so emphasizes how this becomes that, or how data is structured, collected, and expressed for interpretation. 

Conjecture with affordances; doing so demonstrates how design is relational. It happens between people, environments, and things; it’s not just a quality or property of objects.

Make a useless or disinterested version of your project; doing so may underscore the creative and critical dimensions of technology and society. After all, not all technologies must increase productivity or efficiency. Consider the roles of technologies in art, theory, and storytelling. 

There’s a wealth of great guidance for both craft and thinking here, along with links to source materials for more in-depth study — go and read the whole thing!

Make America Geocities Again

Clockwise, from top left: Data Diaries, by Cory Arcangel; My Boyfriend Came Back from the War, by Olia Lialina; Arngren.net; Form Art, by Alexei Shulgin; Cameron’s World, by Cameron Askin.

It’s 2018 and the web feels…sanitized. It’s an odd word to use amidst the rampant trolling and politics and problematic speech. But when you look at the systems we use to communicate with each other, we all assemble into the neighborhoods and cul-de-sacs that have been assembled for us, we write on our writing platforms and share on our sharing platforms and artfully compose photos on our photo platforms. We complain about the landlords but we still use all of the privately owned public spaces of the internet as our de facto watering holes.

We’ve all become expert users, but we’re no longer makers. Not in the same way.

I grew up with a web that was more rudimentary in its capabilities but it was clay in our hands. It was material for creating. And some of what was created was gaudy or ridiculous, but it was craft. It was our own glue and yarn creation, not some shiny cookie cutter assemblage we made from a kit. So, while I appreciate the elegance and gloss and ease of use of the tools and platforms we have available to us today, they feel so prescriptive, so limiting, and frankly, so dull.

And yet we are in a moment that has the potential to be so expressive. We are in a moment of political rage, we are in a moment of frustration and creation, where people are coming together and rising up and reclaiming systems and processes to better express their voices. But the internet, our digital infrastructure, offers weak tea, tools designed for an orderly way of being. The idea of a radical tweet or a movement-instigating Instagram post seems laughable. Where is the radical net art of this moment? Where are our geocities pages, our generative bots, our fantastical creations? Why aren’t there more of them?

So why not reclaim the tools of our digital landscape? Why not put our hands in the clay again and see what sculptures emerge? Let’s break out of the sleek, efficient, factory-sealed futures that have been engineered for us to complacently exist in and instead play and rage and make in the wide open fields that have lain fallow too long. Embrace the maximalist moment every design pundit tells us we’re in and make your big and weird thing. Let’s re-learn our tools. Don’t be intimidated by the over-complicated way you’re supposed to build things on the internet today (or, you know, go deep there if that’s your thing) — write your most basic HTML and JavaScript, just get your hands dirty in the tools again. Just start making and see what comes out of you. What would happen if we all did?