Some organizational news

About eight months ago, we started a site and gave it a name: Cog & Sprocket. We knew we wanted to do more writing and thinking together, but we didn’t know much beyond that. So we made a blank canvas to throw some paint on. The name Cog & Sprocket reflected a lot of things: our duality, the importance of individual components in larger systems, a sense of mechanical structure. But mostly, it reflected the fact that we didn’t know what the hell it was yet.

Over the intervening months — several posts and many paint splatters later — a focus and purpose have been emerging. Alexis started our Six Signals newsletter as a way of collecting and thinking through the weak signals of the near future she was seeing. About a month ago, Matt started co-authoring the newsletter. In the meantime, we both started (or are about to start) new jobs. Through our collaboration, we began to rediscover a process we had created at The New York Times R&D Lab for building hypotheses about emerging futures and technological possibilities. We had a lot of late-night conversations, talking through not only what technology makes possible, but how we can design with new tech to intentionally create something better.

That something better is an ethically designed future, one where the ethos of “Move fast and break things” has been replaced with “Move thoughtfully and fix things”. One where systems empower people rather than exploiting them. One where the impact of small decisions is truly considered, and understood to have deep and long-term effects. One where we design with emergent behavior and unintended consequences in mind. One where we spend as much, if not more, time answering the question “What should we make?” as “What can we make?”

And so we’re pleased to announce the creation of the Ethical Futures Lab, where we hope to grow and shape these ideas further. The lab will be a space for writing and analysis, critical making, and community building. Six Signals will continue under the auspices of Ethical Futures Lab, along with more in-depth writing from us as well as guest authors. We’ll be making things as we’re inspired to, much as we did when we worked together at the R&D Lab, in an effort to illustrate better outcomes and inspire others toward them. We’ll be convening conversations among experts and practitioners who are deeply engaged with these issues. And we’ll encourage attention — and at some point, maybe even investment — toward entrepreneurs and creative thinkers who are showing us the best of what technology can do for a just society.

We’re excited to begin, and eager to see how this idea grows and matures. We hope you’ll join us, starting with signing up for the newsletter, but also by reaching out to us with your ideas for topics to cover, your thoughts on potential collaboration, and your hopes for a better tomorrow.

To the future,
Matt and Alexis

Six signals: Ethical design and bio bots


About seven years ago, Matt and I put together a presentation called “Atoms are the new bits”. This week’s signals show just how much the physical and digital have merged, and how much further that interconnectedness could go. On the human front, we were inspired by Anil Dash’s post on trustworthy organizations to share recent tools and analysis that document the impact software has on trust, psychology and relationships. We then took a deeper look at robots that mimic, manipulate, or even connect to living organisms. Read on for all this, and some very trippy GAN Simpsons characters…

If you want to get future issues in your inbox, please sign up for our newsletter.

1: A field guide to manipulated video

The Washington Post published a detailed guide to identifying videos that had been selectively edited, doctored, or even faked altogether in order to manipulate public opinion. Unlike the recent focus on “deepfakes” that rely on advanced computational techniques, the Post’s piece informs readers about simpler techniques — like misrepresentation or isolation — that are just as deceptive. Further, the Post finds examples of manipulation in surprising places, including a documentary on President Obama’s mother and her struggles obtaining health insurance. 

The end of the guide gives readers a chance to submit their own suspect videos and request review from the Post’s fact-checkers.

Seeing isn’t believing: the Fact Checker’s guide to manipulated video

2: The tech industry Columbusing the social sciences

Lilly Irani and Dr. Rumman Chowdhury take Tristan Harris to task for suggesting a “new” field of study he calls Society and Technology Interaction. Beyond the fundamental problem of claiming to discover fields of academic pursuit that have existed for decades, they argue that his statements show how little attention Silicon Valley pays to the underlying ethical and cultural impacts of the technology they build. 

Irani and Chowdhury point to recent #techwontbuildit protests as a good first step in countering unethical technology practices, but suggest that collective action against these practices requires more community engagement and education, so that those most affected can understand the impact of these systems and unite their voices against them. 

To really ‘disrupt’, tech needs to listen to actual researchers

3: Dark patterns in online retail

As a step toward better consumer education on tech ethics, J. Nathan Matias built a guide he calls “Tricky Sites” that lists commerce sites and the manipulative design patterns they employ. The tool is based on research Arunesh Mathur led at Princeton that scanned over 11,000 separate sites, cataloguing 15 distinct “dark patterns” like high-demand messages, “confirmshaming” (e.g. “No thanks, I like paying full price”) and visual interference.
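To make the idea concrete, here’s a toy sketch — not the Princeton team’s actual tooling, and the phrasings are invented for illustration — of how a scanner might flag one such pattern, confirmshaming, by matching opt-out link text against known guilt-tripping phrasings:

```python
import re

# Illustrative phrasings only; the real study crawled ~11,000 sites
# with far more sophisticated detection than simple pattern matching.
CONFIRMSHAME_PATTERNS = [
    r"no thanks,? i (like|prefer|enjoy) (paying full price|losing)",
    r"i don'?t (want|care about) (savings|deals|discounts)",
]

def flags_confirmshaming(opt_out_text: str) -> bool:
    """Return True if the opt-out text matches a known guilt-trip phrasing."""
    text = opt_out_text.lower()
    return any(re.search(pattern, text) for pattern in CONFIRMSHAME_PATTERNS)

print(flags_confirmshaming("No thanks, I like paying full price"))  # True
print(flags_confirmshaming("No thanks"))                            # False
```

A real catalogue like Tricky Sites pairs detection like this with human review, since dark patterns often rely on visual layout rather than wording alone.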

With Tricky Sites, you can quickly look up your online shopping destination of choice and be warned of the tactics you’ll encounter there. By making these patterns visible, and with increased government attention to their impact, sites may be pressured into building more trustworthy interfaces.

Tricky Sites

4: Robots that really flex their muscles

Ritu Raman, an engineer at MIT, is creating robotic mechanisms that incorporate biological tissue and are powered by living skeletal muscles. She uses 3D printing techniques to pattern living cells that can then self-assemble into functional muscle tissue. Raman is hopeful that this hybrid approach can be used for interventions that range from bio-adaptive medical treatments to environmental sensors.

“I’m a mechanical engineer by training, and I’m honestly a little bored building with the materials we’ve been building with for the past thousand years. So I’m making robots and machines that use biological materials to move and walk around and sense their environment, and do more interesting things—like get stronger when they need to and heal when they get damaged.”  

Ritu Raman profile

5: Cyborg botany

On a related note, Harpreet Sareen has been working on connecting plants to digital electronics to transform them into interfaces for both sensing and notification. He has developed methods for plants to sense input (such as motion) and send that signal to a computer, as well as a way for the computer to send a digital signal back to the plant and trigger “soft notifications”, like the leaves retracting or the jaws of a Venus flytrap snapping shut.

Taking the research one step further, Sareen is working to bring the computer and the plant together by embedding digital circuitry in the plant itself. His new project, Planta Digitalis, uses “liquid electronics” to embed conductive wire inside the stem of a rose plant that could then be controlled by sending electrical signals into the wire. His ultimate goal is to create a true plant-computer hybrid, by finding the “right mixture of chemicals that would react with the plant’s internal structure in a way that would self-assemble into an organic circuit”.

Plants are the oldest sensors in the world. Could they be the future of computers?

6: Solar-powered robot bees

Image: Eliza Grinnell / Harvard Microrobotics Laboratory

For the final robotics signal of the week, take a look at the RoboBee X-Wing, a tiny solar-powered robot bee drone. RoboBee uses flapping wings like an insect to allow for greater agility, and the new solar-powered addition means that it no longer needs to be tethered to a power source and can fly on its own — only for half a second at the moment, unfortunately. Power sources have always been a tricky problem for small-scale robotics, as with these drones or medical nanobots. When the scale of the mechanism is too small to carry an onboard battery, there need to be more creative ways of generating and storing power, so solar power at this scale is a promising, novel approach.

What could possibly be cooler than RoboBee? RoboBee X-Wing.

One “D’oh!” thing

Michael Friesen made some neural-net-generated Simpsons characters and the results are both hilarious and troubling.

If you want to get future issues in your inbox, please sign up for our newsletter.

Six signals: Reality training and robot furniture


In this week’s signals we are hearing a lot about spaces: real, virtual, hidden, and cramped. The people who use those spaces sometimes get replaced by machines, and those machines are very keen to understand what we do in them. Should we keep telling them?

If you want to get future issues in your inbox, please sign up for our newsletter.

1: Context-aware fill for the real world

This speculative short fiction explores a near future where mixed reality — in the form of ubiquitous “iGlasses” — has become prevalent. What’s most compelling about this piece is the exploration of a version of AR where “augmentation” includes editing out parts of the real world; a context-aware fill for reality. You could easily imagine a future like this where, if you pay extra, you no longer have to see billboards. 

And then, without really knowing what I was doing, I took my iGlasses off. I know, I know. Who does that anymore? It’s so easy to forget you’re wearing them, especially since they delete other people’s glasses from your field of vision. I got this rush of panic when the object tags, memory aids and search bar disappeared. It felt like diving into a pool and not knowing where the surface was.

Keep Your Augmented Reality. Give Me a Secret Garden.

2: Flat-pack robot furniture

Video: IKEA

IKEA has announced a new line of adaptive, robotic furniture that is controlled via touchpad and can convert into a bed, couch, desk, or wardrobe, depending on your needs at the moment. It is intended to maximize the use of space in small, urban apartments, appealing to the increasing percentage of the world’s population that live in high-density cities. The furniture line, called Rognan, will launch in Hong Kong and Japan in 2020.

ROGNAN robotic furniture for small space living

3: Building faces from speech

MIT researchers have published a paper describing an AI that can approximate a face from recordings of the voice that produced it. Trained on millions of YouTube videos, its guesses are directionally correct, often getting ethnicity, gender, and basic bone structure right. The researchers found that their models pick up subtle differences in tone and timbre, but caution that the models depend on a limited set of training data, which (as with most similar attempts) means the system works better for populations that are better represented online.

Recordings of voices may be a new frontier in privacy considerations as technologies like these improve. Already Lyrebird is offering synthetic voice generation based on recordings of a person’s voice, and as the article points out, companies like Chase are using voice-matching technologies to identify customers. It’s something to consider the next time your latest conference talk gets uploaded to YouTube.

With this AI, your voice could give away your face

4: Humans are dead, long live humans

Image: Walmart

The Washington Post tells us the dystopian story of “Freddy”, a floor-cleaning robot that is named after a janitor who was fired, making way for his automated replacement. Walmart’s recent investments in a variety of automated retail robots are changing the way their human workers approach their own work:

Their jobs, some workers said, have never felt more robotic. By incentivizing hyper-efficiency, the machines have deprived the employees of tasks they used to find enjoyable. Some also feel like their most important assignment now is to train and babysit their often inscrutable robot colleagues.

As Walmart turns to robots, it’s the human workers who feel like machines

5: A design solution for better data privacy

Designer Lennart Ziburski has developed and shared a design system for increasing data privacy while still enabling the personalization we’ve come to expect from modern apps. This proposal has two parts: an on-device “circle of trust” where apps can share data with each other freely, but only on your device, and a set of “data permits” for when a service wants to share your data.

While a compelling and detailed proposal, it may be technically impossible to enforce. Once an app has access to your calendar or contacts locally, it’s trivial to design a way to transmit that information, even if the OS has a “permit” scheme or similar that app developers are encouraged to comply with. Those that do comply will likely use that as a premium feature for those who can pay to keep their data private, with others unwittingly paying for their “free” services with their purchase histories, location data, and social graphs.

Rethinking how technology uses our personal data

6: Making fake-reality to help AI understand real-reality

Facebook Reality Labs (what a name!) has taken high-resolution 3D scans of real offices and homes for use in training their systems to better understand human spaces, identify objects, and practice navigation. Building virtual spaces as inputs to navigation and recognition routines has greatly increased the speed at which these systems can be trained; experiments that once took months can now be accomplished in just a few hours. 

Facebook plans to release their AI Habitat system, and the underlying Replica data set, so that the AI community can benefit equally. The hope is that navigational systems will soon have a deeper understanding of human spaces, eventually leading to “social presence” AI with avatars that know where to sit, stand, and walk.

Facebook has built stunning virtual spaces for its AI programs to explore

One more (haptic) robot thing…

If you want to get future issues of Six Signals in your inbox, please sign up for our newsletter.

Six Signals: DNAdvertising and biometric tattoos


This week’s Six Signals contains several speculative explorations, from a rethinking of the operating system to biometric tattoos to augmented art, as well as some even weirder things happening in the actual present, like genetic tourism marketing.

If you want to get future issues in your inbox, please sign up for our newsletter.

1: Humans pretending to be machines pretending to be humans

Google Duplex, a service that places automated calls on behalf of people, was recently rolled out to a larger number of users. Duplex can perform tasks like making restaurant reservations and other appointments, and is predicated on having bots that are nearly indistinguishable from real people. Well, it turns out that about 25% of supposedly automated calls are actually made by real people working in a call center. We’ve seen this before with services like Facebook M, and it demonstrates that it’s still remarkably hard to make machines seem believably human in social contexts. It also means that, once again, the supposed cost savings of automated systems are often offset by the need for human intervention and support.

Google’s Duplex Uses A.I. to Mimic Humans (Sometimes)

2: DNAdvertising

So this is happening now. Last week, Airbnb announced a partnership with 23andMe to recommend “heritage travel destinations” based on your genetic profile. This is likely just the first harbinger of many services that will use your DNA to market personalized products to you, and it brings up ethical questions about how to handle genetic data responsibly. Obviously, consent is key, as well as approaches like forgetful databases that don’t store your information forever. But there are lots more meaty questions, like how to prevent genetic data being used in discriminatory ways and how to communicate the scientific validity of genetically personalized services.

Airbnb teams up with 23andMe to recommend heritage travel destinations

3: A city is not a computer

Huge thanks to Christopher Kent for sending this wonderful piece my way! Shannon Mattern dives deep into the premise of smart cities and explores the complexity of how information and lived spaces intersect.

We have to grapple with the political and ethical implications of our methods and models, embedded in all acts of planning and design. City-making is always, simultaneously, an enactment of city-knowing — which cannot be reduced to computation.

A City Is Not a Computer

4: Speculative biometric tattoos

In a recent episode of the Flash Forward podcast, Rose Eveleth dove deep into a possible future where medical tattoos can continuously monitor and visually respond to various biomarkers. While these speculative tattoos are probably a decade or more away, there is a lot of research being done in this area. The episode delves into the myriad potential benefits for monitoring health conditions, but also explores the ethical conundrums and possible abuses of this kind of technology.


5: Rethinking the OS

Image: Jason Yuan

RISD student Jason Yuan has developed a compelling approach to rethinking the operating system user experience. The concept he developed, called Mercury OS, is based on the building blocks of Flows, Modules, and Spaces, with the intention of creating “something that users could move through without friction or boundaries.”

Introducing Mercury OS

6: When your watch tells you how to socialize

Bloomberg reported last week that Amazon is in the midst of developing a wearable device that is intended to sense your emotional state based on your voice input. The project is in early research phases and may never actually get produced, but it brings up a number of complex design and ethics questions. The internal documents that Bloomberg reviewed suggest that “the technology could be able to advise the wearer how to interact more effectively with others”, which brings up huge questions about what social norms the technology would encode. Not to mention that using tone of voice as a marker of emotion assumes that you express yourself in a standard way, which doesn’t account for folks who aren’t neurotypical, for example.

Amazon Is Working on a Device That Can Read Human Emotions

One beautiful thing: Augmented art

Artists Claire Bardainne and Adrien Mondot created Mirages & Miracles, an installation that uses AR and VR in stunningly beautiful ways.

Ranging from small to large-scale work, this corpus of installations offers a delicate coincidence between the virtual and the material using augmented drawings, holographic illusions, virtual-reality headsets, large-scale projections. It offers a unique ensemble of improbable scenarios that takes root in both the mirage and the miracle, and plays with the boundaries between true and false, the animate and the inanimate, the authentic and the deceptive, the magical, the wondrous, and the indescriptible.

Mirages & miracles

That’s all for this week! Again, sign up for the newsletter to get future issues in your inbox:

Six signals: insurance dystopias and Weird Facebook


This week is kicking off with a couple of pretty dark future signals, but it gets more fun at the end, I promise!

If you want to get future issues in your inbox, please sign up for our newsletter.

1: Insurance dystopias

We could all probably see this coming a mile (or maybe 10,000 steps?) away, but now that we’re self-tracking and publishing so much data about ourselves, insurance companies are starting to use that data. Sarah Jeong writes that the cutting edge of the insurance industry involves using data — from your step count to your social media posts — to adjust premiums algorithmically.

And of course, since every new surveillance tactic begets an adversarial hack, there are phone cradles being made to artificially boost step counts to avoid premium increases.

Insurers Want to Know How Many Steps You Took Today

2: Is it illegal to opt out of facial recognition?

Police in London conducted a public street trial with facial recognition cameras. A man who covered his face as he walked by the cameras was stopped by officers, forced to submit to being photographed, and then arrested on a charge of public disorder after complaining loudly.

London police arrest man who covered face during public facial recognition trials

3: Jellyfish and insects for dinner

Sainsbury’s, the UK’s second-largest supermarket, has commissioned a report that explores the future of food in 2025, 2050, and 2169.

By 2169 it could be routine for people to hold details of their nutritional and health information in a personal microchip embedded in their skin, which will trigger an alert to the supermarket. It would then deliver by drone suitable food and drink based on their planned activities for the coming days.

Jellyfish supper delivered by drone? Radical future predicted for food.

4: Cars are the horses of the future

I’m always astonished that conversations around autonomous vehicles are so constrained by our current conception of what a “car” is. There’s a tendency to assume that cars will play the same role, but just be self-driving. But really, autonomous vehicles open an enormous possibility space around mobile housing, algorithmic shops, autonomous caravans, and floating offices, to name just a few. Chenoe Hart’s piece on self-driving cars points to a number of untapped design opportunities:

The Hy-Wire’s technology suggests that the focus of car design could turn inward, yielding a range of new possibilities for vehicle interiors. Our future passenger experience might bear little resemblance to either driving or riding within a vehicle; we’ll inhabit a space that only coincidentally happens to be in motion.

Perpetual Motion Machines

5: Empathetic ears

This very optimistic report looks at the possibility for in-ear devices, or “hearables”, to track a variety of biological and audio signals in order to adjust our environments to reduce stress and create more positive experiences. While I’m highly skeptical of future scenarios that rely on all the “smart” things working perfectly and humanely together, I also appreciate the idea of empathy as a core UX principle.

Hearables will monitor your brain and body to augment your life

6: Weird Facebook

Taylor Lorenz’s latest Atlantic piece digs into Facebook tag groups, which are part of the larger Weird Facebook genre (who knew?). People describe tag groups as being reminiscent of forum culture and earlier eras of internet culture. With Facebook’s new focus on Groups, there’s a clear opportunity here to learn from users’ emergent behavior, though Facebook seems to be taking a more top-down approach:

Zuckerberg’s vision for groups—a sort of digital version of the local knitting circle, kayaking club, or mom’s meet-up—is very different from the ground-up group culture that is dominated by one particular format: the tag group.

The groups bringing forum culture to Facebook

One playful thing

Six signals: Authenticity in AI and social media aesthetics


1: The future of voice assistants is…phones?

Last week, Audible (a subsidiary of Amazon) introduced a feature that allows U.S. owners of Amazon Echo devices to call Audible’s live customer service line. What’s interesting here is the concept of building on top of systems that are already voice-driven (a.k.a. the phone) rather than trying to convert visual user experiences into conversational ones. According to The Verge, this is the first Alexa-powered customer support service. For now, it simply provides a link to existing human support representatives, but we can easily read it as a signal of competition with Google’s Duplex, which uses human-sounding bots to make phone calls on your behalf for structured tasks like booking appointments.

Audible launches the first Alexa-powered customer support line

2: Art in the age of computational production

The Huawei P30 Pro is known for having one of the top smartphone cameras on the market. But one camera feature set off some recent controversy:

Using Moon Mode, a Huawei P30 Pro owner can take a close-up picture of the moon with no tripod or zoom lens necessary. Reportedly, the feature works by using the phone’s periscope zoom lens combined with an AI algorithm to enhance details in the photo.

However, some photographers who have been testing the camera claim that Huawei is going beyond enhancement and actually replacing parts of the image with pre-existing images of the moon. There’s a fascinating set of questions embedded in this controversy: How much do we want computers to “help” us? What constitutes the boundary between “real” and “fake”? At what point does computational augmentation erode authenticity?

Huawei P30 Pro ‘Moon Mode’ stirs controversy

3: Drone delivery on the horizon

Image: Wing

The Federal Aviation Administration recently awarded their first air carrier certification to a drone delivery company. Wing, which is a subsidiary of Google’s parent company, Alphabet, will begin delivering products by drone in Virginia as part of a pilot project. Previously, Wing had been testing its technology in Canberra, Australia.

When a Wing drone makes a delivery, it hovers at about 20 feet and lowers the package on a hook. Customers can select what they want delivered on an app.

Wing, Owned by Google’s Parent Company, Gets First Approval for Drone Deliveries in U.S.

4: “Fashion forward” wearables for recording your life

Image: Opkix

Opkix is the latest company to take a stab at the wearable camera market, with a set of accessories that include necklaces, sunglasses, and rings. We’ve seen some pretty spectacular failures in this space before, most notably Google Glass and Snap Spectacles. Does Opkix provide a combination of compactness and fashion that can change the game? Is the moment suddenly ripe for something that has seen failures in the past (we’ve seen this before with both digital music players and ebook readers)? Or is this a solution without a real problem to be solved?

Opkix One camera and accessories

5: Shifting social media aesthetics

Speaking of authenticity, Taylor Lorenz’s piece in The Atlantic last week notes a reactionary trend against the “Instagram aesthetic”. While the social media platform has become famous for highly polished, stylized glamour shots, that look seems to be going out of style in favor of more unfiltered, low-production aesthetics.

In fact, many teens are going out of their way to make their photos look worse. Huji Cam, which makes your images look as if they were taken with an old-school throwaway camera, has been downloaded more than 16 million times. “Adding grain to your photos is a big thing now,” says Sonia Uppal, a 20-year-old college student. “People are trying to seem candid. People post a lot of mirror selfies and photos of them lounging around.”

Of course, it’s all a pendulum, so if you’re still ‘gramming your rainbow food, it’s only a matter of time before you’re back on trend again.

The Instagram Aesthetic Is Over

6: Training for robotic futures

OK, it’s a bit overdone to horror-post Boston Dynamics robots, but this video inside their testing facility is pretty fascinating. I especially like the sign that says “Not safe for humans. Robots only.”

Six Signals is a biweekly look at interesting signals of the near future — how technology, design, and more are changing our society and our personal experiences.

Playable systems: 3 principles for ethical product design

Photo: Jorge Royan / Wikimedia

One of the reasons UX design is such a compelling practice to me is that, rather than designing static artifacts, we design systems that shape the possibilities, expectations, and constraints for how people engage with the world.

That work, to shape how people engage with the world around them, carries a lot of power. And as we all know from Spider-man, with great power comes great responsibility. Increasingly, we are surrounded by digital products and experiences that abdicate that responsibility — that focus on short-term profitability over creating products that work well for the people (and societies) that use them.

So, what kinds of systems should we be creating?

I’ve been working with a framework that I call “playable systems”. Playable systems are ones which empower the people who use them. I use the term “playable” because I think that empowering products are ones that afford virtuosity, in the way that a musical instrument might. They can be easily approached by beginners, but can be mastered and played in highly complex ways.

How do you design a playable system?

The three principles of “playable systems” (this is what I’ve got so far, but there may be more!):

1. A playable system keeps the human in the loop

When we design with technology, we are often designing ways to automate tasks or decisions. However, it is critical that we don’t automate agency away from the user at the moments when they need it most. For example, Fitbit came under fire last year when it released a period tracker that didn’t allow women to enter irregular periods outside of its assumed “normal” range. My favorite extreme anti-pattern is this video of a person unable to turn off his Nest Protect even though there was no smoke in his house (spoiler: he eventually shoves them all into coolers in a desperate attempt to muffle the noise). Whenever we automate a decision or make an assumption about what a user will want, it’s important to allow for human override.
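At the code level, “human in the loop” can be as simple as modeling every automated assumption as a default the user can always override. This is an illustrative sketch, not any particular product’s implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CycleTracker:
    """Toy model of an automated assumption with a human override."""
    assumed_cycle_days: int = 28           # the system's default assumption
    user_cycle_days: Optional[int] = None  # explicit user input always wins

    def set_cycle_length(self, days: int) -> None:
        # Accept the user's value even outside the "normal" range,
        # rather than silently clamping it to the system's assumption.
        self.user_cycle_days = days

    @property
    def cycle_days(self) -> int:
        if self.user_cycle_days is not None:
            return self.user_cycle_days
        return self.assumed_cycle_days

tracker = CycleTracker()
print(tracker.cycle_days)     # 28 — the automated default
tracker.set_cycle_length(45)  # irregular, but the human decides
print(tracker.cycle_days)     # 45
```

The design choice is that automation provides a starting point, never a ceiling: the system’s guess fills in when the user is silent, and steps aside the moment they speak.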

2. A playable system is a legible system

In order to truly allow for virtuosity, one needs to be able to understand how the system operates. How can we design experiences where people can deeply understand how a platform works, especially when some interactions may be algorithmically determined? How do we allow for systems to be interrogated? These questions become more complicated and essential as the technologies we use — like neural networks — are harder for humans to read. There is also an interesting interplay between transparency and legibility. We want users to be able to see how things work, but sometimes too much transparency can actually reduce legibility. Finding the right balance can make the inner workings of a system clear and accessible for all.

3. A playable system can evolve in creative ways

Playable systems should be open enough that they allow space for emergent behavior and can grow in ways that extend the experience beyond its initial design. They are ideally extendable, flexible, and with clear pathways to build on top of the foundation. One of the reasons for Twitter’s popularity is that it made space (at least in its early years) for a multitude of emergent behaviors. Some of the core features of the service today were user-invented hacks, like hashtags, @replies, and threads.

I was recently reading Ursula Franklin’s The Real World of Technology, and her framing of “holistic technologies” is akin to this concept of playable systems. She describes holistic technologies as ones that “leave the individual worker in control of a particular process of creating or doing something.” She contrasts them with “prescriptive technologies”, which are rigid and enforce a particular process.

The web as a playable system

I recently collaborated with Caresse Haaser on an animated meditation on the open web. One of the reasons I think it’s important to talk about the open web now is because it is a playable system. It’s the reason that the web used to be more diverse, idiosyncratic, and delightfully weird. You can read it, write it, and make it your own.

As closed social platforms have come to dominate over the past decade, our experiences have become more constrained, more homogeneous, and less self-directed. That is largely because many of these closed platforms are explicitly not playable systems. They are the epitome of Ursula Franklin’s “prescriptive technologies” in that they rigidly prescribe how we can express ourselves.

Constraints as a starting point aren’t inherently bad, but only if the playable principles are in place as well. These platforms aren’t legible, however; they are explicitly black boxes. They also don’t allow for much emergent behavior, so they don’t evolve. Instead, their growth is prescribed by their owners, not by their users.

The third wave of connected experiences

I’m curious as to how we can build new kinds of experiences that are explicitly designed as playable systems. What does a “third wave” of the web look like that affords some of the ease and connectivity of social platforms, but in a way that is designed to empower the people using it rather than exploiting their behavior or personal data? How do we create incentives or constraints for experiences that are ethical and benefit our societies? As designers, can we move away from the principles of addiction and virality toward ones that support better, more humane connected experiences?

Six Signals: unionized memers and biblical AI

Welcome back to Six Signals! For those of you joining me for the first time, this is a biweekly look at some of the interesting signals of the near future — how technology, design, and more are changing our society and our personal experiences.

1: Seizing the “memes of production”

Folks who create Instagram memes are organizing to form a union. Yes, really. The argument is that meme creation is a growing type of labor that has none of the formal protections that other types of workers do. The Atlantic’s piece on the union acknowledges that “the IG Meme Union will probably never be recognized by the National Labor Relations Board, but organizers say it can still act as a union for all intents and purposes.”

The primary issue the organizers are addressing is selective censorship on the part of Instagram. They want a more transparent appeals process, as well as better ways of ensuring that memers’ work isn’t monetized unfairly by others.

Instagram memers are unionizing

2: Iconography for digital surveillance

Sidewalk Labs, Alphabet Inc.’s urban innovation organization, is developing a design language for public signage that will indicate when digital technologies are in use in public spaces and for what purpose. We are increasingly being “read” by any number of digital sensors in public spaces, from CCTV to door sensors to traffic cameras to Bluetooth and WiFi signals, but that sensing is invisible and therefore can’t be interrogated. The iconographic system is meant to bring more transparency to these interactions.

The project has raised some interesting debate about whether a design system like this leads to any kind of citizen empowerment, or if it aestheticizes and normalizes a culture of surveillance.

How can we bring transparency to urban tech? These icons are a first step.

3: How does God feel about AI?

The Southern Baptist Convention’s public-policy arm, the Ethics and Religious Liberty Commission, spent nine months researching and writing a treatise in response to artificial intelligence from an evangelical viewpoint. As far as I know, this is a rare example of a religious entity formally applying church principles to new technologies.

TL;DR: the document is mostly quite optimistic about AI, though it draws the line at sex bots and specifies that robots should never be given equal worth to humans.

How Southern Baptists are grappling with artificial intelligence

4: The dark side of optimization

In a recent New York Times article about Soylent’s new product line (surprise, it’s food!), there’s a disturbing note about Soylent’s foray into becoming a supplier for Uber, Lyft, and Postmates drivers.

Andrew Thomas, Soylent’s vice president of brand marketing, found an interesting gap in the tech industry — not, this time, at corporate offices, but in the gig economies their industry designed and oversees, where maximizing efficiency is more of an algorithmic mandate than it is a way to signal your sophistication.

It turns out Soylent is stocking fridges in the driver hubs for Lyft (and has a discount code for drivers) and has a partnership with the company that supplies Uber drivers with food. They are looking to do the same with Postmates.

Through these partnerships, potential and established, Soylent will complete a sort of circuit, taking its product, once a lifestyle choice for a small group of technology overlords, and pushing it as a lifestyle necessity to the tech underclass for whom every moment spent on things like eating instead of working means less money.

Here’s Soylent’s new product. It’s food.

5: The link between technophilia and fascism

Rose Eveleth has written a thoughtful analysis of the early twentieth-century Futurist movement, which was aggressively optimistic about the new technologies of its time — and which also supported the growing Fascist politics in Europe. She draws a link between the two, cautioning that there are echoes of similar sentiments in the tech community now.

This love of disruption and progress at all costs led Marinetti and his fellow artists to construct what some call “a church of speed and violence.” They embraced fascism, pushed aside the idea of morality, and argued that innovation must never, for any reason, be hindered.

Bottom line: we need to be thoughtful about how we apply technology, or else it can lead to applications that diminish our humanity.

When Futurism Led to Fascism—and Why It Could Happen Again

6: Defunct QR code tattoos

Want to know when the next Six Signals is available? Follow @cog_sprocket on Twitter or sign up for the email list.