Six Signals: unionized memers and biblical AI


Welcome back to Six Signals! For those of you joining me for the first time, this is a biweekly look at some of the interesting signals of the near future — how technology, design, and more are changing our society and our personal experiences.


1: Seizing the “memes of production”

Folks who create Instagram memes are organizing to form a union. Yes, really. The argument is that meme creation is a growing type of labor that has none of the formal protections other kinds of workers enjoy. The Atlantic’s piece on the union acknowledges that “the IG Meme Union will probably never be recognized by the National Labor Relations Board, but organizers say it can still act as a union for all intents and purposes.”

The primary issue the organizers are addressing is selective censorship on the part of Instagram. They want a more transparent appeals process, as well as better ways of ensuring that memers’ work isn’t monetized unfairly by others.

Instagram memers are unionizing


2: Iconography for digital surveillance

Image: Sidewalk Labs

Sidewalk Labs, Alphabet Inc.’s urban innovation organization, is developing a design language for public signage that will indicate when digital technologies are in use in public spaces and for what purpose. We are increasingly being “read” by any number of digital sensors in public spaces, from CCTV to door sensors to traffic cameras to Bluetooth and WiFi signals, but that sensing is invisible and therefore can’t be interrogated. The iconographic system is meant to bring more transparency to these interactions.

The project has sparked interesting debate about whether a design system like this leads to any kind of citizen empowerment, or whether it merely aestheticizes and normalizes a culture of surveillance.

How can we bring transparency to urban tech? These icons are a first step.


3: How does God feel about AI?

The Southern Baptist Convention’s public-policy arm, the Ethics and Religious Liberty Commission, spent nine months researching and writing a treatise on artificial intelligence from an evangelical viewpoint. As far as I know, this is a rare example of a religious body formally applying church principles to new technologies.

TL;DR: the document is mostly quite optimistic about AI, though it draws the line at sex bots and specifies that robots should never be given equal worth to humans.

How Southern Baptists are grappling with artificial intelligence


4: The dark side of optimization

In a recent New York Times article about Soylent’s new product line (surprise, it’s food!), there’s a disturbing note about Soylent’s foray into becoming a supplier for Uber, Lyft, and Postmates drivers.

Andrew Thomas, Soylent’s vice president of brand marketing, found an interesting gap in the tech industry — not, this time, at corporate offices, but in the gig economies their industry designed and oversees, where maximizing efficiency is more of an algorithmic mandate than it is a way to signal your sophistication.

It turns out Soylent is stocking fridges in Lyft’s driver hubs (and offers a discount code for drivers) and has a partnership with the company that supplies Uber drivers with food. It’s looking to do the same with Postmates.

Through these partnerships, potential and established, Soylent will complete a sort of circuit, taking its product, once a lifestyle choice for a small group of technology overlords, and pushing it as a lifestyle necessity to the tech underclass for whom every moment spent on things like eating instead of working means less money.

Here’s Soylent’s new product. It’s food.


5: The link between technophilia and fascism

Rose Eveleth has written a thoughtful analysis of the early-twentieth-century Futurist movement, which was aggressively optimistic about the new technologies of its time — and also supported the growing fascist politics of Europe. She draws a link between the two, cautioning that there are echoes of similar sentiments in the tech community now.

This love of disruption and progress at all costs led Marinetti and his fellow artists to construct what some call “a church of speed and violence.” They embraced fascism, pushed aside the idea of morality, and argued that innovation must never, for any reason, be hindered.

Bottom line: we need to be thoughtful about how we apply technology, or else it can lead to applications that diminish our humanity.

When Futurism Led to Fascism—and Why It Could Happen Again


6: Defunct QR code tattoos


Want to know when the next Six Signals is available? Follow @cog_sprocket on Twitter or sign up for the automattic.design email list.

Six Signals: Trusting your smart things & the internet of brains


1: “What if I don’t trust my car?”

Simone Rebaudengo, who creates gorgeous speculative design projects, has put together a “Future FAQ” that reviews some of the strange and compelling questions we’ll need to answer in the very near future, including:

  • How smart is a smart home?
  • What if I disagree with my thermostat?
  • What if I don’t trust my car?
  • Can my house robot travel with me?
  • Can I diminish reality?

Future Frequently Asked Questions


2: The growing fluidity of media formats

Last month, I shared Descript, a video editor that lets you edit footage via its automated text transcription rather than by manipulating the video itself. It’s one of many signals pointing to the increasing malleability of media formats, where text can become video can become audio can become text again, effortlessly and with very little loss of fidelity. The latest is Tayl, which turns any website or piece of online content into your own personalized podcast, read to you later.
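
As a toy illustration of that fluidity, here is a minimal sketch of the website-to-audio idea (not how Tayl actually works; just the general pipeline), assuming the requests, beautifulsoup4, and gTTS packages:

```python
# Hypothetical sketch: scrape a page's readable text and read it aloud.
# Illustrative only; Tayl's real pipeline is not public.
import requests
from bs4 import BeautifulSoup
from gtts import gTTS

def page_to_audio(url: str, out_path: str = "episode.mp3") -> None:
    html = requests.get(url, timeout=10).text
    # Strip the markup down to plain readable text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    # Synthesize the first chunk to an MP3 (kept short for the demo)
    gTTS(text[:2000]).save(out_path)

page_to_audio("https://example.com")
```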

Tayl: Turn websites into podcasts


3: The future of AI is…prison labor?

Inmates at a prison in Finland are working to classify data to train artificial intelligence algorithms for a startup called Vainu. While the PR for this effort emphasizes “job training” and “teaching valuable skills”, it’s clear that this is another signal in the growing set of labor issues behind all that magical automation and machine intelligence.

Inmates in Finland are training AI as part of prison labor


4: Stock voices are the new stock photos

WellSaid Labs in Seattle is working to create a wide variety of synthetic voices that sound remarkably like real people. It uses recordings of voice actors to train a neural network that generates new, artificial voices.

WellSaid Labs isn’t planning to take over the voice-assistant market, though. Rather, it hopes to sell the voices to companies that want to use them in advertising, marketing and e-learning courses.

You’ve probably heard of stock photos; you might think of this as stock voices.

Watch out, Alexa. Artificial voices are starting to sound just like humans


5: 3D printing as a path to accessibility

IKEA recently announced its ThisAbles project, which provides a library of 3D-printable extensions that can be added to IKEA furniture to make it accessible for customers with disabilities. The upside of the 3D printing approach is that anyone can submit a proposed solution to add to the library. The downside is that it’s only usable by those with access to 3D printers, which still isn’t a majority of the population.

ThisAbles


6: The internet of brains

In the “very future signals” category, neuroscientists have successfully connected the brains of three people, allowing them to do what the researchers describe as “sharing their thoughts”. This telepathy is fairly rudimentary, basically transmitting the on or off state of a light to indicate a yes or no response. But still: networked brains! Pretty neat.

Brains of 3 People Have Been Successfully Connected, Enabling Them to Share Thoughts


One fun thing: mixed reality eyedroppers


If you would like to receive Six Signals in your inbox, sign up for the Automattic.Design mailing list.

The computational gaze

Image: Tim Ellis, Flickr

I’ve written and spoken before about what I call mechanomorphism — a term I coined to describe the concept of machine intelligence as a companion species. This framing of AI is distinct from anthropomorphism, where we try (and inevitably fail) to make machines approximate human behavior. Instead, I envision a future where we appreciate computers for the ways in which they’re innately “other”.

Another way to put it is that I’m fascinated by the computational gaze — how machines see, know, and articulate the world in a totally alien manner. I’ve been talking a lot with my boss, John Maeda, about computational literacy and how to help people understand foundational concepts of computing. But computational literacy posits the machine as a tool (which it often is!). The computational gaze, on the other hand, suggests the machine as a collaborator or companion intelligence.

Collaborating with machine intelligence means being able to leverage that particular, idiosyncratic way of seeing and incorporate it into creative processes. This is why we universally love the “I trained a neural net on [x] and here’s what it came up with” memes. Their “almost-but-not-quite-ness” lets us delight in the strangeness of that unfamiliar gaze, but it can also help us see hidden patterns and truths in our human artifacts.

The increasing accessibility of tools for working with machine learning means that I’m seeing more examples of artists, writers and others treating the machine as collaborator — working with the computational gaze to create work that is beautiful, funny, and strange. Here are some folks who are doing particularly interesting work in this arena:


Visual feedback loops

In the visual arts, Ronan Barrot and Robbie Barrat have a show in Paris where they collaborate with a GAN to paint skulls. “It’s about having a neural network in a feedback loop with a painter, influencing each other’s work repeatedly — and the infinitude of generative systems.”

Mario Klingemann has also been playing with GANs in his “Neural Glitch” series:

“Neural Glitch” is a technique I started exploring in April 2018 in which I manipulate fully trained GANs by randomly altering, deleting or exchanging their trained weights. Due to the complex structure of the neural architectures the glitches introduced this way occur on texture as well as on semantic levels which causes the models to misinterpret the input data in interesting ways, some of which could be interpreted as glimpses of autonomous creativity. 

—Mario Klingemann
Mario Klingemann, Neural Glitch
http://underdestruction.com/2018/10/28/neural-glitch/
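
Klingemann hasn’t published this as a recipe, but the general idea is easy to sketch. Here is a minimal, hypothetical version, assuming PyTorch and an already-trained model (the `generator` name below is made up):

```python
# Hypothetical weight-glitching sketch, not Klingemann's actual code:
# corrupt a random fraction of a trained network's weights and see what
# its outputs become.
import torch

def neural_glitch(model: torch.nn.Module, fraction: float = 0.01,
                  mode: str = "zero") -> torch.nn.Module:
    """Corrupt a random `fraction` of each parameter tensor in place."""
    with torch.no_grad():
        for param in model.parameters():
            mask = torch.rand_like(param) < fraction   # pick weights at random
            if mode == "zero":                          # "deleting" weights
                param[mask] = 0.0
            elif mode == "noise":                       # "altering" weights
                param[mask] += torch.randn_like(param[mask]) * param.std()
            elif mode == "shuffle":                     # "exchanging" weights
                vals = param[mask]
                param[mask] = vals[torch.randperm(vals.numel())]
    return model

# e.g., glitched = neural_glitch(generator, fraction=0.02, mode="shuffle")
```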

Writing with machines

Allison Parrish does wonderful creative writing work in collaboration with generative systems. Some of her highlighted work is here, and many projects have open-source code or tutorials. Here’s an example of Allison’s Semantic Similarity Chatbot, which she describes as “uncannily faithful to whatever source material you give it while still being amusingly bizarre”.

Allison Parrish, Semantic Similarity Chatbot
https://gist.github.com/aparrish/114dd7018134c5da80bae0a101866581
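
For flavor, here is a stripped-down sketch of the retrieval idea behind a chatbot like this: embed a corpus, then answer with whatever is closest to the user’s input. Parrish’s actual implementation is more clever (it replies with the line that followed the most similar turn in the source material); this version, which assumes the sentence-transformers package, simply returns the most similar line itself.

```python
# Simplified semantic-retrieval sketch, not Parrish's implementation.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "It was a dark and stormy night.",
    "Do you come here often?",
    "The ship drifted silently between the stars.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus_vecs = model.encode(corpus, normalize_embeddings=True)

def reply(utterance: str) -> str:
    vec = model.encode([utterance], normalize_embeddings=True)[0]
    scores = corpus_vecs @ vec  # cosine similarity, since vectors are unit-length
    return corpus[int(np.argmax(scores))]

print(reply("What do you see out the porthole?"))
```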

I also often come back to Robin Sloan’s “Writing with the Machine” project from a couple of years ago, where he trained an RNN on a corpus of old sci-fi stories and used it to auto-suggest sentence completions in his text editor.

Robin Sloan, Writing with the Machine
https://www.robinsloan.com/notes/writing-with-the-machine/
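
Sloan trained a custom RNN with torch-rnn; as a rough, modern stand-in for that same “suggest a continuation” interaction, a small pretrained transformer works too. A hypothetical sketch, assuming the Hugging Face transformers package:

```python
# Stand-in for the autocomplete-as-creative-prosthetic idea. Sloan's
# project used an RNN trained on old sci-fi; this just samples from a
# small generic language model.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

def suggest(fragment: str, n: int = 3) -> list[str]:
    """Offer n possible continuations of an unfinished sentence."""
    outputs = generator(fragment, max_new_tokens=20, num_return_sequences=n,
                        do_sample=True, temperature=0.9)
    return [o["generated_text"][len(fragment):] for o in outputs]

for completion in suggest("The ship's engine hummed like"):
    print("..." + completion)
```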

Enjoying the weirdness

From a more playful perspective, I particularly love the work that Janelle Shane has been doing, documented on her site AI Weirdness:

I train neural networks, a type of machine learning algorithm, to write unintentional humor as they struggle to imitate human datasets. Well, I intend the humor. The neural networks are just doing their best to understand what’s going on. 

— Janelle Shane

Here’s her illustration of some of the cookies her neural net came up with when trained on cookie recipes:

Janelle Shane’s neural net-generated cookies
http://aiweirdness.com/

Machines cheat in bizarre ways

One of my favorite things is seeing how machine learning systems will find bizarre ways to “cheat” in order to fulfill the goals that are set for them. Recently, there was a lot of discussion around this AI that steganographically encoded invisible data into maps in order to achieve the stated goal of recreating aerial imagery from said map. There’s also a fantastic Google sheet that describes all the ways various AI systems have found unexpected and strange workarounds!

Indolent cannibals

In an artificial life simulation where survival required energy but giving birth had no energy cost, one species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children).


The literal computational gaze

This last piece is not about machines as collaborators, but it’s still one of my favorites for how powerfully it evokes the sense of the machine’s alien gaze: a 2012 video by Timo Arnall called Robot Readable World.

I find this kind of work delightful and meaty, and I hope to see more of it. As soon as I learned to code, I started making generative things — fake ad generators, chatbots, etc. I loved making work that, even though I had shaped it, continued to surprise me. I felt warmth and curiosity towards my strange mechanical collaborators. In a moment where the computational gaze is being used in so many exploitative and questionable ways, I hope that there is also space for work that allows us to explore all that is delightful and creative about our computational companions.

Coding as Creativity

Maze-like output of 10 Print, a classic "code poem".
From @10print_bot on Twitter. For more information, see https://10print.org.
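
For reference, the 10 Print poem is a single line of Commodore 64 BASIC that prints one of two diagonal characters at random, forever, weaving an endless maze. A rough Python equivalent:

```python
# The original: 10 PRINT CHR$(205.5+RND(1)); : GOTO 10
# CHR$(205.5+RND(1)) picks one of two PETSCII diagonals at random.
import random

for _ in range(2000):  # the original loops forever
    print(random.choice("╱╲"), end="")
print()
```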

An executive recently confided in me that he was surprised to learn that developers cared what problems they were working on. “I thought they just cared about writing code, not what it was for,” he said, in what seemed like newfound respect.

While this might seem like an isolated opinion, many technology leaders direct their developers in ways that support this statement, even if they wouldn’t put it so starkly. Developers are often seen as interchangeable parts who can go from working on internal tools to email templates to site reliability to API development without a second thought. Further, the tools and processes we use reinforce this view: tickets are often assigned to teams aligned around a certain skill rather than a given product or problem set, and organizations are often delineated by understanding of a portion of a “stack” (front-end, back-end, APIs, etc.).

These distinctions make some logical sense (I’ve recently been guilty of organizing teams by these assumptions), but fostering true creativity will require new techniques.

More than any other technique (and I’ll write about others in the coming weeks), the one that must come first is realizing that development is not about writing code, but about using code to solve problems. Too often, tech teams are given tasks without context or goals, cutting off an avenue for innovation and creativity. With clear communication and trust, we can unlock the creativity of our tech teams by treating tech as a strategic partner rather than a service bureau.

This begins with sharing strategy, goals, measurements, and reasoning with technologists, including everyone from the CTO to individual developers. Recent history is flush with examples of developers and engineers creating new lines of business within their organizations; Amazon Prime is likely the most successful of these. These contributions were only possible because these developers understood their company’s goals and could apply their unique skills to those problems.

Unleashing this creativity requires clear, candid communication. Only by sharing hopes and fears honestly will every member of your staff be in a position to contribute with all of their skills.

Once they’ve been properly briefed, developers tasked with solving a given problem should work together across boundaries of expertise. Whether you call them teams, squads, or pods, people working together to solve problems will produce solutions that are both more creative and more quickly implemented. The tight feedback loop between design and interface development, or between API development and information display, for example, creates this effect. Wherever possible, this should include participation from design, product development, and even editorial and marketing.

Treating technologists as service practitioners will guarantee you get exactly what you ask for. Including technology in the early stages of defining problems and opportunities will mean getting solutions far more creative, efficient, and sustainable than you could have imagined.

Design process for the messy in-between

I tweeted this last week, and figured I should put my keyboard where my mouth is and take a stab at talking about design process for the real world. First, a caveat: I do think it’s valuable to frame ideal processes so that we know what we’re aspiring to. But writing about design process often has an all-or-nothing tone: it makes you feel that if you’re not doing it the “right” way, then you’re not doing good work, and won’t end up with a good product.

So first: there’s no one “right” way to do things. But there is a set of approaches that are generally good practice for user experience and product design: talking to your users, doing divergent exploration, getting feedback, and iterating continually. However, it’s rare that I see a designer in a situation where they can execute a design process exactly as they would like.

Instead, we all end up working in the messy in-between — a place where we need to make trade-offs in our process due to real-world constraints. Those constraints tend to be things like:

  • Limited time: Deadlines won’t always accommodate a perfect process.
  • Skeptical stakeholders: People with authority over the project may not believe in the value of a thorough design process and see it as something that slows down the project or adds to cost.
  • The way things have been done before: If you’re trying to grow a design practice in an organization that hasn’t had a strong design or product culture, change doesn’t happen overnight. 
  • Personnel constraints: Sometimes you don’t have enough people or the right people to execute on all the pieces of the design process thoroughly.
  • Budget: This one is self-explanatory 🙂
  • And much more…

So, given those constraints, how do you decide where to cut corners and where to push for more? What’s a good design process for your design process? 

In my experience, here are a few rubrics for making these decisions:

1. Know your strengths and focus resources on your weaknesses.

What are your core abilities as an individual or a team? If you’re really familiar with your intended users, perhaps you don’t need to go as deep on user research, and instead you focus intensively on design exploration. On the other hand, if you have strong UX/UI design experience and instincts on your team, you might be able to spend less time exploring and iterating and more time talking to users.

This piece of the puzzle requires the ability to accurately self-assess. Be honest with yourself about your strengths and weaknesses, and design your process to support you where you need it most. If you have deep experience in one area, don’t be afraid to trust your instincts.

It can feel like sacrilege to say “we don’t need [x] because we’re really good at [y]”, but remember that ideal design processes are designed to check you — to make sure you’re considering options and needs that you might not immediately think of. Deep experience and skill can also help provide some of those checks and balances.

2. Learn to identify the immovable objects

In looking at your constraints, know which ones are fixed and which can be budged. This is a bit easier with things like budget, time, and people — for example, if you don’t have the budget for extensive user research, it’s clear you’ll have to work with some guerrilla research tools and approaches. But it’s more challenging to know which cultural pieces are immovable.

For example, you may have a stakeholder who just doesn’t buy the value of a strong design process. Most designers will find themselves in this position at some point, especially if they work in-house. Know when not to waste your time on unwinnable arguments. In those situations, there are two paths forward. One is to find small ways to inject better process and show how those approaches led to better outcomes; seeing tangible proof of the utility of a good design process can lead to more investment and trust in that process for future projects. The second path is — unfortunately — that some stakeholders just won’t be convinced, and that will prove to be a serious constraint on your ability to do deep design work.

It takes time to figure out which situation you are in, but in either case, knowing how fixed your constraints are helps you identify where to focus your efforts.

3. What has to be perfect now and what can be fixed later?

As designers, it’s always crucial to understand the overall product and business strategy for the experiences we’re designing. One of the reasons for this is that it can help to prioritize where to focus resources in our “messy in-between” processes. What features or users are most critical to the success of the product?

Constraints mean that we almost always have to pick things that aren’t going to get as much love and attention as we would ideally like. Can a feature be removed for launch, or is there a scaled-down MVP of that feature that will suffice for now? Which user group has to have their needs deeply met for success? Can other groups’ needs come later? It’s hard not to want everything to be perfect, but knowing what truly has to be perfect can help in focusing limited resources on the right things.

These are by no means exhaustive, but they are a few key rubrics that I frequently use. Most importantly, I hope that we can all share more about how we navigate design in situations that rarely meet the platonic ideal. In doing so, I believe we can alleviate a lot of the guilt and impostor syndrome that seems to be common amongst designers who are worried that they aren’t “doing it right”. Let’s embrace the imperfections of design process in real organizations and projects, and share tools for creating the best work within the constraints of those situations.

Before you make a thing

For his course on Technology & Society, Jentery Sayers has created a document entitled “Before you make a thing” that is a fantastic overview of how to critically approach designing and making with technology. The guide is divided into three sections: Theories and Concepts, Practices, and Prototyping Techniques. Here are a few of my favorite bits:

Examine the “default settings” of technologies; doing so asks for whom, by whom, and under what assumptions they are designed, and who they may exclude and enable. All projects have intended audiences, even if those intentions are not always conscious or deliberate.

Remember that data are produced, not given or captured; doing so emphasizes how this becomes that, or how data is structured, collected, and expressed for interpretation. 

Conjecture with affordances; doing so demonstrates how design is relational. It happens between people, environments, and things; it’s not just a quality or property of objects.

Make a useless or disinterested version of your project; doing so may underscore the creative and critical dimensions of technology and society. After all, not all technologies must increase productivity or efficiency. Consider the roles of technologies in art, theory, and storytelling. 

There’s a wealth of great guidance for both craft and thinking here, along with links to source materials for more in-depth study — go and read the whole thing!