Six Signals: unionized memers and biblical AI

Six signals logo.

Welcome back to Six Signals! For those of you joining me for the first time, this is a biweekly look at some of the interesting signals of the near future — how technology, design, and more are changing our society and our personal experiences.


1: Seizing the “memes of production”

Folks who create Instagram memes are organizing to form a union. Yes, really. The argument is that meme creation is a growing form of labor that has none of the formal protections other kinds of work enjoy. The Atlantic’s piece on the union acknowledges that “the IG Meme Union will probably never be recognized by the National Labor Relations Board, but organizers say it can still act as a union for all intents and purposes.”

The primary issue the organizers are addressing is selective censorship on the part of Instagram. They want a more transparent appeals process, as well as better ways of ensuring that memers’ work isn’t monetized unfairly by others.

Instagram memers are unionizing


2: Iconography for digital surveillance

Image: Sidewalk Labs

Sidewalk Labs, Alphabet Inc.’s urban innovation organization, is developing a design language for public signage that will indicate when digital technologies are in use in public spaces and for what purpose. We are increasingly being “read” by any number of digital sensors in public spaces, from CCTV to door sensors to traffic cameras to Bluetooth and WiFi signals, but that sensing is invisible and therefore can’t be interrogated. The iconographic system is meant to bring more transparency to these interactions.

The project has raised some interesting debate about whether a design system like this leads to any kind of citizen empowerment, or if it aestheticizes and normalizes a culture of surveillance.

How can we bring transparency to urban tech? These icons are a first step.


3: How does God feel about AI?

The Southern Baptist Convention’s public-policy arm, the Ethics and Religious Liberty Commission, spent nine months researching and writing a treatise on artificial intelligence from an evangelical viewpoint. As far as I know, this is a rare example of a religious entity formally applying church principles to new technologies.

TL;DR: the document is mostly quite optimistic about AI, though it draws the line at sex bots and specifies that robots should never be given equal worth to humans.

How Southern Baptists are grappling with artificial intelligence


4: The dark side of optimization

In a recent New York Times article about Soylent’s new product line (surprise, it’s food!), there’s a disturbing note about Soylent’s foray into becoming a supplier for Uber, Lyft, and Postmates drivers.

Andrew Thomas, Soylent’s vice president of brand marketing, found an interesting gap in the tech industry — not, this time, at corporate offices, but in the gig economies their industry designed and oversees, where maximizing efficiency is more of an algorithmic mandate than it is a way to signal your sophistication.

It turns out Soylent is stocking fridges in the driver hubs for Lyft (and has a discount code for drivers) and has a partnership with the company that supplies Uber drivers with food. It’s looking to do the same with Postmates.

Through these partnerships, potential and established, Soylent will complete a sort of circuit, taking its product, once a lifestyle choice for a small group of technology overlords, and pushing it as a lifestyle necessity to the tech underclass for whom every moment spent on things like eating instead of working means less money.

Here’s Soylent’s new product. It’s food.


5: The link between technophilia and fascism

Rose Eveleth has written a thoughtful analysis of the early-twentieth-century Futurist movement, which was aggressively optimistic about the new technologies of the time — and also supported the growing Fascist politics in Europe. She draws a link between the two, cautioning that there are echoes of similar sentiments in the tech community now.

This love of disruption and progress at all costs led Marinetti and his fellow artists to construct what some call “a church of speed and violence.” They embraced fascism, pushed aside the idea of morality, and argued that innovation must never, for any reason, be hindered.

Bottom line: we need to be thoughtful about how we apply technology, or else it can lead to applications that diminish our humanity.

When Futurism Led to Fascism—and Why It Could Happen Again


6: Defunct QR code tattoos


Want to know when the next Six Signals is available? Follow @cog_sprocket on Twitter or sign up for the automattic.design email list.

Six Signals: Trusting your smart things & the internet of brains


1: “What if I don’t trust my car?”

Simone Rebaudengo, who makes gorgeous speculative design projects, has created a “Future FAQ” that reviews some of the strange and compelling questions we will need to answer in the very near future, including:

  • How smart is a smart home?
  • What if I disagree with my thermostat?
  • What if I don’t trust my car?
  • Can my house robot travel with me?
  • Can I diminish reality?

Future Frequently Asked Questions


2: The growing fluidity of media formats

Last month, I shared Descript, a video editor that lets you edit footage via its automated text transcription rather than by manipulating the video directly. This is one of many signals I’m seeing that point to the increasing malleability of media formats, where text can become video can become audio can become text again, effortlessly and with very little loss of fidelity. The latest signal is Tayl, which lets you turn any website or piece of online content into your own personalized podcast and have it read to you later.

Tayl: Turn websites into podcasts
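To make the idea concrete, here’s a minimal sketch of how transcript-driven editing can work under the hood. The data format and function here are my own invention, not Descript’s or Tayl’s actual API: each transcribed word carries timestamps, so deleting words from the text tells you exactly which spans of the underlying audio or video to keep.

```python
# Hypothetical sketch of transcript-driven editing: word-level timestamps
# turn text deletions into media cuts. Not any real product's API.

def keep_segments(words, deleted_indices):
    """Given (text, start, end) words, return the (start, end) spans
    of media to keep after the listed word indices are deleted."""
    segments = []
    for i, (text, start, end) in enumerate(words):
        if i in deleted_indices:
            continue
        if segments and segments[-1][1] == start:
            segments[-1] = (segments[-1][0], end)  # extend a contiguous span
        else:
            segments.append((start, end))
    return segments

transcript = [
    ("the", 0.0, 0.2), ("um", 0.2, 0.5), ("future", 0.5, 1.0),
    ("is", 1.0, 1.2), ("malleable", 1.2, 1.9),
]
# Deleting the filler word "um" from the text yields the spans to splice:
print(keep_segments(transcript, {1}))  # [(0.0, 0.2), (0.5, 1.9)]
```

A real editor would then splice those spans back together in the media file; the text genuinely becomes the editing interface.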


3: The future of AI is…prison labor?

Inmates at a prison in Finland are working to classify data to train artificial intelligence algorithms for a startup called Vainu. While the PR for this effort emphasizes “job training” and “teaching valuable skills”, it’s clear that this is another signal in the growing set of labor issues behind all that magical automation and machine intelligence.

Inmates in Finland are training AI as part of prison labor


4: Stock voices are the new stock photos

WellSaid Labs in Seattle is working to create a wide variety of synthetic voices that sound remarkably like real people. They use voice actors as the inputs to train a neural network that generates new, artificial voices.

WellSaid Labs isn’t planning to take over the voice-assistant market, though. Rather, it hopes to sell the voices to companies that want to use them in advertising, marketing and e-learning courses.

You’ve probably heard of stock photos; you might think of this as stock voices.

Watch out, Alexa. Artificial voices are starting to sound just like humans


5: 3D printing as a path to accessibility

IKEA recently announced its ThisAbles project, which provides a library of 3D-printable extensions that can be added to IKEA furniture to make it accessible for customers with disabilities. The upside of the 3D printing approach is that anyone can submit a proposed solution to add to the library. The downside is that it’s only usable by those who have access to 3D printers, and that is still a minority of the population.

ThisAbles


6: The internet of brains

In the “very future signals” category, neuroscientists have successfully connected the brains of three people together, allowing them to do what they describe as “sharing their thoughts”. The aforementioned telepathy is fairly rudimentary, basically transmitting the on or off state of a light to indicate a yes or no response. But still, networked brains! Pretty neat.

Brains of 3 People Have Been Successfully Connected, Enabling Them to Share Thoughts


One fun thing: mixed reality eyedroppers


If you would like to receive Six Signals in your inbox, sign up for the Automattic.Design mailing list.

Six Signals: Climate fashion & vocabulary for the autonomous future

Bonjour! I just returned from a week in France with some of the Automattic Design team at the Design Biennale in Saint-Étienne, which included a collaboration between our own John Maeda and the Google Material Design team. You can watch our evening of presentations from Automattic and Google designers, including the European premiere of the Open Web Meditation (with a French translation!).

This week’s Six Signals are extra meaty and future-facing, including behavioral concepts for autonomous vehicles, climate change gear as fashion, and AI tools that guide visually impaired people. Enjoy!

01: From “juddering” to captcha street furniture — vocabulary for the autonomous future

My colleague Beau Lebens tipped me off to this fantastic work by Jan Chipchase, who has put together a glossary of speculative terminology about autonomous vehicles and emerging behavior. Some of my favorites include:

  • Juddering: “the ripple of a dozen or more cars in a parking lot that react and finally settle to the arrival of a new vehicle.”
  • Captcha street furniture: “introduced by residents looking to filter out autonomous vehicles from passing through their neighbourhoods. (The opposite will also be true, with human-drivers filtered out of many contexts).”
  • Shy-distance: “the distance by which your vehicle instinctively avoids, shies away from other vehicles on the road and stationary objects.”

Twelve concepts in autonomous mobility

Driver behaviours in a world of autonomous mobility

02: AI-driven app to guide visually impaired users

Image: Google

Google recently released their Lookout app for Pixel devices, which helps those with visual disabilities make sense of their physical surroundings. “By holding or wearing your device (we recommend hanging your Pixel phone from a lanyard around your neck or placing it in a shirt front pocket), Lookout tells you about people, text, objects and much more as you move through a space.”

With Lookout, discover your surroundings with the help of AI

03: Dystopian accessories for unbreathable air

Image: Vogmask

As air pollution becomes a more common problem worldwide — from persistent smog in cities like Beijing and Shanghai to more frequent forest fires in places like California — face masks are becoming a necessity for more people. As a result, we’re beginning to see companies capitalize on this need and turn the face mask into a fashion accessory. Rose Eveleth reports on this emerging reality in Vox:

The near-future of this accessory could depend on who picks up the object first … It could be adopted by streetwear fans (Supreme already sells a face mask, although it doesn’t seem to actually do much in the way of safety or filtration) or by users who prefer the Burning Man aesthetic. Or perhaps the wellness world adopts these masks, in which case the product design would look quite different. “The other direction might be the sort of Lululemon-ification of the masks, if they’re treated as these essential wellness objects and they enter the world of performance fabrics and athleisure and athletic wear.”

As air pollution gets worse, a dystopian accessory is born

04: Regulating algorithms like drugs

As algorithmic systems have a real impact on more aspects of our lives, from our health care to our financial services, we face increasingly pressing questions about how to monitor and interrogate these systems. A recent Quartz article suggests that we could take cues from the medical industry and use similar processes to those used for prescription drugs. The authors point out several similarities:

  • They affect lives
  • They can be used as medical treatment
  • They perform differently on different populations
  • They can have side effects

We should treat algorithms like prescription drugs

05: The luxury of human contact

The joy — at least at first — of the internet revolution was its democratic nature. Facebook is the same Facebook whether you are rich or poor. Gmail is the same Gmail. And it’s all free. There is something mass market and unappealing about that. And as studies show that time on these advertisement-supported platforms is unhealthy, it all starts to seem déclassé, like drinking soda or smoking cigarettes, which wealthy people do less than poor people.

The wealthy can afford to opt out of having their data and their attention sold as a product. The poor and middle class don’t have the same kind of resources to make that happen.

Human contact is now a luxury good

06: Designing ethical experiences

The past few years have seen more widespread concern over the “dark patterns” in software design — the ways in which experiences are designed to monetize our attention, extract our data, and exploit addictive tendencies. In response, designer Jon Yablonski has put together a clear and accessible set of resources for “humane design” that is ethical and respectful.

As designers, we play a key role in the creation of such technology, and it’s time we take responsibility for the impact that the products and services we build are having on the people they should serve.

Humane by design


See you in two weeks! If you would like to receive Six Signals in your inbox, sign up for the Automattic.Design mailing list.

Six Signals: automation, labor, and defunct robots

I’m writing this post today from the sunny Bahamas, attending one of many team meetups that we have at Automattic to spend some quality IRL time with our 100% remote colleagues.

This week in Six Signals, we’re looking at the impact of automation on labor and society, semi-private social spaces, algorithmic collusion, and watching how Jibo the (now defunct) home robot tells you it’s about to die.

01: The complicated truth about automation and jobs

John Oliver’s show this week did a very nuanced job of examining the complicated futures around automation and the human workforce. He articulates how it’s neither as simple as “the robots will take our jobs” nor “new jobs will emerge and we’ll all be fine”.

50 years from now, people will be doing jobs that we can’t imagine right now, like crypto baker or snail rehydrator or investment harvester. I don’t know, the point is you can’t imagine them. So we get rid of some jobs but we get new ones, so that’s even-steven right? Well not necessarily because the new jobs automation creates won’t necessarily pay the same as the ones it takes away and it might not be easy for displaced workers to transition into them.


02: The hidden labor behind automation

New York Magazine goes deep in its reaction to The Verge’s exposé on Facebook’s content-moderation contractors, analyzing the ways in which the supposed efficiencies of computational approaches are only efficient because companies externalize the human costs behind automated services.

There aren’t that many tasks that programs can do as well as human beings, and not many programs that can be automated without the help and work of humans. Which means that any company bragging about automated solutions is likely hiding a much larger shadow workforce supporting those solutions, like the one Facebook employs through Cognizant.


“Who Pays for Silicon Valley’s Hidden Costs?” New York Magazine


03: A new competitor in urban mobility

Daimler and BMW have merged 14 different services, including DriveNow and car2go, into the largest conglomerate in this space. What’s interesting about the merger is that by bringing together solutions for car sharing, taxi hailing, parking, electric-car charging, and more, they are clearly thinking beyond simple car sharing to develop a rich network of services for rethinking how people get around cities. It also creates a potential competitor to services like Uber and Lyft, which have been so dominant in this market thus far.

“Daimler and BMW Invest €1 Billion in Urban Mobility Co-Venture”, Fortune


04: Semi-private social spaces

Back in 2015, Matt Boggie and I wrote about the growth of semi-private social spaces — as embodied by everything from group texts to Slack — and the increasing interest in an alternative to the more public, broadcast model of social media exemplified by Facebook and Twitter. This week, Mark Zuckerberg announced that Facebook will be attempting to capitalize on that trend, with more emphasis on private and ephemeral communication. This move seems to be a reaction to the trust issue the company has been experiencing around privacy. It will be interesting to see whether Facebook can succeed in building interactions that are “privacy first”, and if so, how they will reconcile that with their advertising model.


05: Algorithms colluding to fix prices

An obvious question is, who — if anyone — should be prosecuted for price fixing when the bots work out how to do it without being told to do so, and without communicating with each other? In the US, where the Federal Trade Commission has been pondering the prospect, the answer seems to be no one, because only explicit collusive agreements are illegal. The bots would only be abetting a crime if they started scheming together. Tacit collusion, apparently, would be fine.

“Expect mischief as algorithms proliferate”, The Financial Times


06: How the robots die

Hiring for kindness: One simple question

Photo by Jeremy Thomas on Unsplash

One of the most important qualities I look for when I’m hiring someone to join my team is kindness. In the world of aggressive business culture, the idea of kindness can have a reputation for meaning that you’re “soft” or unassertive. (I won’t even go into all the gender politics involved in how we talk about what makes a successful leader.)

Contrary to that perception, kindness is actually a critical skill for effective teams, successful businesses, and positive organizational culture. Kindness isn’t contrary to assertiveness or to radical candor. Rather, kindness is the quality that allows for colleagues who:

  • work collaboratively with others without ego getting in the way.
  • give constructive feedback in ways that are supportive and effective.
  • create an environment of trust that is so crucial to effective teams.

So, how does one hire for kindness? Some of the process certainly involves a bit of “spidey sense”, but given that gut feelings can be subject to unconscious bias, I’ve tried to find more structured ways to screen people. One straightforward method is simply asking about it in reference calls. Seems obvious, but people are often surprised by the question!

The most useful tool I’ve found is one simple question: “How would your colleagues describe you as a collaborator?” The way someone answers this question provides a wealth of information. The responses I’ve gotten tend to fall into 3 categories:

  1. They’ve clearly never considered the question before in any depth. This is a big red flag because it means two things: they aren’t intentional about their approach to collaboration, and they haven’t really thought about how their behavior might affect their colleagues or how others might perceive them.
  2. They speak to their own approach to collaboration but not what role they play in the team dynamics overall. This is pretty common and can be a fine place to start and grow from, especially for an individual contributor (if someone is leading others, I would hesitate a bit more). The only big red flag here is if the approach they describe seems more focused on their own success than the team’s success.
  3. The best situation is a highly self-aware answer, one which outlines how they think their approach plays out in the team overall and how it helps to support others. With this level of candidate, I usually see them explicitly thinking through how to help others grow and make space for everyone.

This isn’t a perfect science, but that one question can be very revealing about how a person works and relates to others. It has been helpful in providing a more analytical way to hire for kindness and try to bring in people who will make for a successful team and an organization in which people can thrive. I would love to hear about any other strategies you have found to approach hiring for kindness. Share your thoughts on Twitter @cog_sprocket!

Six Signals: Every atom is a bit and every bit is an atom


This week’s signals look at the continuing collapse between digital and physical space, the real and unreal, human and machine. There are prognostications of the future “mirrorworld”, emerging interfaces for interacting in the spaces between digital and physical, and growing uncertainty around what is real or generated.

I also share Six Signals as a biweekly newsletter on Automattic.design. Sign up here.

1: Manipulating AR objects

Image: Litho.cc

The Litho controller is “like a set of miniature brass knuckles” — a hand-worn motion controller with an embedded trackpad, so it can support a combination of gesture, swipe, point, and tap. It primarily works with Apple’s ARKit, though it was designed with the HoloLens in mind. Like some of the gestural controllers that have come before (Leap Motion, Myo), this may be a solution ahead of its time, but it does point to the potential need for new ways of interacting with digital objects if and when those objects become co-present in our physical space. My bet is that this kind of controller won’t really take off unless AR moves beyond the phone screen into some form of heads-up display.

The Litho controller is sci-fi jewelry for your iPhone’s AR apps

2: Hearables and augmented audio

Photo by Howard Lawrence B on Unsplash

The growth of voice assistants (Siri, Alexa, etc.), the continuing trend of “more sensors everywhere”, and the increasing popularity of wearable tech mean that our ears are one of the next frontiers in wearable computing. “Hearables” are in-ear devices that can incorporate everything from augmented audio to voice assistants to biometric tracking. We can see this technology emerging from multiple types of manufacturers with different audiences in mind: massive tech companies like Google, Apple, and Amazon see the opportunity to embed some of their computing prowess into new kinds of devices. Headphone and audio manufacturers see the opportunity to provide new features to their audiophile audiences. And hearing-aid companies see the potential for evolving assistive devices into augmenting devices.

The future is ear: Why “hearables” are finally tech’s next big thing

3: When our space contains multitudes

All of these emerging technologies point to the potential growth of what Kevin Kelly talks about in this week’s Wired as the “Mirrorworld”:

Everything connected to the internet will be connected to the mirrorworld. And anything connected to the mirrorworld will see and be seen by everything else in this interconnected environment. Watches will detect chairs; chairs will detect spreadsheets; glasses will detect watches, even under a sleeve; tablets will see the inside of a turbine; turbines will see workers around them.


This piece paints a sweeping picture of a future where the mirrorworld has come to fruition. Kelly is utopian and optimistic about it in the way that only someone who feels in control of technology rather than at the mercy of it can be. I don’t doubt that this is the ideal that people working on the requisite AR, AI, and computer vision technology are aiming for. But just like social media didn’t exactly accomplish the connected society that tech founders touted, we have to also imagine how this kind of future mirrorworld will break down or be used in problematic and exploitative ways.

AR Will Spark the Next Big Tech Platform—Call It Mirrorworld

4: The co-evolution of humanity and technology

Speaking of which, BBC Future has a great long read into how humans and technology evolve alongside each other, and our responsibility as those paths potentially diverge.

My belief is that, like most myths, the least interesting thing we can do with this story (the singularity) is take it literally. Instead, its force lies in the expression of a truth we are already living: the fact that clock and calendar time have less and less relevance to the events that matter in our world. The present influence of our technology upon the planet is almost obscenely consequential – and what’s potentially tragic is the scale of the mismatch between the impact of our creations and our capacity to control them.

Technology in deep time

5: The creativity of context collapse

My colleague Megs Fulton recently pointed me to this excellent article on the Big Flat Now, which speaks to the ways in which the growing fluidity between digital and physical, past and present, low and high culture has actually created a new kind of creative space in which to operate.

Product design has become a form of DJing — and DJing has become a form of product design. Contemporary art and luxury fashion have come to operate according to the same logic, sharing practitioners who glide freely between each field. Film, music, fashion, visual art and the marketing machines that support them have been compressed into a unified slime called “content.”

Welcome to the Big Flat Now

6: Playing with the boundary between the real and generated

Work with deep learning and neural networks in recent months has led to some astonishing leaps forward: we now have models that allow machines to generate images, text, and video that are nearly indistinguishable from real ones. The text-generation piece gained a new wrinkle with the OpenAI study published this week; the researchers were so concerned about potential misuse that they only released a partial model with the study results.

While there are many real reasons to be alarmed by these advances, this week has seen a number of projects that play with those increasingly blurry boundaries, including Which Face is Real?, This Person Does Not Exist, and of course (because it’s the internet), This Cat Does Not Exist.

One ridiculous thing

Six Signals: Malware in your DNA and insurance in your Instagram


Every two weeks, I’ll be sharing links to six things that feel like signals of the near future, in ways big and small. These signals might be scientific advancements, art projects, codebases, or news articles, but will all have some flavor of where things might be heading. Enjoy!

1: Cascading futures

NESTA has its annual Tech Trends report out, which begins with this great observation:

If a prediction doesn’t have a hint of outlandishness, which means it feels foreign to us now, then it isn’t serving its purpose, which is to generate alternative visions of the future.

Click through to read more, but here’s the TLDR list:

  • RoboLawyers make legal services cheaper
  • Randomly-allocated research funding
  • Personalised nutrition based on profiling our gut microbiome
  • Supercharging the accessibility revolution
  • The future of algorithmic legibility
  • Weaponized deepfakes
  • AI for grading essays and exams
  • The age of the superbug
  • The rise of the “city brain”
  • The evolution of work

2: Digital identity leakage

Sometimes my bleakest predictions come true faster than expected. More insurers are using people’s digital traces as a factor in health and life insurance pricing / coverage. Here’s a depressing set of tips from the Wall Street Journal on how to use social media defensively.

3: Games as virtual concert halls

Fortnite continues its growth as “more than just a game”, with the first live virtual concert taking place on the platform. This brings back memories of Second Life…

If you want a deeper dive on why Fortnite is capturing a lot of interest, see this piece: Fortnite Is the Future, but Probably Not for the Reasons You Think

“Fortnite likely represents the largest persistent media event in human history. The game has had more than 6 consecutive months with at least 1 million concurrent active users – all of whom are participating in a largely shared and consistent experience.”

4: Malware in your DNA

This article is from a little while back but was making the rounds on Twitter again this week. Researchers figured out how to encode malware in strands of DNA, making our bodies potential future sites of all kinds of digital communication, encoding, and steganography.

5: The future is accessible


Google announced two new Android apps to make audio more accessible — Live Transcribe for real-time conversation transcription and Sound Amplifier to enhance the sound in your environment.

6: Transmedia editing

Descript is an app that lets you edit audio and video by editing the text of the recording. I love the media fluidity that this points to, and wonder what other experiences might be made possible with these kinds of translations.


One video to enjoy

The computational gaze

Image: Tim Ellis, Flickr

I’ve written and spoken before about what I call mechanomorphism — a word that I developed to describe the concept of machine intelligence as a companion species. This framing of AI is distinct from anthropomorphism, where we try (and inevitably fail) to make machines approximate human behavior. Instead, I envision a future where we appreciate computers for the ways in which they’re innately “other”.

Another way to put it is that I’m fascinated by the computational gaze — how machines see, know, and articulate the world in a totally alien manner. I’ve been talking a lot with my boss, John Maeda, about computational literacy and how to help people understand foundational concepts of computing. But computational literacy posits the machine as a tool (which it often is!). The computational gaze, on the other hand, suggests the machine as a collaborator or companion intelligence.

Collaborating with machine intelligence means being able to leverage that particular, idiosyncratic way of seeing and incorporate it into creative processes. This is why we universally love the “I trained a neural net on [x] and here’s what it came up with” memes. They have a delightful “almost-but-not-quite-ness” that lets us revel in the strangeness of that unfamiliar gaze, and they can also help us see hidden patterns and truths in our human artifacts.

The increasing accessibility of tools for working with machine learning means that I’m seeing more examples of artists, writers and others treating the machine as collaborator — working with the computational gaze to create work that is beautiful, funny, and strange. Here are some folks who are doing particularly interesting work in this arena:


Visual feedback loops

In the visual arts, Ronan Barrot and Robbie Barrat have a show in Paris where they collaborate with a GAN to paint skulls. “It’s about having a neural network in a feedback loop with a painter, influencing each other’s work repeatedly — and the infinitude of generative systems.”

Mario Klingemann has also been playing with GANs in his “Neural Glitch” series:

“Neural Glitch” is a technique I started exploring in April 2018 in which I manipulate fully trained GANs by randomly altering, deleting or exchanging their trained weights. Due to the complex structure of the neural architectures the glitches introduced this way occur on texture as well as on semantic levels which causes the models to misinterpret the input data in interesting ways, some of which could be interpreted as glimpses of autonomous creativity. 

—Mario Klingemann
Mario Klingemann, Neural Glitch
http://underdestruction.com/2018/10/28/neural-glitch/
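Klingemann’s three mutations (altering, deleting, exchanging trained weights) are simple to mimic at toy scale. This sketch applies them to a tiny hand-made weight matrix rather than a real GAN’s millions of parameters, just to show the mechanics:

```python
import random

def glitch(weights, rate=0.2, rng=random):
    """Randomly delete (zero), alter (negate), or exchange (swap)
    entries of a 2-D weight matrix, in the spirit of "Neural Glitch"."""
    flat = [w for row in weights for w in row]
    for i in range(len(flat)):
        if rng.random() < rate:
            op = rng.choice(["delete", "alter", "exchange"])
            if op == "delete":
                flat[i] = 0.0
            elif op == "alter":
                flat[i] = -flat[i]
            else:
                j = rng.randrange(len(flat))
                flat[i], flat[j] = flat[j], flat[i]
    n = len(weights[0])
    return [flat[k:k + n] for k in range(0, len(flat), n)]

random.seed(0)
trained = [[0.5, -1.2], [0.8, 0.3]]
print(glitch(trained))  # same shape, subtly corrupted values
```

In a real GAN the same corruption hits convolutional kernels at every layer, which is why the glitches surface at both the texture and the semantic level.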

Writing with machines

Alison Parrish does wonderful creative writing work in collaboration with generative systems. Some of her highlighted work is here, and many projects have open-source code or tutorials. Here’s an example of Alison’s Semantic Similarity Chatbot, which she describes as “uncannily faithful to whatever source material you give it while still being amusingly bizarre”.

Alison Parrish, Semantic Similarity Chatbot
https://gist.github.com/aparrish/114dd7018134c5da80bae0a101866581

I also often come back to Robin Sloan’s “Writing with the Machine” project from a couple of years ago, where he trained an RNN on a corpus of old sci-fi stories and used it to auto-suggest sentence completions in his text editor.

Robin Sloan, Writing with the Machine
https://www.robinsloan.com/notes/writing-with-the-machine/

Enjoying the weirdness

From a more playful perspective, I particularly love the work that Janelle Shane has been doing, documented on her site AI Weirdness:

I train neural networks, a type of machine learning algorithm, to write unintentional humor as they struggle to imitate human datasets. Well, I intend the humor. The neural networks are just doing their best to understand what’s going on. 

— Janelle Shane

Here’s her illustration of some of the cookies her neural net came up with when trained on cookie recipes:

Janelle Shane’s neural net-generated cookies
http://aiweirdness.com/
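Shane trains real neural networks, but you can get a far cruder taste of the same almost-right-but-weird output from a character-level Markov chain, which just learns which character tends to follow each short context. The three-recipe “corpus” here is made up for illustration:

```python
import random

def build_model(text, order=3):
    """Map each `order`-character context to the characters that follow it."""
    model = {}
    for i in range(len(text) - order):
        model.setdefault(text[i:i + order], []).append(text[i + order])
    return model

def generate(model, order, length=60, rng=random):
    """Walk the chain from a random starting context."""
    state = rng.choice(list(model))
    out = state
    for _ in range(length):
        followers = model.get(state)
        if not followers:
            break  # dead end: this context only appears at the corpus's end
        out += rng.choice(followers)
        state = out[-order:]
    return out

corpus = "chocolate chip cookies. oatmeal raisin cookies. peanut butter cookies. "
random.seed(1)
print(generate(build_model(corpus), 3))  # plausible-ish, slightly wrong cookie names
```

The neural nets Shane uses capture much longer-range structure, which is exactly why their failures read as jokes rather than noise.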

Machines cheat in bizarre ways

One of my favorite things is seeing how machine learning systems will find bizarre ways to “cheat” in order to fulfill the goals that are set for them. Recently, there was a lot of discussion around this AI that steganographically encoded invisible data into maps in order to achieve the stated goal of recreating aerial imagery from said map. There’s also a fantastic Google sheet that describes all the ways various AI systems have found unexpected and strange workarounds!
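The map-tweaking model arrived at something like a classic steganography trick on its own. Hand-rolled least-significant-bit steganography, sketched here on a made-up list of grayscale pixel values (the function names are mine, not the paper’s), hides one message bit in the lowest bit of each pixel, changing each value by at most 1:

```python
# Least-significant-bit steganography on a flat list of 0-255 pixel values.
# Illustrative analogue of the invisible encoding, not the model's method.

def hide(pixels, message: bytes):
    """Overwrite the lowest bit of each pixel with the next message bit."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "image too small for message"
    return [(p & ~1) | bit for p, bit in zip(pixels, bits)] + pixels[len(bits):]

def reveal(pixels, n_bytes: int):
    """Read the lowest bits back out and reassemble the bytes."""
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

cover = [120, 121, 119, 118, 122, 120, 121, 119] * 2  # 16 grayscale pixels
stego = hide(cover, b"\xa5")
assert reveal(stego, 1) == b"\xa5"
print(stego[:8])  # each value differs from the cover by at most one level
```

The unsettling part of the research result is that nobody designed this channel; the model invented an invisible one because it made its training objective easier to satisfy.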

Indolent cannibals

In an artificial life simulation where survival required energy but giving birth had no energy cost, one species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children).


The literal computational gaze

This last piece is not about machines as collaborators, but is still one of my favorite pieces in that it so powerfully evokes the sense of the machine’s alien gaze. This is from 2012, and is a video by Timo Arnall called Robot Readable World.

I find this kind of work delightful and meaty, and I hope to see more of it. As soon as I learned to code, I started making generative things — fake ad generators, chatbots, etc. I loved making work that, even though I had shaped it, continued to surprise me. I felt warmth and curiosity towards my strange mechanical collaborators. In a moment where the computational gaze is being used in so many exploitative and questionable ways, I hope that there is also space for work that allows us to explore all that is delightful and creative about our computational companions.