Smile Vector is just the tip of the iceberg. It’s hard to give a comprehensive overview of all the work being done on multimedia manipulation in AI right now, but here are a few examples: creating 3D face models from a single 2D image; changing the facial expressions of a target on video in real time using a human “puppet”; changing the light source and shadows in any picture; generating sound effects for silent video; live-streaming the presidential debates but making Trump bald; “resurrecting” Joey from Friends using old clips; and so on. Individually, each of these examples is a curiosity; collectively, they add up to a whole lot more.

“The field is progressing extremely rapidly,” says Jeff Clune, an assistant professor of computer science at the University of Wyoming. “Jaw-dropping examples arrive in my inbox every month.”

Clune’s own work isn’t about manipulating images, but generating them from whole cloth. His team at Wyoming began work on this in 2015 by adapting neural networks trained for object recognition. Inspired by research done on the human brain in 2005, they identified the neurons that lit up when shown certain images, and taught the network to produce the images that maximized this stimulation.
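The core of that last step is gradient ascent on the image itself: instead of adjusting the network’s weights, you hold them fixed and nudge the pixels in whatever direction raises a chosen neuron’s activation. Here is a minimal sketch of the idea in Python. The tiny one-hidden-layer network, its random weights, and all sizes are illustrative stand-ins (Clune’s team used large networks pretrained on object recognition), but the ascent loop is the same technique in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained recognition net: one hidden ReLU layer
# feeding a single output "neuron" whose activation we want to maximize.
# Weights are random here purely for illustration.
W1 = rng.normal(size=(32, 64))   # hidden-layer weights (32 units, 64 "pixels")
w2 = rng.normal(size=32)         # weights into the chosen neuron

def activation(img):
    """How strongly the chosen neuron responds to this image."""
    h = np.maximum(0.0, W1 @ img)    # hidden ReLU responses
    return float(w2 @ h)

def grad_wrt_image(img):
    """Backpropagate the neuron's activation to the pixels."""
    pre = W1 @ img
    gate = (pre > 0).astype(float)   # ReLU passes gradient only where active
    return W1.T @ (w2 * gate)

img = rng.normal(size=64)            # start from random noise
start = activation(img)
for _ in range(200):
    img += 0.01 * grad_wrt_image(img)  # ascend: weights fixed, pixels move

print(activation(img) > start)       # the neuron now fires harder
```

With a real pretrained network, the same loop (plus regularizers that keep the image natural-looking) produces the dreamlike synthetic pictures this line of research is known for.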