AICAN doesn't need human help to paint like Picasso



Artificial intelligence has exploded onto the art scene over the past few years, with everybody from artists to tech giants experimenting with the new tools the technology provides. While the generative adversarial networks (GANs) that power the likes of Google’s BigGAN are capable of creating spectacularly strange images, they require a large degree of human interaction and guidance. Not so with the AICAN system developed by Professor Ahmed Elgammal and his team at Rutgers University’s AI & Art Lab. It’s a nearly autonomous system trained on 500 years’ worth of Western artistic aesthetics that produces its own interpretations of those classic styles. And now it’s hosting its first solo gallery show in NYC.

AICAN stands for “Artificial Intelligence Creative Adversarial Network,” and while it utilizes the same adversarial architecture as GANs, it puts the two halves to work differently. Adversarial systems pit two networks against each other: a generator that creates images based on the visual training set it was given, and a discriminator that judges how closely each generated image resembles the real images from that set. AICAN pursues different goals. “On one end, it tries to learn the aesthetics of existing works of art,” Elgammal wrote in an October FastCo article. “On the other, it will be penalized if, when creating a work of its own, it too closely emulates an established style.” That is, AICAN tries to create unique — but not too unique — art.
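That “penalized for emulating an established style” idea can be made concrete. In Elgammal’s published CAN formulation, a style classifier assigns each generated image a probability distribution over known art movements, and the generator is rewarded when that distribution is ambiguous rather than confident. The sketch below illustrates just that style-ambiguity term — the probabilities and four-style setup are invented for illustration, not taken from AICAN itself:

```python
import numpy as np

def style_ambiguity_loss(style_probs):
    """Cross-entropy between the classifier's style distribution and a
    uniform distribution over styles. The loss is smallest when no single
    movement dominates -- i.e., when the image's style is ambiguous."""
    k = len(style_probs)
    uniform = np.full(k, 1.0 / k)
    # Small epsilon avoids log(0) for styles assigned zero probability
    return -np.sum(uniform * np.log(style_probs + 1e-12))

# A confident classification (reads as pure Cubism, say) is penalized...
confident = np.array([0.94, 0.02, 0.02, 0.02])
# ...while an ambiguous one (no clear movement) incurs a lower loss.
ambiguous = np.array([0.30, 0.25, 0.25, 0.20])

print(style_ambiguity_loss(confident) > style_ambiguity_loss(ambiguous))  # True
```

Balancing this term against the usual “look like real art” objective is what yields images that are novel without being unrecognizable as art.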

And unlike GANs, AICAN isn’t trained on a specific set of visuals — say, chihuahuas, blueberry muffins or 20th-century American Cubists. Instead, AICAN incorporates the aesthetics of Western art history as it crawls through databases, absorbing examples of everything — landscapes, portraits, abstractions — without any focus on specific genres or subjects. If a piece was made in the Western style between the 15th and 20th centuries, AICAN will eventually analyze it. So far, the system has found more than 100,000 examples. Interestingly, this learning method is an offshoot of the lab’s earlier research into teaching AI to classify various historical art movements.

Elgammal notes that this training style more closely mimics the methodology used by human artists. “An artist has the ability to relate to existing art and… innovate. A great artist is one who really digests art history, digests what happened before in art but generates his own artistic style,” he told Engadget. “That is really what we tried to do with AICAN — how can we look at art history and digest older art movements, learn from those aesthetics but generate things that doesn’t exist in these [training] files.” It can even name the art that it creates using titles of works it has already learned.

To regulate the uniqueness of the generated artworks, Elgammal’s team first had to quantify “uniqueness.” The team relied on “the most common definition for creativity, which emphasizes the originality of the product, along with its lasting influence,” Elgammal wrote in a 2015 article. The team then “showed that the problem of quantifying creativity could be reduced to a variant of network centrality problems” — the same class of algorithms Google uses to surface the most relevant results for a search. Tested on more than 1,700 paintings, the system generally picked out what are widely considered masterpieces: it rated Edvard Munch’s The Scream and Picasso’s Ladies of Avignon far higher in terms of creativity than their peer works, for example, while panning da Vinci’s Mona Lisa.
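The network-centrality framing works like this: paintings form a graph where edges record which earlier works a later work resembles, and credit flows backward along those edges, much as PageRank passes credit through hyperlinks. A toy power-iteration sketch — the four paintings, edge weights and damping factor are all invented for illustration, not drawn from the team’s actual graph:

```python
import numpy as np

# Hypothetical influence graph among four invented paintings:
# entry [i, j] = 1 means painting j influenced painting i
# (i resembles j but was painted later).
paintings = ["A", "B", "C", "D"]
influence = np.array([
    [0, 0, 0, 0],  # A: an originator in this toy graph
    [1, 0, 0, 0],  # B borrows from A
    [1, 1, 0, 0],  # C borrows from A and B
    [0, 1, 1, 0],  # D borrows from B and C
], dtype=float)

def centrality_scores(adj, damping=0.85, iters=100):
    """PageRank-style power iteration: each painting passes 'credit'
    back to the paintings that influenced it, so an original work that
    shaped later (themselves influential) works scores highest."""
    n = adj.shape[0]
    row_sums = adj.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # originators pass no credit onward
    flow = (adj / row_sums).T      # flow[j, i]: share of i's credit sent to j
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * flow @ scores
    return scores / scores.sum()

scores = centrality_scores(influence)
print(paintings[int(np.argmax(scores))])  # "A", the toy graph's originator
```

In this toy graph the originator that later works echo scores highest — mirroring the intuition that The Scream rates as highly creative because it broke from its peers and shaped what followed.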


The pieces that it does produce are stunningly realistic… in that most people can’t tell they weren’t made by a human artist. In 2017, Elgammal’s team showed off AICAN’s work at the Art Basel show, where 75 percent of attendees mistook the AI’s work for a human’s. One of the machine’s pieces sold later that year for nearly $16,000 at auction.

Despite AICAN’s critical and financial successes, Elgammal believes that there is still a market for human artists, one that will greatly expand as this technology enables virtually anybody to generate similar pieces. He envisions AICAN as a “creative partner” rather than simply an artistic tool. “It will unlock the capability for lots of people, so not only artists, it will make more people able to make art,” he explained, in much the same way that Instagram’s social nature revolutionized photography.

He points to the Met Museum in NYC as an example. A quick Instagram search will turn up not just images of the official collection but also visitors’ visual interpretations of those works. “Everybody became an artist in their own way by using the camera,” Elgammal said. He expects the same to happen with GANs and CANs once the technology becomes more commonplace.


Until then, you’ll be able to check out AICAN’s first solo gallery show, “Faceless Portraits Transcending Time,” at the HG Contemporary in New York City. This show will feature two series of images — one surreal, the other abstract — generated from Renaissance-era works.

“For the abstract portraits, I selected images that were abstracted out of facial features yet grounded enough in familiar figures. I used titles such as portrait of a king and portrait of a queen to reflect generic conventions,” Elgammal wrote in a recent post. “For the surrealist collection, I selected images that intrigue the perception and invoke questions about the subject, knowing that the inspiration and aesthetics all are solely coming from portraits and photographs of people, as well as skulls, nothing else.”

The show runs February 13th through March 5th, 2019.

Images: Rutgers University
