Machine Learning or Artificial Intelligence
Whatever you call it, it has come to disrupt our business. At the Google Cloud Platform Next event in San Francisco this spring, the entrance to the hall featured an area where a picture was taken of you and a vast army of computers within Google interpreted your expression. It was showing off the image Application Programming Interface (API) they had developed. Nice trick, I thought, and continued to walk around. During the keynote session, the speakers emphasized the work going into machine learning and again showed off the image API, but here the focus was different: they demonstrated how a photograph could be converted into a painting in the style of your favorite painter. So my face could be styled like the Expressionist painting “The Scream” by Edvard Munch. This piqued my interest. Afterwards I sought out a manager and asked a simple question: “Could you take a high-res TIFF and interpret it in the style of an art director?” After a pause, she smiled and said yes, that could be done. She then asked how many programmers I had working for me and whether we would be interested in partnering with Google on such a project. Although International Color Services has come up with some nifty programs like iCatalog and Proofics, I do not have the staff to work on R&D projects, so I had to decline.
Fast forward to my family vacation in Homer, Alaska. My wife pulls out her iPhone and takes a picture of Grewingk Glacier with a field of fireweed in the foreground. A lovely picture. She then opens an app called Prisma and, voilà, the image is stylized like a painting. She likes the picture, but my mind is blown, because in the three months since the Google conference, an app has been developed that lets anyone use this technology and get their images converted in a few seconds, OFF OF A PHONE. Again, if an image can be converted into a Van Gogh, it can be converted into the style of an art director.
Some of you may be asking, why is this a big deal? Envision today's world, where the client sends images to the pre-press house and gets scheduled for a first round of color. The files are downloaded to a workstation, worked in Photoshop, an Epson proof is produced, given to the Quality Control department, and then sent to the client. With machine learning, instead of the team working the color, the files would be processed by the Google API and handed back in a few minutes. Partner that technology with ICS’ Proofics and it is possible to compress the workflow from weeks to days.
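To make the compressed workflow concrete, here is a minimal sketch of that automated color pass. Everything in it is an illustrative assumption: `stylize()` is a stub standing in for a call to a style-transfer service (not Google's actual API), and the file names are invented.

```python
# Hypothetical sketch of the compressed workflow described above.
# stylize() is a STUB standing in for a machine-learning service call;
# all function and file names are illustrative assumptions.

def stylize(image_bytes, style="house art director"):
    """Placeholder for a style-transfer API call. A real call would
    upload the file and return the restyled image."""
    return image_bytes  # stub: pass the image through unchanged

def automated_color_pass(client_files, style):
    """Replace the manual Photoshop pass with an API round trip."""
    results = []
    for name, data in client_files.items():
        styled = stylize(data, style=style)
        results.append((name, styled))  # ready for proofing / QC
    return results

# Usage: two incoming client files, processed in one automated pass.
incoming = {"cover.tif": b"...", "spread_01.tif": b"..."}
proofs = automated_color_pass(incoming, style="house art director")
print([name for name, _ in proofs])  # -> ['cover.tif', 'spread_01.tif']
```

The point of the shape, not the stub, is that the human touch moves downstream: operators shift from working every file to reviewing machine output at the QC step.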
Will this happen tomorrow? No. Working high-res TIFFs, which can be 200 MB or greater, takes substantially more computational power than a 3 MB JPEG taken off a phone. And learning the style of an art director will take some effort. But it won’t take three years either.
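To put that computational gap in rough numbers, here is a back-of-the-envelope sketch. The dimensions are assumptions chosen to be typical, not measurements: a 12-megapixel phone shot for the 3 MB JPEG, and a 16-bit CMYK scan for the 200 MB TIFF. What matters to a style-transfer model is the uncompressed pixel data, not the file size on disk:

```python
# Rough comparison of the raw pixel data a style-transfer model must
# process. All dimensions below are illustrative assumptions.

def uncompressed_mb(width, height, channels=3, bytes_per_channel=1):
    """Raw pixel data size in megabytes (1 MB = 1,000,000 bytes)."""
    return width * height * channels * bytes_per_channel / 1_000_000

# A 12-megapixel phone JPEG: 4000 x 3000, RGB, 8 bits per channel.
phone = uncompressed_mb(4000, 3000)

# A press-ready TIFF: 10000 x 8000, CMYK (4 channels), 16 bits
# (2 bytes) per channel -- plausible for a 200 MB file.
press = uncompressed_mb(10000, 8000, channels=4, bytes_per_channel=2)

print(f"phone image: {phone:.0f} MB of raw pixels")   # 36 MB
print(f"press TIFF:  {press:.0f} MB of raw pixels")   # 640 MB
print(f"ratio: roughly {press / phone:.0f}x more data")
```

Under these assumptions the press file carries roughly an order of magnitude more pixel data, and that is before counting the tighter quality tolerances of print work.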
So should Photoshop be scheduled for end of life in 2020? Should we invest in companies like Prisma-AI.com, the makers of the nifty iPhone app, and short Adobe? I’ll let you make your own investment decisions, but machine learning is coming to prepress, and it will change the way we prepare print. From art directors to desktop operators, our world will change.