Generative Zoology with Neural Networks
A couple of years ago, a paper appeared on my reading list titled "Progressive Growing of GANs for Improved Quality, Stability, and Variation". It describes the progressive growing of generative adversarial networks, which begin by producing low-resolution images and add detail as training continues. The paper attracted a great deal of attention because the authors used this idea to generate realistic and unique images of human faces.
Sample images created by the GAN
Looking at these images, it is easy to underestimate how much a neural network must learn in order to produce what this GAN produces. Some constraints seem relatively simple and reasonable, for example that the color of both eyes should match. But other aspects are fantastically complex and very difficult to formulate: what details are needed to tie eyes, mouth, and skin together into a coherent face? Of course, I am reasoning about a statistical machine as if it were a person, and our intuition can deceive us. It may turn out that there are relatively few workable variations, and that the solution space is more constrained than we imagine. Perhaps the most interesting thing is not the images themselves, but the uncanny effect they have on us.
Some time later, PhyloPic came up on my favorite podcast: a database of silhouette images of animals, plants, and other life forms. Thinking about it, I wondered: what would happen if you trained a system like the one described in the Progressive GAN paper on such a diverse dataset? Would we get many varieties of a few well-known animal types, or many variations giving rise to a neural-network-driven speculative zoology? However it turned out, I was fairly sure I could get some good prints for my study wall out of it, so I decided to satisfy my curiosity with an experiment.
I adapted the code from the Progressive GAN paper and trained the model for 12,000 iterations on Google Cloud (8 NVIDIA K80 GPUs) using the entire PhyloPic dataset. Total training time, including some mistakes and experiments, was 4 days. I used the final trained model to generate 50,000 individual images, and then spent hours looking through the results, categorizing, filtering, and grouping them. I also lightly edited some images, rotating them so that all the creatures face the same direction (for visual harmony). This hands-on approach means that what you see below is a kind of collaboration between me and the neural network: it was a creative process, and I made my own contribution to it.
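Generating tens of thousands of images from a trained GAN amounts to repeatedly sampling latent vectors and running them through the generator in batches. The sketch below is a minimal illustration of that loop, not the actual code I used; the `generator` callable, the 512-dimensional latent (the default in the Progressive GAN paper), and the dummy stand-in generator are all assumptions for the example.

```python
import numpy as np

def sample_images(generator, n_images, latent_dim=512, batch_size=32, seed=0):
    """Draw n_images samples from a trained GAN generator, batch by batch.

    `generator` is assumed to map an (n, latent_dim) array of latent
    vectors to an array of n images.
    """
    rng = np.random.default_rng(seed)
    batches = []
    for start in range(0, n_images, batch_size):
        n = min(batch_size, n_images - start)
        z = rng.standard_normal((n, latent_dim))  # Gaussian latent vectors
        batches.append(generator(z))
    return np.concatenate(batches, axis=0)

# Hypothetical stand-in generator: maps latents to tiny 4x4 "images".
dummy_gen = lambda z: np.tanh(z[:, :16]).reshape(-1, 4, 4)
imgs = sample_images(dummy_gen, n_images=50)
print(imgs.shape)  # (50, 4, 4)
```

With a real trained generator in place of `dummy_gen`, the same loop scales to the 50,000 samples described above; batching simply keeps GPU memory bounded.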
The first thing that surprised me was how aesthetically pleasing the results were. Much of this, of course, reflects the good taste of the artists who created the original silhouettes. But there were pleasant surprises too. For example, whenever the network enters a region of uncertainty, whether small details it has not yet mastered or flights of blurry biological fantasy, chromatic aberrations appear in the image. This is curious, because the input set is entirely black and white, which means color cannot be part of the solution to any generative problem the model learned during training. Any color is a pure artifact of the machine's mind. Surprisingly, one of the things that consistently triggers chromatic aberration is the wings of flying insects, which leads the model to generate hundreds of variations of brightly colored "butterflies" like those shown above. I wonder whether this could be a useful observation in general: if you train a model only on black-and-white images while requiring full-color output, colored patches may be a useful way to highlight regions where the model fails to accurately reproduce the training set.
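That observation suggests a simple diagnostic: for a model trained only on grayscale data, any per-pixel deviation of the RGB channels from their shared gray value is a color artifact, so a chromaticity map doubles as a rough uncertainty map. The following is a minimal sketch of that idea (the function name and threshold are mine, not from the original experiment).

```python
import numpy as np

def chroma_map(rgb):
    """Per-pixel chromaticity of an RGB image with values in [0, 1].

    Returns, for each pixel, the largest deviation of any channel from
    the pixel's mean (gray) value. Zero means perfectly gray; high
    values flag the "chromatic aberration" regions discussed above.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    gray = rgb.mean(axis=-1, keepdims=True)    # per-pixel gray level
    return np.abs(rgb - gray).max(axis=-1)     # max channel deviation

# A pure grayscale image scores zero everywhere...
gray_img = np.full((4, 4, 3), 0.5)
print(chroma_map(gray_img).max())  # 0.0

# ...while a saturated pixel scores high.
img = gray_img.copy()
img[0, 0] = [1.0, 0.2, 0.2]        # a simulated color artifact
print(chroma_map(img)[0, 0] > 0.5)  # True
```

Thresholding such a map over a batch of outputs would give a cheap, automatic way to surface the regions the model struggles with, without any access to the model's internals.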
The bulk of the output is a huge variety of fully recognizable silhouettes: birds, various tetrapods, many small graceful carnivorous dinosaurs, lizards, fish, beetles, arachnids, and humanoids.
Where the familiar creatures end, the unfamiliar begins. One of my questions was: would the model produce plausible body plans for animals that do not exist in nature (perhaps hybrids of creatures from the input dataset)? With a thorough search and a little pareidolia, I found hundreds of strange tetrapods, snake-headed deer, and other fantastic monsters.
Going even further into the unknown, the model produced strange abstract patterns and unidentifiable entities that nonetheless convey a certain sense of "aliveness."
What the images above do not convey is the sheer variety of the output. I printed and framed several of these image sets, and the effect of hundreds of small, detailed drawings placed side by side at scale is quite striking. To give some sense of the scope of the full dataset, I include one example printout below: a random sample from the unfiltered body of images.