This AI system can generate images of artificial galaxies

VentureBeat
Above: Galaxy images. The top row was generated by the AI system; the middle row is from the Galaxy Zoo 2 dataset; and the bottom row shows the absolute difference between the two.

Image Credit: University of Edinburgh

Picture this: star clusters, nebulas, and other interstellar phenomena created out of whole cloth by a computer, with no human supervision. It might sound like the description of a futuristic holodeck, but researchers at the University of Edinburgh’s Institute for Perception and Institute for Astronomy have designed such a system with the help of artificial intelligence (AI).

In a paper published on the preprint server arXiv.org (“Forging new worlds: high-resolution synthetic galaxies with chained generative adversarial networks”), they describe an AI model capable of generating high-resolution images of synthetic galaxies that closely follow the distributions of real galaxies.

“Astronomy of the 21st century finds itself with extreme quantities of data, with most of it filtered out during capture to save on memory storage,” they wrote. “This growth is ripe for modern technologies such as deep learning. Since galaxies are a prime contender for such applications, we explore the use of [AI] to produce … galaxy images.”

Core to the team’s machine learning model architecture are generative adversarial networks (GANs), two-part neural networks consisting of generators that produce samples and discriminators that attempt to distinguish between the generated samples and real-world samples. It’s not a stretch to characterize GANs as a wunderkind of AI models; they’ve been used to discover new drugs, create convincing photos of burgers and butterflies, and even produce artificial scans of brain cancer.
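The adversarial setup described above boils down to two opposing objectives: the discriminator tries to score real samples high and generated samples low, while the generator tries to make its samples score high. A toy numpy sketch of the standard GAN losses (the score values here are purely illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(real_scores, fake_scores):
    # The discriminator wants real samples scored high and fakes scored low.
    return -np.mean(np.log(sigmoid(real_scores)) + np.log(1.0 - sigmoid(fake_scores)))

def generator_loss(fake_scores):
    # The generator wants the discriminator to score its fakes as real.
    return -np.mean(np.log(sigmoid(fake_scores)))

# Toy scores: the discriminator is confident about real data, doubtful of fakes.
real_scores = np.array([2.0, 3.0, 1.5])
fake_scores = np.array([-0.5, 0.1, -1.0])
d_loss = discriminator_loss(real_scores, fake_scores)
g_loss = generator_loss(fake_scores)
```

Training alternates between minimizing these two losses; as the generator improves, the discriminator's job gets harder, and vice versa.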

Above: Additional samples produced by the AI system.

The proposed galaxy-generating system is made up of two five-layer GANs: a Stage-I GAN and a Stage-II GAN. The first generates low-resolution (64 x 64-pixel) images, while the second converts them into higher-resolution (128 x 128-pixel) images using a technique called super-resolution. In practice, the researchers noted, the Stage-II GAN hallucinates missing pixels, targeting realism rather than accuracy.
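The chained two-stage flow can be sketched as a pipeline in which the first stage maps noise to a small image and the second upscales it. This is a minimal structural sketch, assuming nothing about the networks themselves: both stages are placeholders (random output and nearest-neighbour upsampling) standing in for the paper's trained five-layer GANs.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage_one_generator(noise):
    # Placeholder for the Stage-I GAN's generator: noise -> 64 x 64 image.
    # A trained network would map the noise vector to a plausible galaxy.
    return rng.standard_normal((64, 64))

def stage_two_super_resolution(low_res):
    # Placeholder for the Stage-II GAN: 64 x 64 -> 128 x 128. Here we use
    # nearest-neighbour upsampling; the real model instead hallucinates
    # plausible high-frequency detail for the missing pixels.
    return np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)

noise = rng.standard_normal(100)
low = stage_one_generator(noise)
high = stage_two_super_resolution(low)
```

The point of chaining is that each stage solves an easier sub-problem: generating a coarse galaxy, then refining it, rather than generating a 128 x 128 image in one shot.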

To “encourage” the generator in the Stage-II GAN to spit out synthetic galaxy images similar to their upscaled, real-image counterparts, the paper’s authors introduced a “dual-objective function” that computed an error metric between resolution-enhanced images and real galaxies. The result was a greater number of generated samples retaining “rarer” characteristics of the galaxies, such as spiral arms.
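A dual objective of this kind can be sketched as an adversarial term plus a pixel-wise error term. The mean-squared-error metric and the weighting below are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def dual_objective(adv_loss, generated, real, weight=1.0):
    # Combine the adversarial loss with a pixel-wise error between the
    # super-resolved output and the real high-resolution image, so the
    # generator is pulled toward faithful detail rather than pure realism.
    pixel_error = np.mean((generated - real) ** 2)
    return adv_loss + weight * pixel_error
```

Intuitively, the adversarial term rewards images that look like galaxies in general, while the pixel term penalizes drifting away from the specific source galaxy, which helps rare features such as spiral arms survive the upscaling.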

The researchers trained the AI system on a PC with a single Nvidia GTX 1060 GPU, feeding it full-color images of stars and planetary bodies from the Galaxy Zoo 2 dataset, a crowd-sourced astronomy project. They considered four properties in evaluating the results: ellipticity, or the degree of deviation from circularity; angle of elevation from the horizontal; total flux; and the size measurement of the semi-major axis (one half of the ellipse’s longest diameter).
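The four evaluation properties can all be estimated from an image's intensity-weighted second moments. The sketch below uses that standard moments-based approach; it is an assumption for illustration, not necessarily the measurement pipeline the authors used.

```python
import numpy as np

def galaxy_properties(image):
    """Estimate total flux, ellipticity, semi-major axis, and position
    angle from intensity-weighted image moments."""
    flux = image.sum()
    ys, xs = np.indices(image.shape)
    cx = (xs * image).sum() / flux          # intensity-weighted centroid
    cy = (ys * image).sum() / flux
    mxx = ((xs - cx) ** 2 * image).sum() / flux
    myy = ((ys - cy) ** 2 * image).sum() / flux
    mxy = ((xs - cx) * (ys - cy) * image).sum() / flux
    # Eigenvalues of the second-moment matrix give the squared axis lengths.
    common = np.sqrt(((mxx - myy) / 2) ** 2 + mxy ** 2)
    a = np.sqrt((mxx + myy) / 2 + common)   # semi-major axis
    b = np.sqrt((mxx + myy) / 2 - common)   # semi-minor axis
    angle = 0.5 * np.degrees(np.arctan2(2 * mxy, mxx - myy))
    return flux, 1 - b / a, a, angle

# A circular Gaussian "galaxy" should come out nearly round.
ys, xs = np.indices((64, 64))
img = np.exp(-((xs - 32) ** 2 + (ys - 32) ** 2) / (2 * 5.0 ** 2))
flux, ellipticity, semi_major, position_angle = galaxy_properties(img)
```

Comparing the distributions of these statistics over generated versus real samples is a natural way to check that the synthetic galaxies "closely follow the distributions of real galaxies."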

The result? The model produced “physically realistic” images closely resembling the characteristics of real galaxies captured on camera. The researchers posit that it might be used to augment databases of real samples, serving as a data source for deep learning models — such as those designed to classify and segment galaxy images — that require a large number of training samples.

“Generative models that are able to create physically realistic galaxy images have many practical uses,” they wrote. “[Our] work demonstrates the potential of GAN architectures as a valuable tool for modern-day astronomy.”
