MIT scientists find a new way to image black holes

A black hole named Cygnus X-1.

In a recent study, researchers developed a new algorithm that could help astronomers produce the first image of a black hole.

MIT’s Computer Science and Artificial Intelligence Laboratory, the Harvard-Smithsonian Center for Astrophysics, and the MIT Haystack Observatory led the study.

The algorithm would stitch together data collected from radio telescopes scattered around the globe, under the auspices of an international collaboration called the Event Horizon Telescope.

The project seeks, essentially, to turn the entire planet into a large radio telescope dish.

Because of their long wavelengths, however, radio waves require large antenna dishes to produce sharp images.

The largest single radio-telescope dish in the world has a diameter of 1,000 feet, but an image it produced of the moon, for example, would be blurrier than the image seen through an ordinary backyard optical telescope.

The solution adopted by the Event Horizon Telescope project is to coordinate measurements performed by radio telescopes at widely divergent locations.

Currently, six observatories have signed up to join the project, with more likely to follow.

But even twice that many telescopes would leave large gaps in the data as they approximate a 10,000-kilometer-wide antenna. Filling in those gaps is the job of the new algorithm.
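To see why an Earth-sized virtual dish is worth the trouble, it helps to recall that a telescope's diffraction-limited angular resolution scales roughly as wavelength divided by aperture diameter. The sketch below compares a large single radio dish, a small backyard optical telescope, and a 10,000-kilometer virtual array; all the specific numbers are illustrative assumptions, not figures from the study.

```python
import math

def resolution_arcsec(wavelength_m: float, diameter_m: float) -> float:
    """Approximate diffraction-limited resolution (smaller is sharper)."""
    return math.degrees(wavelength_m / diameter_m) * 3600

# Assumed, illustrative parameters:
single_dish = resolution_arcsec(0.21, 305.0)    # 305 m radio dish at a 21 cm wavelength
backyard    = resolution_arcsec(550e-9, 0.1)    # 10 cm optical telescope, visible light
earth_array = resolution_arcsec(1.3e-3, 1.0e7)  # 10,000 km virtual dish at 1.3 mm

print(f"dish: {single_dish:.0f}\"  backyard: {backyard:.2f}\"  array: {earth_array:.1e}\"")
```

Under these assumptions the huge radio dish resolves only about 140 arcseconds, far blurrier than the small optical telescope's roughly 1 arcsecond, while the planet-spanning array reaches tens of microarcseconds, which is the kind of resolution needed to image a black hole's shadow.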

Usually, an astronomical signal will reach any two telescopes at slightly different times.

Accounting for that difference is essential to extracting visual information from the signal, but the Earth’s atmosphere can also slow radio waves down.

This exaggerates differences in arrival time and throws off the calculation on which interferometric imaging depends.
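The arrival-time difference between two stations can be sketched with simple geometry: the wavefront reaches the nearer telescope first, and the extra path length divided by the speed of light gives the delay that the imaging software must account for. The numbers below are hypothetical, chosen only to show the scale of the effect.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Hypothetical geometry: two telescopes on a 3,000 km baseline,
# observing a source 30 degrees from the baseline direction.
baseline_m = 3.0e6
angle_rad = np.deg2rad(30.0)

# Geometric delay: extra path length over the speed of light.
delay_s = baseline_m * np.cos(angle_rad) / C

# At a 1.3 mm observing wavelength, that delay spans an enormous
# number of wave cycles, so even a tiny timing error from the
# atmosphere scrambles the measured phase.
wavelength_m = 1.3e-3
cycles = delay_s * C / wavelength_m
print(f"geometric delay: {delay_s * 1e3:.2f} ms, {cycles:.2e} cycles")
```

Because the delay corresponds to billions of wave cycles, an atmospheric slowdown of even a fraction of a nanosecond shifts the phase by many cycles, which is why uncorrected atmospheric noise ruins the interferometric calculation.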

The researchers adopted a clever algebraic solution to this problem. If the measurements from three telescopes are multiplied, the extra delays caused by atmospheric noise cancel each other out.

This does mean that each new measurement requires data from three telescopes, not just two, but the increase in precision makes up for the loss of information.
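The cancellation described above can be demonstrated numerically. In the toy model below, each station contributes an unknown atmospheric phase error, and each baseline between stations i and j picks up the difference of the two errors; multiplying the three complex measurements around a telescope triangle sums the phases, so the station errors cancel in pairs. This is a minimal sketch of the algebraic idea, with made-up phase values rather than real telescope data.

```python
import numpy as np

rng = np.random.default_rng(0)

# True visibility phases for the three baselines of a telescope
# triangle: (1,2), (2,3), (3,1). Values are hypothetical.
true_phases = rng.uniform(-np.pi, np.pi, size=3)

# Unknown atmospheric phase error at each of the three stations.
e = rng.uniform(-np.pi, np.pi, size=3)

# A baseline (i, j) is corrupted by the error difference e[i] - e[j].
corrupted = np.array([
    true_phases[0] + (e[0] - e[1]),
    true_phases[1] + (e[1] - e[2]),
    true_phases[2] + (e[2] - e[0]),
])

# Multiplying the three complex measurements sums their phases;
# the station errors cancel around the closed triangle.
product = np.prod(np.exp(1j * corrupted))
recovered = np.angle(product)
expected = np.angle(np.exp(1j * true_phases.sum()))

print(np.isclose(recovered, expected))
```

The recovered phase matches the atmosphere-free value exactly, but note what was traded away: only the sum of the three baseline phases survives, not each phase individually, which is the loss of information the article mentions.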

The researchers also used a machine-learning algorithm to identify visual patterns that tend to recur in 64-pixel patches of real-world images, and they used those features to further refine the algorithm’s image reconstructions.
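One simple way to see what "recurring patterns in 64-pixel patches" means is to slice an image into overlapping 8-by-8 patches and look at the directions in patch space that explain most of the variation. The sketch below uses a random toy image and plain principal-component analysis, which is only a stand-in for the study's actual learned prior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for real-world training imagery (hypothetical data).
image = rng.random((64, 64))

# Extract every overlapping 8x8 (64-pixel) patch as a flat vector.
P = 8
patches = np.array([
    image[i:i + P, j:j + P].ravel()
    for i in range(image.shape[0] - P + 1)
    for j in range(image.shape[1] - P + 1)
])

# The leading principal components of the centered patch set capture
# the 64-pixel patterns that recur most strongly; patterns like these
# can score how plausible a candidate reconstruction looks.
patches -= patches.mean(axis=0)
_, _, vt = np.linalg.svd(patches, full_matrices=False)
top_patterns = vt[:10]    # the 10 strongest 64-pixel patterns

print(top_patterns.shape)
```

Trained on real photographs and astronomical images rather than noise, such patch statistics favor reconstructions that look like plausible images, helping the algorithm choose among the many images consistent with the sparse telescope data.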

With the Event Horizon Telescope project, “there is a large gap between the needed high recovery quality and the little data available,” says Yoav Schechner, a professor of electrical engineering at Israel’s Technion, who was not involved in the work.

“This research aims to overcome this gap in several ways: careful modeling of the sensing process, cutting-edge derivation of a prior-image model, and a tool to help future researchers test new methods.”


News source: MIT.
Figure legend: This image is credited to M. Weiss/NASA/CXC.