## The Event Horizon Telescope (EHT)

Everyone is talking about the image of the black hole at the center of the galaxy M87, which has been named Powehi. The image was created last week by the Event Horizon Telescope (EHT) research group, an international collaboration of scientists who have been working to capture it for a number of years.

When I saw the news I was excited and awed, feelings that have not been aroused by the news for some time. Finally an example of human beings coming together, transcending petty tribal and national conflicts to build something that really increases the overall sum of human knowledge. Real progress.

I was very curious as to how the image was actually made. This is my first question when I see “photographs” of extremely distant objects, because these images are never created through the same process as normal optical cameras. In fact, most modern space telescopes are arrays of very sophisticated sensors which feed immense amounts of data into image-processing algorithms.

A recent innovation in deep-space observation was the development of the Event Horizon Telescope, which is actually an array of eight radio telescopes at six sites: Arizona, Hawaii, Mexico, Chile, Spain and the South Pole. Using a technique called Very Long Baseline Interferometry (VLBI), observations from the eight telescopes are combined into a single image. The “Very Long Baseline” refers to the diameter of the massive “virtual telescope” created by the array; “interferometry” is a better description than photography or telemetry for what actually goes on inside the brains of these telescopes.
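Interferometry, at its core, means combining the signals recorded at different stations and exploiting the correlations between them. As a toy illustration (not the EHT’s actual correlator, and with made-up numbers), here is a sketch of how cross-correlating two noisy recordings of the same signal recovers their relative delay:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "source" signal, observed by two stations with a relative
# delay of 25 samples plus independent receiver noise.
source = rng.normal(size=2000)
delay = 25
station_a = source + 0.5 * rng.normal(size=source.size)
station_b = np.roll(source, delay) + 0.5 * rng.normal(size=source.size)

# Cross-correlate the two streams; the lag that maximises the
# correlation estimates the relative delay between the stations.
lags = np.arange(-100, 101)
corr = [np.dot(station_a, np.roll(station_b, -lag)) for lag in lags]
estimated_delay = lags[int(np.argmax(corr))]
print(estimated_delay)  # recovers the 25-sample offset
```

A real VLBI correlator does essentially this, but on petabytes of raw voltage data recorded against atomic clocks at each site.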

There’s a reason that we need a big telescope to see very distant objects (the M87 black hole is more than 50 million light years from Earth), or more precisely, objects which require a very high angular resolution to resolve. A telescope’s ability to distinguish between closely spaced distant objects (in the language of astronomy, to “resolve” them) is proportional to the observing wavelength and inversely proportional to the telescope’s diameter:

$\vartheta = \frac{2.5\times 10^5\lambda}{D}$

where $\vartheta$ is the resolving power of the telescope (technically: the separation in arcseconds that the telescope is capable of resolving as distinct), $\lambda$ is the wavelength of the light being collected and $D$ is the telescope’s diameter. So in order to successfully image the M87 black hole, which subtends only around $4\times 10^{-5}$ arcseconds (about 40 microarcseconds), at the telescope’s observing wavelength of $1.3\times 10^{-3}$ m, we can compute that $D$, the telescope’s diameter, needs to be around 8,000 kilometers.
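The arithmetic is worth checking, using the ~40 microarcsecond apparent size of the M87 black hole and the 1.3 mm observing wavelength:

```python
# Solving theta = 2.5e5 * lambda / D for D, with a 1.3 mm observing
# wavelength and a target angular size of roughly 4e-5 arcseconds
# (about 40 microarcseconds).
wavelength_m = 1.3e-3
theta_arcsec = 4e-5
D_m = 2.5e5 * wavelength_m / theta_arcsec
print(D_m / 1000)  # ~8125 km, roughly the diameter of the Earth
```

No single dish can be that large, which is exactly why the “virtual telescope” spanning the Earth is needed.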

I’ll also list just some of the considerations required in order to get a consistent signal from this virtual telescope:

• correction for the time difference between the telescopes. Light (or radio waves, or other electromagnetic radiation) from the target of observation will hit the different telescopes at slightly different times, because each telescope sits at a slightly different distance along the incoming wavefront. The differences in time are minuscule but important when trying to piece together an image from eight streams of continuous data.
• correction for the different atmospheric conditions around each telescope. The atmosphere is not homogeneous, and different mixtures of gases will distort the signal differently as it approaches each telescope.
• correction for the rotation of the Earth during measurement. From the perspective of each telescope, the target of observation will trace out an elliptical path in the sky as the Earth rotates, which needs to be calculated so it can be corrected for.
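The first correction above can be sketched numerically: for a very distant source, the extra travel time to one station relative to another is the projection of the baseline vector between them onto the source direction, divided by the speed of light. The baseline and source direction below are hypothetical:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Hypothetical baseline vector (station B minus station A) in metres,
# of roughly intercontinental size.
baseline_m = np.array([5.0e6, 2.0e6, 1.0e6])

# Unit vector pointing from the Earth toward the source.
source_dir = np.array([0.3, 0.4, np.sqrt(1 - 0.3**2 - 0.4**2)])

# Geometric delay: how much later the wavefront arrives at station B.
delay_s = np.dot(baseline_m, source_dir) / C
print(delay_s)  # of order 10 milliseconds for an Earth-sized baseline
```

Milliseconds sound small, but at the EHT’s observing frequency that is billions of wave cycles, so the delay must be modelled (and tracked as the Earth rotates) with extreme precision.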

Once consistent measurements are obtained, the algorithm then needs to piece them together into an image. I didn’t realise this, but the statistical models underlying these kinds of celestial imaging projects rely on fairly rough Bayesian priors, or assumptions about the structure of the data before it has actually been observed. These might include, for example, an assumption about the range of visible wavelengths in the image, whether or not the source is spherical, etc. An algorithm then builds an image which best fits the observed data and the prior assumptions (“best fits” can be formalised mathematically using maximum likelihood estimation).
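To make the Bayesian flavour concrete, here is a toy sketch of recovering an image from too few noisy linear measurements, assuming a simple Gaussian prior. This is a cartoon of the idea, not the EHT collaboration’s actual reconstruction pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "imaging" problem: recover an n-pixel image from noisy linear
# measurements y = A x + noise. The measurements alone under-determine
# the image, so a prior is needed to pick one solution.
n = 20
true_image = np.exp(-0.5 * ((np.arange(n) - 10) / 3.0) ** 2)  # smooth blob
A = rng.normal(size=(12, n))          # only 12 measurements of 20 pixels
y = A @ true_image + 0.01 * rng.normal(size=12)

# Gaussian likelihood + Gaussian prior => the maximum a posteriori
# estimate is a ridge-style least-squares solve:
#   argmin_x  ||A x - y||^2 + prior_weight * ||x||^2
prior_weight = 0.1
x_map = np.linalg.solve(A.T @ A + prior_weight * np.eye(n), A.T @ y)
```

The prior term is what lets the algorithm choose among the infinitely many images consistent with the data, which is why the choice of prior was scrutinised so carefully in the EHT’s published analysis.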

My purist’s aesthetics are in conflict when I think about these sorts of methods. On the one hand, it is undeniably remarkable and awe-inspiring that the human race is able to collect this 50-million-year-old light and arrange it into a picture which represents how it might look. But I also think it’s a little disingenuous to call it a photograph, and that most people will not even try to understand the way the image was made, simply assuming that we pointed a camera at a black hole and took this picture. To me the more interesting properties of this image are that astronomers were able to use it to get a much more accurate estimate of the mass of the black hole, and even its direction of rotation (clockwise from the perspective of Earth).

• The paper describing the algorithm used to obtain the black hole image from the EHT.