The Deep Field Paradox


Normally, objects that are farther away look smaller. That is the basic rule of perspective. Without this rule, everything would be infinitely large right in front of us and we wouldn't be able to see anything. So, thank you, perspective. In outer space, however, if we look deep enough, objects start to look larger than they should, as if they were closer. This is because the universe has been expanding, and the earliest objects are seen as they were at a time when everything was much closer together. This phenomenon is rarely mentioned by astronomers in public forums, and then only in passing, without much excitement.

Maybe because I am a visual artist, I can't quite get over this notion. And, because I am a complete amateur when it comes to astronomy, I am perhaps misunderstanding the situation, so please correct me if that is the case. As far as I can tell from researching the published science, however, this is actually the case, or is believed to be the case; I haven't come across any actual observations. Also, it can only be the case if the big bang theory turns out to be true. And, vice versa, if very distant objects do not look enlarged (yet fainter), the theories of the big bang and the expanding universe are challenged.

I came upon this notion when the first Hubble Deep Field images were published; they raised some questions for me regarding the optics of the expanding universe. I call it the Deep Field paradox.

In 1995 NASA created an image called the Hubble Deep Field (HDF). It was made by combining hundreds of long exposures at different wavelengths. It shows a region of the sky the size of a grain of sand held at arm's length (see more analogies and superlatives at the end of this post) in the greatest detail ever achieved, exposing extremely distant galaxies at a very early time in the history of the universe. It is considered by many the greatest photograph ever taken. But, more importantly, the image was the first to show such a density of galaxies in what had until then been assumed to be an empty region of the sky, indicating that there are far more galaxies in the visible universe than we had known before. The patch of sky was selected to avoid interference from closer objects. Several more images have been created since then, such as the Hubble Ultra Deep Field (HUDF) and the eXtreme Deep Field (XDF), made with even longer exposures and showing objects even further back in time. The Hubble Deep Field South focused on a similarly blank region in the southern hemisphere and showed that this density of galaxies is probably uniform in all directions.



Some of the objects in the image are seen as they were only a few hundred million years after the big bang, which is now believed to have occurred close to 13.8 billion years ago. This ever-increasing reach towards the temporal beginning of the universe might suggest that, since our technology keeps improving, we will eventually be able to see the big bang itself. While it is true that we will keep seeing further back than at present, there is actually a limit: in its earliest few hundred thousand years the universe was an opaque, murky fog of plasma, and even after it cleared there was a starless period referred to as the dark ages. Once our telescopes reach that far back, visibility will stop (although we can already detect the cosmic background radiation, a kind of indirect image of the big bang, or maybe not).

But the image still raised two questions for me, one of which turned out to be based on a misunderstanding; the other turned out to be a very strange paradox, which I am still trying to fully understand.

The first question I had regarding this image and the notion of the expanding universe is a common one, often phrased as: “How can we see the early universe and the Big Bang? Shouldn’t the light have already passed us?” (By “us” I mean not us presently, but the stuff that we are made of, our previous materials in the past.) If everything was so close together, and nothing can travel faster than the speed of light, shouldn't all that light and information have already reached us once before? And how can we now see it again?

This questioning logically follows from several misunderstandings based on the simplifications and analogies astronomers use when trying to explain the universe to us laymen. The explanation begins like this: the visible universe started out as a singularity, a point so small it was without dimension, and then very rapidly expanded into a ball the size of (insert here golf ball, grapefruit, orange, etc.) and then into a space many light years across. It is still expanding, though less rapidly. This narrative is factually correct, as far as the scientific consensus is concerned, but the language creates a lot of confusion. When the golf ball is mentioned (or some fruit), astronomers are talking about the very early size of the CURRENTLY VISIBLE universe and NOT (and this is key) the ENTIRE universe. By visible they mean observable, as in limited by time. (Sometimes “observable” means visible as opposed to invisible to our eyes, as with infrared light, or dark energy and dark matter, but not here.) So imagine a kind of large “sphere of the visible,” 13.7 billion years in temporal radius in all directions, and shrink it down to a golf ball. The edges of the ball represent “then” what the outer visible edges are now.

But that sphere is not all there is. The universe goes on from there. And the universe “out there” at “the edge” is not any older or denser than it is around here; it only looks that way in our image of it. So if we waited another billion years, the observable universe would be a billion light years “larger” or "further out," showing us even more. The same would happen if we could instantly transport to a place a billion light years away and looked out in the same direction that we travelled: we would see another billion light years further. So, the universe continues beyond the HUDF image; we just can't see it yet. This photograph, like any others that have gone or will go deeper, does not show “the edge” of the universe, just a period close to the beginning of time (which is amazing enough).

How much more of the Universe is there? That is still being debated. It could be infinite (if its overall geometry is flat, or curved like a saddle) or finite (if it curves back on itself like the surface of a sphere, but in higher dimensions). In either case it is much larger than what we can now observe.

Let's get back to the golf ball. At that moment in the early expansion, there would have been many more golf balls, possibly an infinite number of them, so it makes sense that we didn't already see all of the universe at that moment in time.

But this infinity still doesn't explain why we wouldn't have seen at least the inner edges of our golf ball at that time. The explanation for this requires mentioning the second misunderstanding regarding space-time: the speed-of-light limitation. Early on (and still today) the universe expanded much faster than the speed of light. But isn't this impossible? No. Light within space cannot travel faster than the speed of light, but space itself can expand faster. Space expands equally everywhere, very slowly at any one place, but the expansion accumulates over distance, so the farthest objects are receding from us much faster than the speed of light. The outer rim of the golf ball never had a chance to optically reach us THEN because space was expanding so much faster than light. It has reached us now in the form of those earliest galaxies. Why now? Because, by definition, what reaches us now is the visible edge. What reaches us tomorrow is the (slightly larger) visible edge of tomorrow. The answer to “Shouldn’t the light have already passed us?” is that the expansion of space was faster than light, and yet the light eventually reaches us. For objects whose light is nearly as old as the universe, it took that long to reach us. Imagine someone in a car driving away from you who throws a ball towards you. Even if the car was very close to you at first, the ball will take a while longer to reach you because of the movement of the car.
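Here is a minimal sketch (my own illustration, not from any of the sources in this post) of how expansion outruns light at large distances. It simply applies Hubble's law, v = H0 × d, with an assumed Hubble constant of about 70 km/s per megaparsec; the exact numbers are not the point.

```python
# Hubble's law sketch: recession speed grows linearly with distance,
# so beyond a certain distance (the "Hubble radius") it exceeds the
# speed of light. Assumes H0 ~ 70 km/s/Mpc.

C_KM_S = 299_792.458          # speed of light, km/s
H0 = 70.0                     # Hubble constant, km/s per megaparsec (assumed)
MPC_PER_GLY = 1e9 / 3.2616e6  # megaparsecs in one billion light years (~306.6)

def recession_speed(distance_gly):
    """Recession speed in km/s for an object at the given proper distance in Gly."""
    return H0 * distance_gly * MPC_PER_GLY

for d in (1, 5, 14, 33):      # distances in billions of light years
    v = recession_speed(d)
    print(f"{d:4.0f} Gly -> {v:9.0f} km/s ({v / C_KM_S:.2f} c)")

# Beyond roughly 14 Gly the recession is faster than light, yet light emitted
# from such regions can still reach us eventually, because the light's distance
# from us and the expansion rate both change over time.
```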

We know these deep field objects are very far away because their light is extremely redshifted. Redshift, the reddening of light waves, or the stretching of all electromagnetic waves, is due to the expansion of space and its effect on light as it travels across space-time. Redshift, whose relation to distance was discovered by Edwin Hubble, is extremely important as a measure of the age of light, and therefore of distance. But the age of light (light years) as a measure of distance is a misleading concept in deep space. Light years indicate the distance the light has travelled through space, which gets distorted at large distances and so no longer matches the actual present distances.

A typical quote that promotes the misunderstanding regarding time versus distance is from Time magazine online: “The XDF [Extreme Deep Field image] goes even farther [than the HDF image], capturing objects some 13.2 billion light-years away”. It doesn't. It captures light that is 13.2 billion years old, which in turn tells us a lot about the objects' distance, but that IS NOT their distance. Objects whose light is 13.2 billion years old are about 32.69 billion light years away NOW, because after they released their light, they kept moving away from us with the expansion of space.
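For readers who want to check the distinction themselves, here is a short, hedged sketch using the astropy library's built-in cosmologies (assuming astropy is installed; the exact numbers depend on the cosmological parameters, so they will differ a little from the 32.69 figure above, which comes from a different calculator):

```python
# Convert a light travel (lookback) time of 13.2 billion years into a
# redshift, then ask how far away such an object is NOW (comoving distance).
import astropy.units as u
from astropy.cosmology import Planck13, z_at_value

z = z_at_value(Planck13.lookback_time, 13.2 * u.Gyr)            # redshift with that lookback time
d_now_gly = Planck13.comoving_distance(z).to(u.lyr).value / 1e9  # present distance, in Gly

print(f"redshift: z ~ {float(z):.1f}")
print(f"distance now: ~ {d_now_gly:.0f} billion light years")    # on the order of 30 Gly
```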

With close-by objects the time distortion is a comfortable concept. We see the moon as it was about 1.3 seconds ago, the sun 8 minutes ago, Mars somewhere between 3 and 22 minutes ago (depending on where it is in its orbit), and the nearest other star, Proxima Centauri, 4.2 years ago. But we see these objects at roughly their correct distances, according to conventional perspective and the relationship between the speed of light and distance. However, when we observe other galaxies, the great distances mean that we see things not only in the past, but at drastically wrong distances, because the subtle expansion of space accumulates over distance and becomes a strong distortion.

This distance distortion, the fact that there is a discrepancy between the age of the light (in this case 13.2 billion years) and the object's present position (32.69 billion light years away), is also not a difficult concept. Light takes time to travel, so things are no longer where they used to be (and look very different now).

But there is a third distortion, and this is the Deep Field paradox: some of the furthest away objects appear closer to us, and larger, than some of the closer objects. This violates the rules of basic perspective (farther appears smaller). It's a kind of telescoping near the edge of the visible. I suspected this early on while looking at the image, and this was my second, totally naive yet reasonable question: shouldn't objects from the earliest periods of space appear very close to us? Apparently they do.

I came across the first confirmation of my suspicion in this quote:

“In an expanding universe, we see the galaxies near the edge of the visible universe when they were very young nearly 14 billion years ago because it has taken the light nearly 14 billion years to reach us. However, the galaxies were not only young but they were also at that time much closer to us. The faintest galaxies visible with the Hubble Space Telescope were only a few billion light years from us when they emitted their light. This means that very distant galaxies look much larger than you would normally expect as if they were only about 2 or 3 billion light years from us (although they are also very very faint - see Luminosity Distance) [emphasis added].”   http://www.atlasoftheuniverse.com/redshift.html

The same object mentioned before, seen 13.2 billion years in the past, was only 2.9861 billion light years away when it released that light, because back then space was much more condensed. Its image appears to us at a position much closer than the age of its light would indicate, less than a quarter of the light-travel distance. This original distance to us, i.e. its apparent size when the light left, is what astronomers call its “angular size distance”. I think of it as “how close it appears”.

You can play around with astronomical distances at this great website, the Light Travel Time Converter: http://www.astro.ucla.edu/%7Ewright/DlttCalc.html  Just enter a number for light travel time in Gyr (billions of years), click on flat (infinite universe), and the rest of the measurements appear. The apparent distance will be listed as the angular size distance. Try any number between 0 and 13.72 (the age of the universe). Then try 13.7199 and 0.046. Notice that an object with a light travel time of 13.7199 billion years would appear as if it were only 0.045587 Gly (about 45.6 million light years) away, which is closer than an object whose light travel time is only 0.046 Gyr, at 0.045924 Gly.
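The same turnover can be reproduced with astropy (again assuming the library is available; the peak position and values depend on the chosen cosmological parameters). The angular size distance grows with redshift, peaks somewhere around z ≈ 1.6, and then shrinks again, which is exactly the paradox:

```python
# Angular size (angular diameter) distance first rises, then falls with redshift,
# so the most distant objects subtend larger angles than somewhat nearer ones.
import astropy.units as u
from astropy.cosmology import Planck13

for z in (0.5, 1.0, 1.6, 3.0, 6.0, 10.0):
    d_ang_gly = Planck13.angular_diameter_distance(z).to(u.lyr).value / 1e9
    print(f"z = {z:4.1f} -> angular size distance ~ {d_ang_gly:4.1f} Gly")
```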

So an object that is cosmologically very far away can have a larger angular size than an equivalent object that is much closer. It appears closer, not because it IS, but because it WAS (when its light left). The farthest objects gave off their light when they were closer to us, early on, when space was much more condensed, and so they appear larger than other objects that are nearer to us now but whose light left more recently. In theory, if we could actually see the foggy, murky space right after the big bang, the beginning of time itself, it should appear right in front of us. Maybe it does.

When you enter a value of 13.72 Gyr, the time of the big bang, in the Light Travel Time Converter, you get a distance of 0.

To illustrate the notion of enlarged distant objects, I made an image that shows an object as it would appear in a deep field image. Because galaxies change drastically over time, not only in shape but also in size and color, they do not make for a perfectly familiar object in a demonstration. So instead I use an actor's face, someone famous and photographed throughout his entire life: Mickey Rooney (I could just as well have used Shirley Temple or Michael Jackson). Like galaxies, people look younger in older images. Each row shows copies of him, from left to right, at increasing cosmological distances, with an additional distortion added in each successive row.

Row A: Five identical Mickey Rooneys
Row B: Five Mickey Rooneys each further back in time/space
Row C: With added perspective (they look smaller with distance)
Row D: With added red shift (they look redder with distance)
Row E: With added dimming (they look fainter with distance)
Row F: The Deep Field paradox (the farthest look much larger than they should)




And here's a simulation of a deep field image with many Mickey Rooneys at different distances.





Notes:

The big question, as answered on askamathematician:
"The farther we see into the universe, the younger the objects we see. But if we could see the Big Bang, it would be right on top of us. Isn't this paradoxical?"

A list of the superlatives and oddities of the Hubble Ultra Deep Field.
• it took one week of imaging, so it is a very long exposure. This long exposure was necessary to collect enough light to resolve the faintest galaxies.
• it shows the furthest away and oldest objects in the universe ever captured (up to that time).
• it shows the universe as it was between 400 and 800 million years after the big bang, about 12.9 to 13.3 billion years ago. The current consensus for the age of the universe is 13.7 billion years.
• it looks like a picture of stars, but on closer inspection it is almost exclusively galaxies, over 10,000.
• the section of the sky pictured is: what you would see if you joined 8 feet of drinking straws into one tube and looked through it at the sky; or 1/10 of the width of the full moon; or the size of a tennis ball seen from 300 feet away; or a grain of sand held at arm's length; or about a 24-millionth of the whole sky.
• this is not a crowded or unusual section of the sky; the sky is dense with galaxies like this in every direction
• many objects are warm in color; the redder they are, the older and further away they are, due to redshift. Much of their light actually arrives in the infrared spectrum and we can't see it without infrared sensors.
• the farthest imaged objects are young galaxies, therefore smaller and less ordered.
• many galaxies shown appear larger, brighter, or strangely distorted (see “face galaxy” http://msp.warwick.ac.uk/~cpr/paradigm/HUDF-2.pdf) than they should because of gravitational lensing. This lensing acts like a naturally occurring, gravity-based telescope and allows astronomers to see even “deeper” into space. Because these “lenses” are irregular in shape, they can strangely distort and multiply objects. Multiple versions of the same object can even be seen at different periods in its history.
• all the optical rules that apply to this image also apply to any other image, even of much closer objects, just to a lesser degree.


SNL and the international comedy delay


Saturday Night Live is now being licensed, not just imitated, in other countries (see links): Japan, Korea, Spain, Italy, Brazil

Franchising of TV shows is nothing new, but with SNL the issue becomes whether a particular type of American humor, often ironic, political, and topical, can be exported or recreated. In all likelihood, only the format will be copied, while the humor will be the usual local fare, but I hope some of the American style will establish itself.

Interestingly, Saturday Night Live has only recently become known in much of the non-English-speaking world. In Germany, for example, most people I have asked have never heard of SNL. I often have to explain that it is the most important satirical show on American television and that it has spawned many of the Hollywood stars that everyone there does know. SNL is not alone: stand-up comedy, too, has only recently come to Germany, where it is smoothly blending with what they call Kabarett, which can be extremely good. I often cringe, however, when I see the typical US-style stand-up performance there, because it is so brazenly imitative (imagine ”Hey. How you doin’, Heidelberg?" translated into German) and many of the jokes seem simply lifted from US comedians, with the assumption that the locals won't find out. In time this situation will probably improve.

Also, believe it or not, Johnny Carson is practically unknown in Germany. This is probably different in Britain, the Netherlands, etc., where American shows are regularly aired in English, but in Germany the local market is large enough that most shows are dubbed, and it would make little sense to dub Johnny Carson, Letterman, or SNL. They did dub Seinfeld, which really doesn’t work very well. And, because they don’t know Carson, or Conan, or Letterman, someone like Harald Schmidt can get away with an amazingly shameless Letterman rip-off (see clip starting at around 2:55).

So what has caused the delay of this kind of American comedy? I would guess that it is mostly a matter of language and market size. Obviously, English-speaking countries can simply air the American content; as a result, Australians and New Zealanders are extremely aware of US popular culture and its history. Countries with a large enough non-English-speaking local market, like Germany and France, tend to dub, and thereby leave out humor-based shows, because the jokes just don’t translate. Countries that are smaller and generally more multilingual, like the Netherlands, Belgium, and Switzerland, tend to air the shows in English, with or without subtitles. Another factor is the content. Sex and action and violence translate visually, which explains why Johnny Carson is much less known in Germany than David Hasselhoff, and much less known internationally than Schwarzenegger. A third factor may be the internet. In most countries it is now possible to bypass the official television lineup and watch American television online, but it takes a while for the older generations to adapt to the newer medium, and for the younger generations to find out about the older institutions (like Carson, Carlin, Bruce, Pryor, SNL).

This disparity of knowledge points to a few bumps in the image of globalized culture. Americans who travel abroad are often surprised by locals who know more about the US than many Americans do (on some subjects; see Jason Jones's interviews with Iranians), but this erroneously leads to the assumption that they know everything about the US. Instead, there is always a national “portfolio” of international culture. The local still exists. In part, it expresses itself in very unpredictable combinations of knowledge and ignorance of the foreign.

I hope the export of SNL goes well. Of all the products to be exported from the US, I would prefer humor to be one of them, because it can be very subversive, cynical, progressive.

Related:
List of American TV shows based on British TV shows
List of international standup comedians (click on the United States link to get a picture of the difference in pure quantity).
Germany voted the least funny, US the most.





Complete Camera Convergence

We are headed towards a point in technology where spaces are visually replicated in three dimensions in real time, transmitted, and viewed interactively. Photography as we know it will be considered merely a mode of visually and temporally framing a multidimensionality, whether actual or modeled. It will be to representation what a freeze frame is today to video. Currently, we still think of photography as a specific medium with its own culture and mode of production, one that overlaps at the fringes with film, video, scanning, 3d imaging, and mapping. Eventually, however, photography will be a consciously limited subcategory of a completely converged system of optical reproduction.

Imagine a typical (and inevitable) application: an athletic game (soccer or football) is taking place, and viewers at home and in the stadium watch a live broadcast. But they can see it in three dimensions, even have it displayed as a holographic object in front of them. They can also change the angle of view and the level of enlargement. A viewer can zoom in on a particular player and watch the action from any angle, all in real time. What is broadcast will not be a dimensionally limited sequence of images, but a real-time, camera-based 3d model. The entire model, as it changes through time, will be stored locally by viewers, so any interaction with it remains open-ended. Even a recording of the event will be accessible from all angles and points in time. Consequently, replaying the event at a different speed or from a different angle will be a personal choice. Dimensionality will be democratized. Image framing will be democratized.

The actual recording will be done by video cameras, some stationary, some mobile, some crowd-sourced (any member of the audience with a camera can upload a particular angle). There may also be some form of 3d scanning using optical or other wavelengths as an additional way to measure distances and dimensionality. All data will be combined into a single model in motion and then broadcast.

The model will likely have different resolutions in different areas, and some areas (such as niches or spaces out of view) will be missing altogether. I imagine there will be more sophisticated algorithms by then to fill in the unknown areas based on intelligent conjecture. We can also expect heavy use of augmented reality, the overlaying of graphics, texts, and social media, since the model already contains geo-locational data for easy superimposition.

All this is inevitable because the necessary technologies already exist and the trends are pointing towards spatialisation. The only bottlenecks are processing speed and data bandwidths, as usual. If we can assume that technology will accelerate as before, then all the pieces are destined to merge, and probably fairly soon.

We are already getting used to some of these ideas in consumer cameras. Many cameras now can take both still and video images, each of which is still considered a distinct entity. A video frame is lower in resolution, so as to allow for faster capture. Better processors are doing away with that distinction. A video will be merely an image sequence; a photo will be a freeze-frame, technically speaking.

Photos are also becoming dimensional for consumers: many cameras have built-in options for panoramas, which can be viewed as if they were surrounding spaces, or they include two lenses allowing for 3d photo and video (a single-perspective 3d result, not yet a virtual model of space).

3d recording and display are advancing rapidly, turning what used to be a gimmicky fad into a renewed commercial “added value” and a necessity for Hollywood. TV and phone 3d displays that do not require glasses are already on the market. Video holography is also progressing rapidly, but may need a little more time to break through the processing bottleneck.

Video games usually lag behind Hollywood productions in realism, but only because games require real-time rendering, while Hollywood can spend however long it needs to render a single frame. What took hours or days to render a few years ago usually gets reduced to a fraction of a second a short while later, and can then be applied to gaming.

Lagging further behind is the speed of input. Both films and video games use similar methods of input: mathematical modeling (synthesizing), or photography and live video mapped onto geometric surfaces. But photography is also used to create 3d models directly, instead of just becoming a surface texture. Using multiple cameras positioned optimally, software can interpolate dimensionality (see the sketch after this paragraph). This method is being used very effectively by NASA and other space programs to create models of outer space or of the surfaces of planets. The data is gathered very gradually by telescopes and satellites and then creatively and mathematically combined to create “fly-throughs” and landscape “photographs.” Apple's new (beautiful but rather useless and comically flawed) 3d mapping app uses the same principle: photography as the generator of space, as opposed to merely the content of surfaces projected onto mathematical models, which is how Google's 3d maps are produced. Eventually, the input and its recombination into models could be accelerated to the point of being live.
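To make the principle concrete, here is a minimal, hedged sketch of the core geometric step behind multi-camera reconstruction: triangulating one 3d point from its position in two calibrated views. It uses OpenCV; the camera matrices and pixel coordinates are made-up example values, not data from any of the systems mentioned above.

```python
# Triangulation sketch: recover a 3D point from two calibrated camera views.
import numpy as np
import cv2

# A shared intrinsic matrix (focal length 800 px, image center at 320, 240), assumed.
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])

# Projection matrices: camera 1 at the origin, camera 2 shifted 1 unit along x.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

# Pixel coordinates of the same scene point as seen by each camera (2xN arrays).
pts1 = np.array([[400.], [300.]])
pts2 = np.array([[240.], [300.]])

# Triangulate: returns homogeneous 4xN coordinates; divide by w to get 3D points.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()
print("reconstructed 3D point:", X)   # for these values, roughly (0.5, 0.375, 5.0)
```

Real multi-view stereo systems do this for millions of matched points across many photos, which is essentially what the Photosynth and Multi-View Stereo projects linked below are doing.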

Notice how we have to use quotation marks to distinguish a rendered viewpoint from an “actual” photograph of a scene. The veracity of a scientific, model-based photograph is actually not inferior, because the original data comes from measurements. It is not a photograph of just a model; it is a photo of a model based on photographs (and other data). In many ways such an image contains more information, i.e. more “truth.” What is missing, perhaps, is the original momentary intent, the conscious framing in time, the soul of the photographer. But even that could be argued about. As the latest images from the European space program show, there is such a thing as the soul of the “post-photographer.” He or she wasn’t at the location then, but has experienced it virtually and internally, has been affected by it emotionally, and this results in images that also reveal humanity.

It seems the idea of photography will become something similar to what in film production is called the “director of photography”: it will denote the framing in time and space, the design of the image, not a residue of the moment of being there, feeling it, and taking the shot. The latter notion of photography could become something not necessarily retrograde, but certainly specialized. Perhaps it will be called moment-based photography (momentography) or some such.

In the meantime it is worth considering what such a convergence could mean for historical documentation. We are already recording the present in ways that could one day be reconstructed to create spatial models. Just as we can colorize old black and white films, and turn traditional movies into 3d (up to a point), it will be easier in the future to render spaces from optical recordings. So, it is actually urgent that we think about how we record the present for posterity, to prepare it for the proper dimensioning by future historians.

links:
European Space Agency images of Mars, composited from multiple images into models, then reframed
http://www.esa.int/esaMI/Mars_Express/SEMWF0474OD_1.html
http://www.esa.int/esa-mmg/mmg.pl?b=b&type=I&mission=Mars%20Express&start=1

Holographic video in development:
http://web.mit.edu/newsoffice/2011/video-holography-0124.html
http://www.youtube.com/watch?v=Y-P1zZAcPuw

Microsoft photosynth uses public images to create dimensional spaces/views of the original subjects
http://photosynth.net

Multi-View Stereo for Community Photo Collections
http://grail.cs.washington.edu/projects/mvscpc/

Judeo-Christianity is voluntary serfdom to a hypothetical king.

How could it be that in the US, where slavery and aristocracy are officially rejected, and where freedom and individuality are more possible than in most of the world, so many people voluntarily recreate the demeaning relationship of king and subject, master and slave?

It's not just disturbing that people are willing to see themselves in a relationship with someone who is likely imaginary, but that they choose a relationship of self-enslavement. How shameful.

Does this mean human beings desire to be dominated, that happiness comes from submission to hierarchy?


Why did the chicken cross the road?

Because it's afraid of the car.

I have seen this many times, because we've had chickens. Chickens will cross the road for two reasons. Either they have some urge to cross, such as looking for food on the other side, or they hear a car coming, get startled, and start running towards their own flock, which might be on the other side. They feel safer with the other chickens and are willing to risk crossing the road to be with their own group. Safety in numbers overrides the risks of crossing a road. I think the question of why the chicken crossed the road came about because it happens so frequently: a chicken will cross the road right in front of your car and you wonder why it would do that (squirrels do it too).

A brief history of modern life

Phase 1: Not doing everything yourself anymore:
Electricity, running hot water, the oven, the refrigerator, the supermarket, processed foods, the broker, the insurance company, the travel agent, the dating service, the government …

Phase 2: Not doing everything yourself anymore, while sitting down:
radio, television, the telephone, the automobile, the train, the airplane, the drive-through...

Phase 3: Doing everything yourself again, while sitting down:
Online-banking, -investing, -lobbying, -publishing, -travel planning, -reviewing, -defining, -mapping, -shopping, -selling, -dating …

Never buy more than fits on your shelves and don’t buy more shelves.

Americans call WWII “the good war.” Europeans call it “the last war.” Neither is right.

The law of geographic perspective

In physical perspective, objects further away look smaller, sit closer together, and are harder to distinguish. There is an equivalent in cultural geography that seems to follow the same principle, the law of geo-cultural perspective: cultures further away seem almost indistinguishable from one another. Many people can't quite tell the difference between the Germans and the Dutch, the Hutu and the Tutsi, Peru and Ecuador, China and Korea, Vietnam and Cambodia, Australia and New Zealand, Sweden and Norway. Sicily is mistaken for Italian, Bavaria is mistaken for German. In the US, all northern European folklore gets mixed up as polka (see Lawrence Welk). To the rest of the world, the US seems like one country, one culture; internally it seems like a world so diverse and complex as to eclipse the importance of the rest of the world.

Internal complexity distracts people from curiosity about the rest of the world. External complexity is considered irritating. Lack of curiosity about and irritation concerning other countries is externally labeled as ignorance.