Peter Meilstrup

“Contrast normalization” is the start of the problems. Patterns of light impinge on the retina and go through a couple of processing steps before anything that would count as “seeing” begins to happen. Contrast normalization means that the responses of cells in primary visual cortex appear to be scaled to a range determined by the activity of neighboring cells. So the size of the signal a light generates in the brain depends less on the brightness of the light itself than on its brightness compared to its immediate surroundings. Increasing the brightness of a point that is already the most conspicuous thing in its immediate vicinity does not make it more conspicuous, but it does damp down the response to other things nearby. With a very bright light, you see the light, surrounded by a region in which you are not very good at detecting anything. And this region gets larger the brighter the light is.
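If you want to see the shape of that arithmetic, here’s a toy sketch of divisive normalization, the generic textbook form of the idea; the numbers and the pooling are made up for illustration, not a model of the actual cortical computation.

```python
import numpy as np

def normalized_response(drive, sigma=1.0):
    """Divisive normalization: each unit's response is its own drive
    divided by a constant plus the pooled drive of its neighborhood.
    Generic textbook form with invented numbers, not the real cortical model."""
    pool = drive.sum()
    return drive / (sigma + pool)

# A dim scene: a small light and some nearby detail of similar strength.
dim    = np.array([2.0, 1.0, 1.0, 1.0])    # [light, neighbor, neighbor, neighbor]
# The same scene, but now the light is extremely bright.
bright = np.array([200.0, 1.0, 1.0, 1.0])

print(normalized_response(dim).round(3))     # neighbors keep a usable share of the response
print(normalized_response(bright).round(3))  # neighbors' responses collapse toward zero
```

The bright point’s own response saturates, while the responses to everything near it get pushed toward zero, which is the damping-down described above.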

“Crowding” was originally discovered in studies of reading; it’s in large part the subject of my research, so I’m probably going to be too verbose about it. Say you put a letter in the periphery of your vision and make it large enough to read. You can do this fine, but if you place two or three letters of the same size next to each other at that location, you can no longer tell what the letters are; the corners and edges belonging to the individual letters can’t get sorted out. There is what Dennis Pelli calls an “integration field” (its physiological mechanism is unclear, but it probably operates after contrast normalization), which has a characteristic size in terms of visual angle. It’s within these integration fields that simple image features (like the corners and edges of letters) are combined into larger groupings (like the letters themselves), and if equally salient stimuli land in the same integration field, things tend to get jumbled.

The integration fields are sized roughly in proportion to their distance from the center of fixation, so you deal with cluttered scenes by turning your eyes, moving the center of gaze (where the integration fields are smallest) toward different locations in turn, which is time-consuming.
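This scaling is often summarized as Bouma’s rule of thumb: the critical spacing below which crowding sets in is roughly half the eccentricity (the constant varies a fair bit across observers, tasks, and directions; 0.5 is just the conventional ballpark). A minimal sketch:

```python
def critical_spacing(eccentricity_deg, bouma=0.5):
    """Rough crowding-zone size (degrees of visual angle) at a given
    eccentricity, using Bouma's rule of thumb: spacing ~ 0.5 * eccentricity.
    The constant (~0.4-0.7) varies with observer, task, and direction."""
    return bouma * eccentricity_deg

for ecc in (2, 5, 10, 20):
    print(f"at {ecc:>2} deg eccentricity, features closer than "
          f"~{critical_spacing(ecc):.1f} deg apart tend to crowd")
```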

Take, for instance, looking at a clock: you can probably fixate the center of an analog clock face and read any of the numbers around the rim. That’s about how big the integration fields are: if the numbers were packed any closer together, clocks would be slower to read. (So 24-hour dials have never caught on.)
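A back-of-the-envelope check, assuming the roughly 0.5 × eccentricity rule above: fixating the center of a dial puts the numerals at an eccentricity equal to the dial radius, and adjacent numerals are separated by a chord of the rim.

```python
import math

def rim_spacing_over_radius(n_numerals):
    """Center-to-center spacing of adjacent numerals on a circular dial,
    as a fraction of the dial radius (which is also the eccentricity when
    you fixate the center): chord length = 2 * sin(pi / n)."""
    return 2 * math.sin(math.pi / n_numerals)

print(rim_spacing_over_radius(12))  # ~0.52 -- right around the ~0.5 crowding limit
print(rim_spacing_over_radius(24))  # ~0.26 -- well inside the crowding zone
```

Twelve numerals land right at the edge of the crowding zone; twenty-four land well inside it.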

Now it turns out that crowding is a very general phenomenon, and it crops up any time you have to combine or compare simple visual features, whether for recognizing an object or for judging the relations between separate objects; it applies to motion as well as shape, and in particular to relative judgments of position and motion.

So far I’ve described how crowding hurts you if objects are packed too close together in peripheral vision, but it cuts the other way as well: if image features that need to be combined, compared, or put in relation to make sense of a scene are separated by more than that critical distance, then you can’t really put them together into a coherent whole.

Take a bright light on a bike. What do you need if you’re driving a car and have to steer around a bike? You need to detect it, to see that it’s there, but that’s the easy part. You also need to see its position (in relation to the edge of the road or lane); which direction it’s moving (with respect to the background scene); how wide it is (the rider’s shoulders, in relation to your car); and so on. All of these relative spatial judgments require not only detecting the light but also detecting other things and judging their relation, a job that has to be done inside the space of one integration field.

So here’s the entire problem: put contrast normalization together with the integration fields, and you see that these relative spatial judgments become very difficult when, because of the extreme luminous intensity of the light, everything within the integration field other than the light is contrast-normalized away. There’s nothing detectable left around the light for drivers to make any spatial judgments about your bike with.
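To see the two effects interact, here’s the earlier toy normalization applied to everything that falls inside a single integration field around the light; all of the numbers, including the “detectable” threshold, are invented purely to illustrate the logic.

```python
import numpy as np

def pooled_responses(drive, sigma=1.0):
    """The same divisive-normalization toy as above, applied to everything
    that falls within a single integration field."""
    return drive / (sigma + drive.sum())

# Things a driver has to relate to the taillight, all inside one integration
# field in peripheral vision: [taillight, lane edge, rider's shoulder, background].
modest  = np.array([5.0,   1.0, 1.0, 0.5])
glaring = np.array([500.0, 1.0, 1.0, 0.5])

threshold = 0.02  # arbitrary "detectable" cutoff for the sketch
for scene in (modest, glaring):
    r = pooled_responses(scene)
    print(r.round(3), "-> context still detectable:", bool((r[1:] > threshold).all()))
```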

What are some ways to increase conspicuity, or the range at which you’re visible, without invoking contrast normalization? One easy answer is to emit the light from a larger surface area, so that the luminance doesn’t have to be so high. On a bike, the problem is that there simply isn’t much surface area available to display a light from. But given the excess of lumens that LEDs and big batteries provide nowadays, you can take a different tack: blast your light down and out, onto the road surface and onto the surrounding street furniture, rather than directly into drivers’ eyes. This makes your light occupy a larger region of the scene, while also illuminating the objects in your immediate vicinity that drivers need in order to judge your position accurately.
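The physics behind the first option is just that the luminance of a uniform flat emitter, viewed head-on, is its luminous intensity divided by its emitting area, so delivering the same intensity from a larger area means a proportionally lower luminance (a rough sketch with made-up areas):

```python
def luminance_cd_per_m2(intensity_cd, emitting_area_m2):
    """Luminance of a uniform flat emitter viewed head-on:
    luminous intensity (cd) divided by emitting area (m^2)."""
    return intensity_cd / emitting_area_m2

# Same on-axis intensity (same simple detectability at a distance),
# emitted from a point-like LED vs. a larger lit-up patch of road or fender:
tiny_led  = luminance_cd_per_m2(100, 0.0001)  # ~1 cm^2 emitter
big_patch = luminance_cd_per_m2(100, 0.01)    # ~100 cm^2 of illuminated surface
print(f"{tiny_led:,.0f} cd/m^2 vs {big_patch:,.0f} cd/m^2")
```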

You’ll notice that this downward, ground-illuminating angle is what Dinotte, makers of hyperpowered taillights, originally intended: they provided a mount set at 90 degrees to the seatpost, not level with the ground. Well, they used to. Now it’s an adjustable-angle mount, and 99% of people will point it straight back. Argh!