Chapter 2: Image Intensities

Now that we have seen how to choose and set optimal resolutions for a microscopy experiment, we should discuss the intensity of the image that is acquired.

A word on resolution and contrast

It makes sense to discuss this now because the difference in intensity between the various regions of an image of a sample is called its ‘contrast’.

The contrast of the image is subtly linked to the resolution of the image you are getting. Remember that resolution is the ability to distinguish detailed structures.

Think of it this way: when you are blinded by the headlights of a car at night, everything turns white and you lose the ability to see any details of the road or the car in front of you. Likewise, if you step into a pitch-black room, everything is black and you can’t distinguish details either. The point is that resolution is about telling where the light is coming from. If there is too much light everywhere (or at least all over your sensor), there is no way to tell where it is coming from; it is simply coming from everywhere. Similarly, if no light is coming in at all, the question of where it is coming from is meaningless.

Thus, contrast affects resolution quite intimately, even though the two are rather different concepts.

Lasers

A confocal microscope uses a variety of lasers to excite the sample. As of today, there are several gas-based lasers: Argon, HeNe, HeKr, etc., and many solid-state lasers: UV diode, NIR, Titanium-Sapphire, etc. In general, gas lasers are slowly being replaced by solid-state lasers. Whatever the type of laser, it has a certain power output.
This power output often seems like a very small number: for example, one of the most powerful lasers on microscopes is a 3 milliwatt Ti-Sapphire laser. That is 20,000 times less power than an average 60 W light bulb in a room.

But we have to remember that the microscope focuses this laser down to a tiny spot on the sample. Therefore, it is not the power output but the power density at the sample that matters.

Our 1.4 NA oil objective, with its resolution of 213 nm, focuses this beam into a spot only a few hundred nanometres across, creating a power density at the sample that is billions of times greater than the roughly 5 watts per square metre a light bulb produces in a room. Now you see that the laser actually shines an enormous amount of light onto the sample, and one rarely needs this much power. Typically, microscope lasers are operated at less than 10% of their maximum output, and the acquisition software lets you control how much laser power is used.
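If you want to check the arithmetic, here is a small back-of-the-envelope sketch in Python. The 3 mW power, the 213 nm spot size and the 5 watts per square metre for the bulb are the figures used above; treating the focal spot as a circle of 213 nm radius is my own simplifying assumption, so take the result as an order-of-magnitude estimate only.

```python
# Back-of-the-envelope comparison of power densities:
# a focused confocal laser spot versus an ordinary light bulb.
# The exact spot-area definition is an assumption; results are order-of-magnitude.
import math

laser_power_W = 3e-3       # 3 mW laser, as in the example above
spot_radius_m = 213e-9     # ~213 nm focal spot (1.4 NA oil objective)
spot_area_m2 = math.pi * spot_radius_m ** 2

laser_density = laser_power_W / spot_area_m2   # W per square metre at the focus
bulb_density = 5.0                             # ~5 W/m^2 from a 60 W bulb in a room

print(f"Laser at focus : {laser_density:.2e} W/m^2")
print(f"Light bulb     : {bulb_density:.2e} W/m^2")
print(f"Ratio          : {laser_density / bulb_density:.1e}x")
# -> the focused laser is several billion times more intense than room lighting,
#    which is why it is normally run at a few percent of its maximum power.
```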

Too much laser power and the sample is destroyed; too little, and the sample is invisible. In general, one should use the minimum laser power at which the interesting part of the sample is just visible.

Detector Sensitivity

The light coming from the sample is collected on a detector, which generates an electric signal in response to the light falling on it. There is usually some form of amplification applied to this electric signal.

The amount of amplification needed depends on the sensitivity of the detector, and so the kind of amplification applied differs with the type of detector. Amplification makes the detector more sensitive, but there is the age-old battle between sensitivity and accuracy: as the detector becomes more sensitive to the signal, it also becomes more sensitive to noise. We should keep this in mind.

We will not go into the mechanics of the detector, only into what its parameters do when you are imaging a sample.

  1. Single Pixel Detectors have good sensitivity, controlled through 3 separate parameters that a user can adjust. These parameters directly affect the signal coming out of the detector: two of them make the detector more sensitive, while one makes it less sensitive.

    HV

    The HV parameter changes the voltage of the electrical signal. This makes it a nice strong signal that can be measured easily by the electrical circuitry of the detector. A higher HV ensures that none of the electrical signal is lost and all of it is robustly measured. That’s the good part. The bad part is that a higher HV also ensures that any noise and fluctuations in the electrical parts of the system are robustly measured too.

    So as you increase the HV, the image at first looks better and brighter, and then at some point starts getting noisier and noisier: random bright pixels begin to appear in it. It’s best to set this value at a reasonable level and then not tamper with it. In particular, do not change the HV between images taken as part of the same experiment, since this would change the amount of noise in each image and make any later quantification very hard.

    Gain

    The Gain parameter is the more commonly encountered one, and it changes the current of the electrical signal. This means that the detector generates large currents even for a small amount of light, which makes sure that all the light coming from the sample is measured properly. However, it also means that small amounts of stray light, unwanted reflections from the sample and so on are measured as well. Thus, a higher gain ensures that all the light from the sample reaching the detector is measured. Unfortunately, it also means that pesky light NOT coming from the sample is measured too. You will see this effect as the background of your image getting brighter and brighter as you increase the gain.

    This lowers the contrast: the difference between the object of interest and the background gets smaller and smaller, and you move closer to the ‘headlights of a car’ situation described above.

    Offset

    The Offset parameter lets you choose the minimum current that the detector signal must have in order to be considered a valid signal. A higher offset means that any low signals from the detector are ignored; it acts like a subtraction applied to the signal. You can see how the offset can be used to correct some of the problems that a high Gain setting creates. If your gain setting has produced a high background of, say, 50 units, you might set the offset to 50 as well, making the background appear as 50 - 50 = 0. Wonderful, you have no background.

    In general, one sets the offset so that the background reads 0 at the chosen gain. Unfortunately, setting the offset too high means that any relatively dim structures in your sample will also be ignored! (A small numerical sketch of this gain-and-offset interplay follows the summary list below.)

Note that

  1. The Good – HV increases the voltage of the signal, Gain increases the current of the signal.
  2. The Bad – HV increases electrical noise, creating speckly, grainy images; Gain increases light noise, creating a higher background intensity and lower contrast.
  3. The Good – The Offset can compensate for the background introduced by increasing the Gain.
  4. The Bad – Set too high, the Offset will also cause dim structures in your sample to become invisible.
  5. The Ugly – The Offset cannot do anything about the noise introduced by increasing the HV. There are ways to deal with that noise, but in general these are more demanding and you will have to pay the price in reduced resolution or sample destruction.
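To make this interplay concrete, here is a minimal toy simulation in Python. All the numbers (the background of 50 units, the gain, noise and offset values) are invented purely for illustration, and the model is far simpler than real detector physics; it only mimics the behaviour described above.

```python
# Toy model of how Gain, HV noise and Offset shape the recorded signal.
# All values are arbitrary illustration numbers, not real detector physics.
import random

def record_pixel(light, gain=1.0, hv_noise=0.0, offset=0.0, max_level=4095):
    background = 50                               # stray light / reflections
    signal = gain * (light + background)          # gain amplifies sample AND background
    signal += random.gauss(0, hv_noise)           # HV-driven electrical noise (random speckle)
    signal -= offset                              # offset subtracts a constant from everything
    return min(max(round(signal), 0), max_level)  # clip to the detector output range

random.seed(0)
dim_structure, bright_structure = 10, 200         # "true" light from two structures

for gain, hv_noise, offset in [(1, 0, 0), (4, 0, 0), (4, 0, 200), (4, 0, 240), (4, 60, 200)]:
    bg  = record_pixel(0, gain, hv_noise, offset)
    dim = record_pixel(dim_structure, gain, hv_noise, offset)
    brt = record_pixel(bright_structure, gain, hv_noise, offset)
    print(f"gain={gain} hv_noise={hv_noise} offset={offset}: "
          f"background={bg} dim={dim} bright={brt}")

# Raising the gain from 1 to 4 brightens everything, including the background.
# An offset of 200 pushes the background back to 0 while keeping the dim structure;
# an offset of 240 wipes out the dim structure as well.
# The hv_noise case shows random fluctuations that no offset can remove.
```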

Recording (or Image) Intensity

That’s the hardware side of things. Just as with resolution, the final question we have to deal with is how we record the intensity signals coming from the detector.

We have talked a little about the contrast of a picture, and everyone is familiar with the idea of its brightness. In the jargon of signal processing, both are part of the same concept: dynamic range. The dynamic range is the range of values that the electrical signal coming from the detector can be recorded as.

Let me explain. Say you have a dim light source and a bright light source, and their signals are recorded simply as ‘Low’ and ‘High’. You then have two possible levels: a dynamic range of 2. Now if there were a third, even brighter light source, you would be out of luck, since you can only assign ‘High’ or ‘Low’ as a value. The bright source and the brightest source would both get the value ‘High’, and they would be indistinguishable when you looked at the signal later.

Obviously, it’s important for the image to have a large dynamic range. By convention, the dynamic range on microscopes is typically 256 levels (0-255), 4096 levels (0-4095) or 65536 levels (0-65535). There is a reason these particular numbers are chosen, but we will discuss that later.

Let’s say you set the microscope to a dynamic range of 4096. If you really want to use all 4096 levels, you have to set your laser, HV, Gain and Offset such that the lowest intensity in your sample reads 0 and the highest reads 4095. That way, everything in between is spread across the full 4096-level range, and you can distinguish every tiny change in intensity. And since the ability to distinguish differences in intensity is what we called contrast, you get the best contrast in the image when you use the full dynamic range.

I’ll make this point very clear with an absurd example: what happens if I set up the image so that the highest value is 4095 and the lowest value is 4094? That’s not useful; there are only two levels of dynamic range, and the entire image contains pixels that are either ‘bright’ or ‘slightly less bright’. Essentially, just ‘high’ or ‘low’. Not going to be a very useful image at all, eh?
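Here is a small sketch of the same idea in Python: the same set of intensities is recorded with different numbers of levels. The intensity values and level counts are chosen purely for illustration.

```python
# Illustration of dynamic range: recording the same intensities
# with different numbers of available levels. Values are arbitrary.

def quantise(value, levels, signal_max=1.0):
    """Map a signal in [0, signal_max] onto integer levels 0 .. levels-1."""
    value = min(max(value, 0.0), signal_max)
    return round(value / signal_max * (levels - 1))

intensities = {"dim": 0.20, "bright": 0.70, "brightest": 0.75}

for levels in (2, 256, 4096):
    recorded = {name: quantise(v, levels) for name, v in intensities.items()}
    print(f"{levels:>5} levels: {recorded}")

# With only 2 levels, 'bright' and 'brightest' both record as 1 and become
# indistinguishable; with 256 or 4096 levels the small difference survives.
```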

Bit-depth

This is a word that comes up often when talking about images, so I’ll attempt to explain what it means. We just chose the dynamic range of the image above, which means that every pixel can have a value of, say, 0-4095. The computer has to somehow store this value for that pixel. We know that any number can be expressed as the sum of its decimal places:

So 3547 = 3×1000 + 5×100 + 4×10 + 7×1

1000, 100, 10 and 1 are all powers of 10 – 10^3, 10^2, 10^1, and 10^0. It works like this because we count in base 10. It has to do with the fact that we use 10 independent symbols (0,1,2,3,4,5,6,7,8,9) to represent all possible numbers.

A computer doesn’t have that luxury – it’s made of millions of switches. And a switch can either be on or off, representing 1 and 0 respectively. So the computer can only use 2 symbols to represent all possible numbers – in other words, it works in base 2 or binary.

Numbers in binary work the same way as decimal numbers. The number 13, for example, is written as:

13 = 1×8 + 1×4 + 0×2 + 1×1 = 1101

Obviously, 8, 4, 2 and 1 are now powers of 2: 2^3, 2^2, 2^1 and 2^0. Each of these ‘places’ can be represented by a switch in the computer’s electronics. Such a switch holds one piece of information and is called a ‘bit’.

The number of bits you assign to represent a pixel determines the highest value that can be stored in it. If you have 4 bits, the highest value that can be stored is 1111, which is 15, and the lowest value is 0, giving a total of 16 values. The number of values that can be stored is simply 2^(number of bits), and the number of bits itself is called the ‘bit-depth’.

Now consider an image. An 8-bit image means that each pixel has been assigned 8 bits in the computer’s memory, so the pixel can take a highest possible value of 11111111 in binary, which happens to be 255 in decimal (2^8 = 256 possible values, i.e. 0 to 255).

By the same logic, 12-bit images can take a highest value of 4095 and 16-bit images a highest value of 65535.
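If you like, these numbers are easy to verify with a couple of lines of Python; the bit-depths shown are simply the ones mentioned above.

```python
# Bit-depth arithmetic: the number of storable values and the highest value
# for the bit-depths commonly used in microscopy.
for bits in (4, 8, 12, 16):
    n_values = 2 ** bits
    highest = n_values - 1
    print(f"{bits:>2}-bit: {n_values:>6} values, 0 to {highest}"
          f" (highest in binary: {highest:0{bits}b})")

# The decimal and binary decompositions from the text:
assert 3547 == 3 * 1000 + 5 * 100 + 4 * 10 + 7 * 1
assert 13 == 1 * 8 + 1 * 4 + 0 * 2 + 1 * 1
print(bin(13))   # -> 0b1101
```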

Note: 32-bit images are a special case. They can hold 2^32 = 4,294,967,296 possible values, but these are usually stored as floating-point numbers rather than as a simple range from 0 to 4,294,967,295. The values can be negative or fractional, and a few bit patterns are reserved for the special values Inf (infinity), -Inf (negative infinity) and NaN (‘not a number’). We will discuss these when we come to them.
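As a tiny illustration of those special values (this assumes the NumPy library, which is commonly used to hold images as arrays in Python):

```python
# A 32-bit floating-point "image" holding ordinary and special pixel values.
import numpy as np

pixels = np.array([0.0, 4095.0, -1.5, np.inf, -np.inf, np.nan], dtype=np.float32)
print(pixels.dtype)      # float32, i.e. 32 bits per pixel
print(np.isinf(pixels))  # flags the Inf and -Inf entries
print(np.isnan(pixels))  # flags the NaN entry
```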

It becomes clear now that bit-depth is pretty much the first step for deciding what your dynamic range will end up being. Typically, a bit-depth of 12-bits is chosen when acquiring images.

Saturation

Conclusions and Practical Tips