
### What's the difference between MTF (Modulation Transfer Function) and CTF (Contrast Transfer Function)?

CTF expresses the lens contrast response when a "square pattern" (chessboard-style) target is imaged; this parameter is the most useful for assessing edge sharpness in measurement applications. MTF, on the other hand, is the contrast response achieved when imaging a sinusoidal pattern whose grey levels range from 0 to 255; this value is harder to convert into a useful parameter for machine vision applications.

### Does telecentricity mean that "the inner or outer object walls completely disappear"?

Only to a certain degree: even with a "perfect" telecentric lens, only half of the ray cone coming from an object edge actually reaches the detector. As a result, the inner and outer edges of an object will show some blurring. This effect can be reduced, or even cancelled, by means of collimated sources.

### Distortion in telecentric lenses

Since telecentric lenses are real-world objects, they show some residual distortion, which can affect measurement accuracy. Distortion is calculated as the percent difference between the real and expected image height and can be approximated by a second-order polynomial.

If we define Ra as the real (measured) radial distance of an image point from the image centre and Re as its expected (distortion-free) distance, the distortion is computed as a function of Ra:

dist(Ra) = (Ra - Re)/Ra = c*Ra^2 + b*Ra + a

where a, b and c are constants that define the distortion curve behaviour; note that "a" is usually zero, as the distortion is usually zero at the image centre. In some cases, a third-order polynomial may be required to obtain a perfect fit of the curve.
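As a sketch of how the polynomial model can be used in practice, the coefficients can be fitted from calibration data with NumPy; all the Ra/Re values below are invented for illustration only:

```python
import numpy as np

# Hypothetical calibration data (values invented for illustration):
# Ra = real (measured) radial distances of grid points, in mm,
# Re = their expected, distortion-free radial distances.
Ra = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Re = np.array([0.99997, 1.99980, 2.99937, 3.99856, 4.99725])

# Fractional distortion as defined above: dist(Ra) = (Ra - Re) / Ra
dist = (Ra - Re) / Ra

# Fit the second-order polynomial c*Ra^2 + b*Ra + a
# (np.polyfit returns the highest-order coefficient first).
c, b, a = np.polyfit(Ra, dist, 2)

def undistort(ra):
    """Correct a measured radial distance back to its expected value."""
    return ra * (1.0 - (c * ra**2 + b * ra + a))

print(undistort(5.0))  # recovers the expected distance, ~4.99725
```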

In addition to radial distortion, trapezoidal distortion must also be taken into account. This effect can be thought of as a perspective error due to misalignment between optical and mechanical components; its consequence is to transform parallel lines in object space into convergent (or divergent) lines in image space. This effect, also known as "keystone" or "thin prism", can easily be corrected by common algorithms that compute the point where the converging bundles of lines cross each other.

An interesting aspect is that radial and trapezoidal distortion are two completely different physical phenomena; hence they can be mathematically corrected by means of two independent space transforms, which can also be applied one after the other.

An alternative (or additional) approach is to correct both distortions locally and at once: the image of a grid pattern is used to determine the distortion error amount and its orientation zone by zone. The final result is a vector field in which each vector, associated with a specific image zone, defines the correction to be applied to the x,y coordinate measurements within that zone.
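A minimal sketch of this grid-based approach, with all zone centres and correction vectors invented for illustration (a real calibration would produce a much denser grid):

```python
import numpy as np

# Sketch of the grid-based correction (all numbers are invented):
# calibrating with a grid pattern yields, for each zone centre, a
# correction vector (dx, dy) to be added to coordinates measured
# within that zone.
zone_centres = np.array([[10.0, 10.0], [10.0, 30.0],
                         [30.0, 10.0], [30.0, 30.0]])      # mm
corrections = np.array([[0.002, -0.001], [0.001, 0.000],
                        [-0.001, 0.002], [0.000, 0.001]])  # mm

def correct(point):
    """Apply the correction vector of the nearest calibrated zone."""
    point = np.asarray(point, dtype=float)
    i = np.argmin(np.linalg.norm(zone_centres - point, axis=1))
    return point + corrections[i]

print(correct([11.0, 9.0]))  # nearest zone centre is (10, 10)
```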

### Why is "telecentric range" a misleading concept?

Some suppliers quote a supposed "telecentric range": within a certain depth range (expressed in mm), the maximum error would stay within a certain amount (usually expressed in microns). This parameter is somewhat meaningless from an optical point of view, and possibly misleading. Incoming ray cones show a maximum inclination, expressed in degrees, which depends on the lens telecentricity; since rays travel in straight lines, one could say that "all the space is telecentric"! We guarantee our lenses to have a maximum telecentric slope of 0.1°, i.e. 0.0017 rad (1.7 mrad), although the typical departure from perfect telecentricity measured in our tests is about half of that, around 0.0008 rad (0.8 mrad). This means that the maximum error for a displacement of 1 mm would be less than 1 micron.
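The error figures above follow directly from the slope; a small illustrative helper (not an OE formula) makes the arithmetic explicit:

```python
import math

# Measurement error caused by a residual telecentric slope: when the
# object is displaced along the optical axis, the apparent edge
# position shifts by roughly displacement * tan(slope).
# This helper is illustrative, not an OE formula.
def telecentric_error_um(displacement_mm, slope_rad):
    return displacement_mm * math.tan(slope_rad) * 1000.0  # in microns

# Guaranteed worst case: 0.1 deg slope over a 1 mm displacement
print(telecentric_error_um(1.0, math.radians(0.1)))  # ~1.7 micron
# Typical measured slope: 0.0008 rad over the same displacement
print(telecentric_error_um(1.0, 0.0008))             # ~0.8 micron
```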

A crucial difference between us and our competitors is that, while they just state a value for telecentricity, we do measure that parameter with specific test instrumentation and we certify each lens telecentricity with a test report.

### How do I assemble a large telecentric lens?

Large telecentric lenses, such as the TCxx120, TCxx144, TCxx192 and TCxx240 models, integrate a large mounting flange built into the mechanics.
Once the camera is mounted, the rotation phase of the camera can be adjusted to align the FOV with the detector sides, following either of these two options:

• Building a holding flange with elliptical holes: the lens+camera phase can be adjusted by rotating the entire assembly; the lens can then be locked by inserting screws into the through-holes on the lens flange.
• Building a holding flange where the lens flange is free to rotate: once the correct phase has been found, the two flanges can be pushed together and held tight by means of screws.

For the TCxx240, a special C-mount rotating adaptor can be pre-assembled on the lens upon request. When the right phase has been found, the assembly can be locked by means of three radial screws.

### Are the lenses used in the LTCL series the same as those in the matching telecentric lenses? For example, does an LTCL120 feature the same lenses installed in a TC12120?

No, there are differences between telecentric lenses and illuminators, since they are also intended to perform differently: telecentric lenses accept "telecentric cones", while telecentric illuminators (also known as collimated illuminators) basically project a parallel beam of rays.

### Is depth of field an available spec for the LTCL series?

As LTCL illuminators are non-imaging components, depth of field is essentially meaningless for them: collimated illuminators must always be used in combination with a telecentric lens, hence depth of field and other optical specs cannot be listed as they are for common stand-alone illuminators.

### Why is the divergence of the collimated sources not listed?

Since LTCL illuminators have to be used in combination with telecentric lenses, the aperture of the optical system depends only on the telecentric lens aperture stop; for this reason, the divergence of the collimated source is not a relevant value. The divergence of the illuminators ranges from 0.1° to 1°; the collimation degree of LED sources is lower than that of laser beam collimators which, in turn, could not be used effectively in machine vision applications because of diffraction effects, which would seriously affect the measurement accuracy.

### When coupling an LTCL illuminator with its compatible telecentric lens, how does the field depth of the telecentric lens change?

The use of a collimated source increases the natural field depth of a telecentric lens by approximately 20-30%, though this also depends on other factors such as the lens type, the light colour, the pixel size and the method used to compute field depth. Since the illuminator's output numerical aperture (NA) is lower than the telecentric lens object NA, the optical system behaves, in terms of field depth, as if the lens had the same NA as the illuminator, while maintaining the image resolution of the actual telecentric lens NA.

### Why are illumination uniformity plots not included in the LTCL series documentation?

The uniformity of the source alone is not meaningful: what matters is the image brightness uniformity provided by the combination of the illuminator with a telecentric lens. The illumination homogeneity ensured by this type of combination is typically within +/- 10%.

### Why is GREEN light recommended for these products?

All lenses operating in the visible range, including OE Telecentric lenses, are achromatized through the whole VIS spectrum. However, parameters related to the lens distortion and telecentricity are typically optimized for the wavelengths at the centre of the VIS range, that is, green light. Moreover, the resolution tends to be better in the green range, where the achromatization is almost perfect.

"Green" is also better than "Red" because a shorter wavelength raises the diffraction-limited cut-off frequency of the lens and hence the maximum achievable resolution.

### Why don't OE® Telecentric lenses integrate an iris?

Our TC lenses don't feature an iris, but we can easily adjust the aperture upon request prior to shipping the lens, without any additional cost or delay for the customer. The reasons why our lenses don't feature an iris are so many that the proper question would be "why do other manufacturers integrate irises?":

• adding an iris makes a lens more expensive because of a feature that would only be used once or twice throughout the product's life
• inserting an iris makes the mechanics less precise and the optical alignment much worse
• we would be unable to test the lenses at the same aperture that the customer would be using
• iris position is much less precise than a metal sheet aperture: this strongly affects telecentricity
• the iris geometry is polygonal, not circular: this changes the inclination of the main rays across the FOV, thus affecting the lens distortion and resolution
• irises cannot be as well centered as fixed, round, diaphragms: proper centering is essential to ensure a good telecentricity of the lens
• only a circular, fixed, aperture makes brightness the same for all lenses
• an adjustable iris is typically not flat and this causes uncertainty in the stop position, which is crucial when using telecentric lenses!
• an iris is a moving part, which can be dangerous in most industrial environments: vibrations could easily loosen the mechanics or change the lens aperture
• the iris setting can be accidentally changed by the user and that would change the original system configuration
• end users prefer having less options and only a few things that have to be tuned in a MV system
• apertures smaller than the OE standard make no sense, as the resolution would decay because of the diffraction limit; on the other hand, much wider apertures would reduce the field depth.

The standard aperture of OE lenses is meant to optimize image resolution and field depth.

### Why don't OE® Telecentric lenses feature a focusing mechanism?

As with the iris, a focusing mechanism would introduce mechanical play in the moving parts of the lens, worsening the centering of the optical system and also causing trapezoidal distortion. Another issue concerns radial distortion: the distortion of a telecentric lens can be kept small only when the distances between optical components are set to specific values; displacing any element from its correct position would increase the lens distortion. A focusing mechanism makes the positioning of the lenses inside the optical system uncertain and the distortion value unknown: the distortion would then differ from the values obtained in our quality control process.

### F/#, Working F/# and Numerical Aperture

numerical aperture = sin(theta)

where theta is half the cone angle subtended by the rays entering or exiting the optical system. The F/# is defined as the ratio between the lens focal length f and its aperture diameter D:

F/# = f/D

For small theta values:

F/# = 1 /(2 *numerical aperture)

hence

numerical aperture = 1/(2 * F/#)

Note that numerical aperture (and F/#) refer to both image and object space, as they can define both the cone angle of incoming and outgoing rays. Usually F/# refers to image space and numerical aperture is more commonly used in object space (incoming rays).
In macro lenses, such as telecentric lenses, the F/# parameter loses its meaning because the object is not located at infinity; the working F/# should be used instead. The two parameters are related by:

Working F/# = (1 + magnification) * F/#

Note also that

numerical aperture(object) = magnification * numerical aperture(image)

and consequently

working F/#(object) = working F/#(image) / magnification.
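The relations above can be collected into a few small helper functions; this is a sketch, and the function names are illustrative:

```python
# The relations above, collected into small helper functions
# (function names are illustrative).
def na_from_fnum(fnum):
    # numerical aperture = 1 / (2 * F/#), valid for small angles
    return 1.0 / (2.0 * fnum)

def working_fnum(fnum, magnification):
    # Working F/# = (1 + magnification) * F/#
    return (1.0 + magnification) * fnum

def object_na(image_na, magnification):
    # NA(object) = magnification * NA(image)
    return magnification * image_na

# Example: an F/8 lens used at 0.5x magnification
print(working_fnum(8.0, 0.5))  # 12.0
print(na_from_fnum(8.0))       # 0.0625
```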

### Telecentric lenses field depth

Field depth is stated in our product documentation: for most of our TC series, the stated value is the overall field depth at F/# 8.

The magnification of a telecentric lens remains constant regardless of changes in the working distance. Therefore we may consider the best focus plane to be located in the middle of the field depth; in other words, field depth is symmetrical about the best focus plane.

At the borders of the field depth the image can still be used for measurement but, to get a very sharp image, only half of the nominal field depth should be considered.

Field depth is a complex parameter: it depends on the magnification, the F/#, the wavelength, the pixel size and, last but not least, on the sensitivity of the edge extraction algorithm in use. For this reason there is no objective or standard way to define it: it is a subjective parameter.

A simple rule of thumb to calculate field depth:

field depth = (WFN * p * k) / (M * M)

where

M = magnification
WFN = working F/#
p = pixel size (in micron)
k = application specific parameter

The k parameter depends upon the type of application. For telecentric measurement applications, a reasonable k value is 0.008, while for defect inspection k should be set at about 0.015.
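The rule of thumb can be sketched as a small function. One assumption is mine: the text does not state the output unit, so the result is read here in mm when the pixel size p is given in microns:

```python
# Rule-of-thumb field depth, with all symbols as defined above.
# Unit assumption (mine, not stated in the text): the result is read
# in mm when the pixel size p is given in microns.
def field_depth(magnification, working_fnum, pixel_um, k):
    return (working_fnum * pixel_um * k) / (magnification ** 2)

# Hypothetical example: 0.5x lens, working F/# 8, 5.5 micron pixels,
# measurement application (k = 0.008)
print(field_depth(0.5, 8.0, 5.5, 0.008))  # 1.408
```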
For a given magnification and working F/#, the bi-telecentricity of our lenses yields superior field depth values.

### C-mount flange distance

Many cameras are found not to comply with the industrial C-mount standard, which sets the flange-to-detector distance (flange focal distance) at 17.526 mm. Besides the issues caused by mechanical inaccuracy, many manufacturers do not take due account of the thickness of the detector's protective glass which, however thin, is still part of the actual flange-to-detector distance.

OE® lenses are pre-set to work at the nominal C-mount distance; however, a spacer kit is supplied with our telecentric lenses with instructions on how to tune the back focal length at the optimal value.

### LED pulsing with OE® illuminators

Most OE® illuminators can be fed by a 12 or 24 V DC source. The LED is driven by the inner circuitry (a micro-switching electronic power supply), which ensures both optical throughput stability and safe operating conditions.

A trimmer on the rear side of the device makes it possible to adjust the LED current and consequently the luminous flux. When customers need to pulse the source to cope with very short exposure times, the LED can be driven directly by connecting a third independent cable. Pulsing electrical ratings are listed in our product brochures.

When using an LTCL Series collimated source in combination with a telecentric lens, all the light flowing out of the source is collected by the telecentric lens, making this configuration remarkably efficient in terms of energy balance. The detector is very brightly illuminated, allowing for very short integration times without the need for pulsed operation.

### Diffraction limit and CTF with small pixel detectors

Many integrators use high-resolution cameras with very small pixels without taking the actual lens performance into account. The resolution of a lens is typically expressed by its MTF (Modulation Transfer Function) graph, which shows the response of the lens when a sinusoidal pattern is imaged. However, the CTF (Contrast Transfer Function) is a more interesting parameter, because it gives the contrast achieved when a black-and-white striped pattern is imaged, thus simulating the behaviour of a lens when imaging an object edge.

If "t" is the thickness of a white or black stripe in object space, the related spatial frequency w (usually expressed in line pairs/mm) is computed as

w = 1/(2t)

For any given value of w, the contrast is computed as:

CTF(w) = (Iw - Ib) / (Iw + Ib)

where Iw and Ib are the maximum intensities (or "grey levels") measured on the image plane for the white and black stripes respectively.
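As a minimal sketch, the contrast and the spatial frequency defined above can be computed directly:

```python
# Contrast and spatial frequency exactly as defined above.
def ctf(i_white, i_black):
    # CTF(w) = (Iw - Ib) / (Iw + Ib)
    return (i_white - i_black) / (i_white + i_black)

def spatial_frequency(t_mm):
    # w = 1/(2t): one line pair is one white plus one black stripe
    return 1.0 / (2.0 * t_mm)

print(ctf(200, 50))             # 0.6
print(spatial_frequency(0.01))  # 50.0 lp/mm for 10 micron stripes
```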
CTF is limited by diffraction, and this limit lowers as the F/# increases: for a given spatial frequency w, the CTF increases as the working F/# gets smaller.
At the same time, CTF also depends on the wavelength range: the shorter the wavelength, the higher the CTF. Expressing the CTF as a function of these parameters yields:

CTF = CTF (w , WFN, lambda)

where

w = spatial frequency in linepairs/mm
WFN = working F/#
lambda = wavelength (in millimeters)

The "cut-off frequency" is defined as the value of w for which

CTF = 0

which occurs when

w = 1/(WFN * lambda)

For instance, a TC Series lens with working F/# 8 operating in green light (lambda = 0.000587 mm) has a cut-off frequency of:

w(cutoff) = 1/(8 * 0.000587) ≈ 213 lp/mm

which corresponds to a pixel size of about 1/(2*213) mm ≈ 2.3 microns.
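The worked example above can be reproduced in a few lines; the function names are illustrative:

```python
# Cut-off frequency and the matching pixel size, reproducing the
# worked example above (function names are illustrative).
def cutoff_lp_per_mm(working_fnum, wavelength_mm):
    # w(cutoff) = 1 / (WFN * lambda)
    return 1.0 / (working_fnum * wavelength_mm)

def matching_pixel_um(cutoff):
    # p = 1 / (2 * w), converted from mm to microns
    return 1000.0 / (2.0 * cutoff)

w = cutoff_lp_per_mm(8.0, 0.000587)
print(round(w))                        # 213 lp/mm
print(round(matching_pixel_um(w), 1))  # 2.3 micron
```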

In theory, one would not want a lens yielding a very small contrast (CTF) at the pixel's spatial frequency; in practice, however, a small pixel size helps reduce noise and better define the profile of an object.

For this reason, although the increase in resolution is less than proportional to the decrease in pixel size (because the CTF curves fall as the spatial frequency increases), there are still good reasons to use small pixels. In addition, edge detection is carried out in two dimensions: decreasing the pixel size even slightly strongly increases the number of pixels over a given image area and is likely to make edge detection more efficient.

### F-mount interface and backlash

Many standard F-mount adaptors for photography lenses are affected by backlash: the F-mount interface is intrinsically elastic because it is based on pre-loaded springs. F-mount is a commercial standard, not an industrial one, so there is no objective reference defining the spring pre-load or the exact mechanical tolerances.

Because of its elastic nature, the F-mount interface can be troublesome if the camera attached to the lens is heavy and the system is subject to vibrations (obviously, holding the lens in place only through the camera's F-mount is not recommended).
Possible ways to overcome backlash include:

• blocking the camera, as well;
• increasing the pre-load of the springs.

### Types of pattern coatings

OE® supplies patterns produced both by laser engraving and by photolithography, for projection or distortion calibration.

### Laser engraving

Several layers of dielectric material are deposited onto the glass substrate. The result is a "dichroic" mirror, optically very similar to an aluminium coating. A laser source is then used to remove parts of the coating and let light pass through the engraved areas.
This technique is fast and inexpensive, but not very accurate: the laser spot size is 30-40 microns, so the real geometrical resolution cannot be precisely stated.

### Photolithography

A chrome layer is deposited on the glass substrate. With a technique similar to that used for manufacturing electronic boards, a photoresist is applied over the chrome layer and exposed to UV light. Etching then removes the chrome in the areas left unprotected by the photoresist, so that the desired chrome pattern remains on the glass surface. Since the UV exposure is carried out by means of a high-precision plotter, a geometrical accuracy of a few microns over a surface of some tens of millimetres can easily be achieved.
The roughness of the pattern edges is also very limited, around 1.5 microns or less.

### Could collimated light cause interference effects?

Collimated light is the best choice if you need to inspect objects with curved edges; for this reason this illumination technique is widely used by our customers, especially when working with measurement systems of shafts, tubes, screws, springs, o-rings and similar samples. However, collimated light generates interference effects, both destructive and constructive, due to the partial coherence of the radiation source and physically inherent to this type of illumination.

These effects appear as an image luminosity distribution different from the one expected from a standard illuminator such as a common backlight, which provides a perfectly homogeneous border but at the same time cannot operate effectively with other types of objects (for example, cylindrical shapes).

Image processing libraries are routinely able to compensate for higher or lower luminosity near the object edge (lighter border or expanded shadow areas), adjusting dimensional analysis to standard lighting conditions.

Changing the image processing parameters is required if you have to measure objects with significantly varying shapes and surfaces: by diffusing, reflecting or blocking the incoming light, the object becomes an integral part of the optical system.

In this sense, a contactless optical measurement examines the disturbance caused by the inspected object: this can be noticed when measuring (under any illumination) two dimensionally identical objects with different surface textures.