
3. Chip-level ADC, where a single ADC circuit serves the whole APS array [57]-[58]. This method requires a very high-speed ADC, especially if a very large array is used. The architecture shown in Figure 4 utilizes this approach for ADC implementation.

E. Bandgap reference and current generators - these building blocks are used to produce on-chip analog voltage and current references for other building blocks such as amplifiers, the ADC, the digital clock generator and others. It is very important to design high-precision and temperature-independent references, especially in high-resolution state-of-the-art image sensors, where the temperature of the die can vary by many tens of degrees.

F. Digital timing and control block, clock generator - these blocks control the whole system operation. Their implementation at the chip level decreases the number of required I/O pads and thus reduces system power dissipation. Synchronized by the generated clock, the digital timing and control block produces the proper sequencing of the row address, column address and ADC timing, and creates the synchronization pulses for the pixel data going off-chip. In addition, it controls the synchronization between the imager and the analog and digital processing.

G. Analog and digital image processing - although these blocks are optional, they play a very important role in today's "smart" image sensors. Conventional vision systems are put at a disadvantage by the separation between a camera for seeing the world and a computer or DSP for figuring out what is seen. In these systems all information from the camera is transferred to the computer for further processing. The amount of processing circuitry and wiring necessary to process this information completely in parallel is prohibitive. In all engineered systems, such computational resources are rarely available and are costly in terms of power, space and reliability. As opposed to a conventional camera-on-a-chip, which only captures the image and transfers it for further processing, "smart" image sensors reduce the computational cost of the processing stages interfaced to them by carrying out an extensive amount of computation at the focal plane itself (the analog and digital image processing blocks in Figure 4), transmitting only the result of this computation (see Figure 5).

Figure 5. An example of an imaging system employing a "smart" CMOS image sensor with on-chip processing and processors/DSPs for image processing

Both analog and digital processing can be performed either in the pixel or in the array periphery, and each method has its advantages and disadvantages. In-pixel digital image processing is very rare, because it requires pixel-level ADC implementation and results in a very poor fill factor and a large pixel size. In-pixel analog image processing is very popular, especially in the field of neuromorphic vision chips. In these chips the in-pixel computations are fully parallel and distributed, since the information is processed according to the locally sensed signals and data from pixel neighbours. Usually, neuromorphic visual sensors have very low power dissipation due to their operation in the subthreshold region, but suffer from low resolution, small fill factor and very low image quality. Other applications employing in-pixel analog processing are tracking chips, wide dynamic range sensors, motion and edge detection chips, compression chips and others. The periphery analog processing approach assumes that analog processing is performed in the array periphery, without penalty on the imager spatial resolution, and it is usually done in a column-parallel manner. While this approach has computational limitations compared to in-pixel analog processing, it allows better image quality. Periphery digital processing is the most standard and usually the simplest: it is performed following the A/D conversion, utilizes standard existing techniques for digital processing and is usually done at the chip level. The main disadvantage of this approach is its inefficiency in terms of area occupied and power dissipation. Note that all the mentioned techniques can be mixed and applied together on one chip to achieve better results.

Fourth International Conference I.TECH 2006

3. Image Sensors in Security Applications

The importance of security applications has significantly increased due to numerous terrorist attacks worldwide.

This area also greatly benefits from the achievements in the image sensors field. Today, cameras are found not only in military applications, but also in commercial and civilian ones: in shops and on the streets, in vehicles and on robots. The applications are numerous and cannot be covered in this short paper. We have decided to concentrate on two important applications that represent a large fraction of the total security market: surveillance and biometrics. Both are extensively utilized in the military, commercial and civilian fields.

3.1 Surveillance

Surveillance systems enable a human operator to remotely monitor activity over large areas [59]. Such systems are usually equipped with a number of video cameras, communication devices and computer software or some kind of DSP for real-time video analysis. Such analysis can include scene understanding, attention-based alarming, colour analysis, tracking, motion detection, windows-of-interest extraction, etc. With recent progress in CMOS image sensor technology and embedded processing, some of the mentioned functions and many others can be implemented in dedicated hardware, minimizing system cost and power consumption. Of course, such integration affects system configurability, but not all applications require configurable systems: some of them benefit from low-cost and low-power dedicated hardware solutions.

For example, in [60] we have presented an image sensor that can be used for such applications. Due to a specific scanning approach this sensor can be used efficiently for motion detection, tracking, windowing and digital zoom.

Figure 6 shows the standard approach for sensor data scan - the raster scan - and the alternative Morton (Z) scan.

Figure 6. Two approaches for data scan: (a) raster scan - the conventional approach in image sensors; (b) Morton (Z) scan - newly proposed and implemented

The Morton (Z) scan possesses a very valuable feature: neighbouring pixels that are concentrated in blocks appear at the output sequentially, one after another. With this scanning approach the image blocks can be easily extracted and processed with simple on-chip hardware. For example, to construct a video camera with 4× digital zoom, blocks of 4×4 pixels need to be extracted and averaged. Similarly, cameras with 8× and 16× digital zoom can easily be constructed. Figure 7 shows measurements from our test chip.
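The block-sequential property of the Morton scan can be illustrated in software. The sketch below models the ordering only, not the chip's actual readout circuitry: it interleaves the x and y address bits to form the scan index.

```python
def morton_index(x, y, bits=8):
    """Interleave the bits of (x, y): x bits take the even positions
    and y bits the odd ones, giving the Morton (Z-order) scan index."""
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (2 * b)       # x bit -> even position
        idx |= ((y >> b) & 1) << (2 * b + 1)   # y bit -> odd position
    return idx

def scan_order(size):
    """Coordinates of a size x size pixel array in Morton scan order."""
    coords = [(x, y) for y in range(size) for x in range(size)]
    return sorted(coords, key=lambda p: morton_index(*p))

# For a 4x4 array the first four pixels out form one 2x2 block
print(scan_order(4)[:4])  # -> [(0, 0), (1, 0), (0, 1), (1, 1)]
```

Because every aligned 2×2 block (and, recursively, every aligned 4×4 block) occupies consecutive positions in this order, averaging runs of 16 consecutive output samples implements the 4× digital zoom described above.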

Figure 7. Morton scan chip test results

Another example is a wide dynamic range (WDR) imager. Dynamic range (DR) quantifies the ability of a sensor to image both highlights and shadows. If we define the dynamic range of the sensor as 20·log10(S/N), where S is the maximal signal value and N is the sensor noise, typical image sensors have a rather limited dynamic range of about 65-75 dB. Wide dynamic range imaging is very important in many surveillance systems. The dynamic range can be increased in two ways: the first is noise reduction, enabling expansion of the dynamic range toward darker scenes; the second is expansion of the incident light saturation level, improving the dynamic range toward brighter scenes.
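With this definition, the quoted 65-75 dB range follows directly from typical signal and noise levels. A quick numerical check (the 40,000 e- full-well signal and 20 e- noise floor below are assumed values, chosen only for illustration):

```python
import math

def dynamic_range_db(max_signal, noise):
    """Dynamic range DR = 20*log10(S/N), in decibels."""
    return 20 * math.log10(max_signal / noise)

# Assumed illustrative values: S = 40,000 e-, N = 20 e-
print(round(dynamic_range_db(40_000, 20), 1))  # -> 66.0
```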

Herein we present one of the possible solutions for dynamic range extension in CMOS Active Pixel Sensors (APS) [2]. As in a traditional CMOS APS, this imager is constructed of a two-dimensional pixel array, with random pixel access capability and a row-by-row (rolling shutter) readout method. Each pixel contains an optical sensor to receive light, a reset input and an electrical output representing the illumination received. This imager implements a simple function for saturation detection and is able to control the light exposure time on a pixel-by-pixel basis, so that no pixel saturates. The pixel value can then be determined in a floating-point representation. To do so, the outputs of a selected row are read out through the column-parallel signal chain, and at certain points in time are also compared with an appropriate threshold value, as shown in Figure 8. If a pixel value exceeds the threshold, i.e. the pixel is expected to be saturated at the end of the exposure time, a reset is applied at that time to that pixel. The binary information concerning the reset (i.e., whether it was applied or not) is saved in digital storage for later calculation of the scaling factor. Thus, we can represent the pixel output in the floating-point format M·2^EXP, where the mantissa (M) represents the digitized pixel value and the exponent (EXP) represents the scaling factor. This way a customized, linear, large increase in the dynamic range is achieved.
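The conditional-reset principle can be sketched numerically. The simplified model below is not the actual chip's circuit; the 8-bit scale, threshold and number of checkpoints are assumptions for illustration. At each checkpoint a pixel's remaining integration time is halved if it is predicted to saturate, and the number of resets becomes the exponent:

```python
FULL_SCALE = 255          # assumed 8-bit ADC full scale
THRESHOLD = FULL_SCALE // 2

def wdr_pixel(light, n_checks=5):
    """Simulate conditional-reset exposure of one pixel.
    `light` is the charge the pixel would collect over the full
    exposure time; returns (mantissa M, exponent EXP)."""
    t = 1.0               # remaining fraction of the exposure time
    exp = 0
    for _ in range(n_checks):
        if light * t > THRESHOLD:   # predicted to saturate: reset now
            t /= 2                  # integrate over half the time
            exp += 1
        else:
            break
    return min(int(light * t), FULL_SCALE), exp

def reconstruct(mantissa, exp):
    """Recover the linear pixel value from the M * 2**EXP code."""
    return mantissa * 2 ** exp

# A bright pixel is rescaled into range and recovered exactly
m, e = wdr_pixel(1000)
print(m, e, reconstruct(m, e))  # -> 125 3 1000
```

A dim pixel (e.g. `light = 100`) is never reset, so it keeps the full exposure time and an exponent of zero, while bright pixels trade integration time for range.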

Figure 8. Imaging pipeline, image sensor architecture and work principle.

Figure 9 (a) and Figure 9 (b) show a comparison between an image captured by a traditional CMOS imager and by the autoexposure system described here. In Figure 9 (a), a scene is imaged with a strong light hitting the object; hence, some of the pixels are saturated. At the bottom of Figure 9 (b), the capability of the autoexposure sensor for imaging the details of the saturated area in real time may be observed. Since the display device is limited to eight bits, only the most relevant eight-bit part (i.e., the mantissa) of the thirteen-bit range of each pixel is displayed here. The exponent value, which is different for different areas, is not displayed.

Figure 9. (a) Scene observed with a traditional CMOS APS sensor; (b) scene observed with our in-pixel autoexposure CMOS APS sensor.

3.2 Biometric personal identification

Biometric personal identification is strongly related to security; it refers to identifying an individual based on his or her distinguishing physiological and/or behavioural characteristics (biometric identifiers) [61]. Figure 10 shows the most frequently used biometric characteristics.

Figure 10. Biometric characteristics

Almost all the biometric characteristics shown in Figure 10 require some kind of sensing. Usually, conventional image sensors with external hardware or software image processing are used. The difficulty of on-chip integration is caused by the complexity of the required image processing algorithms. However, there are some developments that successfully achieve the required goals by utilizing parallel processing.

To give some more detailed examples in the field, we concentrate on fingerprint sensors. Generally, these sensors can be classified by the physical phenomenon used for sensing: optical, capacitive, pressure or temperature. The first two classes are the most popular, and both mainly employ CMOS technology.

Figure 11 shows various technologies for fingerprint sensing [62]. The most popular approach (see Figure 11 (a)) is based on optical sensing of the light reflected from the finger surface. This type provides high robustness to the finger condition (dry or wet), but the system itself tends to be bulky and costly. Alternative approaches that can provide compact, lower-cost solutions are based mostly on solid-state sensors, where the finger is placed directly on the sensor. However, in these solutions the sensor size needs to be at least equal to the size of the finger part used for sensing. Two sensors of this type are shown in Figure 11 (b) and (c). The first is based on light transmitted through the finger and then sensed by the image sensor, while the second is a non-optical sensor that can be implemented as a pressure, capacitance or temperature sensor. The fingerprint sensor known as a sweep sensor, shown in Figure 11 (d), can be implemented using either the optical or the other previously mentioned techniques. A sweep sensor employs only a few rows of pixels, so in order to obtain a complete fingerprint image the finger needs to be moved over the sensing part. This technology greatly reduces the cost of the sensor due to the reduced sensor area, and eliminates the problem of the latent fingerprint left on the surface in the first two methods.

Figure 11. Fingerprint sensors: (a) optical, reflection based; (b) optical, transmission based; (c) non-optical, based on pressure, capacitance or temperature; (d) sweep sensor

In all the presented methods the output signal is usually an image, and the sensors are composed of pixels that sense either temperature, pressure, photons or a change in capacitance. The overall architectures of these sensors are similar to the architecture described in section II, and they integrate various image and signal processing algorithms implemented on the same die. Various research papers have been published in this area and numerous companies are working on such integration. For example, in [63] the authors implement image enhancement and robust sensing for various finger conditions; capacitive sensing in CMOS technology is used and the data is processed in a column-parallel way. The same technology is also used in [65], but there the fingerprint identifier is integrated as well and the data is processed massively in parallel for all pixels.
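The image-assembly step that a sweep sensor requires can be sketched as follows. This is a naive model assuming a constant finger speed and a fixed strip overlap (real devices estimate the finger motion from the image data itself); the row values and the overlap of two rows are illustrative only:

```python
def stitch_sweep(strips, overlap=2):
    """Concatenate successive sweep-sensor strips into one fingerprint
    image, dropping the rows of each new strip that repeat the tail of
    the previous one (constant-speed, fixed-overlap assumption)."""
    image = list(strips[0])
    for strip in strips[1:]:
        image.extend(strip[overlap:])
    return image

# Three 4-row strips, each overlapping the previous one by 2 rows
strips = [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7]]
print(stitch_sweep(strips))  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```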

Despite the fact that fingerprint technology is quite mature, there is much work to be done to reduce power consumption, to improve technology and image processing algorithms and to achieve better system miniaturization.

4. Image Sensors in Medical Applications

Almost all medical and near-medical areas benefit from the utilization of image sensors. These sensors are used for patient observation and drug production, in dentists' offices and during surgeries. In most cases the sensor itself represents only a small fraction (in size and cost) of the larger system, but its functionality plays a major role in the whole system. Figure 12 shows examples of medical applications where CMOS image sensors are used. In this section of the paper we mostly concentrate on applications that push current image sensor technology to the edge of its possibilities: wireless capsule endoscopy and retinal implants. Both of these applications will play an important role in the lives of millions of patients in the near future.
