




17. VISION SYSTEMS

• Vision systems are suited to applications where simpler sensors do not work.



17.1 OVERVIEW



• Typical components in a modern vision system.



(Figure: block diagram of a vision system. Lighting illuminates the Scene, which is viewed by a Camera (lens, iris, CCD and control electronics). The camera feeds Frame Grabber hardware (A/D converter and memory), then Image Processing software (filtering, segmentation and recognition), then Action or Reporting software (robot, network, PLC, etc.), all coordinated by a Computer.)


17.2 APPLICATIONS

• An example of a common vision system application is given below. A belt carries pop (soda) bottles along; as each bottle passes an optical sensor, the sensor triggers the vision system to capture an image and compare it to stored images of acceptable bottles (with no foreign objects or cracks). If the bottle differs from the acceptable images by more than an allowed margin, a piston is fired to eject it. (Note: without a separate sensor, the timing of the piston firing must be calculated.) Here a PLC is used as the controller, a common industrial solution. All of this equipment is available off-the-shelf ($10K-$20K). In this application the object lighting, background and contrast would be very important.



(Figure: bottle inspection station. Bottles move along a belt past a Light Emitter/Light Detector pair that triggers the Vision Module. A Light Source and Camera view each bottle; the Vision Module reports to a Programmable Logic Controller (aka PLC), which drives a Pneumatic Solenoid and Pneumatic Piston (with Air Supply and Air Exhaust) to eject rejected bottles.)


17.3 LIGHTING AND SCENE

• There are certain features that are considered important in images,

- boundary edges

- surface texture/pattern

- colors

- etc






• Boundary edges are used when trying to determine object identity/location/orientation. This

requires a high contrast between object and background so that the edges are obvious.



• Surface texture/pattern can be used to verify various features, for example: are the numbered buttons in a telephone keypad in the correct positions? Some visually significant features must be present.



• Lighting,

- multiple light sources can reduce shadows (structured lighting).

- back lighting with luminescent screens can provide good contrast.

- lighting positions can reduce specular reflections (light diffusers help).

- artificial light sources provide the repeatability required by vision systems, which is not possible with natural light sources.



17.4 CAMERAS

• Cameras use available light from a scene.



• The light passes through a lens that focuses the beams onto a plane inside the camera. The lens can be moved toward/away from this plane to keep the image in focus as the scene moves towards/away from the camera.



• An iris may also be used to mechanically reduce the amount of light when the intensity is

too high.



• The plane inside the camera that the light is focussed on can read the light in a number of ways, but basically the camera scans the plane in a raster pattern.



• An electron gun video camera is shown below. The tube works like a standard CRT: an electron beam is generated by heating a cathode to eject electrons, and a potential applied between the anode and cathode accelerates the electrons off of the cathode. The focussing/deflecting coils can focus the beam with a common potential, or deflect it with a differential potential. The significant effect occurs at the front of the tube, where the beam is scanned over the face. Where the beam is incident, electrons jump between the plates in proportion to the light intensity at that point. The scanning occurs in a raster pattern, many lines left to right, top to bottom. The pattern is repeated a number of times per second - the typical refresh rate is on the order of 30Hz.



(Figure: electron gun video camera tube. A heated cathode emits electrons, an electron accelerator and anode form the scanning electron beam, and focus and deflection coils steer it across the tube face; photons incident on the face modulate the output signal.)



• Charge Coupled Device (CCD) - This is a newer solid-state video capture technique. An array of cells is laid out on a semiconductor chip. A grid-like array of conductors and insulators is used to move collections of charge through the device; as the charge moves, it sweeps across the picture. When photons strike the semiconductor they knock electrons out of orbit, creating negative and positive charges. The charges are then accumulated to determine light intensity. The mechanism for a single scan line is seen below.



(Figure: charge transfer in a CCD scan line. Control electrodes (Li-1, Li, Li+1) sit on an oxide insulator over a p-type semiconductor. Charge is trapped under a positively biased electrode (+V, with -V on its neighbors); this location corresponds to a pixel, and an incident photon causes an electron to be liberated there. By stepping the electrode voltages (e.g., Li to 0V, Li+1 to +V) the trapped charges are moved to the next pixel location. This single-scan-line mechanism is repeated across electrodes L0..L11 for the entire CCD, with n-type barriers (on the bottom) to control the charge.)



• Color video cameras simply use colored filters to screen the light before it strikes a pixel. For an RGB scan, the image is scanned three times, once for each color.






17.5 FRAME GRABBER

• A simple frame grabber is pictured below,



(Figure: frame grabber block diagram. The video signal enters a signal splitter, which separates the line start and picture start pulses from the pixel intensities. A fast A/D converts the intensities to digital values stored in RAM, while an address generator, driven by the line start and picture start pulses, selects the memory location; the RAM connects to the computer bus.)
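The address generator behavior can be sketched in software. The following is a hypothetical simulation (the text gives no code, and the signal names and sizes here are made up for illustration), not a driver for real hardware:

```python
# Hypothetical sketch of a frame grabber's address generator: a stream of
# (signal, value) samples is digitized into a 2D frame buffer.
# "picture_start" resets the address to (0, 0); "line_start" advances to the
# next row; "pixel" samples are quantized (the A/D step) and stored in RAM.

def grab_frame(samples, rows, cols, levels=256):
    ram = [[0] * cols for _ in range(rows)]  # frame memory
    row, col = 0, 0
    for signal, value in samples:
        if signal == "picture_start":
            row, col = 0, 0
        elif signal == "line_start":
            row, col = row + 1, 0
        elif signal == "pixel" and row < rows and col < cols:
            # A/D conversion: map an analog level in [0, 1) to an integer
            ram[row][col] = min(int(value * levels), levels - 1)
            col += 1
    return ram

# Example: a 2x2 frame from analog intensities in [0, 1)
stream = [("picture_start", None),
          ("pixel", 0.0), ("pixel", 0.5),
          ("line_start", None),
          ("pixel", 0.25), ("pixel", 0.99)]
frame = grab_frame(stream, rows=2, cols=2)  # [[0, 128], [64, 253]]
```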



• These items can be purchased for reasonable prices, and will become standard computer

components in the near future.



17.6 IMAGE PREPROCESSING

• Images are basically sets of pixels, and are often a less than perfect representation of the scene. By preprocessing, some unwanted variations/noise can be reduced, and desired features enhanced.



• Some sources of image variation/noise,

- electronic noise - this can be reduced by designing for a higher Signal to Noise Ratio

(SNR).

- lighting variations cause inconsistent lighting across an image.

- equipment defects - these cause artifacts that are always present, such as stripes, or pixels

stuck off or on.






17.7 FILTERING

• Filtering techniques can be applied,

- thresholding

- Laplace filtering

- Fourier filters

- convolution

- histograms

- neighborhood averaging
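As a simple illustration of one of these techniques, a 3x3 neighborhood averaging filter can be sketched as below. This is a minimal Python example (the text itself gives no code): each pixel is replaced by the mean of itself and its in-bounds neighbors.

```python
# Minimal sketch of 3x3 neighborhood averaging: each pixel becomes the
# integer mean of itself and its in-bounds neighbors, smoothing out noise.

def neighborhood_average(img):
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            total, count = 0, 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        total += img[rr][cc]
                        count += 1
            out[r][c] = total // count  # integer mean of the neighborhood
    return out

# A single bright noise pixel is spread out (reduced) by the filter
noisy = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
smoothed = neighborhood_average(noisy)  # [[2, 1, 2], [1, 1, 1], [2, 1, 2]]
```

Note that averaging reduces noise at the cost of blurring edges, which is why edge detection is usually done after, not before, such smoothing.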



17.7.1 Thresholding

• Thresholding basically sets a transition value: if a pixel is above the threshold it is switched fully on; if it is below, it is turned fully off.

(Figure: an array of pixel brightness values (the original image), shown together with the images that result from applying thresholds of 2 and 5.)

It can be difficult to set a good threshold value, and the results are prone to noise/imperfections in the image.
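The thresholding operation can be sketched as below; a minimal Python example (the pixel values here are made up for illustration, using a 1..7 brightness scale):

```python
# Minimal sketch of thresholding: pixels above the threshold are switched
# fully on (here 7, the maximum brightness), the rest fully off (here 1,
# the minimum on an assumed 1..7 scale).

def threshold(img, level, on=7, off=1):
    return [[on if pixel > level else off for pixel in row] for row in img]

original = [[1, 5, 3, 2],
            [2, 6, 7, 3],
            [4, 5, 4, 2],
            [3, 7, 7, 1]]
binary = threshold(original, level=5)
# binary == [[1, 1, 1, 1], [1, 7, 7, 1], [1, 1, 1, 1], [1, 7, 7, 1]]
```

Rerunning with a lower `level` turns more pixels fully on, which is exactly why a poorly chosen threshold merges objects into the background or into each other.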



17.8 EDGE DETECTION






• An image (already filtered) can be checked to find a sharp edge between the foreground and

background intensities.



• Let’s assume that the image below has been prefiltered into foreground (1) and background

(0). An edge detection step is then performed.

(Figure: an actual scene, the thresholded image (a blob of 1s on a background of 0s), and the edge detected image, in which only the pixels on the boundary of the blob remain 1.)



A simple algorithm might create a new image (array) filled with zeros, then look at the original image. If any foreground (1) pixel has a vertical or horizontal neighbor that is 0, then that pixel lies on an edge, and it is set to 1 in the new image.
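That algorithm can be sketched as below; a minimal Python version, assuming the input image has already been thresholded to 0/1 values:

```python
# Minimal sketch of boundary edge detection on a binary image: a foreground
# pixel (1) is kept as an edge pixel only if at least one of its vertical or
# horizontal neighbors is background (0). Out-of-bounds neighbors are
# treated as background, so pixels on the image border can also be edges.

def detect_edges(img):
    rows, cols = len(img), len(img[0])
    edges = [[0] * cols for _ in range(rows)]  # new image filled with zeros
    for r in range(rows):
        for c in range(cols):
            if img[r][c] != 1:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < rows and 0 <= cc < cols) or img[rr][cc] == 0:
                    edges[r][c] = 1  # a background neighbor: boundary pixel
                    break
    return edges

blob = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
outline = detect_edges(blob)  # only the interior pixel (2, 2) becomes 0
```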



17.9 SEGMENTATION

• An image can be broken into regions that can then be used for later calculations. In effect this method looks for distinct self-contained regions, and uses region numbers instead of pixel intensities.



(Figure: an actual scene containing two objects, its thresholded image (0s and 1s), and the segmented image: the background becomes region 1, a solid square blob becomes region 2, a ring-shaped blob becomes region 3, and the enclosed pixel inside the ring becomes region 4.)



• A simple segmentation algorithm might be,

1. Threshold the image to have values of 1 and 0.

2. Create a segmented image and fill it with zeros (set the segment number variable to one).

3. Scan the old image left to right, top to bottom.

4. If a pixel value of 1 is found, and the pixel is 0 in the segmented image, do a flood fill for the pixel onto the new image using the segment number variable.

5. Increment the segment number and go back to step 3.

6. Scan the segmented image left to right, top to bottom.

7. If a pixel is found to be fully contained in any segment, flood fill it with a new segment as in steps 4 and 5.
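Steps 1 to 5 above can be sketched as below; a minimal Python version using a stack-based flood fill, assuming the image is already thresholded to 0/1. (Unlike the figure, this sketch leaves the background as 0 rather than labelling it as region 1, and it omits the enclosed-hole handling of steps 6 and 7.)

```python
# Minimal sketch of segmentation by flood fill on a thresholded (0/1) image:
# each 4-connected group of 1s receives its own segment number.

def flood_fill(img, seg, r, c, number):
    rows, cols = len(img), len(img[0])
    stack = [(r, c)]
    while stack:
        rr, cc = stack.pop()
        if 0 <= rr < rows and 0 <= cc < cols and img[rr][cc] == 1 and seg[rr][cc] == 0:
            seg[rr][cc] = number  # label this pixel with the segment number
            stack.extend([(rr - 1, cc), (rr + 1, cc), (rr, cc - 1), (rr, cc + 1)])

def segment(img):
    rows, cols = len(img), len(img[0])
    seg = [[0] * cols for _ in range(rows)]  # segmented image, all zeros
    number = 1
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == 1 and seg[r][c] == 0:  # unlabelled foreground pixel
                flood_fill(img, seg, r, c, number)
                number += 1
    return seg

binary = [[1, 1, 0, 0],
          [1, 0, 0, 1],
          [0, 0, 1, 1]]
regions = segment(binary)  # [[1, 1, 0, 0], [1, 0, 0, 2], [0, 0, 2, 2]]
```

An explicit stack is used instead of recursion so that large regions do not overflow the call stack.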


