DigImage: Particle Tracking
:::: IMAGE CAPTURE
:::: PARTICLE LOCATION
:::: PARTICLE MATCHING
:::: STRUCTURE OF FILES
:::: BASIC OPERATION OF TRK2DVEL
:::: CONFIGURATION OF TRK2DVEL
:::: PARTICLE-TAGGED DATA
:::: IMAGE PREPROCESSING
DigImage uses Super VHS video tape to record a single view of an illuminated region of an experiment. During the tracking phase, the video tape is replayed and the images are captured by digitising the video signal. As only four or sixteen complete frames (depending on the hardware) may be captured at once, it must be possible to control the video recorder and synchronise the frame grabber with it so that exactly the right series of four (or sixteen) frames is acquired. The video recorder must be in PLAY mode during frame acquisition, as small synchronisation errors in PAUSE mode lead to unacceptable images; single-frame stepping through the tape is therefore not feasible.
The task of controlling the video recorder so that exactly the right frame is acquired is not trivial. Even VTRs designed for precise editing applications occasionally lose track of a frame when changing direction. As tracking an experiment may require 5000 or more frames (the software limit is 65279 samples, or the length of the video tape), the tape may need to change direction well over 1000 times, so even small timing errors will accumulate. Moreover, as the velocities are determined by what is effectively a finite difference in time, any errors will be reflected in the measured velocity.
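To illustrate why frame-accurate positioning matters, the following sketch (not part of DigImage; the function, frame period and displacement values are illustrative assumptions) shows how a single slipped frame propagates into the finite-difference velocity estimate:

```python
# Hedged sketch: effect of a one-frame timing error on a
# finite-difference velocity estimate. Values are illustrative.

def velocity_estimate(dx_pixels, n_frames, frame_period=1.0 / 25.0):
    """Velocity from a displacement accrued over n_frames at 25 frames/s (PAL)."""
    return dx_pixels / (n_frames * frame_period)

# A particle moving 4 pixels between samples taken 5 frames apart:
v_true = velocity_estimate(4.0, 5)          # 20 pixels/s
# If the VTR silently skips one frame, the displacement is accrued over
# 6 frame periods but attributed to only 5:
v_err = velocity_estimate(4.0 * 6 / 5, 5)   # 24 pixels/s, a 20% error
```

Even a rare slip of this kind biases every velocity sample computed across it, which is why DigImage goes to such lengths to verify the tape position.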
DigImage makes use of a Panasonic AG-7330, Panasonic AG-7350 or JVC BRS822 Super VHS video tape recorder. These three machines are fitted with an RS-232 interface allowing computer control of all the video functions. The interface also allows the computer to interrogate the VTR to determine what it is doing and where it is. Unfortunately, the VTR is able to return its position only in an asynchronous manner which may lag behind the true position of the VTR. Moreover, with the AG-7330 the position is limited to units of one second, and with both Panasonic machines there is the possibility of the VTR returning an incorrect time. As we need to know the exact frame and the precise moment at which to acquire the buffers, this RS-232 control is not adequate. The RS-232 interface for the Panasonic machines has therefore been modified to make the VTR's internal video field strobe available on a spare pin (see installation documentation). On the AG-7330 the field strobe undergoes a transition every video field (1/50 or 1/60 of a second), while on the AG-7350 it pulses once a frame (1/25 or 1/30s). An interrupt-driven routine in the host computer counts these transitions so that, when provided with the direction of tape transport, the tape position may be determined. For the JVC a more complicated procedure utilising a frame grabber generated interrupt is employed. During critical operations, these interrupt-driven routines must have a higher priority than any other process in the host PC, otherwise it is possible to miss a transition. Interrupts in both the real and protected modes of the CPU are employed.
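The strobe-counting idea can be sketched as follows. This is a hedged illustration only: the class and method names are assumptions, and DigImage's actual routine is an interrupt service routine on the PC, not Python.

```python
# Hedged sketch: turning field-strobe transitions into a tape position,
# given the current transport direction. Names are illustrative.

class FieldCounter:
    FIELDS_PER_FRAME = 2  # interlaced video: two fields per frame

    def __init__(self):
        self.fields = 0       # signed field count from a reference point
        self.direction = +1   # +1 tape moving forwards, -1 in reverse

    def on_strobe(self):
        """Called on every strobe transition (an ISR on the real hardware)."""
        self.fields += self.direction

    def frame(self):
        """Current tape position in whole frames."""
        return self.fields // self.FIELDS_PER_FRAME
```

For example, 100 transitions while playing forwards correspond to 50 frames of tape travel; reversing the transport simply decrements the same counter.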
This control system would be all that was required if the field strobe in the VTR always changed state when it should. However, this is not the case with the cheaper Panasonic machines. The AG-7330 always loses two fields when changing from reverse to forwards (an error which can be catered for), and both machines occasionally (about one in ten direction changes) lose a further two pulses. It is these intermittent errors which cause the difficulties. To overcome this, before processing a video tape, DigImage pre-formats the tape with a time-code pulse on one of the audio tracks (normally channel 1). A short tone pulse is recorded every eight video fields (four frames). During subsequent tape operations, the relative position of these pulses is used as a check on the field count. If the position of the pulses appears to have changed since the previous tape operation, DigImage is alerted to an error in the field counter; this error is then corrected before any image acquisition or precision tape positioning.
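The pulse check amounts to comparing the field counter's phase, modulo the pulse spacing, against the phase recorded at an earlier pulse. A minimal sketch, with illustrative names (DigImage's actual correction logic is not shown):

```python
# Hedged sketch: detecting a slipped field counter from the audio
# time-code pulses recorded every eight video fields.

PULSE_SPACING = 8  # fields between time-code pulses on audio channel 1

def counter_error(field_count_at_pulse, reference_phase):
    """Return the signed, minimal slip of the field counter, in fields.

    field_count_at_pulse: counter value when an audio pulse was detected.
    reference_phase: counter value mod 8 recorded at a previous pulse.
    """
    slip = (field_count_at_pulse - reference_phase) % PULSE_SPACING
    if slip > PULSE_SPACING // 2:
        slip -= PULSE_SPACING   # e.g. an apparent slip of 6 is really -2
    return slip
```

Since the Panasonic machines lose pulses in pairs, a reported slip of -2 indicates the counter is two fields ahead of the true tape position and can be corrected before the next acquisition.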
Extensive tests of this control system have shown DigImage to be 100% reliable in capturing the required frame, provided the tape is of adequate quality and the audio pulses have been recorded correctly. The system operates through a customised serial cable linking the modified RS-232 interface, audio out and audio in connectors on the VTR with the COM1: and COM2: (or COM3: and COM4:) ports on the PC. Details of these components are given in the Installation Guide. No special expansion cards are required for the PC.
An alternative approach to controlling the video source would be to utilise a laser disc to store the images. There are two drawbacks to this option: the cost and the resolution. The current generation of laser discs sufficiently sophisticated for these applications cost in the region of £13000-00, and replacement write-once discs around £250-00 for 32 minutes. This compares very poorly with around £1600-00 for the AG-7350 and interface. Laser discs record the video signal in an analogue form using the PAL or NTSC standard for encoding colour information. While the recording medium has a high noise immunity, it does not offer any significant improvement in resolution over Super VHS.
A second alternative for slower flows is to record the video sequence directly to the computer's hard disk using the [;KM: Movies] facility of DigImage. By avoiding recording the signal on an analogue medium, the signal to noise ratio of such direct-to-disk recordings will be better, and as a consequence the quality of the results will be improved. The main limitation of this approach is that the sampling rate is limited and depends on the size of the window being recorded. For more information, consult the System Overview.
Before processing the images in any way, the permanent reference points set up by [;PR: Reference points] (refer to the System Overview for details) are located within each image. If the root mean square error in the mapping which is generated exceeds the limit set by [;USZ Limit on rms error for mapping], then the image will be rejected as being of inadequate quality. If the captured image for this frame has not been rejected too often before (how often depends on a second setting in [;USZ Limit on rms error for mapping]), then DigImage will try to improve the quality of the image by repeating the acquisition process until it has either captured a satisfactory image, or the limiting number of retries has been reached.
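The accept/retry logic described above can be sketched as follows. The helper callbacks and parameter names are assumptions for illustration; in DigImage both limits come from [;USZ Limit on rms error for mapping].

```python
# Hedged sketch of the reference-point validation loop.

def acquire_validated_frame(capture, locate_refs, rms_limit, max_retries):
    """Re-capture a frame until its reference-point mapping is good enough.

    capture():      grabs one frame from tape (hypothetical callback)
    locate_refs(f): returns the rms error of the reference-point mapping
    """
    for _ in range(max_retries + 1):
        frame = capture()
        if locate_refs(frame) <= rms_limit:
            return frame            # mapping acceptable: use this image
    return None                     # rejected too often: frame discarded
```

Because the tape must be repositioned for every retry, a tight rms limit on a noisy tape can slow processing considerably; the retry limit bounds this cost.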
After an image has been captured and reference points validated, it may be necessary to enhance the quality of the image in some manner. Typically this requires some form of filtering. A wide range of low and high pass filters are available under DigImage (see help facility in [;USI: Filter type] for more details), though it is seldom desirable to use these. Rather it is far better to ensure clean experimental images than to throw away information by trying to remove unwanted noise. One occasion where filtering may be desirable is in the removal of a dynamically varying background. Static background variations may be removed by [;USB: Background removal]. While most of the options in [;USI: Filter type] take the form of convolution filters, [;USIU User supplied pre-process subroutine] allows arbitrary pre-processing to be performed using a user-supplied subroutine which hooks onto the particle tracking. The default routine simply removes large scale variations in the background image, effectively implementing a more sophisticated dynamic equivalent of [;USBR Remove particles then low pass filter] to construct a background image which is then subtracted from the raw image. Complete details on the use of this option and other customisable procedures are given in the file Document\Trk2Hook.DOC.
One of the hidden problems with using video tape is that typical CCD cameras have electronic shutters which are open on a cycle of 50Hz (or 60Hz), whereas a complete video frame is produced at only 25Hz (30Hz). Thus the information on one half of the interlace (the even lines) corresponds to an earlier time than the information on the other half of the interlace (odd lines). For flows which are evolving slowly, this need not be of concern as the particles will have moved only a small fraction of their size between the two halves of the interlace. Their position may be considered as the mean of the even and odd line positions.
In contrast, if the flow velocities are large so that a particle moves one or more particle diameters between the two halves of the interlace, then the particle will appear to be in two positions at once. Under these circumstances it may be necessary to reduce the vertical resolution from 512 lines to 256 lines, utilising only one half of the interlace to determine the particle positions.
If a high frequency response is required or high velocities are present, then the two halves of the interlace may be considered as separate snap shots of the flow, allowing sampling at 50Hz (60Hz for NTSC) to resolve 25Hz (30Hz) signals. To achieve this frequency response for the vertical component of velocity, the amplitude of the fluctuations must be much greater than the pixel size, otherwise the one line vertical offset between the even and odd lines will contaminate the results.
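Treating the two halves of the interlace as separate snapshots amounts to de-interleaving the frame's lines. A minimal sketch (illustrative only; which field is exposed first depends on the camera, and real images would be arrays rather than lists of rows):

```python
# Hedged sketch: splitting an interlaced frame into its two fields so
# they can be treated as separate 50 Hz (60 Hz NTSC) snapshots.

def split_fields(frame):
    """Return (even_lines, odd_lines) of an interlaced frame.

    The two fields are exposed 1/50 s (1/60 s NTSC) apart, and the odd
    field is offset vertically by one line relative to the even field.
    """
    even = frame[0::2]   # one field (assumed exposed first here)
    odd = frame[1::2]    # the other field, one line lower
    return even, odd
```

For slowly evolving flows the two fields would instead be kept together, a particle's position being taken as the mean of its even- and odd-field positions.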
Some forms of video camera are able to produce an interlaced signal for which both video fields contain information at the same time. The Cohu 4910 series and Sony XC-77RR series, when combined with a phase-locked mechanical shutter, are able to produce full resolution images. These cameras provide a much greater range of facilities than most conventional video cameras, yet cost little more. We therefore strongly recommend one of these cameras for particle tracking - further details are available in the file Document\Cameras.DOC.
In many cases, even with very clean experiments, there remains some residual background illumination. This may be either intentional or unintentional. Unintentional spatially uniform illumination presents no difficulty, but variations in the background across the viewed region need to be catered for. DigImage is able to allow for static or dynamic variations in the background illumination, and optionally gather additional information from the dynamic variations.
The basic technique for static background variations is to subtract some idealisation of the background from the incoming video signal. In the simplest situation, the idealisation may be a uniform intensity field. A number of options for determining the background illumination are presently available:
|[;USBA ALU operations with buffer]||Under this option more general ALU operations may be used to modify an incoming image. This option could be used to achieve the same effect as [;USBM Mask region with buffer] by selecting an ALU AND function; however, any ALU function may be used in combination with a user-specified image.|
|[;USBN No background removal]||This assumes the background is uniform black or white.|
|[;USBF Low pass filter average picture]||This option captures and averages a number of images at some specified time during the experiment. The average is then passed through a simple low-pass filter.|
|[;USBM Mask region with buffer]||This option is designed specifically to enable tracking in non-rectangular domains. The effect is similar to using [;USBA ALU operations with buffer] with a binary image in an AND operation. Regions in the image to be processed corresponding to regions in the mask buffer with an intensity of 255 will be retained, while those regions corresponding to a zero in the mask buffer will be discarded. In this way the desired area may be selected from a large window in a manner more flexible than the basic rectangular windowing. Note that the data may alternatively be windowed or masked from within Trk2DVel if necessary.|
|[;USBP Polynomial fit to background]||Under this option the average picture is broken into 10x20 pixel blocks with the average intensity being computed for each block. A bi-quartic polynomial is fitted (using a least squares routine) to these blocks. This least squares fit is then used as the background.|
|[;USBR Remove particles then low pass filter]||This represents a more sophisticated version of the low pass filter. The average picture is first copied. The copy passes through a low-pass filter and is then subtracted from the original to produce a residual picture. The residual picture will give some indication of points, such as particles, in the average picture which need to be removed before creating the background. A constant is subtracted from the residual picture before it, in turn, is subtracted from the original averaged picture. Finally, this corrected average is passed through a low pass filter.|
|[;USBU User supplied buffer background]||With this final option, the user may specify a "background" image to be subtracted from the incoming image. This option is effectively a subset of [;USBA ALU operations with buffer].|
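As a simplified illustration of the ideas behind these options, the following sketch estimates a background from block-averaged intensities and subtracts it. It is a hedged, pure-Python illustration only: [;USBP Polynomial fit to background] additionally fits a bi-quartic polynomial to the block means by least squares, which is omitted here, and the function names are assumptions.

```python
# Hedged sketch: block-averaged background estimation and subtraction.
# Images are represented as lists of rows of integer pixel intensities.

def block_background(image, bh, bw):
    """Estimate a background image from bh x bw block-averaged intensities."""
    h, w = len(image), len(image[0])
    bg = [[0] * w for _ in range(h)]
    for by in range(0, h, bh):
        for bx in range(0, w, bw):
            block = [image[y][x]
                     for y in range(by, min(by + bh, h))
                     for x in range(bx, min(bx + bw, w))]
            mean = sum(block) // len(block)
            for y in range(by, min(by + bh, h)):
                for x in range(bx, min(bx + bw, w)):
                    bg[y][x] = mean
    return bg

def remove_background(image, bg):
    """Subtract the background, clamping to the 0..255 pixel range."""
    return [[max(0, p - b) for p, b in zip(prow, brow)]
            for prow, brow in zip(image, bg)]
```

Fitting a smooth polynomial to the block means, as DigImage does, avoids the visible block edges that this piecewise-constant version would introduce.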
Dynamic background removal is a form of high pass filtering and is covered by the [;USI: Filter type] menu with user-definable routines discussed in Document\Trk2Hook.DOC.
Temporal changes in the background may be used either to discriminate different regions of the flow, or (provided no more than 511 particles are to be tracked using [;UG Start particle tracking - 511 particles]) to gather information about some scalar field (e.g. density marked with a fluorescent dye). Briefly the options are as follows:
|[;USRB Record background intensity]||Available only if 511 or fewer particles are to be tracked. Under this option the local background intensity in the neighbourhood of each particle is evaluated and assigned to one of eight possible levels between two user-defined limits. This information is stored along with the other data for each and every particle. The division into only eight levels both minimises the additional storage requirements for this information, and reflects the fact that a greater degree of precision in determining the true local background intensity is unlikely to be achieved. The parameters controlling this process are set up by the [;USLB: Background intensity evaluation] menu.|
|[;USRE: Exclude region inside contour] [;USRI: Include region inside contour]||Rather than recording information about the background intensity field, these options provide a method of distinguishing two separate regions of the flow and performing the tracking process in only one such region. Typically the two regions will be marked with some visible tracer such as a fluorescent dye. At regular intervals DigImage will locate an intensity contour within the image after utilising a special range of filters to remove the influence of particles on the image. The contours may optionally be filtered to produce a smooth boundary between the two regions (each region may manifest itself as more than one object in a given image). Finally, a mask constructed from the contour is applied to the parent image so that particles are picked up only in the desired region, excluding (or including) the region inside the contour.|
|[;USRN No region recording]||Of course, there is no need to utilise or record any information about variations in the background intensity if it is of no interest in a given flow.|
|[;USRR: Record contour region particle in]||Available only if 511 or fewer particles are being tracked. This option utilises the same mask image as is produced to Include/Exclude regions inside a contour, but rather than masking off certain regions prior to tracking, the mask image is used instead to flag which region the particle was in at a given time. The information thus produced is therefore similar to that obtained by recording the background intensity, but utilising only a binary approximation to the background intensity. This option is best suited to a sharp intensity contrast dividing two distinct regions, while the previously mentioned option is more suited to relatively slow, continuous variations in the background.|
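The eight-level quantisation used when recording the background intensity can be sketched as follows. The function and limit names are illustrative stand-ins; in DigImage the limits are the user-defined values set in [;USLB: Background intensity evaluation].

```python
# Hedged sketch: mapping a local background intensity to one of eight
# levels between two user-defined limits.

LEVELS = 8  # deliberately coarse: stored compactly with each particle

def quantise_background(intensity, lower, upper):
    """Map a local background intensity to a level in the range 0..7."""
    if intensity <= lower:
        return 0
    if intensity >= upper:
        return LEVELS - 1
    frac = (intensity - lower) / (upper - lower)
    return min(LEVELS - 1, int(frac * LEVELS))
```

Intensities at or below the lower limit map to level 0 and those at or above the upper limit to level 7, so the limits should bracket the range of background intensities actually of interest.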
Once the image has been corrected for static variations in the background intensity, and any masks applied, it is necessary to locate the particles within the image.