Moving Foreground Object Extraction from Dynamic Background



In object tracking, various problems come up along the way; one of them is light-intensity variation, which results in false tracking of the object. That is why we developed an algorithm that is robust to illumination variation and gives rise to correct tracking. For this we developed a thresholding algorithm that can compensate for the illumination variation. It also reduces the time consumed in operation.

This repository includes:

  • Complete MATLAB code for dynamic background reduction


Moving object detection in videos has improved a lot in recent years. The main challenge in moving foreground object extraction is a highly dynamic background. A background subtraction technique is used to construct reliable background information from the video sequence. Each incoming frame is then compared with the background image: if the luminance value of a pixel differs significantly from the background image, the pixel is marked as a moving object; otherwise, the pixel is regarded as background. An adaptive background threshold algorithm is used, which computes the threshold value for each pixel from the gray-level co-occurrence matrix and the local mean. This is called local processing, and its results are compared with global processing such as Otsu's threshold method. The shadow effect is a problem in many change-detection-based segmentation algorithms; in the proposed algorithm, a morphological gradient operation is used to filter out the shadow area while preserving the object shape. In order to meet the real-time requirement of many multimedia communication systems, our algorithm avoids the use of computation-intensive operations.

In our work, motivated by the human visual system, a local contrast map is extracted from the image, and on the basis of that map a local threshold approach is used to convert the image into binary form. Previously, the image gradient and the normalized image gradient were used to extract the local contrast of an image. These methods are quite good, and the variation from bright to weak contrast can be compensated by them, yet they do not perform well on documents that have bright text, because a weak contrast is calculated for the stroke edges of bright text. In the paper published by Bolan Su (2013), the local contrast is calculated, a global threshold algorithm such as Otsu's is applied, and local image edge detection is then used. We follow the same line of action, but rather than using a global threshold we use a local threshold, which removes the need for an additional local edge detection algorithm such as Canny edge detection. The gray-level co-occurrence matrix (GLCM), also called the texton co-occurrence matrix (TCM), fulfills our purpose: it is a local contrast mapping method.
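
To illustrate the idea of local contrast mapping followed by local thresholding (this is an illustrative NumPy sketch, not the repository's MATLAB implementation; the 3x3 window and the (max - min)/(max + min) contrast definition are assumptions standing in for the GLCM-based map):

```python
import numpy as np

def local_contrast_map(img, w=3):
    """Local contrast (max - min) / (max + min + eps) over a w x w window.
    An illustrative stand-in for a GLCM-based contrast map."""
    eps = 1e-6
    r = w // 2
    pad = np.pad(img.astype(float), r, mode='edge')
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            win = pad[i:i + w, j:j + w]
            out[i, j] = (win.max() - win.min()) / (win.max() + win.min() + eps)
    return out

def local_threshold(img, w=3):
    """Binarize: a pixel becomes 1 where it reaches the local mean of
    its w x w neighborhood (a simple per-pixel threshold)."""
    r = w // 2
    pad = np.pad(img.astype(float), r, mode='edge')
    H, W = img.shape
    out = np.zeros((H, W), dtype=np.uint8)
    for i in range(H):
        for j in range(W):
            out[i, j] = 1 if img[i, j] >= pad[i:i + w, j:j + w].mean() else 0
    return out

# A tiny image with a strong vertical edge between dark and bright columns.
img = np.array([[10, 10, 200],
                [10, 10, 200],
                [10, 10, 200]], dtype=float)
c = local_contrast_map(img)
bw = local_threshold(img)
```

Pixels whose windows straddle the edge get a high contrast value, while flat regions get near-zero contrast; the local-mean threshold then separates the bright stroke from its darker surroundings without any global threshold.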

Background subtraction

A video can be considered as a sequence of frames, each differing from the others in pixel values. Pixels whose values stay equal across frames are treated as background pixels, since the background does not move. A problem arises when the background contains slowly moving objects, such as swaying trees: these should be considered background, but because their pixel values also change from one frame to the next, they appear in the foreground. To avoid this problem we use a multi-background registration concept, in which a frame difference mask and a background difference mask are both generated, and both are used to decide which pixels constitute the foreground. The work is given in detail below:

A flowchart of the proposed background subtraction method is shown in Figure 1. The whole algorithm is divided into four steps.

Step 1: Frame Difference

In Frame Difference, the difference between the current frame and the previous frame, which is stored in the Frame Buffer, is calculated and thresholded. It can be presented as

FD(x, y) = | I_t(x, y) - I_{t-1}(x, y) |
FDM(x, y) = 1 if FD(x, y) >= th(x, y); otherwise FDM(x, y) = 0

where I is the frame data, FD is the frame difference, and FDM is the Frame Difference Mask; 't' denotes the current frame and 't-1' the previous frame. Note that the threshold th needs to be set in advance; the method to decide its optimal value is discussed in section 3.1. Pixels belonging to FDM are viewed as "moving pixels." This can be written in MATLAB as

%%%% Calculate the Frame Difference Mask (FDM)
for p = 1:row
    for q = 1:colm
        if FD(p,q) >= th(p,q)
            fdm(p,q) = 1;
            SI(p,q) = 0;    % reset stationary index for moving pixels
        else
            fdm(p,q) = 0;
        end
    end
end

Step 2: Background Registration

Background Registration extracts background information from the video sequence. According to the FDM, pixels that do not move for a long time are considered reliable background pixels. The procedure of Background Registration can be shown as

SI(x, y) = SI(x, y) + 1 if FDM(x, y) = 0; otherwise SI(x, y) = 0
BG(x, y) = I_t(x, y) and BI(x, y) = 1 when SI(x, y) reaches a preset count

where SI is the Stationary Index, BG is the background information, and BI is the Background Indicator. The initial values are all set to 0. The Stationary Index records the possibility that a pixel belongs to the background region: if SI is high, the possibility is high; otherwise, it is low. If a pixel is "not moving" for many consecutive frames, the possibility should be high, which is the main concept of the SI update. When the possibility is high enough, the current pixel value at that position is registered into the background buffer BG. In addition, the Background Indicator BI records whether background information exists for the current position.
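
One registration step can be sketched in NumPy as follows (an illustration, not the repository's code; the name still_count and its default value are assumptions for the "not moving for many consecutive frames" count):

```python
import numpy as np

def update_background(frame, fdm, SI, BG, BI, still_count=30):
    """One Background Registration step.

    SI counts consecutive frames in which each pixel is 'not moving';
    once SI reaches still_count (an assumed constant), the pixel value
    is registered into the background buffer BG and flagged in BI."""
    SI = np.where(fdm == 0, SI + 1, 0)   # increment where static, reset where moving
    reg = SI >= still_count              # pixels stable for long enough
    BG = np.where(reg, frame, BG)        # register current value as background
    BI = np.where(reg, 1, BI)            # background information now exists here
    return SI, BG, BI
```

Feeding a static frame repeatedly drives SI up until every pixel is registered and BI becomes 1 everywhere.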

Step 3: Background Difference

The procedure of Background Difference is similar to that of Frame Difference, except that the previous frame is replaced by the background frame. After Background Difference, another change detection mask, named the Background Difference Mask, is generated. The operations of Background Difference can be shown by

BD(x, y) = | I_t(x, y) - BG(x, y) |
BDM(x, y) = 1 if BD(x, y) >= th(x, y); otherwise BDM(x, y) = 0

where BD is the background difference, BG is the background frame, and BDM is the Background Difference Mask.
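
In NumPy this step is essentially two lines (an illustrative sketch; th may be a scalar or the same per-pixel threshold map used in Frame Difference):

```python
import numpy as np

def background_difference(frame, BG, th):
    """Background Difference: compare the current frame against the
    registered background and threshold the absolute difference."""
    BD = np.abs(frame.astype(float) - BG.astype(float))  # background difference
    BDM = (BD >= th).astype(np.uint8)                    # background difference mask
    return BD, BDM
```

A pixel whose value has drifted far from the registered background ends up in BDM, while pixels matching the background do not.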

Step 4: Object Detection

Both the FDM and the BDM are input to Object Detection to produce the Initial Object Mask (IOM). The procedure of Object Detection can be presented as

IOM(x, y) = BDM(x, y) if BI(x, y) = 1; otherwise IOM(x, y) = FDM(x, y)

Every IOM frame is then passed through a morphological closing operation (MATLAB's imclose), which fills in pixels within a 3×3 neighborhood.
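
A NumPy sketch of this step follows (illustrative only; the 3×3 closing is written out with shifted copies rather than calling MATLAB's imclose, and the BI-based selection implements the equation above):

```python
import numpy as np

def dilate3(m):
    """3x3 binary dilation: OR over the 8-neighborhood (zero-padded)."""
    p = np.pad(m, 1)
    out = np.zeros_like(m)
    for di in range(3):
        for dj in range(3):
            out |= p[di:di + m.shape[0], dj:dj + m.shape[1]]
    return out

def erode3(m):
    """3x3 binary erosion: AND over the 8-neighborhood (one-padded
    so the mask is not eaten away at the image border)."""
    p = np.pad(m, 1, constant_values=1)
    out = np.ones_like(m)
    for di in range(3):
        for dj in range(3):
            out &= p[di:di + m.shape[0], dj:dj + m.shape[1]]
    return out

def object_detection(fdm, bdm, BI):
    """IOM: use BDM where background information exists (BI = 1),
    else fall back to FDM; then apply a 3x3 closing (dilate, erode)."""
    iom = np.where(BI == 1, bdm, fdm).astype(np.uint8)
    return erode3(dilate3(iom))
```

For example, a one-pixel hole inside a detected object is filled by the closing, which is exactly what the imclose step achieves.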

In post-processing, the masks computed so far are used conditionally to extract background and foreground separately. These conditions are shown in Table 1.

Table 1: Conditions to separate background and foreground

                    FDM   BDM
Foreground Object    1     1
Background           0     0
Background           0     1
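
Putting the four steps together, the whole per-frame loop can be sketched compactly in NumPy (an illustration of the flow only; the constants th and still_count are assumptions, and the morphological closing is omitted for brevity):

```python
import numpy as np

def extract_foreground(frames, th=25, still_count=5):
    """Per-frame moving-object masks: frame difference, background
    registration, and background difference chained together."""
    prev = frames[0].astype(float)
    SI = np.zeros(prev.shape, dtype=int)   # stationary index
    BG = np.zeros(prev.shape)              # background buffer
    BI = np.zeros(prev.shape, dtype=int)   # background indicator
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        fdm = (np.abs(f - prev) >= th).astype(np.uint8)  # frame difference mask
        SI = np.where(fdm == 0, SI + 1, 0)               # update stationary index
        reg = SI >= still_count
        BG = np.where(reg, f, BG)                        # background registration
        BI = np.where(reg, 1, BI)
        bdm = (np.abs(f - BG) >= th).astype(np.uint8)    # background difference mask
        masks.append(np.where(BI == 1, bdm, fdm).astype(np.uint8))  # initial object mask
        prev = f
    return masks
```

On a synthetic clip whose background stays constant for a few frames before an object appears, the early frames yield empty masks while the object frame is flagged only at the changed pixel.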

