CIS 601 Spring 2007
Homework 3: Detecting Moving Objects
This time you'll build the basis of a surveillance system. Given a series of gray-value pictures,
you have to segment the moving object in the scene. This very basic version of a surveillance system will simply
detect pixels that changed their gray value, using background subtraction.
I am not asking for real object tracking or even object recognition (that will come later!).
The MATLAB workspace contains a variable 'ag'. 'ag' is a 240x320x56 matrix containing
56 gray-value images taken from a movie (starring a famous professor). The movie shows a static indoor scene with a single
moving object; the following images show 3 frames out of the 56.
You can show the images as a movie using the following code:
    for i = 1:size(ag,3)
        imshow(ag(:,:,i));
        pause(0.03);
    end
Your task:
- Compute the background image BG1 as the pixelwise mean of all frames.
- Compute the pixelwise standard deviation over all frames (see 'doc std' for how to use 'std'). The result,
let's call it 'BGstd', is a 240x320 matrix, since it gives you the standard deviation at each pixel!
- For all frames, show the difference images, thresholded at 2 standard deviations. This means: a pixel in the difference image
of image(n) to BG1 is white (=1) if abs(image(n)-BG1) > 2*BGstd. These pixels are called 'foreground pixels'. The following picture
shows the difference image of the third image above:
- Create a 1x56 vector 'v' containing the number of foreground pixels in each frame. Plot the vector (e.g. using 'bar(v)') and
compare it to the movie frames. Analyse the correspondence between the bar plot and the movie and write a few lines about the result
(i.e. your result mail should contain a short comment on what you see, whether it makes sense, and where you see problems
if such a simple system were used as the basis for an object recognition system).
So far, that's 7 points. For 3 additional points:
You might wonder whether the system could be improved if the background image is not built by simply averaging all frames. Since averaging
all frames also learns foreground pixels as background, the idea is to filter out frames that surely contain a large amount of foreground
and not use them when building the background. In its simplest form:
- Analyse the vector v. It shows the amount of foreground activity. Identify all frames with a foreground activity that's less
than the mean activity. Build a second version of the background (BG2) using these frames only.
- Recompute the mean and standard deviation from these frames only, then show the difference images (same as above, but using BG2 and its standard deviation instead of BG1 and BGstd).
The following image shows the same difference image as above, using BG2 instead of BG1:
- Show the difference between the two background images (abs(BG2-BG1)) in an appropriately enhanced form (histeq!). Although the backgrounds
seem very similar, the enhanced difference picture reveals a lot of information. What do you see?
- Please comment on the quality of the second version of the system, using BG2. Did the quality improve? What do you see in
the difference of the background pictures? What was 'learned' as background?
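The bonus part could look like the following sketch (it assumes 'ag' has been converted to double and that 'v' and 'BG1' exist from the first part; 'idx' and 'BG2std' are names I chose):

    % assumes ag is double (convert with double(ag) if needed)
    idx    = v < mean(v);                 % frames with below-average foreground activity
    BG2    = mean(ag(:,:,idx), 3);        % background from "quiet" frames only
    BG2std = std(ag(:,:,idx), 0, 3);
    for n = 1:size(ag,3)
        imshow(abs(ag(:,:,n) - BG2) > 2*BG2std);
        pause(0.03);
    end
    imshow(histeq(mat2gray(abs(BG2 - BG1))));   % enhanced background difference

Here mat2gray rescales the difference image to [0,1] before histogram equalization; this is one way to get an appropriate version for display.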
All this sounds like a lot, but the whole program is less than 20 lines of code. Think about how to use MATLAB wisely!
Good luck and enjoy!