
Automatic Detection and Identification of Rockfish and their Habitats from Underwater Images Collected by the Seabed AUV and Video Observer Monitoring of Commercial Fishing Vessels


June 1, 2006 through May 31, 2007

Dr. Cabell Davis
Woods Hole Oceanographic Institution, Woods Hole, MA 02543

Dr. Qiao Hu
Woods Hole Oceanographic Institution, Woods Hole, MA 02543

Program Manager: Dr. Elizabeth Clark, NOAA/NMFS etc.

Related NOAA Strategic Plan Goal:
Goal 1. Protect, restore and manage the use of coastal and ocean resources through ecosystem-based management.

Project Overview
The mission of this project is to develop an automated image analysis and pattern recognition system to process digital images collected by the SeaBed AUV and the electronic observer monitoring system. The system includes two components: fish detection and fish classification. Fish detection extracts (segments) the fish image from the background. Fish classification separates fish into different groups using supervised learning.

Accomplishments
Since releasing our first report, we have made three major advances: we collected videos from the electronic observer monitoring system, configured the Matlab interface to read the video files, and developed a fish detection algorithm and tested it on one of the video files.

1. Video Collecting
We had regular email and phone exchanges with Howard McElderry at Archipelago Marine Research Ltd (AMR) and with Jonathan Cusick, a scientist at the Northwest Fisheries Science Center. We met Morgan Dyas from Archipelago Marine Research Ltd at the Cape Cod Commercial Hook Fishermen’s Association in Chatham, Massachusetts, to examine the electronic monitoring systems and obtain high-quality video files from them. We exchanged ideas with him and Richard Rees on how to make the electronic monitoring systems work better.

2. Configure the Matlab interface to read video files
After trying several options for reading video files in the Matlab environment, we settled on using “mplayer” as the media player together with Matlab wrapper files developed by Ashwin Thangali of Boston University, running under the Linux operating system. The configured system can read the electronic monitoring video files provided by Archipelago Marine Research Ltd from the Matlab interface. Since the Matlab toolbox only supports reading raw AVI video files, setting the system up was a non-trivial task. Of the 4 video files provided by Archipelago Marine Research Ltd, I was able to read 2 in the Matlab environment; I suspect the other two files are too large.

3. Implement a fish detection algorithm on the video files:
We developed a fish detection algorithm with the following 5 components (see Figure 1).
a. Background subtraction: Recording starts at the moment fishing activity begins, and there is a time interval between the start of fishing activity and the first fish being caught. This period is used to estimate the background image of the fishing boat: the mean of the first 2000 frames is calculated and subtracted from the subsequent frames.
b. Segmentation: The background image is subtracted from every video frame after the first 2000 frames, and the resulting frames are converted to gray-scale images. A global threshold value is computed with Otsu’s segmentation method and used to segment each frame into foreground and background.
c. Optimal window selection: Because of the cluttered fishing environment, it is desirable to consider only part of the video frame as the region of interest. An optimal window, defined as the region in which fish are best separated from the background, is selected by visual inspection of the fishing activity videos.
d. Motion detection: The algorithm checks the overlap between the segmented blobs and the optimal window. If an overlapping blob exceeds a certain size threshold, it is considered a fish. A bounding box derived from the blob is used as a mask, and the masked region of the background-subtracted image is saved as a TIFF file with the frame number embedded in the filename.
e. Fish count: A script parses the filenames to recover the frame numbers. If the difference between two consecutive frame numbers is below a threshold, the two images are considered the same fish.
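The steps above can be sketched in code. The real system ran in Matlab on full video frames; the Python sketch below uses tiny 1-D “frames” purely to illustrate the logic of steps (a)–(e), and all function names, window sizes, and thresholds here are illustrative assumptions rather than the project’s actual values.

```python
def otsu_threshold(pixels):
    """Otsu's method: pick the threshold that maximizes between-class variance.

    Returns 256 (nothing passes) for a contrast-free frame, so a uniform
    background-subtracted frame produces no foreground.
    """
    best_t, best_var = 256, -1.0
    for t in range(1, 256):
        fg = [p for p in pixels if p >= t]
        bg = [p for p in pixels if p < t]
        if not fg or not bg:
            continue
        w_fg, w_bg = len(fg) / len(pixels), len(bg) / len(pixels)
        mu_fg, mu_bg = sum(fg) / len(fg), sum(bg) / len(bg)
        var_between = w_fg * w_bg * (mu_fg - mu_bg) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def detect_fish(frames, n_bg=2, window=range(2, 6), min_size=2):
    """Steps (a)-(d) on toy 1-D frames; n_bg plays the role of the 2000
    background frames, `window` the role of the optimal window."""
    # (a) Background estimate: mean of the first n_bg frames.
    bg = [sum(col) / n_bg for col in zip(*frames[:n_bg])]
    detections = []  # frame numbers where a fish-sized blob hits the window
    for i, frame in enumerate(frames[n_bg:], start=n_bg):
        # (b) Subtract background and threshold with Otsu's method.
        diff = [abs(p - b) for p, b in zip(frame, bg)]
        t = otsu_threshold([int(d) for d in diff])
        mask = [d >= t for d in diff]
        # (c)/(d) Keep only blobs overlapping the optimal window and
        # exceeding the size threshold.
        blob = [j for j in window if mask[j]]
        if len(blob) >= min_size:
            detections.append(i)
    return detections

def count_fish(frame_numbers, max_gap=1):
    # (e) Detections within max_gap frames of each other are the same fish.
    count, prev = 0, None
    for n in frame_numbers:
        if prev is None or n - prev > max_gap:
            count += 1
        prev = n
    return count

# Two background frames, one fish visible in frames 2-3, an empty deck,
# then a second fish in frame 5.
frames = [[10] * 8, [10] * 8,
          [10, 10, 10, 200, 200, 10, 10, 10],
          [10, 10, 10, 200, 200, 10, 10, 10],
          [10] * 8,
          [10, 10, 200, 200, 10, 10, 10, 10]]
hits = detect_fish(frames)   # hits == [2, 3, 5]
total = count_fish(hits)     # total == 2: frames 2 and 3 merge into one fish
```

As in the report, the dedup step is purely temporal: it merges detections from consecutive frames rather than tracking blobs across the image.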
 
4. Test results:
We tested the above algorithm on one of the video files, named “Southern_Dawn_close_camera_view.dvi”. A total of 1655 files were saved as fish images, from which the fish count program counted 703 fish. Most of these are false positives caused by other moving components (such as the fishing line, porch, and waves) and by the high compression ratio. Example images are given in Figure 2.

Future Direction
Although the above algorithm shows some promising results, several hurdles need to be overcome before the system can be fully operational.

1. Probability of detection and probability of false alarm
The probability of detection and the probability of false alarm need to be quantified. I optimized the parameters on a video file that did not have manual counting results. This step should be relatively easy with cooperation from AMR.
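Once manual counts are available from AMR, the two rates could be computed along these lines. This is a minimal sketch; the function name and all numbers below are hypothetical placeholders, not measured results.

```python
def detection_rates(true_fish, detections, true_positives):
    """true_fish: manual ground-truth count for the video;
    detections: fish counted by the detector;
    true_positives: detections that match a real fish."""
    p_detect = true_positives / true_fish              # probability of detection
    false_alarm_rate = (detections - true_positives) / detections
    return p_detect, false_alarm_rate

# Hypothetical example: if 230 of the 703 detections matched real fish
# out of 250 actually caught:
pd, far = detection_rates(true_fish=250, detections=703, true_positives=230)
# pd == 0.92, far ≈ 0.67 (about 2 in 3 detections are false alarms)
```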

2. Relative high false alarms
Although I did not have detailed counts for the video file I investigated, my best estimate is that between 200 and 300 fish were caught on that video. This implies that roughly 2 out of 3 detections are false alarms. There are two ways to tackle this problem. The first is to fine-tune the free parameters of the system to further reduce the false alarm rate. The second is to separate fish from non-fish with classification. Both approaches should be investigated.

3. A unified framework
Different fishing boats tend to have different camera settings, so the optimal windows differ considerably between video files. Furthermore, different videos have different image quality. A unified framework is needed to process generic video files; otherwise, minor human intervention is required.

4. Multiple shots
When the classification approach is used to separate the segmented images into different categories, strategies need to be developed to take advantage of multiple shots of a single fish. I suggest using a classifier voting system.
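One way such a voting system could work, sketched here under the assumption that each saved frame of a fish is classified independently (no such classifier exists yet; both functions are illustrative):

```python
from collections import Counter

def vote(labels):
    """Majority vote over the per-frame labels for one fish."""
    return Counter(labels).most_common(1)[0][0]

def weighted_vote(labels_scores):
    """Variant: weight each frame's label by the classifier's confidence,
    which also gives a natural tie-breaker."""
    totals = {}
    for label, score in labels_scores:
        totals[label] = totals.get(label, 0.0) + score
    return max(totals, key=totals.get)

# Five shots of one fish: three frames classified "fish", two "non-fish".
vote(["fish", "non-fish", "fish", "fish", "non-fish"])        # -> "fish"
# A confident "fish" frame can outweigh two uncertain "non-fish" frames.
weighted_vote([("fish", 0.9), ("non-fish", 0.4), ("non-fish", 0.4)])  # -> "fish"
```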

5. Image from original frame versus background subtracted frame
In the current version, the region of interest is saved from the background-subtracted frames. It would be easy to switch to saving it from the original frames instead. A comparison is needed of the classification accuracy obtained with images from original frames versus background-subtracted frames.

6. Video image quality
The video image quality is fairly low, as shown in Figure 2. Reducing the compression ratio would help both fish detection and fish classification. The tradeoff between video image quality and fish classification accuracy needs to be investigated.

Summary of Interaction with NOAA
I exchanged emails and phone calls with Dr. Cusick regarding what needs to be done on the electronic observer monitoring system.



Last updated: August 19, 2008
 


