
Preliminary Research and Documentation

Design and build an AI-based aquatic drone that can be used to clean up floating trash from the surface of rivers, lakes, seas and oceans.


Requirements

First stage / Proof of Concept

Record footage from an underwater camera, ideally mounted on a buoy in a location similar to where we intend to deploy the smart buoys. Develop an algorithm to automatically detect moving objects of interest.
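
A minimal sketch of the detection step, using frame differencing between consecutive greyscale frames. The frames here are plain 2-D lists of 0-255 intensities standing in for decoded video (a real pipeline would read frames with e.g. OpenCV); the threshold and minimum-pixel values are illustrative assumptions to be tuned against the test footage.

```python
DIFF_THRESHOLD = 30  # minimum intensity change to count as motion (assumed)

def motion_mask(prev_frame, curr_frame, threshold=DIFF_THRESHOLD):
    """Return a binary mask: 1 where the pixel changed noticeably."""
    return [
        [1 if abs(c - p) > threshold else 0
         for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]

def has_motion(mask, min_pixels=4):
    """Flag a frame as 'interesting' if enough pixels changed."""
    return sum(sum(row) for row in mask) >= min_pixels

# A 4x4 toy example: a bright 2x2 'object' appears in the second frame.
prev = [[10] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
for y in (1, 2):
    for x in (1, 2):
        curr[y][x] = 200

mask = motion_mask(prev, curr)
print(has_motion(mask))  # the 2x2 blob trips the 4-pixel minimum
```

In practice OpenCV's built-in background subtractors would replace the naive differencing, but the gating logic (threshold, then minimum changed-pixel count) carries over.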

Classification

If an expert can identify, in the footage, some of the species we are most likely to encounter, then their annotations can form the training data for a machine-learning-based classifier, e.g. Google DeepLab (open source).
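
A sketch of how the expert's annotations could be turned into training data. Frames are again 2-D lists standing in for decoded video, and the annotation format (frame id, bounding box, species label) is an assumed convention; whatever model is chosen (DeepLab or another open-source classifier) would consume these labelled crops.

```python
def crop(frame, box):
    """Extract an (x0, y0, x1, y1) region from a 2-D frame."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in frame[y0:y1]]

def build_training_set(frames, annotations):
    """annotations: list of (frame_id, box, species) from the expert."""
    return [(crop(frames[fid], box), species)
            for fid, box, species in annotations]

# Toy frame plus one hypothetical expert annotation.
frames = {0: [[i + 10 * j for i in range(8)] for j in range(8)]}
annotations = [(0, (2, 2, 5, 5), "grey seal")]

dataset = build_training_set(frames, annotations)
patch, label = dataset[0]
print(label, len(patch), len(patch[0]))  # species name and 3x3 patch size
```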

360° Multi-Camera Setup

By using a ring of outward-looking cameras mounted around the buoy we could cover the entire azimuthal surround. This would be helpful for a number of reasons:

  • Avoid re-counting the same target as it moves in and out of frame by tracking between cameras.
  • Monitor all directions, eliminating variability due to the direction the camera happened to be pointing.
  • By combining with GPS and compass data, gain some idea of migration patterns (daily or seasonal).

Fairly wide-angle lenses would therefore be desirable to cover the entire visual surround with as few cameras as possible, although extreme fisheye lenses would introduce radial distortions that might complicate the image processing.
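
The trade-off between lens angle and camera count can be sketched numerically. The overlap margin and field-of-view figures below are illustrative assumptions; the bearing calculation shows how a detection's pixel column, the camera's position in the ring, and the compass heading combine into an absolute bearing for the migration-pattern analysis.

```python
import math

def cameras_needed(fov_deg, overlap_deg=10.0):
    """Cameras required so adjacent views overlap by overlap_deg."""
    effective = fov_deg - overlap_deg  # unique coverage per camera
    return math.ceil(360.0 / effective)

def detection_bearing(camera_index, n_cameras, fov_deg,
                      pixel_x, image_width, compass_deg):
    """World bearing (degrees) of a target seen at column pixel_x.

    Camera 0 is assumed to point along the buoy's compass heading;
    cameras are evenly spaced around the ring.
    """
    camera_axis = camera_index * 360.0 / n_cameras
    # Offset within the frame: -fov/2 at the left edge, +fov/2 right.
    offset = (pixel_x / image_width - 0.5) * fov_deg
    return (compass_deg + camera_axis + offset) % 360.0

print(cameras_needed(90))   # 90-degree lenses with a 10-degree overlap margin
print(detection_bearing(2, 8, 90, 320, 640, 45.0))
```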

3D Tracking

If we double up the camera ring described above so that all directions are imaged by (at least) two cameras (e.g. eight cameras with 90° fields of view spaced at 45° intervals), then we can perform stereoscopic 3D reconstruction to estimate range, in a fashion analogous to human binocular depth perception. The greater the spatial separation of the cameras (i.e. the diameter of the camera ring), the more accurate this will be - what size of rig can we feasibly mount on a buoy? Range data would be essential for estimating target size (small and near versus big and distant) and speed. It could also help us to control for our detection radius changing because of variable turbidity (see below).
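
The baseline-versus-accuracy relationship follows from the standard pinhole stereo relation, depth = focal_px x baseline / disparity. The focal length, ring diameter and disparity values below are illustrative assumptions, not measured numbers, but the error estimate shows why a wider rig helps: range uncertainty per pixel of matching error shrinks as the baseline grows.

```python
def stereo_range(focal_px, baseline_m, disparity_px):
    """Distance to target (metres) from pixel disparity between two views."""
    if disparity_px <= 0:
        raise ValueError("target must have positive disparity")
    return focal_px * baseline_m / disparity_px

def range_error(focal_px, baseline_m, disparity_px, disparity_err_px=1.0):
    """Approximate range uncertainty for a given pixel-matching error."""
    z = stereo_range(focal_px, baseline_m, disparity_px)
    return z * disparity_err_px / disparity_px  # dZ ~ Z^2 * dd / (f * B)

# A 1 m camera ring with an 800 px focal length: a target at 20 px
# disparity is 40 m away, with roughly +/-2 m error per pixel of
# matching error. Doubling the baseline doubles the disparity at the
# same range, halving that error.
print(stereo_range(800, 1.0, 20))  # 40.0
print(range_error(800, 1.0, 20))   # 2.0
```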

Choice of sensor technology

I propose to use standard visible-light cameras and natural illumination.

Pros:

  • Cameras - even ones suitable for underwater use - are cheap and readily available.
  • High-quality open-source software for image/video analysis is available (ImageJ, OpenCV, DeepLab, etc.) in a way that it would not be for more exotic sensor modalities.
  • Passive sensing using ambient light minimises the ecological impact on the habitat in question, compared to emitting light or sound, which could skew our results if the animals were able to perceive it in any way. (For example, imagine trying to count moths using a bright floodlight.) It also saves power, which might be important for a solar-powered autonomous buoy.
  • Images / video are good for PR - a picture says a thousand words!

Cons:

  • Night-time tracking is impossible. A potential solution would be to use infrared (IR) illumination and IR-sensitive cameras, as (to my knowledge) no animals can see wavelengths longer than about 800 nm. Unfortunately, water transmits IR rather poorly, so the lights would need to be bright and thus power-hungry. Nevertheless, high-power IR LEDs might be a possibility.


Other practical concerns

Many of these will be addressed by seeing the test footage and thereby getting a feel for the nature of the problems the machine vision system has to deal with.

  • Image stability. As the camera(s) are mounted on a floating buoy, presumably the image will not be steady. Depending on the severity of the movement, this could present various challenges for the algorithm.
  • Water turbidity. If the water is clear, then the limiting factor on our range will be the size of the animals. However, if the water is very murky then this could severely limit our detection radius, possibly even rendering cameras useless. Furthermore, if the turbidity of the water is variable, then this will be a major confounding factor on our measurements of animal populations at different times.
  • Visual clutter. What depth of water will the buoys be in? Can we assume that any moving visual object is an animal? If, for instance, the camera can see seaweed swaying in the current, then this will greatly complicate the task of target detection. Will there be objects in the scene that animals could be occluded by?
  • Multiple target tracking. I would hope to implement a system capable of tracking several targets at once, especially since social animals might be among the species detected. However, individually counting/tracking whole shoals of fish is probably not feasible.
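
A minimal sketch of multi-target tracking by greedy nearest-neighbour association: each new detection is matched to the closest existing track within a gating radius, and otherwise starts a new track. The gate value is an assumption, and a production system would likely add per-track motion prediction (e.g. a Kalman filter), but this illustrates how several targets can be followed at once.

```python
import math

GATE = 5.0  # max distance (in image units) to link a detection to a track

def associate(tracks, detections, gate=GATE):
    """tracks: {track_id: (x, y)}; detections: list of (x, y).

    Returns updated tracks, assigning new ids to unmatched detections.
    """
    tracks = dict(tracks)
    next_id = max(tracks, default=-1) + 1
    unmatched = set(tracks)
    for det in detections:
        best, best_d = None, gate
        for tid in unmatched:
            d = math.dist(tracks[tid], det)
            if d < best_d:
                best, best_d = tid, d
        if best is not None:
            tracks[best] = det       # continue an existing track
            unmatched.discard(best)
        else:
            tracks[next_id] = det    # a target we have not seen before
            next_id += 1
    return tracks

tracks = {0: (10.0, 10.0), 1: (50.0, 50.0)}
# Two known targets move slightly; a third appears far from both.
tracks = associate(tracks, [(11.0, 10.5), (49.0, 51.0), (90.0, 90.0)])
print(sorted(tracks))  # a new track id is created for the newcomer
```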


Similar Work

This company developed the underwater camera used in the movie Chasing Coral: https://coralgardeners.org/. The technology is ready to go and available off the shelf. Here is a live feed from their Hawaii location: https://www.youtube.com/watch?v=oPGH65HrseA.

Licensing

This project is being developed as an open-source project with the following licensing: