Image-Based Particle Filter for Drone Localization

Find out more at: github

This project implements an Image-Based Particle Filter in Python to estimate the position of a simulated drone on a known map. The drone’s “sensor” is a small camera view of the map beneath it, and each reading helps narrow down where on the map the drone might be. Particle filters are well suited to handling sensor noise and the other uncertainties inherent in real-world localization tasks.

At the core of this approach is a set of particles, each initialized at a random position and encoding a hypothesis about the drone’s true location. As the drone moves (with added noise to mimic wind or imperfect motion estimates), every particle is updated accordingly. Then each particle’s predicted “camera view” is compared to what the drone actually sees. Particles that produce close matches gain weight and are more likely to survive into the next iteration, while those with poor matches are likely to be discarded during resampling.
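In Python, one iteration of this predict/weight/resample cycle might look like the minimal sketch below. It is illustrative rather than the project’s actual code: the `score_fn` callback, the `motion_sigma` noise scale, and the 2-D position-only state are all assumptions.

```python
import numpy as np

def filter_step(particles, control, drone_view, score_fn, rng, motion_sigma=3.0):
    """One predict/weight/resample cycle over an (N, 2) array of particle positions.

    `score_fn(position, drone_view)` is a hypothetical callback that returns a
    similarity score between a particle's predicted view and the real one.
    """
    n = len(particles)

    # Predict: apply the drone's estimated motion to every hypothesis,
    # plus Gaussian noise to mimic wind or imperfect motion estimates.
    particles = particles + control + rng.normal(scale=motion_sigma, size=particles.shape)

    # Update: weight each particle by how well its hypothetical camera
    # view matches what the drone actually sees.
    weights = np.array([score_fn(p, drone_view) for p in particles])
    weights = np.clip(weights, 1e-12, None)  # guard against an all-zero weight vector
    weights /= weights.sum()

    # Resample: draw the next generation in proportion to the weights, so
    # close matches survive and poor matches tend to vanish.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx]
```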

Key steps:
1) Distribute particles randomly across the map to represent initial uncertainty.
2) Move particles according to the drone’s estimated motion, adding random noise.
3) Compare the drone’s camera image to each particle’s hypothetical view using RGB histogram correlation (see the sketch after this list).
4) Resample so that well-matching particles propagate more strongly in the next generation.
5) Iterate until particles converge around the drone’s true position.
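Step 3 can be implemented with OpenCV’s histogram utilities. The helper below is one plausible sketch, not necessarily the repository’s exact choice: the bin count and the per-channel averaging are assumptions.

```python
import cv2
import numpy as np

def histogram_similarity(view_a, view_b, bins=8):
    """Correlation between per-channel color histograms of two image patches."""
    scores = []
    for ch in range(3):  # B, G, R channels in OpenCV's default ordering
        hist_a = cv2.calcHist([view_a], [ch], None, [bins], [0, 256])
        hist_b = cv2.calcHist([view_b], [ch], None, [bins], [0, 256])
        cv2.normalize(hist_a, hist_a)
        cv2.normalize(hist_b, hist_b)
        # HISTCMP_CORREL yields 1.0 for identical histograms, -1.0 for inverted ones
        scores.append(cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL))
    return float(np.mean(scores))
```

A particle whose predicted view scores near 1.0 is a strong match and receives a high weight; scores near or below zero indicate a poor match.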

Technical highlights include simulating sensor noise, modeling realistic motion uncertainty, and leveraging image-based matching techniques. Python’s ecosystem (NumPy, OpenCV, etc.) made handling these operations straightforward and efficient.
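As one example of the sensor-noise simulation, a “camera view” can be cropped from the map around a given position and perturbed per pixel. The function below is a hypothetical sketch: the patch size, noise scale, and the assumption that the patch stays inside the map borders are all illustrative.

```python
import numpy as np

def simulated_view(map_image, position, patch=32, pixel_sigma=8.0, rng=None):
    """Crop a square 'camera view' around `position` and add Gaussian pixel noise."""
    rng = rng if rng is not None else np.random.default_rng()
    x, y = np.round(position).astype(int)  # assumes the patch stays inside the map
    crop = map_image[y:y + patch, x:x + patch].astype(np.float64)
    crop += rng.normal(scale=pixel_sigma, size=crop.shape)
    return np.clip(crop, 0, 255).astype(np.uint8)
```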

Below are quick GIFs demonstrating this filter in action:

Demo of particle filter localization on a city map

Demo of particle filter localization on another map

References include "Probabilistic Robotics" by Sebastian Thrun, Wolfram Burgard, and Dieter Fox, and the "Particle filter" article on Wikipedia.

This project is licensed under the MIT License. See the LICENSE file for details.