FPGA and Non-extensive particle filter for drone autonomous navigation

Abstract

Two image-processing strategies are applied to estimate the drone position for autonomous navigation: visual odometry and computer vision. The computer vision approach is based on edge extraction from satellite and drone images. Image correlation is applied to match objects between a georeferenced satellite optical image and an aerial image obtained from the drone camera. The edge-detection step is computed by a multi-layer perceptron neural network, and the trained network is implemented on an FPGA. Data fusion, combining the drone position estimates obtained by visual odometry and computer vision, is computed using the non-extensive particle filter (NExt-PF). The NExt-PF likelihood operator is designed with Tsallis' probability distribution. Our approach shows better results than the standard particle filter (Gaussian likelihood), and better performance was obtained with the FPGA.
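To make the fusion step concrete, the sketch below shows one measurement update of a particle filter whose likelihood is a Tsallis q-Gaussian rather than the standard Gaussian. This is a minimal one-dimensional illustration of the general idea, not the authors' implementation; the function names, the value q = 1.5, and the systematic use of `numpy.random.Generator.choice` for resampling are all assumptions made for this example.

```python
import numpy as np

def q_exponential(x, q):
    # Tsallis q-exponential: exp_q(x) = [1 + (1-q)x]^(1/(1-q)) where the
    # base is positive, 0 otherwise; reduces to exp(x) as q -> 1.
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * np.asarray(x, dtype=float)
    return np.where(base > 0, np.abs(base) ** (1.0 / (1.0 - q)), 0.0)

def tsallis_likelihood(innovation, sigma, q):
    # Unnormalized q-Gaussian likelihood of the measurement innovation.
    # For q > 1 the tails are heavier than the Gaussian, which makes the
    # weight update more tolerant of outlier measurements.
    return q_exponential(-0.5 * (innovation / sigma) ** 2, q)

def next_pf_update(particles, weights, measurement, sigma=0.5, q=1.5, rng=None):
    # One weight-update + resampling step with the q-Gaussian likelihood.
    rng = np.random.default_rng() if rng is None else rng
    w = weights * tsallis_likelihood(measurement - particles, sigma, q)
    total = w.sum()
    if total == 0.0:  # degenerate case: fall back to uniform weights
        w = np.full_like(weights, 1.0 / len(weights))
    else:
        w = w / total
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Usage: particles drawn from a broad prior, one position measurement.
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 2.0, 500)
weights = np.full(500, 1.0 / 500)
particles, weights = next_pf_update(particles, weights, measurement=1.0, rng=rng)
```

Setting q = 1 recovers the standard (Gaussian-likelihood) particle filter, so the same code can serve as the baseline for comparison.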

Institutions
  • 1 LAC-INPE
  • 2 Instituto Nacional de Pesquisas Espaciais
  • 3 Centro Técnico AeroEspacial
Keywords
Drone autonomous navigation
Image Processing
Artificial Neural Networks
FPGA
Non-extensive particle filter