UNIVERSIDAD NACIONAL AUTÓNOMA DE MÉXICO
PROGRAMA DE MAESTRÍA Y DOCTORADO EN INGENIERÍA
MECÁNICA - MECATRÓNICA
DISEÑO, IMPLEMENTACIÓN Y VALIDACIÓN EXPERIMENTAL DE ALGORITMOS DE ESTIMACIÓN PARA LA
NAVEGACIÓN DE VEHÍCULOS AUTÓNOMOS
TESIS
QUE PARA OPTAR POR EL GRADO DE:
DOCTOR EN INGENIERÍA
PRESENTA:
GIOVANNA AHUATZIN FLORES
TUTOR
DR. YU TANG XU, FI-UNAM
MIEMBROS DEL COMITÉ TUTOR
DR. MARCELO LÓPEZ PARRA, FI-UNAM
DR. VÍCTOR JAVIER GONZÁLEZ VILLELA, FI-UNAM
DR. EDMUNDO GABRIEL ROCHA COZÁTL, FI-UNAM
DR. CARLOS ROMO FUENTES, FI-UNAM
Juriquilla, Querétaro, México, septiembre 2022.
Universidad Nacional Autónoma de México
Faculty of Engineering
Design, Implementation, and Experimental
Validation of Estimation Algorithms for
Autonomous Vehicle Navigation
T H E S I S
This dissertation is submitted for the degree of:
Doctor in Mechatronics Engineering
PRESENTED BY:
Giovanna Ahuatzin Flores
ADVISOR:
Dr. Yu Tang
Juriquilla, Querétaro, September 2022
Dedication
To my parents:
For their understanding, love, unconditional support, help; for inculcating in me the
principles and values that have contributed to my personal and academic formation. For
always encouraging me to follow my dreams.
To my family:
For being a source of inspiration in my life, I love you all.
To my husband:
For being a great support in my life, for being my life partner and being there for me, always.
To my grandmom † Sofía:
As a tribute to her life, for being a very strong woman and for the unconditional love she
gave that transcends time and space.
To my friends:
For supporting me and giving me words of encouragement throughout my life, for sharing
your dreams and thoughts with me.
Acknowledgements
My most sincere gratitude to all the people and institutions who made possible the accomplishment of this research work and for all the support throughout the past five years.
First of all, I am grateful to my advisor, Dr. Yu Tang Xu, from whom I have learned many lessons during my doctoral studies, for all his valuable lectures, patience, guidance and mentoring, and who has contributed greatly to my academic development.
I would like to thank my committee: Dr. Marcelo López Parra, Dr. Víctor Javier González
Villela, Dr. Edmundo Gabriel Rocha Cózatl and Dr. Carlos Romo Fuentes, for their valuable
discussions and feedback that have enriched this work; for reviewing this dissertation and for
all the support that they have given me at all times.
I am very grateful to the Universidad Nacional Autónoma de México, through the Facultad de Ingeniería and the Unidad de Alta Tecnología campus Juriquilla in Querétaro, for the facilities, infrastructure and financial support under the grants PAPIIT-UNAM IN112421 and PAPIIT-UNAM IT103920, highlighting that all experiments were carried out at the National Laboratory of Automotive and Aerospace Engineering (LN-INGEA).
A special recognition and my deepest gratitude to CONACYT for the financial support
through the scholarship I received during my doctoral studies and the project SEP-CONACyT
No. 253677, for promoting technological development in the universities and for the training
of human resources in the country.
Thank you to our research group, made up of Eduardo Espíndola López, Rodolfo Ramírez, Larry Serratos and Miguel Angel Verdi Resendiz, for their enormous support in the exchange of ideas, in the brainstorming process, and in the discussions and collaborations during the realization of the projects. I also appreciate all the members of the UAT community who have accompanied me throughout my academic career and from whom I have learned a lot: Mireya Abigailh Ortega Ontiveros, Osiris Ricardo Torres, Socorro Armenta Servin, Rafael Chávez Moreno, Guadalupe Ortega Ontiveros, Juan Carlos Sánchez Villegas, Cristóbal David and Jaime Rueda.
I would like to express a big thank you to Dr. Scott Palo, who leads the Maxwell CubeSat Project, for accepting me as a collaborator for a research stay at the University of Colorado at Boulder, United States of America, through the Ann and H.J. Smead Department of Aerospace Engineering Sciences, and for all his support during the doctoral stay that I completed from November 2019 to October 2020. Thank you for all the experiences and teachings that impacted my academic life in understanding the Attitude Determination and Control System in depth. Thanks also to all the team members with whom I had the good fortune to work: Aaron Aboaf, Arunima Prakash, Anastasia Muszynski, Vikas Natajara, Matt Zola, Elliott Harrod, Robert Redfern and Bailey Roker.
I really appreciate the expertise and knowledge shared by Matt Keat (Ardupilot community), Dr. Antonio Arciniega Nevarez (Universidad Autónoma de Guanajuato, México), MEng. Salvador Martínez Regil and MEng. Jaime Correa Rodríguez.
Thanks to the love, patience, accompaniment and understanding of my beloved husband, Lorenzo Hernández Díaz, for trusting in me, believing that I can achieve my dreams and supporting my life projects.
To my parents, Rosalva Flores Mendieta and Celedonio Enrique Ahuatzin Morales, who have always trusted me, for motivating me every day and for being an unconditional support in my life. To my whole family, for being an important motivation that encourages me to move forward and an example of hard work and effort; to my brothers Víctor Manuel Ahuatzin Flores and Omar Enrique Ahuatzin Flores, my aunt Reyna Flores Mendieta, and my cousins Andrea Paola Flores Mendieta and Alonso Flores Mendieta. To my sister, Diana Abilene Ahuatzin Flores, who is a source of inspiration and an example of courage and strength. To my beloved nephew and nieces for making my life joyful: Víctor Ahuatzin Briones, Fátima Abilene Ahuatzin Briones and Daniela Danaé Ahuatzin Martínez. To my sisters-in-law, whom I admire and respect very much: María Antonieta Briones Conde and Daya Martínez. To my uncle Ruben Flores Mendieta, who fostered in me a curiosity for science and space exploration, and to his wife, Leticia Lugo, for her endless support.
To my beloved grandmom Sofía Mendieta Flores, to whom I owe everything I am. I will be eternally grateful for everything she did for me; her legacy continues in my life. I always remember her words of wisdom and unconditional love.
To my uncle Sergio Flores Mendieta and aunts Teresa González, Yolanda Flores Maldonado and Silvia Ahuatzin Morales, to my cousin Karina Jiménez and their families, as well as my dear great aunt Ana María Arellano Velázquez, and to the Hernández Díaz and Ahuatzin Morales families; they are always in my thoughts and in my heart.
To my dear friends, Roxana Hernández Carro, Daisy Elena Martínez Vázquez, Guadalupe Nallely Espíritu Salvador, José Antonio Arciniega Nevárez, Angélica Sánchez Cortés, María Esther Sevilla López, Oralia Martínez Cázares, Julio Mata, Angel Figueroa Soto, Rosy Aparicio, Shareni Muñoz, Yadira Lizeth Barreto, Jaqueline Vázquez Corona and Iraís Bautista Guzmán, for being part of my life, for all the moments we have shared together and for being there for me at every instant.
Finally, I am very thankful to my beloved cat Shodi, who accompanied me during countless sleepless nights while I was studying and who is a very important member of my family.
Abstract
This thesis presents a novel design of a nonlinear observer for the navigation of autonomous vehicles, in particular quadrotors, directly using the information from vector and position measurements available from low-cost sensors such as an IMU or CCD cameras and a GPS sensor. First, a nonlinear observer of the angular velocity is proposed, whose global exponential stability is proven by Lyapunov analysis. The estimated angular velocity is then used to estimate the attitude relying on the Explicit Complementary Filter (ECF), which is almost globally asymptotically convergent. The angular velocity observer in cascade with the ECF constitutes a rotational-state observer with a cascaded structure, guaranteeing almost global asymptotic stability. In contrast with the common approach, where the attitude is estimated first and then used to estimate the angular velocity, the approach proposed here has the advantage that the angular velocity observer is decoupled from the attitude observer. Using the position measurement provided by a GPS, a translational state observer is designed with global exponential stability. The overall observer consists of three cascaded modules that estimate the angular velocity, the attitude, and the linear velocity together with the position of a quadrotor; each module is designed independently, ensuring its functionality and robustness to failures. This structure facilitates the convergence analysis and the implementation of the observer in practice.
The performance of the proposed observer was validated by numerical simulations in realistic scenarios that included noisy measurements and uncertainty in the inertia matrix. Furthermore, an experimental platform based on a quadrotor was developed in the Mechatronics Laboratory of the National Laboratory of Automotive and Aerospace Engineering at the Unidad de Alta Tecnología (UAT), and experiments were carried out on this platform to further validate the design of the observer.
The results of the numerical simulations and experiments have shown the convergence
to zero of the estimation error, in accordance with the analysis results. Therefore, the pro-
posed observer provides a reliable solution for autonomous vehicle navigation in real-world
applications.
Resumen
This thesis presents a novel proposal for the design of an observer for the navigation of autonomous vehicles, in particular quadrotor drones, directly using vector and position measurements, which can be obtained from low-cost sensors such as an IMU or CCD cameras and a GPS sensor. First, a nonlinear angular velocity observer is proposed, whose global exponential convergence is proven by Lyapunov analysis. The estimated angular velocity is then used to estimate the orientation based on the Explicit Complementary Filter (ECF), whose convergence is almost global and asymptotic. The angular velocity observer together with the ECF constitutes a complete nonlinear observer of the rotational state with a cascaded structure, guaranteeing almost global asymptotic stability. Compared with the results reported in the literature, the proposed observer relies on vector measurements, and the design of the rotational-state observer decouples the angular velocity observer from the attitude observer. Moreover, using the measurements provided by a GPS sensor, an observer of the translational state (linear velocity and position) is designed with global exponential stability. The overall observer consists of three cascaded modules that estimate the angular velocity, the orientation, and the linear velocity together with the position of a quadrotor; each module was designed independently, guaranteeing its functionality and robustness to failures. This cascaded structure facilitates the convergence analysis and the implementation of the proposed observer.
The performance of the proposed observer was validated through numerical simulations in practical scenarios, which included sensor measurements contaminated by noise and parametric uncertainty in the inertia matrix.
An experimental platform based on a quadrotor was designed and built in the Mechatronics Laboratory, within the National Laboratory of Automotive and Aerospace Engineering at the Unidad de Alta Tecnología of the Facultad de Ingeniería. The design of the observer was validated on the quadrotor on this platform. The results obtained from both the numerical simulations and the experiments agree with the theoretical results, providing confidence in the proposed observer for autonomous vehicle applications.
Contents
Resumen
List of Figures
List of Tables
1 Introduction
  1.1 Literature review
    1.1.1 Review of the attitude observer
    1.1.2 Review of the angular velocity observer
    1.1.3 Review of the translational state observer
  1.2 Problem description
  1.3 Objectives
  1.4 Hypothesis
  1.5 Contributions
  1.6 Thesis outline
2 Mathematical tools
  2.1 Attitude representations
    2.1.1 Rotation matrix
    2.1.2 Euler angles
    2.1.3 Unit quaternions
  2.2 Attitude kinematics and dynamics
  2.3 Translational kinematics and dynamics
  2.4 Sensor modeling
3 Design of an angular velocity observer
  3.1 Problem statement
  3.2 Angular velocity observer
    3.2.1 Analytical form of the observer
    3.2.2 Useful results
  3.3 Stability analysis
  3.4 Implementation of the angular velocity observer
4 Attitude observer design
  4.1 Problem statement
  4.2 Attitude observer design
    4.2.1 Useful results for the convergence analysis of the attitude observer
  4.3 Stability analysis
5 Linear velocity and position observer
  5.1 Problem statement
  5.2 Linear velocity and position observer design
  5.3 Stability analysis
6 Validation by numerical simulations
  6.1 Quadrotor model
  6.2 Error Analysis
  6.3 Scenario I. Noise-free case
  6.4 Scenario II. Noisy case
  6.5 Scenario III. Uncertain inertia matrix
7 Experimental validation
  7.1 Experimental platform
    7.1.1 Hardware architecture
    7.1.2 Software architecture
      7.1.2.1 Ardupilot
      7.1.2.2 Mission planner
  7.2 Parameter identification
    7.2.1 Moment of inertia
      7.2.1.1 Quadrotor CAD model
      7.2.1.2 Analytical method
    7.2.2 Determination of the motor-propeller thrust coefficient
    7.2.3 Determination of the motor-propeller torque coefficient
  7.3 Experimental results
    7.3.1 Circular path flight
8 Conclusions
A Appendix A
  A.1 AP RPM Library
B Quadrotor Moment of inertia
Bibliography
List of Figures
2.1 Euler angle from B towards A.
4.1 Illustration of the proposed cascaded structure of the observer when the vector measurements are obtained from an IMU.
5.1 Modular structure of the navigation observer.
6.1 Circular path flight executed by the quadrotor in Simulink.
6.2 Plus and X configuration of a quadrotor. Image credit [36].
6.3 Scenario I (Noise-free case): (Top-down) angular velocity error, attitude error q̂−1 ∗ q, position error p − p̂, and linear velocity error v − v̂.
6.4 Scenario I (Noise-free case): (Top-down) Root Mean Square estimate for angular velocity error ω̃, attitude error q̂−1 ∗ q, position error p̃ and linear velocity error ṽ.
6.5 Scenario II (Noisy case): (Top-down) Estimate for angular velocity error ω̃, attitude error q̂−1 ∗ q, position error p̃ and linear velocity error ṽ.
6.6 Scenario II (Noisy case): (Top-down) Root Mean Square estimate for angular velocity error ω̃, attitude error q̂−1 ∗ q, position error p̃ and linear velocity error ṽ.
6.7 Scenario III (Uncertain inertia matrix): norm of (top-down) the angular velocity estimation error, position, and linear velocity estimation error.
6.8 Scenario III (Uncertain inertia matrix): Attitude error q̂−1 ∗ q.
7.1 Quadrotor frame.
7.2 Quadrotor components.
7.3 Pixhawk Flight Controller.
7.4 Brushless motor and ESC used in our Quad.
7.5 ESC Connection Diagram.
7.6 Propeller 1045.
7.7 Power Module.
7.8 Fly Sky RC transmitter and receiver.
7.9 Turnigy Lipo Battery.
7.10 UBlox GPS plus Compass module.
7.11 Hobbywing Brushless RPM Sensor.
7.12 Quadrotor developed in our laboratory.
7.13 Mission Planner Main Menu.
7.14 Load custom firmware.
7.15 Autopilot to computer connection.
7.16 RC calibration.
7.17 Accelerometer calibration. Diagram credit: [1].
7.18 Compass calibration.
7.19 ESCs calibration.
7.20 Autonomous Mission executing a triangle path.
7.21 Autonomous Mission executing a circular trajectory.
7.22 Quadrotor CAD model.
7.23 Test bench for motor-propeller characterization to measure thrust force and motor speeds.
7.24 Thrust force vs. motor-propeller speed.
7.25 Torque vs. motor-propeller speed.
7.26 Circular flight of the quadrotor in NED frame.
7.27 Normalized accelerometer measurements (Circular flight).
7.28 Magnetometer measurements (Circular flight).
7.29 Motor angular speeds (Circular flight).
7.30 True and estimated angular velocity of the quadrotor (Circular flight).
7.31 True and estimated attitude in Euler angles representation of the quadrotor (Circular flight).
7.32 True and estimated linear velocity (Circular flight).
7.33 True and estimated position (Circular flight).
7.34 Real flight: (Top-down) angular velocity error ω̃, attitude error q̂−1 ⊗ q, linear velocity error ṽ and position error p̃ (Circular flight).
7.35 Real flight: (Top-down) Root Mean Square estimate for angular velocity error ω̃, attitude error q̂−1 ∗ q, position error p̃ and linear velocity error ṽ.
B.1 Quadrotor divided in small parts. Credit image: [20].
List of Tables
6.1 Initial conditions and parameters used in simulations.
7.1 RPM sensor Parameters Set up.
7.2 Quadrotor physical parameters.
7.3 Data used in the implementation of the complete algorithm in the Quad.
Chapter 1
Introduction
In this thesis, we consider the problem of estimating state variables for autonomous vehicle
(AV) navigation using on-board sensors. Autonomous vehicles (AVs) require motion stabi-
lization or trajectory tracking, despite external disturbances and sensor imperfections such as
drifts in the measured variable and scaling uncertainty. The control system in charge of exe-
cuting this task needs to know the states of the AV, i.e., position, linear velocity, orientation,
and angular velocity, to command orders to the actuators that will regulate the orientation of
the vehicle. In a real-world application, it is crucial to develop efficient algorithms that can
estimate the state of an AV using the information provided by the onboard sensors. Position
and linear velocity can be obtained using the Global Positioning System (GPS); angular velocity
can be obtained by gyroscopes attached to the body, but these sensors are very susceptible
to failure; while for attitude there are no sensors that measure it directly, so it is estimated
by fusing data from different sensors. Thus, a central question in state estimation tasks is
to reconstruct the rotational state (attitude and angular velocity) of the AV [15] so that the
implementation of the control strategy helps to ensure a successful mission. Using the recon-
structed rotational state, the translational state (position and linear velocity) can be posed
as a linear estimation/filtering problem, and many systematic solutions, such as Luenberger
observer and Kalman filter, are available [23, 37]. Both rotational state and translational
state estimation problems are addressed in this thesis.
Since the late 1960s, the problem of estimating the attitude of rigid bodies has been
deeply studied and, currently, has gained importance due to the appearance of new schemes
for sensor fusion, the development of nonlinear control algorithms, and creation of platforms
as test beds. Furthermore, the number and complexity of aerial vehicle applications are
increasing continuously, and the attitude and angular velocity estimation techniques involved
must also be improved to provide better performance and greater stability of such systems.
This chapter gives a review of the literature on related topics of rotational and translational state estimation. The problem description, the main objectives of the thesis, and the contributions are then stated.
1.1 Literature review
1.1.1 Review of the attitude observer
Attitude determination is fundamental in navigation applications to control the orientation of autonomous vehicles. The Attitude Determination and Control Subsystem (ADCS) is responsible for accomplishing two main objectives: estimating the orientation of the vehicle as accurately as possible and pointing the vehicle toward the desired orientation. If the vehicle is not in the desired orientation, the control system compares the actual orientation with the desired one and calculates the error; this information is used to determine how much action is needed from each actuator so that the vehicle can be brought to the desired orientation.
Most systems require precise knowledge of the vehicle’s orientation to acquire photographs
and/or video of a specific point on Earth, to point the vehicle to a specific point in space or to
perform tracking and guidance maneuvers, depending on the mission objective. For example,
in the case of satellites, pointing the solar panels towards the Sun is vital to recharge the
battery and extend the life of the mission or may be required to point to a specific point for
Earth observation. In the case of Unmanned Aerial Vehicles (UAVs), the attitude can be
significant for cooperative or collaborative tasks between vehicles.
The attitude of a rigid body is represented mathematically through Euler angles, unit quaternions, the rotation matrix, or Rodrigues parameters. A very complete survey of orientation representations can be found in [43], where twelve different orientation representations and the transformations between them are presented.
The problem of attitude estimation has been deeply explored since 1964, when Black [11] proposed determining the attitude using only two vector observations and their corresponding inertial vectors. The basis of attitude estimation was introduced by Wahba in 1965 and is known as "Wahba's problem", formulated as follows: body-frame measurements of two non-collinear vectors are available from sensors such as accelerometers, magnetometers, coarse sun sensors, or star trackers; in addition, the two corresponding reference vectors in the inertial frame must also be known. Then, the direction cosine matrix, or rotation matrix, which relates the two sets of measurements, can be estimated. The first solution to Wahba's problem was the TRIAD algorithm. Many solutions have since been proposed for attitude estimation, classified mainly into two categories: static or deterministic algorithms and dynamic or stochastic algorithms.
Deterministic approaches include the TRIAD triaxial orientation determination algo-
rithm, Davenport’s q-Method, QUaternion EStimator (QUEST), Fast Optimal Attitude Ma-
trix (FOAM), Markley’s SVD method, EULER-2 and EULER-n methods, and ESOQ and
ESOQ-2 algorithms [30, 42].
The TRI-axial Attitude Determination method (TRIAD) [23, 42, 47] computes the attitude rotation matrix from two non-collinear vectors expressed in both the body and the inertial frame: the vectors v_1^I and v_2^I in the inertial frame and the vectors v_1^B and v_2^B in the body frame, where v_1^I = R v_1^B and v_2^I = R v_2^B. The method constructs a third vector from the other two as:
s_1 = v_1^I, \quad s_2 = \frac{v_1^I \times v_2^I}{|v_1^I \times v_2^I|}, \quad s_3 = s_1 \times s_2,

r_1 = v_1^B, \quad r_2 = \frac{v_1^B \times v_2^B}{|v_1^B \times v_2^B|}, \quad r_3 = r_1 \times r_2.
The rotation matrix can then be calculated as

R = \sum_{i=1}^{3} s_i r_i^T = [s_1 \; s_2 \; s_3][r_1 \; r_2 \; r_3]^T.
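As a concrete illustration of the construction above, the following minimal sketch (assuming Python with NumPy; the function name and argument layout are ours, not from the thesis) builds the TRIAD rotation matrix from two vector pairs:

import numpy as np

def triad(v1_I, v2_I, v1_B, v2_B):
    # s-triad built from the inertial reference vectors
    s1 = v1_I
    s2 = np.cross(v1_I, v2_I) / np.linalg.norm(np.cross(v1_I, v2_I))
    s3 = np.cross(s1, s2)
    # r-triad built from the body-frame measurements
    r1 = v1_B
    r2 = np.cross(v1_B, v2_B) / np.linalg.norm(np.cross(v1_B, v2_B))
    r3 = np.cross(r1, r2)
    # R = sum_i s_i r_i^T maps body-frame vectors into the inertial frame
    return np.outer(s1, r1) + np.outer(s2, r2) + np.outer(s3, r3)

For exact, noise-free unit vectors, R = triad(v1_I, v2_I, v1_B, v2_B) satisfies R @ v1_B ≈ v1_I.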
The TRIAD method is easy to implement, but it does not use all the information in the two vector measurements; moreover, if more than two sensors are available, this method cannot be used, and if noise is considered, there is no guarantee that the estimated R remains in the rotation group SO(3). In addition, this method does not consider the reliability of the sensors. To solve these problems, other solutions have been proposed.
Davenport's method transforms Wahba's problem into the problem of finding the largest eigenvalue λ_max of the symmetric matrix K ∈ R^{4×4},

K ≜ \begin{bmatrix} σ & z^T \\ z & S − σ I_{3×3} \end{bmatrix},

where S = B + B^T, σ = tr(B), B = \sum_{i=1}^{n} k_i v_i^B (v_i^I)^T, and z = [B_{23} − B_{32},\ B_{31} − B_{13},\ B_{12} − B_{21}]^T. The attitude estimate β (a unit quaternion) can then be calculated by solving the equation Kβ = λ_max β.
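A minimal numerical sketch of this eigenvalue formulation (assuming Python with NumPy; the function name and weight handling are illustrative, not taken from the thesis) is:

import numpy as np

def davenport_q(v_B, v_I, k):
    # v_B, v_I: lists of body/inertial unit vectors; k: positive weights.
    # Returns the optimal unit quaternion beta = (beta_0, beta_v), up to sign.
    B = sum(ki * np.outer(bi, ri) for ki, bi, ri in zip(k, v_B, v_I))
    S_mat = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[0, 0] = sigma
    K[0, 1:] = z
    K[1:, 0] = z
    K[1:, 1:] = S_mat - sigma * np.eye(3)
    w, V = np.linalg.eigh(K)            # eigendecomposition of the symmetric K
    beta = V[:, np.argmax(w)]           # eigenvector of the largest eigenvalue
    return beta / np.linalg.norm(beta)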
The QUaternion ESTimator (QUEST) algorithm uses the quaternion parametrization and is based on Davenport's method. This algorithm solves the equation

\begin{bmatrix} σ & z^T \\ z & S − σ I_{3×3} \end{bmatrix} \begin{pmatrix} 1 \\ q \end{pmatrix} = λ_max \begin{pmatrix} 1 \\ q \end{pmatrix},

where q = ((λ_max + σ) I_{3×3} − S)^{-1} z is the attitude estimate. This method was proposed by Shuster and was implemented in the Magsat mission in 1979. Numerical analysis shows that this method is not the best at finding the eigenvalues of its characteristic equation, which makes it less robust than Davenport's method [30] but reliable in practice.
The FOAM algorithm was proposed by Markley in 1993; it solves the Wahba problem
using only the sensor measurements without taking into account the system model. Compared
to the other deterministic methods, it is faster and more robust, although it does not use
quaternion parameterization.
On the other hand, dynamic methods first find the best estimate of the true system state using a dynamic model, and then use measurements assuming that they are affected by random noise or corrupted data. Filtering methods can generally provide a more accurate attitude estimate than deterministic methods because they incorporate the memory of past observations.
The Extended Kalman Filter (EKF) is the most widely used algorithm for attitude estimation in autonomous vehicles among dynamic solutions. It has been recognized that EKF-based approaches are the most widely used for two main reasons: their proven reliability and the ease of incorporating other measurement sources that can improve the quality of the estimate or even provide an estimate of additional quantities such as altitude or vertical velocity [14]. It uses a three-step iteration process: updating the measurement, resetting the state vector, and propagating to the next measurement time. Some variants of this algorithm are the Multiplicative KF, Additive KF, Unscented KF, and Invariant KF. The Kalman filter requires the state and observation models to be linear; some of the variants do deal with nonlinearities, but at an increased computational cost.
The most recent advances in solving the attitude estimation problem are nonlinear observers and filter-based methods. Nonlinear attitude estimation observers have emerged that consider the problem when a vehicle is subject to strong accelerations, improving the accuracy of the estimated attitude by using accelerometer, magnetometer, and gyroscope measurements complemented with GPS or inertial navigation system (INS) measurements. Examples of this research line are as follows. Mahony et al. [28] formulate the orientation estimation problem directly on the special orthogonal group SO(3), proposing what they term the direct, passive, and explicit complementary filters. The latter uses gravitational or magnetic field directions obtained from an IMU plus gyroscope readings. This observer does not require an online algebraic reconstruction of the attitude and is ideally suited for implementation on embedded hardware platforms due to its low complexity. However, it suffers from possible discontinuities in the bias correction signal when the equivalent rotation angle of the estimated quaternion approaches ±π rad. The explicit complementary filter (ECF) proposed in [28] is composed of 1) a prediction term based on the measurement ω and 2) a correction term derived from the
vector measurements:

\dot{\hat{R}} = R̂ S(ω + k_p σ_R),  R̂(0) = R̂_0 ∈ SO(3),   (1.1)

σ_R = \sum_{i=1}^{n} k_i S(b_i) b̂_i,   (1.2)

where R̂ is the attitude estimate, k_p > 0 is a positive gain, and b̂_i = R̂^T r_i. The expression (1.1) represents the dynamics of a generic complementary filter on SO(3).
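For illustration only (this is not the implementation used later in the thesis), a first-order Euler discretization of (1.1)-(1.2) can be sketched in Python/NumPy as follows; the projection back onto SO(3) is an added numerical safeguard:

import numpy as np

def skew(u):
    return np.array([[0., -u[2], u[1]], [u[2], 0., -u[0]], [-u[1], u[0], 0.]])

def ecf_step(R_hat, omega, b_list, r_list, k_list, k_p, dt):
    # One step of the explicit complementary filter (1.1)-(1.2).
    sigma_R = np.zeros(3)
    for k_i, b_i, r_i in zip(k_list, b_list, r_list):
        b_hat_i = R_hat.T @ r_i                    # predicted body-frame direction
        sigma_R += k_i * np.cross(b_i, b_hat_i)    # k_i S(b_i) b_hat_i
    R_hat = R_hat @ (np.eye(3) + dt * skew(omega + k_p * sigma_R))
    U, _, Vt = np.linalg.svd(R_hat)                # re-orthonormalize onto SO(3)
    return U @ np.diag([1., 1., np.linalg.det(U @ Vt)]) @ Vt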
A weakness of the ECF (1.1) is its requirement of the measured angular velocity ω, which in a typical AV application is obtained from low-cost MEMS gyros. The main problems with MEMS gyros are their bias, which must be corrected [28], and their reliability, which calls for a backup solution such as the one we propose in the next section.
Hua [22] proposed an attitude and velocity observer based on the Explicit Complementary Filter that depends only on inertial measurements, Global Navigation Satellite Systems
(GNSS) velocity measurements, and magnetometer measurements. This observer exploits
the fact that the vehicle’s acceleration vector in the navigation frame is implicitly available in
the derivative of the GNSS velocity and, therefore, it can be compared to the accelerometer
measurement to help determine the attitude. Stability analysis results state semi-global
convergence.
Roberts and Tayebi [38] proposed two nonlinear observers that estimate the orientation of a rigid body using GPS, accelerometer, and magnetometer measurements. These observers consider the dynamics of the linear velocity in the correction term, resulting in semi-global exponential convergence, and belong to the class of velocity-aided attitude observers.
When the objective consists of combining the estimation of the attitude and the filtering of the linear velocity, some invariant attitude observers have been proposed recently [7, 12, 31]. These observers are GPS-aided and are named invariant in the sense that they preserve the properties of the Lie group. A complete overview related to nonlinear attitude observers can be found in [15, 18, 23]. The attitude determination problem has been solved by several authors under the assumption that the attitude and angular rate information is known and available, but the research becomes more challenging when angular velocity measurements are unreliable or unavailable due to faulty gyroscopes. This leads us to the problem of estimating the angular velocity of a rigid body.
1.1.2 Review of the angular velocity observer
There are essentially three methods to estimate the angular velocity of a rigid body. The first consists of measuring the angular velocity by means of a sensor, but these sensors are relatively expensive and susceptible to failures, as the case studies presented in [27, 8] show. For this reason, other types of solutions are encouraged. The second method is known as the two-step approach, where the first step is to calculate the orientation from vector measurements and the second step uses it to reconstruct the angular velocity. A third technique is to employ a nonlinear observer using vector measurements from on-board sensors to directly calculate the angular velocity and then use this estimate to reconstruct the attitude of the rigid body.
The advantage of this type of solution is that it does not require the use of gyroscope measurements and does not require knowing the orientation to estimate the angular velocity. The first paper to propose this solution was [45], relying solely on body vector measurements and guaranteeing almost global asymptotic stability; an improved version of this method was presented in [8], where the unstable equilibria of the closed-loop dynamics were reduced and the auxiliary error system does not use the fixed inertial reference vectors as in [45]. In this context, the first work that proposed a nonlinear observer achieving global uniform exponential convergence of the angular velocity estimate is [41]. This observer consists of a copy of the system plus correction terms and a dynamic extension based on an invariant-manifold framework. Another interesting proposal is [9], in which the authors design the angular velocity observer directly on SO(3) × R^3, leading to global exponential stability results. Recent progress in the development of new nonlinear observers for implementation and the growing development of new control schemes keep this an active area of research.
The arguments of these observers depend on the angular velocity obtained from a gyroscope plus correction terms that require magnetometer, accelerometer, and GPS/INS measurements; these works assume that the angular velocity is known at all times. However, this is not always possible in real-world applications, since gyroscopes are prone to failure. If the gyroscope data are unavailable or the gyroscope malfunctions, the angular velocity cannot be measured, the attitude cannot be estimated, and the mission may be compromised. This is the reason why new proposals are encouraged for attitude estimation without gyroscope knowledge.
1.1.3 Review of the translational state observer
Several nonlinear navigation observers have been reported to estimate both the rotational state (attitude and angular velocity) and the translational state (position and linear velocity) of general autonomous vehicles moving in 3D space. This approach can be found in the literature as the problem of full state estimation for vehicles navigating in a three-dimensional space. In order to achieve reliable estimates, these techniques assume that the rigid body is equipped with an IMU fused with sensors that provide partial or full position measurements, such as GPS, motion capture systems, LIDARs, altimeters, or inertial navigation systems (INS). Previous works in this area of research include the following.
Rehbinder and Ghosh in Ref. [37] used a CCD camera as a vector-measurement device together with inertial sensors to estimate the pose (attitude and position) of a rigid body. They showed that attitude estimation can be detached from position estimation and, furthermore, that once the attitude is estimated, the position can be treated as a linear implicit problem.
Refs. [6, 10, 19] proposed navigation observers for UAVs using vector measurements and the gyro rate of an IMU, aided by an INS or a general position sensor, to estimate the attitude, gyro bias, and translational state. The navigation observers in these references employ the key idea from attitude estimation in [28, 37] to first estimate the attitude and gyro bias using the vector measurements and the gyro rate simultaneously; the translational state is then estimated using the attitude estimate.
IMU sensors fused with an INS are typically used in this class of solutions. However, if only the INS is used, the outputs may give unreliable readings, since measurement errors and unknown initial conditions cause the estimation error to drift over time [10]. Using an INS assisted by GPS corrects the position estimates over time and keeps the errors small and bounded. Other types of sensors for position information, such as motion capture camera systems (e.g., OptiTrack, Vicon), GNSS, and bearing-measurement Ultra-Wideband (UWB) sensors, are excellent alternatives with high measurement accuracy but are, therefore, expensive. Low-cost solutions using commercial GPS sensors combined with nonlinear navigation observers are an excellent alternative to estimate the translational state and produce reliable estimates. For this reason, it is necessary to investigate new proposals within this context.
1.2 Problem description
This thesis focuses on the problem of the design, implementation, and validation of an an-
gular velocity and attitude observer for the navigation of an AV using vector measurements
directly without any attitude representation such as rotation matrix or unit quaternions.
Vector measurements may be obtained from low-cost sensors such as an IMU or a CCD cam-
era. Contrary to the commonly used two-step method [15], in this thesis we propose a new
approach to estimate the rotational state that gives rise to a cascaded observer structure.
The observer design is based upon the Lyapunov analysis, ensuring almost global asymptotic
convergence of the estimated state.
The proposed observer is validated by numerical simulations under realistic scenarios. Experimental results further demonstrate the observer in a real-world application. The experiments are carried out on a platform, consisting of a quadrotor with sensors and a processor on board, and supporting software, constructed in the Mechatronics Laboratory of the National Laboratory of Automotive and Aerospace Engineering, Unidad de Alta Tecnología (UAT), UNAM campus Juriquilla. This project arises from the need to validate our own control and estimation algorithms developed at the UAT.
1.3 Objectives
The general objective is the design, implementation, and experimental validation of a navigation observer for autonomous vehicles, in particular for quadrotor UAVs.
Specific goals
• To design a rotational state observer using vector measurements from low-cost sensors, such as an IMU and a CCD camera, to estimate the angular velocity and the attitude of UAVs.
• To design a translational state observer using additional position measurements from a GPS sensor to estimate the linear velocity and the position of UAVs.
• To ensure convergence to the true state of the proposed observer based on Lyapunov analysis.
• To analyze the performance of the observer through numerical simulations under different conditions and situations.
• To research suitable open-architecture UAVs and technologies that allow the free use of hardware and software to reprogram the control board and test our own algorithms.
• To build a quadrotor for the experimental validation of the proposed system that meets the necessary hardware requirements.
• To create and program different autonomous missions to be executed on the quadrotor.
• To conduct a static thrust test to measure the thrust coefficient and the torque coefficient, and to perform experiments in the lab to identify the parameters of the quadrotor.
• To run real-time flight experiments in an outdoor environment and analyze the flight data.
1.4 Hypothesis
• It is possible to design an angular velocity and attitude observer directly using the vector measurements from a low-cost IMU, in a framework where gyro information is not needed for the computation.
• It is possible to design a translational state observer that requires GPS inputs in cascade with the rotational state observer.
• It is possible to build a low-cost experimental platform based on a quadrotor to validate the navigation observer.
1.5 Contributions
The contributions of this thesis are a novel design of a navigation observer for autonomous vehicles such as quadrotors, analyzed by Lyapunov stability theory and validated by numerical simulations and laboratory experiments. Specifically:
• An angular velocity observer design that ensures global exponential convergence of the error dynamics, directly using vector measurements from sensors such as accelerometers, magnetometers, and CCD cameras. Compared to other works in the literature, this algorithm needs neither gyroscope measurements nor the attitude to perform the calculation.
• An attitude observer in cascade with the angular velocity observer, based on the explicit complementary filter (ECF) proposed in [28], to estimate the attitude of a rigid body. The proposed angular velocity observer together with the ECF forms an overall rotational state observer with a cascaded structure that guarantees almost global asymptotic convergence to the true rotational state. This approach facilitates the convergence analysis of both the angular velocity and the attitude observer.
• A translational state observer with exponential convergence, designed independently of the rotational state observer, using the position measurement provided by a GPS sensor.
• An experimental platform, built in the Mechatronics laboratory to test and validate the angular velocity and attitude observer, characterized by a low-cost open-architecture quadrotor whose onboard controller uses a Pixhawk and which is ideal for academic research.
Part of these results has been reported in the work of Ahuatzín, Tang and Espíndola (2022) [2].
1.6 Thesis outline
The remainder of this thesis is organized as follows.
Chapter 2. This chapter provides the mathematical background necessary for the stability analysis; it also presents the attitude kinematics and dynamics of a rigid body and some identities for attitude representations.
Chapter 3. This chapter presents a detailed description of the angular velocity observer
design and its stability proof through the Lyapunov analysis.
Chapter 4. This chapter designs the attitude observer using the angular velocity estimated in the previous chapter, based on the Explicit Complementary Filter. A rigorous proof of its stability is presented by means of Lyapunov theory.
Chapter 5. This chapter is dedicated to the design of the linear velocity and position ob-
server. From IMU measurements together with a GPS sensor, a new observer for translational
state is proposed. Global exponential stability is demonstrated.
Chapter 6. In this chapter, numerical simulations are presented to test the performance of the proposed observer. Three scenarios are considered: an ideal case, where two reference vectors in the inertial frame are perpendicular and the on-board vector measurements are noise-free; a second scenario that simulates a more realistic situation, where the angle between the reference vectors is small and the vector measurements are contaminated by Gaussian noise; and a third scenario that shows the robustness of the observer under uncertainty in the inertia matrix.
Chapter 7. This chapter describes the experimental platform developed in the Mecha-
tronics Laboratory based on a quadrotor that worked as a test bed in order to analyze the
behavior of the proposed observer in real-world applications. Experimental results are pre-
sented and analyzed.
Chapter 8. Finally, in Chapter 8, conclusions and a discussion of future work are presented.
Chapter 2
Mathematical tools
This chapter presents the mathematical tools necessary for the design of the angular velocity
and attitude observers and the analysis of the convergence. A brief introduction to the main
attitude representations such as the rotation matrix, Euler angles, and quaternions is also
presented. Finally, the rotational kinematics and dynamics for an autonomous vehicle are
described, which are essential for the design of the proposed observer.
2.1 Attitude representations
To describe the attitude of a rigid body, it is necessary to define coordinate frames that establish a reference with respect to which the attitude is measured. The following concepts are needed:
• A coordinate system defined in a convenient way according to the application, called the inertial reference frame I.
• A coordinate system fixed at the center of mass of the rigid body of interest, called the body frame B. All sensors are rigidly attached to the vehicle; thus, the measured states are expressed in the body-fixed frame B.
Since there is a variety of attitude representations to choose from, the decision of which one to adopt provides an additional degree of freedom in the observer design. This decision is important, since different attitude representations may lead to different solutions, appropriate for the underlying application. Some advantages and disadvantages of various attitude representations are presented below.
2.1.1 Rotation matrix
The rotation matrix is a linear transformation in Euclidean Space used to rotate vectors and
map coordinates from one reference frame to another. Consider two reference frames A and
B, and let the position of the rigid body expressed in B be denoted by p^B ∈ R^3. Then, the position of the rigid body with respect to the reference frame A is given by

p^A = R^A_B \, p^B,   (2.1)

where R^A_B denotes the rotation matrix from B to A. A rotation matrix R belongs to the Special Orthogonal group of dimension 3, denoted by SO(3) = {R ∈ R^{3×3} | R R^T = I, det(R) = 1}.
2.1.2 Euler angles
Representing an attitude by a rotation matrix requires 9 scalars. However, the attitude can be defined using fewer parameters. An alternative that uses only 3 parameters is the so-called Euler angles representation. This representation is defined by the angles through which the reference frame B is rotated, in sequential order about its axes, to match the reference frame A.
There are different conventions for the Euler angles, associated with the order of the axes about which the rotations are performed. For the Z−Y−X convention, the frame B is first rotated around the z axis by an angle γ, which gives a reference frame B'. Then B' is rotated around the y axis by an angle β to obtain the reference frame B''. Finally, B'' is rotated around the x axis by an angle α such that the final reference frame matches A. This process is illustrated in Figure 2.1.
Figure 2.1: Euler angle from B towards A.
A rotation matrix can be obtained from the Euler angles using successive rotations around the principal axes, i.e.,

R^A_B = R_z(γ) R_y(β) R_x(α),   (2.2)

where

R_x(α) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & cα & −sα \\ 0 & sα & cα \end{bmatrix}, \quad R_y(β) = \begin{bmatrix} cβ & 0 & sβ \\ 0 & 1 & 0 \\ −sβ & 0 & cβ \end{bmatrix}, \quad R_z(γ) = \begin{bmatrix} cγ & −sγ & 0 \\ sγ & cγ & 0 \\ 0 & 0 & 1 \end{bmatrix},   (2.3)

and cα = cos(α), sα = sin(α), cβ = cos(β), sβ = sin(β), cγ = cos(γ), sγ = sin(γ).
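A short numerical sketch of (2.2)-(2.3), assuming Python with NumPy (not part of the thesis code base), is:

import numpy as np

def rot_zyx(alpha, beta, gamma):
    # R^A_B = Rz(gamma) Ry(beta) Rx(alpha), Z-Y-X convention of (2.2)-(2.3)
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx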
Euler angles can be calculated from a given rotation matrix using the following relations:

β = atan2(−r_{31}, \sqrt{r_{11}^2 + r_{21}^2}),
γ = atan2(r_{21}/cβ, r_{11}/cβ),
α = atan2(r_{32}/cβ, r_{33}/cβ),

where r_{ij} = [R^A_B]_{ij}, cβ ≠ 0, and atan2(y, x) is the four-quadrant inverse tangent, which can be computed as atan2(y, x) = 2 atan(y / (\sqrt{x^2 + y^2} + x)). Note that these relations are undefined if β = ±π/2 rad. The existence of singularities is the main disadvantage of the Euler angle representation. To mitigate this problem, the following convention is sometimes used: choose γ = 0 rad and, if β = π/2 rad, take α = atan2(r_{12}, r_{22}); on the other hand, if β = −π/2 rad, take −α = atan2(r_{12}, r_{22}).
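The inverse relations above can be sketched as follows (Python with NumPy; the singular configurations β = ±π/2 are deliberately not handled here):

import numpy as np

def euler_from_rot(R):
    # Recover (alpha, beta, gamma) of the Z-Y-X convention, valid for cos(beta) != 0
    beta = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))
    cb = np.cos(beta)
    gamma = np.arctan2(R[1, 0] / cb, R[0, 0] / cb)
    alpha = np.arctan2(R[2, 1] / cb, R[2, 2] / cb)
    return alpha, beta, gamma

Away from the singularity, euler_from_rot(rot_zyx(alpha, beta, gamma)) recovers the original angles.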
2.1.3 Unit quaternions
The unit quaternion expresses a rotation matrix as a homogeneous quadratic function of
the quaternion elements. No trigonometric evaluations or other transcendental functions
are required, so they avoid singularities. Unit quaternions are more efficient at specifying
rotations than the rotation matrix itself, having only four components instead of nine, and
obey only one constraint, the norm constraint, instead of the six constraints imposed on the
orientation matrix by orthogonality [30].
Unit quaternions are given by

q = q_0 + i q_1 + j q_2 + k q_3,   (2.4)

where q_0, q_1, q_2 and q_3 are real numbers, the components of the quaternion q, that satisfy q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1, and i, j and k are imaginary units that satisfy i^2 = j^2 = k^2 = −1, ij = −ji = k, jk = −kj = i, ki = −ik = j.
A quaternion can also be expressed through its scalar and vector parts, and belongs to the unit sphere of dimension 3 embedded in the space R^4, defined as S^3 = {q ∈ R^4 | q^T q = 1}, i.e., q = (q_0, q_v) ∈ S^3, where q_0 ∈ R is the scalar part of the quaternion and q_v = [q_1, q_2, q_3]^T ∈ R^3 is the vector part. With the following product of quaternions ⋆,

\begin{bmatrix} q_0 \\ q_v \end{bmatrix} ⋆ \begin{bmatrix} \bar{q}_0 \\ \bar{q}_v \end{bmatrix} = \begin{bmatrix} q_0 \bar{q}_0 − q_v^T \bar{q}_v \\ q_0 \bar{q}_v + \bar{q}_0 q_v + q_v × \bar{q}_v \end{bmatrix},   (2.5)

it defines a group whose unit element is 1 = (1, 0). Note that the quaternion product is not commutative but is associative.
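A direct transcription of the product (2.5), assuming NumPy and a scalar-first layout q = (q_0, q_1, q_2, q_3), is:

import numpy as np

def quat_mul(q, qbar):
    # Quaternion product (2.5), scalar part first
    q0, qv = q[0], np.asarray(q[1:])
    p0, pv = qbar[0], np.asarray(qbar[1:])
    return np.concatenate(([q0 * p0 - qv @ pv],
                           q0 * pv + p0 * qv + np.cross(qv, pv)))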
The transformation between a rotation matrix R and a unit quaternion q is uniquely defined by the Rodrigues formula:

R = I_3 + 2 q_0 S(q_v) + 2 S(q_v)^2.   (2.6)

Transforming a rotation matrix into a quaternion is not so straightforward, since the two unit quaternions ±q correspond to the same rotation matrix R:

q_0 = ± (1/2) \sqrt{1 + tr(R)}, \qquad S(q_v) = \frac{R − R^T}{4 q_0}.
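Both maps can be sketched numerically as follows (Python with NumPy; the inverse map returns only one of the two antipodal quaternions and assumes tr(R) > −1):

import numpy as np

def quat_to_rot(q):
    # Rodrigues formula (2.6): R = I_3 + 2 q0 S(qv) + 2 S(qv)^2
    q0, qv = q[0], np.asarray(q[1:])
    S = np.array([[0., -qv[2], qv[1]], [qv[2], 0., -qv[0]], [-qv[1], qv[0], 0.]])
    return np.eye(3) + 2 * q0 * S + 2 * S @ S

def rot_to_quat(R):
    # One of the two unit quaternions +/- q corresponding to R
    q0 = 0.5 * np.sqrt(1.0 + np.trace(R))
    A = (R - R.T) / (4.0 * q0)               # equals S(qv)
    return np.array([q0, A[2, 1], A[0, 2], A[1, 0]])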
2.2 Attitude kinematics and dynamics
The kinematics, which relates the angular velocity to the time derivative of the rotation matrix, is given by

Ṙ = R S(ω),   (2.7)

where ω is the angular velocity of the rigid body expressed in the body frame, and the skew-symmetric operator S(·) satisfies S(u)v = u × v, ∀u, v ∈ R^3. Notice that a rotation matrix has nine scalar components with six constraints.
Let u ∈ R^3 be a vector. The skew-symmetric operator S(·) is defined as

S(u) = \begin{bmatrix} 0 & −u_3 & u_2 \\ u_3 & 0 & −u_1 \\ −u_2 & u_1 & 0 \end{bmatrix}.   (2.8)
Lemma 2.2.1 (Properties of the matrix S) For all x, y ∈ R^3 and R ∈ SO(3), the following properties are satisfied:

a1. S^T(x) = −S(x),
a2. S(x)y = −S(y)x,
a3. S(x)S(y) = y x^T − (x^T y) I,
a4. S^2(x) = x x^T − (x^T x) I,
a5. S(S(x)y) = S(x)S(y) − S(y)S(x),
a6. S(Rx) = R S(x) R^T.
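These properties are easy to verify numerically; a small self-check, assuming Python with NumPy, is:

import numpy as np

def S(u):
    # Skew-symmetric operator (2.8): S(u) v = u x v
    return np.array([[0., -u[2], u[1]], [u[2], 0., -u[0]], [-u[1], u[0], 0.]])

x, y = np.random.randn(3), np.random.randn(3)
assert np.allclose(S(x) @ y, np.cross(x, y))                            # definition
assert np.allclose(S(x).T, -S(x))                                       # a1
assert np.allclose(S(x) @ y, -S(y) @ x)                                 # a2
assert np.allclose(S(x) @ S(y), np.outer(y, x) - (x @ y) * np.eye(3))   # a3
assert np.allclose(S(np.cross(x, y)), S(x) @ S(y) - S(y) @ S(x))        # a5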
In terms of unit quaternions, the rigid body kinematics can be expressed as

q̇ = (1/2) J(q) ω,   (2.9)

where J(q) ∈ R^{4×3} is defined as

J(q) = \begin{bmatrix} −q_v^T \\ q_0 I + S(q_v) \end{bmatrix}.   (2.10)

Some of the properties of this matrix are:
Lemma 2.2.2 (Properties of the matrix J(q)) For all x, y ∈ R^4, the matrix J(·) satisfies the properties

b1. J^T(x) J(x) = ‖x‖^2 I_3,
b2. J^T(x) y = 0 ⇔ y = kx, k ∈ R,
b3. J(αx + βy) = α J(x) + β J(y), α, β ∈ R,
b4. J^T(x) y = −J^T(y) x,
b5. ‖J(x)‖^2 = ‖x‖^2,
b6. \frac{d}{dt}(J(x)) = J(ẋ).
The attitude dynamics for a rigid body is given by

M ω̇ = S(Mω) ω + τ,   (2.11)

where M ∈ R^{3×3}, M = M^T > 0, is the constant inertia matrix and τ(t) ∈ R^3 is the control torque vector, both expressed in the body frame.
We recall some identities and mathematical tools useful for the stability analysis.
• The anti-symmetric projection operator in the space of square matrices is defined as P_a(H) = (1/2)(H − H^T), ∀H ∈ R^{3×3}.
• For matrices A, B ∈ R^{n×n}, the Lie bracket is defined as [A, B] = AB − BA.
• ∀x ∈ R^3 and H ∈ R^{3×3} with H^T = H, the trace tr(H S(x)) = 0.
• ∀x, y ∈ R^3, tr(x y^T) = x^T y and S(x × y) = y x^T − x y^T.
2.3 Translational kinematics and dynamics
The inertial position of the center of mass of a rigid body is denoted by p ∈ R^3, and v ∈ R^3 is the linear velocity expressed in the inertial frame. The translational kinematics is given by

ṗ = v.   (2.12)

Taking into account the thrust f ∈ R applied at the center of mass, the translational dynamics of the rigid body is given by

v̇ = g e_3 − (f/m) R e_3,   (2.13)

where g represents the acceleration of gravity, m is the mass of the vehicle and e_3 = [0 0 1]^T is the z axis of the inertial frame.
2.4 Sensor modeling
A commercial IMU is commonly composed of a triaxial magnetometer, a triaxial accelerometer, and a triaxial gyroscope.
Accelerometer: The accelerometer provides a measurement of the linear acceleration of the rigid body minus the gravitational acceleration. In reality, these measurements are corrupted by bias and noise. We consider that the raw measurements are modeled as

a_B = a_true + b_a + η_a,   (2.14)

where a_B is the accelerometer measurement, b_a ∈ R^3 is the bias and η_a ∈ R^3 is the noise. When the vehicle is in stationary motion, the true acceleration a_true is given by

a_true = −R^T g_0,   (2.15)

where {e_1, e_2, e_3} is the orthonormal basis of the inertial frame and g_0 = [0 0 9.8]^T m/s^2 is the acceleration of gravity.
Magnetometer: The magnetometer measures the Earth's magnetic field in the body frame,

m_B = m_true + b_m + η_m,   (2.16)

where m_B is the magnetic field measurement, b_m ∈ R^3 is the bias and η_m ∈ R^3 is the noise. The component m_true is related to the Earth's magnetic field m_I by

m_true = R^T m_I.   (2.17)
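For simulation purposes, the measurement models (2.14)-(2.17) can be sketched as follows (Python with NumPy; the inertial magnetic-field direction m_I used here is a placeholder value, not the one used in the experiments):

import numpy as np

def imu_measurements(R, b_a=np.zeros(3), b_m=np.zeros(3), sigma=0.0,
                     m_I=np.array([0.5, 0.0, 0.87]),
                     g_0=np.array([0.0, 0.0, 9.8])):
    # Accelerometer and magnetometer outputs for a vehicle in stationary
    # motion with attitude R, following (2.14)-(2.17)
    a_B = -R.T @ g_0 + b_a + sigma * np.random.randn(3)
    m_B = R.T @ m_I + b_m + sigma * np.random.randn(3)
    return a_B, m_B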
Chapter 3
Design of an angular velocity observer
In this chapter, the problem statement is introduced, followed by a detailed description
regarding the mathematical framework for the design of the nonlinear observer as well as the
proof of its convergence through Lyapunov analysis.
3.1 Problem statement
Consider a rigid body whose kinematics and dynamics are described by (2.7) and (2.11),
respectively. The problem addressed in this chapter is to design an observer to estimate
the angular velocity ω using the vector measurements b_i ∈ R^3, i = 1, ..., n, available from on-board sensors such as accelerometers, magnetometers, or CCD cameras, expressed in the body frame B, given known constant reference vectors r_i ∈ R^3 in the inertial frame I. The relationship between the vectors b_i and r_i is given by

b_i = R^T r_i.   (3.1)

Without loss of generality, the reference vectors r_i as well as their measurements in the body frame b_i are normalized, i.e., ‖r_i‖ = ‖b_i‖ = 1.
The following assumptions are made:
A1: There exist at least two vectors, say r1 and r2, that are non-coplanar.
A2: The angular velocity is bounded, i.e., ∥ω∥ ≤ ωmax, with ωmax > 0 unknown.
A3: The inertia matrix M and the torque τ are known.
3.2 Angular velocity observer
3.2.1 Analytical form of the observer
From the dynamics of a rigid body,

Σₚ : Mω̇ = S(Mω)ω + τ. (3.2)
We propose the following angular velocity observer in its analytical form
Σₒ : M ˙̂ω = S(Mω̂)ω̂ + Kf(ω − ω̂) + τ, (3.3)
where ω̂ represents the estimated angular velocity and ω the true angular velocity. Kf is the
gain matrix to be designed. Observe that observer (3.3) consists of the attitude dynamics
plus a correction term. Note that the proposal of (3.3) is for convergence analysis purposes
and therefore is not directly implementable, as it depends on ω. Its implementation will be
given later.
The objectives in the design of the observer are as follows:
1. Designing the gain matrix Kf such that the estimated angular velocity ω̂ converges to the true angular velocity ω asymptotically, i.e., ω̂ − ω → 0 as t → ∞.

2. Eliminating ω so that the proposed observer can be implemented using only the vector measurements bi, for i = 1, 2, ..., n.
3.2.2 Useful results
The following results are needed to construct the mathematical framework of the angular
velocity observer.
Reduced kinematics.
Let bi ∈ R³, i = 1, ..., n, be the vector measurements in the body frame B and ri ∈ R³, i = 1, ..., n, their corresponding inertial reference vectors.
Lemma 3.2.1 (Reduced attitude kinematics) Consider bi and ri given in (3.1). Then, the reduced attitude kinematics is given by

ḃi = −S(ω)bi = S(bi)ω. (3.4)

Proof: Differentiating bi in (3.1) directly and using (2.7) leads to

ḃi = Ṙᵀri = Sᵀ(ω)Rᵀri = −S(ω)Rᵀri = −S(ω)bi = S(bi)ω,

where Property a1 and Property a2 of Lemma 2.2.1 for the skew-symmetric matrix S have been used.
Filtered vector measurements.
Let bif be the filtered vector measurement of bi, which obeys
ḃif = γf (bi − bif ), bif (0) = bi(0), (3.5)
with γf > 0 being the filter gain. The following technical lemma, proved in [16], estab-
lishes the intuitive fact that the difference between the input and output of the filter can be
arbitrarily small with a sufficiently large filter gain.
Lemma 3.2.2 ∀ϵf > 0, there exists a γ̄f := ω̄max/ϵf, with ω̄max < ∞ the bandwidth of the system, such that ∥bi − bif∥ < ϵf for all t ≥ 0, provided that γf > γ̄f.
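In practice the filter (3.5) is implemented in discrete time. A minimal Matlab sketch, assuming a constant measurement and a sampling period of 2 ms, is:

gamma_f = 5.0;  dt = 0.002;
b  = [0; 0; -1];                 % body-frame measurement (placeholder, constant)
bf = b;                          % bif(0) = bi(0)
for k = 1:1000
    bf = bf + gamma_f*(b - bf)*dt;    % Euler step of bif_dot = gamma_f (bi - bif)
end
disp(norm(b - bf))               % small, as predicted by Lemma 3.2.2

A larger γf tightens the bound on ∥bi − bif∥ at the cost of amplifying measurement noise.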
Gain matrices.

Let the matrices K, Kf ∈ R³ˣ³ be defined as

K := Σ_{i=1}^{n} Sᵀ(bi) Λi S(bi), (3.6)

Kf := Σ_{i=1}^{n} Sᵀ(bif) Λi S(bi), (3.7)

where Λi ∈ R³ˣ³ is a symmetric positive-definite gain matrix.
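For two measurements, the matrices (3.6)-(3.7) can be assembled as in the sketch below; the vectors and the gain Λi = 0.15·I₃ are placeholder values.

S   = @(u) [0 -u(3) u(2); u(3) 0 -u(1); -u(2) u(1) 0];
b1  = [0; 0; -1];
b2  = [0.66; 0.05; 0.75];  b2 = b2/norm(b2);
b1f = b1 + 1e-3*randn(3,1);  b2f = b2 + 1e-3*randn(3,1);   % filtered versions
Lam = 0.15*eye(3);                                          % Lambda_i
K   = S(b1)'*Lam*S(b1)  + S(b2)'*Lam*S(b2);    % (3.6)
Kf  = S(b1f)'*Lam*S(b1) + S(b2f)'*Lam*S(b2);   % (3.7)
disp(eig(K)')     % strictly positive when b1 and b2 are not collinear (Assumption A1)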
The following lemma gives some properties of these matrices, which are useful in the convergence analysis in the sequel.
Lemma 3.2.3 Under Assumption A1, the following statements hold:

i. K is a symmetric positive-definite matrix, K = Kᵀ > 0.

ii. Kω = Σ_{i=1}^{n} Sᵀ(bi)ΛiS(bi)ω = Σ_{i=1}^{n} Sᵀ(bi)Λiḃi.

iii. Kfω = Σ_{i=1}^{n} Sᵀ(bif)ΛiS(bi)ω = Σ_{i=1}^{n} Sᵀ(bif)Λiḃi.

iv. Kf = Σ_{i=1}^{n} Sᵀ(bif)ΛiS(bi) = Σ_{i=1}^{n} Sᵀ(bi)ΛiS(bi) + Σ_{i=1}^{n} Sᵀ(bif − bi)ΛiS(bi) = K + Σ_{i=1}^{n} Sᵀ(bif − bi)ΛiS(bi). Equivalently, Kf − K = Σ_{i=1}^{n} Sᵀ(bif − bi)ΛiS(bi).

v. (d/dt) Σ_{i=1}^{n} Sᵀ(bif)Λibi = Kfω + Σ_{i=1}^{n} Sᵀ(ḃif)Λibi = Kfω − γf Σ_{i=1}^{n} Sᵀ(Λibi)(bi − bif).
Proof i. By definition, a symmetric matrix K with real entries is positive definite if and only if the real number yᵀKy is positive for every non-zero y ∈ R³. Consider, for any y ∈ R³,

yᵀKy = Σ_{i=1}^{n} yᵀSᵀ(bi)ΛiS(bi)y = Σ_{i=1}^{n} (S(bi)y)ᵀ Λi (S(bi)y) = Σ_{i=1}^{n} xiᵀΛixi,   with xi = S(bi)y.

Since every Λi ∈ R³ˣ³ is a symmetric positive-definite matrix, each term in the sum is non-negative, and under Assumption A1 at least one xi = S(bi)y is non-zero for any y ≠ 0, so the sum is strictly positive. This proves item i.
Proof ii. Let’s start by multiplying the matrix K by ω, we have that:
Kω =
n∑
i=1
ST (bi)ΛiS(bi)ω, by definition of K, 3.6
=
n∑
i=1
ST (bi)Λiḃi using that ḃi = S(ω)bi 3.4
which ends the proof.
Proof iii. Similarly, multiplying the matrix Kf by ω and using the definition of Kf in (3.7),

Kfω = Σ_{i=1}^{n} Sᵀ(bif)ΛiS(bi)ω = Σ_{i=1}^{n} Sᵀ(bif)Λiḃi,

again using ḃi = S(bi)ω from (3.4), and statement iii holds.
Proof iv. Starting from the definition of the matrix Kf in (3.7),

Kf = Σ_{i=1}^{n} Sᵀ(bif)ΛiS(bi)
   = Σ_{i=1}^{n} Sᵀ(bif + bi − bi)ΛiS(bi)                     (adding and subtracting bi)
   = Σ_{i=1}^{n} Sᵀ(bif − bi)ΛiS(bi) + Σ_{i=1}^{n} Sᵀ(bi)ΛiS(bi)
   = K + Σ_{i=1}^{n} Sᵀ(bif − bi)ΛiS(bi),

which ends the proof. Furthermore, since Kf = K + Σ_{i=1}^{n} Sᵀ(bif − bi)ΛiS(bi), it follows that

Kf − K = Σ_{i=1}^{n} Sᵀ(bif − bi)ΛiS(bi).
Proof v. Differentiating with respect to time,

(d/dt) Σ_{i=1}^{n} Sᵀ(bif)Λibi = Σ_{i=1}^{n} Sᵀ(ḃif)Λibi + Σ_{i=1}^{n} Sᵀ(bif)Λiḃi,

where the term involving Λ̇i vanishes since each Λi is constant over time. By statement iii, the second sum equals Kfω, so

(d/dt) Σ_{i=1}^{n} Sᵀ(bif)Λibi = Kfω + Σ_{i=1}^{n} Sᵀ(ḃif)Λibi
 = Kfω + γf Σ_{i=1}^{n} Sᵀ(bi − bif)Λibi          (using ḃif = γf(bi − bif), (3.5))
 = Kfω − γf Σ_{i=1}^{n} Sᵀ(Λibi)(bi − bif)         (using S(x)y = −S(y)x),

which completes the proof.
3.3 Stability analysis
We prove in this section the convergence of the proposed angular velocity observer. The
results are summarized in the following theorem.
Theorem 3.3.1 Global, exponential convergence of the angular velocity observer:
The observer defined by (3.15) and (3.16) achieves ω̂ → ω exponentially from any initial
condition.
Proof. From the analytical form of the observer (3.3) it follows that

M ˙̂ω = S(Mω̂)ω̂ + Kf(ω − ω̂) + τ
     = S(Mω)ω̂ + S(M(ω̂ − ω))ω̂ + Kf(ω − ω̂) + τ
     = S(Mω)ω̂ − S(ω̂)M(ω̂ − ω) + Kf(ω − ω̂) + τ      (using a2 of Lemma 2.2.1)
     = S(Mω)ω̂ + S(ω̂)M(ω − ω̂) + Kf(ω − ω̂) + τ
     = S(Mω)ω̂ + (Kf + S(ω̂)M)(ω − ω̂) + τ.
Let the angular velocity error be defined as
ω̃ = ω̂ − ω (3.8)
Thus, the error dynamics is expressed as
M ˙̃ω = S(Mω)ω̃ − (Kf + S(ω̂)M)ω̃. (3.9)
Consider now the Lyapunov function candidate
Vω(ω̃) = (1/2) ω̃ᵀMω̃, (3.10)
which is positive definite with respect to ω̃ = 0 and radially unbounded. Differentiating
Vω(ω̃) with respect to time and evaluating it along the error dynamics (3.9) yield
V̇ω(ω̃) = ω̃ᵀM ˙̃ω
       = ω̃ᵀ( S(Mω)ω̃ − (Kf + S(ω̂)M)ω̃ )
       = −ω̃ᵀ( K + (Kf − K) )ω̃ − ω̃ᵀS(ω̂)Mω̃          (the term ω̃ᵀS(Mω)ω̃ vanishes since S(Mω) is skew-symmetric)
       = −ω̃ᵀKω̃ − ω̃ᵀ(Kf − K)ω̃ − ω̃ᵀS(ω̂)Mω̃
       = −ω̃ᵀKω̃ − ω̃ᵀ(Kf − K)ω̃ − ω̃ᵀS(ω̂ − ω + ω)Mω̃
       = −ω̃ᵀKω̃ − ω̃ᵀ(Kf − K)ω̃ − ω̃ᵀS(ω̃ + ω)Mω̃. (3.11)
To bound the second and third right-hand terms of (3.11), consider

∥Kf − K∥ = ∥Σ_{i=1}^{n} Sᵀ(bif − bi)ΛiS(bi)∥ ≤ Σ_{i=1}^{n} ∥Sᵀ(bif − bi)∥∥Λi∥∥S(bi)∥ ≤ Σ_{i=1}^{n} λmax(Λi)∥bif − bi∥ ≤ ϵf λ̄,

where maxᵢ{λmax(Λi)} ≤ λ̄, and

∥ω̃ᵀS(ω̃ + ω)Mω̃∥ = ∥ω̃ᵀS(ω)Mω̃∥ ≤ λmax(M)∥ω∥∥ω̃∥² ≤ λmax(M) ωmax ∥ω̃∥²,
where ∥ω∥ ≤ ωmax. Therefore,
V̇ω(ω̃) ≤ −( λmin(K) − ϵf λ̄ − λmax(M)ωmax ) ∥ω̃∥² =: −λω ∥ω̃∥²
        = 2 (λω/λmax(M)) ( −(1/2) λmax(M) ∥ω̃∥² )
        ≤ −2 (λω/λmax(M)) Vω, (3.12)
where Assumption A2 and the facts in Lemma 3.2.2 are used, λω := λmin(K) − ϵf λ̄ − λmax(M)ωmax, and λ̄ ≥ maxᵢ{λmax(Λi)}. Therefore, provided λω > 0, the equilibrium ω̃ = 0 is globally exponentially stable (Theorem 4.10, [24]). This proves the global exponential convergence of the observer.
3.4 Implementation of the angular velocity observer
To eliminate the unknown angular velocity from the analytical form of the observer (3.3), we rewrite it as

M ˙̂ω = S(Mω̂)ω̂ + Kfω − Kf ω̂ + τ. (3.13)
By Lemma 3.2.3 v, we obtain

Kfω = (d/dt) Σ_{i=1}^{n} Sᵀ(bif)Λibi + γf Σ_{i=1}^{n} Sᵀ(Λibi)(bi − bif). (3.14)
Substituting Kfω from (3.14) into (3.13) yields

M ˙̂ω = S(Mω̂)ω̂ + (d/dt) Σ_{i=1}^{n} Sᵀ(bif)Λibi + γf Σ_{i=1}^{n} Sᵀ(Λibi)(bi − bif) − Kf ω̂ + τ.

This gives

(d/dt) [ Mω̂ − Σ_{i=1}^{n} Sᵀ(bif)Λibi ] = S(Mω̂)ω̂ + γf Σ_{i=1}^{n} Sᵀ(Λibi)(bi − bif) − Kf ω̂ + τ.

Defining

ω̄ = Mω̂ − Σ_{i=1}^{n} Sᵀ(bif)Λibi,
the angular velocity observer is then implemented in the following form:

˙̄ω = S(Mω̂)ω̂ + γf Σ_{i=1}^{n} Sᵀ(Λibi)(bi − bif) − Kf ω̂ + τ, (3.15)

ω̂ = M⁻¹ ( ω̄ + Σ_{i=1}^{n} Sᵀ(bif)Λibi ), (3.16)

together with the filter (3.5) and the initial condition ω̄(0) = Mω̂(0) − Σ_{i=1}^{n} Sᵀ(bif(0))Λibi(0).
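A hedged Matlab sketch of one possible discrete-time implementation of (3.15)-(3.16), together with the filter (3.5), is given below for two vector measurements. The inertia matrix, gains and the static placeholder measurements are assumed values; in a real run, b1 and b2 are read from the IMU at every step and τ is obtained from the motor speeds.

S   = @(u) [0 -u(3) u(2); u(3) 0 -u(1); -u(2) u(1) 0];
M   = diag([0.0082 0.0082 0.0149]);    % assumed inertia matrix
Lam = 0.15*eye(3);  gamma_f = 5.0;  dt = 0.002;  tau = zeros(3,1);
b1  = [0; 0; -1];  b2 = [0.66; 0.05; 0.75]/norm([0.66; 0.05; 0.75]);
b1f = b1;  b2f = b2;                   % bif(0) = bi(0)
wbar = zeros(3,1);                     % corresponds to omega_hat(0) = 0 in this example
for k = 1:1000
    what = M \ (wbar + S(b1f)'*Lam*b1 + S(b2f)'*Lam*b2);            % (3.16)
    Kf   = S(b1f)'*Lam*S(b1) + S(b2f)'*Lam*S(b2);                    % (3.7)
    corr = gamma_f*( S(Lam*b1)'*(b1 - b1f) + S(Lam*b2)'*(b2 - b2f) );
    wbar = wbar + ( S(M*what)*what + corr - Kf*what + tau )*dt;      % (3.15)
    b1f  = b1f + gamma_f*(b1 - b1f)*dt;                              % filter (3.5)
    b2f  = b2f + gamma_f*(b2 - b2f)*dt;
end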
Remark 3.4.1 In contrast to the angular velocity observers proposed in [13, 16, 40, 44], the observer (3.15)-(3.16) uses solely the vector measurements bi, i = 1, 2, ..., n, instead of attitude measurements, similarly to [9, 46]. This feature provides several advantages over using attitude measurements: it avoids employing an attitude representation and enables global exponential convergence. On the other hand, since this observer is independent of the rigid-body attitude, the estimated angular velocity together with the same vector measurements can be used to estimate the rigid-body attitude.
Chapter 4
Attitude observer design
In this chapter, an attitude observer that uses the same vector measurements as the angular velocity observer of the previous chapter, together with the estimated angular velocity, is designed. The angular velocity and attitude observers together constitute the rotational state observer.
4.1 Problem statement
Consider a rigid body whose kinematics and dynamics are described by (2.7) and (2.11),
respectively. The problem addressed in this chapter is to design an observer to estimate
the attitude in terms of a rotation matrix R using, as in the previous chapter, the vector
measurements bi ∈ R³, i = 1, ..., n, available from on-board sensors such as accelerometers, magnetometers, sun sensors, or CCD cameras, expressed in the body frame B, given known constant reference vectors ri ∈ R³ in the inertial frame I.

Recall that the relationship between the vectors bi and ri is given by bi = Rᵀri (3.1) and that, without loss of generality, the reference vectors ri as well as their measurements in the body frame bi are normalized, i.e., ∥ri∥ = ∥bi∥ = 1. Assumption A1 of Section 3.1 is also made.
4.2 Attitude observer design
We use the explicit complementary filter (ECF) for attitude estimation. Instead of using the angular velocity from the gyros, we use the angular velocity estimate produced by the angular velocity observer (3.15)-(3.16). The attitude observer is

˙̂R = R̂ S(ω̂ + kp σR), (4.1)

σR = Σ_{i=1}^{n} ki S(bi) b̂i, (4.2)

where R̂ is the attitude estimate, b̂i = R̂ᵀri is the estimate of bi, kp > 0 is a scalar gain, and the ki are positive constant gains representing the reliability of the sensors; if each sensor is equally trusted, all ki are set equal to 1. The vectors bi, i = 1, 2, ..., n, are the sensor measurements in the body frame defined in (3.1) satisfying Assumption A1.
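A minimal Matlab sketch of (4.1)-(4.2) on SO(3) is given below, using the exponential map for the discrete-time update; the true attitude is held constant and ω̂ is set to zero, so only the effect of the correction term is illustrated. The reference vectors, gains and step size are assumed values.

S  = @(u) [0 -u(3) u(2); u(3) 0 -u(1); -u(2) u(1) 0];
r1 = [0; 0; -1];  r2 = [0.6626; 0.0544; 0.7469];
R  = expm(S([0.3; -0.2; 0.4]));        % assumed true attitude (constant)
b1 = R'*r1;  b2 = R'*r2;               % vector measurements
Rh = eye(3);  kp = 1.0;  k1 = 1.0;  k2 = 1.0;  dt = 0.002;  wh = zeros(3,1);
for k = 1:5000
    b1h  = Rh'*r1;  b2h = Rh'*r2;                 % estimated body vectors
    sigR = k1*S(b1)*b1h + k2*S(b2)*b2h;           % correction term (4.2)
    Rh   = Rh*expm(S(wh + kp*sigR)*dt);           % geometric step of (4.1)
end
disp(norm(Rh - R))       % decreases toward zero as the estimate converges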
The proposed angular velocity observer with the attitude ECF constitutes an overall nonlinear observer with a cascaded structure, illustrated in Fig. 4.1.

Figure 4.1: Illustration of the proposed cascaded structure of the observer when the vector measurements are obtained from an IMU (magnetometer and accelerometer feeding the angular velocity observer, which in turn feeds the attitude observer).
4.2.1 Useful results for the convergence analysis of the attitude observer
The following results, summarized in lemmas, will be used for the convergence analysis of
the attitude observer in cascade with the angular velocity observer proposed in the previous
chapter.
Attitude estimation error.
Let the attitude estimation error be defined as R̃ = R̂ᵀR, and the error eR as

eR = (1/2) Σ_{i=1}^{n} ki ∥bi − b̂i∥². (4.3)
Lemma 4.2.1 (Dynamics of the attitude estimation error.) The dynamics of the
attitude estimation error is
˙̃R = [R̃, S(ω)]− S(ω̃)R̃− S(kpσR)R̃. (4.4)
Proof. Taking the time derivative of R̃ yields

˙̃R = ˙̂Rᵀ R + R̂ᵀ Ṙ.

Substituting Ṙ = RS(ω) from (2.7) and ˙̂R = R̂S(ω̂ + kpσR) from (4.1) yields

˙̃R = [ R̂S(ω̂ + kpσR) ]ᵀ R + R̂ᵀRS(ω)
   = Sᵀ(ω̂ + kpσR) R̂ᵀR + R̂ᵀRS(ω)
   = −S(ω̂ + kpσR) R̂ᵀR + R̂ᵀRS(ω)          (using a1 of Lemma 2.2.1, Sᵀ = −S)
   = −S(ω̂ + kpσR) R̃ + R̃S(ω)               (by definition, R̃ = R̂ᵀR)
   = −S(ω + ω̃ + kpσR) R̃ + R̃S(ω)           (recalling ω̃ = ω̂ − ω)
   = −S(ω)R̃ − S(ω̃)R̃ − S(kpσR)R̃ + R̃S(ω)
   = R̃S(ω) − S(ω)R̃ − S(ω̃)R̃ − S(kpσR)R̃
   = [R̃, S(ω)] − S(ω̃)R̃ − S(kpσR)R̃,

where [A, B] = AB − BA is the Lie bracket.
Correction term σR.
Some facts on the correction term σR are stated in the following lemma.
Lemma 4.2.2 (Facts on the correction term σR)

• The correction term (4.2) can be expressed as σR = Σ_{i=1}^{n} ki S(bi) b̂i = P∨ₐ(R̃Γb), where the operation (·)∨ is the inverse of S(·), i.e., (S(u))∨ = u for u ∈ R³, Γb = RᵀΓrR, and Γr := Σ_{i=1}^{n} ki ri riᵀ. Both Γr and Γb are positive definite under Assumption A1.

• σR = 0 implies that R̃ = I₃ or tr(R̃) = −1.
Proof. i) By definition, the operator Pₐ acts on a matrix H ∈ R³ˣ³ as Pₐ(H) = (1/2)(H − Hᵀ). Then

Pₐ(R̃Γb) = (1/2)( R̃Γb − (R̃Γb)ᵀ )
         = (1/2)( R̃Γb − Γbᵀ R̃ᵀ )
         = (1/2)( R̃Γb − Γb R̃ᵀ )                               (since Γbᵀ = Γb)
         = (1/2)( R̃RᵀΓrR − RᵀΓrR R̃ᵀ )                         (using Γb = RᵀΓrR)
         = (1/2)( R̂ᵀRRᵀΓrR − RᵀΓrR(R̂ᵀR)ᵀ )                    (using R̃ = R̂ᵀR)
         = (1/2)( R̂ᵀΓrR − RᵀΓrR̂ )
         = (1/2)( R̂ᵀ Σ_{i=1}^{n} ki ri riᵀ R − Rᵀ Σ_{i=1}^{n} ki ri riᵀ R̂ )   (using Γr := Σ_{i=1}^{n} ki ri riᵀ)
         = (1/2) Σ_{i=1}^{n} ki ( R̂ᵀ ri riᵀ R − Rᵀ ri riᵀ R̂ )
         = (1/2) Σ_{i=1}^{n} ki ( b̂i riᵀ R − Rᵀ ri b̂iᵀ )       (using b̂i = R̂ᵀri and b̂iᵀ = riᵀR̂)
         = Σ_{i=1}^{n} (ki/2) ( b̂i biᵀ − bi b̂iᵀ )              (using biᵀ = riᵀR and bi = Rᵀri).

On the other hand, σR = Σ_{i=1}^{n} ki S(bi) b̂i. This implies that

S(σR) = Σ_{i=1}^{n} ki S( S(bi) b̂i ) = Σ_{i=1}^{n} ki S( bi × b̂i )        (using S(u)v = u × v)
      = Σ_{i=1}^{n} (ki/2) ( b̂i biᵀ − bi b̂iᵀ )                             (using S(x × y) = (1/2)(yxᵀ − xyᵀ)).

Hence,

Pₐ(R̃Γb) = Σ_{i=1}^{n} (ki/2) ( b̂i biᵀ − bi b̂iᵀ ) = S(σR). (4.5)

Applying the operator (·)∨ to both sides of equation (4.5) yields

P∨ₐ(R̃Γb) = (S(σR))∨ = σR = Σ_{i=1}^{n} ki S(bi) b̂i. (4.6)
The proof of the item i) is now complete.
ii) Note that if σR = 0, then S(σR) = 0.
S(σR) = Pₐ(R̃Γb) = (1/2)( R̃Γb − (R̃Γb)ᵀ ) = (1/2)( R̃Γb − Γbᵀ R̃ᵀ ) = (1/2)( R̃Γb − Γb R̃ᵀ ) = 0,

which implies

R̃Γb = Γb R̃ᵀ. (4.7)
Given the matrix R̃, its eigenvalues are the solutions of the characteristic equation

det(R̃ − λI₃) = 0. (4.8)

Euler's theorem states that for any R ∈ SO(3) there is a non-zero vector v ∈ R³ satisfying Rv = v; this implies that the attitude of a body can be specified in terms of a rotation by some angle about some fixed axis. Since R̃ is a real orthogonal matrix, the eigenvalue equation for R̃ and its complex conjugate transpose are

R̃v = λv,    v*ᵀ R̃ᵀ = λ* v*ᵀ.

Multiplying these two equations together yields

λ*λ v*ᵀv = v*ᵀ R̃ᵀR̃ v = v*ᵀv, (4.9)

since an orthogonal matrix satisfies R̃ᵀR̃ = I₃. Because eigenvectors are non-zero, v*ᵀv ≠ 0, and (4.9) yields |λ| = 1. Thus, the eigenvalues of a real orthogonal matrix are complex numbers of unit modulus, λ = e^{iα} for some α in the interval 0 ≤ α < 2π. Note that

R̃ᵀ(I₃ − R̃) = R̃ᵀ − I₃ = −(I₃ − R̃)ᵀ.

Taking the determinant of both sides of this equation, it follows that

det(R̃) det(I₃ − R̃) = (−1)³ det(I₃ − R̃), (4.10)

since for the 3×3 identity matrix det(−I₃) = (−1)³. We have det(R̃) = 1 and (−1)³ = −1; hence (4.10) yields

det(I₃ − R̃) = 0.

Comparing with (4.8), we conclude that λ = 1 is an eigenvalue of R̃. Since det(R̃) is the product of its three eigenvalues and each eigenvalue of R̃ is a complex number of unit modulus, the eigenvalues of any proper 3×3 orthogonal matrix must be 1, e^{iθ} and e^{−iθ} for some θ in the interval 0 ≤ θ ≤ π, i.e.,

eig(R̃) = (1, e^{iθ}, e^{−iθ}) = (1, cos θ + i sin θ, cos θ − i sin θ).

For the symmetry condition (4.7) to hold, all three eigenvalues of R̃ must be real (complex eigenvalues come in conjugate pairs), so θ = 0 or θ = ±π. When θ = 0, the eigenvalues of R̃ are eig(R̃) = (1, 1, 1); using the angle-axis representation of R̃ ∈ SO(3),

cos θ = (1/2)( tr(R̃) − 1 ), (4.11)

at θ = 0,

cos 0 = 1 = (1/2)( tr(R̃) − 1 ),

which gives tr(R̃) − 1 = 2, or equivalently tr(R̃) = 3; this means that R̃ = I₃. Now, evaluating (4.11) at θ = ±π,

cos(±π) = −1 = (1/2)( tr(R̃) − 1 ),

which gives tr(R̃) − 1 = −2, or equivalently tr(R̃) = −1. This ends the proof of item ii).
More properties of the attitude estimation error R̃ and eR.
The attitude estimation error R̃ and eR defined in (4.3) have the following properties:
Lemma 4.2.3 (More properties of the error eR) The error eR defined in (4.3) can be expressed as

eR = (1/2) Σ_{i=1}^{n} ki ∥bi − b̂i∥² = Σ_{i=1}^{n} ki − tr(R̃Γb).
Proof.

eR = (1/2) Σ_{i=1}^{n} ki ∥bi − b̂i∥²
   = (1/2) Σ_{i=1}^{n} ki (bi − b̂i)ᵀ(bi − b̂i)
   = (1/2) Σ_{i=1}^{n} ki ( ∥bi∥² + ∥b̂i∥² − biᵀb̂i − b̂iᵀbi )
   = (1/2) Σ_{i=1}^{n} ki (1 + 1) − (1/2) Σ_{i=1}^{n} ki ( biᵀb̂i + b̂iᵀbi )
   = Σ_{i=1}^{n} ki − Σ_{i=1}^{n} ki tr( bi b̂iᵀ ),

where the last step uses biᵀb̂i = b̂iᵀbi = tr(bi b̂iᵀ). Therefore,

eR = Σ_{i=1}^{n} ki − Σ_{i=1}^{n} ki tr( bi b̂iᵀ ) = Σ_{i=1}^{n} ki − tr(R̃Γb),

where tr(R̃Γb) = tr(R̃RᵀΓrR) = tr(R̂ᵀRRᵀΓrR) = tr(R̂ᵀΓrR) = tr( R̂ᵀ Σ_{i=1}^{n} ki ri riᵀ R ) = Σ_{i=1}^{n} ki tr( R̂ᵀ ri riᵀ R ) = Σ_{i=1}^{n} ki tr( b̂i biᵀ ) = Σ_{i=1}^{n} ki b̂iᵀbi. This ends the proof.
Lemma 4.2.4 (More properties of the error R̃) (d/dt)(R̃Γb) = [R̃Γb, S(ω)] − S(ω̃)R̃Γb − S(kpσR)R̃Γb.
Proof.

(d/dt)(R̃Γb) = ˙̃RΓb + R̃Γ̇b = [R̃, S(ω)]Γb − S(ω̃)R̃Γb − S(kpσR)R̃Γb + R̃Γ̇b,

but

Γ̇b = RᵀΓrṘ + ṘᵀΓrR = RᵀΓrRS(ω) + Sᵀ(ω)RᵀΓrR = RᵀΓrRS(ω) − S(ω)RᵀΓrR = ΓbS(ω) − S(ω)Γb = [Γb, S(ω)].

Substituting Γ̇b into the expression above gives

(d/dt)(R̃Γb) = [R̃, S(ω)]Γb − S(ω̃)R̃Γb − S(kpσR)R̃Γb + R̃[Γb, S(ω)]
            = (R̃S(ω) − S(ω)R̃)Γb + R̃(ΓbS(ω) − S(ω)Γb) − S(ω̃)R̃Γb − S(kpσR)R̃Γb
            = R̃S(ω)Γb − S(ω)R̃Γb + R̃ΓbS(ω) − R̃S(ω)Γb − S(ω̃)R̃Γb − S(kpσR)R̃Γb
            = R̃ΓbS(ω) − S(ω)R̃Γb − S(ω̃)R̃Γb − S(kpσR)R̃Γb
            = [R̃Γb, S(ω)] − S(ω̃)R̃Γb − S(kpσR)R̃Γb,

which completes the proof of the lemma.
Lemma 4.2.5 ėR = −tr( (d/dt)(R̃Γb) ) = −2ω̃ᵀσR − 2kp∥σR∥².
Proof. First, recall from Lemma 4.2.3 that

eR = Σ_{i=1}^{n} ki − tr(R̃Γb).

Differentiating with respect to time,

ėR = −tr( (d/dt)(R̃Γb) )
   = −tr( [R̃Γb, S(ω)] − S(ω̃)R̃Γb − S(kpσR)R̃Γb )                 (by Lemma 4.2.4)
   = −tr( [R̃Γb, S(ω)] ) + tr( S(ω̃)R̃Γb ) + tr( S(kpσR)R̃Γb )
   = tr( S(ω̃)R̃Γb ) + tr( S(kpσR)R̃Γb )                            (since tr([R̃Γb, S(ω)]) = 0)
   = tr( R̃ΓbS(ω̃) ) + tr( R̃ΓbS(kpσR) )                            (since tr(AB) = tr(BA))
   = tr( Pₐ(R̃Γb)S(ω̃) ) + tr( Pₐ(R̃Γb)S(kpσR) )                    (using tr(AS(x)) = tr(Pₐ(A)S(x)))
   = −⟨S(ω̃), Pₐ(R̃Γb)⟩ − ⟨S(kpσR), Pₐ(R̃Γb)⟩                       (using tr(S(u)A) = −⟨S(u), Pₐ(A)⟩)
   = −2⟨ω̃, P∨ₐ(R̃Γb)⟩ − 2⟨kpσR, P∨ₐ(R̃Γb)⟩                         (using ⟨S(u), Pₐ(A)⟩ = 2⟨u, P∨ₐ(A)⟩)
   = −2ω̃ᵀP∨ₐ(R̃Γb) − 2(kpσR)ᵀP∨ₐ(R̃Γb)                             (using ⟨u, P∨ₐ(A)⟩ = uᵀP∨ₐ(A))
   = −2ω̃ᵀσR − 2kpσRᵀσR                                            (using P∨ₐ(R̃Γb) = σR)
   = −2ω̃ᵀσR − 2kp∥σR∥².

This concludes the proof of the lemma.
4.3 Stability analysis
The convergence of the attitude observer (4.1) in cascade with the angular velocity observer
(3.15) is stated in the following theorem.
Theorem 4.3.1 (Almost global asymptotic convergence of the cascaded observer)
Consider the kinematics (2.7) and the dynamics (2.11) of a rigid body. The observer defined by (3.15)-(3.16) and (4.1) achieves ω̂ → ω and R̂ → R asymptotically for almost all initial conditions (ω̂(0), R̂(0)) ∈ R³ × SO(3), except an unstable set on SO(3) of zero measure, provided the observer gain kp satisfies kpλω > 1/2.
Proof. Consider the error dynamics of the angular velocity observer (3.9), the attitude observer (4.4), and the following Lyapunov-like function

V = Vω + eR = (1/2) ω̃ᵀMω̃ + (1/2) Σ_{i=1}^{n} ki ∥bi − b̂i∥². (4.12)

By (3.11) and the facts stated in Lemma 3.2.2, the time derivative of (4.12) evaluated along the error dynamics (3.9) and (4.4) is

V̇ = V̇ω + ėR ≤ −λω∥ω̃∥² − 2ω̃ᵀσR − 2kp∥σR∥²
   ≤ −λω∥ω̃∥² + 2∥ω̃∥∥σR∥ − 2kp∥σR∥²
   = −[∥ω̃∥ ∥σR∥] [ λω  −1 ; −1  2kp ] [∥ω̃∥ ; ∥σR∥] ≤ 0, (4.13)

provided 2λωkp > 1. By Barbalat's lemma, it follows that ω̃ → 0 and σR → 0 asymptotically. This gives ω̂ → ω and, asymptotically, either R̂ → R or tr(R̃) = −1. The condition tr(R̃) = −1 corresponds to a set in SO(3) of measure zero, where eR reaches its maximum. By Chetaev's theorem [24], this set is unstable.
Remark 1. The observer system (3.15)-(3.16) in cascade with the attitude observer (4.1) provides both angular velocity and attitude estimates using at least two non-coplanar vector measurements. While the convergence of the angular velocity observer is global and exponential, due to the topological constraints of SO(3) the attitude observer is only almost globally asymptotically convergent [28]. Several improvements on its convergence are available, including the gain selection (a matrix gain instead of the scalar gain), linear acceleration estimation with a velocity sensor to relax the assumption of small linear acceleration, and a switching strategy to achieve global exponential convergence.

Remark 2. The proposed observer has a cascaded structure, making the angular velocity estimation decoupled from the attitude estimation. This facilitates the convergence analysis of both the angular velocity and attitude observers. Moreover, by relying only on vector measurements, the proposed observer provides a backup solution in the event of gyro failure.
Chapter 5
Linear velocity and position observer
In this chapter a novel nonlinear observer for estimating the translational state of a rigid body is presented. This observer assumes that vector measurements are available along with GPS data. The chapter provides a detailed development of the theoretical framework on which the observer is built, including an overview of previous work on full-state estimation, the theoretical proposal of the observer and its stability analysis, as well as a discussion of its main characteristics, advantages, and disadvantages with respect to other proposals.
5.1 Problem statement
Consider the translational motion of a rigid body described by the equations (2.12)-(2.13).
The problem addressed in this section is to design a navigation observer to estimate the
translational state (position p and linear velocity v) using vector measurements, available
from onboard sensors, and a GPS sensor.
From the translational dynamics of a rigid body,

Σₚ : v̇ = ge₃ − (f/m) Re₃. (5.1)

The accelerometer measurement in the body frame can be written as aB = −(f/m)e₃. Therefore, the model of the plant can be written as

Σₚ : v̇ = ge₃ + RaB. (5.2)
We propose the following linear velocity observer in its analytical form

Σₒ : ˙̂v = ge₃ + R̂aB − κ₂(p̂ − p) − κ₃(v̂ − v), (5.3)
where v̂ represents the linear velocity estimation and v the true linear velocity. Note that the
observer (5.3) consists of the translational dynamics (5.2) plus a correction term. Also, note
that the theoretical proposal (5.3) is not directly implementable because it depends implicitly
on v. The implementable version will be written later.
The objectives in the design of the translational observer are:
1. Designing the observer such that the estimated linear velocity v̂ converges to the true linear velocity v exponentially, i.e., v̂ − v → 0 as t → ∞, and the estimated position p̂ converges to the true position p exponentially, i.e., p̂ − p → 0 as t → ∞.

2. Eliminating v so that the proposed observer (5.3) can be implemented using only the accelerometer measurement aB, the attitude estimate R̂, and a GPS position sensor.
5.2 Linear velocity and position observer design
Starting from the analytical form of the observer (5.3), we have

˙̂v = ge₃ + R̂aB − κ₂(p̂ − p) − κ₃(v̂ − v).

We can rearrange the above equation as

˙̂v − κ₃v = ge₃ + R̂aB − κ₂(p̂ − p) − κ₃v̂.

Substituting ṗ = v from (2.12) into the left-hand side gives

ge₃ + R̂aB − κ₂(p̂ − p) − κ₃v̂ = ˙̂v − κ₃v = ˙̂v − κ₃ṗ = (d/dt)(v̂ − κ₃p) = (d/dt) v̄.

This implies that the implementable version of the translational observer can be written as

˙̄v = ge₃ + R̂aB − κ₂(p̂ − p) − κ₃v̂,

with

v̄ = v̂ − κ₃p.

This establishes the basis for the new form of the observer.
Let p̂, v̂ ∈ R³ be the position and linear velocity estimates, respectively. The following translational state observer is proposed:

˙̂p = v̂ − κ₁(p̂ − p), (5.4)

˙̄v = ge₃ + R̂aB − κ₂(p̂ − p) − κ₃v̂, (5.5)

v̂ = v̄ + κ₃p, (5.6)

with initial conditions p̂(0) = p(0) and v̄(0) = v̂(0) − κ₃p(0), where v̂(0) ∈ R³, κi, i = 1, 2, 3, are scalar gains, and R̂ is the attitude estimate provided by the attitude observer (4.1). The following assumption is made:

A4: There exists a scalar cR > 0 such that ∥R̂Rᵀ − I₃∥ ≤ cR∥[p̃ᵀ(t) ṽᵀ(t)]∥, ∀t ≥ t₀ ≥ 0.
5.3 Stability analysis
The convergence of the translational observer (5.4)-(5.6) is stated in the following theorem.
Theorem 5.3.1 (Global exponential stability of the translational state observer.)
Consider the translational motion of a quadrotor described by (2.12)-(2.13). Define the trans-
lational state estimation error as p̃ = p̂− p, ṽ = v̂ − v.
The observer defined by (5.4)-(5.6) guarantees the exponential stability of p̃ = 0 and ṽ = 0 for all initial conditions [p̃ᵀ(0) ṽᵀ(0)] ∈ R³ × R³ with observer gains κi > 0, i = 1, 2, 3.
Proof. Starting by the definition of ṽ = v̂ − v, the time derivative of ṽ, yields:
˙̃v = ˙̂v − v̇
= ˙̄v + κ3ṗ− v̇ using equation (5.6)
= ge3 + R̂aB − κ2(p̂− p)− κ3v̂ + κ3ṗ− v̇ by substituting (5.5)
= ge3 + R̂aB − κ2(p̂− p)− κ3v̂ + κ3v − ge3 −RaB using that ṗ = v and v̇ = ge3 +RaB
= (R̂−R)aB − κ2(p̂− p)− κ3(v̂ − v) rearranging the terms
= (R̂−R)aB − κ2p̃− κ3ṽ
= (R̂RT − I3)RaB − κ2p̃− κ3ṽ.
Also, note that:
˙̃p = ˙̂p− ṗ
= v̂ − κ1(p̂− p)− v using equation (5.4) and (2.12)
= (v̂ − v)− κ1(p̂− p) rearranging the terms
= ṽ − κ1p̃.
The time derivative of v̂ in (5.6) is

˙̂v = ge₃ + R̂aB − κ₂p̃ − κ₃ṽ. (5.7)
Therefore, the dynamics of the translational state estimation error [p̃ᵀ ṽᵀ]ᵀ is

[ ˙̃p ]   [ −κ₁I₃    I₃   ] [ p̃ ]   [        0        ]
[ ˙̃v ] = [ −κ₂I₃  −κ₃I₃ ] [ ṽ ] + [ (R̂Rᵀ − I₃)RaB ] .   (5.8)
Note that the matrix describing the dynamics of the translational state estimation error (5.8) is Hurwitz for κi > 0, i = 1, 2, 3, giving an exponentially stable LTI system with a bounded perturbation satisfying ∥(R̂Rᵀ − I₃)RaB∥ ≤ ∥R̂Rᵀ − I₃∥∥RaB∥ ≤ c̄R∥[p̃ᵀ(t) ṽᵀ(t)]∥ under Assumption A4, where c̄R = cR∥RaB∥. By Lemma 9.1 of [24], it follows that the equilibrium [p̃ᵀ ṽᵀ] = 0 is globally exponentially stable with an appropriate choice of gains κi, i = 1, 2, 3.
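The Hurwitz property invoked in the proof can also be verified numerically, for instance with the gains assumed later in the simulations:

k1 = 1.0;  k2 = 1.0;  k3 = 5.0;
A  = [ -k1*eye(3)      eye(3);
       -k2*eye(3)  -k3*eye(3) ];     % system matrix of the error dynamics (5.8)
disp(max(real(eig(A))))              % strictly negative for any k1, k2, k3 > 0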
Remark 5.3.1 Assumption A4 may be fulfilled either by tuning the attitude observer gains so that it converges sufficiently fast, or by waiting a short time t₀ before starting the translational state observer. Note that this assumption is not critical for the convergence of the translational state observer, because the inertial acceleration RaB of a quadrotor is much smaller than the gravity acceleration and the attitude estimation error (R̂Rᵀ − I₃) is always bounded. If A4 is not satisfied, by Theorem 4.18 of [24] the translational estimation error tends to a neighborhood of the origin with size bounded by ∥R̂Rᵀ − I₃∥∥RaB∥.
The overall proposed navigation observer has a modular structure. It consists of three
cascade modules working together as shown in the diagram Fig. 5.1 to estimate the angular
velocity, the attitude, and the translational state, each of them being designed and run
independently, ensuring their functionality and robustness to failure. This modular feature
also facilitates the convergence analysis and implementation of the navigation observer in
practice.
Figure 5.1: Modular structure of the navigation observer.
Chapter 6
Validation by numerical simulations
In this chapter, a series of numerical simulations are presented to validate the proposed
observer (3.15), (3.16), (3.5), (4.1), (4.2), (5.4)- (5.6) developed in the previous chapters.
Three scenarios were considered in the simulations. The first scenario illustrates the
performance of the observer in an ideal situation where the two reference vectors in the
inertial frame are perpendicular and the on-board vector measurements are noise-free. The
second scenario simulates a more realistic situation where the angle between the reference
vectors is small and the vector measurements are contaminated by a Gaussian noise. Since the
angular velocity estimate (3.15) and (3.16) depends on the inertia matrix, the third scenario
illustrates the robustness of the observer under uncertainty in the inertia matrix.
We have selected an X-shaped quadrotor for simulation and experimental validation pur-
poses. We want to show the performance of the navigation observer in a simulation envi-
ronment and see how it performs in real flights, and compare the results in both situations.
What we expect is that the simulation results will be highly similar to what happens in
reality, which represents an interesting challenge in the implementation.
The quadrotor is a four-rotor helicopter, with each motor and propeller mounted on a symmetric rigid cross structure. Since quadrotors have been very well studied, there is extensive research on them; in addition, they are low-cost compared with other experimental platforms. These reasons make them a suitable technology for academic research.
The quadrotor was simulated to fly a circular trajectory of radius 15 [m] given by r(t) = [15 sin(αt) 15 cos(αt) 5]ᵀ, with α = √(1/15), using Matlab and Simulink as shown in Fig. 6.1. Note that the magnitude of the linear acceleration along this trajectory equals 1 [m/s²]. Since the linear acceleration is much smaller than the gravity acceleration, the reference vectors of the normalized gravity acceleration r1 = [0, 0, −1]ᵀ and the local magnetic field r2 = [0.6626, 0.0544, 0.7469]ᵀ were used. The vector measurements are assumed to be obtained from a low-cost IMU sensor, from which only the accelerometer and magnetometer measurements are used, modeled as b1 = Rᵀr1/∥Rᵀr1∥ and b2 = Rᵀr2/∥Rᵀr2∥. To simplify the implementation, the attitude observer
was implemented in its quaternion representation by

˙̂q = (1/2) q̂ ⊗ [0 (ω̂ + kpσR)ᵀ]ᵀ, (6.1)

where q̂ = [q̂0, q̂vᵀ]ᵀ is the estimate of the attitude in terms of the unit quaternion q ∈ S³, with S³ = {q ∈ R⁴ | ∥q∥ = 1} the unit sphere of dimension 3, and ⊗ denotes the quaternion product. The estimate b̂i can be obtained by b̂i = q̂ ⊗ [0 riᵀ]ᵀ ⊗ q̂*, where q̂* = [q̂0, −q̂vᵀ]ᵀ is the conjugate of q̂. The quaternion estimate resulting from (6.1) was normalized to ensure numerical correctness. The quaternion error is calculated as q̃ = q̂⁻¹ ⊗ q.
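For reference, a minimal Matlab sketch of the quaternion product and of one Euler step of (6.1) is given below, following the formulas above literally; the measurement b1 and the gains are placeholders.

qmul  = @(a,b) [ a(1)*b(1) - a(2:4).'*b(2:4);
                 a(1)*b(2:4) + b(1)*a(2:4) + cross(a(2:4), b(2:4)) ];
qconj = @(q) [q(1); -q(2:4)];
S     = @(u) [0 -u(3) u(2); u(3) 0 -u(1); -u(2) u(1) 0];
r1 = [0; 0; -1];  qh = [0.7874; 0.2; -0.5; -0.3];  qh = qh/norm(qh);
wh = [0.01; 0.01; 0.2];  kp = 1.0;  dt = 0.002;
b1  = [0; 0; -1];                              % measured body vector (placeholder)
tmp = qmul(qmul(qh, [0; r1]), qconj(qh));      % b1_hat = qh (x) [0; r1] (x) qh*
b1h = tmp(2:4);
sigR = S(b1)*b1h;                              % correction term (4.2) with k1 = 1
qh   = qh + 0.5*dt*qmul(qh, [0; wh + kp*sigR]);   % Euler step of (6.1)
qh   = qh/norm(qh);                            % normalization after each step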
Figure 6.1: Circular path flight executed by the quadrotor in Simulink.
6.1 Quadrotor model
The propeller angular speeds, ωi, i = 1, 2, 3, 4 will determine the total thrust f and moments
τ [36].
Figure 6.2: Plus and X configuration of a quadrotor. Image credit [36].
For a plus-configuration quadrotor, the total thrust acting on the quadrotor and the moments produced by the propellers are

[ f  ]   [  cT      cT      cT      cT   ] [ ω1² ]
[ τx ] = [  0      −d·cT    0      d·cT  ] [ ω2² ]
[ τy ]   [ d·cT     0      −d·cT    0    ] [ ω3² ]
[ τz ]   [  cM     −cM      cM     −cM   ] [ ω4² ]   (6.2)
The total thrust and moments produced by the propellers in an X-configuration quadrotor are

[ f  ]   [     cT           cT           cT           cT      ] [ ω1² ]
[ τx ] = [  d·cT·√2/2   −d·cT·√2/2   −d·cT·√2/2    d·cT·√2/2  ] [ ω2² ]
[ τy ]   [  d·cT·√2/2    d·cT·√2/2   −d·cT·√2/2   −d·cT·√2/2  ] [ ω3² ]
[ τz ]   [     cM          −cM           cM          −cM      ] [ ω4² ]   (6.3)
where f represents the total thrust that acts on the quadrotor, τx, τy and τz are the moments, d is the distance between the body center and any motor, cT is the thrust coefficient, cM the moment coefficient, and ωi, i = 1, ..., 4, are the angular speeds of the motors.
Assuming that the speeds of the motors are known at all times, the applied torque τ and thrust f can be calculated through (6.3). The physical data of the real quadrotor were used in the simulation and are summarized in the next section in Tab. 7.2.
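As an illustration, the mixing in (6.3) can be evaluated as in the sketch below; the thrust and moment coefficients, arm length and motor speeds are assumed placeholder values, not the identified parameters of Tab. 7.2.

cT = 1.0e-5;  cM = 2.0e-7;  d = 0.225;     % assumed coefficients [SI units]
s  = sqrt(2)/2;
A  = [    cT        cT        cT        cT;
       d*cT*s   -d*cT*s   -d*cT*s    d*cT*s;
       d*cT*s    d*cT*s   -d*cT*s   -d*cT*s;
          cM       -cM        cM       -cM ];   % X-configuration mixer (6.3)
omega = [500; 510; 495; 505];              % measured motor speeds [rad/s]
ftau  = A*omega.^2;                        % [f; tau_x; tau_y; tau_z]
disp(ftau')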
Given the speed of the motors and the physical parameters of the quadrotor: cT , cM and
d, the components for τ in [Nm] units, can be modeled as the following functions of time:
τx = 0.000019919 + 0.00020641 sin(0.60803 t − 1.6324) (6.4)
τy = −6.0042e−07 + 0.000092638 sin(0.60746 t + 3.1044) (6.5)
τz = 1.3528e−08 + 0.00026022 sin(0.60816 t − 0.068381) (6.6)
By generating τ, we can find the true value of the angular velocity ω by integrating equation (2.11) and the true value of the attitude R by integrating equation (2.7), and compare these results with the estimated values ω̂, R̂ output by the observer (3.15), (3.16), (3.5), (4.1), (4.2). For the translational motion, the trajectory followed by the quadrotor is known, so the true values of the position p, the linear velocity v, and the linear acceleration v̇ are known, as are the models of the accelerometer and magnetometer sensors. The true values are compared with the estimates given by the observer (5.4)-(5.6).
The initial conditions and parameters are shown in Table 6.1.
Table 6.1: Initial conditions and parameters used in simulations.

Initial state    Value                            Units
q(0)             [1 0 0 0]ᵀ
q̂(0)             [0.7874  0.2  −0.5  −0.3]ᵀ
ω(0)             [0.01 0.01 0.2]ᵀ                 [rad/s]
ω̄(0)             [1 1 1]ᵀ                         [rad/s]
b1f(0)           [0 0 −1]ᵀ
b2f(0)           [0 1 0]ᵀ
v̄(0)             [0 0 0]ᵀ                         [m/s]
p̂(0)             [0.2 0.2 0.2]ᵀ                   [m]

Parameters
γf               5.0
kp               1.0
6.2 Error Analysis
In order to assess how closely the estimated values reproduce the true values, we quantify the error between them. For this error analysis, the Root Mean Square Error (RMSE) is used, a common measure of the differences between the values predicted by a model or estimator and the observed values.
Definition 6.2.1 The root mean squared error (RMSE) is defined as

RMSE = √( (1/N) Σ_{i=1}^{N} ∥ŷi − yi∥² ),

where yi ∈ R³ is the true value of a variable, ŷi ∈ R³ its estimate, and N the number of observations available for analysis.
The equation above is used to evaluate the errors of the observers for angular velocity, position, and linear velocity. In the case of quaternions, as established previously, the error between the true and estimated quaternions is calculated as q̂⁻¹ ⊗ q, which is compared with the identity quaternion [1 0 0 0]ᵀ.
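For completeness, the RMSE of Definition 6.2.1 can be computed as in the following sketch, where the true and estimated signals are placeholder data:

y    = randn(3, 500);                 % true values (placeholder)
yhat = y + 0.01*randn(3, 500);        % estimates (placeholder)
rmse = sqrt( mean( sum( (yhat - y).^2, 1 ) ) );   % Definition 6.2.1
disp(rmse)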
6.3 Scenario I. Noise-free case
This scenario represents the case where the vector measurements are noise-free. The parameters were selected to assign the same confidence to the magnetometer as to the accelerometer, because of the very small linear acceleration. Note that since the gravity and the Earth's magnetic field are not perpendicular to each other, there is a small angle between them; this represents the situation of a location above the Equator. The scalar gains were selected as k1 = 5.0, k2 = 5.0, κ1 = 1.0, κ2 = 1.0, κ3 = 5.0, Λ1 = diag[0.15, 0.15, 0.15], Λ2 = diag[0.15, 0.15, 0.15]. Fig. 6.3 shows the performance of the observer.
We observe from the plots in Fig. 6.3 that, under ideal conditions, the variables show an exponential convergence to zero within approximately the first five seconds. In the following set of plots, Fig. 6.4, an error analysis in terms of the Root Mean Square Error is shown. We can see that the angular velocity error settles after 10 s within ω̃RMSE ≤ 1 × 10⁻⁴ rad/s, the quaternion estimation error within (q̂⁻¹ ∗ q)RMSE ≤ 1 × 10⁻⁶, and the position and linear velocity errors within p̃RMSE ≤ 1 × 10⁻⁶ m and ṽRMSE ≤ 1 × 10⁻⁶ m/s, respectively.
6.4 Scenario II. Noisy case
In this scenario, the vector measurements were contaminated by Gaussian noise nb,i ∼ N(0, 0.1²). In this case, the scalar gains were selected as k1 = 5.0, k2 = 5.0, κ1 = 1.0, κ2 = 1.0, κ3 = 5.0, Λ1 = diag[0.0025, 0.0025, 0.0025], Λ2 = diag[0.0025, 0.0025, 0.0025]. Fig. 6.5 illustrates the results.
From the plots shown in Fig. 6.5, we can see that under noisy conditions the convergence to zero is achieved with a settling time of approximately 20 s. Furthermore, the plots in Fig. 6.6 show the Root Mean Square Error for each variable, which remains within approximately 1 × 10⁻².
6.5 Scenario III. Uncertain inertia matrix
This scenario shows the performance of the observer in the presence of uncertainty in the inertia matrix. The inertia matrix in the angular velocity observer (3.15), (3.16), (3.5) was perturbed by ∆M, with Mpert = M + ∆Mi, i = 1, 2. The values of ∆M1 and ∆M2 were selected as

∆M1 = diag[0.0003, 0.0006, −0.0021],    ∆M2 = diag[−0.0015, −0.005, 0.002],

such that Mpert is positive definite, with ∥∆M1∥ ≈ 0.1∥M∥ and ∥∆M2∥ ≈ 0.2∥M∥. The scalar gains were selected as k1 = 5.0, k2 = 5.0, κ1 = 1.0, κ2 = 0.2, κ3 = 0.5, Λ1 = diag[0.25, 0.25, 0.25], Λ2 = diag[0.25, 0.25, 0.25]. The behavior of the observer under these circumstances is illustrated in Fig. 6.7-Fig. 6.8.
Figure 6.3: Scenario I (Noise-free case): (top-down) angular velocity error, attitude error q̂⁻¹ ∗ q, position error p − p̂, and linear velocity error v − v̂.
It is interesting to note that the estimation errors converge faster than in the noise-free case. However, the errors increase after a short period, as shown in Fig. 6.7. A similar behavior occurs with the observer of [3]. This indicates that
Figure 6.4: Scenario I (Noise-free case): (top-down) Root Mean Square Error of the angular velocity error ω̃, attitude error q̂⁻¹ ∗ q, position error p̃, and linear velocity error ṽ.
the proposed observer is sensitive to uncertainty in the inertia matrix, and this sensitivity may become relevant when the uncertainty is greater than or equal to 20%.
These results encourage us to establish that the proposed observer is a good candidate
Figure 6.5: Scenario II (Noisy case): (top-down) estimation errors for the angular velocity ω̃, attitude q̂⁻¹ ∗ q, position p̃, and linear velocity ṽ.
for implementation in autonomous aerial vehicles.
Figure 6.6: Scenario II (Noisy case): (top-down) Root Mean Square Error of the angular velocity error ω̃, attitude error q̂⁻¹ ∗ q, position error p̃, and linear velocity error ṽ.
Figure 6.7: Scenario III (Uncertain inertia matrix): norm of (top-down) the angular velocity estimation error, the position estimation error, and the linear velocity estimation error, for Mpert = M, Mpert = M + ∆M1, and Mpert = M + ∆M2.
Figure 6.8: Scenario III (Uncertain inertia matrix): attitude error q̂⁻¹ ∗ q for Mpert = M, Mpert = M + ∆M1, and Mpert = M + ∆M2.
Chapter 7
Experimental validation
In this chapter, we present validation results for the angular velocity observer, the attitude observer, and the translational state observer. The experiments were carried out in our laboratory. A detailed description of the construction of an experimental platform based on a quadrotor is given, aimed at testing the proposed observer in real-world applications. This platform was developed from scratch in the Mechatronics Laboratory at the Unidad de Alta Tecnología campus Juriquilla, UNAM Querétaro. The main objective of this platform is to close the gap between the theoretical design of our estimation and control algorithms for autonomous vehicles and their real-world implementation.
An overview of the hardware and firmware architecture is presented and the experimental
results are discussed.
7.1 Experimental platform
The experimental platform consists of an open-architecture quadrotor with a Pixhawk autopilot on board, which contains an IMU with an accelerometer, a magnetometer, and a gyroscope. The embedded algorithm in the IMU outputs the bias-corrected angular velocity and the attitude in Euler angles, which are taken as the "true" angular velocity and attitude of the quadcopter to compare against.

The firmware is ArduPilot, which was modified to acquire the signals from the speed sensors of the brushless DC motors of the quadcopter, in order to determine the total torque acting on the quadrotor. The experiment involves programming an autonomous mission on the quadrotor to execute flights in an outdoor environment. The flight data are processed to perform the analysis offline. The algorithm uses data from the accelerometer and the magnetometer to estimate the angular velocity of the quadcopter. The estimated angular velocity is forwarded to the Explicit Complementary Filter to estimate the attitude of the quadcopter in terms of quaternions; for an intuitive comparison, Euler angles are also plotted. Therefore, the output of the algorithm is the estimated angular velocity ω̂ and the estimated attitude q̂, θ̂, ϕ̂, ψ̂. These data are compared with the angular velocity ω and the attitude in terms of Euler angles θ, ϕ, ψ provided by the microcontroller, which uses a Kalman Filter to output the attitude.
7.1.1 Hardware architecture
The platform that carries all the equipment of the quadrotor is illustrated in Fig. 7.1(a). The most common configurations are the plus (+) and cross (X) shapes shown in Fig. 7.1(b). The diagonal size (in mm) is the diameter of the circumcircle determined by the motor axes; in general, the distance between diagonally opposite motor axes is used to indicate the size of an airframe (Fig. 7.1(c)). The diagonal size restricts the propeller size, which in turn determines the maximum thrust and thus the payload capacity.
(a) F450 frame. (b) Quadrotor main configurations. (c) Diagonal size.
Figure 7.1: Quadrotor frame.
Some of the main components for the platform construction are illustrated in Fig. 7.2
and are described below.
Figure 7.2: quadrotor components.
Frame
Flight Controller
A flight controller is an integrated circuit capable of executing commands stored in its memory, which monitors and controls the UAV; basically, it is the brain of the vehicle. It is composed of a central processing unit, memory, and a control unit. The flight controller is connected to a set of sensors that provide information about the vehicle's height, attitude, and speed. The flight controller filters and fuses this information to calculate the desired speed for each of the four motors, and sends these desired speeds to the Electronic Speed Controllers (ESCs), which translate them into signals that the motors can read. Another task performed by the controller board is to maintain communication between the vehicle and the ground station, for example, to communicate flight status and battery level, and to send and receive information. A wide variety of flight controllers are available on the market. An overview of advanced open-source autopilot projects can be found in [25, 48].
Figure 7.3: Pixhawk Flight Controller.
We selected a Pixhawk (Fig. 7.3) as the flight controller due to its low cost and high
performance. These features make it very popular for aerial robots [4, 17, 32]. Some of its
main characteristics are shown below [35].
• Processor
– 32-bit ARM Cortex M4 core with FPU
– 168 Mhz/256 KB RAM/2 MB Flash
– 32-bit failsafe co-processor
• Sensors
– MPU6000 as the main accelerometer and gyroscope
– ST Micro 16-bit gyroscope
– ST Micro 14-bit accelerometer/compass
• Power
– Ideal diode controller with automatic failover
– Servo rail high-power (7 V) and high-current ready
– All peripheral outputs over-current protected, all inputs ESD protected
• Interfaces
– 5x UART serial ports, 1 high-power capable, 2 with HW flow control
– PPM sum signal
– PWM or voltage input
– I2C, SPI, 2x CAN, USB
– 3.3V and 6.6V ADC inputs
• Dimensions
– Weight 38 g, Width 50 mm, Height 15.5 mm , Length 81.5 mm
Brushless Motor.
The function of brushless DC motors is to convert electrical energy stored in the battery into mechanical energy for the propeller. The size of the motor is generally represented by its stator size with a four-digit number, such as 2212, in which the first two digits indicate the stator diameter (mm) and the last two indicate the stator height (mm).

The KV value of a brushless DC motor is the number of RPM at which the motor revolves when 1 V is applied with no load attached. For example, 1000 KV means that when 1 V is applied, the no-load motor speed will be 1000 RPM. The maximum current or power that the motor can handle is denoted by two numbers; for example, a maximum continuous current of 25 A/30 s means that the motor can work safely with a continuous current of up to 25 A, and may burn out if this value is exceeded for more than 30 s. The quadrotor of the platform uses four A2212/10T 1000 KV brushless motors with Hobbyking 30 A ESCs, as shown in Fig. 7.4.
Figure 7.4: Brushless motor and ESC used in our quadrotor.
Electronic Speed Controller (ESC).
An ESC is an electronic circuit device that controls and adjusts the speed of an electric motor. A signal from the flight controller causes the ESC to raise or lower the voltage to the motor as required, thus changing the speed of the propeller. An ESC can also handle active or regenerative braking, a process by which mechanical energy from a motor is converted into electrical energy that can be used to recharge the battery: during periods when the motor is decelerating, the motor can act as a generator, and the ESC handles the excess current that can be fed back into the battery.

An electronic speed controller has three sets of wires: one plugs into the main battery, the second is a standard servo wire that plugs into the throttle channel of the receiver, and the third powers the motor (Fig. 7.5).
Figure 7.5: ESC Connection Diagram
Propeller.
The propeller produces the thrust and torque used to control a quadrotor. The motor efficiency varies with the output torque, which depends on the type, size, speed, and other factors of the propeller. Therefore, a good match should ensure that the motor works in a high-efficiency condition, which guarantees less power consumption for the same thrust and thereby extends the endurance of the quadrotor. Choosing appropriate propellers is thus a very direct way to improve the performance and efficiency of a quadrotor.

Generally, the propeller model is described by a four-digit number, such as a 1045 (or 10 × 45) propeller, in which the first two digits represent the diameter of the propeller (in inches) and the last two represent the propeller pitch (also referred to as screw pitch or blade pitch, in inches). The quadrotor of the platform uses 1045 propellers (Fig. 7.6).
Figure 7.6: Propeller 1045.
Power Module.
The power module (Fig. 7.7) is the bridge between the main power supply and the platform. It has two XT60 connectors (female-male), one for the battery connection and the other for the motors, which are connected through a power distribution board. It also has a current/voltage sensor associated with the battery to report the remaining charge, which is useful to stop the flight before the battery discharges completely.
Figure 7.7: Power Module.
Radio Control.
An RC transmitter ( Fig. 7.8(a)) is used to transmit commands from remote pilots to
the corresponding receiver. The receiver in Fig. 7.8(b) passes the commands to the autopilot
after decoding them. Finally, the quadrotor flies according to the commands.
(a) RC Transmitter (b) Receiver
Figure 7.8: Fly Sky RC transmitter and receiver.
Battery.
An electric battery is a device with one or more electrochemical cells that convert stored chemical energy into electrical energy to power all the components of the system. Batteries are rechargeable and come in many types, such as Lithium Polymer (LiPo), Nickel Metal Hydride (NiMH), Lithium-ion (Li-ion), and Nickel Cadmium.

Battery cells consist of a positive electrode (cathode), a negative electrode (anode), and an electrolyte, generating a potential difference or voltage that allows ions to move between the electrodes and the electric current to flow. LiPo batteries are the most common for aerial vehicles.

In lithium polymer batteries, each cell has a nominal voltage of 3.7 V, with a maximum of 4.2 V and a minimum of 3.0 V. These voltages must be strictly observed to avoid irreparable damage. The number of cells is specified with the nomenclature 1s-6s, where 1s means one cell, 2s means two cells, and so on. We use a 3-cell, 2200 mAh Turnigy LiPo battery with a 30-40 C discharge rating, as can be seen in Fig. 7.9.
Figure 7.9: Turnigy Lipo Battery
GPS sensor.
The Global Positioning System (GPS) is a U.S.-owned utility that provides users with positioning, navigation, and timing (PNT) services. The system consists of three segments: the space segment, the control segment, and the user segment. The U.S. Space Force develops, maintains, and operates the space and control segments. The space segment consists of a nominal constellation of 24 operating satellites that transmit one-way signals giving the current GPS satellite position and time. The control segment consists of worldwide monitor and control stations that maintain the satellites in their proper orbits through occasional command maneuvers and adjust the satellite clocks; it tracks the GPS satellites, uploads updated navigational data, and maintains the health and status of the satellite constellation. The user segment consists of the GPS receiver equipment, which receives the signals from the GPS satellites and uses the transmitted information to calculate the vehicle's position and time.

The GPS receiver mounted on the quadrotor is a UBlox GPS + Compass module (Fig. 7.10). The recommended orientation is to mount the module with the arrow facing toward the front of the vehicle and in the same direction as the arrow on the autopilot.
Figure 7.10: UBlox GPS plus Compass module.
RPM Sensors.
The RPM sensor detects voltage changes on the wires of the brushless motors and outputs an RPM signal used to measure the motor speed. This type of RPM sensor can work with several speed control systems. Common types of RPM sensors that can be used in ArduPilot are: Hall effect, ESC telemetry, electrical commutation, and optical [5]. We selected four Hobbywing RPM sensors (Fig. 7.11) to measure the speed of each motor [21].
Figure 7.11: Hobbywing Brushless RPM Sensor
The quadrotor developed in Mechatronics Laboratory is shown in Fig. 7.12.
Figure 7.12: quadrotor developed in our laboratory
7.1.2 Software architecture
Pixhawk supports software development kits (SDKs) called ArduPilot and PX4. PX4 is an open-source flight control software for drones and other unmanned vehicles. The project provides a flexible set of tools for drone developers to share technologies and create solutions for drone applications. PX4 provides a standard for delivering the hardware support and software stack, allowing an ecosystem to build and maintain hardware and software in a scalable way. PX4 is part of Dronecode, a non-profit organization administered by the Linux Foundation to foster the use of open-source software on flying vehicles. Dronecode also hosts QGroundControl, MAVLink, and the SDK.

The ArduPilot software suite consists of navigation software (typically referred to as firmware when it is compiled to binary form for microcontroller hardware targets) running on the vehicle, along with ground control software including Mission Planner, APM Planner, QGroundControl, MAVProxy, Tower, and others.
7.1.2.1 Ardupilot
ArduPilot is an advanced open-source autopilot available for autonomous unmanned vehicle systems. It has been under development since 2010 by a worldwide team of professional engineers, computer scientists, and community contributors. ArduPilot firmware works on a
wide variety of different hardware to control unmanned vehicles of all types: multicopters,
airplanes, helicopters, rovers, antenna trackers and submarines. Coupled with ground control
software, unmanned vehicles running ArduPilot can have advanced functionality including
real-time communication with operators. It is continually being expanded to provide support
for new emerging vehicle types.
The basic structure of ArduPilot is broken up into 5 main parts:
• Vehicle Code: The vehicle directories are the top level directories that define the
firmware for each vehicle type. Currently there are 6 vehicle types: Plane, Copter,
Rover, Sub, Blimp and AntennaTracker. There are a lot of common elements between
different vehicle types. Along with the cpp files, each vehicle directory contains a
wscript file which lists library dependencies.
• Shared libraries: These libraries include sensor drivers, attitude and position estimation
(aka EKF) and control code (i.e. PID controllers)
• Hardware abstraction layer (AP HAL): Is how the ArduPilot is portable to lots of
different platforms, for example AVR based boards, Pixhawk boards or Linux based
boards.
• Tools directories: The tools directories are miscellaneous support directories.
• External Support code: Some platforms need external support code to provide ad-
ditional features or board support. Currently the external trees are: PX4NuttX,
PX4Firmware, uavcan and mavlink.
Our project needs to customize the ArduPilot firmware because we need to know, at all times, the torque produced by the propellers acting on the drone. This can be done by measuring the speeds of the four motors. Currently, the latest version of the firmware only supports two ports for RPM sensors, whereas we need four. The appendix therefore presents the code modified and implemented on the Pixhawk in order to read, write, and log the readings of the four sensors.
7.1.2.2 Mission planner
Mission Planner (Fig. 7.13) [34] is a ground control station application for the ArduPilot open-source autopilot project for Plane, Copter, and Rover, compatible with Windows only. Mission Planner can be used as a configuration utility or as a dynamic control supplement for autonomous vehicles; it serves as the interface between the user and the autopilot board. This program offers a wide range of options and information on the different aspects of the ArduPilot hardware and firmware. Some of its functions are as follows:

• Load the firmware (the software) into the autopilot board (i.e., Pixhawk series) that controls the vehicle.
• Setup, configure, and tune your vehicle for optimum performance.
• Plan, save, and load autonomous missions into your autopilot with simple waypoint entry on Google or other maps.
• Download and analyze mission logs created by your autopilot.
• Interface with a PC flight simulator to create a full hardware-in-the-loop UAV simulator.
• Monitor the vehicle's status while in operation with appropriate telemetry hardware.
Figure 7.13: Mission Planner Main Menu
The main features used in this project were:
• Installing Mission Planner
• Loading custom firmware
• Connecting Mission Planner to the autopilot
• Sensor calibration
• Programming an autonomous mission
These tasks are briefly summarized below.
Installing Mission Planner
Mission Planner was designed for native Windows installation. Download the latest Mis-
sion Planner installer [33] and double click on the downloaded .msi file to run the installer.
Follow the instructions to complete the setup process. The installation utility will automat-
ically install any necessary software drivers.
Loading custom Firmware
After customizing the firmware, an .apj executable file is generated. This file was
loaded into the Pixhawk hardware. First, open Mission Planner, then select Initial Setup and
choose the option Load custom firmware as shown in Fig. 7.14. Select the path where the
.apj file is located. If all goes well, a status will appear on the bottom right including
the words “erase. . . ”, “program. . . ”, “verify. . . ”, and “Upload Done”, indicating that the firmware
has been successfully uploaded to the board. Wait until this occurs before pressing CONNECT.
Figure 7.14: Load custom firmware
Connect Mission Planner to Autopilot
Connect the Pixhawk autopilot to a direct USB port on the computer using the micro USB
cable, as shown in Fig. 7.15. Select the COM port drop-down in the upper-right corner
of the window near the Connect button. Select AUTO or the specific port for your board.
Set the Baud rate to 115200 and press Connect.
Figure 7.15: Autopilot to computer connection
Sensor calibration
The first time the vehicle is set up, some required hardware components need to be configured.
This process starts by selecting the frame orientation and configuring the RC transmitter/re-
ceiver, compass, accelerometer and ESCs using Mission Planner. These sensors require a
one-time calibration during the Mandatory Hardware setup step.
Radio Control Calibration
RC transmitters allow the pilot to set the flight mode, control the vehicle’s movement and
orientation and also turn on/off auxiliary functions. This involves capturing each RC input
channel’s minimum, maximum and “trim” values so that ArduPilot can correctly interpret
the input. Some green bars should appear, showing that ArduPilot is receiving input from the
transmitter/receiver, Fig. 7.16. Then move the transmitter’s roll, pitch, throttle and yaw
sticks, ensure the green bars move in the correct direction, and click Done when finished. Mission
Planner will show a summary of the calibration data.
Figure 7.16: RC calibration
Accelerometer Calibration
Basic accelerometer calibration using Mission Planner is a mandatory procedure. Mission
Planner will prompt you to place the vehicle in each calibration position as shown in Fig. 7.17.
Press any key to indicate that the autopilot is in position and then proceed to the next
orientation. It is important that the vehicle is kept still immediately after pressing the key
for each step. Click the Done button once each position is reached and held still.
When you have completed the calibration process, Mission Planner will display “Calibration
Successful!”.
Figure 7.17: Accelerometer calibration. Diagram credit: [1].
Compass calibration
For basic compass calibration, click the “Onboard Mag Calibration” and “Start” button
Fig. 7.18. Hold the vehicle in the air and rotate it so that each side (front, back, left,
right, top and bottom) points down towards the earth for a few seconds in turn. Consider
a full 360-degree turn with each turn pointing a different direction of the vehicle to the
ground. It will result in 6 full turns plus possibly some additional time and turns to confirm
the calibration. As the vehicle is rotated the green bars should increase in length until the
calibration completes. Finally, reboot the autopilot and the calibration will be completed.
It is not necessary to recalibrate the compass when the vehicle is flown at a new location
because ArduPilot includes a “world magnetic model” which allows converting the location’s
magnetic North to true North without recalibrating. It is important that the vehicle has a
good 3D GPS lock when the compass calibration is performed, in order to ensure the best setup.
If necessary, move outdoors to obtain a good 3D GPS lock before doing the compass calibration.
Figure 7.18: Compass calibration
ESC Calibration
Electronic speed controllers are responsible for spinning the motors at the speed requested
by the autopilot. Most ESCs need to be calibrated so that they know the minimum and
maximum PWM values that the flight controller will send. The recommended procedures are the
“All at once” calibration, the “Manual ESC-by-ESC” method, or the Semi-Automatic ESC-by-ESC
calibration routine in Mission Planner. We followed the Manual ESC-by-ESC method:
1. Plug one of your ESC three-wire cables into the throttle channel of the RC receiver.
2. Turn on the transmitter and set throttle stick to maximum.
3. Connect the LiPo battery. You will hear a musical tone then two beeps.
4. After the two beeps, lower the throttle stick to full down.
5. You will then hear a number of beeps (one for each battery cell you’re using) and finally
a single long beep indicating the end points have been set and the ESC is calibrated.
6. Disconnect battery. Repeat these steps for all ESCs.
7. If it appears that the ESCs did not calibrate, then the throttle channel on the transmitter
might need to be reversed.
Figure 7.19: ESC calibration
RPM sensor set up
In order to configure the RPM sensors, go to Configure Optional Hardware. First the
board needs to be configured to allow PWM pins to be used as general purpose input/outputs
(GPIOs). This is done using the parameter BRD_PWM_COUNT: reduce the PWM count to free up
pins to be used as GPIOs. On non-Pixhawk boards the PWM count includes all PWM outputs; on
Pixhawk boards this parameter only affects the AUX pins. Write the parameter and reboot the
autopilot. Since the Pixhawk board has 6 auxiliary ports and 4 of them are used as GPIOs,
one for each motor, 6 − 4 = 2 is the value left in the BRD_PWM_COUNT parameter.
RPM2_TYPE: set to 2 if you are using the AUX ports on a Pixhawk. If using a PWM port on a
Pixhawk or a different board, set RPM_TYPE to 1.
RPM2_SCALING: the scaling value is a function of the number of poles of the motor and should
be the reciprocal of the number of poles, e.g., a 14-pole motor needs a scaling value of
1/14 ≈ 0.071428.
RPM_PIN: the pin number assigned to be used as a GPIO. For our project, the parameters were
configured as shown in Table 7.1.
For further explanation, extensive documentation can be found in the ArduPilot community
forum [39].
Programming an Autonomous Mission
Mission Planner allows planning the path of the vehicle by selecting points or ”way-
points”, and other mission-related options. Mission Planner provides a list of the commands
appropriate for the current vehicle type, and adds column headings for the parameters that
need to be set by the user. These include navigation commands to travel to waypoints
and loiter in the vicinity, DO commands to execute specific actions (for example taking
pictures), and condition commands that can control when DO commands are able to run.
Latitude and Longitude are entered by clicking on the map. Altitude is relative to the launch
altitude/home position, so if it is set to 100m, for example, it will fly 100m above the pilot.
Default Alt is the default altitude when entering new waypoints.
Table 7.1: RPM sensor parameter setup
Parameter Value
BRD_PWM_COUNT 2
RPM2_TYPE 1
RPM2_SCALING 0.071428
RPM_PIN 55
RPM2_PIN 54
RPM3_PIN 53
RPM4_PIN 52
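As a reference, the same parameters can also be written from a script over MAVLink instead of through the Mission Planner full-parameter list. The following is a minimal sketch under stated assumptions (the pymavlink package is installed and the autopilot is reachable on the serial port /dev/ttyACM0, a hypothetical name); it is illustrative only, not the procedure used in this work.

from pymavlink import mavutil

# open the MAVLink connection and wait for the autopilot to respond
master = mavutil.mavlink_connection('/dev/ttyACM0', baud=115200)
master.wait_heartbeat()

# parameters of Table 7.1
params = {
    'BRD_PWM_COUNT': 2,
    'RPM2_TYPE': 1,
    'RPM2_SCALING': 0.071428,
    'RPM_PIN': 55,
    'RPM2_PIN': 54,
    'RPM3_PIN': 53,
    'RPM4_PIN': 52,
}

# values are sent as floats over the MAVLink parameter protocol
for name, value in params.items():
    master.mav.param_set_send(master.target_system, master.target_component,
                              name.encode('utf-8'), float(value),
                              mavutil.mavlink.MAV_PARAM_TYPE_REAL32)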
Verify height means that the Mission Planner will use Google Earth topology data to
adjust your desired altitude at each waypoint to reflect the height of the ground beneath.
Create a mission by clicking on the map at the locations of the waypoints, see for example Fig.
7.20, then select Write and the mission will be sent to the board and saved in EEPROM. To
confirm that the mission was saved, select Read. You can save multiple mission files to your
local hard drive by selecting Save WP File, or read in files with Load WP File.
Figure 7.20: Autonomous Mission executing a triangle path.
For our experimental validation, we programmed autonomous missions consisting of a circular
flight (Fig. 7.21) and an 8-shape flight.
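As an illustration of the waypoint file format, the following sketch generates a circular set of waypoints and writes them in the “QGC WPL 110” text format that Mission Planner reads with Load WP File. The file name, coordinates, radius and altitude are hypothetical placeholders, not the values of the actual missions; command 16 is MAV_CMD_NAV_WAYPOINT and frame 3 is relative altitude.

import math

def circular_mission(lat0, lon0, radius_m=15.0, alt_m=10.0, n_points=12):
    # header required by Mission Planner / QGroundControl waypoint files
    lines = ["QGC WPL 110"]
    # item 0 is the home position (current=1, frame 0, command 16)
    lines.append(f"0\t1\t0\t16\t0\t0\t0\t0\t{lat0:.7f}\t{lon0:.7f}\t0\t1")
    for i in range(n_points + 1):
        ang = 2.0 * math.pi * i / n_points
        # flat-Earth conversion from meters to degrees around the reference point
        dlat = radius_m * math.cos(ang) / 111319.5
        dlon = radius_m * math.sin(ang) / (111319.5 * math.cos(math.radians(lat0)))
        lines.append(f"{i + 1}\t0\t3\t16\t0\t0\t0\t0\t{lat0 + dlat:.7f}\t{lon0 + dlon:.7f}\t{alt_m}\t1")
    return "\n".join(lines)

# hypothetical reference point near the flight field
with open("circle_mission.waypoints", "w") as f:
    f.write(circular_mission(20.7030, -100.4470))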
Figure 7.21: Autonomous Mission executing a circular trajectory.
7.2 Parameter identification
Before the experimental validation, we first need to know the physical parameters of our
quadrotor. These parameters are: the mass of the quadrotor m, the arm length d, its inertia
matrix I, the thrust coefficient cT and the torque coefficient cM . The physical parameters of
the UAV were determined experimentally. The mass of the quadrotor body m was measured
using a digital balance, and the distance between each motor and the center of the quadrotor d
was measured using a ruler. The principal moments of inertia were estimated using a CAD model,
a theoretical model, and an algorithm. The thrust coefficient and the torque coefficient were
determined by static tests in the laboratory. A brief description of the experiments carried
out in the laboratory to identify the physical parameters of the system is presented below.
7.2.1 Moment of inertia
The inertia matrix of a rigid body is expressed as:
\[
I = \begin{bmatrix} I_{xx} & -I_{xy} & -I_{xz} \\ -I_{yx} & I_{yy} & -I_{yz} \\ -I_{zx} & -I_{zy} & I_{zz} \end{bmatrix}
\]
Here $I_{xy} = I_{yx}$, $I_{xz} = I_{zx}$ and $I_{yz} = I_{zy}$. Assuming that the quadrotor has a symmetric
mass distribution, then $I_{xy} = I_{xz} = I_{yz} = 0$ and the inertia matrix simplifies to:
\[
I = \begin{bmatrix} I_{xx} & 0 & 0 \\ 0 & I_{yy} & 0 \\ 0 & 0 & I_{zz} \end{bmatrix} \tag{7.1}
\]
where $I_{xx}$, $I_{yy}$ and $I_{zz}$ are the principal moments of inertia. The principal moments of inertia
were determined by two methods: computer-aided design (CAD) software and an analytical
method using the parallel axis theorem for homogeneous solids.
7.2.1.1 Quadrotor CAD model
A CAD model of the quadrotor was used to determine the moment of inertia using software
tools; all parts of the quadrotor were modeled and the masses and dimensions of each component
were known.
Figure 7.22: Quadrotor CAD model
\[
I = \begin{bmatrix} 0.011202 & 0 & 0 \\ 0 & 0.011621 & 0 \\ 0 & 0 & 0.020196 \end{bmatrix} \,\mathrm{kg\,m^2}
\]
7.2.1.2 Analytical method
To determine the moment of inertia analytically, the quadrotor vehicle was separated into
different components. The model of each component was represented by a simplified geometric
shape of constant internal density, and the weight of each component was then measured. The
moment of inertia of each component about the axes of the vehicle is determined using the
parallel axis theorem, and the total moment of inertia of the vehicle about each axis is the sum
of the inertias of every component about that axis. The components that need to be modeled
are the motors, the ESCs, the arms, and the central hub. The detailed calculation with the
parallel axis theorem is given in Appendix B; the result is:
\[
I = \begin{bmatrix} 0.0115 & 0 & 0 \\ 0 & 0.0119 & 0 \\ 0 & 0 & 0.0126 \end{bmatrix} \,\mathrm{kg\,m^2}
\]
7.2.2 Determination of the motor-propeller thrust coefficient
The steady-state thrust generated by a hovering rotor (i.e., a rotor that is not translating
horizontally or vertically) in free air may be modeled using momentum theory as:
\[
T_i = C_T \rho A_{r_i} r_i^2 \omega_i^2
\]
where, for rotor $i$, $A_{r_i}$ is the area of the rotor disk, $r_i$ is the radius, $\omega_i$ is the angular
velocity of the motor, $C_T$ is the thrust coefficient that depends on the geometry and profile of
the rotor, and $\rho$ is the density of the air. A simple lumped-parameter model is
\[
T_i = c_T \omega_i^2 \tag{7.2}
\]
where $c_T = C_T \rho A_{r_i} r_i^2 > 0$ is modeled as a constant that can be determined by a static thrust
test. Identifying the thrust constant experimentally has the advantage that it will also naturally
incorporate the effect of drag on the airframe induced by the rotor flow [29]. In our project,
we used a Racerstar Brushless Motor Thrust Stand V3 for the static thrust test; it is a
high-precision thrust stand with digital monitoring of speed, voltage and power parameters.
The setup is shown in Fig. 7.23. According to (7.2), we can plot the thrust force vs. the
motor-propeller speed in order to determine the constant $c_T$. The result of this experiment is
shown in Fig. 7.24.
Figure 7.23: Test bench for motor-propeller characterization to measure thrust force and motor
speeds.
Figure 7.24: Thrust force vs. motor-propeller speed.
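For reference, the coefficient $c_T$ in (7.2) can be extracted from the recorded pairs of motor speed and thrust by a linear least-squares fit of $T$ against $\omega^2$; the torque coefficient $c_M$ in (7.3) is obtained in the same way from the torque measurements. The following is a minimal sketch under stated assumptions (NumPy available, speeds logged in rpm, thrust in newtons); the data arrays are placeholders, not the measured values.

import numpy as np

# placeholder static-test data: motor speed (rpm) and measured thrust (N)
rpm = np.array([2000.0, 3000.0, 4000.0, 5000.0, 6000.0])
thrust = np.array([0.55, 1.24, 2.21, 3.46, 4.98])

omega = rpm * 2.0 * np.pi / 60.0          # convert rpm to rad/s
w2 = omega ** 2

# least-squares fit of T = c_T * omega^2 (single-column design matrix)
c_T, *_ = np.linalg.lstsq(w2[:, None], thrust, rcond=None)
print(f"c_T = {c_T[0]:.3e} N/(rad/s)^2")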
7.2.3 Determination of the motor-propeller torque coefficient
The reaction torque (due to rotor drag) acting on the airframe generated by a hovering rotor
in free air may be modeled as
\[
M_i = c_M \omega_i^2 \tag{7.3}
\]
where $M_i$ is the reaction torque of the $i$-th propeller acting on the fuselage, $c_M = (1/4\pi^2)\,\rho D_p^5 C_M$,
$D_p$ denotes the diameter of the propeller, $\rho$ is the air density, $C_M$ is the torque coefficient, and
$\omega_i$ is the motor speed. Similarly to the previous case, it follows from equation (7.3) that if the
reaction torque and the motor speed are measured, then $c_M$ can be determined, see Fig. 7.25.
Figure 7.25: Torque vs. motor-propeller speed.
The physical parameters of the quadrotor are summarized in Tab. 7.2.
Table 7.2: Quadrotor physical parameters
Symbol Physical property Value Units
m quadrotor mass 1.07 kg
d arm length 0.23 m
Ixx moment of inertia, x-axis 0.0112 kg m^2
Iyy moment of inertia, y-axis 0.0116 kg m^2
Izz moment of inertia, z-axis 0.0201 kg m^2
cT thrust coefficient 1.257 × 10^-5 N/(rad/s)^2
cM torque coefficient 1.717 × 10^-7 N m/(rad/s)^2
7.3 Experimental results
In this section, the experimental results regarding the performance of the angular velocity
observer (3.15), (3.16), (3.5), the attitude observer (4.1), (4.2), and the translational observer
(5.4)–(5.6) are presented and discussed.
The experiment was carried out by programming an autonomous mission so that the
quadrotor followed a regular path. Data from the IMU sensor were collected during the flight,
carried out in an outdoor environment, and processed for experimental verification. The
reference vector for the acceleration is taken as $[0, 0, -9.8]^T$ m/s$^2$, or as the unit vector
$[0, 0, -1]^T$. The normalized vector of the Earth's magnetic field was taken as
$[0.6626, 0.0544, 0.7469]^T$, which corresponds to the value in Juriquilla, Querétaro, Mexico,
where the experiments were carried out, according to [26], which uses the World Magnetic Model
to calculate the magnetic field at a given latitude, longitude and altitude. The initial states
were taken as $q(0) = [0.9071, 0.0165, -0.0075, 0.4205]^T$, $\hat{q}(0) = [1, 0, 0, 0]^T$,
$\bar{\omega}(0) = [0.05, 0.05, 0.05]^T$, and the parameters used for the implementation are summarized
in Tab. 7.3.
Table 7.3: Data used in the implementation of the complete algorithm in the Quad
Value 1 Value 2
Reference vectors r1 = [0, 0, −1]^T r2 = [0.6626, 0.0544, 0.7469]^T
Parameters
Gain matrices Λ1 = diag[0.25, 0.25, 0.25] Λ2 = diag[0.25, 0.25, 0.25]
Sensor weights k1 = 5.0 k2 = 5.0
Filter gain γf = 5.0
Gain κp = 1.0
The results of the two flights were analyzed and are presented below.
7.3.1 Circular path flight
A circular trajectory with a radius of 15 m was executed by the quadrotor during approximately
80 s, including take-off and landing. It started its motion very close to the center of the circle,
took off, and headed towards the first point of the circle. Then the flight followed a circular
trajectory, Fig. 7.26. Because the drone rotates around the center of the circle, this experiment
is suitable to analyze mainly the z component of the angular velocity and the evolution of yaw
along the trajectory, since these are the significant components for this type of motion, while
roll and pitch are kept almost constant and small.
The GPS sensor provides the position measurement of the quadrotor in terms of latitude
(degrees), longitude (degrees) and altitude (meters) in the Earth-Centered Earth-Fixed frame,
which was transformed to north-east-down (NED) coordinates in meters. Figure 7.26 shows the
trajectory executed by the quadrotor in the real flight.
Figure 7.26: Circular flight of the quadrotor in NED frame.
The origin of the NED frame was chosen at the center of the circular trajectory. The true
linear velocity was obtained by time differentiation of the GPS signal and then compared
with the position and velocity estimated by the observer (5.4)–(5.6).
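For reference, the following is a minimal sketch of this preprocessing step under stated assumptions: a small-area flat-Earth approximation around a reference point (lat0, lon0, alt0), NumPy available, and function and variable names that are placeholders, not the actual processing script of this work.

import numpy as np

R_EARTH = 6378137.0  # WGS-84 equatorial radius, meters

def geodetic_to_ned(lat_deg, lon_deg, alt_m, lat0_deg, lon0_deg, alt0_m):
    """Convert GPS latitude/longitude/altitude samples to local NED coordinates (m)."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    lat0, lon0 = np.radians(lat0_deg), np.radians(lon0_deg)
    north = (lat - lat0) * R_EARTH
    east = (lon - lon0) * R_EARTH * np.cos(lat0)
    down = -(np.asarray(alt_m) - alt0_m)
    return np.column_stack((north, east, down))

def velocity_from_position(p_ned, t):
    """Finite-difference time differentiation of the NED position history."""
    return np.gradient(p_ned, t, axis=0)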
As discussed previously, the choice of gains k1 and k2 should be in accordance with the
reliability of each sensor. In the experiments where the flying vehicles are operated in aggres-
sive maneuvers susceptible to high accelerations, it is advisable to rely on the magnetometer
rather than the accelerometer; therefore, give a higher gain value to the magnetometer or
use a third sensor such as GPS or INS. On the other hand, if the experiment can be
affected by magnetic disturbances or the sensors are near magnets, then it is advisable to
rely on the accelerometer rather than the magnetometer. In our experiment, the quadrotor
flies smoothly during the circular motion; therefore, it is not subject to abrupt changes in
linear acceleration during this period. For this reason, the same reliability was given to the
accelerometer as for the magnetometer.
The calibrated accelerometer and magnetometer measurements from the IMU onboard
the quadrotor are shown in Fig. 7.27 and Fig. 7.28, respectively. This set of data is the
input to the angular velocity and attitude observer algorithm.
Figure 7.27: Normalized accelerometer measurements (Circular flight).
Figure 7.28: Magnetometer measurements (Circular flight).
As can be seen from (3.15), the implementable version of the observer, the angular velocity
can be estimated using at least two body vector measurements and the total torque. To
calculate the torque that acts on the quadrotor, the speed of each motor is required. Data
from the RPM sensors are shown in Fig. 7.29.
Figure 7.29: Motor angular speeds (Circular flight).
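For reference, the torque acting on the airframe can be reconstructed from these motor speeds through the lumped models (7.2) and (7.3). The following is a minimal, illustrative sketch under stated assumptions: an X-configuration quadrotor and a particular sign convention for the rotor spin directions, neither of which is taken from the thesis, so the signs must be adapted to the actual frame.

import numpy as np

c_T = 1.257e-5   # thrust coefficient, N/(rad/s)^2 (Table 7.2)
c_M = 1.717e-7   # torque coefficient, N m/(rad/s)^2 (Table 7.2)
d = 0.23         # arm length, m (Table 7.2)

def body_torque(motor_rpm):
    """motor_rpm: speeds of motors 1..4 in rpm; returns [tau_x, tau_y, tau_z] in N m."""
    w2 = (np.asarray(motor_rpm, dtype=float) * 2.0 * np.pi / 60.0) ** 2
    f = c_T * w2                          # individual rotor thrusts, model (7.2)
    arm = d / np.sqrt(2.0)                # moment arm of each rotor in an X configuration
    tau_x = arm * (f[0] - f[1] - f[2] + f[3])        # roll torque (assumed sign convention)
    tau_y = arm * (f[0] + f[1] - f[2] - f[3])        # pitch torque (assumed sign convention)
    tau_z = c_M * (w2[0] - w2[1] + w2[2] - w2[3])    # yaw reaction torque, model (7.3)
    return np.array([tau_x, tau_y, tau_z])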
The performance of the angular velocity observer is illustrated in Fig. 7.30. The results of
the experiment show a good resemblance between the real angular velocity and the estimated
angular velocity. In order to show the performance of the attitude observer, we represent the
orientation of the quadrotor in Euler angles and compare the true values $\phi$, $\theta$, $\psi$ with the
estimated values $\hat{\phi}$, $\hat{\theta}$, $\hat{\psi}$ in Fig. 7.31.
Figure 7.30: True and estimated angular velocity of the quadrotor (Circular flight).
Figure 7.31: True and estimated attitude in Euler angles representation of the quadrotor (Cir-
cular flight).
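For reference, the Euler angles shown in Fig. 7.31 can be obtained from the estimated quaternion. The following is a minimal sketch assuming a scalar-first quaternion $q = [q_0, q_1, q_2, q_3]^T$ and the standard aerospace roll-pitch-yaw (ZYX) sequence:

import numpy as np

def quat_to_euler(q0, q1, q2, q3):
    """Convert a unit quaternion (scalar first) to roll, pitch, yaw in degrees."""
    roll = np.arctan2(2.0 * (q0 * q1 + q2 * q3), 1.0 - 2.0 * (q1**2 + q2**2))
    pitch = np.arcsin(np.clip(2.0 * (q0 * q2 - q3 * q1), -1.0, 1.0))
    yaw = np.arctan2(2.0 * (q0 * q3 + q1 * q2), 1.0 - 2.0 * (q2**2 + q3**2))
    return np.degrees([roll, pitch, yaw])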
Regarding the translational state observer, we show in Fig. 7.32 the comparison between
the real linear velocity v and the estimated linear velocity v̂, and in Fig. 7.33 the comparison
between the real position p and the estimated position p̂.
Figure 7.32: True and estimated linear velocity (Circular flight).
Figure 7.33: True and estimated position (Circular flight).
We show the estimation errors of the overall observer in Fig. 7.34. Note that the simulation
results in Fig. 6.3 coincide with the results obtained in the experiments, Fig. 7.34.
Figure 7.34: Real flight: (Top-down) angular velocity error ω̃, attitude error q̂−1 ⊗ q, linear
velocity error ṽ and position error p̃ (Circular flight).
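For reference, the attitude error q̂⁻¹ ⊗ q plotted above can be computed with ordinary quaternion algebra. The following is a minimal sketch assuming unit quaternions stored scalar-first; it only illustrates the error definition, it is not the thesis' processing script:

import numpy as np

def quat_mul(p, q):
    """Hamilton product of two scalar-first quaternions."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,
        p0*q1 + p1*q0 + p2*q3 - p3*q2,
        p0*q2 - p1*q3 + p2*q0 + p3*q1,
        p0*q3 + p1*q2 - p2*q1 + p3*q0,
    ])

def quat_inv(q):
    """Inverse of a unit quaternion, i.e. its conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def attitude_error(q_hat, q_true):
    """Attitude error q_hat^{-1} (x) q; equals [1, 0, 0, 0] for a perfect estimate."""
    return quat_mul(quat_inv(q_hat), q_true)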
Figure 7.35: Real flight: (Top-down) Root Mean Square estimate for angular velocity error ω̃,
attitude error q̂−1 ∗ q, position error p̃ and linear velocity error ṽ.
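For reference, one simple way to obtain a root-mean-square estimate of an error signal as a function of time, like the curves in Fig. 7.35, is a cumulative RMS over the samples seen so far. This is only a sketch of one possible definition (NumPy assumed), not necessarily the exact computation used for the figure:

import numpy as np

def running_rms(err):
    """Cumulative RMS of an (N,) or (N, m) error history, one value per time step."""
    err = np.asarray(err, dtype=float)
    n = np.arange(1, err.shape[0] + 1)
    if err.ndim > 1:
        n = n[:, None]
    return np.sqrt(np.cumsum(err ** 2, axis=0) / n)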
Chapter 8
Conclusions
In this thesis, the problem of designing a navigation observer for autonomous vehicles such as
quadrotors has been addressed. A new angular velocity-attitude observer has been proposed
first, within a framework where neither the angular velocity from gyros nor the attitude is
needed to perform the estimation, requiring only vector measurements available from sensors
such as an accelerometer and a magnetometer. The proposed angular velocity observer is then
combined with the Explicit Complementary Filter (ECF) for attitude estimation. The almost
global asymptotic convergence of the proposed observer to the true state was established
using Lyapunov analysis. The resulting cascaded structure of the observer decouples the
design of the angular velocity observer from the attitude observer, and facilitates the convergence
of the proposed observer and its implementation. Moreover, since the proposed observer uses
only vector measurements, it may represent a backup solution for autonomous vehicles in
the event of gyroscope failures. Finally, a translational state observer was developed, which is
globally exponentially stable if a mild condition on the attitude estimation is fulfilled.
Through simulation, the behavior of the navigation observer was validated in ideal and
practical situations, where measurement noise and parameter uncertainty in the inertia matrix
were present. The results were in good agreement with the theoretical analysis.
In addition, the proposed observer was validated on a quadrotor-based experimental platform
built in the Mechatronics Laboratory. The design and implementation of the platform were
described in detail. The quadrotor executed a circular flight path and an eight-shape flight.
The experimental results showed a good correspondence with the simulation results.
Future work may include designing an adaptive observer to estimate the inertia matrix
of the autonomous vehicle and incorporating the proposed navigation observer into a control
loop.
Appendix A
A.1 AP RPM Library
ArduPilot firmware supports the use of numerous types of RPM sensors. In principle, any
RPM sensor that outputs a PWM signal or a step change in voltage as a function of RPM can
be used by ArduPilot. All RPM sensors use the AP_RPM library. In our setup the RPM sensors
are plugged into AUX ports on the Pixhawk, so we need to ensure that those AUX ports are
configured as GPIO pins to receive the signal inputs; ArduPilot also needs to know which
pin each RPM sensor is connected to. The RPM library in ArduPilot monitors the voltage of
the designated signal pin. With the latest firmware version of ArduPilot a maximum of two
RPM sensors can be used; for our project, we need to activate four RPM sensors, one for each
motor. This is the reason why we require customizing the firmware, particularly the RPM
library.
The following lines of programming code are those that were added to the firmware and
implemented on the Pixhawk board. Basically, the tasks the code performs are:
• To activate four AUX pins as GPIOs (instead of two).
• To read and store the RPM sensor signals.
• To send the messages to the Full Parameter list, so that the user can configure the
parameters related to the RPM sensors through Mission Planner.
• To log the RPM sensor readings onto a microSD card.
libraries/AP_RPM/AP_RPM.h
/*
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation , either version 3 of the License , or
(at your option) any later version.
This program is distributed in the hope that it will be useful ,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
*/
#pragma once
#include <AP_Common/AP_Common.h>
#include <AP_HAL/AP_HAL.h>
#include <AP_Param/AP_Param.h>
#include <AP_Math/AP_Math.h>
// Maximum number of RPM measurement instances available on this platform
#define RPM_MAX_INSTANCES 4
class AP_RPM_Backend;
class AP_RPM
{
friend class AP_RPM_Backend;
public:
AP_RPM ();
/* Do not allow copies */
AP_RPM(const AP_RPM &other) = delete;
AP_RPM &operator =(const AP_RPM &) = delete;
// RPM driver types
enum RPM_Type {
RPM_TYPE_NONE = 0,
RPM_TYPE_PWM = 1,
RPM_TYPE_PIN = 2,
RPM_TYPE_EFI = 3,
RPM_TYPE_HNTCH = 4,
#if CONFIG_HAL_BOARD == HAL_BOARD_SITL
RPM_TYPE_SITL = 10,
#endif
};
// The RPM_State structure is filled in by the backend driver
struct RPM_State {
    uint8_t  instance;        // the instance number of this RPM sensor
    float    rate_rpm;        // measured rate in revs per minute
    uint32_t last_reading_ms; // time of last reading
    float    signal_quality;  // synthetic quality metric
};
// parameters for each instance
AP_Int8 _type[RPM_MAX_INSTANCES ];
AP_Int8 _pin[RPM_MAX_INSTANCES ];
AP_Float _scaling[RPM_MAX_INSTANCES ];
AP_Float _maximum;
AP_Float _minimum;
AP_Float _quality_min;
static const struct AP_Param :: GroupInfo var_info [];
// Return the number of rpm sensor instances
uint8_t num_sensors(void) const {
return num_instances;
}
// detect and initialise any available rpm sensors
void init(void);
// update state of all rpm sensors. Should be called from main loop
void update(void);
/*
return RPM for a sensor. Return -1 if not healthy
*/
bool get_rpm(uint8_t instance , float &rpm_value) const;
/*
return signal quality for a sensor.
*/
float get_signal_quality(uint8_t instance) const {
return state[instance ]. signal_quality;
}
bool healthy(uint8_t instance) const;
bool enabled(uint8_t instance) const;
static AP_RPM *get_singleton () { return _singleton; }
private:
static AP_RPM *_singleton;
RPM_State state[RPM_MAX_INSTANCES ];
AP_RPM_Backend *drivers[RPM_MAX_INSTANCES ];
uint8_t num_instances :3; // This is a bit field not =
void detect_instance(uint8_t instance);
};
namespace AP {
AP_RPM *rpm();
};
libraries/AP_RPM/AP_RPM.cpp
/*
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation , either version 3 of the License , or
(at your option) any later version.
This program is distributed in the hope that it will be useful ,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
*/
#include "AP_RPM.h"
#include "RPM_Pin.h"
#include "RPM_SITL.h"
#include "RPM_EFI.h"
#include "RPM_HarmonicNotch.h"
extern const AP_HAL ::HAL& hal;
// table of user settable parameters
const AP_Param :: GroupInfo AP_RPM :: var_info [] = {
// Param: _TYPE
// DisplayName: RPM type
// Description: What type of RPM sensor is connected
// Values: 0:None ,1:PWM ,2: AUXPIN ,3:EFI ,4: Harmonic Notch
// User: Standard
AP_GROUPINFO("_TYPE", 0, AP_RPM , _type [0], 0),
// Param: _SCALING
// DisplayName: RPM scaling
// Description: Scaling factor between sensor reading and RPM.
// Increment: 0.001
// User: Standard
AP_GROUPINFO("_SCALING", 1, AP_RPM , _scaling [0], 1.0f),
// Param: _MAX
// DisplayName: Maximum RPM
// Description: Maximum RPM to report
// Increment: 1
// User: Standard
AP_GROUPINFO("_MAX", 2, AP_RPM , _maximum , 100000) ,
// Param: _MIN
// DisplayName: Minimum RPM
// Description: Minimum RPM to report
// Increment: 1
// User: Standard
AP_GROUPINFO("_MIN", 3, AP_RPM , _minimum , 10),
// Param: _MIN_QUAL
// DisplayName: Minimum Quality
// Description: Minimum data quality to be used
// Increment: 0.1
// User: Advanced
AP_GROUPINFO("_MIN_QUAL", 4, AP_RPM , _quality_min , 0.5),
// Param: _PIN
// DisplayName: Input pin number
// Description: Which pin to use
// Values: -1:Disabled ,50: PixhawkAUX1 ,51: PixhawkAUX2 ,52: PixhawkAUX3
,53: PixhawkAUX4 ,54: PixhawkAUX5 ,55: PixhawkAUX6
// User: Standard
AP_GROUPINFO("_PIN", 5, AP_RPM , _pin[0], 54),
#if RPM_MAX_INSTANCES > 1
// Param: 2_TYPE
// DisplayName: Second RPM type
// Description: What type of RPM sensor is connected
// Values: 0:None ,1:PWM ,2: AUXPIN ,3:EFI ,4: Harmonic Notch
// User: Advanced
AP_GROUPINFO("2_TYPE", 10, AP_RPM , _type [1], 0),
// Param: 2_SCALING
// DisplayName: RPM scaling
// Description: Scaling factor between sensor reading and RPM.
// Increment: 0.001
// User: Advanced
AP_GROUPINFO("2_SCALING", 11, AP_RPM , _scaling [1], 1.0f),
// Param: 2_PIN
// DisplayName: RPM2 input pin number
// Description: Which pin to use
// Values: -1:Disabled ,50: PixhawkAUX1 ,51: PixhawkAUX2 ,52: PixhawkAUX3
,53: PixhawkAUX4 ,54: PixhawkAUX5 ,55: PixhawkAUX6
// User: Standard
AP_GROUPINFO("2_PIN", 12, AP_RPM , _pin[1], -1),
// Param: 3_TYPE
// DisplayName: Third RPM type
// Description: What type of RPM sensor is connected
// Values: 0:None ,1:PWM ,2: AUXPIN ,3:EFI ,4: Harmonic Notch
// User: Advanced
AP_GROUPINFO("3_TYPE", 13, AP_RPM , _type [2], 0),
// Param: 3_SCALING
// DisplayName: RPM scaling
// Description: Scaling factor between sensor reading and RPM.
// Increment: 0.001
// User: Advanced
AP_GROUPINFO("3_SCALING", 14, AP_RPM , _scaling [2], 1.0f),
// Param: 3_PIN
// DisplayName: RPM3 input pin number
// Description: Which pin to use
// Values: -1:Disabled ,50: PixhawkAUX1 ,51: PixhawkAUX2 ,52: PixhawkAUX3
,53: PixhawkAUX4 ,54: PixhawkAUX5 ,55: PixhawkAUX6
// User: Standard
AP_GROUPINFO("3_PIN", 15, AP_RPM , _pin[2], -1),
// Param: 4_TYPE
// DisplayName: Fourth RPM type
// Description: What type of RPM sensor is connected
// Values: 0:None ,1:PWM ,2: AUXPIN ,3:EFI ,4: Harmonic Notch
// User: Advanced
AP_GROUPINFO("4_TYPE", 16, AP_RPM , _type [3], 0),
// Param: 4_SCALING
// DisplayName: RPM scaling
// Description: Scaling factor between sensor reading and RPM.
// Increment: 0.001
// User: Advanced
AP_GROUPINFO("4_SCALING", 17, AP_RPM , _scaling [3], 1.0f),
// Param: 4_PIN
// DisplayName: RPM4 input pin number
// Description: Which pin to use
// Values: -1:Disabled ,50: PixhawkAUX1 ,51: PixhawkAUX2 ,52: PixhawkAUX3
,53: PixhawkAUX4 ,54: PixhawkAUX5 ,55: PixhawkAUX6
// User: Standard
AP_GROUPINFO("4_PIN", 18, AP_RPM , _pin[3], -1),
#endif
AP_GROUPEND
};
AP_RPM :: AP_RPM(void)
{
AP_Param :: setup_object_defaults(this , var_info);
if (_singleton != nullptr) {
AP_HAL :: panic("AP_RPM must be singleton");
}
_singleton = this;
}
/*
  initialise the AP_RPM class.
 */
void AP_RPM::init(void)
{
    if (num_instances != 0) {
        // init called a 2nd time?
        return;
    }
    for (uint8_t i=0; i<RPM_MAX_INSTANCES; i++) {
        // try to allocate a backend driver for each configured instance
        detect_instance(i);
        if (drivers[i] != nullptr) {
            // we loaded a driver for this instance, so it must be
            // present (although it may not be healthy)
            num_instances = i+1;
        }
    }
}

/*
  update the state of all rpm sensors. Should be called from the main loop.
 */
void AP_RPM::update(void)
{
    for (uint8_t i=0; i<num_instances; i++) {
        if (drivers[i] != nullptr) {
            if (_type[i] == RPM_TYPE_NONE) {
                // allow user to disable an RPM sensor at runtime
                continue;
            }
            drivers[i]->update();
        }
    }
}
/*
check if an instance is healthy
*/
bool AP_RPM :: healthy(uint8_t instance) const
{
if (instance >= num_instances || _type[instance] == RPM_TYPE_NONE) {
return false;
}
// check that data quality is above minimum required
if (state[instance ]. signal_quality < _quality_min.get()) {
return false;
}
return true;
}
/*
check if an instance is activated
*/
bool AP_RPM :: enabled(uint8_t instance) const
{
if (instance >= num_instances) {
return false;
}
// if no sensor type is selected , the sensor is not activated.
if (_type[instance] == RPM_TYPE_NONE) {
return false;
}
return true;
}
/*
get RPM value , return true on success
*/
bool AP_RPM :: get_rpm(uint8_t instance , float &rpm_value) const
{
if (! healthy(instance)) {
return false;
}
rpm_value = state[instance ]. rate_rpm;
return true;
}
// singleton instance
AP_RPM *AP_RPM :: _singleton;
namespace AP {
AP_RPM *rpm()
{
return AP_RPM :: get_singleton ();
}
}
libraries/AP_RPM/RPM_Pin.cpp
/*
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation , either version 3 of the License , or
(at your option) any later version.
This program is distributed in the hope that it will be useful ,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
*/
#include <AP_HAL/AP_HAL.h>
#include "RPM_Pin.h"
#include <AP_Math/AP_Math.h>
#include <GCS_MAVLink/GCS.h>
extern const AP_HAL ::HAL& hal;
AP_RPM_Pin :: IrqState AP_RPM_Pin :: irq_state[RPM_MAX_INSTANCES ];
/*
open the sensor in constructor
*/
AP_RPM_Pin :: AP_RPM_Pin(AP_RPM &_ap_rpm , uint8_t instance , AP_RPM ::
RPM_State &_state) :
AP_RPM_Backend(_ap_rpm , instance , _state)
{
}
/*
handle interrupt on an instance
*/
void AP_RPM_Pin :: irq_handler(uint8_t pin , bool pin_state , uint32_t
timestamp)
{
const uint32_t dt = timestamp - irq_state[state.instance ].
last_pulse_us;
irq_state[state.instance ]. last_pulse_us = timestamp;
// we don’t accept pulses less than 100us. Using an irq for such
// high RPM is too inaccurate , and it is probably just bounce of
// the signal which we should ignore
if (dt > 100 && dt < 1000*1000) {
irq_state[state.instance ]. dt_sum += dt;
irq_state[state.instance ]. dt_count ++;
}
}
void AP_RPM_Pin :: update(void)
{
if (last_pin != get_pin ()) {
// detach from last pin
if (last_pin != (uint8_t) -1 &&
!hal.gpio ->detach_interrupt(last_pin)) {
gcs().send_text(MAV_SEVERITY_WARNING , "RPM: Failed to detach
from pin %u", last_pin);
// ignore this failure or the user may be stuck
}
irq_state[state.instance ]. dt_count = 0;
irq_state[state.instance ]. dt_sum = 0;
// attach to new pin
last_pin = get_pin ();
if (last_pin) {
hal.gpio ->pinMode(last_pin , HAL_GPIO_INPUT);
if (!hal.gpio ->attach_interrupt(
last_pin ,
FUNCTOR_BIND_MEMBER (& AP_RPM_Pin :: irq_handler , void ,
uint8_t , bool , uint32_t),
AP_HAL ::GPIO:: INTERRUPT_RISING)) {
gcs().send_text(MAV_SEVERITY_WARNING , "RPM: Failed to
attach to pin %u", last_pin);
}
}
}
if (irq_state[state.instance ]. dt_count > 0) {
float dt_avg;
// disable interrupts to prevent race with irq_handler
void *irqstate = hal.scheduler ->disable_interrupts_save ();
dt_avg = irq_state[state.instance ]. dt_sum / irq_state[state.
instance ]. dt_count;
irq_state[state.instance ]. dt_count = 0;
irq_state[state.instance ]. dt_sum = 0;
hal.scheduler ->restore_interrupts(irqstate);
const float scaling = ap_rpm._scaling[state.instance ];
float maximum = ap_rpm._maximum.get();
float minimum = ap_rpm._minimum.get();
float quality = 0;
float rpm = scaling * (1.0e6 / dt_avg) * 60;
float filter_value = signal_quality_filter.get();
state.rate_rpm = signal_quality_filter.apply(rpm);
if (( maximum <= 0 || rpm <= maximum) && (rpm >= minimum)) {
if (is_zero(filter_value)){
quality = 0;
} else {
quality = 1 - constrain_float ((fabsf(rpm -filter_value))/
filter_value , 0.0, 1.0);
quality = powf(quality , 2.0);
}
state.last_reading_ms = AP_HAL :: millis ();
} else {
quality = 0;
}
state.signal_quality = (0.1 * quality) + (0.9 * state.
signal_quality);
}
// assume we get readings at at least 1Hz, otherwise reset quality to zero
if (AP_HAL :: millis () - state.last_reading_ms > 1000) {
state.signal_quality = 0;
state.rate_rpm = 0;
}
}
ardupilot/libraries/AP_Logger/LogFile.cpp
void AP_Logger :: Write_RPM(const AP_RPM &rpm_sensor)
{
float rpm1 = -1, rpm2 = -1, rpm3 = -1, rpm4 = -1;
rpm_sensor.get_rpm(0, rpm1);
rpm_sensor.get_rpm(1, rpm2);
rpm_sensor.get_rpm(2, rpm3);
rpm_sensor.get_rpm(3, rpm4);
const struct log_RPM pkt{
LOG_PACKET_HEADER_INIT(LOG_RPM_MSG),
time_us : AP_HAL :: micros64 (),
rpm1 : rpm1 ,
rpm2 : rpm2 ,
rpm3 : rpm3 ,
rpm4 : rpm4
};
WriteBlock (&pkt , sizeof(pkt));
}
ardupilot/libraries/AP_Logger/LogStructure.h
struct PACKED log_RPM {
LOG_PACKET_HEADER;
uint64_t time_us;
float rpm1;
float rpm2;
float rpm3;
float rpm4;
};
LOG_STRUCTURE_FROM_AHRS \
{ LOG_DF_FILE_STATS , sizeof(log_DSF), \
"DSF", "QIHIIII", "TimeUS ,Dp ,Blk ,Bytes ,FMn ,FMx ,FAv", "s--b---", "F
--0---" }, \
{ LOG_RPM_MSG , sizeof(log_RPM), \
"RPM", "Qffff", "TimeUS ,rpm1 ,rpm2 ,rpm3 ,rpm4", "sqqqq", "F0000" }, \
{ LOG_RALLY_MSG , sizeof(log_Rally), \
"RALY", "QBBLLh", "TimeUS ,Tot ,Seq ,Lat ,Lng ,Alt", "s--DUm", "F--GGB"
}, \
{ LOG_MAV_MSG , sizeof(log_MAV), \
"MAV", "QBHHHBHH", "TimeUS ,chan ,txp ,rxp ,rxdp ,flags ,ss ,tf", "s#----
s-", "F-000-C-" }, \
Appendix B
Quadrotor Moment of inertia
To determine the moment of inertia of the quadrotor analytically, the rigid body can be
divided into 6 parts: the motors, the arms, the battery, the Pixhawk, the ESCs and a hollow
plate, as shown in Fig. B.1. The parallel axis theorem is used to calculate the moment of
inertia of each component around the rotation axes x, y, z.
Figure B.1: Quadrotor divided in small parts. Credit image: [20]
Parallel axis theorem. The moment of inertia of an object around a particular axis is equal
to the moment of inertia around a parallel axis that goes through the center of mass, plus
the mass times the square of the distance between the two axes:
\[
I = I_{cm} + m d^2
\]
where $I$ is the moment of inertia of the object with respect to an axis from which the center
of mass of the object is at a distance $d$, $I_{cm}$ is the moment of inertia of the object with respect
to a parallel axis that passes through the center of mass, and $m$ is the mass of the object.
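The bookkeeping carried out by hand below can be summarized in a few lines of code. The following is a minimal sketch for the x axis using the component masses and offsets listed in this appendix; the helper names are placeholders introduced only for illustration:

def cylinder_about_diameter(m, r, L, d):
    # solid cylinder about a transverse axis through its center of mass, plus parallel-axis term m*d^2
    return 0.25 * m * r**2 + (1.0 / 12.0) * m * L**2 + m * d**2

def box(m, b, c, d=0.0):
    # solid rectangular prism about an axis through its center of mass, plus parallel-axis term m*d^2
    return (1.0 / 12.0) * m * (b**2 + c**2) + m * d**2

I_motor   = cylinder_about_diameter(0.050, 0.014, 0.024, 0.16)
I_arm     = box(0.055, 0.01, 0.03, 0.16)
I_battery = box(0.185, 0.105, 0.023)
I_pix     = box(0.065, 0.08, 0.015)
I_esc     = box(0.025, 0.05, 0.025, 0.06)
I_plate   = box(0.065, 0.10, 0.05)

Ixx = 4*I_motor + 4*I_arm + I_battery + I_pix + 4*I_esc + 2*I_plate
print(f"Ixx = {Ixx:.4f} kg m^2")   # approximately 0.0115 kg m^2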
Principal moment of inertia around the x axis
\[
I_{xx} = 4I_{motor} + 4I_{arm} + I_{battery} + I_{pix} + 4I_{ESC} + 2I_{plate}
\]
The motors were modelled as solid cylinders, with motor mass $m_{motor} = 0.05$ kg:
\[
I_{motor} = \tfrac{1}{4} m r^2 + \tfrac{1}{12} m L^2 + m d^2 = \tfrac{1}{4}(0.05)(0.014)^2 + \tfrac{1}{12}(0.05)(0.024)^2 + (0.050)(0.16)^2 = 1.2842\times 10^{-3}\,\mathrm{kg\,m^2}
\]
The arm is modelled as a solid rectangular prism, $m_{arm} = 0.055$ kg:
\[
I_{arm} = \tfrac{1}{12} m (b^2 + c^2) + m d^2 = \tfrac{1}{12}(0.055)\big((0.01)^2 + (0.03)^2\big) + (0.055)(0.16)^2 = 1.4126\times 10^{-3}\,\mathrm{kg\,m^2}
\]
The battery is modelled as a solid rectangular prism, $m_{battery} = 0.185$ kg:
\[
I_{battery} = \tfrac{1}{12} m (b^2 + c^2) = \tfrac{1}{12}(0.185)\big((0.105)^2 + (0.023)^2\big) = 1.7812\times 10^{-4}\,\mathrm{kg\,m^2}
\]
The Pixhawk board is modelled as a solid rectangular prism, $m_{pix} = 0.065$ kg:
\[
I_{pix} = \tfrac{1}{12} m (b^2 + c^2) = \tfrac{1}{12}(0.065)\big((0.08)^2 + (0.015)^2\big) = 3.588\times 10^{-5}\,\mathrm{kg\,m^2}
\]
The ESC is modelled as a solid rectangular prism, $m_{ESC} = 0.025$ kg:
\[
I_{ESC} = \tfrac{1}{12} m (b^2 + c^2) + m d^2 = \tfrac{1}{12}(0.025)\big((0.05)^2 + (0.025)^2\big) + (0.025)(0.06)^2 = 9.6510\times 10^{-5}\,\mathrm{kg\,m^2}
\]
The plate, $m_{plate} = 0.065$ kg:
\[
I_{plate} = \tfrac{1}{12} m (b^2 + c^2) = \tfrac{1}{12}(0.065)\big((0.10)^2 + (0.05)^2\big) = 6.77\times 10^{-5}\,\mathrm{kg\,m^2}
\]
The total moment of inertia around the x axis is:
\[
I_{xx} = 4(1.2842\times 10^{-3}) + 4(1.4126\times 10^{-3}) + 1.7812\times 10^{-4} + 3.588\times 10^{-5} + 4(9.651\times 10^{-5}) + 2(6.77\times 10^{-5}) = 0.0115\,\mathrm{kg\,m^2}
\]
Principal moment of inertia around the y axis
\[
I_{yy} = 4I_{motor} + 4I_{arm} + I_{battery} + I_{pix} + 4I_{ESC} + 2I_{plate}
\]
The motors were modelled as solid cylinders, with motor mass $m_{motor} = 0.05$ kg:
\[
I_{motor} = \tfrac{1}{4} m r^2 + \tfrac{1}{12} m L^2 + m d^2 = \tfrac{1}{4}(0.05)(0.014)^2 + \tfrac{1}{12}(0.05)(0.024)^2 + (0.050)(0.16)^2 = 1.2848\times 10^{-3}\,\mathrm{kg\,m^2}
\]
The arm is modelled as a solid rectangular prism, $m_{arm} = 0.055$ kg:
\[
I_{arm} = \tfrac{1}{12} m (a^2 + c^2) + m d^2 = \tfrac{1}{12}(0.055)\big((0.185)^2 + (0.01)^2\big) + (0.055)(0.16)^2 = 1.5653\times 10^{-3}\,\mathrm{kg\,m^2}
\]
The battery is modelled as a solid rectangular prism, $m_{battery} = 0.185$ kg:
\[
I_{battery} = \tfrac{1}{12} m (a^2 + c^2) = \tfrac{1}{12}(0.185)\big((0.032)^2 + (0.023)^2\big) = 2.394\times 10^{-5}\,\mathrm{kg\,m^2}
\]
The Pixhawk board is modelled as a solid rectangular prism, $m_{pix} = 0.065$ kg:
\[
I_{pix} = \tfrac{1}{12} m (a^2 + c^2) = \tfrac{1}{12}(0.065)\big((0.05)^2 + (0.015)^2\big) = 1.476\times 10^{-5}\,\mathrm{kg\,m^2}
\]
The ESC is modelled as a solid rectangular prism, $m_{ESC} = 0.025$ kg:
\[
I_{ESC} = \tfrac{1}{12} m (a^2 + c^2) + m d^2 = \tfrac{1}{12}(0.025)\big((0.05)^2 + (0.007)^2\big) + (0.025)(0.06)^2 = 9.5310\times 10^{-5}\,\mathrm{kg\,m^2}
\]
The plate, $m_{plate} = 0.065$ kg:
\[
I_{plate} = \tfrac{1}{12} m (a^2 + c^2) = \tfrac{1}{12}(0.065)\big((0.10)^2 + (0.05)^2\big) = 6.77\times 10^{-5}\,\mathrm{kg\,m^2}
\]
The total moment of inertia around the y axis is:
\[
I_{yy} = 4(1.2848\times 10^{-3}) + 4(1.5653\times 10^{-3}) + 2.394\times 10^{-5} + 1.476\times 10^{-5} + 4(9.5310\times 10^{-5}) + 2(6.77\times 10^{-5}) = 0.0119\,\mathrm{kg\,m^2}
\]
Principal moment of inertia around the z axis
\[
I_{zz} = 4I_{motor} + 4I_{arm} + I_{battery} + I_{pix} + 4I_{ESC} + 2I_{plate}
\]
The motors were modelled as solid cylinders, with motor mass $m_{motor} = 0.05$ kg:
\[
I_{motor} = \tfrac{1}{2} m r^2 + m d^2 = \tfrac{1}{2}(0.05)(0.0135)^2 + (0.05)(0.22)^2 = 2.42455\times 10^{-3}\,\mathrm{kg\,m^2}
\]
The arm is modelled as a solid rectangular prism, $m_{arm} = 0.055$ kg:
\[
I_{arm} = \tfrac{1}{12} m (a^2 + b^2) + m d^2 = \tfrac{1}{12}(0.055)\big((0.185)^2 + (0.03)^2\big) + (0.055)(0.06)^2 = 3.589\times 10^{-4}\,\mathrm{kg\,m^2}
\]
The battery is modelled as a solid rectangular prism, $m_{battery} = 0.185$ kg:
\[
I_{battery} = \tfrac{1}{12} m (a^2 + b^2) = \tfrac{1}{12}(0.185)\big((0.105)^2 + (0.05)^2\big) = 2.085\times 10^{-4}\,\mathrm{kg\,m^2}
\]
The Pixhawk board is modelled as a solid rectangular prism, $m_{pix} = 0.065$ kg:
\[
I_{pix} = \tfrac{1}{12} m (a^2 + b^2) = \tfrac{1}{12}(0.065)\big((0.08)^2 + (0.05)^2\big) = 4.8208\times 10^{-5}\,\mathrm{kg\,m^2}
\]
The ESC is modelled as a solid rectangular prism, $m_{ESC} = 0.025$ kg:
\[
I_{ESC} = \tfrac{1}{12} m (a^2 + b^2) + m d^2 = \tfrac{1}{12}(0.025)\big((0.05)^2 + (0.025)^2\big) + (0.025)(0.10)^2 = 2.5651\times 10^{-4}\,\mathrm{kg\,m^2}
\]
The plate, $m_{plate} = 0.065$ kg:
\[
I_{plate} = \tfrac{1}{12} m (a^2 + b^2) = \tfrac{1}{12}(0.065)\big((0.10)^2 + (0.10)^2\big) = 1.083\times 10^{-4}\,\mathrm{kg\,m^2}
\]
The total moment of inertia around the z axis is:
\[
I_{zz} = 4(2.4245\times 10^{-3}) + 4(3.589\times 10^{-4}) + 2.085\times 10^{-4} + 4.8208\times 10^{-5} + 4(2.5651\times 10^{-4}) + 2(1.083\times 10^{-4}) = 0.0126\,\mathrm{kg\,m^2}
\]
Bibliography
[1] Accelerometer calibration (2022). https://diyodemag.com/projects/part_three_take_
flight_with_h4wk. xii, 60
[2] Ahuatzín, G., Tang, Y., and Espíndola, E. (2022). Navigation observer design using vector mea-
surements and a gps sensor. IEEE Sensors Journal (Submitted). 8
[3] Akella, M. R., Thakur, D., and Mazenc, F. (2015). Partial lyapunov strictification: Smooth angular
velocity observers for attitude tracking control. Journal of Guidance, Control, and Dynamics,
38(3):442–451. 42
[4] Ardupilot, A., Mendoza-Mendoza, J. A., Gonzalez-Villela, V., Sepulveda-Cervantes, G., Mendez-
Martinez, M., and Sossa-Azuela, H. (2020). Advanced Robotic Vehicles Programming. Springer.
50
[5] Ardupilot Common rpm sensors (2022). https://ardupilot.org/copter/docs/common-rpm.
html. 55
[6] Barczyk, M. and Lynch, A. F. (2012). Invariant observer design for a helicopter uav aided inertial
navigation system. IEEE Transactions on Control Systems Technology, 21(3):791–806. 6
[7] Barczyk, M. and Lynch, A. F. (2013). Invariant observer design for a helicopter uav aided inertial
navigation system. IEEE Transactions on Control Systems Technology, 21(3):791–806. 5
[8] Benziane, L., Benallegue, A., Chitour, Y., and Tayebi, A. (2016). Velocity-free attitude stabiliza-
tion with inertial vector measurements. International Journal of Robust and Nonlinear Control,
26(11):2478–2493. 5
[9] Berkane, S., Abdessameud, A., and Tayebi, A. (2016). Global exponential angular velocity observer
for rigid body systems. In 2016 IEEE 55th Conference on Decision and Control (CDC), pages 4154–
4159. IEEE. 5, 23
[10] Berkane, S., Tayebi, A., and De Marco, S. (2021). A nonlinear navigation observer using imu
and generic position information. Automatica, 127:109513. 6
[11] Black, H. D. (1964). A passive system for determining the attitude of a satellite. AIAA journal,
2(7):1350–1351. 2
[12] Bonnabel, S., Martin, P., and Rouchon, P. (2008). Symmetry-preserving observers. IEEE Trans-
actions on Automatic Control, 53(11):2514–2526. 5
[13] Chavez-Moreno, R., Tang, Y., Hernandez, J.-C., and Ji, H. (2018). Contracting angular velocity
observer for small satellites. IEEE Transactions on Aerospace and Electronic Systems, 54(6):2762–
2775. 22
[14] Cirillo, A., Cirillo., P., Maria, G. D., Natale, C., and Pirozzi, S. (2017). A Comparison of
Multisensor Attitude Estimation Algorithms, chapter 29. CRC Press Taylor and Francis Group. 4
[15] Crassidis, J., Markley, F., and Cheng, Y. (2007). Survey of nonlinear attitude estimation methods.
Journal of Guidance, Control and Dynamics. 1, 5, 6
[16] Espíndola, E. and Tang, Y. (2022). A new angular velocity observer for attitude tracking of
spacecraft. ISA transactions. 17, 22
[17] Feng, L. and Fangchao, Q. (2016). Research on the hardware structure characteristics and ekf
filtering algorithm of the autopilot pixhawk. In 2016 Sixth International Conference on Instrumen-
tation Measurement, Computer, Communication and Control (IMCCC), pages 228–231. 50
[18] Fourati, H. and Belkhiat, D. E. C. (2016). Multisensor attitude estimation: fundamental concepts
and applications. CRC Press. 5
[19] Grip, H. F., Fossen, T. I., Johansen, T. A., and Saberi, A. (2015). Globally exponentially stable
attitude and gyro bias estimation with application to gnss/ins integration. Automatica, 51:158–166.
6
[20] Grujin, I. and Nilsson, R. (2016). Model-based development and evaluation of control for complex
multi-domain systems: Attitude control for a quadrotor uav. Department of Engineering, Aarhus
University, Technical report ECE-TR-23:92. xii, 94
[21] Hobbywing Direct (2022). https://www.hobbywingdirect.com/products/rpm-sensor. 55
[22] Hua, M.-D. (2010). Attitude estimation for accelerated vehicles using gps/ins measurements.
Control Engineering Practice, 18:723–732. 4
[23] Hua, M.-D., Ducard, G., Hamel, T., and Mahony, R. (2014). Introduction to nonlinear attitude
estimation for aerial robotic systems. Aerospace Lab, pages AL08–04. 1, 2, 5
[24] Khalil, H. K. (2002). Nonlinear systems third edition. Patience Hall, 115. 22, 32, 36
[25] Lim, H., Park, J., Lee, D., and Kim, H. (2012). Build your own quadrotor: Open-source projects
on unmanned aerial vehicles. IEEE Robotics Automation Magazine, 19(3):33–45. 50
[26] Magnetic Fields Calculators (2022). https://www.ngdc.noaa.gov/geomag/calculators/
magcalc.shtml#igrfwmm. 69
[27] Magnis, L. and Petit, N. (2017). Angular velocity nonlinear observer from vector measurements.
Automatica, 75:46–53. 5
[28] Mahony, R., Hamel, T., and Pflimlin, J.-M. (2008). Nonlinear complementary filters on the
special orthogonal group. IEEE Transactions on automatic control, 53(5):1203–1217. 4, 6, 8, 32
[29] Mahony, R., Kumar, V., and Corke, P. (2012). Multirotor aerial vehicles: Modeling, estimation,
and control of quadrotor. IEEE Robotics Automation Magazine, 19(3):20–32. 66
[30] Markley, F. L. and Crassidis, J. (2014). Fundamentals of Spacecraft Attitude Determination and
Control. Springer. 2, 3, 12
[31] Martin, P. and Salaün, E. (2008). An invariant observer for earth-velocity-aided attitude heading
reference systems. IFAC Proceedings Volumes, 41(2):9857–9864. 5
[32] Meier, L., Tanskanen, P., Fraundorfer, F., and Pollefeys, M. (2011). Pixhawk: A system for
autonomous flight using onboard computer vision. In 2011 IEEE International Conference on
Robotics and Automation, pages 2992–2997. 50
[33] Mission Planner Installation (2022). https://ardupilot.org/planner/docs/
mission-planner-installation.html. 58
[34] Mission Planner Overview (2022). https://ardupilot.org/planner/docs/
mission-planner-overview.html. 57
[35] Pixhawk Overview (2022). https://ardupilot.org/copter/docs/
common-pixhawk-overview.html. 50
[36] Quan, Q. (2017). Introduction to multicopter design and control. Springer. xi, 38, 39
[37] Rehbinder, H. and Ghosh, B. K. (2003). Pose estimation using line-based dynamic vision and
inertial sensors. IEEE Transactions on Automatic control, 48(2):186–199. 1, 6
[38] Roberts, A. and Tayebi, A. (2011). On the attitude estimation of accelerating rigid-bodies using
gps and imu measurements. In 2011 50th IEEE Conference on Decision and Control and European
Control Conference, pages 8088–8093. 4
[39] RPM Sensor Wiki - Knowledge Share (2022). https://discuss.ardupilot.org/t/
rpm-sensor-wiki-knowledge-share/45091. 62
[40] Salcudean, S. (1991). A globally convergent angular velocity observer for rigid body motion.
IEEE transactions on Automatic Control, 36(12):1493–1497. 22
[41] Sarras, I. (2020). Global exponential estimation of rigid body angular velocity directly from
multiple vector measurements. In 2020 European Control Conference (ECC), pages 979–984. 5
[42] Schaub, H. and Junkins, J. (2014). Analytical Mechanics of Space Systems. AIAA Education
Series. 2
[43] Shuster, M. D. et al. (1993). A survey of attitude representations. Navigation, 8(9):439–517. 2
[44] Tayebi, A. (2008). Unit quaternion-based output feedback for the attitude tracking problem.
IEEE Transactions on Automatic Control, 53(6):1516–1520. 22
[45] Tayebi, A., Roberts, A., and Benallegue, A. (2013a). Inertial vector measurements based velocity-
free attitude stabilization. IEEE Transactions on Automatic Control, 58(11):2893–2898. 5
[46] Tayebi, A., Roberts, A., and Benallegue, A. (2013b). Inertial vector measurements based velocity-
free attitude stabilization. IEEE Transactions on Automatic Control, 58(11):2893–2898. 23
[47] Wahba, G. (1965). A least squares estimate of satellite attitude. SIAM Review, 7(3):409–409. 2
[48] Yang, Z., Lin, F., and Chen, B. M. (2016). Survey of autopilot for multi-rotor unmanned aerial
vehicles. In IECON 2016 - 42nd Annual Conference of the IEEE Industrial Electronics Society,
pages 6122–6127. 50