an Autonomous Navigator implemented on the TI-RSLK MAX Chassis, by Lucero Aguilar-Larios and Gian Fajardo


Robot Navigation Project

Principal Authors:   Lucero Aguilar-Larios & Gian Fajardo

Faculty Advisor:   Dr. Shahnam Mirzaei

Table of Contents

  1. Goals
  2. Methods
  3. Materials
  4. Online Resources
  5. Checklist
  6. Main Tasks

Goals


   The scope of this project is to explore the fundamentals of autonomous navigation. In a world increasingly reliant on autonomous systems, these systems must navigate effectively, since they can enable efficient and precise movement through complex environments without human intervention. They must also prioritize the safety of the human agents around them, and they should be able to traverse dangerous places where humans cannot survive: exploring our oceans, war-created disaster zones, and other planets can and does benefit from autonomous systems. To do this, an autonomous system must first identify its environment; from there, motion-planning algorithms are applied.

   Whereas other work uses autonomous systems such as humanoids, which must adopt different mathematical models to describe their states, our project simplifies the scope to test whether this plan is feasible: we will build a two-wheeled robot navigator from the TI-RSLK chassis. The navigator is to be equipped with a set of solid-state, multi-zone LiDAR sensors placed around the chassis. The goal of this research project is to make an autonomous system that:

  1. scans its environment, and
  2. navigates from one user-defined coordinate to another all while avoiding obstacles.

Methods

Here is how we achieve our goals:

  • We will reconstruct the robot's pose and its environment in memory using the graph-based Simultaneous Localization and Mapping (GraphSLAM) algorithm, fed by the multi-zone LiDAR ICs mentioned above.

  • We will also write a motion-planning algorithm, Rapidly-exploring Random Trees Star (RRT*), or an equivalent.

  • If time permits, we plan to incorporate sensor fusion via an extended Kalman filter (EKF), which should yield a better estimate of the robot's state without the drift expected from GraphSLAM alone. We intend to use additional sensors such as a MARG sensor and a GPS receiver.
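The EKF mentioned above boils down to a predict/update cycle. The sketch below is a minimal illustration using a linear 1D constant-velocity model with a noisy position measurement (so the linearization step is trivial); the real filter would substitute the robot's differential-drive motion model and the MARG/GPS measurement models, along with their Jacobians. All parameter values here are placeholders, not tuned for our hardware.

```python
# Minimal Kalman predict/update sketch. State is [position, velocity];
# only position is measured. In the full EKF, F and H would be the
# Jacobians of nonlinear motion/measurement models evaluated at x.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
Q = 1e-4 * np.eye(2)                   # process-noise covariance (placeholder)
H = np.array([[1.0, 0.0]])             # we observe position only
R = np.array([[0.05]])                 # measurement-noise covariance (placeholder)

def ekf_predict(x, P, F, Q):
    """Propagate the state estimate x and covariance P through the motion model."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, H, R):
    """Correct the prediction with measurement z."""
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, P = np.array([0.0, 0.0]), np.eye(2)
    true_pos, true_vel = 0.0, 1.0
    for _ in range(100):
        true_pos += true_vel * dt
        z = np.array([true_pos + rng.normal(0.0, 0.05)])
        x, P = ekf_predict(x, P, F, Q)
        x, P = ekf_update(x, P, z, H, R)
    print(x)  # estimate should settle near the true position and velocity
```

The same two functions drive the sensor-fusion idea: each sensor (odometry, MARG, GPS) contributes its own `H` and `R`, and updates are applied as measurements arrive.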

Materials

The materials involved include:

| Amount | Name | Description | Link |
| --- | --- | --- | --- |
| 1 | TI-RSLK Chassis | Two-Wheeled Platform | link |
| 4-6 | VL53L5CX | SparkFun Qwiic ToF Imager | link |
| 1 | GPS Receiver | N/A | link |
| 1 | ICM-20948 | MARG Sensor | link |

Online Resources

Checklist

In general, all tasks should follow the same guidelines:

  • get the code working in any IDE or simulator
  • simulate it in plain C in Visual Studio Code, MATLAB, or any other IDE, if possible
    • if simulating in MATLAB, use its C-code converter
  • optimize the functions once they work, if applicable

Main Tasks

  • RRT*

    • simulation in C and Python [^1]
    • MSP432 implementation (wk 13) [^1]
      • configure the correct I/O required [^2]
  • multi-zone LiDAR configuration (wk 10-12)

    • configure I2C with interrupts [^1]
    • configure with Timer_Ax interrupt hardware [^1]
    • implement a state machine to know when to read [^2]
  • Odometry (wk 10-11)

    • extended Kalman filter [^2]
    • simulation in C/Python [^1]
  • GraphSLAM (wk 10-12) [^1]

    • research [^2]
      • landmark selection [^1]
      • investigate what I/O this will require [^2]
  • combine all our code together (wk 14-17)
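The RRT* simulation task above can be prototyped before any MSP432 work. The following is a minimal, hypothetical sketch on a 2D plane with circular obstacles: it checks collisions at nodes only (a real planner would also check edges), and the step size, neighbor radius, and iteration count are placeholders. In the final system, the obstacle map would come from GraphSLAM rather than a hard-coded list.

```python
# Minimal RRT* sketch: grow a tree from start, choosing each new node's
# parent to minimize cost-to-come, then rewiring nearby nodes through it.
import math
import random

def rrt_star(start, goal, obstacles, x_max=10.0, y_max=10.0,
             step=0.5, radius=1.0, iters=2000, goal_tol=0.5, seed=1):
    random.seed(seed)
    nodes = [start]
    parent = {0: None}
    cost = {0: 0.0}

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def collision_free(p):
        return all(dist(p, (ox, oy)) > r for ox, oy, r in obstacles)

    for _ in range(iters):
        sample = (random.uniform(0, x_max), random.uniform(0, y_max))
        near_i = min(range(len(nodes)), key=lambda i: dist(nodes[i], sample))
        d = dist(nodes[near_i], sample)
        if d == 0.0:
            continue
        t = min(step / d, 1.0)  # steer at most `step` toward the sample
        new = (nodes[near_i][0] + t * (sample[0] - nodes[near_i][0]),
               nodes[near_i][1] + t * (sample[1] - nodes[near_i][1]))
        if not collision_free(new):
            continue
        # choose the parent minimizing cost-to-come (the "*" in RRT*)
        neighbors = [i for i in range(len(nodes)) if dist(nodes[i], new) <= radius]
        best = min(neighbors, key=lambda i: cost[i] + dist(nodes[i], new))
        new_i = len(nodes)
        nodes.append(new)
        parent[new_i] = best
        cost[new_i] = cost[best] + dist(nodes[best], new)
        # rewire: reroute neighbors through the new node when cheaper
        for i in neighbors:
            c = cost[new_i] + dist(new, nodes[i])
            if c < cost[i]:
                parent[i] = new_i
                cost[i] = c

    # backtrack from the tree node closest to the goal
    goal_i = min(range(len(nodes)), key=lambda i: dist(nodes[i], goal))
    if dist(nodes[goal_i], goal) > goal_tol:
        return None
    path, i = [], goal_i
    while i is not None:
        path.append(nodes[i])
        i = parent[i]
    return path[::-1]
```

A C port of the same structure (fixed-size node arrays instead of Python lists) is what would eventually run on the MSP432.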

Footnotes

[^1]: assigned to both Gian and Lucy

[^2]: assigned to Gian
