Principal Authors: Lucero Aguilar-Larios & Gian Fajardo
Faculty Advisor: Dr. Shahnam Mirzaei
The scope of this project is to explore the fundamentals of autonomous navigation. As the world comes to rely on more autonomous systems, those systems must navigate effectively: they enable efficient, precise movement through complex environments without human intervention. They must also prioritize the safety of the human agents around them, and they should be able to traverse dangerous places where humans cannot survive. Ocean exploration, disaster response in war zones, and planetary exploration all benefit from autonomous systems. To do any of this, an autonomous system must first perceive its environment; from there, motion planning algorithms are applied.
Whereas other work uses platforms such as humanoids, which require different mathematical models to describe their states, our project simplifies the scope to test feasibility: we will build a two-wheeled robot navigator from the TI-RSLK chassis. This navigator will be equipped with a set of solid-state multi-zone LiDAR sensors placed around the chassis. The goal of this research project is to make an autonomous system that:
- scans its environment, and
- navigates from one user-defined coordinate to another, all while avoiding obstacles.
Here is how we achieve our goals:
- We will reconstruct the robot's pose and its environment in memory using the graph-based Simultaneous Localization and Mapping (GraphSLAM) algorithm, fed by the multi-zone LiDAR ICs mentioned above (see the pose-graph sketch after this list).
- We will also implement a motion planning algorithm, Rapidly-exploring Random Trees (RRT*) or an equivalent (see the RRT* sketch below).
- If time allows, we plan to incorporate sensor fusion via the Extended Kalman Filter (EKF), which should give a better state estimate without the drift expected from GraphSLAM alone. We intend to use additional sensors such as a MARG sensor and a GPS receiver (see the EKF sketch below).
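To see the core of GraphSLAM without the full 2-D machinery, here is a minimal sketch in plain C (the kind we could simulate in Visual Studio Code, per our guidelines below). It reduces the problem to one dimension: poses are scalars, odometry edges constrain consecutive poses, and one loop-closure edge ties the last pose back to the first. The measurement values and the Gauss-Seidel-style relaxation solver are illustrative placeholders, not our final implementation.

```c
#include <stdio.h>

#define N 5

int main(void)
{
    /* hypothetical noisy odometry: z[i] measures x[i+1] - x[i] */
    double z[N - 1] = { 1.1, 0.9, 1.2, 1.0 };
    /* hypothetical loop-closure edge: measures x[4] - x[0] */
    double z_loop = 4.0;

    /* initial guess: dead-reckon poses from odometry alone */
    double x[N] = { 0.0 };
    for (int i = 1; i < N; i++)
        x[i] = x[i - 1] + z[i - 1];

    /* Gauss-Seidel-style relaxation: move each pose to the average of
     * the positions its incident edges predict; pose 0 is the anchor */
    for (int it = 0; it < 100; it++) {
        for (int i = 1; i < N; i++) {
            double pred = x[i - 1] + z[i - 1];   /* edge from previous pose */
            double cnt = 1.0;
            if (i + 1 < N) {                     /* edge to next pose */
                pred += x[i + 1] - z[i];
                cnt += 1.0;
            }
            if (i == N - 1) {                    /* loop-closure edge */
                pred += x[0] + z_loop;
                cnt += 1.0;
            }
            x[i] = pred / cnt;
        }
    }

    for (int i = 0; i < N; i++)
        printf("pose %d: %.3f\n", i, x[i]);
    return 0;
}
```

The same structure carries over to the real robot: nodes become (x, y, θ) poses, edges become relative-pose measurements from odometry and LiDAR scan matching, and the relaxation loop becomes a sparse nonlinear least-squares solve.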
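Likewise, a compact sketch of the RRT* planner, again in plain C. It shows the two steps that distinguish RRT* from plain RRT: choosing the lowest-cost parent within a neighborhood, and rewiring neighbors through the new node when that shortens their paths. The unit-square workspace, the single circular obstacle, and every parameter (step size, neighborhood radius, iteration count) are hypothetical.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define MAX_NODES 2000
#define STEP      0.05   /* steering step size (illustrative) */
#define RADIUS    0.15   /* choose-parent / rewire neighborhood */

typedef struct { double x, y; int parent; double cost; } Node;

static Node tree[MAX_NODES];
static int n_nodes = 0;

static double dist(double ax, double ay, double bx, double by)
{
    return hypot(ax - bx, ay - by);
}

/* hypothetical obstacle: a circle of radius 0.2 at (0.5, 0.5);
 * for brevity only sampled points are checked, not whole edges */
static int collision_free(double x, double y)
{
    return dist(x, y, 0.5, 0.5) > 0.2;
}

int main(void)
{
    srand(42);
    tree[n_nodes++] = (Node){ 0.05, 0.05, -1, 0.0 };  /* start pose */

    for (int it = 0; it < 5000 && n_nodes < MAX_NODES; it++) {
        /* 1. sample a random point in the unit-square workspace */
        double rx = (double)rand() / RAND_MAX;
        double ry = (double)rand() / RAND_MAX;

        /* 2. find the nearest existing node */
        int near = 0;
        for (int i = 1; i < n_nodes; i++)
            if (dist(tree[i].x, tree[i].y, rx, ry) <
                dist(tree[near].x, tree[near].y, rx, ry))
                near = i;

        /* 3. steer one step from the nearest node toward the sample */
        double d = dist(tree[near].x, tree[near].y, rx, ry);
        if (d < 1e-9)
            continue;
        double nx = tree[near].x + STEP * (rx - tree[near].x) / d;
        double ny = tree[near].y + STEP * (ry - tree[near].y) / d;
        if (!collision_free(nx, ny))
            continue;

        /* 4. choose-parent: connect to the neighbor minimizing path cost */
        int best = near;
        double c_best = tree[near].cost + dist(tree[near].x, tree[near].y, nx, ny);
        for (int i = 0; i < n_nodes; i++) {
            double di = dist(tree[i].x, tree[i].y, nx, ny);
            if (di < RADIUS && tree[i].cost + di < c_best) {
                best = i;
                c_best = tree[i].cost + di;
            }
        }
        tree[n_nodes] = (Node){ nx, ny, best, c_best };

        /* 5. rewire: reroute neighbors through the new node if cheaper
         * (descendant costs are left stale here for brevity) */
        for (int i = 0; i < n_nodes; i++) {
            double di = dist(tree[i].x, tree[i].y, nx, ny);
            if (di < RADIUS && c_best + di < tree[i].cost) {
                tree[i].parent = n_nodes;
                tree[i].cost = c_best + di;
            }
        }
        n_nodes++;
    }

    printf("tree grew to %d nodes\n", n_nodes);
    return 0;
}
```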
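For the stretch goal, a sketch of an EKF that fuses wheel odometry with a GPS-like position fix. The state is the planar pose (x, y, θ); the predict step pushes odometry inputs through the nonlinear unicycle model and its Jacobian, and the update step corrects with an (x, y) measurement. All noise magnitudes and the simulated inputs in main() are placeholders; fusing the MARG heading would add a second, similar update step.

```c
#include <stdio.h>
#include <math.h>

static double X[3];       /* state: x, y, heading */
static double P[3][3];    /* state covariance     */

static void mat3_mul(double C[3][3], const double A[3][3], const double B[3][3])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
}

/* predict: propagate state and covariance through the nonlinear
 * unicycle model, linearized via the Jacobian F */
static void ekf_predict(double v, double w, double dt)
{
    double th = X[2];
    X[0] += v * dt * cos(th);
    X[1] += v * dt * sin(th);
    X[2] += w * dt;

    double F[3][3] = {
        { 1, 0, -v * dt * sin(th) },
        { 0, 1,  v * dt * cos(th) },
        { 0, 0,  1                },
    };
    double Ft[3][3], FP[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            Ft[i][j] = F[j][i];
    mat3_mul(FP, F, P);
    mat3_mul(P, FP, Ft);

    P[0][0] += 0.001;  P[1][1] += 0.001;  P[2][2] += 0.0005;  /* process noise Q */
}

/* update: fuse a position fix z = (zx, zy); H = [I2 0], so the
 * innovation covariance S is the top-left 2x2 block of P plus R */
static void ekf_update(double zx, double zy, double r)
{
    double S00 = P[0][0] + r, S01 = P[0][1];
    double S10 = P[1][0],     S11 = P[1][1] + r;
    double det = S00 * S11 - S01 * S10;
    double I00 =  S11 / det, I01 = -S01 / det;
    double I10 = -S10 / det, I11 =  S00 / det;

    double K[3][2];                          /* Kalman gain = P H^T S^-1 */
    for (int i = 0; i < 3; i++) {
        K[i][0] = P[i][0] * I00 + P[i][1] * I10;
        K[i][1] = P[i][0] * I01 + P[i][1] * I11;
    }

    double yx = zx - X[0], yy = zy - X[1];   /* innovation */
    for (int i = 0; i < 3; i++)
        X[i] += K[i][0] * yx + K[i][1] * yy;

    double Pn[3][3];                         /* P = (I - K H) P */
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            Pn[i][j] = P[i][j] - K[i][0] * P[0][j] - K[i][1] * P[1][j];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            P[i][j] = Pn[i][j];
}

int main(void)
{
    P[0][0] = P[1][1] = P[2][2] = 1.0;       /* uncertain initial pose */
    for (int k = 0; k < 50; k++) {
        ekf_predict(0.1, 0.05, 0.1);         /* hypothetical odometry  */
        if (k % 10 == 9)
            ekf_update(X[0] + 0.02, X[1] - 0.01, 0.25);  /* fake GPS fix */
    }
    printf("pose: %.3f %.3f %.3f\n", X[0], X[1], X[2]);
    return 0;
}
```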
The materials involved include:
| Amount | Name            | Description               | Link |
|--------|-----------------|---------------------------|------|
| 1      | TI-RSLK Chassis | Two-wheeled platform      | link |
| 4-6    | VL53L5CX        | SparkFun Qwiic ToF Imager | link |
| 1      | GPS Receiver    | N/A                       | link |
| 1      | ICM-20948       | MARG sensor               | link |
- Path Planning with A* and RRT | Autonomous Navigation, Part 4 (YouTube)
- path finding - rapid exploring random trees (Stack Overflow)
- Understanding SLAM Using Pose Graph Optimization | Autonomous Navigation, Part 3 (YouTube)
- A brief introduction to GraphSLAM, by Shiva Chandrachary (Medium)
- Problems with MSP432
- Research for an alternative plan using the RPLIDAR C1
  - youyeetoo RPLIDAR C1 Fusion LiDAR DToF Laser Scanner, 360-degree, 12 m range (Amazon)
  - Low Cost Lidar RPLIDAR C1 - Unboxing & Review (YouTube)
  - RPLIDAR C1 with Raspberry Pi 4 and ROS2 (Hackaday.io)
  - RPLIDAR C1 Documentation (r/arduino)
In general, all tasks should follow the same guidelines:
- develop the code in any IDE or simulator
- simulate it in plain C in Visual Studio Code, MATLAB, or any other IDE, if possible
- if simulating in MATLAB, use its C-code generator (MATLAB Coder) to produce the deployable C
- optimize the functions once done, if applicable
- RRT*
- multi-zone LiDAR configuration (wk 10-12)
- Odometry (wk 10-11); see the odometry sketch after this list
- GraphSLAM (wk 10-12)
- combine all our code together (wk 14-17)
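For the odometry task, a minimal differential-drive dead-reckoning sketch in C: it converts encoder tick counts into wheel travel and integrates the chassis pose at the midpoint heading. The encoder resolution, wheel radius, and wheel-base values are placeholders, not measured TI-RSLK numbers.

```c
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TICKS_PER_REV   360.0   /* hypothetical encoder resolution    */
#define WHEEL_RADIUS_M  0.035   /* hypothetical wheel radius [m]      */
#define WHEEL_BASE_M    0.14    /* hypothetical wheel separation [m]  */

typedef struct { double x, y, theta; } Pose;

/* update the pose from tick counts accumulated since the last call */
static void odometry_update(Pose *p, long dticks_l, long dticks_r)
{
    double dl = 2.0 * M_PI * WHEEL_RADIUS_M * dticks_l / TICKS_PER_REV;
    double dr = 2.0 * M_PI * WHEEL_RADIUS_M * dticks_r / TICKS_PER_REV;
    double dc  = 0.5 * (dl + dr);             /* travel of chassis center */
    double dth = (dr - dl) / WHEEL_BASE_M;    /* change in heading        */

    /* integrate at the midpoint heading for better accuracy */
    p->x     += dc * cos(p->theta + 0.5 * dth);
    p->y     += dc * sin(p->theta + 0.5 * dth);
    p->theta += dth;
}

int main(void)
{
    Pose p = { 0 };
    for (int i = 0; i < 100; i++)
        odometry_update(&p, 10, 12);   /* right wheel slightly faster */
    printf("x=%.3f  y=%.3f  theta=%.3f rad\n", p.x, p.y, p.theta);
    return 0;
}
```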