
Student Projects 2025

2025 was the second year the ASE project ran. Each student project was scoped so that it could be supervised by a Domain Expert in a specific field. Students worked alone or in pairs, and each had access to the ASE Labs for the entire duration of the project. Every group was given a Rover with a camera and basic autonomous driving capabilities out of the box; some used this as a baseline to compare against in their research, while others focused their research on specific parts of the provided hardware or software. The following are short summaries of this year's student projects.

Max Gallup

Founding Member, Ex-Hardware Lead and Ex-Project Administrator

Designing and Implementing an Indoor Positioning System for Autonomous Vehicles

Konstantinos standing in front of the presentation title slide

Project by Konstantinos Syrros, supervised by Dr. Natalia Silvis-Cividjian

This thesis explores the design and implementation of a Bluetooth Low Energy (BLE)-based positioning system and its effectiveness in providing accurate indoor positioning for the VU Autonomous Systems Engineering (ASE) Rover. The work covers BLE technology and the basics of positioning, and builds a beacon infrastructure as a custom software and hardware solution that allows the Rover to estimate its position through trilateration. This is achieved by implementing a custom service that scans and monitors the beacons, calculates the distance to each through a path loss model, and transmits the final position estimate at regular intervals. The system is versatile and extensible, in keeping with the purpose of this thesis: to serve as a baseline for future research and development of a complete and optimised solution. Formal experimentation covered eight configurations, ranging from static to moving tests and from simple to more complex track layouts, while applying different filtering methods. The results show an average error of 42 centimetres in a straight-line configuration in the best case, with accuracy dropping as beacon density decreases or tracks become more complex. The work closes by discussing the limitations of the implemented system, its applications in its current state within ASE and elsewhere, and future work that could improve it.
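To make the pipeline concrete, here is a minimal sketch of the kind of estimator the thesis describes, assuming a standard log-distance path loss model and linearized least-squares trilateration; the calibration constants and function names are illustrative, not the project's actual code:

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    # Log-distance path loss model: d = 10^((TxPower - RSSI) / (10 * n)).
    # tx_power is the calibrated RSSI at 1 m and n the environment
    # exponent; both would need calibration for the actual lab.
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, distances):
    # Linearized least-squares position estimate from beacon distances.
    # Subtracting the first beacon's circle equation from the others
    # removes the quadratic terms, leaving a linear system in (x, y).
    (x0, y0), d0 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos  # estimated (x, y)
```

With three or more beacons at known positions, the rover's service could feed filtered RSSI readings through these two steps at each update interval.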

Developing a Simulator for Autonomous Systems Engineering (ASE)

Egle and Simão standing in front of the presentation title slide

Project by Egle Jurkeviciute and João Maria Castela Simão, supervised by Dr. Ilias Gerostathopoulos

Robotic simulation accelerates the design, development, and validation processes. We developed a high-fidelity Rover simulator to support ASE, researchers, and NXP Cup teams, where each team competes to complete the fastest lap on a set track with their Rover. Our long-term vision is to build a digital twin, a physics-based model with real-time (hardware-in-the-loop) simulation that is continuously tested throughout the Rover's life cycle. While a true digital twin entails bidirectional data fusion over the entire life cycle, this paper focuses on the real-time simulation layer. Over several iterative development cycles, we captured requirements from ASE and NXP Cup teams, recreated the Rover's dynamics by adjusting mass, tyre friction, and motor constants, and created modular track and lighting assets. We assessed the simulator's efficacy in three dimensions: physical behaviour, track generation, and light responsiveness. Timing trials revealed that single-lap and reset-per-lap runs were close to real-world performance, whereas continuous laps diverged due to accumulated hardware-timing drift. Visual assessments show that most generated tracks match their physical counterparts, though a mis-modelled chicane piece still introduces a slight lateral shift. Although the simulator does not replicate real-world light conditions, its dynamic lighting still affects the Rover's steering and challenges the camera vision. By providing researchers and competition teams with a virtual environment that replicates the Rover's physical and software components, our simulator lays the groundwork for a full hardware-in-the-loop digital twin that optimises the testing and validation process.
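As an illustration of the tuning surface the authors describe, the sketch below groups the physics parameters they report adjusting into one configuration object; the names and default values are hypothetical, not the simulator's actual API:

```python
from dataclasses import dataclass, replace

@dataclass
class RoverPhysicsConfig:
    # Hypothetical grouping of the parameters the paper reports tuning;
    # the values are placeholders, not measurements of the real Rover.
    mass_kg: float = 1.2               # chassis, board, and battery
    tyre_friction: float = 0.9         # tyre-track friction coefficient
    motor_torque_constant: float = 0.05  # torque per amp, motor model input
    motor_max_rpm: float = 15000.0

# A calibration sweep might vary one parameter while holding the rest fixed:
candidates = [replace(RoverPhysicsConfig(), tyre_friction=f)
              for f in (0.7, 0.8, 0.9, 1.0)]
```

Each candidate configuration would then be compared against real lap timings, as in the paper's reset-per-lap trials.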

Assessing and Improving the Energy Efficiency of an Autonomous Rover

Ambra standing in front of the presentation title slide

Project by Ambra Mihu, supervised by Dr. Ivano Malavolta

This project contributes to the ASE Labs at Vrije Universiteit Amsterdam by assessing and improving the energy consumption of the ASE Rover, a battery-powered autonomous vehicle built to perform in racing competitions. Autonomous systems are increasingly used in industries that demand both precision and scalability, and their energy consumption is shaped by a complex combination of hardware, software, and control-level decisions. Accurately measuring and interpreting energy data is far from trivial, as results are easily affected by external conditions, internal variability, and system behaviour. This thesis aims to assess the current energy consumption patterns of the ASE Rover under different operational conditions, propose two targeted interventions (CPU governor adjustments and imaging resolution scaling) to improve efficiency, and evaluate their impact through systematic experimentation and analysis. A series of controlled experiments is conducted using a battery-powered autonomous rover with onboard sensors and camera-based vision for path detection. Three CPU governors (ondemand, powersave, and conservative) and three imaging resolutions (640x480, 352x288, and 160x120) are tested in different scenarios: idle, imaging, and driving. Power, energy, CPU temperature, and battery voltage data are collected at high sampling rates and analysed to assess energy savings, thermal stress, and data variability. The results show that the powersave governor significantly reduces mean energy consumption, mean power, and CPU temperature compared to the baseline ondemand governor and the other mode tested, conservative. However, powersave introduces greater variability and less consistent measurements, likely due to its fixed low-frequency behaviour. Imaging resolution is also a strong driver of energy consumption: while lower resolutions (160x120, 352x288) consume less energy, they limit the rover's driving accuracy. Because software optimisations target the default resolution, 640x480 shows the smallest variability in the data. A final driving experiment comparing the baseline against the powersave governor combined with the 352x288 resolution on the default pipeline shows an energy saving of 12.19%, highlighting the effectiveness of the interventions: targeted, low-complexity changes can meaningfully reduce energy consumption in robotic systems without compromising functional performance. The framework and findings presented in this thesis can guide future energy measurements in embedded and autonomous platforms.
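For readers who want to reproduce the two interventions, CPU governors on a Linux board such as the Debix are switched through the standard cpufreq sysfs interface, and the capture resolution can be requested through OpenCV's capture properties. The sketch below is a minimal illustration (governor changes require root), not the thesis's measurement harness:

```python
import glob

import cv2

def set_cpu_governor(governor: str) -> None:
    # Write "powersave", "ondemand", or "conservative" to every core's
    # cpufreq policy file; needs root privileges.
    pattern = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"
    for path in glob.glob(pattern):
        with open(path, "w") as f:
            f.write(governor)

def open_camera(width: int = 352, height: int = 288) -> cv2.VideoCapture:
    # Request one of the tested resolutions; the driver may round the
    # request to the nearest mode the camera actually supports.
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    return cap
```

Pairing these two knobs reproduces the shape of the final driving experiment (powersave governor plus 352x288 capture).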

Vision-Guided Racing Line Tracking

Luca and Isidora standing in front of the presentation title slide

Project by Isidora Jovanovic and Luca De Nobili, supervised by Dr. Natalia Silvis-Cividjian

This project investigates how different image processing algorithms and control strategies influence the ability of an autonomous rover to follow an optimal racing line and minimise lap time. By comparing Canny edge detection and morphological gradient methods for track midline extraction, paired with Pure Pursuit and Stanley controllers for path tracking, we show the trade-offs between accuracy and speed. Our findings reveal that although the morphological gradient offers faster processing, Canny edge detection delivers more precise midline extraction, resulting in a faster and more accurate racing line computation. The Stanley and Pure Pursuit controllers performed equally well in terms of time and accuracy, with no conclusive advantage in overall effectiveness. The study further demonstrates that following an optimised racing line, rather than the track centerline, substantially improves lap time and path efficiency. However, the effectiveness of each method is sensitive to tuning parameters, such as lookahead distance and controller gain, which must be adapted to the track layout for optimal results. These findings emphasise the importance of algorithm selection and parameter tuning in the design of high-performance autonomous racing systems.
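For reference, the two extraction methods compared here are both one-liners in OpenCV, and the Pure Pursuit law has a standard closed form. The sketch below, with illustrative thresholds and a naive row-scan midline, shows the shape of the comparison rather than the authors' exact pipeline:

```python
import math

import cv2
import numpy as np

def edges_canny(gray):
    return cv2.Canny(gray, 50, 150)  # thresholds are tuning parameters

def edges_morph_gradient(gray):
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)

def midline(edges):
    # Naive per-row midline: midpoint between the outermost edge pixels.
    pts = []
    for y, row in enumerate(edges):
        xs = np.flatnonzero(row)
        if xs.size >= 2:
            pts.append((int((xs[0] + xs[-1]) / 2), y))
    return pts

def pure_pursuit_steering(alpha, lookahead, wheelbase):
    # Classic Pure Pursuit law: steer toward a point `lookahead` metres
    # ahead that subtends heading error `alpha` (radians).
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)
```

The sensitivity the abstract mentions shows up directly here: both the Canny thresholds and the `lookahead` parameter must be retuned per track layout.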

From Go slow to C faster: Latency reduction on the ASE rover

Luis standing in front of the presentation title slide

Project by Luis-Enrique Felipe Sartorius, supervised by Dr. Atze van der Ploeg

Processing latency is a crucial factor in real-time embedded systems, particularly in scenarios like autonomous navigation that require quick reactions. This research investigates strategies for minimizing the end-to-end delay from image processing to actuation on the ASE rover, powered by a Debix Model A board. Although the system is deployed in a real-time setting, previous development efforts did not methodically assess and enhance timing performance. This work examines the rover's image processing pipeline in detail, revealing where significant performance gains can be achieved. The objective is to improve system responsiveness and create a solid starting point for future students working to reduce latency on embedded platforms.
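A typical first step in this kind of study is per-stage instrumentation of the pipeline, so that the slow stages stand out before any rewriting happens. A minimal Python sketch of such instrumentation (the stage names are hypothetical, not the rover's actual pipeline):

```python
import time
from contextlib import contextmanager

timings: dict = {}  # stage name -> list of durations in nanoseconds

@contextmanager
def stage(name: str):
    # Record wall-clock time spent inside each pipeline stage.
    t0 = time.perf_counter_ns()
    try:
        yield
    finally:
        timings.setdefault(name, []).append(time.perf_counter_ns() - t0)

# Hypothetical usage inside the control loop:
#   with stage("capture"):  frame = camera.read()
#   with stage("process"):  cmd = compute_steering(frame)
#   with stage("actuate"):  servo.write(cmd)
```

Aggregating `timings` over many loop iterations reveals which stage dominates the end-to-end latency and is the best candidate for optimization.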

A Comparison of Moving Object Detection and Removal Techniques in 2D Lidar SLAM

Collin standing in front of the presentation title slide

Project by Collin Matthew Adams, supervised by Drs. Kees Verstoep

Simultaneous Localization and Mapping, also known as SLAM, is a key technology in autonomous robotics, enabling rovers to build maps of unknown environments while tracking their own location. However, most SLAM algorithms assume static environments, limiting their effectiveness in real-world scenarios where moving objects are common. This thesis investigates and compares methods of detecting and removing moving objects in 2D lidar-based SLAM. The two methods compared are vision line detection and consecutive frames detection. Vision line detection removes outdated map points by checking whether they should be obstructing current lidar sensor visibility, while consecutive frames detection suppresses points in moving objects by comparing consecutive point clouds. A combined approach leveraging both methods is also explored. The results show that vision line detection performs well at identifying and removing moved objects, even when the rover leaves and revisits areas. Consecutive frames detection, while less effective at detecting moving objects, is better at preserving static map features. The combined method improves the effectiveness of moving object detection, but at the cost of often erroneously discarding small or narrow stationary features. This thesis concludes that no single approach is universally optimal; the choice of detection method should be guided by the specific movement patterns and environmental characteristics expected in deployment.
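As a rough illustration of the consecutive-frames idea, one can flag points in the current scan that have no close neighbour in the previous scan. The sketch below assumes both scans are already aligned into a common map frame; the threshold and function names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_moving_points(prev_scan: np.ndarray, curr_scan: np.ndarray,
                       threshold: float = 0.10) -> np.ndarray:
    # Both scans are (N, 2) point arrays in the same map frame. A current
    # point with no previous neighbour within `threshold` metres is
    # flagged as likely belonging to a moving object.
    tree = cKDTree(prev_scan)
    dists, _ = tree.query(curr_scan, k=1)
    return dists > threshold

# mask = flag_moving_points(prev, curr); static_points = curr[~mask]
```

The trade-off the thesis reports is visible in the threshold: a loose value misses slow movers, while a tight one erroneously discards small or narrow stationary features.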

Enabling Encoder-less Robot Navigation Through Scan-to-Map ICP: A Comparative Analysis

Sam standing in front of the presentation title slide

Project by Sam Patrick Daly, supervised by Drs. Kees Verstoep

This thesis investigates whether a LIDAR-based scan-to-map Iterative Closest Point (ICP) algorithm can enable accurate navigation for mobile robots without wheel encoders. Motivated by our university's encoder-less racing rover, we evaluate three navigation approaches using benchmark datasets with ground truth. Our results show that pure odometry using real wheel encoders accumulates 49.63m of drift over a 150m trajectory (33% error). Remarkably, scan-to-map ICP without any odometry achieves only 8.36m of drift (5.6% error), an 83.2% improvement. The hybrid approach combining both sensors achieves 1.38m of drift (0.9% error). Through systematic parameter optimization over 600 combinations, we identify settings that achieve 5.94ms processing time while maintaining accuracy. Cross-dataset validation confirms that our approach generalizes well. Notably, we discover instances where pure ICP outperforms the hybrid approach, indicating wheel slip events where odometry provides incorrect information. These findings demonstrate that wheel encoders are not mandatory for practical indoor robot navigation, with important implications for cost-sensitive applications in education, research, and service robotics.
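For intuition, a bare-bones 2D scan-to-map ICP iteration looks roughly like the sketch below: nearest-neighbour matching against the map followed by a closed-form SVD (Kabsch) alignment. Production variants add correspondence rejection and the parameter tuning the thesis optimizes; this is an illustration, not the thesis implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(scan: np.ndarray, map_pts: np.ndarray, iters: int = 20):
    # Estimate rotation R and translation t mapping `scan` (N, 2) into
    # the map frame by alternating matching and closed-form alignment.
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(map_pts)
    for _ in range(iters):
        moved = scan @ R.T + t
        _, idx = tree.query(moved)           # nearest map point per scan point
        matched = map_pts[idx]
        mu_s, mu_m = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:            # guard against a reflection
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        # Compose the incremental correction with the running estimate.
        R, t = dR @ R, dR @ (t - mu_s) + mu_m
    return R, t
```

Parameters such as the iteration count, matching distance cutoff, and map resolution are exactly the kind of settings the thesis sweeps over 600 combinations.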

Comparative Evaluation of Deep Learning Models for Lane Detection in Autonomous Driving

Dimitri standing in front of the presentation title slide

Project by Dimitri Liauw, supervised by Dr. Natalia Silvis-Cividjian

This thesis evaluates deep learning models for lane detection by balancing accuracy and efficiency. Three models, U-Net, ENet, and LaneNet, are trained and evaluated. All models are trained on the TuSimple dataset, which contains real-world roads and is considered a research standard in the field of lane detection. In addition, a custom ASE dataset was captured and labeled in the ASE lab of Vrije Universiteit Amsterdam. The ASE lab uses the same lanes as the NXP Cup, so the dataset consists of white lanes with black lane lines under three lighting conditions: bright sunlight, artificial light, and low light. This dataset helps assess the performance of the models in varying lighting conditions and environments on small embedded systems. The results indicate that the implementations of the models perform consistently on both datasets, with higher accuracy on the ASE dataset. ENet was overall the most accurate and efficient model, requiring the fewest parameters and the least storage and training time. LaneNet is an attractive option when information on lane instances is needed and was almost as accurate and efficient as ENet. U-Net is still an accurate model, but is the least efficient. These findings make ENet an attractive option when balancing accuracy and efficiency on a resource-constrained embedded system for autonomous driving.
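Two of the quantities behind such comparisons, parameter count (a common efficiency proxy) and pixel accuracy, are straightforward to compute for any of the three architectures. A minimal PyTorch sketch, assuming segmentation masks as tensors:

```python
import torch

def count_parameters(model: torch.nn.Module) -> int:
    # Trainable parameter count, an efficiency proxy when comparing
    # U-Net, ENet, and LaneNet style models.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def pixel_accuracy(pred: torch.Tensor, target: torch.Tensor) -> float:
    # Fraction of pixels whose predicted lane/background label matches
    # the ground-truth mask.
    return (pred == target).float().mean().item()
```

Storage footprint and training time round out the efficiency picture, but parameter count alone already separates ENet from the heavier U-Net.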

Positioning and Navigation of an Autonomous Rover Using Top-View Visual Odometry

Alpdeniz standing in front of the presentation title slide

Project by Alpdeniz Sarici Hernandez, supervised by Dr. Natalia Silvis-Cividjian

This project explores real-time indoor localization and navigation of an autonomous rover using top-view visual odometry. By calibrating an overhead camera and applying homography transformations, the rover's position is accurately estimated and used to guide it to target coordinates. A custom navigation service integrates with the ASE rover pipeline to enable live path planning and control.
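In outline, the calibration step amounts to estimating a homography between overhead-camera pixels and floor coordinates from a few known reference points. A minimal OpenCV sketch, with illustrative coordinates rather than the project's actual calibration data:

```python
import cv2
import numpy as np

# Four reference points in the overhead image (pixels) and their known
# floor positions (metres); the values here are illustrative only.
img_pts = np.float32([[102, 74], [532, 80], [540, 400], [96, 396]])
floor_pts = np.float32([[0.0, 0.0], [3.0, 0.0], [3.0, 2.0], [0.0, 2.0]])

H, _ = cv2.findHomography(img_pts, floor_pts)

def pixel_to_floor(u: float, v: float) -> np.ndarray:
    # Map a detected rover pixel to floor coordinates via the homography.
    pt = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)
    return pt[0, 0]  # (x, y) in metres
```

With the rover's position expressed in floor coordinates, a navigation service can compute heading and distance to any target point for live path planning.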