We have exciting new projects, not only for SA and MA students, but also for other team members looking for a fun and interesting project alongside their ETH obligations. If you are interested in an SA or MA, please contact one of our PhDs to discuss project ideas. For other projects, check out the list of current projects below and contact one of our TAs for more information.

If you are interested in our previous work, visit our Project Report site, where you can download reports from past projects.

Bachelor / Semester / Master Projects

Please get in touch if you would like to do a project with us during HS23. The list of available projects will be updated soon.

  1. Long Term Tracking for Similarly Looking Objects in RoboCup Soccer SPL Games
  2. Distributed Optimization for Multi-Robot Systems Cooperation in RoboCup [closed]

If you are interested in one of the projects above, please do not hesitate to reach out to the corresponding supervisors. For general questions about projects, please send an email to nomadz@list.ee.ethz.ch or submit your application here.

Freelancer Projects

This is a glimpse of the projects we deal with as members of the team. Every year new challenges arise, and our job is to find solutions to them, exploiting our capabilities and implementing the newest technological advancements to make our robots work better and better.

If you are interested in one of those or have other ideas, please contact us for more information. You can also join the team to immediately dive into the RoboCup experience.

Control

  1. System Identification for Stabilization

    Our walking controller is based on the inverted pendulum assumption. Given this first-order model approximation, a PID controller produces joint commands for the AnkleRoll and AnklePitch (check here for a visual reference); however, this is a purely model-free controller with heuristic tuning. The goal of this project is to improve the control effort by including model information.

    Using both geometrical assumptions and system identification techniques, we plan to obtain an accurate model that is useful not only for control but also for building a simulation based on it. A good mathematical model will then allow us to move from a model-free controller (PID) to a model-based one (inverse dynamics + PID correction).

    Ground truth is obtained either with filtered IMU data or with an external MoCap system.

    The model may depend on additional factors such as temperature, battery level, leg extension, and so on, so that it can be adapted online to varying conditions.
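    As a minimal sketch of the idea (hypothetical code, not from our framework: the single-joint model, signal names, and gains are illustrative assumptions), one could fit a discrete-time linear model of the pitch dynamics to logged data via least squares and add a model-based feedforward term on top of the PID correction:

    ```python
    # Hypothetical sketch: fit x[k+1] = A x[k] + B u[k] with x = (pitch, pitch_rate)
    # from logged data, then combine a model-based feedforward with a PID correction.
    import numpy as np

    def identify_linear_model(pitch, pitch_rate, u):
        """Least-squares fit of a discrete-time linear model from logged signals.

        pitch, pitch_rate: filtered IMU (or MoCap) measurements; u: the commanded
        AnklePitch offset, all sampled at the controller rate.
        """
        X = np.column_stack([pitch[:-1], pitch_rate[:-1]])     # x[k]
        X_next = np.column_stack([pitch[1:], pitch_rate[1:]])  # x[k+1]
        Phi = np.column_stack([X, u[:-1]])                     # [x[k], u[k]]
        AB, *_ = np.linalg.lstsq(Phi, X_next, rcond=None)      # Phi @ AB ~ X_next
        A, B = AB[:2].T, AB[2:].T                              # A: 2x2, B: 2x1
        return A, B

    def model_based_command(x, x_ref, A, B, pid, dt=0.01):
        """One-step feedforward from the identified model plus a PID correction.

        pid holds the integrator state "i" and previous error "e"; the gains
        below are placeholders, not tuned values.
        """
        u_ff = (np.linalg.pinv(B) @ (x_ref - A @ x)).item()    # feedforward
        err = x_ref[0] - x[0]
        pid["i"] += err * dt
        deriv = (err - pid["e"]) / dt
        pid["e"] = err
        return u_ff + 1.0 * err + 0.1 * pid["i"] + 0.05 * deriv
    ```

    Replacing the one-step feedforward with a proper inverse-dynamics term, or an LQR designed on the identified model, would be the natural next step.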

Behavior & Decision-Making

  1. Collaborative Behavior

    The current behavior is relatively old and contains many issues that have yet to be solved; until now, however, there were always more pressing problems to address.

    Now with the SelfLocator, Ball Detection, Robot Detection, and most basic motion working reasonably well, the implementation of more advanced behaviors starts to make sense.

    The final goal of this project is to enable the robots to pass the ball to each other and to incorporate this skill into offensive and defensive gameplay. In theory this is fairly simple, but in practice it is very hard to execute: most kicks are inaccurate, opponents are usually all over the place while the maneuver is being executed, and, most importantly, none of the estimated states and information the robot has are 100% correct.
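    To make the uncertainty argument concrete, a simple Monte-Carlo score for a pass could look like the following toy sketch (all numbers, such as the kick-direction noise and the intercept radius, are invented for illustration):

    ```python
    # Toy sketch: Monte-Carlo estimate of pass success under kick-direction noise.
    import numpy as np

    def _dist_to_segment(p, a, b):
        """Distance from point p to the segment a -> b."""
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(a + t * ab - p)

    def pass_success_prob(ball, mate, opponents, kick_sigma_rad=0.15,
                          intercept_radius=0.5, n_samples=500, seed=0):
        ball, mate = np.asarray(ball, float), np.asarray(mate, float)
        opponents = [np.asarray(o, float) for o in opponents]
        rng = np.random.default_rng(seed)
        aim = np.arctan2(mate[1] - ball[1], mate[0] - ball[0])
        dist = np.linalg.norm(mate - ball)
        ok = 0
        for _ in range(n_samples):
            ang = aim + rng.normal(0.0, kick_sigma_rad)  # noisy kick direction
            end = ball + dist * np.array([np.cos(ang), np.sin(ang)])
            d_min = min((_dist_to_segment(o, ball, end) for o in opponents),
                        default=np.inf)
            # Success: the ball ends up near the teammate and no opponent gets
            # within intercept range of its path.
            if np.linalg.norm(end - mate) < 0.7 and d_min > intercept_radius:
                ok += 1
        return ok / n_samples
    ```

    Such a score could then be compared across candidate receivers, with the state-estimation uncertainty folded into the sampling.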

  2. Improved Obstacle Avoidance

    When a robot is walking to a specific destination, other robots may be in the way. The robot should then plan and follow a path around these obstacles so as to reach the destination efficiently.

    Based on the current computer vision pipeline, a robot can detect other robots and estimate their locations with its own vision. A robot can also obtain information about other robots from the shared knowledge (teammates' detections). However, since wireless communication is limited by the SPL Rules 2022 (Chapter 2.5.2 – Wireless Communications) [1], the shared knowledge may not be fully up to date, making it less reliable than the robot's own vision.

    In the current implementation, whenever a detected robot lies between the current position and the destination, a waypoint is set. The robot first moves to the waypoint (a location together with a target orientation) and then heads for the destination. This raises two questions: 1. Is the waypoint optimal? (we will call this part obstacle avoidance) 2. How does the robot walk to the waypoint efficiently? (e.g. see the figure below; we will call this part path following)

    This project aims to use all available sensors (upper and lower cameras, sonars, touch sensors) to obtain a global and accurate description of the environment, so that avoidance and path-following solutions can be computed efficiently.

    A possible expansion would be to build a global map, with an occupancy grid, uncertainties, safe positions, target positions, and so on, and to derive optimal behaviors based on it.
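    As a minimal sketch of what this expansion could look like (the grid resolution and the obstacle inflation are hypothetical, not team parameters), an A* search over an occupancy grid yields a waypoint sequence around detected robots:

    ```python
    # Minimal sketch: A* over a 2-D occupancy grid (0 = free, 1 = occupied).
    import heapq
    import numpy as np

    def plan(grid, start, goal):
        """8-connected A*; start/goal are (row, col) cells. Returns a cell path."""
        h = lambda a, b: np.hypot(a[0] - b[0], a[1] - b[1])  # Euclidean heuristic
        open_set = [(h(start, goal), start)]
        came = {start: None}
        cost = {start: 0.0}
        while open_set:
            _, cur = heapq.heappop(open_set)
            if cur == goal:  # walk back through the parent links
                path = [cur]
                while came[cur] is not None:
                    cur = came[cur]
                    path.append(cur)
                return path[::-1]
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx == dy == 0:
                        continue
                    nxt = (cur[0] + dx, cur[1] + dy)
                    if not (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]):
                        continue
                    if grid[nxt]:
                        continue  # occupied cell
                    ng = cost[cur] + np.hypot(dx, dy)
                    if ng < cost.get(nxt, float("inf")):
                        cost[nxt] = ng
                        came[nxt] = cur
                        heapq.heappush(open_set, (ng + h(nxt, goal), nxt))
        return None  # no path found

    # Example: a 6 m x 9 m field at 10 cm resolution, one inflated obstacle.
    # grid = np.zeros((60, 90), dtype=np.uint8); grid[25:35, 40:50] = 1
    # path = plan(grid, start=(30, 10), goal=(30, 80))
    ```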

Computer Vision

  1. Team Detection

    Detecting robots is one of the main vision tasks in a robot soccer perception stack. Currently, our framework uses a single-stage object detector based on the SSD meta-architecture to detect robots, but at the moment we are not able to determine the team a detected robot belongs to, i.e. our own team or the opponent team. While this is already useful for obstacle avoidance, it is not good enough to implement collaborative behaviors.

    In this project, we will design and implement an approach to determine the team of a detected player based on its jersey color. We will start with classic computer vision techniques using color-based heuristics and eventually move to a more robust learning-based approach.
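    A minimal sketch of the color-based starting point is given below; the HSV ranges and the assumption that the jersey occupies the upper half of the bounding box are placeholders that would need per-venue calibration:

    ```python
    # Hedged sketch: classify a detected robot's team from jersey pixel colors.
    import cv2
    import numpy as np

    # Hypothetical jersey hue ranges in OpenCV HSV (H in [0, 179]).
    OWN_TEAM_RANGE = ((100, 80, 80), (130, 255, 255))  # e.g. blue jerseys
    OPPONENT_RANGE = ((0, 80, 80), (10, 255, 255))     # e.g. red jerseys

    def classify_team(bgr_image, box):
        """box = (x, y, w, h) from the detector; returns 'own'/'opponent'/'unknown'."""
        x, y, w, h = box
        # Assume the jersey sits roughly in the upper half of the bounding box.
        crop = bgr_image[y : y + h // 2, x : x + w]
        hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
        own = cv2.inRange(hsv, np.array(OWN_TEAM_RANGE[0]), np.array(OWN_TEAM_RANGE[1]))
        opp = cv2.inRange(hsv, np.array(OPPONENT_RANGE[0]), np.array(OPPONENT_RANGE[1]))
        n_own, n_opp = cv2.countNonZero(own), cv2.countNonZero(opp)
        # Only commit if a minimum fraction of the crop is jersey-colored.
        if max(n_own, n_opp) < 0.05 * crop.shape[0] * crop.shape[1]:
            return "unknown"
        return "own" if n_own > n_opp else "opponent"
    ```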

  2. Sim2Real Translation

    Deep learning models have become the standard for perception in SPL, especially for tasks such as object detection and scene understanding. However, collecting high-quality data with the NAO's hardware and annotating it is a tedious and time-consuming task.

    One possible workaround is to use synthetic data. State-of-the-art game engines such as Unreal are capable of rendering highly photorealistic scenes, which can be used to generate synthetic datasets. However, training only on synthetic data might result in poor generalization.

    In recent years, it has been shown that deep learning can be used to transfer the properties of one dataset to another with the same labels but a different domain, e.g. real and synthetic robot soccer scenes. In particular, models belonging to the CycleGAN family have shown very promising results.

    The goal of this project is to set up a pipeline that automatically generates annotated synthetic training data for computer vision models from robot soccer scenes rendered e.g. with Unreal Engine, and makes it more “realistic” using a style transfer model trained on publicly available datasets.
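    The translation step of such a pipeline could look roughly like the sketch below, assuming a CycleGAN generator (synthetic to real) has already been trained and exported as TorchScript; the file layout and the 256x256 resolution are illustrative assumptions:

    ```python
    # Sketch of the translation step only; the generator file and directory
    # layout are assumptions, not part of an existing pipeline.
    import glob
    import os
    import torch
    from PIL import Image
    from torchvision import transforms

    to_tensor = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.ToTensor(),
        transforms.Normalize((0.5,) * 3, (0.5,) * 3),  # to [-1, 1], as CycleGAN expects
    ])

    def translate_dataset(generator_path, synthetic_dir, out_dir):
        gen = torch.jit.load(generator_path).eval()
        with torch.no_grad():
            for path in glob.glob(os.path.join(synthetic_dir, "*.png")):
                x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
                fake_real = gen(x)  # G: synthetic -> "real" style
                # Undo the [-1, 1] normalization before saving.
                img = ((fake_real.squeeze(0) * 0.5 + 0.5).clamp(0, 1) * 255).byte()
                Image.fromarray(img.permute(1, 2, 0).numpy()).save(
                    os.path.join(out_dir, os.path.basename(path)))
    ```

    Since the translation (approximately) preserves scene geometry, the annotations produced by the rendering engine remain valid for the translated images.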

System

  1. Automatic Data Labelling

    The ball and line detection models are very dependent on the field and lighting conditions. It is thus necessary to fine-tune our models on a new dataset for each new field.

    However, creating a new dataset for every field is not feasible if done manually.

    In this project, we look into labelling a dataset automatically using large computer vision models, and into knowledge distillation for training a smaller model. Since labelling is done offline, there is no restriction on the number of parameters the teacher model may have; it can focus on accuracy and metrics rather than latency.

    The output of the pre-trained teacher would be used as pseudo-labels for the new dataset, with some (ideally no) human intervention to correct the dataset in case of errors.
    The smaller model, which is deployed on the robots, would then be trained on the data generated from the pseudo-labels.

    The teacher we are considering is a semantic segmentation model that performs pixel-wise classification of images captured by the robots on the new field. The student can be either an object detection model or a semantic segmentation model; in the former case, the object detection bounding boxes can be derived from the teacher's segmentation masks.
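    A rough sketch of the pseudo-labelling step is shown below, using torchvision's DeepLabV3 merely as a stand-in for the large teacher (the real teacher would be fine-tuned on the relevant classes such as ball and field lines):

    ```python
    # Sketch: a large segmentation teacher produces pseudo-labels offline;
    # boxes can be derived from the masks to train a detection student instead.
    import numpy as np
    import torch
    from torchvision.models.segmentation import deeplabv3_resnet50

    teacher = deeplabv3_resnet50(weights="DEFAULT").eval()  # stand-in teacher

    @torch.no_grad()
    def pseudo_label(batch):
        """batch: float tensor (N, 3, H, W), normalized as the teacher expects.
        Returns per-pixel class ids (N, H, W) to train the small student on."""
        logits = teacher(batch)["out"]  # (N, C, H, W)
        return logits.argmax(dim=1)

    def box_from_mask(mask, class_id):
        """Single bounding box enclosing all pixels of one class (a simplification;
        connected components would be needed to separate multiple instances)."""
        ys, xs = np.where(mask == class_id)
        if len(xs) == 0:
            return None
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
    ```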