Mini-RoboCup

Huai-Ping Lee and Lei Wei

1. Motivation and Background

RoboCup is an international robot soccer competition founded in 1993. It is intended to promote research and education in autonomous robotics and artificial intelligence, and many new techniques in real robotics have originated from this competition. Depending on the type of robot, RoboCup is organized into different leagues and divisions; teams using the AIBO belong to the Four-Legged League in the RoboCupSoccer division. The full name of the competition is “Robot Soccer World Cup.” Its official goal is interesting and optimistic:

By mid-21st century, a team of fully autonomous humanoid robot soccer players shall win the soccer game, complying with the official rule of the FIFA, against the winner of the most recent World Cup.

Maybe it is too optimistic.

Since we only have two AIBOs in our department, we are reducing the problem to a two-player soccer game, with one goalkeeper and one striker; therefore it is called “Mini-RoboCup.” Although this seems simple relative to a full soccer team, it is the basis of everything, and therefore a good starting point for building a real RoboCup team. Because the project spans the whole robotics system, we think it will be a great opportunity for us to put nearly all of the course materials into practice. We also hope to contribute to one or more of these areas while building our system.

2. State of the Art, Challenges and Our Plans

We plan to divide the whole project into several key subtasks:

Vision:

The ability of the robot to sense its environment is a prerequisite for any decision making on the AIBO. As such, we place a strong emphasis on the vision component of our team. The vision module processes the images taken by the CMOS camera on the AIBO; it identifies colors in order to recognize objects, which are then used to localize the robot and to plan its operation.

Visual processing follows the established procedure of color segmentation followed by object recognition. Color segmentation is the process of classifying each pixel in an input image as belonging to one of a number of predefined color classes, based on knowledge of the ground truth in a few training images. Although the fundamental methods employed in this module have been applied previously (both in RoboCup and in other domains), we hope to build it from scratch, like all the other modules in our project.
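The pixel-classification step can be pictured as a set of threshold boxes in color space, one per color class, learned from labeled training pixels. Below is a minimal sketch of this idea; the class names and pixel values are illustrative, not an actual calibration, and real RoboCup vision systems typically precompute the classification into a lookup table for speed.

```python
import numpy as np

def learn_color_cubes(labeled_pixels):
    """For each color class, compute an axis-aligned bounding box
    (per-channel min/max) of its labeled training pixels."""
    cubes = {}
    for label, pixels in labeled_pixels.items():
        p = np.asarray(pixels)
        cubes[label] = (p.min(axis=0), p.max(axis=0))
    return cubes

def segment(image, cubes):
    """Classify each pixel of an (H, W, 3) image into a color class,
    or None if it falls inside no class's box. If boxes overlap,
    later classes overwrite earlier ones."""
    h, w, _ = image.shape
    labels = np.full((h, w), None, dtype=object)
    for label, (lo, hi) in cubes.items():
        mask = np.all((image >= lo) & (image <= hi), axis=2)
        labels[mask] = label
    return labels
```

Object recognition then works on the label image, e.g. grouping connected “orange” pixels into a ball candidate.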

Locomotion and Movement:

Enabling the AIBOs to move precisely and quickly is just as essential to the overall RoboCup task as vision. In this part, we will design our approach to AIBO movement, including walking and the interfaces from the walk to the higher-level control modules.

The AIBO comes with a stable but slow walk. From watching videos of past RoboCups and reading the available technical reports, it is clear that a fast walk is an essential part of any RoboCup team. Some teams are actively developing and comparing several approaches to walking, including one based on tracing out simple shapes with the aid of a new, efficient solution to the inverse kinematics problem; one based on hill climbing in joint space with respect to an internal simulated model of the robot's physical structure; and one based on decomposing an arbitrary trajectory into four equivalent component steps. Each of these approaches has yielded a parameterized walk with different properties, and most teams are in the process of optimizing the parameters and exploring their relative strengths.
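The hill-climbing idea mentioned above can be sketched as a greedy local search over the walk parameters. In this illustrative sketch the scoring function is a stand-in quadratic; on a real robot it would be the measured (or simulated) walking speed, and the parameters would be gait quantities such as step length and frequency.

```python
import random

def hill_climb(params, score, step=0.1, iterations=200, seed=0):
    """Greedy local search: perturb the current best parameter set
    and keep the perturbation only if it improves the score."""
    rng = random.Random(seed)
    best = list(params)
    best_score = score(best)
    for _ in range(iterations):
        # Perturb every parameter by a small random amount.
        candidate = [p + rng.uniform(-step, step) for p in best]
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best, best_score
```

This only finds a local optimum, which is why teams also compare structurally different walks rather than tuning a single one.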

The walk is perhaps the most feasible component to borrow from another team's code base, since it can be separated out into its own module. Nonetheless, we hope to create our own walk, in the hope of ending up with something at least as good as, if not better than, that of other teams, while retaining the ability to fine-tune it on our own. We will also design the robot's kick movements based on several current approaches: the Head Kick, Chest Push Kick, Arms Together Kick, and Fall Forward Kick.

Localization:

Since it requires at least vision, and preferably locomotion, to already be in place, localization will be a relatively late emphasis in our efforts, to be undertaken if we have enough time. For self-localization, the Austin Villa team implemented a Monte-Carlo localization approach similar to the one used by the German Team. This approach uses a collection of particles to estimate the global position and orientation of the robot. These estimates are updated by visual percepts of fixed landmarks and by odometry data from the robot's movement module; the particles are averaged to find a best guess of the robot's pose.
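The motion update, sensor update, and resampling steps of Monte-Carlo localization can be sketched as follows, reduced to one dimension for clarity: a single landmark at a known position and noisy range observations. The real problem estimates a full pose (x, y, heading) from multiple beacons, but the structure is the same; all names and values here are illustrative.

```python
import math
import random

LANDMARK = 5.0  # known landmark position on a 1-D field

def mcl_step(particles, moved, observed_dist, noise=0.5, rng=random):
    # 1. Motion update: shift every particle by the odometry estimate,
    #    plus some noise to model slippage.
    particles = [p + moved + rng.gauss(0.0, noise * 0.2) for p in particles]
    # 2. Sensor update: weight each particle by how well its predicted
    #    distance to the landmark matches the observation
    #    (a Gaussian likelihood).
    weights = [math.exp(-((abs(LANDMARK - p) - observed_dist) ** 2)
                        / (2.0 * noise ** 2)) for p in particles]
    if sum(weights) == 0.0:
        weights = [1.0] * len(particles)  # degenerate case: resample uniformly
    # 3. Resample: draw a new particle set in proportion to the weights.
    return rng.choices(particles, weights=weights, k=len(particles))

def estimate(particles):
    """Best guess of the pose: the average of the particles."""
    return sum(particles) / len(particles)
```

For example, a robot standing still at position 2 that repeatedly observes a distance of 3 to the landmark will see its particle cloud collapse around position 2.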

3. Goals

A basic RoboCup system consists of four main parts:

1. Vision: recognize the ball, the goal, the beacons, and other robots.

2. Localization: determine the current position of the robot and of other objects on the field.

3. Locomotion: walking, kicking, blocking the ball, etc.

4. Behavior: high level soccer playing behaviors.

We are going to build our Mini-RoboCup system bottom-up from these four sub-systems. The most basic (but not easy) ones are vision and locomotion, and they will be built first. We plan to have a primitive version without localization: if the robots can chase the ball and kick it toward the goal, they can already play the game, although in an awkward way. Localization will be added later.

Here is a tentative schedule for this semester:

By Nov. 7, 2006: complete the vision part, including image segmentation and detection of the ball and goal.

By Dec. 10, 2006: build a set of locomotion primitives. Since a walking behavior is already available, we will focus on kicking and goalkeeping. Some high-level game-play behavior will also be coded, so that at least we can play a primitive game. Localization will be added if time permits.

The long-term goal of this project is to build a full RoboCup team, adding inter-robot communication mechanisms, and possibly some strategies of the game.

4. References

1. Nardi, D., Riedmiller, M., Sammut, C., and Santos-Victor, J. (Eds.). RoboCup 2004: Robot Soccer World Cup VIII. Springer-Verlag, 2005.

2. Peter Stone, Kurt Dresner, Peggy Fidelman, Nicholas K. Jong, Nate Kohl, Gregory Kuhlmann, Mohan Sridharan, and Daniel Stronger. The UT Austin Villa 2004 RoboCup Four-Legged Team: Coming of Age. Technical Report UT-AI-TR-04-313. The University of Texas at Austin, Department of Computer Sciences, AI Laboratory, 2004.

3. Thomas Röfer and Matthias Jüngel. Fast and Robust Edge-Based Localization in the Sony Four-Legged Robot League. Seventh International RoboCup Symposium, 2003.

4. Greg Welch and Gary Bishop. An Introduction to the Kalman Filter. SIGGRAPH 2001 Course Notes.