Environment and Localization
Accurate and precise localization - knowledge of each agent's position and heading relative to the world frame - is critical for each agent to successfully calculate a path to its destination and navigate there. To achieve this, we implemented ground-truth localization.
Each Neato is outfitted with an upwards-facing camera to track ceiling-mounted fiducials. Six unique fiducials are placed throughout the ~10' by ~14' arena.
Top view shot of environment with fiducials overlaid.
The package ar_tools calculates the bot's position and heading by comparing each seen fiducial against a library of known fiducials. ar_tools was originally configured for a single bot. Our fork reconciles the individual agents' coordinate frame transforms into a single world reference frame, called STAR.
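The frame reconciliation above boils down to composing rigid-body transforms: if a fiducial's pose in the world frame is known, and the camera observes that fiducial, the camera's (and hence the agent's) world pose follows by inverting the observation. A minimal 2D sketch of that math, with hypothetical function names (the real work is done by ar_tools and ROS TF):

```python
import math

def compose(a, b):
    """Compose two SE(2) poses a then b, each given as (x, y, theta)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Invert an SE(2) pose (x, y, theta)."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) - y * math.cos(t),
            -t)

def agent_pose_in_world(fiducial_in_world, fiducial_in_camera):
    """T_world_camera = T_world_fiducial composed with inverse(T_camera_fiducial)."""
    return compose(fiducial_in_world, invert(fiducial_in_camera))

# A fiducial at world (2, 3) facing +x, seen 1 m directly ahead of the camera:
# the camera must sit at world (1, 3), also facing +x.
pose = agent_pose_in_world((2.0, 3.0, 0.0), (1.0, 0.0, 0.0))
```

Each agent's pose computed this way lands in the same STAR frame, which is what lets the agents reason about one another's positions.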
Modified (current) TF frame.
Each Neato's camera is slightly different, in terms of both characterization and mounting. These differences, although seemingly minute, greatly affect localization quality. There were four factors we had to account for: environment, camera thresholding, camera location, and camera tilt.
Environment: The camera optimizes its settings on robot launch. The cameras were spotting ceiling-mounted fiducials positioned directly adjacent to extremely bright fluorescent lights. This resulted in an image composed of very bright lights and very dark ceiling tiles, meaning the fiducials, also cast in shadow, were not visible. To counteract this, we would dim the lights before launch, launch the robots, then restore the lights. This resulted in washing out the ceiling tiles.
Although this photo was taken with a phone, one can clearly see that the brightness of the overhead lights overpowers everything else in the frame.
Camera thresholding: To account for this, the system thresholds each camera's image so that the fiducials remain distinguishable from both the bright lights and the dark ceiling tiles.
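As an illustration of the idea, a grayscale frame can be binarized against a brightness cutoff so only sufficiently bright pixels survive. This is a hypothetical sketch, not the project's actual pipeline or threshold value:

```python
import numpy as np

def threshold_image(gray, cutoff=200):
    """Return a binary mask: True where pixels exceed the cutoff.

    The cutoff of 200 is an illustrative assumption; in practice it
    would be tuned per camera and per lighting condition.
    """
    return gray > cutoff

# Toy 2x2 "frame": one bright light, one dark tile per row.
frame = np.array([[250, 30],
                  [210, 190]], dtype=np.uint8)
mask = threshold_image(frame)
```

Per-camera tuning matters here: because each camera auto-optimizes differently at launch, a single global cutoff would not behave consistently across agents.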
Positioning and tilt: Each camera was taped - manually and without measuring - onto the Neato's approximate center. An uncorrected offset would produce positioning inaccuracies, because ar_tools assumes the camera sits dead center on the robot. A single agent's positioning error would in turn throw off the calculated velocities and executed positions of the other agents, possibly resulting in collisions. To compensate, we ran a calibration routine, also courtesy of our professor.
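The correction itself is a small frame shift: once calibration yields the camera's offset in the robot frame, the robot-center pose is recovered by rotating that offset into the world frame and subtracting it from the camera's localized pose. A hedged sketch with illustrative offset values (the real routine is the professor's calibration code):

```python
import math

def robot_center_from_camera(cam_x, cam_y, heading, dx, dy):
    """Recover the robot-center world position from the camera's world pose.

    (dx, dy) is the camera's mounting offset in the robot frame, as
    measured by calibration. We rotate it into the world frame by the
    robot's heading and subtract it from the camera position.
    """
    return (cam_x - (dx * math.cos(heading) - dy * math.sin(heading)),
            cam_y - (dx * math.sin(heading) + dy * math.cos(heading)))

# Illustrative numbers: camera taped 5 cm forward of center, robot facing +x.
# The robot center is then 5 cm behind the camera's localized position.
center = robot_center_from_camera(1.0, 2.0, 0.0, 0.05, 0.0)
```

Without this step, every agent's reported pose would be biased by its own tape job, and those biases would compound through the multi-agent velocity calculations.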
Before we integrated AR localization of robots in the world frame, we were localizing based on /robot[n]/odom. For simplicity's sake, we started all the robots in a consistently spaced line and hard-coded those positions as offsets.
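The hard-coded offset scheme reduces to adding each robot's known starting position to its odometry reading. A minimal sketch, assuming a hypothetical 0.5 m spacing along one axis (the actual spacing is not stated in the source):

```python
SPACING = 0.5  # meters between robots in the starting line (assumed value)

def world_from_odom(robot_index, odom_x, odom_y):
    """Shift a robot's /odom position by its hard-coded starting offset.

    Each robot's odom frame starts at its launch pose, so robot n's
    world position is its odom reading plus n * SPACING along the line.
    """
    return (odom_x + robot_index * SPACING, odom_y)

# Third robot in the line (index 2), having driven 0.1 m forward in odom.
pos = world_from_odom(2, 0.1, 0.0)
```

The limitation is apparent: odometry drift accumulates and the offsets are only correct if every robot is placed exactly on its mark, which is what motivated the move to fiducial-based ground truth.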