Now, I build software for robots at the Autonomous Systems Lab. Currently, I'm working on closing the perception-control loop to make robots that can robustly manipulate objects and respond to what is happening around them.
Stray Scanner is an app for collecting RGB-D datasets on iPhones and iPads equipped with a LiDAR sensor.
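To give a feel for what an RGB-D dataset lets you do, here is a minimal sketch of back-projecting a depth pixel to a 3D point with a pinhole camera model. The intrinsics and the assumption that depth is stored as 16-bit millimeter values are illustrative examples, not the actual Stray Scanner output format.

```python
# Back-project a pixel (u, v) with a depth reading into a 3D point in the
# camera frame, using the pinhole camera model. Intrinsics (fx, fy, cx, cy)
# here are hypothetical example values.

def backproject(u, v, depth_mm, fx, fy, cx, cy):
    """Convert a pixel (u, v) with depth in millimeters to a 3D point
    (x, y, z) in meters, in the camera coordinate frame."""
    z = depth_mm / 1000.0   # assumed: depth stored in millimeters
    x = (u - cx) * z / fx   # pinhole model
    y = (v - cy) * z / fy
    return (x, y, z)

# A pixel at the principal point maps straight down the optical axis:
point = backproject(u=320, v=240, depth_mm=1500, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```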
We created a simple demo to investigate the limitations of state-of-the-art methods in mobile manipulation. The robot is tasked with navigating through an office to find an object, picking it up, and returning with it. It combines SLAM, motion planning, grasp planning, and perception algorithms to get the job done.
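At a high level, a fetch-and-return task like this is often structured as a state machine over the subsystems. The states and the `execute` callback below are hypothetical stand-ins for the actual SLAM, planning, and perception stack, just to sketch the control flow:

```python
# A sketch of a fetch-and-return task as a simple state machine. Each state
# would invoke a subsystem (SLAM-guided navigation, grasp planning, etc.);
# on failure the robot retries the same state.

def run_fetch_task(execute):
    """Run the task to completion. `execute(state)` performs one state's
    behavior and returns True on success."""
    order = ["SEARCH", "APPROACH", "GRASP", "RETURN"]
    log = []
    for state in order:
        while True:
            log.append(state)
            if execute(state):
                break  # advance to the next state
    log.append("DONE")
    return log

# Example with a stub executor where the first grasp attempt fails:
attempts = {"GRASP": iter([False, True])}
def stub(state):
    return next(attempts.get(state, iter([True])))
```

With the stub above, the task log shows one retried grasp before the robot returns.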
We built an autonomous race car from an old 1/10th scale radio-controlled touring car. It uses an Nvidia Jetson TX1, which communicates with the RC electronics through an Arduino board. It has an RGB camera at the front and an IMU; velocity is measured by a sensor inside the brushless motor.
The car can be driven either manually with the regular RC remote or autonomously by reading commands from the Jetson. The commands from the remote can be recorded and used for learning. The picture is from an early version which used a Raspberry Pi instead of the Jetson.
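The Jetson-to-Arduino command path can be sketched as a small serial protocol. The frame layout below (start byte, steering and throttle as PWM pulse widths, checksum) is a hypothetical example, not the project's actual wire format; a real setup would write these bytes to the Arduino's serial port, e.g. with pyserial.

```python
# Encode a steering/throttle command into a 6-byte serial frame:
# [start byte][steering u16, little-endian][throttle u16][checksum].
# Pulse widths follow the usual RC servo convention: 1000-2000 us,
# with 1500 us as neutral. The format itself is an illustrative assumption.
import struct

START_BYTE = 0xAA

def encode_command(steering_us, throttle_us):
    """Pack steering/throttle pulse widths into a framed byte string."""
    payload = struct.pack("<HH", steering_us, throttle_us)
    checksum = (START_BYTE + sum(payload)) & 0xFF  # simple 8-bit sum check
    return bytes([START_BYTE]) + payload + bytes([checksum])

frame = encode_command(1500, 1600)  # neutral steering, slight forward throttle
```

The Arduino side would wait for the start byte, read the payload, verify the checksum, and forward the pulse widths to the steering servo and the ESC.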
We used the platform to research driving the car with reinforcement learning. Most of the code is available here.
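The post doesn't describe the algorithm used, and the real system learned from camera images; purely as background on reinforcement learning, here is tabular Q-learning on a toy lane-keeping task, where the state is a discrete lateral offset and the agent learns to steer back to the center:

```python
# Tabular Q-learning on a toy lane-keeping problem. State: lateral offset
# in {-2..2}; actions: steer left (-1) or right (+1) by one cell; reward:
# 1 when at the center, 0 otherwise. Illustrative only, not the method
# actually used on the car.
import random

def q_learning_lane(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    actions = (-1, 1)
    q = {(s, a): 0.0 for s in range(-2, 3) for a in actions}
    for _ in range(episodes):
        s = rng.choice([-2, -1, 1, 2])          # start off-center
        for _ in range(20):
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: q[(s, b)])
            s2 = max(-2, min(2, s + a))          # clamp to the track
            r = 1.0 if s2 == 0 else 0.0
            # Q-learning update toward the bootstrapped target.
            target = r + gamma * max(q[(s2, b)] for b in actions)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

q = q_learning_lane()
```

After training, the greedy policy steers toward the center from either side of the lane.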
In this project, we built the analogue of a modern convolutional neural network using Gaussian processes and evaluated it on standard image classification benchmarks. At the time, it outperformed every other GP-based method, though the results still trail state-of-the-art neural networks. Scaling these large GP models remains a challenge.
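The convolutional GP construction itself is beyond a snippet, but the underlying machinery is standard GP regression. As a minimal sketch under simplifying assumptions (an RBF kernel, exactly two training points so the matrix inverse has a closed form), here is the GP posterior mean:

```python
# Minimal Gaussian process regression: posterior mean at a test point,
# with an RBF kernel and two training points so that (K + noise I)^{-1}
# can be written out with the 2x2 inverse formula.
import math

def rbf(x1, x2, lengthscale=1.0):
    """Squared-exponential kernel: k(x, x') = exp(-(x - x')^2 / (2 l^2))."""
    return math.exp(-((x1 - x2) ** 2) / (2 * lengthscale ** 2))

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean k_*^T (K + noise I)^{-1} y for two training points."""
    (xa, xb), (ya, yb) = x_train, y_train
    a = rbf(xa, xa) + noise          # kernel matrix entries
    b = rbf(xa, xb)
    d = rbf(xb, xb) + noise
    det = a * d - b * b
    alpha0 = (d * ya - b * yb) / det  # alpha = (K + noise I)^{-1} y
    alpha1 = (-b * ya + a * yb) / det
    return rbf(x_test, xa) * alpha0 + rbf(x_test, xb) * alpha1

# Midway between symmetric targets +1 and -1, the mean is zero:
mean = gp_posterior_mean([0.0, 2.0], [1.0, -1.0], 1.0)
```

With near-zero noise the posterior mean also interpolates the training targets, which is the behavior larger GP models inherit.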