Aseem Saxena

I am currently pursuing my Master's in Robotics at Oregon State University. I am fortunate to be advised by Prof. Alan Fern at the Dynamic Robotics Lab.

Until very recently, I worked as a Machine Learning Engineer at Panasonic, Singapore.

Before that, I worked as a Researcher at the M2AP Lab, School of Computing, NUS, Singapore, under the guidance of Prof. David Hsu.

I did a dual major in Biology and Electrical and Electronics Engineering at BITS Pilani, where my education was funded by the KVPY Fellowship. I spent time on my thesis at the Robotics Research Lab, IIIT Hyderabad, where I was advised by Prof. Madhava Krishna.

Blog  /  Email  /  CV  /  Github  /  Google Scholar  /  LinkedIn

Research Interests

I focus broadly on combining perception and planning for decision-making under uncertainty, and specifically on perception for bipedal locomotion on Cassie. In the ancient past, I worked on protein structure prediction and cancer genomics.

Other Interests

I play music - YouTube, SoundCloud. I run, swim, and cycle to stay fit.

Publications

LeTS-Drive: Driving in a Crowd by Learning from Tree Search
Panpan Cai, Yuanfu Luo, Aseem Saxena, David Hsu, Wee Sun Lee
RSS (Robotics: Science and Systems), 2019 (Accepted)
video

Autonomous driving in a crowded environment, e.g., a busy traffic intersection, is an unsolved challenge for robotics. We propose LeTS-Drive, which integrates online POMDP planning and deep learning.

Exploring Convolutional Networks for End-to-End Visual Servoing
Aseem Saxena, Harit Pandya, Gourav Kumar, K. Madhava Krishna
IEEE ICRA (International Conference on Robotics and Automation), 2017 (Accepted)
video code

We present an end-to-end learning-based approach for visual servoing in diverse scenes where knowledge of camera parameters and scene geometry is not available a priori. This is achieved by training a convolutional neural network over color images with synchronized camera poses.
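A minimal sketch of the kind of pose-regression network such an approach could use, assuming a two-stream input (current and desired image) and a 6-DoF output; the architecture, layer sizes, and names here are illustrative, not the one from the paper.

```python
import torch
import torch.nn as nn

class ServoNet(nn.Module):
    """Maps a pair of RGB images to a 6-DoF relative camera pose (assumed layout)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 5, stride=2), nn.ReLU(),   # 6 channels: current + desired image stacked
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pose_head = nn.Linear(64, 6)               # [tx, ty, tz, rx, ry, rz]

    def forward(self, current_img, desired_img):
        x = torch.cat([current_img, desired_img], dim=1)
        return self.pose_head(self.encoder(x))

net = ServoNet()
pose = net(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
print(pose.shape)  # torch.Size([1, 6])
```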

Projects

Avoiding Side Effects in Complex Navigation Environments
Aseem Saxena, Devin Crowley

We explore methods to train agents to complete tasks while avoiding side effects in the SafeLife environment. We demonstrate the effectiveness of MT-DQN, a multi-task variant of Deep Q-Networks, for side-effect avoidance.
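One way to read "multi-task" here is a shared trunk with separate Q-value heads for the task reward and a side-effect signal; the sketch below assumes exactly that, with a hypothetical trade-off weight beta, and is not the project's actual MT-DQN implementation.

```python
import torch
import torch.nn as nn

class MultiTaskDQN(nn.Module):
    """Shared trunk with one Q-head for the task reward and one for a side-effect penalty."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.q_task = nn.Linear(hidden, n_actions)         # Q-values for the task reward
        self.q_side_effect = nn.Linear(hidden, n_actions)  # Q-values for the side-effect penalty

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        return self.q_task(h), self.q_side_effect(h)

# Action selection could trade off the two heads with a weight beta (assumed):
net = MultiTaskDQN(obs_dim=16, n_actions=5)
q_task, q_side = net(torch.randn(1, 16))
beta = 0.5
action = torch.argmax(q_task - beta * q_side, dim=-1)
```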

Visualizing QMDPNet
Aseem Saxena

I created a full-fledged GUI visualizer using Python's Tkinter library to understand the QMDPNet algorithm. I visualize various components of a POMDP, such as the reward map, belief, and value function, to build intuition for how the algorithm works.
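As a flavor of what such a visualizer does, here is a minimal Tkinter sketch that shades a grid by a belief value per cell; the grid size, colors, and random placeholder belief are assumptions, not the actual tool.

```python
import tkinter as tk
import random

GRID, CELL = 8, 40
belief = [[random.random() for _ in range(GRID)] for _ in range(GRID)]  # placeholder belief values

root = tk.Tk()
root.title("Belief map")
canvas = tk.Canvas(root, width=GRID * CELL, height=GRID * CELL)
canvas.pack()

for r in range(GRID):
    for c in range(GRID):
        shade = int(255 * (1.0 - belief[r][c]))  # darker cell = higher belief
        color = f"#{shade:02x}{shade:02x}ff"
        canvas.create_rectangle(c * CELL, r * CELL,
                                (c + 1) * CELL, (r + 1) * CELL,
                                fill=color, outline="gray")

root.mainloop()
```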


Guess from Far, Recognize when Near: Searching the Floor for Small Objects
M Siva Karthik, Sudhanshu Mittal, K. Madhava Krishna
ICVGIP, 2014
video

Object recognition is performed on 3-D point cloud data from a Kinect sensor by constructing a bag-of-words model over local descriptors and training a support vector machine classifier on it. Object detection is handled by segmenting 2-D images with Markov random fields. The system is implemented on a TurtleBot with a Kinect sensor mounted on top.
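A rough sketch of the bag-of-words plus SVM recognition step described above, using random synthetic descriptors in place of real point-cloud features (e.g., 33-dimensional FPFH); the codebook size, kernel, and data here are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors against the codebook and return a normalized word histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Synthetic "training clouds": each object instance yields a set of local descriptors.
train_clouds = [rng.normal(loc=label, size=(200, 33)) for label in (0, 1) for _ in range(10)]
train_labels = [label for label in (0, 1) for _ in range(10)]

# Build the visual-word codebook from all training descriptors.
codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(np.vstack(train_clouds))

X_train = np.array([bow_histogram(cloud, codebook) for cloud in train_clouds])
clf = SVC(kernel="rbf").fit(X_train, train_labels)

test_cloud = rng.normal(loc=1, size=(200, 33))
print("predicted object class:", clf.predict([bow_histogram(test_cloud, codebook)])[0])
```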

Deep Learning for Table Interest Point Detection
Aseem Saxena

I attempt to detect interest points, or corner points, of tables in a scene using cues from semantic segmentation and vanishing lines. The availability of such semantic information can help mobile robots navigate more effectively.

Automating GrabCut for Multilabel Image Segmentation
Aseem Saxena

I perform image segmentation with three labels and no user guidance by learning a Gaussian mixture model (GMM) for each label and running the alpha-expansion algorithm using the MRF2.2 library.
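A small sketch of the per-label GMM unary term this relies on, with the alpha-expansion step left to a graph-cut library such as MRF2.2; the synthetic image, the automatically obtained seed pixels, and the component count are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
H, W, K = 64, 64, 3
image = rng.random((H, W, 3))  # placeholder RGB image in [0, 1]

# Assumed: a few automatically obtained seed pixels per label (no user guidance).
seeds = {k: image.reshape(-1, 3)[rng.choice(H * W, 200)] for k in range(K)}

# Fit one color GMM per label; the unary cost is the negative log-likelihood under that GMM.
gmms = {k: GaussianMixture(n_components=5, random_state=0).fit(px) for k, px in seeds.items()}
unary = np.stack([-g.score_samples(image.reshape(-1, 3)) for g in gmms.values()], axis=1)

# Without the pairwise smoothness term (handled by alpha expansion in the full pipeline),
# labeling reduces to a per-pixel argmin of the unary cost.
labels = unary.argmin(axis=1).reshape(H, W)
print("label counts:", np.bincount(labels.ravel(), minlength=K))
```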


Inspired by this