Aseem Saxena

I am currently pursuing my MS in Artificial Intelligence at Oregon State University. I am fortunate to be advised by Prof. Alan Fern.

Until very recently, I worked as a Machine Learning Engineer at Panasonic, Singapore.

Before that, I worked as a Researcher at M2AP Lab, School of Computing, NUS, Singapore under the guidance of Prof. David Hsu.

I completed a dual major in Biology and Electrical and Electronics Engineering at BITS Pilani; my education was funded by the KVPY Fellowship. I spent time on my thesis at the Robotics Research Lab, IIIT Hyderabad, where I was advised by Prof. Madhava Krishna.

Blog  /  Email  /  Resume  /  CV  /  Github  /  Google Scholar  /  LinkedIn

Research Interests

I want to build ML systems that work well in data- and resource-constrained settings.

Other Interests

I play music: YouTube, SoundCloud. I am also an amateur triathlete.

Publications

Multi-Task Learning for Temporal Processes: A Case Study on Modeling Plant Cold Hardiness
Aseem Saxena, Paola Pesantez-Cabrera, Jonathan Magby, Markus Keller, Alan Fern
Machine Learning Journal, Springer, 2024 (Under Review)

We present a real-world case study of multi-task learning (MTL) for temporal process modeling from limited data, focusing on the important agricultural problem of predicting grape and cherry cold hardiness. We investigate MTL approaches for combining data, where different tasks correspond to different cultivars. Our results show significant differences between architectures, and that certain architectures consistently outperform both single-task learning and state-of-the-art scientific models.

Multi-Task Learning for Budbreak Prediction
Aseem Saxena, Paola Pesantez-Cabrera, Rohan Ballapragada, Markus Keller, Alan Fern
Workshop on AI for Agriculture and Food Systems at AAAI, 2023 (Accepted)

Grapevine budbreak is a key phenological stage of seasonal development, which serves as a signal for the onset of active growth. This is also when grape plants are most vulnerable to damage from freezing temperatures. Hence, it is important for winegrowers to anticipate the day of budbreak occurrence to protect their vineyards from late spring frost events. This work investigates deep learning for budbreak prediction using data collected for multiple grape cultivars. While some cultivars have over 30 seasons of data, others have as few as 4 seasons, which can adversely impact prediction accuracy. To address this issue, we investigate multi-task learning, which combines data across all cultivars to make predictions for individual cultivars. Our main result shows that several variants of multi-task learning all significantly improve prediction accuracy compared to learning for each cultivar independently.

Grape Cold Hardiness Prediction via Multi-Task Learning
Aseem Saxena, Paola Pesantez-Cabrera, Rohan Ballapragada, Kin-Ho Lam, Alan Fern, Markus Keller
Innovative Applications of Artificial Intelligence (IAAI), 2023 (Accepted)

Cold temperatures during fall and spring have the potential to cause frost damage to grapevines and other fruit plants, which can significantly decrease harvest yields. We study whether deep-learning models can improve cold hardiness prediction for grapes based on data collected over a 30-year period. A key challenge is that the amount of data per cultivar is highly variable, with some cultivars having only a small amount. For this purpose, we investigate the use of multi-task learning to leverage data across cultivars in order to improve prediction performance for individual cultivars.

Formalizing the Problem of Side Effect Regularization
Alexander Matt Turner, Aseem Saxena, Prasad Tadepalli
Equal Contribution
NeurIPS ML Safety Workshop, 2022 (Accepted)

AI objectives are often hard to specify properly. Some approaches tackle this problem by regularizing the AI's side effects: agents must weigh "how much of a mess they make" against an imperfectly specified proxy objective. We propose a formal criterion for side effect regularization via the assistance game framework.

Sim-to-Real Learning of Footstep-Constrained Bipedal Dynamic Walking
Helei Duan, Ashish Malik, Jeremy Dao, Aseem Saxena, Kevin Green, Jonah Siekmann, Alan Fern, Jonathan Hurst
IEEE ICRA (International Conference on Robotics and Automation), 2022 (Accepted)

We aim to maintain the robust and dynamic nature of learned gaits while also respecting footstep constraints imposed externally. We develop an RL formulation for training dynamic gait controllers that can respond to specified touchdown locations. We then successfully demonstrate simulation and sim-to-real performance on the bipedal robot Cassie.

LeTS-Drive: Driving in a Crowd by Learning from Tree Search
Panpan Cai, Yuanfu Luo, Aseem Saxena, David Hsu, Wee Sun Lee
RSS (Robotics: Science and Systems), 2019 (Accepted)
video

Autonomous driving in a crowded environment, e.g., a busy traffic intersection, is an unsolved challenge for robotics. We propose LeTS-Drive, which integrates online POMDP planning and deep learning.

Exploring Convolutional Networks for End-to-End Visual Servoing
Aseem Saxena, Harit Pandya, Gourav Kumar, K. Madhava Krishna
Equal Contribution
IEEE ICRA (International Conference on Robotics and Automation), 2017 (Accepted)
video code

We present an end-to-end learning-based approach for visual servoing in diverse scenes where knowledge of camera parameters and scene geometry is not available a priori. This is achieved by training a convolutional neural network on color images with synchronised camera poses.

Projects

Avoiding Side Effects in Complex Navigation Environments

We explore methods to train agents to complete tasks while simultaneously avoiding side effects in the SafeLife environment. We demonstrate the effectiveness of MT-DQN, a multi-task variant of Deep Q-Networks, for side effect avoidance.

Distributed Q-Learning

We implement a distributed version of DQN via the Ray Distributed Framework.

Offline-RL for Bipedal Robots

Reinforcement learning typically requires either a complete model of the world or interactive access to it. However, the world model may not always be known, and it may be expensive or unsafe to perform repeated interactions with the world. In such scenarios, we would like to use existing transition data to learn a control policy. This is addressed by the class of algorithms referred to as "offline reinforcement learning". In this work, we study and implement Behaviour Cloning (BC), TD3, and their combination, TD3+BC, for offline reinforcement learning. We evaluate these algorithms on various synthetic datasets and investigate how each performs on datasets of different quality. We also apply offline RL to the real-world bipedal robot Cassie and introduce several datasets for a bipedal locomotion task.
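The core of TD3+BC is a one-line change to the TD3 actor objective: maximize the critic's value of the policy's actions while penalizing deviation from the dataset actions. A minimal numpy sketch of that actor loss is below (function and argument names are mine, not from the project code), assuming batches of Q-values and actions are already computed:

```python
import numpy as np

def td3_bc_actor_loss(q_values, policy_actions, dataset_actions, alpha=2.5):
    """TD3+BC actor objective: maximize lambda * Q(s, pi(s)) while
    staying close to the dataset actions via a behaviour-cloning MSE term.
    lambda normalises the Q term by its mean magnitude so the two terms
    remain on a comparable scale across environments."""
    lam = alpha / (np.abs(q_values).mean() + 1e-8)
    bc_term = np.mean((policy_actions - dataset_actions) ** 2)
    # Loss to minimize: negative Q term plus the BC regularizer.
    return -lam * q_values.mean() + bc_term
```

When the policy reproduces the dataset actions exactly, the BC term vanishes and the loss reduces to the (normalised) negative Q-value, recovering plain TD3 behaviour on in-distribution actions.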

Studying Robustness of Semi-supervised Visual Features to Adversarial Attacks

Neural network verification is an important tool for gauging robustness to adversaries. In this report, I summarise the work of Salman et al., who formulate most past work on LP-based neural network verification as a convex relaxation problem. Their framework handles different activation functions and pooling layers, as well as both primal and dual versions of verification. In my own work, I evaluate the adversarial robustness of classifiers trained to simultaneously classify and reconstruct their input. I focus on two domains: image classification on the CIFAR10 dataset and Q-learning in the OpenAI Gym cartpole environment.

MC Dropout for Efficient Exploration

Agents need to explore the world intelligently in order to discover new skills that are useful for downstream tasks. Several exploration methods have been introduced in the literature; however, they lack a head-to-head comparison under the same policy setting. There is a discrepancy in whether a model-based or a model-free policy is used to perform exploration, and the choice of policy can significantly affect the agent's sample efficiency. In this project, we implement three exploration methods in a model-based reinforcement learning setting and thoroughly investigate their qualitative and quantitative performance on the continuous-control problem of Point Maze. Our experiments show that while ensemble-based Plan2Explore (Sekar et al. 2020) performs best, a naive and simple method such as Monte Carlo Dropout can perform on par with other exploration methods.
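The Monte Carlo Dropout idea above can be sketched in a few lines: keep dropout active at prediction time, run several stochastic forward passes, and treat the spread of the predictions as an uncertainty signal for exploration. The toy one-hidden-layer network below is purely illustrative (the weights and names are mine), not the project's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, n_samples=100, p_drop=0.5):
    """Run n_samples stochastic forward passes with dropout left on.
    Each pass drops a random subset of hidden units; the mean of the
    passes is the prediction, and the standard deviation serves as an
    uncertainty estimate (high variance suggests less-visited inputs,
    which an exploring agent can seek out)."""
    h = np.maximum(x @ W, 0.0)                       # ReLU hidden features
    preds = []
    for _ in range(n_samples):
        mask = rng.random(h.shape) > p_drop          # random dropout mask
        # Scale by 1/(1 - p_drop) so the expected output matches
        # the deterministic network.
        preds.append((h * mask).sum(axis=-1) / (1.0 - p_drop))
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.std(axis=0)
</antml>```

An exploration bonus can then be as simple as adding the returned standard deviation to the reward for candidate states, which is the basic mechanism the project compares against ensemble methods such as Plan2Explore.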

Visualizing QMDPNet

I created a full-fledged GUI visualizer using the Python Tkinter library to understand the QMDPnet algorithm. I visualize various components of a POMDP, such as the reward map, belief, and value function, to build intuition for how the algorithm works.

Deep Learning for Table Interest Point Detection

I attempt to find interest points, or corner points, of tables in a scene using cues from semantic segmentation and vanishing lines. The availability of semantic information such as interest points can help mobile robots navigate more effectively.

Automating GrabCut for Multilabel Image Segmentation

I perform image segmentation for three labels without user guidance by learning a GMM for each label and running the alpha-expansion algorithm using the MRF2.2 library.


Inspired by this