Curriculum Vitae
Education History
Doctor of Philosophy
August 2021 - Present (anticipated May 2026)
Qualifying Exam: Passed May 2023
I am a Ph.D. student in Computer Science at the Georgia Institute of Technology with a focus on human-interactive robot learning.
Bachelor of Science
August 2016 - May 2021
I graduated from the Georgia Institute of Technology with a Bachelor of Science in Computer Science, specializing in theory and intelligence.
Research Experience
Graduate Research Assistant
August 2021 - Present, Georgia Institute of Technology
Foundation Models
- Developing a bi-directional model reconciliation pipeline that uses a large language model as a human proxy and generates language explanations personalized to a user’s mental model (of the task, environment, and robot).
- Developed a framework to enable novice robot users to iteratively teach a robot table-top tasks, leveraging vision-language model (VLM) explanations of how to improve upon kinesthetic demonstrations in a learning from demonstration (LfD) paradigm.
- Developed a framework to enable novice robot users to improve upon their demonstration set in an inverse reinforcement learning (IRL) paradigm, leveraging large language model (LLM) explanations of Shapley values [FSS 2024].
- Developed a novel attention-based approach to language-conditioned multi-task reinforcement learning, using language models to convert goal specifications into semantically meaningful embeddings for learning agents [RA-L 2022].
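The Shapley-value explanations mentioned in the IRL work above can be illustrated with an exact computation over a small characteristic-function game, treating each demonstration as a "player" and a coalition's worth as the performance obtained from that subset. This is an illustrative sketch only (the function names and setup are mine, not the paper's code):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley value of each player, where value() maps a
    frozenset of players to that coalition's worth."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Probability that S precedes p in a random ordering.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of p to coalition S.
                phi[p] += w * (value(S | {p}) - value(S))
    return phi
```

For an additive game (each demonstration contributes independently), the Shapley value of a demonstration recovers its standalone contribution, which is the sanity check usually run before applying the attribution to a learned reward model.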
Machine Learning
- Compared the performance of hierarchical machine learning models at predicting the need for and duration of mechanical ventilation, extracorporeal membrane oxygenation, and mortality [in submission].
- Employed heterogeneous graph neural networks (GNN) to synthesize multi-modal, multi-agent interaction data and predict relational affect [AHFE 2024].
- Performed tennis ball localization using EKF sensor fusion to enable a Barrett WAM robotic arm mounted on a wheelchair to learn to play tennis [RA-L 2023].
Human-Interactive Robot Learning
- Learned interpretable features regarding user goals and preferences at intervention time [LEAP 2024].
- Assessed end-user attitudes towards embodied robots that learn to determine how involved end-users would like to be in in-home robot learning [HRI 2023].
- Developed a novel learning from demonstration (LfD) framework that meta-learns a mapping from sub-optimal and heterogeneous human feedback to optimal labels [HRI 2022].
- Investigated whether non-expert demonstrators can generalize robot teaching strategies to provide necessary and sufficient demonstrations to robots zero-shot in novel domains [RSS 2023].
- Examined whether non-roboticist end-users are capable of providing hierarchical demonstrations without explicit training from a roboticist [RSS 2022].
Undergraduate Research Assistant
May 2019 - August 2021, Georgia Institute of Technology
Graph Theory
- Tightened the bound on the smallest eigenvalue of the Laplacian of Cayley graphs, thus bounding the bipartiteness of this class of graphs [EJC 2020].
Computer Vision
- Performed tennis ball localization using EKF sensor fusion for robotic wheelchair tennis [RA-L 2023].
- Performed pose estimation to study gene expression (via the mating ritual) of selectively bred fish.
- Investigated the neuro-mechanics of moth flight by analyzing individual wing muscle physical and electrical activity.
Industry Experience
Research Intern
May 2023 - August 2023, Honda Research Institute, USA
- Currently filing a patent on employing geometric deep learning to model interpersonal dynamics.
- Developed and implemented human-human-robot interaction algorithms that have a positive effect on the interpersonal dynamics in the shared space [AHFE 2024].
Software Systems Engineer Intern
August 2019 - July 2020, Georgia Tech Research Institute (GTRI)
- Employed techniques in natural language processing (NLP) and computer vision (CV) for flight data processing automation, deployed through Apache Airflow.
- Implemented a web-based solution to data-processing architecture using SQLite3.
Leadership Experience and Community Outreach
Robotics Outreach
October 2024
Dressed up the Spot robot as a pumpkin and programmed it to pass out candy to trick-or-treaters on Halloween night.
Workshop and Symposium Organizer
October 2023 - November 2024
Organized the Artificial Intelligence for Aging in Place symposium at the AAAI 2024 Fall Symposium Series, with six keynote speakers, a debate, an interactive panel discussion, and twenty accepted submissions that gave oral presentations and participated in a poster session.
Organized the Human-Robot Interaction for Aging in Place workshop at the ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2024 for 35 attendees. The workshop hosted two keynote speakers, a panel discussion, and eight accepted submissions that gave oral presentations and participated in a poster session.
Vice President of RoboWomen
May 2023 - July 2024
Executive board member of the Robotics Graduate Student Organization (RoboGrads) at Georgia Tech.
Organized an inter-institutional Women's Panel and Networking event at Georgia Tech with 6 panelists from academia (Georgia Tech, MIT, and the University of Michigan) and industry (Toyota Research Institute and Amazon Lab126) for 50 undergraduate and graduate attendees.
Organized a TED Women's Conference Discovery Session at Georgia Tech titled "The Robo-Shop" with 5 concurrent robot demos led by female roboticists for 26 attendees.
Secretary of TechMasters Club
June 2020 - December 2020
Executive board member of the local chapter of Toastmasters International at Georgia Tech.
Teaching and Mentorship Experience
Teaching Assistantships
TA for CS 3630: Introduction to Robotics and Perception - Spring 2024
Head TA for CS 3630: Introduction to Robotics and Perception - Fall 2023
Head TA for CS 4400: Introduction to Database Systems - Summer 2019
Group Leader
August 2023 - October 2023
Instructor for a small-group, peer-led, extended-orientation program for first-semester graduate students as part of GT6000.
Research Mentorships
Rynaa Grover (MS), now at Google - Summer 2024
Aryan Vats (MS), now at Nagarro - Summer 2024
Pablo Alvarez (BS), now at SageVR - Summer 2023
Aman Singh (MS), now at Edaptive Computing Inc. - Spring 2023
Publications
Journal Publications
Investigating Strategies Enabling Novice Users to Teach Plannable Hierarchical Tasks to Robots
Nina Moorman, Aman Singh, Manisha Natarajan, Erin Hedlund-Botti, Mariah Schrum, Chuxuan Yang, Lakshmi Seelam, Matthew Gombolay, Nakul Gopalan
IJRR 2024
Learning from demonstration (LfD) seeks to democratize robotics by enabling non-experts to intuitively program robots to perform novel skills through human task demonstration. Yet, LfD is challenging under a task and motion planning (TAMP) setting, as solving long-horizon manipulation tasks requires the use of hierarchical abstractions. Prior work has studied mechanisms for eliciting demonstrations that include hierarchical specifications for robotics applications but has not examined whether non-roboticist end-users are capable of providing such hierarchical demonstrations without explicit training from a roboticist for each task. We characterize whether, how, and which users can do so. Finding that the result is negative, we develop a series of training domains that successfully enable users to provide demonstrations that exhibit hierarchical abstractions. Our first experiment shows that fewer than half (35.71%) of our subjects provide demonstrations with hierarchical abstractions when not primed. Our second experiment demonstrates that users fail to teach the robot with adequately detailed TAMP abstractions when not shown a video demonstration of an expert's teaching strategy. Our experiments reveal the need for fundamentally different approaches in LfD to enable end-users to teach robots generalizable long-horizon tasks without being coached by experts at every step. Toward this goal, we developed and evaluated a set of TAMP domains for LfD in a third study. Positively, we find that experience obtained in different training domains enables users to provide demonstrations with useful, plannable abstractions on new test domains just as well as providing a video prescribing an expert's teaching strategy in the new domain.
Athletic Mobile Manipulator System for Robotic Wheelchair Tennis
Zulfiqar Zaidi*, Daniel Martin*, Nathaniel Belles, Viacheslav Zakharov, Arjun Krishna, Kin Man Lee, Peter Wagstaff, Sumedh Naik, Matthew Sklar, Sugju Choi, Yoshiki Kakehi, Ruturaj Patil, Divya Mallemadugula, Florian Pesce, Peter Wilson, Wendell Hom, Matan Diamond, Bryan Zhao, Nina Moorman, Rohan Paleja, Letian Chen, Esmaeil Seraj, and Matthew Gombolay
RA-L 2023, presented at IROS 2023
In this paper, we propose the first open-source, autonomous robot for playing regulation wheelchair tennis. We demonstrate the performance of our full-stack system in executing ground strokes and evaluate each of the system's hardware and software components. The goal of this paper is to (1) inspire more research in human-scale robot athletics and (2) establish the first baseline towards developing a robot in future work that can serve as a teammate for mixed, human-robot doubles play. Our paper contributes to the science of systems design and poses a set of key challenges for the robotics community to address in striving towards a vision of human-robot collaboration in sports.
LanCon-Learn: Learning With Language to Enable Generalization in Multi-Task Manipulation
Andrew Silva, Nina Moorman, William Silva, Zulfiqar Zaidi, Nakul Gopalan, Matthew Gombolay
RA-L 2022, presented at ICRA 2022
We present LanCon-Learn, a novel attention-based approach to language-conditioned multi-task learning in manipulation domains to enable learning agents to reason about relationships between skills and task objectives through natural language and interaction. We evaluate LanCon-Learn for both reinforcement learning and imitation learning, across multiple virtual robot domains along with a demonstration on a physical robot. LanCon-Learn achieves up to a 200% improvement in zero-shot task success rate and transfers known skills to novel tasks faster than non-language-based baselines, demonstrating the utility of language for goal specification.
On the Bipartiteness Constant and Expansion of Cayley Graphs
Nina Moorman, Peter Ralli, Prasad Tetali
EJC 2020
Let G be a finite, undirected, d-regular graph and A(G) its normalized adjacency matrix, with eigenvalues 1 = λ_1(A) ≥ ··· ≥ λ_n(A) ≥ −1. It is a classical fact that λ_n = −1 if and only if G is bipartite. Our main result provides a quantitative separation of λ_n from −1 in the case of Cayley graphs, in terms of their expansion. Denoting by h_out the (outer boundary) vertex expansion of G, we show that if G is a non-bipartite Cayley graph (constructed using a group and a symmetric generating set of size d), then λ_n ≥ −1 + c·h_out^2/d^2, for c an absolute constant. We exhibit graphs for which this result is tight up to a factor depending on d. This improves upon a recent result by Biswas and Saha (2021), who showed λ_n ≥ −1 + h_out^4/(2^9 d^8). We also note that such a result could not be true for general non-bipartite graphs.
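For readability, the two bounds from the abstract above can be set in display form (same symbols as the abstract; no new claims):

```latex
% Main result: for a non-bipartite Cayley graph of degree d with
% outer-boundary vertex expansion h_out,
\lambda_n(A) \;\ge\; -1 + \frac{c\, h_{\mathrm{out}}^{2}}{d^{2}},
\qquad \text{for an absolute constant } c.

% Previously best known bound (Biswas and Saha, 2021):
\lambda_n(A) \;\ge\; -1 + \frac{h_{\mathrm{out}}^{4}}{2^{9}\, d^{8}}.
```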
Conference Publications
Dyadic Interactions and Interpersonal Perception: An Exploration of Behavioral Cues for Technology-Assisted Mediation
Hifza Javed, Nina Moorman, Thomas Weisswange, and Nawid Jamali
AHFE 2024; Best Paper Award
Mediators aim to shape group dynamics in various ways, such as improving trust and cohesion, balancing participation, and promoting constructive conflict resolution. Technological systems used to mediate human-human interactions must be able to continuously assess the state of the interaction and generate appropriate actions. In this paper, we study behavioral cues that indicate interpersonal perception in dyadic social interactions. These cues may be used by such systems to produce effective mediation strategies. We evaluate dyadic interactions in which each interactant rates the other on how agreeable or disagreeable the other interactant comes across. We take a multi-perspective approach to evaluate interpersonal affect in dyadic interactions, employing computational models to investigate behavioral cues that reflect interpersonal perception in both the interactant providing the rating and the interactant being rated. Our findings offer nuanced insights into interpersonal dynamics, which will be beneficial for future work on technology-assisted social mediation.
Investigating the Impact of Experience on a User's Ability to Perform Hierarchical Abstraction
Nina Moorman, Nakul Gopalan, Aman Singh, Erin Hedlund-Botti, Mariah Schrum, Chuxuan Yang, Lakshmi Seelam, Matthew Gombolay
RSS 2023; Best Student Paper Award Finalist
The field of Learning from Demonstration enables end-users, who are not robotics experts, to shape robot behavior. However, using human demonstrations to teach robots to solve long-horizon or multi-modal problems by leveraging the hierarchical structure of the task is still an unsolved problem. Prior work has yet to show that human users can provide sufficient demonstrations in novel domains without showing the demonstrators explicit teaching strategies for each domain. In this work, we investigate whether non-expert demonstrators can generalize robot teaching strategies to provide necessary and sufficient demonstrations to robots zero-shot in novel domains. We find that increasing participant experience with providing demonstrations improves the degree of sub-task abstraction (p<.001), teaching efficiency (p<.001), and sub-task redundancy (p=.046) of their demonstrations in novel domains, allowing generalization in robot teaching. Our findings demonstrate for the first time that non-expert demonstrators can transfer experience from a series of training experiences to provide high-quality demonstrations when programming robots to complete task and motion planning problems on novel domains without the need for explicit instruction.
Impacts of Robot Learning on User Attitude and Behavior
Nina Moorman, Erin Hedlund-Botti, Mariah Schrum, Manisha Natarajan, Matthew Gombolay
HRI 2023
We investigate the impacts on end-users of in situ robot learning through a series of human-subjects experiments. We examine how different learning methods influence both in-person and remote participants’ perceptions of the robot. While we find that the degree of user involvement in the robot’s learning method impacts perceived anthropomorphism (p = .001), we find that it is the participants’ perceived success of the robot that impacts the participants’ trust in (p < .001) and perceived usability of the robot (p < .001) rather than the robot’s learning method. Therefore, when presenting robot learning, the performance of the learning method appears more important than the degree of user involvement in the learning. Furthermore, we find that the physical presence of the robot impacts perceived safety (p < .001), trust (p < .001), and usability (p < .014). Thus, for tabletop manipulation tasks, researchers should consider the impact of physical presence on experiment participants.
Negative Result for Learning from Demonstration: Challenges for End-Users Teaching Robots with Task and Motion Planning Abstractions
Nakul Gopalan, Nina Moorman, Manisha Natarajan, Matthew Gombolay
RSS 2022
Prior works have not examined whether non-roboticist end-users are capable of providing such hierarchical demonstrations without explicit training from a roboticist showing how to teach each task. To address the limitations and assumptions of prior work, we conduct two novel human-subjects experiments to answer (1) what are the necessary conditions to teach users through hierarchy and task abstractions and (2) what instructional information or feedback is required to support users to learn to program robots effectively to solve novel tasks. Our first experiment shows that fewer than half (35.71%) of our subjects provide demonstrations with sub-task abstractions when not primed. Our second experiment demonstrates that users fail to teach the robot correctly when not shown a video demonstration of an expert's teaching strategy for the exact task that the subject is training. Not even showing the video of an analogous task was sufficient. These experiments reveal the need for fundamentally different approaches in LfD that allow end-users to teach generalizable long-horizon tasks to robots without the need to be coached by experts at every step.
MIND MELD: Personalized Meta-Learning for Robot-Centric Imitation Learning
Mariah Schrum, Erin Hedlund-Botti, Nina Moorman, Matthew C. Gombolay
HRI 2022; Best Technical Paper Award
To create a more human-aware version of robot-centric LfD, we present Mutual Information-driven Meta-learning from Demonstration (MIND MELD). MIND MELD meta-learns a mapping from suboptimal and heterogeneous human feedback to optimal labels, thereby improving the learning signal for robot-centric LfD. The key to our approach is learning an informative personalized embedding using mutual information maximization via variational inference. The embedding then informs a mapping from human provided labels to optimal labels. We evaluate our framework in a human-subjects experiment, demonstrating that our approach improves corrective labels provided by human demonstrators. Our framework outperforms baselines in terms of ability to reach the goal (p < .001), average distance from the goal (p = .006), and various subjective ratings (p = .008).
Effects of Social Factors and Team Dynamics on Adoption of Collaborative Robot Autonomy
Mariah Schrum, Glen Neville, Michael Johnson, Nina Moorman, Rohan Paleja, Karen Feigh, Matthew Gombolay
HRI 2021
In an analog manufacturing environment, we explore how these various factors influence an individual's willingness to work with a robot over a human co-worker in a collaborative Lego building task. We specifically explore how this willingness is affected by: 1) the level of social rapport established between the individual and his or her human co-worker, 2) the anthropomorphic qualities of the robot, and 3) factors including trust, fluency, and personality traits. Our results show that a participant's willingness to work with automation decreased with lower perceived team fluency (p=0.045), greater rapport established with their co-worker (p=0.003), the participant being male (p=0.041), and a higher inherent trust in people (p=0.018).
Poster Sessions, Presentations, and Invited Talks
Invited to give a demonstration at the ONR Tech Review and S&T Expo 09/2024
Invited talk at the AI-CARING National Artificial Intelligence Research Institute Research Symposium at UMass Lowell 04/2024
Invited talk for CS 3001: Computing, Society, and Professionalism at Georgia Tech 10/2023
AI-CARING National Artificial Intelligence Research Institute Research Symposium at CMU 03/2023
Charlie and Harriet Shaffer Cognitive Empowerment Program (CEP) Research Symposium 02/2023
AAAI Fall Symposia Series on Artificial Intelligence for Human-Robot Interaction 11/2022
Institute for Robotics and Intelligent Machines Robotics Days for Industry 11/2022
AI-CARING National Artificial Intelligence Research Institute Research Symposium at GaTech 04/2022
Honors and Awards
2024 | Pathbreakers Fellowship
AHFE 2024 | Best Paper Award
RSS 2023 | Best Student Paper Award Finalist
HRI 2022 | Best Technical Paper Award
2020 | President’s Undergraduate Research Award
2016-2021 | Zell Miller Scholarship
2016-2021 | Georgia Tech's Honors Program
Reviewing Experience
Journals and Conferences
Robotics: Science and Systems (RSS)
IEEE/ACM International Conference on Human-Robot Interaction (HRI)
IEEE International Conference on Robotics and Automation (ICRA)
IEEE Robotics and Automation Letters (RA-L)
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
International Conference on Social Robotics (ICSR)
International Journal of Robotics Research (IJRR)
Symposia and Workshops
International Symposium on Robotics Research (ISRR)
AAAI Fall Symposium Series (FSS)
RoboLetics Workshop (CoRL)
HRI Pioneers (HRI)
Machine Learning in Human-Robot Collaboration: Bridging the Gap Workshop (HRI)