I am a 1st year PhD student in CS at Princeton advised by Olga Russakovsky and supported by the President's Fellowship.
I received my B.S. in EECS from Berkeley in 2022 and my M.S. in 2023, advised by Jitendra Malik.
I am broadly interested in creating computer vision systems which can learn from and interpret visual data as humans do.
While at Berkeley, I had the great fortune to collaborate with and be mentored by a number of
wonderful people, including Karttikeya
Mangalam, Alvin Wan, and Dan Hendrycks.
I was also heavily involved in teaching and outreach, serving on CS 70 course
staff multiple times and previously leading Machine Learning @ Berkeley. You can find
more from my main website here.
If you are interested in collaborating, or just want to chat about research or advice, feel free to reach out to me at [first][last][at]cs[dot]princeton[dot]edu.
[May 2023] Our paper, PaReprop, was accepted as a spotlight at the Transformers for Vision Workshop @ CVPR 2023!
[Apr 2023] I am starting my PhD at Princeton in Fall 2023, advised by Professor Olga Russakovsky!
I'm broadly interested in computer vision, especially in drawing from human cognition to create visual systems which are effective and robust.
Humans have incredibly proficient visual systems.
For example, they adapt readily to new information and settings, and they can accurately track objects even through occlusions.
I am interested in understanding how we can replicate such capabilities in machines to teach them to see as we do, especially drawing from psychology and cognition for inspiration.
My goal is to develop flexible and general learners which can learn efficiently from data.
Some trends in this direction are scalable methods of self-supervised learning, robustness to distribution shift in real-world deployment, and utilizing rich visual priors in data (in some happy accordance with the Bitter Lesson), especially in video.
PaReprop: Fast Parallelized Reversible Backpropagation
Transformers for Vision Workshop @ CVPR, 2023 (Spotlight Paper)
A simple extension of Reversible Vision Transformers that parallelizes the
backward pass using CUDA streams, with a study of when these benefits are realized.
The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
Four new datasets measuring real-world distribution shifts, as well as a new
state-of-the-art data augmentation method that outperforms models pretrained
with 1000x more labeled data.
Making Reversible Transformers Accurate, Efficient, and Fast
In this work, we present an in-depth analysis of reversible transformers and demonstrate that they can be more accurate, more efficient, and faster than their vanilla counterparts. We introduce a new method of reversible backpropagation which is faster and scales better with memory than previous techniques, and we demonstrate new results showing that reversible transformers transfer better to downstream visual tasks.
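The core idea behind reversible backpropagation can be sketched with a toy reversible coupling block (in the style of RevNet): the block's inputs are not stored during the forward pass, because they can be recomputed exactly by inverting the block during the backward pass. This is a minimal illustrative sketch, not the paper's implementation; `F` and `G` are hypothetical placeholder residual functions standing in for the transformer sub-blocks.

```python
# Minimal sketch of the reversible-block idea behind reversible
# backpropagation: activations are recomputed by inversion instead
# of being stored. F and G are placeholder residual functions.

def F(x):
    return 0.5 * x  # stand-in for e.g. an attention sub-block

def G(x):
    return 2.0 * x  # stand-in for e.g. an MLP sub-block

def reversible_forward(x1, x2):
    """Forward pass of one reversible coupling block."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2  # inputs x1, x2 need not be kept in memory

def reversible_inverse(y1, y2):
    """Recover the block inputs exactly from its outputs."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = 3.0, 4.0
y1, y2 = reversible_forward(x1, x2)
r1, r2 = reversible_inverse(y1, y2)
assert (r1, r2) == (x1, x2)  # activations recovered, not stored
```

Because each block's inputs are recoverable from its outputs, activation memory stays roughly constant in network depth, at the cost of extra recomputation in the backward pass.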