Ananya Harsh Jha
Hi! I am a first-year PhD student at UW, co-advised by Hannaneh Hajishirzi and Luke Zettlemoyer. I am broadly interested in efficient machine learning and optimization for neural networks.
Previously, I was a predoctoral researcher at AI2, where I was mentored by Iz Beltagy and Emma Strubell.
Before that, I was a research engineer at PyTorch Lightning, where I co-wrote TorchMetrics with Teddy Koker and worked on stochastic autoencoders with Kyunghyun Cho.
In a previous life, I worked on cycle-VAEs with Saket Anand at IIIT-Delhi.
If you'd like to chat about research, academia, or anything else, feel free to reach out at ananyahj [at] cs [dot] washington [dot] edu.
representative papers
conferences
OLMo: Accelerating the Science of Language Models
Dirk Groeneveld, Iz Beltagy, …, Ananya Harsh Jha, …, Noah A. Smith, Hannaneh Hajishirzi
(🥇 Best Theme Paper Award) ACL ’24 | [code] [website] [🤗 artifacts]
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
Luca Soldaini, Rodney Kinney, …, Ananya Harsh Jha, …, Jesse Dodge, Kyle Lo
(🥇 Best Resource Paper Award) ACL ’24 | [code] [website] [🤗 artifacts]
Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders
Ananya Harsh Jha, Saket Anand, Maneesh Singh, and VSR Veeravasarapu
ECCV ’18 | [code]
arXiv
Just CHOP: Embarrassingly Simple LLM Compression
Ananya Harsh Jha, Tom Sherborne, Evan Pete Walsh, Dirk Groeneveld, Emma Strubell, Iz Beltagy
May ’23
AASAE: Augmentation-Augmented Stochastic Autoencoders
William Falcon*, Ananya Harsh Jha*, Teddy Koker, Kyunghyun Cho
July ’21 | [code]
resources
- 📜 You can find my grad school statement of purpose at CS-SOP; I hope it's helpful. Remember that graduate students come from diverse backgrounds, and your profile need not look like mine to be admitted to the same program.
online presence
- 🤓 Scholar: Ananya Harsh Jha
- 💻 Github: ananyahjha93@GitHub
- 📄 Resume (probably out of date): Ananya Harsh Jha