I just started as a post-doctoral researcher at Stanford, working with Diyi Yang at the SALT Lab. I recently graduated with a PhD from the Center for Data Science at New York University where I was advised by Prof. He He.
My research is broadly in human-centered NLP. I’m currently thinking about (1) how AI agents shape the future of work, (2) how to support co-creativity with foundation models, and (3) how we can elicit more diverse LLM outputs, particularly for scientific discovery.
My PhD thesis was on human-AI collaboration with LLMs for creative writing tasks. For many years, I also helped organize the NYU NLP and Text-as-Data talk series. Prior to this, I completed my MS in Computer Science at NYU’s Courant Institute of Mathematical Sciences, during which I was a Graduate Research Associate at the Center for Social Media and Politics, working on political stance classification and multimodal content sharing in online disinformation campaigns. I’ve also had the chance to intern with AWS, Amazon Alexa AI, and LAER.AI while in grad school. I did my undergrad at the National Institute of Technology Karnataka, where my thesis was advised by Prof. Sowmya Kamath.
Here’s my list of publications, CV, and Google Scholar page. Head over to my personal blog for more light-hearted content. All the other relevant links are in the footer. You can contact me at vishakh@nyu.edu.
Updates:
- September 2025: Moved to Stanford to join Diyi's lab!
- July 2025: My AI2 internship work on creating schemas to compare research papers was accepted to EMNLP Findings. Check out the paper here.
- May 2025: My internship work from Adobe got accepted to ACL 2025. Check out the paper here. I think it has cool implications for how we should think about and design agentic LLM pipelines.
- April 2025: Passed my thesis defense, still can't quite believe it!
- January 2025: Gave a talk covering a lot of my thesis work, titled 'Side Effects May Include Homogenization and Overusing Cliches', discussing how the way we do post-training of LLMs today can have unintended side effects. The talk is available online here.
- September 2024: Interning with the Semantic Scholar research team over the Fall.
- June 2024: Attended Creativity and Cognition in Chicago, my first HCI conference, and presented our work on analyzing perspectives from emerging professional writers.
- May 2024: I interned with the Document Intelligence Lab at Adobe in San Jose over the summer, working on long document generation prioritizing diversity of content with Jennifer Healey, David Arbour and Tong Sun.
- May 2024: I attended ICLR 2024 to present our work on diversity in collaborative writing. On the way back, I also gave a talk about our work at Oxford in the UK!
- March 2024: I gave virtual talks at Bocconi University and the University of Melbourne.
- December 2023: I attended EMNLP 2023 and helped give a tutorial on Creative Natural Language Generation with Tuhin Chakrabarty, Violet Peng and He He. See the slides here!
- July 2023: I presented our work on extrapolative generation with iterative refinement at ICML 2023.
- July 2023: I was the co-chair for the ACL 2023 Student Research Workshop along with Gisela Vallejo and Yao Fu.