Happy to share that I’ve successfully defended my thesis, “You Can (Not) Trust: Reliability and Robustness of LLMs as Human-Like Annotators and Judges.”

The thesis examines the reliability, robustness, and alignment of Large Language Models (LLMs) when they are used for annotation and evaluation tasks in NLP. It investigates whether LLM-generated annotations can be distinguished from human ones, how well LLMs align with humans as subjective judges of language quality, and how susceptible they are to misinformation in evaluation settings. Two papers from this work have been accepted, and we’re actively pursuing follow-up directions.

I’m deeply grateful to my advisor Vasudeva Varma, my reviewers Vikram Pudi and Asif Ekbal, my collaborators, and the iREL lab at IIIT Hyderabad. Joining this lab has truly been a life-changing decision, and I’m thankful for the support and freedom I’ve received throughout this journey.