news

Jun 30, 2025 Happy to share that I’ve successfully defended my thesis titled “You Can (Not) Trust: Reliability and Robustness of LLMs as Human-Like Annotators and Judges.” This work explores critical questions about the reliability, robustness, and alignment of Large Language Models (LLMs) when used for annotation and evaluation tasks in NLP. It investigates the indistinguishability of LLM-generated and human annotations, the alignment and reliability of LLMs as subjective judges of language quality, and their susceptibility to misinformation in evaluation settings. Two papers from this work have been accepted, and we’re actively working on future directions. I’m deeply grateful to my advisor Vasudeva Varma, reviewers Vikram Pudi and Asif Ekbal, my collaborators, and the iREL lab at IIIT Hyderabad. Joining this lab has truly been a life-changing decision, and I’m thankful for the support and freedom I’ve received throughout this journey.
May 16, 2025 Thrilled to share that our paper, “Screening of Oral Potentially Malignant Disorders and Oral Cancer Using Deep Learning Models,” has been accepted by Scientific Reports, a Nature Portfolio journal (2025). This research was part of my internship at the Applied AI Research Centre (INAI, IIIT Hyderabad), a collaborative initiative by IIIT-H, Intel, the Government of Telangana, and PHFI. We applied deep learning and image analysis to improve early detection of oral cancer. Grateful to be part of impactful AI-driven healthcare research.