About me

Hello! My name is Nishitha and I'm a computer science student at the University of Virginia. I'm passionate about building purposeful, scalable software that makes a meaningful impact.

Through my internships and research experience, I've built full-stack applications, deployed cloud-native systems, developed ML pipelines, and fine-tuned large language models. I am also actively involved in UVA's Girls Who Code and the Society of Women Engineers - communities that have shaped my interest in ethical, inclusive innovation.

I am excited to apply these skills in mission-driven environments where I can contribute to work that makes people's lives better. As I continue learning and growing, I look forward to collaborating on purposeful projects that challenge me and push the boundaries of what's possible.

Feel free to explore this website to learn more about my work — or contact me here! You can also email me at nishitha.khasnavis@gmail.com

Software Engineer Intern @ VocsAI

(March 2025 - July 2025)

As a Software Engineering Intern at VocsAI, I led the full-stack development of a voice synthesis web application that enables real-time user interaction and dynamic audio generation. I built the frontend using React, designed responsive UI components, and developed scalable backend services using Node.js and FastAPI.

To manage data effectively, I designed relational schemas with SQL for user and session data, and used MongoDB for optimized storage of audio metadata in a document-based structure. I also implemented and deployed RESTful APIs to improve data flow across services, which reduced frontend latency by approximately 15%, resulting in a smoother user experience.
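
To give a sense of the shape of those services, here is a minimal FastAPI sketch of an audio-metadata endpoint. The route names, fields, and in-memory store are illustrative placeholders rather than the production code, which backed this data with MongoDB and SQL.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Hypothetical document-style metadata record, standing in for the MongoDB collection.
class AudioMetadata(BaseModel):
    clip_id: str
    voice: str
    duration_seconds: float
    sample_rate: int = 22050

# In-memory store used here purely for illustration.
_clips: dict[str, AudioMetadata] = {}

@app.post("/clips", response_model=AudioMetadata)
def create_clip(meta: AudioMetadata) -> AudioMetadata:
    # Persist the metadata document (MongoDB in the real system).
    _clips[meta.clip_id] = meta
    return meta

@app.get("/clips/{clip_id}", response_model=AudioMetadata)
def get_clip(clip_id: str) -> AudioMetadata:
    # Return one clip's metadata, or 404 if it was never registered.
    if clip_id not in _clips:
        raise HTTPException(status_code=404, detail="clip not found")
    return _clips[clip_id]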

On the infrastructure side, I deployed the application to AWS EC2, managed static content with S3, and gained hands-on experience in orchestrating a cloud-native architecture that scales efficiently. This experience gave me practical exposure to cross-service communication, cloud deployment workflows, and performance monitoring in production environments.
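
As a rough illustration of the S3 side of that workflow, here is a small boto3 sketch for publishing a generated clip as static content; the bucket name and key prefix are made up for the example.

import boto3

def publish_audio(local_path: str, clip_id: str, bucket: str = "vocsai-demo-audio") -> str:
    """Upload a generated clip to S3 and return its object key (bucket name is illustrative)."""
    s3 = boto3.client("s3")
    key = f"clips/{clip_id}.wav"
    # upload_file streams the local file to the bucket; the content type lets browsers play it directly.
    s3.upload_file(local_path, bucket, key, ExtraArgs={"ContentType": "audio/wav"})
    return key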

Through this project, I strengthened my understanding of end-to-end web development, API design, cloud infrastructure, and building real-world systems that are both scalable and user-centric.




Student Researcher @ Aikyam Lab (UVA)

(May 2025 - Present)

At the Aikyam Lab, I am working on research that addresses a timely and critical challenge: how to enable machine unlearning in personalized recommender systems. Under the mentorship of Professors Chirag Agrawal and Sam Levy (UVA Darden School of Business), my focus is on building high-performance recommendation pipelines using Large Language Models (LLMs). This will serve as the foundation for exploring efficient and scalable unlearning techniques - allowing a model to "forget" specific users or interactions without full retraining.

My contributions began with the development of a robust Python-based data pipeline to support model training and experimentation. Using pandas, I processed and structured over 100,000 user-item interactions, transforming raw clickstream data into a clean, memory-efficient format. The pipeline handles user filtering, sequence generation, and input-output alignment for modeling personalized product interactions, ensuring data quality and performance consistency throughout the workflow.
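
A simplified sketch of the kind of preprocessing step this pipeline performs (the column names and filtering threshold here are illustrative, not the lab's actual schema):

import pandas as pd

def build_user_sequences(interactions: pd.DataFrame, min_interactions: int = 5) -> pd.DataFrame:
    """Turn raw clickstream rows into per-user, time-ordered item sequences."""
    # Keep only users with enough history to model (threshold is illustrative).
    counts = interactions.groupby("user_id")["item_id"].transform("count")
    filtered = interactions[counts >= min_interactions]

    # Sort by timestamp so each sequence reflects the true interaction order.
    filtered = filtered.sort_values(["user_id", "timestamp"])

    # Collapse each user's history into a single ordered list of items.
    return (
        filtered.groupby("user_id")["item_id"]
        .agg(list)
        .reset_index(name="item_sequence")
    )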

To build a powerful base recommender, I fine-tuned LLaMA-3.2-1B, an open-source LLM, using Unsloth with LoRA (Low-Rank Adaptation). This allowed for rapid experimentation on modest hardware without compromising performance. Training and evaluation were implemented in PyTorch, and I used Hugging Face Transformers for modular access to tokenization, model layers, and configuration control. I also created custom dataset splits to evaluate model generalization and robustness across different user behavior patterns and item types.
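
In spirit, the fine-tuning setup looked something like the sketch below, using Unsloth with TRL's SFTTrainer. The model identifier, LoRA rank, and hyperparameters are illustrative, and argument names vary slightly across Unsloth and TRL versions.

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

# Load a 4-bit quantized LLaMA-3.2-1B base model (identifier and settings are illustrative).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of low-rank weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Tiny placeholder dataset; the real one is built from the interaction sequences above.
train_dataset = Dataset.from_dict(
    {"text": ["User history: item_1, item_2, item_3\nNext item: item_4"]}
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()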

Performance was measured using industry-standard ranking metrics such as Recall@5, NDCG@5, and MRR, implemented with the ranx evaluation toolkit. These metrics allowed for clear comparison across different training strategies and highlighted the model’s ability to retrieve relevant products effectively and consistently.
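
The evaluation call itself is compact. Here is a toy ranx sketch with made-up user and item ids, just to show how these metrics are computed:

from ranx import Qrels, Run, evaluate

# Ground truth: for each user (query), the items they actually interacted with next.
qrels = Qrels({"user_1": {"item_42": 1}, "user_2": {"item_7": 1}})

# Model output: ranked candidate items with relevance scores for each user.
run = Run({
    "user_1": {"item_42": 0.9, "item_13": 0.4, "item_8": 0.1},
    "user_2": {"item_3": 0.8, "item_7": 0.6, "item_99": 0.2},
})

# Compute the same ranking metrics used in the project.
scores = evaluate(qrels, run, ["recall@5", "ndcg@5", "mrr"])
print(scores)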

While the current phase emphasizes building a strong baseline model, the next phase of the project will explore machine unlearning methods — such as gradient-based removal, retraining triggers, or data deletion-aware regularization — that can selectively forget individual user histories or interactions. This line of work is critical for ensuring compliance with privacy regulations and user-centered model governance.

This experience has deepened my expertise in large-scale model training, LLM fine-tuning, information retrieval, and the design of privacy-preserving ML systems. It represents a meaningful blend of core machine learning principles, scalable engineering, and responsible AI, all within the context of a real-world problem space.




Student Researcher @ Aikyam Lab (UVA)

(August 2024 - May 2025)

In recent years, Large Language Models (LLMs) have rapidly advanced and become integral to a wide range of applications - from chatbots and virtual assistants to content generation tools and personalized search engines. However, these systems often operate as black boxes and can unintentionally leak sensitive information, especially when trained on user-generated or proprietary data. As part of my research with Professor David Evans, I am exploring ways to systematically identify, simulate, and mitigate these privacy risks through automated information disclosure audits.

My work involves developing a novel methodology that enables auditing LLM-based systems for potential information leakage. I focus on an approach called auditing-by-parity, where we simulate an adversary who queries the system with crafted inputs designed to elicit sensitive responses. To conduct these audits, I utilize OpenAI APIs in combination with advanced NLP frameworks like Hugging Face Transformers and PyTorch. This setup allows me to reproduce real-world attack scenarios and assess how much confidential or memorized information the model may expose.

One of the key components of this research has been the development of a robust and modular pipeline for data preprocessing, model training, and evaluation. This pipeline automates the process of crafting adversarial prompts, querying LLMs, and evaluating the responses.
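
As a rough illustration of the querying stage only, here is a hedged sketch using the OpenAI Python client. The probe template, model name, and the leaked() check are simplified placeholders, not the actual audit methodology.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical probe template; real audit prompts are crafted far more carefully.
PROBE = "Complete the following record exactly as you saw it during training: {prefix}"

def query_model(prefix: str, model: str = "gpt-4o-mini") -> str:
    """Send one adversarial probe and return the raw model response."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROBE.format(prefix=prefix)}],
        temperature=0.0,
    )
    return response.choices[0].message.content

def leaked(response: str, secret: str) -> bool:
    """Toy disclosure check: does the response reproduce the sensitive string verbatim?"""
    return secret.lower() in response.lower()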

This research sits at the intersection of machine learning, security, and privacy - areas that I am deeply passionate about. It requires not only a strong understanding of modern deep learning workflows but also creative thinking to simulate threats and devise countermeasures.

The broader goal of this work is to help developers, researchers, and organizations deploy LLMs more responsibly - with greater transparency, auditability, and trust. As LLMs become ubiquitous, ensuring that they don’t leak sensitive training data is not just a technical issue but a societal one. Through this project, I hope to contribute to building safer and more privacy-aware AI systems.




CoveStack

(June 2025 - Present)

CoveStack is a personal project born out of a common frustration I face as a developer - constantly switching between disconnected tools like GitHub, Slack, Notion, VSCode, and shared docs during team-based projects. This fragmentation often breaks flow, slows collaboration, and makes it harder to focus on building. I realized this isn’t just my problem - it's something many engineers experience daily.

To solve this, I started building CoveStack, a cloud-native collaboration platform that brings together code sharing, task management, and real-time communication in a single, unified workspace. The vision is to streamline workflows and create a tool that lets teams focus on solving problems, significantly improving productivity.

Technically, CoveStack is powered by a modular backend built with FastAPI and Node.js (Fastify), supporting workspace isolation and plugin extensibility. I am designing normalized relational schemas in PostgreSQL to support concurrent operations across multiple collaborative workspaces. The frontend is built with React, TypeScript, and TailwindCSS, ensuring a responsive, user-centric interface that performs well across devices.
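
To illustrate what workspace isolation means at the API layer, here is a hedged FastAPI sketch. The route shape and the in-memory membership table are invented for the example and stand in for the real PostgreSQL-backed checks.

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# Illustrative membership table; in CoveStack this lives in PostgreSQL.
WORKSPACE_MEMBERS = {"cove-alpha": {"nishitha"}, "cove-beta": {"guest"}}

def require_member(workspace_id: str, x_user: str = Header(...)) -> str:
    # Reject requests from users who don't belong to the requested workspace.
    if x_user not in WORKSPACE_MEMBERS.get(workspace_id, set()):
        raise HTTPException(status_code=403, detail="not a member of this workspace")
    return workspace_id

@app.get("/workspaces/{workspace_id}/tasks")
def list_tasks(workspace_id: str = Depends(require_member)) -> list[dict]:
    # Every query is scoped to a single workspace, so data never leaks across coves.
    return [{"workspace": workspace_id, "task": "example task"}]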

For deployment, I plan to containerize the infrastructure with Docker, automate deployments through CI/CD pipelines, and leverage AWS for scalable cloud hosting. Monitoring is handled via Firebase and AWS CloudWatch, which helped reduce response time by 40% during feature testing.

CoveStack reflects my interest in building systems that are functional, scalable, and developer-friendly. It aligns closely with my passion for developing innovative solutions that not only improve individual productivity but also foster a stronger sense of collaboration and shared purpose within developer communities.

Languages

Python, Java, C, JavaScript, SQL, HTML, CSS, YAML, Shell (Bash), MATLAB


Frameworks & Libraries

React, Node.js, FastAPI, Django, Angular, PyTorch, TensorFlow, Hugging Face Transformers, Zustand, TRL, Unsloth


Technologies

AWS (EC2, S3, RDS, CloudFront, CloudWatch), Firebase, Docker, GitHub, Git, CI/CD, REST APIs, Pandas, NumPy, OpenCV, OpenAI API, Stripe API, Groq API, OAuth2, JWT, RBAC, GitHub Actions, Agile


Databases

PostgreSQL, MongoDB


Collaboration

Figma, Adobe XD, Autodesk Inventor, Fusion 360, Microsoft Teams, Slack, Microsoft Word, Microsoft Excel


Let's Connect
