Hyper/LoRA vs Full‑FT (RepoBench)
Repository‑level QA fine‑tuning with custom masking and GPU memory profiling of LoRA, full FT, and a hypernetwork that generates LoRA weights. Focus on stable training & reproducibility.
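The idea of a hypernetwork that generates LoRA weights can be sketched as below. This is a minimal illustrative example, not the project's actual implementation: the class names (`LoRALinear`, `HyperLoRA`), the rank, and the task-embedding interface are all assumptions for the sketch.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update x @ A^T @ B^T (illustrative)."""
    def __init__(self, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # base weights stay frozen
        self.rank = rank
        # LoRA factors are *generated* by a hypernetwork, not learned directly,
        # so they are stored as buffers rather than parameters.
        self.register_buffer("A", torch.zeros(rank, in_features))
        self.register_buffer("B", torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.A.t() @ self.B.t()

class HyperLoRA(nn.Module):
    """Tiny hypernetwork: maps a task embedding to flattened LoRA factors."""
    def __init__(self, emb_dim: int, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.in_f, self.out_f, self.rank = in_features, out_features, rank
        self.net = nn.Linear(emb_dim, rank * (in_features + out_features))

    def generate(self, task_emb: torch.Tensor, layer: LoRALinear) -> None:
        # Predict both factors in one shot, then split and reshape.
        flat = self.net(task_emb)
        a, b = flat.split([self.rank * self.in_f, self.rank * self.out_f])
        layer.A = a.view(self.rank, self.in_f)
        layer.B = b.view(self.out_f, self.rank)

# Hypothetical usage: only the hypernetwork's parameters would be trained.
layer = LoRALinear(16, 8)
hyper = HyperLoRA(emb_dim=32, in_features=16, out_features=8)
hyper.generate(torch.randn(32), layer)
out = layer(torch.randn(2, 16))  # shape (2, 8)
```

Because the generated factors keep their autograd history, gradients from the task loss flow back through `layer.A`/`layer.B` into the hypernetwork, while the base model stays frozen.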
LLM systems • Privacy‑Preserving NLP • LoRA/HyperLoRA • Agent Systems
I am a graduate Computer Science student at the University of Waterloo, supervised by Yuntian Deng. My research interests include Large Language Models, privacy‑preserving NLP, and agent systems.
Waterloo, ON, Canada · Open to research collaborations and internships.
Email: lhotsko@uwaterloo.ca · GitHub: lilianahotsko · LinkedIn: Liliana Hotsko