Production-grade ML, not playground demos — fine-tuning LLMs, building RAG pipelines, and shipping desktop AI systems that run on real hardware.
Final-year B.Tech AI & Data Science student at CGC Landran — building systems at the intersection of research and real engineering.
I specialize in LLM fine-tuning, RAG pipelines, and local AI deployment. My flagship project, ARIA, is a fully local Windows AI assistant with a 15-module architecture, speech-to-text, a voice engine, and Stable Diffusion integration.
Used in real shipped projects — not just checkboxes on a resume.
Advanced Runtime Intelligence Assistant — fully local Windows desktop AI. 15-module PyQt5 architecture, LM Studio inference, Stable Diffusion image generation, voice engine, MongoDB persistence, and a tiered self-modification system with rollback. Zero cloud dependency.
Verified-document RAG chatbot. Strict source grounding — answers come only from indexed documents, and out-of-scope questions are refused rather than improvised.
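The "refuse rather than improvise" behavior comes down to a retrieval gate: if no source document matches the query strongly enough, the system declines instead of letting the model guess. A minimal sketch of that idea (token-overlap retrieval with a refusal threshold — hypothetical names, not the project's actual pipeline):

```python
# Minimal sketch of strict source grounding: retrieve by token overlap,
# refuse when no document clears a threshold. Illustrative only — a real
# pipeline would use embeddings and a vector store instead.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, documents, min_overlap=2):
    """Return the best-matching document, or None if nothing is grounded."""
    q = tokenize(query)
    best_doc, best_score = None, 0
    for doc in documents:
        score = len(q & tokenize(doc))
        if score > best_score:
            best_doc, best_score = doc, score
    return best_doc if best_score >= min_overlap else None

def answer(query, documents):
    source = retrieve(query, documents)
    if source is None:
        return "I can't answer that from the verified documents."
    # In a real pipeline the retrieved chunk is passed to the LLM with an
    # instruction to answer only from the provided context.
    return f"Based on the source: {source}"

docs = [
    "The refund policy allows returns within 30 days of purchase.",
    "Support is available Monday through Friday, 9am to 5pm.",
]
print(answer("What is the refund policy for returns?", docs))
print(answer("Who won the World Cup?", docs))
```

The key design choice is that refusal is decided before generation, so the model never sees an ungrounded query at all.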
Real-time webcam emotion recognition. DeepFace + OpenCV classifying seven emotional states live.
Full-featured image editor — filters, adjustments, export. PyQt5 + PIL packaged as a Windows .exe.
Fine-tuned GGUF models and training datasets. Open weights, MIT licensed.
Qwen3-4B-Instruct-2507 fine-tuned on ~1,200 Alpaca-format Excel pairs. Q4_K_M GGUF for fast local inference.
Lighter 2B variant fine-tuned for Excel instruction following. Q4_K_M GGUF optimized for consumer hardware.
~1,200 Alpaca-format instruction-response pairs for Excel tasks. JSON, HuggingFace Datasets compatible.
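For readers unfamiliar with the format: each Alpaca pair is a JSON object with `instruction`, `input`, and `output` fields, rendered into a fixed prompt template at training time. A sketch using the standard Alpaca schema (the example pair itself is illustrative, not taken from the dataset):

```python
# One Alpaca-format pair and the standard prompt template applied at
# training time. Field names follow the Alpaca schema; the Excel example
# below is illustrative.

example = {
    "instruction": 'Write an Excel formula that sums column B where column A equals "Paid".',
    "input": "",
    "output": '=SUMIF(A:A, "Paid", B:B)',
}

def to_prompt(pair):
    """Render one Alpaca pair into a training-ready prompt string."""
    header = ("Below is an instruction that describes a task. "
              "Write a response that appropriately completes the request.\n\n")
    body = f"### Instruction:\n{pair['instruction']}\n\n"
    if pair["input"]:
        # The input variant inserts context between instruction and response.
        body += f"### Input:\n{pair['input']}\n\n"
    return header + body + "### Response:\n" + pair["output"]

print(to_prompt(example))
```

A JSON file of such objects loads directly with HuggingFace Datasets (`load_dataset("json", data_files=...)`), which is what the compatibility note above refers to.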
Open to internships, research collaborations, and interesting AI/ML problems. Always happy to discuss local LLMs, fine-tuning, or anything on the edge of what's possible.