First-year CSE student @ Delhi Technological University
Deep Learning enthusiast | Full-stack explorer | DSA beast in training
Fun fact: I play the guitar like I train models: with layers and soul.
Currently building and fine-tuning models in:
- LLMs | NLP | Computer Vision
- Retrieval-Augmented Generation (RAG) with Transformers
- Custom Object Detectors | Facial & Human Action Recognition
Exploring the MERN Stack and leveling up my web dev game
Deep-diving into Data Structures & Algorithms with C++ for that extra edge
"I donβt just learn β I backprop through life." "Iβm pretrained on hustle, fine-tuned on purpose."
"Every epoch makes me better"\
Languages:
C++ | Python | JavaScript | HTML | CSS | C
Frameworks & Tools:
- PyTorch, TensorFlow, OpenCV, Scikit-learn, HuggingFace
- Node.js, Express, React, MongoDB
- Git, GitHub, VS Code, Kaggle, Postman
Projects:
- Text-to-Music Generator with MusicGen & DiffSinger
- RAG-based QA Bot with custom retrievers & LLMs
- Object detectors: RetinaNet | EfficientDet | DETR
- Facial Expression + Human Action Recognition (YOLO + CNN hybrid)
- MERN-based Web Dashboards for ML Ops
Connect:
- GitHub
- Email: divansumishra47@gmail.com
- Always open to collabs in AI, CV, or full-stack builds
- Every loss is just another gradient to descend
- I'm pretrained on red flags: I detect 'em in one shot
- YOLO? Nah. I detect chances in one shot
- Got 99 problems but vanishing gradients ain't one
- Drop the past like a residual: I skip connections to the future
- I backpropagate through failures and optimize for growth
- Overfitting? Nah, I'm just deeply trained
- Even the best models carry some bias β so do we