A showcase of AI and data science projects that demonstrate real-world impact and technical excellence across various domains.
Personal Project • Dec 2025 - Present
Anonymous voice support app for people preparing for jobs or major exams, or pushing through burnout and depression. No follows, no networking, no influencer dynamics.
Impact:
Creates a safe, low-pressure space where encouragement feels human and sincere.
Tech Stack:
Personal Project • Aug 2025 - Oct 2025
Mobile invoicing for field technicians, dog walkers, and private tutors. Create invoices on-site with voice input or templates, attach proof photos, and share with clients immediately.
Impact:
Replaces paper forms with faster billing, clearer documentation, and more professional client communication.
Tech Stack:

Deloitte • 2024 - Present
Led the data pipeline and audit-risk labeling workflow for global news intelligence.
Impact:
Keeps audit-risk data structured, current, and reliable for downstream RiskSensing, API Platform, and Omnia teams.
Tech Stack:

Personal Project • Jul 2025
Developed a cross-platform desktop notification system for Claude Code that delivers real-time alerts when AI tasks complete, errors occur, or user input is needed. Provides native desktop notifications on macOS, Windows, and Linux with customizable sound alerts, global and project-specific settings, and seamless integration with Claude Code workflows (a minimal dispatch sketch follows this entry).
Impact:
Reduced context switching by giving developers instant awareness of Claude Code status, cutting idle time and improving workflow efficiency. Homebrew package distribution makes installation and updates easy for the developer community, and quick aliases with flexible configuration options adapt the tool to individual workflow preferences.
Tech Stack:
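A minimal sketch of the cross-platform dispatch idea, purely illustrative: the function name, the AppleScript/notify-send calls, and the Windows fallback are assumptions here, not the project's actual code.

```python
# Illustrative cross-platform notification dispatch (not the project's real implementation).
import platform
import subprocess

def notify(title: str, message: str) -> None:
    """Send a native desktop notification on macOS or Linux."""
    system = platform.system()
    if system == "Darwin":
        # macOS: use the built-in AppleScript bridge.
        script = f'display notification "{message}" with title "{title}"'
        subprocess.run(["osascript", "-e", script], check=False)
    elif system == "Linux":
        # Linux: notify-send ships with most desktop environments.
        subprocess.run(["notify-send", title, message], check=False)
    else:
        # Windows toast handling is omitted in this sketch.
        print(f"[{title}] {message}")

# Example: alert the developer that a Claude Code task has finished.
notify("Claude Code", "Task complete - input needed")
```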

GWU Research Lab • 2023 - 2024
I led the end-to-end development of an intelligent nursing diagnostic system, designing and implementing a Retrieval-Augmented Generation (RAG) pipeline that references a knowledge base of 80+ documented nursing scenarios. When new patient data is entered, the system retrieves the three most similar scenarios to inform its diagnostic suggestions (a retrieval sketch follows this entry).
Impact:
Crucially, I architected a Human-in-the-Loop (HITL) feedback mechanism: nurses can provide feedback on the AI's suggestions, which is then vectorized and stored in our Deep Lake vector database. This creates a self-improving system whose accuracy and relevance improve with each interaction.
Tech Stack:
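A minimal sketch of the top-3 retrieval plus feedback write-back loop, under simplifying assumptions: it uses a generic sentence-embedding model and an in-memory array instead of the Deep Lake store used in the actual system, and the scenario texts below are invented examples.

```python
# Illustrative top-3 scenario retrieval plus HITL feedback storage.
# The embedding model, example scenarios, and function names are assumptions;
# the production system stored vectors in Deep Lake, not an in-memory list.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Knowledge base: documented nursing scenarios and their embeddings.
scenarios = [
    "Post-operative patient reports sharp abdominal pain and nausea.",
    "Elderly patient with shortness of breath and swollen ankles.",
    "Patient recovering from stroke shows new slurred speech.",
]
scenario_vecs = model.encode(scenarios, normalize_embeddings=True)

def retrieve_top_k(patient_note: str, k: int = 3):
    """Return the k most similar documented scenarios for a new patient note."""
    query = model.encode([patient_note], normalize_embeddings=True)[0]
    sims = scenario_vecs @ query          # cosine similarity (vectors are normalized)
    top = np.argsort(sims)[::-1][:k]
    return [(scenarios[i], float(sims[i])) for i in top]

def store_feedback(patient_note: str, nurse_feedback: str) -> None:
    """Vectorize nurse feedback and append it to the knowledge base (HITL loop)."""
    global scenario_vecs
    entry = f"{patient_note} | feedback: {nurse_feedback}"
    scenarios.append(entry)
    scenario_vecs = np.vstack([scenario_vecs, model.encode([entry], normalize_embeddings=True)])
```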

GWU Research Lab • 2023 - 2024
I was responsible for the entire audio-processing pipeline. My primary role was to extract and analyze audio from raw video footage, tackling the significant challenge of low-quality Korean-language audio. I developed a noise-reduction process using spectral subtraction, plus filtering logic to isolate the child's voice from background noise and parental speech, significantly improving the quality of the data fed to the model (a spectral-subtraction sketch follows this entry).
Impact:
This work was critical for enabling the analysis of 'in-the-wild' videos, a key goal of our research. By successfully processing the audio data, I helped create a system that provides objective, data-driven insights to support clinicians, making behavioral analysis more efficient and accessible.
Tech Stack:
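A minimal sketch of spectral-subtraction noise reduction: the file paths, frame sizes, and the assumption that the first half-second is speech-free are illustrative choices, not the project's actual settings.

```python
# Illustrative spectral-subtraction denoising (parameters and paths are assumptions).
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("session_audio.wav", sr=16000)

# Estimate the noise spectrum from a segment assumed to contain no speech
# (here: the first 0.5 s of the recording).
noise = y[: int(0.5 * sr)]
noise_mag = np.abs(librosa.stft(noise, n_fft=1024, hop_length=256)).mean(axis=1, keepdims=True)

# Subtract the noise magnitude from every frame, keeping the original phase.
stft = librosa.stft(y, n_fft=1024, hop_length=256)
mag, phase = np.abs(stft), np.angle(stft)
clean_mag = np.maximum(mag - noise_mag, 0.0)

# Reconstruct and save the denoised signal.
y_clean = librosa.istft(clean_mag * np.exp(1j * phase), hop_length=256)
sf.write("session_audio_denoised.wav", y_clean, sr)
```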

Atos Zdata • 2023
Developed a private, RAG-enabled LLM system using LangChain and vector databases (FAISS, Qdrant) to support Q&A, summarization, and enterprise document retrieval. Built an auto-updating vector index that detects document changes in real time (sketched below) and benchmarked LLMs (Llama-2, Falcon, GPT4All) for accuracy and latency.
Impact:
Enabled automated draft responses for RFP/RFI/SoW workflows and faster retrieval across internal knowledge bases.
Tech Stack:
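A sketch of the change-detection half of an auto-updating index, under simplifying assumptions: it polls a folder and hashes file contents, and the reindex() helper, folder name, and interval are hypothetical stand-ins for the LangChain + FAISS/Qdrant pipeline used in the real system.

```python
# Illustrative change detection that triggers re-embedding of modified documents.
import hashlib
import time
from pathlib import Path

DOCS_DIR = Path("enterprise_docs")   # hypothetical document folder
_seen: dict[Path, str] = {}          # path -> last known content hash

def reindex(path: Path) -> None:
    """Placeholder for chunking, embedding, and upserting into the vector store."""
    print(f"re-embedding {path.name}")

def scan_once() -> None:
    for path in DOCS_DIR.glob("**/*.txt"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if _seen.get(path) != digest:   # new or modified document
            _seen[path] = digest
            reindex(path)

while True:
    scan_once()
    time.sleep(30)   # poll every 30 seconds
```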

GWU Research Lab • 2023
Developed a multimodal AI system enabling the Pepper robot to navigate autonomously. The robot uses Microsoft HoloLens for real-time environment scanning, obstacle detection, and spatial mapping.
Impact:
This research aimed to give Pepper spatial awareness for free movement in new environments, with future goals of recognizing and remembering individuals; an LLM was used for conversational interaction. An illustrative obstacle-check sketch follows this entry.
Tech Stack:
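A purely illustrative sketch of the obstacle-detection idea over a spatial-mapping depth grid: the grid layout, clearance threshold, and function name are assumptions, and the actual HoloLens-to-Pepper integration is not reproduced here.

```python
# Illustrative obstacle check over a depth grid produced by spatial mapping.
import numpy as np

def path_is_clear(depth_grid: np.ndarray, min_clearance_m: float = 0.8) -> bool:
    """Return True if the central forward sector has no obstacle closer than the clearance."""
    cols = depth_grid.shape[1]
    forward = depth_grid[:, cols // 3 : 2 * cols // 3]   # central third = straight ahead
    return float(forward.min()) > min_clearance_m

# Example: a 4x6 grid of scanned distances in meters.
grid = np.array([
    [2.1, 2.0, 1.9, 1.8, 2.0, 2.2],
    [1.7, 1.6, 1.5, 1.4, 1.6, 1.8],
    [1.3, 1.2, 0.6, 0.7, 1.2, 1.4],   # something 0.6 m ahead
    [1.0, 0.9, 0.5, 0.6, 0.9, 1.1],
])
print("clear" if path_is_clear(grid) else "stop: obstacle ahead")
```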

GWU Research Lab • 2023
I took over a stalled project built on a traditional NLP model and a Unity 3D avatar. I completely redesigned the system, integrating the GPT API for fluid conversation and OpenAI's Whisper API for robust speech-to-text, with spoken replies handled by a separate text-to-speech step (a sketch of the loop follows this entry). The virtual avatar was replaced with a physical Pepper robot for tangible user interaction.
Impact:
This overhaul transformed a non-interactive prototype into a successful project. The new system was not only presented at a university poster session but was also significant enough for my supervising professor to present it at an academic conference.
Tech Stack:
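A minimal sketch of the listen-reason-speak loop, with the caveat that the model names, file handling, and the speak() stub are assumptions rather than the deployed code; on the physical robot the final step would go through its own text-to-speech.

```python
# Illustrative Whisper + chat-completion conversational loop.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe(audio_path: str) -> str:
    """Speech-to-text via the Whisper API."""
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def reply(user_text: str) -> str:
    """Generate a conversational response with a chat model."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a friendly robot assistant."},
            {"role": "user", "content": user_text},
        ],
    )
    return completion.choices[0].message.content

def speak(text: str) -> None:
    """Placeholder: the physical robot would voice this via its own TTS."""
    print(f"Pepper says: {text}")

speak(reply(transcribe("user_utterance.wav")))
```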

Bauman Moscow State Technical University (Bachelor's Thesis) • 2022
Developed a novel methodology combining Finite Element Analysis (FEA) with Machine Learning to predict structural integrity in aerospace components.
Impact:
Achieved 95-97% predictive accuracy, training on a proprietary dataset generated from scratch through complex ANSYS simulations (a surrogate-model sketch follows this entry).
Tech Stack:
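A minimal sketch of the FEA-plus-ML surrogate idea, under stated assumptions: the feature set, model choice, and synthetic target below are illustrative stand-ins, not the thesis dataset produced in ANSYS.

```python
# Illustrative surrogate model trained on FEA-style features (data here is synthetic).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical simulation inputs: load magnitude, wall thickness, material modulus.
X = rng.uniform([1e3, 2.0, 60e9], [5e4, 10.0, 210e9], size=(500, 3))
# Stand-in target: a stress-like response for each simulated run.
y = X[:, 0] / (X[:, 1] ** 2) + 0.05 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(f"R^2 on held-out runs: {r2_score(y_test, model.predict(X_test)):.3f}")
```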