Projects

A showcase of AI and data science projects that demonstrate real-world impact and technical excellence across various domains.

Unseen: Anonymous Voice Support
Active

Personal Project · Dec 2025 - Present

Anonymous voice support app for people preparing for jobs or major exams, or pushing through burnout and depression. No follows, no networking, no influencer dynamics.

Impact:

Creates a safe, low-pressure space where encouragement feels human and sincere.

Tech Stack:

React Native, Expo, TypeScript, Supabase, Supabase Auth, Whisper API (STT), +2 more

PayLoadApp: Mobile Invoice Generator
Completed

Personal Project · Aug 2025 - Oct 2025

Mobile invoicing for field technicians, dog walkers, and private tutors. Create invoices on-site with voice input or templates, attach proof photos, and share with clients immediately.
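
For a rough sense of the voice-input path (the app itself is React Native/TypeScript), here is a minimal Python sketch: transcribe a recording with the Whisper API and turn the transcript into a draft invoice. The audio file name and the field parsing are purely illustrative.

    # Illustrative only: Whisper API transcription feeding a naive invoice draft.
    # Assumes OPENAI_API_KEY is set; the real app does this client-side in TypeScript.
    import re
    from openai import OpenAI

    client = OpenAI()

    def transcribe(audio_path: str) -> str:
        with open(audio_path, "rb") as f:  # speech-to-text via the Whisper API
            return client.audio.transcriptions.create(model="whisper-1", file=f).text

    def draft_invoice(transcript: str) -> dict:
        # Hypothetical parsing: pull the first dollar amount, keep the rest as the line item.
        amount = re.search(r"\$?(\d+(?:\.\d{2})?)", transcript)
        return {"description": transcript.strip(),
                "amount": float(amount.group(1)) if amount else None}

    print(draft_invoice(transcribe("visit_notes.m4a")))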

Impact:

Replaces paper forms with faster billing, clearer documentation, and more professional client communication.

Tech Stack:

React Native, Expo, TypeScript, Supabase, Supabase Auth, Whisper API (STT), +2 more

Audit Risk Intelligence Platform
Production

Deloitte · 2024 - Present

Led the data pipeline and audit-risk labeling workflow for global news intelligence.

  • Built an ETL pipeline for news metadata and source-credibility labeling across 17 countries, partnering with regional Deloitte SMEs to map trusted sources
  • Built relevance and audit-risk classification using title/summary/company tags; curated a golden set with SMEs for evaluation (a simplified labeling sketch follows this list)
  • Implemented client-specific processing and DUNS-based entity updates to handle M&A changes in near real time; processes 40–50K articles/day and ~20K labeled records
  • Delivered the RiskSensing API on AKS with QA test suites, OpenTelemetry monitoring, and LLM cost optimization from ~$40/day to ~$6/day (GPT-4/4o/5)
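
For illustration, a simplified sketch of the labeling step: prompt a model with an article's title, summary, and company tags and ask for a structured relevance / risk verdict. The prompt wording, label set, and model name are placeholders, not the production configuration.

    # Illustrative LLM labeling of an article from title / summary / company tags.
    import json
    from openai import OpenAI

    client = OpenAI()

    SYSTEM = ("You label news articles for audit-risk monitoring. Return JSON with "
              "keys 'relevant' (true/false) and 'risk_level' (none, low, medium, high).")

    def label_article(title: str, summary: str, company_tags: list[str]) -> dict:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; cheaper tiers keep per-article cost low
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": json.dumps(
                    {"title": title, "summary": summary, "companies": company_tags})},
            ],
        )
        return json.loads(response.choices[0].message.content)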

Impact:

Keeps audit-risk data structured, current, and reliable for downstream RiskSensing, API Platform, and Omnia teams.

Tech Stack:

Python, Azure, AKS, OpenTelemetry, LLM, RAG, +4 more

Claude-Notify: Cross-Platform Desktop Notifications for AI Workflows
Completed

Personal Project · Jul 2025

Developed a cross-platform desktop notification system for Claude Code that enhances developer productivity by providing real-time alerts when AI tasks complete, errors occur, or user input is needed. Features native desktop notifications across macOS, Windows, and Linux with customizable sound alerts, global and project-specific settings, and seamless integration with Claude Code workflows.
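
The tool itself is written in Go; the Python sketch below only illustrates the cross-platform dispatch idea of picking the native notification mechanism per OS. The Windows branch is stubbed because native toasts require the WinRT APIs.

    # Conceptual sketch of per-OS notification dispatch (the real tool is Go).
    import platform
    import subprocess

    def notify(title: str, message: str) -> None:
        system = platform.system()
        if system == "Darwin":  # macOS: AppleScript notification
            script = f'display notification "{message}" with title "{title}"'
            subprocess.run(["osascript", "-e", script], check=False)
        elif system == "Linux":  # Linux: freedesktop notify-send
            subprocess.run(["notify-send", title, message], check=False)
        else:  # Windows toasts need the WinRT APIs; stubbed here
            print(f"[{title}] {message}")

    notify("Claude Code", "Task finished - input needed")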

Impact:

Eliminated context switching for developers by providing instant awareness of Claude Code status, reducing idle time and improving workflow efficiency. Homebrew package distribution ensures easy installation and updates for the developer community. Quick aliases and flexible configuration options make it adaptable to individual workflow preferences.

Tech Stack:

Go, Cross-platform Development, Desktop Notifications, Homebrew, CLI Development, System APIs, +1 more

Nursing AI Diagnostic System with Human-in-the-Loop
Research

GWU Research Lab · 2023 - 2024

Led the end-to-end development of an intelligent nursing diagnostic system. I designed and implemented a Retrieval-Augmented Generation (RAG) system that references a knowledge base of 80+ documented nursing scenarios. When new patient data is entered, the system retrieves the top 3 most similar scenarios to inform its diagnostic suggestions.
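
A minimal sketch of the top-3 retrieval step: embed the new patient note, score it against the embedded scenarios, and return the closest matches. The embedding model name and the in-memory matrix are stand-ins; the project kept its vectors in Deeplake.

    # Illustrative top-k retrieval over embedded nursing scenarios.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(text: str) -> np.ndarray:
        resp = client.embeddings.create(model="text-embedding-3-small", input=text)
        return np.array(resp.data[0].embedding)

    def top_k_scenarios(patient_note: str, scenarios: list[str], k: int = 3) -> list[str]:
        query = embed(patient_note)
        matrix = np.stack([embed(s) for s in scenarios])  # precompute/cache in practice
        scores = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))
        return [scenarios[i] for i in np.argsort(scores)[::-1][:k]]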

Impact:

Crucially, I architected a Human-in-the-Loop (HITL) feedback mechanism: nurses provide feedback on the AI's suggestions, which is vectorized and stored in our Deeplake vector database. This creates a self-improving system whose accuracy and relevance increase with each interaction.
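
A sketch of the feedback-capture side, assuming the same embedding model as the scenarios. The field names and the in-memory list are illustrative; Deeplake is the actual store in the project.

    # Illustrative HITL record: feedback plus its embedding, appended to the store.
    from datetime import datetime, timezone

    feedback_store: list[dict] = []  # stand-in for the Deeplake vector store

    def record_feedback(case_id: str, suggestion: str, nurse_comment: str,
                        accepted: bool, embedding: list[float]) -> None:
        feedback_store.append({
            "case_id": case_id,
            "suggestion": suggestion,
            "comment": nurse_comment,
            "accepted": accepted,
            "embedding": embedding,  # produced by the same model as the scenario embeddings
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })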

Tech Stack:

RAG, Human-in-the-loop, GPT-4, Vector Database, Data Lake, Python, +1 more

Multi-modal AI for Autism Analysis
Research

GWU Research Lab · 2023 - 2024

I was responsible for the entire audio processing pipeline. My primary role was to extract and analyze audio from raw video footage, tackling the significant challenge of low-quality Korean-language recordings. I developed a noise-reduction process using spectral subtraction, plus filtering logic to isolate the child's voice from background noise and parental speech, significantly improving the quality of the data fed to the model.
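
A compact sketch of the spectral-subtraction step: estimate the noise magnitude from a noise-only lead-in, subtract it from each frame, floor at zero, and resynthesize with the original phase. The frame parameters are assumptions, and the child-versus-parent voice filtering is not shown.

    # Illustrative spectral subtraction with SciPy.
    import numpy as np
    from scipy.signal import stft, istft

    def spectral_subtract(audio: np.ndarray, sr: int, noise_seconds: float = 0.5) -> np.ndarray:
        _, _, spec = stft(audio, fs=sr, nperseg=1024)
        hop = 1024 // 2
        noise_frames = max(1, int(noise_seconds * sr / hop))   # noise-only lead-in
        noise_mag = np.abs(spec[:, :noise_frames]).mean(axis=1, keepdims=True)
        clean_mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # subtract, floor at zero
        clean_spec = clean_mag * np.exp(1j * np.angle(spec))   # keep the original phase
        _, denoised = istft(clean_spec, fs=sr, nperseg=1024)
        return denoised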

Impact:

This work was critical for enabling the analysis of 'in-the-wild' videos, a key goal of our research. By successfully processing the audio data, I helped create a system that provides objective, data-driven insights to support clinicians, making behavioral analysis more efficient and accessible.

Tech Stack:

Audio Processing, Noise Reduction, Spectral Subtraction, Korean NLP, Speech Recognition, Python, +1 more

Private LLM with RAG
Completed

Atos Zdata · 2023

Developed a private LLM with RAG using LangChain and vector databases (FAISS, Qdrant) to support Q&A, summarization, and enterprise document retrieval. Built an auto-updating vector index that detects document changes in real time and compared LLMs (Llama-2, Falcon, GPT4ALL) for accuracy and latency.
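
A sketch of the change-detection idea behind the auto-updating index: hash each document, re-embed only files whose hash changed, and add the new vectors to a FAISS index. The embed() call is a placeholder, and the production pipeline used LangChain with FAISS/Qdrant rather than this bare loop.

    # Illustrative hash-based refresh of a FAISS index.
    import hashlib
    from pathlib import Path

    import faiss
    import numpy as np

    DIM = 384                              # dimension of whatever embedding model is used
    index = faiss.IndexFlatIP(DIM)
    seen_hashes: dict[str, str] = {}       # path -> content hash from the last pass

    def embed(text: str) -> np.ndarray:
        raise NotImplementedError("plug in the embedding model here")

    def refresh(doc_dir: str) -> None:
        for path in Path(doc_dir).rglob("*.txt"):
            text = path.read_text(errors="ignore")
            digest = hashlib.sha256(text.encode()).hexdigest()
            if seen_hashes.get(str(path)) == digest:
                continue                   # unchanged since the last pass
            seen_hashes[str(path)] = digest
            vector = embed(text).astype(np.float32).reshape(1, DIM)
            faiss.normalize_L2(vector)     # cosine similarity via inner product
            index.add(vector)              # a full version would also drop the stale vector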

Impact:

Enabled automated draft responses for RFP/RFI/SoW workflows and faster retrieval across internal knowledge bases.

Tech Stack:

Python, LangChain, RAG, FAISS, Qdrant, Llama-2, +2 more

Pepper Robot Navigation with HoloLens
Research

GWU Research Lab · 2023

Developed a multimodal AI system enabling the Pepper robot to navigate autonomously. The robot uses Microsoft HoloLens for real-time environment scanning, obstacle detection, and spatial mapping.
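
Conceptually, the spatial map feeds obstacle checks along the lines of the sketch below: project the mapped 3D points into a 2D occupancy grid around the robot. The grid resolution, coordinate frames, and point format are assumptions, not the HoloLens API.

    # Illustrative 2D occupancy grid from spatially mapped points (robot frame, metres).
    import numpy as np

    def occupancy_grid(points_xyz: np.ndarray, cell: float = 0.1,
                       extent: float = 5.0, max_height: float = 1.2) -> np.ndarray:
        size = int(2 * extent / cell)
        grid = np.zeros((size, size), dtype=bool)
        # keep points at obstacle height; ignore the floor and anything above the robot
        mask = (points_xyz[:, 2] > 0.05) & (points_xyz[:, 2] < max_height)
        cells = ((points_xyz[mask, :2] + extent) / cell).astype(int)
        in_bounds = (cells >= 0).all(axis=1) & (cells < size).all(axis=1)
        grid[cells[in_bounds, 0], cells[in_bounds, 1]] = True
        return grid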

Impact:

This research aimed to give Pepper spatial awareness for free movement in new environments, with future goals of recognizing and remembering individuals. An LLM was used for conversational interaction.

Tech Stack:

Microsoft HoloLens, Computer Vision, ROS, Python, Robotics, LLM, +2 more

Pepper Robot AI Integration for Healthcare
Research

GWU Research Lab · 2023

I took over a stalled project that used a traditional NLP model and a Unity 3D avatar. I completely redesigned the system, integrating the GPT API for fluid conversation, OpenAI's Whisper API for robust speech-to-text, and a text-to-speech pipeline for spoken responses. The virtual avatar was replaced with a physical Pepper robot for tangible user interaction.
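
A simplified sketch of the redesigned loop: Whisper transcribes the user's speech, the GPT API generates a reply, and Pepper speaks it through its text-to-speech service. The robot address, model name, and system prompt are placeholders, and it assumes the NAOqi Python SDK (qi) is available.

    # Illustrative listen -> think -> speak loop for the Pepper robot.
    import qi
    from openai import OpenAI

    client = OpenAI()
    session = qi.Session()
    session.connect("tcp://pepper.local:9559")        # placeholder robot address
    tts = session.service("ALTextToSpeech")

    def respond(audio_path: str) -> None:
        with open(audio_path, "rb") as f:             # speech-to-text via Whisper
            heard = client.audio.transcriptions.create(model="whisper-1", file=f).text
        reply = client.chat.completions.create(
            model="gpt-4",                            # placeholder model name
            messages=[
                {"role": "system", "content": "You are a friendly healthcare assistant robot."},
                {"role": "user", "content": heard},
            ],
        ).choices[0].message.content
        tts.say(reply)                                # Pepper's text-to-speech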

Impact:

This overhaul transformed a non-interactive prototype into a successful project. The new system was not only presented at a university poster session but was also significant enough for my supervising professor to present at an academic conference.

Tech Stack:

GPT API, Whisper API, STT/TTS, Pepper Robot, Human-Robot Interaction, Python, +3 more

AI-Driven Defect Detection for Aerospace Composites
Academic

Bauman Moscow State Technical University (Bachelor's Thesis) · 2022

Developed a novel methodology combining Finite Element Analysis (FEA) with Machine Learning to predict structural integrity in aerospace components.

Impact:

Achieved 95-97% predictive accuracy by generating a proprietary dataset from scratch via complex ANSYS simulations.
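
A minimal sketch of the learning stage: fit a classifier on features exported from the ANSYS runs and report held-out accuracy. The CSV schema, feature names, and model choice are placeholders rather than the thesis configuration.

    # Illustrative training on simulation-derived features.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("ansys_runs.csv")                # one row per simulated specimen
    X = df.drop(columns=["defect_present"])
    y = df["defect_present"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)
    model = RandomForestClassifier(n_estimators=300, random_state=42)
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")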

Tech Stack:

ANSYS (FEA), Machine Learning, Computer Vision, Data Generation, Python