Demand Forecasting Marketplace App

Classic Predictive AI
Project Overview
The app is designed to address short-staffing in the hospitality industry during rush hours. Businesses can post short-term jobs (1–6 hours), while helpers receive offers and accept those that fit their schedule. This creates a flexible, on-demand staffing solution that benefits both employers and workers.
My Contributions
I led the development of the AI-powered matching algorithm that drives the platform. The goal was to automatically send the right job offers to the right helpers by predicting whether each helper would accept the job, perform it reliably, and maintain good behavior on the platform (e.g., avoiding cancellations and lateness). The system was designed to fully automate job fulfillment with no human intervention at any stage: none before the stint (no unaccepted offers), during it (no calls from businesses about issues), or after it (positive reviews from employers).
The project began with a deep analysis of user behavior. I segmented users into three main categories based on their activity patterns on the platform and defined appropriate activity windows (e.g., 1-day, 4-day, or weekly) to capture engagement dynamics and distinguish active from inactive users. This allowed me to identify the key drivers of engagement, uncover early signals of churn, and calculate critical business metrics such as LTV, churn, and retention rates. I also analyzed seasonal behavior to better understand demand cycles and user availability (Pic. 1).
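The activity-window segmentation described above can be sketched as follows. This is a minimal illustration, not the production code: the event log, user names, and dates are invented, and the churn proxy (share of users with no activity in the trailing window) is an assumption.

```python
from datetime import date, timedelta

# Hypothetical event log: user id -> dates of platform activity (illustrative data).
events = {
    "helper_1": [date(2021, 3, 1), date(2021, 3, 2), date(2021, 3, 8)],
    "helper_2": [date(2021, 2, 1)],
    "helper_3": [date(2021, 3, 7), date(2021, 3, 9)],
}

def active_users(events, as_of, window_days):
    """Users with at least one event inside the trailing activity window."""
    cutoff = as_of - timedelta(days=window_days)
    return {u for u, dates in events.items() if any(d >= cutoff for d in dates)}

as_of = date(2021, 3, 10)
weekly = active_users(events, as_of, 7)   # weekly window: helpers 1 and 3
daily = active_users(events, as_of, 1)    # 1-day window: only helper 3

# Simple churn proxy (an assumption): users inactive over the weekly window.
churn_rate = 1 - len(weekly) / len(events)
```

Comparing the same user base across several window lengths is what lets short-term engagement dips be separated from genuine churn.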

Next, I designed the matching algorithm with AI-driven elements to optimize how candidates were connected to job offers. The algorithm searched through the user base to find the best matches while applying a constraint of no more than 10 active offers per user. This ensured fairness by limiting bias toward the most active candidates and reduced churn risk among users with negative historical experiences.
Hard filters were applied upfront, while positive behavior and reliability were integrated through machine learning models that scored and ranked users.
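The filter-then-rank pass described above can be sketched as follows. The 10-offer cap comes from the project description; the helper fields (`roles`, `city`, `id`) and the specific filters are illustrative assumptions, and `score` stands in for the ML model's predicted acceptance/reliability probability.

```python
MAX_ACTIVE_OFFERS = 10  # cap stated in the project description

def match_candidates(job, helpers, score, active_offers):
    """Hard-filter helpers for one job, then rank the survivors by model score.

    `score` is any callable returning an acceptance/reliability probability;
    the field names and filter set are hypothetical.
    """
    eligible = [
        h for h in helpers
        if job["role"] in h["roles"]                       # role eligibility
        and job["city"] == h["city"]                       # location filter
        and active_offers.get(h["id"], 0) < MAX_ACTIVE_OFFERS  # fairness cap
    ]
    # ML scoring and ranking: best predicted candidates first.
    return sorted(eligible, key=score, reverse=True)

helpers = [
    {"id": 1, "roles": {"waiter"}, "city": "London", "reliability": 0.9},
    {"id": 2, "roles": {"waiter"}, "city": "London", "reliability": 0.7},
    {"id": 3, "roles": {"chef"},   "city": "London", "reliability": 0.95},
]
job = {"role": "waiter", "city": "London"}
# Helper 3 fails the role filter; helper 2 already holds 10 active offers.
ranked = match_candidates(job, helpers, lambda h: h["reliability"], {2: 10})
```

Applying the cheap hard filters before scoring keeps model inference off candidates who could never receive the offer anyway.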
To further increase efficiency, I implemented dynamic pricing for offers. This prevented unattractive stints from remaining unfilled: if an offer was still unaccepted within 4 days of its start date, its price was automatically adjusted.
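One way such a pricing rule could look is sketched below. Only the 4-day trigger comes from the project description; the upward direction of the adjustment, the 10% maximum uplift, and the linear urgency schedule are all assumptions for illustration.

```python
from datetime import date

def adjusted_rate(base_rate, start_date, today, accepted,
                  uplift=0.10, trigger_days=4):
    """Raise the offered rate when a stint is still unaccepted near its start.

    Hypothetical schedule: no change outside the trigger window, then a
    linearly growing uplift (up to 10%) as the start date approaches.
    """
    days_left = (start_date - today).days
    if accepted or days_left > trigger_days:
        return base_rate
    # Urgency scales from 0 (exactly trigger_days out) to 1 (start day).
    urgency = (trigger_days - max(days_left, 0)) / trigger_days
    return round(base_rate * (1 + uplift * urgency), 2)
```

A smooth schedule like this avoids a single price jump that helpers could learn to wait for.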
On top of that, the Operations team was equipped with a dashboard to intervene when an offer remained unaccepted less than 48 hours before its start (Pic. 2).

Finally, I established a structured machine learning development cycle, covering feature collection, model evaluation, robustness testing, and hyperparameter tuning. To ensure safe deployment, new models were validated through key metric comparisons before going live, followed by ongoing monitoring of performance in production (Pic. 3).
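The pre-deployment gate described above can be sketched as a key-metric comparison against the live model. The metric names and the 1% regression tolerance are illustrative assumptions; the source only states that key metrics were compared before going live.

```python
def safe_to_deploy(candidate_metrics, production_metrics, tolerance=0.01):
    """Pass only if no tracked metric regresses beyond the tolerance.

    Both arguments map metric name -> value; the tolerance is a hypothetical
    guardrail, not a figure from the project.
    """
    return all(
        candidate_metrics[name] >= production_metrics[name] - tolerance
        for name in production_metrics
    )

production = {"auc": 0.81, "offer_acceptance_rate": 0.62}  # illustrative values
ok = safe_to_deploy({"auc": 0.83, "offer_acceptance_rate": 0.615}, production)
blocked = safe_to_deploy({"auc": 0.78, "offer_acceptance_rate": 0.65}, production)
```

Gating on every tracked metric at once prevents a candidate model from trading one business-critical metric away to improve another.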
Stint
Lead Data Scientist
Dec 2020 — Apr 2022