Home

Most AI products try to do things for you. I wanted to build one that helps you figure out what to do for yourself.

What's Next AI started as a class project in Kellogg's AIML-901 AgentOps course, but the problem it solves is one I've carried for years. I have ADHD. I know what it's like to stare at a full to-do list and feel paralyzed — not because you don't know what matters, but because knowing isn't enough. There is a gap between intention and action, and for some of us, it's a chasm.

The question I kept asking was: what kind of tool actually helps with that? Not another productivity app that assumes you'll just "do the thing." Not a chatbot that gives generic advice. Something that understands your cognitive state, your energy, your context — and meets you where you are.

Why I Was the Person to Build This

Before business school, I worked in economic research and B2B SaaS. Those two experiences gave me something specific: a deep respect for how people actually behave versus how we assume they behave.

In economic research, you learn that human decisions are messy, contextual, and shaped by invisible constraints. In B2B SaaS, you learn that products succeed when they reduce friction at the exact moment a user encounters it — not five screens earlier.

What's Next AI sits at the intersection. It's an adaptive cognitive partner — an AI that doesn't just list your tasks, but decomposes large goals into micro-steps, adapts to your energy and clarity throughout the day, and helps you navigate the psychological friction that keeps smart people stuck.

What I Built

The product uses an onboarding flow I call "Growth Mapping" — a conversational survey that captures your work streams, your biological peak hours, the moments where your momentum is most fragile, and the kind of AI voice that actually feels helpful to you (not everyone wants a cheerleader).
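As a rough sketch, the kind of profile Growth Mapping captures could be modeled like this. The field names and values are my own illustration, not the product's actual schema:

```python
from dataclasses import dataclass

@dataclass
class GrowthMapProfile:
    """Hypothetical shape of a Growth Mapping onboarding result."""
    work_streams: list[str]        # ongoing projects or responsibilities
    peak_hours: tuple[int, int]    # biological peak window, e.g. 9am-12pm
    fragile_moments: list[str]     # contexts where momentum tends to break
    preferred_voice: str = "calm"  # the AI tone that actually feels helpful

# Example profile, purely illustrative
profile = GrowthMapProfile(
    work_streams=["thesis", "job search"],
    peak_hours=(9, 12),
    fragile_moments=["after back-to-back meetings", "inbox triage"],
)
```

Everything downstream — tone, timing, how small the next step should be — can then read from one structured profile instead of re-asking the user.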

Behind the scenes, it's built on a Lovable frontend and an n8n backend with OpenAI's API, using a custom persona called "Present" — designed to be calm, grounded, and never performatively enthusiastic. The system maintains session memory so conversations feel continuous, not cold-started every time.
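A minimal sketch of how a persona plus session memory might be assembled into each model call. The persona text and function name are assumptions for illustration; the actual "Present" prompt isn't published here:

```python
# Hypothetical persona prompt -- the real "Present" prompt is not public.
PRESENT_PERSONA = (
    "You are Present: calm, grounded, and never performatively "
    "enthusiastic. Help the user find one small next step."
)

def build_messages(history: list[dict], user_message: str) -> list[dict]:
    """Prepend the persona and carry prior turns, so each request
    continues the session instead of cold-starting the conversation."""
    return (
        [{"role": "system", "content": PRESENT_PERSONA}]
        + history
        + [{"role": "user", "content": user_message}]
    )

# Prior turns stored as session memory
history = [
    {"role": "user", "content": "I can't start my deck."},
    {"role": "assistant", "content": "Open the file. That's the whole step."},
]
messages = build_messages(history, "Okay, it's open. Now what?")
```

The resulting list is what you'd hand to a chat-completions endpoint; the point is that continuity lives in the message construction, not in the model.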

I chose this stack deliberately. Lovable and n8n are low-code tools — I know that. The tradeoff was intentional: I wanted to get to user feedback in days, not months. The goal was to test whether the interaction patterns worked, not to build infrastructure. The technical debt was a feature, not a bug.


My impression of the lake — watercolor. Seeing clearly before you build.

What I Learned

Building What's Next AI reinforced something I already believed but hadn't fully articulated: the hardest part of AI product work isn't the model — it's the judgment layer around it.

Anyone can wire up an API call. The differentiating work is in the decision architecture: in which moments do you intervene? What tone do you use? When do you hold back? I spent more time designing the interaction patterns — the "Two-Path Nudge," the "Vibe Check" micro-check-in, the "Safe Mode" for when someone is in survival mode — than on any technical integration.
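A judgment layer like that can be sketched as a simple router between the patterns named above. The state fields and thresholds here are assumptions of mine, not the product's actual logic:

```python
def choose_pattern(energy: int, clarity: int) -> str:
    """Route to an interaction pattern from a Vibe Check micro-check-in.
    energy/clarity are on an assumed 1-5 self-report scale."""
    if energy <= 2:
        return "safe_mode"       # survival mode: shrink the ask, no pressure
    if clarity <= 2:
        return "two_path_nudge"  # stuck between options: offer exactly two
    return "next_micro_step"     # clear and energized: just hand over a step
```

Trivial as code, but the real work is deciding what those branches should be and when holding back beats intervening.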

That's the kind of AI product person I want to be. Not someone who optimizes token usage, but someone who asks: is this actually helping a real person navigate their real day?

What I'd Do Differently

If I built this again, I wouldn't build it as a standalone app at all.

The core insight of What's Next AI is that overwhelmed people need help at the moment of paralysis — not five minutes later when they've context-switched to a separate tool. The next version wouldn't live on its own website. It would live inside the workflows people are already in: Slack, Gmail, Teams, Notion. Meet people where they are, not where you wish they were.

That's a product lesson, not a technical one. The best AI products don't ask users to adopt a new habit. They dissolve into existing ones. The foundation is built — the interaction patterns, the persona design, the decision architecture — but the delivery mechanism was wrong. The next iteration would be an embedded experience, not a destination.

The thesis still holds: the best AI products don't replace human judgment. They create the conditions for it. I just know more now about where those conditions need to be created.