AI for Entrepreneurs — Part 3
AI Ops, autonomous systems, and multi-project infrastructure — from builder to operator
Part 1 made you a builder. Part 2 made you a shipper. Part 3 makes you an operator. You learn to run multiple projects simultaneously, build shared infrastructure across them, create autonomous agents that work while you sleep, and monitor everything through exception-based dashboards. This is where the portfolio approach compounds — you're not managing one app, you're running a system of apps, agents, and automations that operate independently and only surface when they need your judgment.

PRE-REQUISITE: Completed Part 2, OR demonstrable equivalent (you have a production app with database, auth, tests, CI/CD, and a VPS). Bring your Part 2 project to evolve, or a new problem that requires infrastructure.

OVERT/COVERT: Overt = learn DevOps, multi-project architecture, agent systems, monitoring. Covert = the shift from 'I ship products' to 'I run systems.' Part 1 killed 'I'm not technical.' Part 2 killed 'I'm not a real developer.' Part 3 kills 'I need a team to scale.' The identity shift: they stop thinking they need to hire engineers because they ARE the engineering team — with AI agents as their staff.
What Makes This Program Different
The Third Identity Shift
Part 1 killed 'I'm not technical.' Part 2 killed 'I'm not a real developer.' Part 3 kills 'I need a team to scale.' When they present a system of autonomous agents, shared infrastructure, and self-healing monitoring — they realize they ARE the engineering team, with AI agents as their staff.
Exception-Based Management
The philosophical shift: dashboards surface problems and decision points, not status reports. You don't need to watch everything. You need to know when something needs your judgment. This applies to business management far beyond AI ops.
Demo Day Part 3
This isn't a demo of an app — it's a demo of an operation. Participants present their running systems: what agents work for them, what they monitor, how failures recover automatically. The audience sees autonomous infrastructure, not just a product.
The Live Portal Becomes a Platform
The class portal from Parts 1-2 is now a monorepo with shared packages, an API layer, agent-powered automation, and a monitoring dashboard. It evolved through every stage they learned — from static page to autonomous system.
To Do — Resources Needed
- ☐ Determine if Part 3 requires Part 2 in-person, or if Part 2 Zoom graduates can attend
- ☐ Build the sample monorepo template that participants can fork as a starting point
- ☐ Create the agent-report wrapper script as a distributable tool for participants
- ☐ Write the agent monitoring dashboard template (based on Pulse) as a forkable project
- ☐ Design the self-healer exercise — what's the simplest self-healing agent to build in 30 min?
- ☐ Create the 1Password CLI setup guide (service account provisioning for participants)
- ☐ Write the bootstrap script exercise — what's the minimum viable infra-as-code?
- ☐ Document the email-triggered task execution pipeline as a step-by-step buildable exercise
- ☐ Create the App Store submission walkthrough with real screenshots and common rejection examples
- ☐ Design the '6-month operator's roadmap' template
- ☐ Determine cohort size limit — Part 3 needs more 1-on-1 time, likely 8-10 max
- ☐ Consider adding Tailscale setup as an exercise — mesh VPN between laptop + VPS + phone
- ☐ Add a 'revisiting old projects with new skills' exercise — the pattern Dustin does naturally
- ☐ Consider whether Part 3 needs its own portal or evolves the Part 1/2 portal
Curriculum — Teaching Sequence
Why: You left Part 2 with a production app and a 90-day plan. What happened? What's running? What broke while you weren't looking? What: Quick round-the-room: each person shares their system's status — what they built, what's deployed, what cron jobs are running, what failed. Sets the tone: this room is full of operators now, not beginners. How: Each person shares in 3 minutes. Facilitator maps the room's collective infrastructure on a whiteboard. What If: What if the room you walked into already felt like an engineering team — because everyone's running production systems?
Why: You've been building separate projects in separate repos. That works until you need to share code between them — and then you're copying files, getting out of sync, and fixing the same bug in three places. What: Monorepo = all your projects in one repository with shared code. pnpm workspaces (multiple projects share dependencies). Turborepo (run builds/tests across all projects efficiently). The folder structure: apps/ for deployable projects, packages/ for shared code. Why companies like Google, Facebook, and Vercel use monorepos. How: Demo — restructure two existing projects into a monorepo live. Set up pnpm workspaces and Turborepo. Show shared code flowing between projects. Each person starts restructuring their projects. What If: What if fixing a bug in one place automatically fixed it everywhere — because all your projects share the same code?
Why: You have a contact form in three apps. A design system in four. An API client in five. Without shared packages, every change is multiplied by the number of apps. What: Internal packages — code that lives in your monorepo and is imported by your apps. Types of shared packages: UI components, utilities, API clients, type definitions, configuration. How to structure a package: src/, package.json, exports. Versioning with Changesets. How: Demo — extract a shared component from an existing app into a package, then import it in two different apps. Each person identifies and extracts one shared package from their projects. What If: What if every new app you built started 80% done — because it inherited everything from your shared packages?
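The payoff of a shared package can be shown with a minimal sketch. The package name (`@acme/utils`) and the `formatCurrency` helper are illustrative placeholders, not part of the course materials:

```typescript
// Hypothetical shared utility that would live at packages/utils/src/money.ts
// and be imported by every app as e.g. `import { formatCurrency } from "@acme/utils"`.
function formatCurrency(cents: number, currency = "USD"): string {
  // One implementation, shared everywhere: fix a rounding or locale bug here
  // once, and every app in the monorepo picks it up on the next build.
  return new Intl.NumberFormat("en-US", { style: "currency", currency }).format(
    cents / 100,
  );
}
```

The same extraction pattern applies to UI components, API clients, and type definitions: move the code under packages/, give it an entry point, and import it from each app.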
Why: Your apps all talk to Supabase directly. That works until you need business logic, validation, or integrations that don't belong in the frontend. A shared API gives you one place for all that logic. What: API platform = a central service that your apps call. REST endpoints for common operations. Middleware for auth, logging, rate limiting. Deploying on Vercel as a serverless API. The pattern: apps → your API → database/external services. How: Demo — build a shared API service in the monorepo. Create endpoints that two apps call. Deploy to Vercel. Each person designs and starts building their API layer. What If: What if adding a new app to your portfolio took hours instead of weeks — because the API and shared infrastructure already existed?
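The middleware idea can be sketched without any framework: handlers are plain functions, and cross-cutting concerns like auth wrap them. The request/response shapes and names here are simplified for illustration; a real version would verify the token (e.g. against Supabase) instead of only checking its presence:

```typescript
type Req = { headers: Record<string, string>; body?: unknown };
type Res = { status: number; body: unknown };
type Handler = (req: Req) => Res;

// Auth middleware: reject requests without a bearer token
// before the wrapped handler ever runs.
function withAuth(handler: Handler): Handler {
  return (req) => {
    const auth = req.headers["authorization"] ?? "";
    if (!auth.startsWith("Bearer ")) return { status: 401, body: "unauthorized" };
    return handler(req);
  };
}

const getProfile: Handler = () => ({ status: 200, body: { name: "demo" } });
const protectedProfile = withAuth(getProfile);
```

Logging and rate limiting follow the same shape, so middleware composes: `withLogging(withAuth(handler))`.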
Why: If your VPS died right now, how long would it take to rebuild? If you can't answer 'an hour,' your infrastructure isn't codified. What: Bootstrap scripts — a single script that sets up a fresh server from scratch: packages, tools, configs, cron jobs, environment. Dotfiles repo — track all configuration in git. Idempotent scripts (safe to re-run). The mental model: your server is disposable because you can recreate it from code. How: Demo — walk through a real bootstrap script. Show how to go from fresh VPS to fully configured server in one command. Each person starts writing their own bootstrap script. What If: What if losing a server was a 30-minute inconvenience instead of a catastrophe — because everything is code?
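The core primitive of a bootstrap script is the idempotent operation: safe to re-run because it checks before it changes. A sketch of one such operation, in TypeScript for consistency with the rest of the course (a real bootstrap script would likely be shell); the file path and config line are illustrative:

```typescript
import { readFileSync, writeFileSync, existsSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Ensure a config line exists exactly once. Blindly appending on every run
// would duplicate the line; checking first makes the script idempotent.
function ensureLine(path: string, line: string): boolean {
  const current = existsSync(path) ? readFileSync(path, "utf8") : "";
  if (current.split("\n").includes(line)) return false; // already there, no-op
  const sep = current === "" || current.endsWith("\n") ? "" : "\n";
  writeFileSync(path, current + sep + line + "\n");
  return true; // changed something
}

// Demo against a throwaway file: the second run is a no-op.
const demoPath = join(tmpdir(), "bootstrap-demo.conf");
rmSync(demoPath, { force: true });
const firstRun = ensureLine(demoPath, "PermitRootLogin no");
const secondRun = ensureLine(demoPath, "PermitRootLogin no");
```

A bootstrap script is just a sequence of operations like this one: each checks current state, acts only if needed, and is therefore safe to run on a half-configured server.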
Why: You have API keys in .env files, tokens in config, passwords in notes. One leaked secret can compromise everything. What: 1Password CLI — store secrets in a vault, retrieve at runtime. The pattern: never store secrets in code or git. Environment variables sourced from 1Password. The workflow: op item get 'ServiceName' --fields password. Rotating secrets. Access control. How: Demo — set up 1Password CLI, store an API key, retrieve it in a script, use it in a deployment. Each person migrates at least 3 secrets from hardcoded/.env to 1Password. What If: What if you could share your entire codebase publicly — because no secret was ever stored in it?
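The runtime-secrets pattern can be sketched as a small wrapper around the 1Password CLI. The `op item get … --fields` invocation mirrors the workflow above; the injectable runner and the fake vault are illustrative additions so the lookup can be demonstrated without the `op` binary installed:

```typescript
import { execFileSync } from "node:child_process";

type Runner = (cmd: string, args: string[]) => string;

// Real lookup: shells out to the 1Password CLI.
const opRunner: Runner = (cmd, args) =>
  execFileSync(cmd, args, { encoding: "utf8" });

// Mirrors: op item get "ServiceName" --fields password
// The secret exists only in memory, never in code or git.
function getSecret(item: string, field = "password", run: Runner = opRunner): string {
  return run("op", ["item", "get", item, "--fields", field]).trim();
}

// Fake runner for environments without the op CLI (like this demo):
const fakeVault: Runner = () => "sk-demo-123\n";
```

Usage in a deployment script would be `getSecret("SendGrid")`, with the value passed straight into an environment variable or API client.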
Why: In Part 2 you set up a cron job. In reality, you'll have 20, 30, 50. Managing them becomes its own challenge — which ones are running? Which ones failed? Which ones conflict? What: Crontab safety (never replace, always append). The agent-report wrapper — every cron reports its status. Expected intervals and staleness detection. Organizing crons by project. The crontab as shared infrastructure. Logging and debugging failed crons. How: Demo — show a crontab with 30+ entries, the agent-report wrapper, and a failed cron being debugged. Each person adds 3+ cron jobs for their projects with proper reporting. What If: What if you had 30 automated tasks running 24/7 and you knew the status of every single one?
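The agent-report idea reduces to two small pieces: a wrapper that records every run's outcome, and a staleness check against the expected interval. A minimal sketch, with illustrative field names (the distributable wrapper script may differ):

```typescript
type Report = { agent: string; ok: boolean; ranAt: number; error?: string };

// Every cron job runs through this wrapper, so success AND failure
// both leave a record that the dashboard can read.
async function reportRun(agent: string, task: () => Promise<void>): Promise<Report> {
  try {
    await task();
    return { agent, ok: true, ranAt: Date.now() };
  } catch (e) {
    return { agent, ok: false, ranAt: Date.now(), error: String(e) };
  }
}

// Staleness: an agent expected every `intervalMs` is stale once it has
// missed a full interval plus a 50% grace factor.
function isStale(lastRanAt: number, intervalMs: number, now = Date.now()): boolean {
  return now - lastRanAt > intervalMs * 1.5;
}
```

The crucial point is the staleness check: a crashed cron writes no failure record, so "hasn't reported lately" must itself be detectable.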
Why: Email is the universal interface. Your users have it. Your services use it. Being able to send and receive email from code unlocks entire categories of automation. What: SendGrid — sending email from your apps and agents. Inbound email (SendGrid Parse) — receive email at a custom address and process it in code. MX records and DNS setup. Parsing incoming email: sender, subject, body, attachments. Use cases: notification systems, email-triggered workflows, automated responses. How: Demo — set up SendGrid sending, configure an inbound email address, process an incoming email into a database record. What If: What if you could email your server a task and it would do it — parse the request, execute it, and email you the result?
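The parsing step can be sketched as a pure function from webhook payload to database record. SendGrid's Inbound Parse webhook posts form fields including `from`, `subject`, and `text`; the payload type here is a simplified stand-in for that POST body:

```typescript
type InboundPayload = { from: string; subject: string; text: string };

// Turn an inbound email into a row ready for the database.
function toEmailRecord(p: InboundPayload) {
  // "Jane Doe <jane@example.com>" -> "jane@example.com"
  const match = p.from.match(/<([^>]+)>/);
  return {
    sender: (match ? match[1] : p.from).toLowerCase(),
    subject: p.subject.trim(),
    body: p.text.trim(),
    receivedAt: new Date().toISOString(),
  };
}
```

Normalizing the sender address matters because it is what authorization and routing will key on later in the pipeline.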
Why: Google, Apple, Spotify, GitHub — every platform has an API. OAuth is the key that unlocks them all. Without it, your apps are islands. What: OAuth 2.0 demystified — authorization codes, tokens, refresh tokens, scopes. The flow: user clicks 'Sign in with Google' → redirect → callback → access token → API calls. Implementing OAuth for Google (Gmail, Calendar, Drive), Apple Sign In, and Spotify. Storing and refreshing tokens. How: Demo — implement a full Google OAuth flow live. Show API calls with the resulting token. Each person picks a platform relevant to their project and implements OAuth. What If: What if your app could read someone's Google Calendar, pull their Spotify playlists, or sync their contacts — because you know how to ask permission and connect?
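Step one of the flow, building the authorization URL the user is redirected to, can be sketched in a few lines. The Google endpoint is the real one; the client ID, redirect URI, and scopes are placeholders:

```typescript
function googleAuthUrl(clientId: string, redirectUri: string, scopes: string[]): string {
  const url = new URL("https://accounts.google.com/o/oauth2/v2/auth");
  url.searchParams.set("client_id", clientId);
  url.searchParams.set("redirect_uri", redirectUri);
  url.searchParams.set("response_type", "code"); // ask for an authorization code
  url.searchParams.set("scope", scopes.join(" ")); // scopes are space-separated
  url.searchParams.set("access_type", "offline"); // also request a refresh token
  return url.toString();
}
```

The callback handler then exchanges the returned code for an access token plus (thanks to `access_type=offline`) a refresh token, which is what makes long-lived agent access possible.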
Why: Every real app needs a way for you to see what's happening, manage data, and make decisions. That's an admin panel. What: Admin panel patterns: data tables with search/filter/sort, detail views, action buttons (approve, reject, flag). Building with your existing stack (Next.js + Supabase). Protected routes — admin-only access. Dashboard design: the 5 numbers that matter. How: Demo — build an admin panel for the class portal. Data table, detail view, action button. Each person adds an admin panel to their project. What If: What if you could see everything happening in your app — every user, every transaction, every error — and act on it from one screen?
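The data-table core behind most admin panels is just filter-then-sort over records. A sketch with an illustrative row shape:

```typescript
type Row = { email: string; status: "active" | "flagged"; createdAt: number };

// Search filters the rows; sortBy orders what's left.
function adminQuery(rows: Row[], search: string, sortBy: keyof Row): Row[] {
  const q = search.toLowerCase();
  return rows
    .filter((r) => r.email.toLowerCase().includes(q))
    .sort((a, b) => (a[sortBy] < b[sortBy] ? -1 : a[sortBy] > b[sortBy] ? 1 : 0));
}
```

In the real panel this runs as a Supabase query behind a protected route, but the shape of the interaction (search box, sortable columns, action buttons per row) is the same.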
Why: You've been using AI by asking it questions. An agent takes a goal and figures out the steps itself. It's the difference between driving and telling the car where to go. What: Agent architecture: the think → act → observe → think loop. Task decomposition — breaking a goal into steps. Error recovery — what happens when a step fails. Human-in-the-loop — when to ask for help vs. proceed. Agent types: data enrichment (scrape/process), content creation (write/publish), monitoring (watch/alert), workflow (email → parse → execute → respond). How: Demo — build a data enrichment agent live that scrapes data, processes it with AI, and stores the result. Each person designs an agent for their business. What If: What if you had AI employees that worked 24/7, handled their own errors, and only asked you when they genuinely needed a decision?
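The think → act → observe loop can be sketched in a dozen lines. In a real agent the "think" step would be an LLM call deciding the next action; here it is an injected plain function so the loop itself, including the step cap that triggers human escalation, can be demonstrated:

```typescript
type Thought =
  | { action: "done"; result: string }   // goal reached
  | { action: "step"; input: string };   // one more action to take

async function runAgent(
  goal: string,
  think: (goal: string, observations: string[]) => Thought,
  act: (input: string) => Promise<string>,
  maxSteps = 5,
): Promise<string> {
  const observations: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const thought = think(goal, observations);       // think
    if (thought.action === "done") return thought.result;
    observations.push(await act(thought.input));     // act, then observe
  }
  // Human-in-the-loop: the agent gives up rather than looping forever.
  throw new Error("max steps exceeded — escalate to a human");
}
```

Everything else in agent design (task decomposition, error recovery, asking for help) is a refinement of what `think` returns inside this loop.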
Why: The most powerful interface isn't an app — it's email. If you can email your system a task and get a result back, you've built something anyone can use from anywhere. What: The pipeline: inbound email → parse sender/subject/body → authorize → spawn agent/script → execute → email result back. Task routing — different email addresses for different capabilities. Dialogue — the agent can email back questions, you reply, it continues. Status tracking. How: Demo — email the VPS a task live. Watch it arrive, parse, execute, and respond. Each person designs an email-triggered workflow for their business. What If: What if you could run your business from your phone's email app — because your system understood what you wanted and made it happen?
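Task routing reduces to a lookup: the local part of the inbound address selects the capability. A sketch, with illustrative addresses and handlers:

```typescript
type TaskHandler = (body: string) => string;

// Each address is one capability; a real system would spawn the
// corresponding agent or script and email back its output.
const routes: Record<string, TaskHandler> = {
  deploy: (body) => `deploying: ${body}`,
  report: (body) => `reporting on: ${body}`,
};

function routeEmail(to: string, body: string): string {
  const local = to.split("@")[0].toLowerCase();
  const handler = routes[local];
  if (!handler) return `no capability for address ${to}`; // fail safe, reply anyway
  return handler(body);
}
```

Combined with the inbound-parse step from the email session, this is the whole pipeline skeleton: parse → authorize the sender → route → execute → email the result back.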
Why: You have 20+ agents and cron jobs running. Without a dashboard, you're blind. With the wrong dashboard, you're drowning in noise. What: Building a health monitoring dashboard: agent registry (what should be running), run history (what actually ran), staleness detection (what stopped running), error tracking (what failed). The key metrics: last run time, success rate, average duration, stale threshold. How: Demo — build the monitoring dashboard for the class portal's agents. Register agents, track runs, show status. Each person starts building their own. What If: What if you could glance at one screen and know the health of every automated system in your business?
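The dashboard's core computation joins the registry (what should run) with run history (what did run) to produce a status per agent. A sketch with illustrative types and a 50% grace factor on the stale threshold:

```typescript
type Agent = { name: string; intervalMs: number };
type Run = { agent: string; ok: boolean; ranAt: number };
type Status = "ok" | "failing" | "stale" | "never-ran";

function agentStatus(agent: Agent, runs: Run[], now: number): Status {
  const mine = runs
    .filter((r) => r.agent === agent.name)
    .sort((a, b) => b.ranAt - a.ranAt); // newest first
  if (mine.length === 0) return "never-ran";
  if (now - mine[0].ranAt > agent.intervalMs * 1.5) return "stale";
  return mine[0].ok ? "ok" : "failing";
}
```

Note that "stale" outranks "ok": a job whose last recorded run succeeded but that has since stopped reporting is a problem, not a green light.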
Why: Most dashboards show you everything. That's useless at scale. You don't need to know what's working — you need to know what's broken. What: Exception-based management: the dashboard only surfaces problems and decision points. Green = don't look at it. Yellow = degraded, watch it. Red = broken, act now. Notification hierarchy: dashboard → email → SMS/call based on severity. Designing alert thresholds that prevent noise but catch real problems. The operator's mindset: if you're checking dashboards manually, you haven't automated enough. How: Group exercise — design an exception-based alert hierarchy for your systems. What should trigger what level of notification? What If: What if you never checked a dashboard unless it told you to — and when it did, you knew exactly what needed your attention?
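The notification hierarchy is a severity-to-channel mapping, and the key design decision is that green maps to nothing at all. A minimal sketch with illustrative severity names:

```typescript
type Severity = "info" | "degraded" | "broken";
type Channel = "none" | "email" | "sms";

function channelFor(severity: Severity): Channel {
  switch (severity) {
    case "info":
      return "none"; // green: dashboard only, never interrupt
    case "degraded":
      return "email"; // yellow: watch it, no urgency
    case "broken":
      return "sms"; // red: act now
  }
}
```

The group exercise is essentially filling in this table for your own systems: which failures are truly "broken," and which are "degraded" noise that should never wake you up.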
Why: An alert tells you something broke. A self-healing system fixes it before you even see the alert. What: Self-healing patterns: auto-restart failed agents, retry with backoff, fallback to cached data, circuit breakers. The self-healer agent — monitors other agents and takes corrective action. Crontab reconciler — audits what should be running vs. what is running. When to self-heal vs. when to alert a human. How: Demo — show a self-healing agent detecting a failure and recovering automatically. The crontab reconciler catching a missing cron. Each person designs self-healing rules for their most critical agent. What If: What if your systems fixed their own problems — and you only heard about the ones that genuinely needed a human decision?
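The simplest self-healing pattern above, retry with backoff, can be sketched directly. The delay function is injectable so the demo runs instantly; in production it would be a real sleep:

```typescript
async function withRetry<T>(
  task: () => Promise<T>,
  attempts = 3,
  wait: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await task();
    } catch (e) {
      lastError = e;
      // Exponential backoff: 100ms, 200ms, 400ms ... before the next try.
      if (i < attempts - 1) await wait(100 * 2 ** i);
    }
  }
  // Healing failed: this is the point where a human gets alerted.
  throw lastError;
}
```

Circuit breakers and the self-healer agent are elaborations of the same idea: absorb transient failures automatically, and escalate only when the failure persists.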
Why: You have a mobile app on TestFlight. Getting it to the App Store is a different process with its own rules, requirements, and gotchas. What: The submission process: app metadata, screenshots, privacy policy, review guidelines. Common rejection reasons and how to avoid them. The review timeline. Post-launch: updates, versioning, responding to reviews. Fastlane for automated submissions. How: Demo — walk through a real App Store submission. Show the metadata, screenshots, review process. Discuss real rejections and how they were resolved. What If: What if your app was on the App Store next month — because the submission process stopped being mysterious?
Why: You've learned monorepos, shared packages, APIs, agents, crons, email, OAuth, monitoring, self-healing. Now see it as one system. What: The complete stack: code in monorepo → shared packages serve all apps → API platform handles business logic → VPS runs agents and crons → monitoring dashboard tracks everything → self-healer recovers from failures → exception alerts reach you only when needed. The operator's daily routine: check the dashboard (30 seconds), address exceptions (if any), build new things (the rest of the day). How: Demo — walk through the entire stack running live. Show a change flowing from code to deploy to agent execution to monitoring to self-healing. Map the data flow on a whiteboard. What If: What if your business infrastructure ran itself — and you spent your time building new things instead of maintaining old ones?