Why Hannah Doesn't Need Your GPS Coordinates
In the current landscape of AI fitness apps, the standard operating procedure is "upload everything." GPS tracks, heart rate variability, sleep stages, and location history are vacuumed up into the cloud to train models. We believe that's fundamentally unnecessary. You shouldn't have to trade your privacy for performance.
When we started designing Flux, we faced a contradiction. We wanted to build the world's most intelligent running coach—one that understands your entire career, not just today's workout. But we also operate under a strict ethos: data should belong to the athlete.
The industry told us we had to choose. To get "smart" AI, we needed to build a massive data lake of user information. To have privacy, we had to offer a dumb, offline logbook.
We rejected that binary. Instead, we architected a third way. We built Hannah.
Named after a loyal training partner (my dog), Hannah is designed to be relentlessly supportive and protective, specifically of your data. She flips the standard data model upside down, starting from a simple question: can we build a world-class AI coach that knows everything about your running, but nothing about you?
The Data Vampire Problem
Modern fitness apps are often surveillance engines in disguise. To give you a generic "Good job!" message after a 5K, many apps require you to sync your entire health history to their cloud. This data is often sold to third parties, used to train proprietary models, or simply stored indefinitely on servers that are vulnerable to breaches.
The excuse is always "context." Developers claim they need your data to personalize your experience. And to an extent, they are right. If Hannah doesn't know you ran a marathon last year, she gives terrible advice.
But there is a massive difference between context and surveillance.
- Context is knowing you ran 50 miles last week and your legs are tired.
- Surveillance is knowing exactly where you ran, who you ran with, and what time you left your house.
We realized that effective coaching relies on the former, not the latter. This realization led to our core architectural innovation: Hierarchical Context.
The 80/20 Rule of Training Context
Most developers assume that for an AI to be smart, it needs all the data. This is false. We analyzed how human coaches actually work. If you hired an elite coach today, they wouldn't ask for the GPS coordinates of your easy run from three years ago. They wouldn't care that you ran down Main Street instead of First Avenue.
They would look for patterns. Hannah operates on what we call the 80/20 rule of context:
- Recent History (80% value): What have you done in the last 12 weeks? Are you fatigued? This requires high fidelity.
- Long-term Patterns (20% value): Do you get injured every time you hit 50 miles per week? Do you race better in the spring? This requires deep history, but low fidelity.
We realized that Hannah doesn't need to memorize your history; she needs to understand your trajectory.
- Tier 1 (daily detail): pace, distance, heart rate, acute load.
- Tier 2 (weekly rollups): volume, key workouts, injury flags.
- Tier 3 (career profile): PRs, yearly volume, long-term trends.
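To make those tiers concrete, here is a minimal sketch of what each level might carry. The type and field names (DailyDetail, WeeklySummary, YearSummary, acuteLoad, and so on) are illustrative, not Flux's actual data model; note that nothing in these shapes has room for a GPS track, a start time, or a location.

```swift
/// Tier 1: high-fidelity detail, roughly the last 90 days.
struct DailyDetail {
    let distanceMiles: Double
    let paceSecondsPerMile: Double
    let averageHeartRate: Int?
    let acuteLoad: Double                    // e.g. a rolling 7-day training stress score
}

/// Tier 2: weekly rollups for the current training block.
struct WeeklySummary {
    let totalMiles: Double
    let keyWorkouts: [String]                // "6 x 800m at 5K effort", ...
    let injuryFlagged: Bool
}

/// Tier 3: one compact record per year of your career.
struct YearSummary {
    let year: Int
    let totalMiles: Double
    let personalRecords: [String: String]    // "10K": "42:30"
    let injuries: [String]                   // "IT Band Syndrome (March)"
}
```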
How It Works: The Tiers
To solve this, we built a system called Hierarchical Context. Instead of dumping a raw database into the LLM context window, we aggregate data on your device before it ever touches an API. This process happens entirely on your iPhone using Apple's Neural Engine and SwiftData.
Tier 1: Recent Detail (The Micro View)
This is the "high fidelity" zone covering the last 90 days. Hannah sees specific workouts, paces, and daily load. She needs this to adjust your schedule for next week. If you ask, "Why am I tired today?", Hannah needs to know that you ran intervals yesterday.
However, even here, we strip data. We send "5 miles at 8:00 pace," but we do not send the GPS track, the start time, or the location. The "where" and "when" are irrelevant to the physiology of the "what."
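As a rough sketch of that stripping step (the RawRun and RunSummary names and the pace threshold are hypothetical, chosen for illustration), the summary keeps the physiology and simply never reads the identifying fields:

```swift
import Foundation

// What the app records locally: detailed and identifying.
struct RawRun {
    let startDate: Date
    let gpsTrack: [(latitude: Double, longitude: Double)]
    let distanceMiles: Double
    let paceSecondsPerMile: Double
}

// What could leave the device: physiology only.
struct RunSummary: Codable {
    let distanceMiles: Double
    let pace: String     // "8:00 /mi"
    let effort: String   // "easy", "moderate", "hard"
}

func summarize(_ run: RawRun) -> RunSummary {
    let minutes = Int(run.paceSecondsPerMile) / 60
    let seconds = Int(run.paceSecondsPerMile) % 60
    // startDate and gpsTrack are intentionally never read here.
    return RunSummary(
        distanceMiles: run.distanceMiles,
        pace: String(format: "%d:%02d /mi", minutes, seconds),
        effort: run.paceSecondsPerMile < 450 ? "hard" : "moderate"  // toy effort heuristic
    )
}
```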
Tier 3: Career Profile (The Macro View)
This is where the magic happens. For data older than a year, we aggressively compress it. Hannah doesn't see "Run on Jan 12, 2021." She sees "2021 Summary: 1,200 miles total, Peak 10K time: 42:30, Injury: IT Band Syndrome in March."
This allows Hannah to say, "You historically peak well off of 40 miles per week, but you tend to get injured when you spike volume in the spring." She can give you career-level advice without holding a career-level dossier on your personal movements.
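A sketch of that compression, under the same caveat that the names and string format are illustrative: old runs collapse into one line per year, and only those lines are ever shared.

```swift
import Foundation

struct OldRun {
    let date: Date
    let distanceMiles: Double
}

/// Collapses historical runs into one Tier 3 summary string per year.
func careerProfile(runs: [OldRun], injuriesByYear: [Int: [String]]) -> [String] {
    let byYear = Dictionary(grouping: runs) {
        Calendar.current.component(.year, from: $0.date)
    }
    return byYear.keys.sorted().map { year in
        let miles = byYear[year]!.reduce(0) { $0 + $1.distanceMiles }
        let injuries = injuriesByYear[year]?.joined(separator: ", ") ?? "none"
        return "\(year) Summary: \(Int(miles)) miles total, injuries: \(injuries)"
    }
}
```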
Privacy by Design: The "Privacy Dial"
We believe privacy is a user experience feature, not just a legal checkbox. It shouldn't be buried in a EULA. It should be a button you can press. That’s why Flux includes a Privacy Level setting that physically alters the data payload sent to Hannah.
This isn't a "request" to our servers to ignore data. It is a hard switch in the local code that prevents the data from ever leaving your device.
- Minimal: Sends only the last 30 days of data. No history. No pattern recognition. Maximum privacy. Good for quick, tactical questions.
- Balanced (default): 12 months of history + key injury patterns. The sweet spot. Hannah knows your season, but not your life story.
- Maximum: Full career context (24 months of detail + lifetime PRs). For elite personalization, where every historical trend matters.
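In code, a dial like this can be as simple as an enum that decides how far back the local query is allowed to look. This is a sketch, not the shipping implementation; the PrivacyLevel name, the windows, and the helpers are assumptions based on the level descriptions above.

```swift
import Foundation

enum PrivacyLevel {
    case minimal, balanced, maximum

    /// How far back the on-device query may look.
    var historyWindow: DateComponents {
        switch self {
        case .minimal:  return DateComponents(day: -30)
        case .balanced: return DateComponents(month: -12)
        case .maximum:  return DateComponents(month: -24)
        }
    }

    /// Whether career-level (Tier 3) summaries are included at all.
    var includesCareerProfile: Bool {
        switch self {
        case .minimal:            return false
        case .balanced, .maximum: return true
        }
    }
}

// Anything older than this cutoff is never queried, so it can never be sent.
let level: PrivacyLevel = .balanced
let cutoff = Calendar.current.date(byAdding: level.historyWindow, to: .now)!
```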
We built a live demonstration of this logic. In the widget below, we've simulated a user profile for "Alex" who has some injury history. Try asking advice like "Should I do speedwork today?" and toggle the privacy levels. You will see exactly what Hannah "sees" and how her advice changes.
[Interactive demo: Meet Hannah, the privacy demo. See how Hannah's advice changes when you adjust the Privacy Dial. Simulated user: Alex, recovering from shin splints.]
Under the Hood: Local-First Engineering
How do we actually achieve this? It starts with SwiftData. Unlike many apps that are essentially web browsers wrapped in an app icon, Flux is a true native iOS application.
Your workouts, journals, and fitness markers live in an encrypted SQL-backed database on your iPhone. When you open the app, you aren't waiting for a server in Virginia to tell you what your run schedule is. It's already there.
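Here is a minimal sketch of what that local-first layer looks like with SwiftData. The Workout model and its fields are simplified stand-ins for Flux's actual schema; the point is that the fetch runs against the on-device store, with no network call anywhere in the path.

```swift
import Foundation
import SwiftData

@Model
final class Workout {
    var date: Date
    var distanceMiles: Double
    var paceSecondsPerMile: Double

    init(date: Date, distanceMiles: Double, paceSecondsPerMile: Double) {
        self.date = date
        self.distanceMiles = distanceMiles
        self.paceSecondsPerMile = paceSecondsPerMile
    }
}

/// Fetches the last 90 days of workouts straight from the local store.
func recentWorkouts(in context: ModelContext) throws -> [Workout] {
    let cutoff = Calendar.current.date(byAdding: .day, value: -90, to: .now)!
    let predicate = #Predicate<Workout> { $0.date >= cutoff }
    let descriptor = FetchDescriptor<Workout>(
        predicate: predicate,
        sortBy: [SortDescriptor(\Workout.date, order: .reverse)]
    )
    return try context.fetch(descriptor)
}
```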
The Anonymous Context Packet
When you ask Hannah a question, the following sequence occurs in milliseconds (a simplified sketch follows the list):
- Local Query: The app queries your local SwiftData store based on your selected Privacy Level.
- Aggregation: The app summarizes this data. It converts "Run at 5:00 PM at Green Lake" into "Run: 5 miles, Moderate Effort."
- Sanitization: A local filter runs through the text to remove names, addresses, or identifiable markers.
- Transmission: This anonymous "Context Packet" is sent to the LLM (via Anthropic's Claude API) along with your question.
- Destruction: The response is generated and sent back. The context is discarded by the provider per our zero-retention agreement.
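To make steps 1 through 4 concrete, here is a heavily simplified sketch. The ContextPacket shape and the sanitizer patterns are illustrative only; the real filter is more thorough, and the actual request to the model provider is omitted rather than guessed at.

```swift
import Foundation

// Everything Hannah receives fits in a small, anonymous packet of text.
struct ContextPacket: Codable {
    let recentRuns: [String]    // "Run: 5 miles, Moderate Effort"
    let careerNotes: [String]   // "2021 Summary: 1,200 miles total, injuries: IT Band Syndrome"
    let question: String
}

/// Toy sanitizer: strips clock times and "at <Place>" phrases before anything leaves the device.
func sanitize(_ text: String) -> String {
    let patterns = [
        #"\b\d{1,2}:\d{2}\s?(AM|PM)\b"#,               // "5:00 PM"
        #"\bat [A-Z][A-Za-z]+( [A-Z][A-Za-z]+)*\b"#    // "at Green Lake"
    ]
    return patterns.reduce(text) { partial, pattern in
        partial.replacingOccurrences(of: pattern, with: "", options: .regularExpression)
    }
}

func buildPacket(recent: [String], career: [String], question: String) -> ContextPacket {
    ContextPacket(
        recentRuns: recent.map(sanitize),
        careerNotes: career.map(sanitize),
        question: sanitize(question)
    )
}
```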
"We don't train our models on your runs. We don't sell your location data. We just built a really good coach."
Safety Protocols & Medical Boundaries
Privacy is one pillar of trust; safety is the other. AI models are prone to hallucination, and in a fitness context, bad advice can lead to injury. We have implemented rigorous "Client-Side Guardrails" to prevent Hannah from acting like a doctor.
Eating Disorder Detection
Running has a complicated relationship with body image. We hard-coded safety filters that detect language associated with eating disorders or exercise compulsion. If a user asks, "How many miles do I need to run to burn off a slice of pizza?", Hannah is instructed to reject the premise of the question. She will pivot to fueling for performance rather than punishment.
The Medical Red Line
If you report "sharp pain" or "chest pressure," Hannah shuts down coaching immediately and triggers a hard-coded response directing you to medical professionals. We do not allow the LLM to "hallucinate" physical therapy advice. She is a coach, not a clinician.
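Both guardrails boil down to hard-coded checks that run on-device before any prompt is sent. The Guardrail type and the phrase lists below are illustrative; the real lists and responses are more carefully curated.

```swift
import Foundation

enum Guardrail {
    case medicalRedLine     // stop coaching, direct to a professional
    case disorderedEating   // reject the premise, pivot to fueling for performance
}

/// Runs locally before any prompt leaves the device; the phrases here are examples only.
func guardrailCheck(_ message: String) -> Guardrail? {
    let text = message.lowercased()

    let medicalFlags = ["sharp pain", "chest pressure", "chest pain", "numbness"]
    if medicalFlags.contains(where: { text.contains($0) }) {
        return .medicalRedLine
    }

    let compulsionFlags = ["burn off", "earn my food", "punish myself"]
    if compulsionFlags.contains(where: { text.contains($0) }) {
        return .disorderedEating
    }

    return nil
}
```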
Design as Trust
Finally, we believe that design is a function of trust. When an app is cluttered with ads, pop-ups, and "share this" buttons, it signals that the user is the product.
Our aesthetic—inspired by the technical minimalism of brands like Norda and Raide—is intentional. The "Liquid Glass" UI, the matte finishes, and the absence of social feeds are designed to create a space of focus. It is a tool for you, not a billboard for us.
The Future of Private AI
We are betting on a future where "Private AI" isn't a niche luxury, but the standard. You shouldn't have to choose between a smart coach and a private life.
The technology exists to build brilliant, helpful, and deeply personalized tools that respect human dignity. It just requires developers to care enough to build it that way.
We are currently finalizing the beta for the new Flux. If you are interested in testing a training app that respects your data as much as it respects your PRs, we invite you to join us.