iOS / Mobile · 2024

Bean Dialer

A coffee app that replaces the guesswork with data. Scan the bag, get a grind setting, log the cup, adjust from signal instead of vibes.

Why this exists

Home espresso is a feedback loop with no instrumentation. You grind, you pull, you taste, you adjust. The problem is that the adjust step is entirely vibes, and each iteration costs a 30-second shot. By the time you've dialed in a new bag of beans, you've pulled four bad shots and you're halfway through a second cup you didn't want.

Specialty roasters solve half the problem by printing suggested brew parameters on the bag. But those are suggestions from their equipment. Your grinder steps differently. Your basket is a different size. Your water is softer or harder. The printed recipe is a starting point, not an answer.

Bean Dialer closes the loop. Scan the bag and the app pulls in the roaster's recipe, translates it for your specific grinder and setup, logs every pour, and adjusts the recommended grind setting based on what you told it about the last shot. It turns dialing in coffee from guesswork into signal.

The flow

Step 1: Scan bag (vision → JSON recipe)
Step 2: Pick grinder (one-time calibration)
Step 3: Translate recipe (per-grinder step number)
Step 4: Pull shot (weigh in / weigh out / time)
Step 5: Log + adjust (taste note → next step)

One continuous loop. Scan once per bag, then log each shot to refine the recommendation until the bag tastes right.
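The log-and-adjust step can be sketched as a simple rule. This is my hypothetical version, not the shipped logic, and it assumes a sour shot is under-extracted (go one step finer), a bitter shot is over-extracted (go one step coarser), and lower step numbers mean finer grinds:

```swift
// Hypothetical adjustment rule for step 5 of the loop.
enum TasteNote { case sour, balanced, bitter }

func nextStep(after current: Int, taste: TasteNote,
              range: ClosedRange<Int>) -> Int {
    switch taste {
    case .sour:     return max(range.lowerBound, current - 1)  // grind finer
    case .balanced: return current                             // keep it
    case .bitter:   return min(range.upperBound, current + 1)  // grind coarser
    }
}

let next = nextStep(after: 14, taste: .sour, range: 1...40)    // → 13
```

One step at a time is deliberate: a stepped grinder can't split the difference, and overshooting wastes a shot.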

How bag-scanning works

The first hard problem: going from “photo of a bag of coffee” to “structured recipe data.” Roaster packaging varies enormously. Some bags print recipes as tidy tables. Others use free-form prose. Some roasters don't print a recipe at all and just list tasting notes.

The pipeline uses a vision-language model (initially Claude Haiku via the API, now a local llava checkpoint for privacy) with a strict extraction prompt. The model extracts what it can find — origin, process, roast date, weight, suggested dose, suggested yield, suggested brew time — into a typed JSON schema. Missing fields come back as null, and the app falls back to its default database of known-good starting recipes indexed by origin + process + roast level.
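A minimal sketch of what that typed schema could look like as a Codable struct (field names are illustrative, not the shipped schema); every field the model can't find decodes as nil:

```swift
import Foundation

// Hypothetical extraction schema for a scanned bag.
struct BagScan: Codable {
    var origin: String?
    var process: String?              // "washed", "natural", "honey", ...
    var roastDate: String?            // date string as printed on the bag
    var weightGrams: Double?
    var suggestedDoseGrams: Double?
    var suggestedYieldGrams: Double?
    var suggestedBrewTimeSeconds: Double?
}

// The extraction prompt demands strict JSON, so parsing is one decode call.
let json = #"""
{"origin": "Huila, Colombia", "process": "washed", "roastDate": null,
 "weightGrams": 250, "suggestedDoseGrams": 18, "suggestedYieldGrams": 36,
 "suggestedBrewTimeSeconds": 28}
"""#
let scan = try! JSONDecoder().decode(BagScan.self, from: Data(json.utf8))
// scan.roastDate is nil → fall back to the default recipe database.
```

Making every field optional is what lets the fallback work: a bag with nothing but tasting notes still produces a valid (mostly-nil) scan instead of a parse failure.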

The single biggest reliability win was showing the model examples of the wildly different bag formats inside the prompt. Few-shot examples beat any amount of prompt tweaking. The current prompt includes five example bag photos and their corresponding JSON outputs; that one change moved extraction accuracy from ~70% to ~95% on my test set.

The translation layer

Here is where most coffee apps fall over. The roaster says “grind medium-fine.” Your grinder has numbered stepped settings from 1 to 40. Which step is medium-fine on your grinder?

The translation layer is a per-grinder calibration. When you first set up the app, you pick your grinder from a list (or add a new one). Each grinder profile stores the burr geometry type (flat, conical, or hybrid) and the calibrated fineness curve — a mapping from step number to a normalized fineness score on a 0-to-100 scale. This was the slow part: I brewed through a bag of known-stable beans at every step on my own grinder and hand-calibrated the curve. The community crowdsources curves for popular grinders now.
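One plausible representation of that calibrated curve is a set of sparse (step, fineness) anchor points with linear interpolation between them. This is a sketch under that assumption, with invented anchor values:

```swift
import Foundation

// Hypothetical calibration curve: anchor points on the 0–100 fineness
// scale, linearly interpolated. Real anchors come from brewing at each step.
struct FinenessCurve {
    let points: [(step: Double, fineness: Double)]   // sorted by step number

    func fineness(atStep step: Double) -> Double {
        guard let first = points.first, let last = points.last else { return 0 }
        if step <= first.step { return first.fineness }
        if step >= last.step { return last.fineness }
        for (lo, hi) in zip(points, points.dropFirst()) where step <= hi.step {
            let t = (step - lo.step) / (hi.step - lo.step)
            return lo.fineness + t * (hi.fineness - lo.fineness)
        }
        return last.fineness
    }
}

// Toy curve for a 40-step grinder.
let curve = FinenessCurve(points: [(step: 5, fineness: 0),
                                   (step: 20, fineness: 40),
                                   (step: 40, fineness: 100)])
let f = curve.fineness(atStep: 14)   // 0.6 of the way from 0 to 40 → 24
```

Sparse anchors matter in practice: nobody crowdsourcing a curve wants to brew at all 40 steps, and interpolation fills the gaps.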

| Grinder             | Burr geometry   | Espresso range | Pour-over range |
|---------------------|-----------------|----------------|-----------------|
| Niche Zero          | 63mm conical    | 12–16          | 18–24           |
| DF64 (SSP MP burrs) | 64mm flat       | 1.8–2.4        | 3.5–4.5         |
| 1Zpresso K-Pro      | Hand / conical  | 1.5–2.0        | 2.8–3.5         |
| Comandante C40      | Hand / conical  | 14–18          | 24–32           |

Example of the shipped grinder-step ranges. Each grinder profile also stores a calibrated fineness curve; these are the rough working ranges.

With a calibrated curve, the app translates the roaster's recipe (“aim for a fineness of about 42 for this pour-over”) into the specific step number for your specific grinder. You click-click-click the grinder to that step and pull the shot. No vibes required.
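The translation itself is then an inverse lookup: scan the grinder's discrete steps and pick the one whose calibrated fineness sits closest to the target. A sketch, using an invented linear toy curve in place of a real calibration:

```swift
import Foundation

// Hypothetical inverse lookup from target fineness to a discrete step.
func closestStep(toFineness target: Double,
                 in steps: ClosedRange<Int>,
                 curve: (Double) -> Double) -> Int {
    steps.min { abs(curve(Double($0)) - target) < abs(curve(Double($1)) - target) }!
}

// Toy linear curve for a 1–40 stepped grinder (step 1 = fineness 0, step 40 = 100).
let toyCurve: (Double) -> Double = { step in (step - 1) / 39 * 100 }
let recommended = closestStep(toFineness: 42, in: 1...40, curve: toyCurve)
// recommended == 17: step 17 sits at fineness ~41, the nearest step to 42.
```

Snapping to the nearest step is the honest answer for stepped grinders; a stepless grinder would return the interpolated position directly instead.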

The counter-intuitive detail

The feature I thought would matter most — the recommendation engine that adjusts grind based on shot outcomes — turned out to be less valuable than the feature I almost cut: just logging every shot. The log itself does most of the work. Being able to scroll back and see “three days ago I tried step 14 with this dose and it was sour” is more useful than any algorithm telling me to try step 15 next, because my palate calibrates to my own history faster than the app can calibrate to my palate.

The generalizable lesson: for personal-use tools, the logging is often the product. The clever inference layer you want to build on top is downstream of the data you accumulate, and half the time the logging alone is enough. Resist the urge to build the clever layer before you have six months of logged data to feed it.

Stack and honest scope

Bean Dialer is a SwiftUI iOS app with a SwiftData backing store for shots, bags, and grinder profiles. The bag-scanning pipeline calls either a local on-device vision model via Core ML or the Anthropic API when the user opts in for better accuracy. The grinder curves ship as a static JSON bundle inside the app, so the scan flow works offline for anyone using a supported grinder.
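A plausible shape for that static bundle, with a decode sketch; the format and field names here are my guess, not the shipped file:

```swift
import Foundation

// Hypothetical on-disk format for the bundled grinder curves.
struct GrinderProfile: Codable {
    var name: String
    var burrGeometry: String                 // "flat", "conical", "hybrid"
    var curve: [CurvePoint]                  // step → fineness anchor points

    struct CurvePoint: Codable {
        var step: Double
        var fineness: Double                 // normalized 0–100 scale
    }
}

let bundleJSON = #"""
[{"name": "Niche Zero", "burrGeometry": "conical",
  "curve": [{"step": 5, "fineness": 0}, {"step": 40, "fineness": 100}]}]
"""#
let profiles = try! JSONDecoder().decode([GrinderProfile].self,
                                         from: Data(bundleJSON.utf8))
// profiles[0] can now back the translation layer fully offline.
```

Shipping the curves as data rather than code also makes the crowdsourced-curve story cheap: a new grinder is a pull request against a JSON file, not an app update to the logic.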

It is not in the App Store. It probably will not be. It is a one-person tool, and the one person is me. If the idea is interesting enough that you want to build your own, the three things worth copying are: the bag-extraction schema, the per-grinder calibration curve, and the decision to make shot logging central rather than peripheral.