The OAuth Pipeline That Almost Broke Me (But Didn't)
Project: Resume LLM Site
Week: March 1-7, 2026
The Goal: End-to-end Google Drive integration for a resume analysis tool
The Setup
I wanted to test something simple: extract a movie watchlist from a Google Doc and turn it into a formatted spreadsheet. Should be trivial, right? It's 2026. OAuth is solved. Libraries exist.
What I didn't account for was the headless server problem.
The First Attempt: Service Accounts
My initial plan was clean: use a Google Service Account. No user consent flows, no refresh tokens, just a JSON key and go.
# The dream
gog sheets create "Team A+J Movies"
gog sheets append [...data]
It worked... until I needed to access documents shared with me, not with the service account. The service account exists in its own Google Workspace dimension: unless you explicitly share every document with its client_email address, it can't see your personal docs.
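For the record, the wiring really is that simple, which is why the limitation is easy to miss. A minimal sketch of the setup; GOOGLE_APPLICATION_CREDENTIALS is Google's standard convention for pointing tools at a key, but whether gog honors it, and the key path itself, are my assumptions:
# Google's Application Default Credentials convention
# (whether gog reads this env var is an assumption on my part)
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/.keys/resume-bot-sa.json"
# The service account authenticates as itself: its identity is the
# client_email inside that JSON key, not your Google account
grep client_email "$HOME/.keys/resume-bot-sa.json"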
Lesson #1: Service accounts are great for backend-to-backend. They're terrible for "act on behalf of a human."
The Second Attempt: OAuth with System Keychain
Switched to full OAuth: install the gog CLI, authenticate once, store tokens in the system keychain. Works perfectly on my laptop.
Then I moved to the Raspberry Pi.
The Pi is headless. No GUI. And my automated agent sessions don't have access to macOS Keychain, GNOME Keyring, or whatever else a desktop environment provides.
Error: no TTY available for keyring password prompt
Lesson #2: Headless servers and system keychains are mortal enemies.
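There's a generic sanity check here that isn't gog-specific: ask the D-Bus session bus whether a Secret Service keyring is even registered. Over a bare SSH session this errors out or comes back empty, which is exactly the problem:
# Is any Secret Service (GNOME Keyring, KWallet...) on the session bus?
# Headless over SSH, there's usually no session bus to reach at all
dbus-send --session --print-reply --dest=org.freedesktop.DBus \
  /org/freedesktop/DBus org.freedesktop.DBus.ListNames | grep secrets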
The Pivot: File-Based Keyring
The solution felt wrong but worked immediately: file-based keyring storage with an environment variable password.
export GOG_KEYRING_PASSWORD="openclaw-auth-2024"
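To make this survive reboots and my agent's non-interactive sessions, the variable has to live somewhere every shell sources, and the secrets should be owner-readable only. A sketch; the keyring file location is a guess, so check where your build of gog actually writes it:
# Persist the password for login shells and agent sessions alike
echo 'export GOG_KEYRING_PASSWORD="openclaw-auth-2024"' >> ~/.profile
chmod 600 ~/.profile
# Lock down the encrypted keyring file itself (path is an assumption)
chmod 600 ~/.config/gog/keyring*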
Security folks will cringe. The password is in an env var. But here's the thing: this isn't a production API serving millions of users. This is my personal automation, running on my hardware, accessing my data.
The threat model is different. The convenience gain is massive.
The Token Refresh Problem
OAuth access tokens expire after about an hour. Refresh tokens are supposed to be long-lived, but Google's documentation is... optimistic about "long": refresh tokens for apps still in "testing" status on the consent screen die after seven days, and even published apps can lose a refresh token that goes unused for six months.
Solution: A daily cron job that simply touches the Gmail API.
# Every day at 6 AM
0 6 * * * gog gmail search "test" --limit 1
If the token is stale, the refresh happens automatically. If it's fresh, the call is a harmless no-op. Either way the refresh token gets exercised daily, so it can't expire from disuse. The query is cheap. The peace of mind is priceless.
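One cron gotcha worth spelling out: cron doesn't source your shell profile, so the keyring password has to be set inside the crontab as well (the log path is purely illustrative):
# In crontab -e on the Pi: plain VAR=value lines work in Vixie/ISC cron
GOG_KEYRING_PASSWORD=openclaw-auth-2024
0 6 * * * gog gmail search "test" --limit 1 >> /home/pi/gog-refresh.log 2>&1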
The Win
End result?
- ✅ 4 spreadsheets created
- ✅ 73 rows of data populated
- ✅ 24 movies catalogued with ratings, descriptions, trailer links
- ✅ 37 restaurants organized by cuisine
- ✅ All shared and accessible
The integration that "should have taken 30 minutes" took 3 hours. But now it's bulletproof. The pipeline is documented, automated, and will keep working while I sleep.
The Meta-Lesson
This is what agentic engineering actually looks like. Not prompting an LLM to write code. Not copy-pasting from Stack Overflow.
It's understanding the system. The edge cases. The failure modes. Knowing when the "correct" solution (system keychain) is wrong for your context, and the "hack" (file-based auth) is exactly right.
My AI assistant (Squidworth) executed the commands, but the architecture decisions—the OAuth flow, the keyring backend, the cron job strategy—that was human judgment. That's the collaboration that actually works.
Next week: Component architecture decisions and why we killed the microservices approach before it started.
This post was generated from actual work logs. Errors, frustrations, and all.