Case studies
ContentRX, applied to real OSS projects
Each case study is a real run of ContentRX against a public project's UI copy, with the maintainers' written approval. Each one focuses on the judgment calls a generic linter would miss: error messages that blame the user, permission buttons that should be specific verbs, empty states that aren't encouraging.
This page stays honest about its state. When no study has shipped yet, it says so. When studies ship, they land here with per-finding critique and links to any resulting PRs on the project's repo.
Published case studies
None yet. The scaffolding for case studies (sessions 26–28) shipped first; the three published studies land as maintainer approval lands. See the shortlist below for the projects under consideration.
Candidate shortlist
The shortlist below is the output of tools/case_study_candidates.py, which scores every repo in external_signal/allow_list.json by the count of quality signals (content-designer presence, active i18n, content-design blog) and license permissiveness. Surfacing the shortlist publicly is part of the transparency commitment — anyone can see which projects are being considered and object before outreach starts.
Shortlist generated .
- supabase/supabase — Actively-maintained docs with content-designer review.
  Signals: content designer · active i18n · content-design blog · license Apache-2.0 · score 1
- vercel/next.js — Vercel's flagship repo with active content review.
  Signals: content designer · active i18n · content-design blog · license MIT · score 1
- mdn/content — Mozilla Developer Network — documentation-first content.
  Signals: content designer · active i18n · content-design blog · license CC-BY-SA-2.5 · score 0.8367
- linear/linear — Linear OSS artifacts — content reputation.
  Signals: content designer · content-design blog · license MIT · score 0.8165
- PostHog/posthog — Product-analytics UI with strong content practice.
  Signals: content designer · content-design blog · license MIT · score 0.8165
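The scoring rule described above can be sketched in a few lines. This is a hypothetical reconstruction, not the actual tools/case_study_candidates.py: the listed scores happen to match the geometric mean of signal coverage and a per-license permissiveness factor (with CC-BY-SA weighted at 0.7), so that is the formula assumed here.

```python
import math

# ASSUMPTION: geometric mean of signal coverage and license permissiveness.
# The weights below are inferred from the published shortlist, not from the
# real tools/case_study_candidates.py.
LICENSE_PERMISSIVENESS = {"MIT": 1.0, "Apache-2.0": 1.0, "CC-BY-SA-2.5": 0.7}
ALL_SIGNALS = ("content designer", "active i18n", "content-design blog")

def score(signals, license_id):
    """Score a repo from 0 to 1 by its quality signals and license."""
    coverage = len(signals) / len(ALL_SIGNALS)
    permissiveness = LICENSE_PERMISSIVENESS.get(license_id, 0.5)
    return round(math.sqrt(coverage * permissiveness), 4)

print(score(ALL_SIGNALS, "Apache-2.0"))                           # 1.0
print(score(("content designer", "content-design blog"), "MIT"))  # 0.8165
print(score(ALL_SIGNALS, "CC-BY-SA-2.5"))                         # 0.8367
```

Under this assumed formula, a repo with every signal and a fully permissive license scores 1, and a weaker license drags the score down less than a missing signal would under a plain product.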
How case studies get published
The publishing workflow is on Robo's side, not automated:
- Pick a candidate from the shortlist.
- Contact the maintainers via the repo's issue tracker or a public channel. Get explicit written approval before any evaluation runs against their strings.
- Run ContentRX against the project and draft the MDX at
docs-site/app/case-studies/<slug>/page.mdxusing the template atdocs-site/content/case-studies/_template.mdx. - Add an entry to the
CASE_STUDIESregistry withmaintainer_approval: true,approved_by, andapproved_at. The CI guard inscripts/check_case_study_approval.pyrejects any registry entry missing those fields. - Open the PR. Three judgment-calls minimum per study.
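The CI guard's core check can be sketched as follows. This is a minimal sketch, assuming a dict-shaped CASE_STUDIES registry; the field names come from the workflow above, but the function names and registry shape are illustrative, not the actual scripts/check_case_study_approval.py.

```python
import sys

# Fields every published case study must carry (from the workflow above).
REQUIRED_FIELDS = ("maintainer_approval", "approved_by", "approved_at")

def check_entry(slug, entry):
    """Return a list of problems for one CASE_STUDIES registry entry."""
    problems = [f"{slug}: missing {field}"
                for field in REQUIRED_FIELDS if not entry.get(field)]
    if entry.get("maintainer_approval") is not True:
        problems.append(f"{slug}: maintainer_approval must be true")
    return problems

def main(registry):
    """Print every problem and return a CI-friendly exit code."""
    problems = [p for slug, entry in registry.items()
                for p in check_entry(slug, entry)]
    for problem in problems:
        print(problem, file=sys.stderr)
    return 1 if problems else 0
```

Failing with a nonzero exit code is what lets CI block a PR whose registry entry lacks written maintainer approval.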