# 0015 — Image storage = Supabase Storage
- Status: accepted
- Date: 2026-05-08
- Deciders: Derek
## Context
ark needs somewhere to put tenant-uploaded images: brand logos (Workstream D, this PR), and soon CMS hero images and member profile photos (Workstream B). The original docs/inputs/SOURCE-MAP.md reference came from internalize and actualize-v2, which use Cloudinary — excellent at transforms, but a black-box delivery layer that holds the canonical bytes and gates access via signed URLs the tenant can’t directly own.
Two non-negotiables shape the call:
- Data sovereignty. A tenant who leaves ark must be able to take their bytes with them. Not “request an export from us.” Not “wait for the vendor to honor a deletion.” Their bytes, their ownership, in a format that round-trips. (Memory: “Vendor choices preserve data sovereignty.”)
- RLS-as-security (per ADR 0014). Storage writes need the same gate as data writes. An admin of org A uploading a logo for org B must be denied at the database layer, not at the UI or API layer. Cloudinary can do app-side gating but doesn’t share the policy model with our Postgres RLS.
## Decision
Use Supabase Storage as the canonical image store. One bucket (`org-public`) for publicly-readable assets. Org-prefixed paths plus `storage.objects` RLS policies enforce write isolation.
Concretely:
- Buckets: `org-public` (public read; admin/member writes scoped to org-prefixed paths). `org-private` is deferred until a concrete auth-gated-asset use case appears.
- Path conventions:
  - `{org_slug}/branding/...` — logos and branding assets (this workstream)
  - `{org_slug}/cms/{entry_id}/...` — CMS entry hero images (B)
  - `{org_slug}/profiles/{user_id}/...` — member profile photos (B)
- Storage RLS is the load-bearing gate. Workstream D ships the `branding/`-prefix policies (migration 018); B extends with `cms/` and `profiles/` second-folder predicates.
- Image transforms use Supabase's built-in `?width=…&format=webp` query string. Sufficient for the kinds of assets NFP/SK-scale tenants upload. No second vendor for transforms.
- Tenant export = `supabase storage download {bucket} {org_slug}/...` plus `pg_dump`. Both the bytes and the metadata travel in one tarball.
- Reads of `org-public` go through Supabase's REST API (anon CDN URL or service-role-backed admin paths); no SELECT policy on `storage.objects` is needed for production reads. The keystone test installs a test-only SELECT policy in `beforeAll` for direct-SQL UPDATE/DELETE assertions; production never runs that code.
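The path conventions above can be sketched as small pure helpers. A minimal sketch; the function names are illustrative, not actual `@ark/db` exports:

```typescript
// Hypothetical helpers mirroring the path conventions above.
function brandingPath(orgSlug: string, filename: string): string {
  return `${orgSlug}/branding/${filename}`;
}

function cmsHeroPath(orgSlug: string, entryId: string, filename: string): string {
  return `${orgSlug}/cms/${entryId}/${filename}`;
}

function profilePath(orgSlug: string, userId: string, filename: string): string {
  return `${orgSlug}/profiles/${userId}/${filename}`;
}

// The first path segment is the org slug; it is what the storage RLS
// predicate (storage.foldername(name)[1] on the SQL side) compares
// against the caller's org membership.
function orgOf(objectName: string): string {
  return objectName.split("/")[0];
}

console.log(brandingPath("acme", "logo.svg")); // acme/branding/logo.svg
console.log(orgOf("acme/branding/logo.svg"));  // acme
```

Keeping the org slug as the first folder is what lets one RLS predicate cover every prefix B adds later.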
## Consequences
Easier:
- One vendor for DB + Storage + RLS + Auth. One billing line. One admin console. One `supabase` CLI.
- Storage RLS policies see the same `auth.uid()` and the same `org_members` row as data RLS. The keystone test extends to cover Storage with the same four-quadrant matrix it uses for tables.
- Tenant migration off the platform = `supabase storage download` + `pg_dump`. No vendor-side keys to revoke, no signed-URL TTLs to wait out.
- The `useLogoUpload` hook in `@ark/db/react` is a 20-line wrapper around `supabase.storage.from('org-public').upload(...)`. B reuses the same shape for hero images and profile photos.
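The shape of that wrapper can be sketched without pulling in supabase-js. This is a sketch, assuming supabase-js v2 semantics; `StorageApi` is a structural stand-in for `supabase.storage` so the example stays self-contained, and it is not the actual `@ark/db` interface:

```typescript
// Structural stand-in for the slice of supabase.storage the wrapper uses.
interface UploadResult {
  data: { path: string } | null;
  error: Error | null;
}

interface StorageApi {
  from(bucket: string): {
    upload(
      path: string,
      body: Uint8Array,
      opts?: { upsert?: boolean }
    ): Promise<UploadResult>;
  };
}

// Hypothetical core of a logo-upload wrapper: build the org-prefixed
// branding path, then let storage RLS decide whether the write lands.
async function uploadLogo(
  storage: StorageApi,
  orgSlug: string,
  file: { name: string; bytes: Uint8Array }
): Promise<UploadResult> {
  const path = `${orgSlug}/branding/${file.name}`;
  return storage.from("org-public").upload(path, file.bytes, { upsert: true });
}
```

The real hook wraps this in React state (progress, error), but the path construction is the part the RLS policy cares about.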
Harder:
- We give up Cloudinary's transform pipeline (named transforms, automatic responsive sizes, on-the-fly format negotiation, focal-point cropping). Supabase's transforms cover the basics (resize, format conversion). If a tenant's image library demands more, we put Cloudflare Images in front of `org-public` later — not replace Supabase as the store.
- Storage policy SQL is more verbose than Cloudinary's app-side gating. Mitigation: `storage.foldername(name)[1]` is the `{org_slug}` predicate; it composes naturally with `org_members.role = 'admin'`. The keystone test extension keeps every new prefix honest.
- `apps/api` and migration tooling never touch Supabase Storage as a peer dep, but `@ark/db` exposes `useLogoUpload` only via the `@ark/db/react` sub-export — the main entry stays Node-clean.
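That predicate and the four-quadrant matrix can be mirrored in TypeScript for illustration. The real gate is SQL on `storage.objects`; the admin-only role check here is taken from the mitigation above and the exact quadrant labels are assumptions:

```typescript
// Illustrative mirror of the storage write predicate. SQL equivalent:
//   storage.foldername(name)[1] = <caller's org slug>
//   AND org_members.role = 'admin'
function canWrite(objectName: string, callerOrgSlug: string, role: string): boolean {
  return objectName.split("/")[0] === callerOrgSlug && role === "admin";
}

// Hypothetical four-quadrant matrix (same-org vs cross-org actor,
// admin vs member role) of the kind the keystone test walks.
const quadrants = [
  { name: "acme/branding/logo.svg", org: "acme",  role: "admin",  allowed: true },
  { name: "acme/branding/logo.svg", org: "other", role: "admin",  allowed: false },
  { name: "acme/branding/logo.svg", org: "acme",  role: "member", allowed: false },
  { name: "acme/branding/logo.svg", org: "other", role: "member", allowed: false },
];
```

Only the same-org admin quadrant is allowed to write; the other three must be denied at the database layer, which is the ADR 0014 invariant restated for Storage.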
## Trip-wires
We revisit this if:
- A tenant’s image volume or transform requirements outgrow Supabase Storage’s capabilities (rare for arts/NFP-scale tenants; the tradeoff would be visible long before it bit).
- Egress costs for the public bucket exceed Cloudflare-fronted alternatives by a non-trivial margin. The mitigation is a CDN layer in front of the bucket, not a backend swap.
- A future deliberately-non-Supabase tenant emerges. The call sites are already abstracted in `@ark/db/src/storage.ts`, so the cost would be one adapter, not a wholesale port.
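A sketch of what that adapter boundary could look like, assuming `@ark/db/src/storage.ts` exposes something of this shape (the interface and the in-memory stand-in are both hypothetical):

```typescript
// Hypothetical adapter boundary; the real interface may differ.
interface ImageStore {
  // Returns the stored object path.
  upload(bucket: string, path: string, bytes: Uint8Array): Promise<string>;
  publicUrl(bucket: string, path: string): string;
}

// A non-Supabase tenant would supply its own implementation; an
// in-memory stand-in shows the contract is small.
class MemoryStore implements ImageStore {
  private objects = new Map<string, Uint8Array>();

  async upload(bucket: string, path: string, bytes: Uint8Array): Promise<string> {
    this.objects.set(`${bucket}/${path}`, bytes);
    return path;
  }

  publicUrl(bucket: string, path: string): string {
    return `https://example.invalid/${bucket}/${path}`;
  }
}
```

Swapping stores means writing one class like this against the tenant's backend, not touching the call sites.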
## Alternatives considered
- Cloudinary — best-in-class transforms; black-box delivery, vendor-side keys, signed-URL TTLs, no shared policy model with Postgres. Fails the data-sovereignty tenet. Rejected.
- R2 + Cloudflare Images — excellent infrastructure; pulls us out of Supabase’s one-RLS-model story; adds a second auth surface; complicates tenant export. Reasonable v2 if Supabase Storage hits a ceiling. Deferred.
- S3 + custom signed URLs — sovereign and cheap; reinvents what Supabase Storage already gives us with RLS integration. Rejected.
## Migration journey
- D introduces `org-public` and the policies for the `branding/` prefix (migration `018_storage_branding`).
- B extends with `cms/` and `profiles/` second-folder predicates.
- `docs/inputs/SOURCE-MAP.md` is updated in this same PR to point here. The internalize / actualize-v2 Cloudinary references stay for historical context (lift, not exemplar — per ADR 0011) but are flagged as superseded.