Managing Snowflake organization listings from dbt
Using a highly experimental dbt package

Most teams I talk to already treat dbt as the place where tables and views get built, tested, and deployed. The awkward bit shows up right after that: Internal and External Marketplace listings — shares, grants, manifests, publish vs draft — often live in one-off SQL scripts, runbooks, or someone’s Snowsight tabs. The warehouse is versioned; the listing story is not.
I’ve been experimenting with a small dbt package that tries to close that gap: dbt-snowflake-listings (anthu/dbt_snowflake_listings in your packages.yml). The idea is simple: a listing is just another dbt model with a custom materialization. dbt run creates or alters the share, applies grants, and syncs the listing manifest — still wired into the DAG via ordinary ref() dependencies.
Fair warning up front: this is highly experimental. APIs and Snowflake listing behavior evolve; I’m dogfooding it on real projects, but I would not bet a compliance audit on “set and forget” without your own testing. Think of it as an opinionated spike that happened to grow tests and docs — not a supported product.
Why bother with listings-as-code?
Organization listings sit on top of shares and metadata. Doing that by hand works until:
You need the same objects you already model in dbt to appear in the share, in lockstep with builds.
You want idempotent updates (rerun deploy → alter listing, re-grant) instead of duplicate “create listing” scripts.
You care about reviewability: manifest YAML in Git next to the models consumers actually see.
dbt already knows your graph. The package leans on that: objects are declared with share_model / share_models and ref(), so staging runs before the listing model, and semantic views (if you use them) stay in dependency order.
What it does (in one breath)
Custom materialization organization_listing for Internal Marketplace listings.
share_models([...]) (or share_model) to register what goes into the share; object types (table, view, semantic view, Cortex search service) are auto-detected at runtime so you are not hand-picking grant verbs for every object.
Manifest as YAML under config.meta.listing_manifest in schema files — aligned with Snowflake’s organization listing manifest reference.
Lifecycle: normal runs alter in place; --full-refresh is your escape hatch when you need drop/recreate semantics.
Optional listing_ref() macro for ULL-style references on the producer side.
There is also an external_listing materialization in the repo that I treat as a blueprint for public Marketplace flows — same ideas, different privileges and constraints. I’m focusing this post on organization listings because that’s where most internal sharing pain lives.
Install
Add the package to packages.yml (pin a release tag you trust; the example below matches the sample project in the repo at time of writing):
packages:
- git: "https://github.com/anthu/dbt-snowflake-listings.git"
revision: v0.2.3
Then:
dbt deps
You’ll need a Snowflake role that can create shares and organization listings (often something like ACCOUNTADMIN during a spike, or a dedicated role with the right grants). The package ships a grant_listing_privileges run-operation if you want to standardize that — see the repo’s docs/macros.md.
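If you are setting up a dedicated role by hand instead, a sketch like the following is in the right ballpark. The privilege names come from Snowflake’s grant syntax, but treat the exact set as an assumption and verify it against your account and the package’s own macro; LISTING_ADMIN and ANALYTICS are placeholders.

```sql
-- Sketch of a dedicated listing role; verify the exact privilege set
-- against your Snowflake edition and the grant_listing_privileges macro.
CREATE ROLE IF NOT EXISTS LISTING_ADMIN;
GRANT CREATE SHARE ON ACCOUNT TO ROLE LISTING_ADMIN;
GRANT CREATE ORGANIZATION LISTING ON ACCOUNT TO ROLE LISTING_ADMIN;

-- The role also needs access to the objects it will add to the share.
GRANT USAGE ON DATABASE ANALYTICS TO ROLE LISTING_ADMIN;  -- placeholder database
```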
Minimal pattern: two files
For a full, up-to-date example, see the repo itself; I’ll try to keep this post in sync.
1. Listing model (.sql)
The model’s config selects the materialization and names the share. The body lists what gets granted into that share using ref() so dbt’s DAG stays honest:
{{ config(
materialized='organization_listing',
meta={
'share_name': 'TPCH_SAMPLE_SHARE',
'publish': true,
},
) }}
{{ dbt_snowflake_listings.share_models([
ref('stg_tpch_nation'),
ref('stg_tpch_region'),
ref('stg_tpch_customer'),
ref('stg_tpch_orders'),
]) }}
That snippet is lifted from the TPC-H example under examples/snowflake_sample_data/ — it shares staging models built from SNOWFLAKE_SAMPLE_DATA, which is a nice zero-ingestion way to try the flow.
2. Manifest (.yml)
Keep prose and marketplace-facing fields in YAML next to the model. At minimum you want a clear title, description, and organization_targets; everything else maps to Snowflake’s manifest schema:
models:
- name: tpch_sample_listing
description: >
Organization listing that shares TPC-H benchmark sample tables with all
accounts in the organization via the Internal Marketplace.
config:
meta:
listing_manifest:
title: "TPC-H Sample Data (tables)"
description: |
Sample data from the TPC-H benchmark dataset, sourced from
Snowflake's SNOWFLAKE_SAMPLE_DATA database.
organization_profile: "INTERNAL"
organization_targets:
access:
- all_internal_accounts: true
locations:
access_regions:
- name: "ALL"
auto_fulfillment:
refresh_type: "SUB_DATABASE"
refresh_schedule: "10 MINUTE"
usage_examples:
- title: "Top customers by order volume"
description: "Find the most active customers by number of orders placed"
query: >
SELECT
c.CUSTOMER_NAME,
c.MARKET_SEGMENT,
COUNT(*) AS order_count,
SUM(o.TOTAL_PRICE) AS total_spend
FROM STG_TPCH_CUSTOMER c
JOIN STG_TPCH_ORDERS o ON c.CUSTOMER_KEY = o.CUSTOMER_KEY
GROUP BY 1, 2
ORDER BY order_count DESC
LIMIT 20
Good usage_examples are worth the time: they show up in the listing experience and they force you to write SQL that actually matches what subscribers will query.
Run it
dbt run --select tpch_sample_listing+
Or run the whole project if your graph is small. On success you should see the share, grants, and listing aligned with what you declared — without maintaining a parallel script tree.
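If you want to double-check outside dbt, standard Snowflake commands show what the run created; the share name here matches the example above:

```sql
-- Inspect the results from a worksheet (needs a role that can see them).
SHOW SHARES LIKE 'TPCH_SAMPLE_SHARE';
SHOW GRANTS TO SHARE TPCH_SAMPLE_SHARE;
SHOW LISTINGS;  -- organization listings visible to your role
```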
When you need a hard reset:
dbt run --select tpch_sample_listing --full-refresh
Producer-side querying with listing_ref
If you want to reference shared objects via a Uniform Listing Locator from the producer project (where the models already live), the package exposes:
SELECT *
FROM {{ dbt_snowflake_listings.listing_ref('MY_LISTING', ref('my_shared_table')) }}
Consumers in other accounts still see whatever names the listing exposes; this macro is mainly for keeping producer analytics consistent with the same DAG.
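For context, a Uniform Listing Locator for an internal listing in Snowflake looks roughly like the following; I’d expect the macro to compile to something equivalent, but the listing and object names here are placeholders, not output from the package:

```sql
-- Illustrative only: ULL-style reference to an object exposed by an
-- internal organization listing named MY_LISTING.
SELECT *
FROM ORGDATACLOUD$INTERNAL$MY_LISTING.STAGING.MY_SHARED_TABLE;
```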
What I’d watch closely
Privileges and org settings — listing creation fails in boring ways if the role is short a grant; bake that into your platform story early.
Manifest vs reality — YAML typos or invalid combinations surface as Snowflake errors; treat manifest changes like DDL reviews.
Experimental tier — I ship semver tags, but you should still pin and read the changelog when upgrading. If something breaks, open an issue on the repo; I’m motivated by real-world friction.
Further reading
dbt-snowflake-listings on GitHub — README, docs/configuration.md, docs/lifecycle.md, examples/snowflake_sample_data/
If you try it on a real internal listing, I’m curious whether the two-file pattern (SQL for the graph, YAML for the manifest) matches how your team already reviews dbt changes — or where it fights your process. That feedback is what turns an experiment into something durable.