PostgresAudit
Index audit

Postgres unused index check for production databases

PostgresAudit identifies unused index candidates by reading PostgreSQL catalog and statistics data, then separates low-scan evidence from any decision to drop an index.

Low idx_scan candidates
Index size impact
Primary and unique index guardrails
EXPLAIN before changes

What counts as an unused index candidate

The check looks for indexes with low or zero scan counts, meaningful storage cost, and no obvious primary-key or unique-constraint role. It reports candidates, not automatic drop instructions.
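A check along these lines can be expressed as a single read-only catalog query. This is a sketch, not PostgresAudit's exact rule set: the zero-scan condition and the size floor are illustrative thresholds.

```sql
-- Unused index candidates: zero scans, non-trivial size,
-- and not backing a primary-key or unique constraint.
SELECT s.schemaname,
       s.relname                                      AS table_name,
       s.indexrelname                                 AS index_name,
       s.idx_scan,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisprimary
  AND NOT i.indisunique
  AND pg_relation_size(s.indexrelid) > 16 * 8192  -- illustrative size floor
ORDER BY pg_relation_size(s.indexrelid) DESC;
```

Ordering by size puts the candidates with the largest storage and maintenance cost at the top of the review list.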

Why size and table context matter

A rarely scanned small index is usually noise. A large low-scan index on a write-heavy table can add measurable insert, update, delete, and vacuum overhead.
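One way to surface that table context is to pair each low-scan index with its table's write counters from `pg_stat_user_tables`. The scan threshold below is illustrative, not a product default:

```sql
-- Pair each low-scan index with its table's write volume:
-- a large index on a write-heavy table costs the most to keep.
SELECT s.relname                                      AS table_name,
       s.indexrelname                                 AS index_name,
       s.idx_scan,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size,
       t.n_tup_ins + t.n_tup_upd + t.n_tup_del        AS table_writes
FROM pg_stat_user_indexes s
JOIN pg_stat_user_tables t ON t.relid = s.relid
WHERE s.idx_scan < 50  -- illustrative low-scan threshold
ORDER BY table_writes DESC,
         pg_relation_size(s.indexrelid) DESC;
```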

Risk filters before removal

PostgresAudit surfaces constraint indexes, migration history, application release timing, and query-plan validation alongside each candidate, so a stale statistic does not turn into a bad production change.

Read-only evidence, human decision

The audit does not drop indexes or run corrective operations. It gives evidence for DBA review, staging tests, and EXPLAIN-based confirmation before any production change.
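That EXPLAIN-based confirmation looks like the following. The table, column, and index names here are hypothetical, stand-ins for whatever queries the candidate index was meant to serve:

```sql
-- Hypothetical query against a hypothetical orders table:
-- does the planner actually choose idx_orders_created_at?
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, total
FROM orders
WHERE created_at >= now() - interval '7 days';

-- An "Index Scan using idx_orders_created_at" node means the
-- index is live for this workload. A Seq Scan here is one more
-- piece of evidence, not proof the index is safe to drop.
```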

Related topics

Use these focused guides to compare query pressure, index decisions, and maintenance signals before you change production.

FAQ

Frequently asked questions

These answers stay inside the current PostgresAudit product boundary: read-only collection, evidence-gated findings, and human-reviewed next steps.

Does PostgresAudit automatically drop unused indexes?

No. It only reports unused index candidates from read-only evidence. A human should verify workload coverage, constraints, and query plans before dropping anything.

Is a zero idx_scan value enough to remove an index?

No. Statistics can reset, rare jobs may need the index, and constraint-backed indexes can be required for correctness. The report treats zero scans as a review signal.
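The reset caveat is directly checkable: scan counters only accumulate since the last statistics reset, so a recent reset can make a useful index look idle.

```sql
-- When were this database's statistics counters last reset?
SELECT stats_reset
FROM pg_stat_database
WHERE datname = current_database();
-- Counters can also be cleared by pg_stat_reset() or lost after
-- an unclean shutdown, so a short window of zero idx_scan is
-- weak evidence on its own.
```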

Which indexes are usually excluded from quick drop review?

Primary-key indexes, unique indexes, and indexes tied to constraints need stronger review. PostgresAudit marks those guardrails instead of treating every low-scan index the same.
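Constraint-backed indexes can be listed straight from `pg_constraint`; dropping one of these changes correctness, not just performance. A minimal sketch:

```sql
-- Indexes backing primary-key ('p'), unique ('u'), or
-- exclusion ('x') constraints: guardrail territory.
SELECT conname,
       contype,
       conindid::regclass AS backing_index
FROM pg_constraint
WHERE contype IN ('p', 'u', 'x');
```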

What should happen before an index is dropped?

Review recent workload, check application release timing, run EXPLAIN for affected queries, test in staging, and keep a rollback path for recreating the index if needed.
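When a drop does go ahead, it can be staged so it is reversible. The index and column names below are hypothetical; the original definition should be captured beforehand from `pg_indexes.indexdef` so the rollback recreates it exactly.

```sql
-- Drop without blocking concurrent reads and writes.
DROP INDEX CONCURRENTLY IF EXISTS idx_orders_legacy_status;

-- Rollback path: recreate from the definition captured earlier.
CREATE INDEX CONCURRENTLY idx_orders_legacy_status
    ON orders (legacy_status);
```

`CONCURRENTLY` cannot run inside a transaction block, so each statement is issued on its own.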