Troubleshooting

Connection Failed

Why a database isn't connecting and how to fix it, with per-database diagnoses keyed to the literal error messages faz emits.

When faz can't reach a database, you'll see one of two symptoms:

  • At startup: WARNING: FAILED to connect <name> — <reason>. Continuing without it. — faz boots anyway; the database appears in /v1/databases with reachable: false.
  • At query time: databases_connected: 0 from /v1/health, or a 404 from /v1/databases/{db}/... for a database that should exist.

Both come from the same root cause: the connector raised an exception when faz tried to open the connection. This page walks through the common causes by category.

faz.yaml parse errors

Before any connection is attempted, faz parses faz.yaml. Parse errors abort startup entirely.

'databases' must be a list

Your databases: value is a mapping or a scalar, but it must be a list of mappings:

databases:
  - name: <database-1>
    type: postgresql
    # ...
  - name: <database-2>
    type: mongodb

unknown connector type 'X'

The type: field doesn't match any of the 14 supported values. See the table on the faz.yaml page.

permissions[N] (database=X) has unknown access level 'Y'

You used an access level that isn't one of the six valid values. See Permission levels.

Run faz policy to surface parse errors before they break server startup.

Network reachability

could not connect to server: Connection refused

The database isn't listening on host:port. Common causes:

  • Process isn't running. Start it (docker start postgres, systemctl start mysql, etc.).
  • Wrong port. Check the database's actual listening port (netstat -ln | grep <port>).
  • Firewall blocking the local port. Check iptables / ufw / cloud security groups.

Quick test outside faz: nc -vz <host> <port>. If nc can't connect either, it's a network problem, not a faz problem.

Name or service not known

DNS resolution failed for the configured host. Either:

  • Wrong hostname in faz.yaml. Confirm with nslookup <host>.
  • The host is reachable only from inside a VPC/VPN. Run faz on a machine inside that network, or set up a tunnel.

Connection timed out

The host is reachable but not responding within the connector's timeout. Likely:

  • The database is overloaded.
  • Network path is blocked partway through (firewall dropping packets without RST).
  • TLS handshake hanging — common with self-signed certs that the client doesn't recognise.

Authentication failures

Postgres: password authentication failed for user "X"

Credentials in faz.yaml don't match what Postgres expects. Test with psql -h <host> -U <user> -d <database>.

If psql works but faz doesn't, check that pg_hba.conf allows the auth method for the host faz is connecting from. faz uses psycopg2, which negotiates TLS, MD5, SCRAM, and password methods automatically.
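If you do need to open pg_hba.conf for the host faz connects from, the entry looks roughly like this (a sketch; the address range and auth method are examples you should adjust to your network):

```
# pg_hba.conf: allow SCRAM password auth from the subnet faz runs in.
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   10.0.0.0/8    scram-sha-256
```

Reload Postgres afterwards (SELECT pg_reload_conf(); or pg_ctl reload) so the change takes effect.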

MySQL: Access denied for user 'X'@'host'

MySQL keys users by <user>@<host>. The connecting host (faz's machine) might not have a matching grant. Check:

SELECT user, host FROM mysql.user;

Add a grant for the relevant host if missing.
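For example, a read-only user reachable from faz's subnet can be created like this (the user, password, host pattern, and schema name are all placeholders):

```sql
-- Placeholders: substitute your own user, password, host pattern, and schema.
CREATE USER 'faz_reader'@'10.0.0.%' IDENTIFIED BY 'change-me';
GRANT SELECT ON mydb.* TO 'faz_reader'@'10.0.0.%';
FLUSH PRIVILEGES;
```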

MongoDB: Authentication failed.

Most common cause: your user lives in the admin database, but you connected to a different database without an authSource. Workaround: connect to admin in faz.yaml (set database: admin), or use a user defined in the database you're connecting to.
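A minimal sketch of the admin-database workaround in faz.yaml (the name, host, and credentials are examples; username:/password: are assumed to follow the same convention as the other connectors):

```yaml
databases:
  - name: mongo_prod            # example name
    type: mongodb
    host: mongo.example.com
    port: 27017
    username: app_user
    password: change-me         # see Secrets for safer options
    database: admin             # authenticate where the user is defined
```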

Neo4j: Unauthorized

Wrong username or password. Test with cypher-shell. If credentials are right, confirm the user has access to the requested database (Neo4j 4+ has multi-database authorization).
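On Neo4j 4+ (Enterprise), granting a role access to a specific database looks like this (role and database names are placeholders; run it against the system database as an admin):

```cypher
// Role and database names are placeholders.
GRANT ACCESS ON DATABASE mydatabase TO analyst_role;
SHOW USERS;  // confirm the user holds the expected role
```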

Pinecone / managed cloud services: API key required

Set password: to your API key, or put it under extra.api_key.
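A sketch of both options in faz.yaml (the index hostname is illustrative, and the type value is an assumption; check the table on the faz.yaml page):

```yaml
databases:
  - name: vectors
    type: pinecone                # type name assumed; see the faz.yaml table
    host: my-index-xxxx.svc.us-east-1-aws.pinecone.io
    password: <your-api-key>      # option 1
    # extra:
    #   api_key: <your-api-key>   # option 2
```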

DynamoDB: Unable to locate credentials

boto3 has nothing to authenticate with. For production, attach an IAM role to the host. For local dev, set AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY env vars, or set username: / password: in faz.yaml (development only). See Secrets.
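For local development, the faz.yaml form might look like this (a sketch; it assumes username:/password: map to the access key ID and secret key, per the note above, and that the type value is dynamodb):

```yaml
# Development only. In production, attach an IAM role instead.
databases:
  - name: dynamo_dev
    type: dynamodb                # type name assumed; see the faz.yaml table
    username: AKIAEXAMPLEKEYID    # AWS access key ID (placeholder)
    password: change-me           # AWS secret access key (placeholder)
```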

TLS / SSL

Postgres / MySQL: SSL connection required

Your database requires TLS. Set ssl: true in the connector config:

databases:
  - name: prod_db
    type: postgresql
    host: db.example.com
    port: 5432
    ssl: true
    # ...

faz uses sslmode=require (Postgres) or pymysql's TLS (MySQL).

SSL: CERTIFICATE_VERIFY_FAILED

The database's TLS cert is signed by a CA the OS doesn't trust (self-signed, internal CA). Two paths:

  1. Add the CA cert to the OS trust store (update-ca-certificates on Debian/Ubuntu, equivalent elsewhere).
  2. Run a TLS-terminating proxy in front of faz that handles the trust chain.

faz doesn't currently expose a per-connector "trust this cert" setting.

Cassandra / DynamoDB-local: SSL handshake failure

These connectors use different TLS implementations. Cassandra uses the Java-style trust-store concept (handled by cassandra-driver); DynamoDB-local doesn't speak TLS at all (use ssl: false).
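For DynamoDB-local, that means disabling TLS explicitly. A sketch (the type value is an assumption, and port 8000 is DynamoDB-local's default):

```yaml
databases:
  - name: dynamo_local
    type: dynamodb                # type name assumed
    host: localhost
    port: 8000                    # DynamoDB-local default
    ssl: false                    # DynamoDB-local doesn't speak TLS
```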

Connector-specific gotchas

Pinecone managed cloud: wrong index hostname

Each managed Pinecone index has its own hostname (e.g. my-index-xxxx.svc.us-east-1-aws.pinecone.io). Set host to that exact hostname; don't use pinecone.io. List your indexes via the Pinecone console to find the right one.

Cassandra: NoHostAvailable

Every contact point you provided is down. Cassandra clients only need one initial contact point — the rest of the cluster is discovered. If your only host is the one that's down, you're stuck. Add a second contact point as a separate databases: entry or use a load-balancing hostname.

Note: faz's databases: entries are individual connections — to provide multiple contact points to one cluster, you'd typically use a DNS name that resolves to multiple A records and let Cassandra's driver iterate.
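A sketch of the DNS approach (the hostname is an example; it should be an A record you control that resolves to several live nodes):

```yaml
databases:
  - name: cassandra_prod
    type: cassandra
    host: cassandra.internal.example.com   # resolves to node1, node2, node3
    port: 9042                             # default CQL port
```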

Oracle: ORA-12514: TNS:listener does not currently know of service

The database: field isn't a service name the listener knows about. On multitenant Oracle, this is the PDB name (e.g. ORCLPDB1), not the CDB. Run lsnrctl services on the Oracle host to list registered services.
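Using the PDB example above, the faz.yaml entry names the PDB service (host is illustrative, and the type value is an assumption; check the table on the faz.yaml page):

```yaml
databases:
  - name: oracle_prod
    type: oracle                  # type name assumed
    host: oracle.example.com
    port: 1521                    # Oracle listener default
    database: ORCLPDB1            # PDB service name, not the CDB
```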

Elasticsearch managed by AWS

AWS-managed Elasticsearch (now OpenSearch) uses IAM-signed requests by default. faz's connector uses HTTP basic auth, which works only when fine-grained access control is enabled with internal users. Otherwise, you'll need to front faz with a sidecar that does AWS SigV4 signing.

Confirming the fix

After changing faz.yaml, restart faz:

# REST server
pkill -f 'faz serve' && faz serve

# MCP — restart your AI client (it respawns the stdio process)

Then verify:

faz policy        # parses cleanly?
faz test          # every DB reachable?
curl http://localhost:8787/v1/health

/v1/databases shows the per-DB status — the database you fixed should now have reachable: true and error: null.

See also

  • Common errors — the cross-error index.
  • faz.yaml — the schema reference.
  • Secrets — credential management patterns.
  • The per-connector pages under Databases — connector-specific quirks.
