[Bug]: Supabase containers keep restarting due to authentication-related error #2696
Comments
I have the same error. And some user on Discord (Moritz) also seems to have it.
Same here. It's been one thing or another with Supabase on the last few beta releases.
There was only one small change (since 297) to the template which I wouldn't have expected to cause the issue:

So I can only presume there's some issue with parsing/injecting
Same for me, Supabase doesn't work due to the supabase-db module. The supabase_admin user won't be created, I think.
For me the
If I were a betting man, I'd say it was this commit: 1266810
No, it already didn't work on Monday.
Yes, because the supabase_admin user won't be created. You can see this inside of the supabase-db logs.
Fair, I read through it in more detail and if I were a betting man, I'd have lost money! Haha
I am also experiencing this. |
I've figured out the issue, and can replicate and mitigate it. Coolify is setting the

You can resolve the issue by setting

Issue: When the env is incorrectly overridden, the value is set to

I might still be on to win my bet. 😂

Refs: docker-library/postgres#941
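The mechanism behind an unintended override like this is worth spelling out: docker-compose's `${VAR:-default}` interpolation only falls back to the default when the variable is unset or empty, so once the platform injects *any* non-empty value, it silently wins over the template's default. A minimal sketch, using `POSTGRES_HOSTNAME` from the compose file in this thread purely as an illustration:

```shell
# Unset variable: the :-default applies.
unset POSTGRES_HOSTNAME
echo "unset  -> ${POSTGRES_HOSTNAME:-supabase-db}"   # supabase-db

# Empty-but-set variable: ${VAR:-default} still falls back...
POSTGRES_HOSTNAME=""
echo "empty  -> ${POSTGRES_HOSTNAME:-supabase-db}"   # supabase-db

# ...but ${VAR-default} (no colon) keeps the empty value.
echo "empty- -> [${POSTGRES_HOSTNAME-supabase-db}]"  # []

# Set to a wrong value: the default never applies, so an unintended
# override injected by the platform silently wins.
POSTGRES_HOSTNAME="wrong-host"
echo "set    -> ${POSTGRES_HOSTNAME:-supabase-db}"   # wrong-host
```

This is standard POSIX parameter expansion, which compose interpolation mirrors; explicitly setting the variable yourself (as described above) simply makes sure the value that wins is the correct one.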
@Mortalife seems you are right. I was guessing maybe other services were loading sooner than the DB, but I believe it is an environment conflict as you mentioned.
How would they sell their cloud services if self-hosting were that easy? It's just marketing; it has to be somehow possible, but they don't want the masses to self-host Supabase.
I am having the same issue too, even after removing the analytics service.
@Mortalife solution works! Thank you!
@Mortalife This worked for me as well, thank you for the fix!
It didn't start before this issue. To clarify, it shouldn't be running. It runs once to ensure the minio server has the default bucket created that's used by the storage server. https://github.com/coollabsio/coolify/blob/main/templates/compose/supabase.yaml#L1067-L1071

It creates the

The
I don't experience that problem. I would double-check you've replaced all of the
@Mortalife
I think I'd rather the env variables be correctly parsed than putting a PR up for this workaround. PRs don't seem to be approved with much velocity, so it wouldn't change things immediately regardless.
I understand. Personally, I had a lot of difficulty deploying Supabase instances as separate projects; Coolify at least made it easy. On the other hand, Supabase is also under development, so we may have a lot of breaking changes coming.
Hello @Mortalife, and sorry to bother you. I just ran into this issue and found out about your solution. Could you please clarify a bit what needs to be changed? I don't understand where and what values are causing the issue. As I understood it, in the .env file, I need to add a new parameter called
Correct, and then once you've done that, remove
Sorry again, this is surely an error between the chair and the keyboard, but my analytics service is still failing to start. Due to that

Here is my docker-compose.yml file:

```yaml
services:
  supabase-kong:
    image: 'kong:2.8.1'
    entrypoint: 'bash -c ''eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'''
    depends_on:
      supabase-analytics:
        condition: service_healthy
    environment:
      - SERVICE_FQDN_SUPABASEKONG
      - 'JWT_SECRET=${SERVICE_PASSWORD_JWT}'
      - KONG_DATABASE=off
      - KONG_DECLARATIVE_CONFIG=/home/kong/kong.yml
      - 'KONG_DNS_ORDER=LAST,A,CNAME'
      - 'KONG_PLUGINS=request-transformer,cors,key-auth,acl,basic-auth'
      - KONG_NGINX_PROXY_PROXY_BUFFER_SIZE=160k
      - 'KONG_NGINX_PROXY_PROXY_BUFFERS=64 160k'
      - 'SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}'
      - 'SUPABASE_SERVICE_KEY=${SERVICE_SUPABASESERVICE_KEY}'
      - 'DASHBOARD_USERNAME=${SERVICE_USER_ADMIN}'
      - 'DASHBOARD_PASSWORD=${SERVICE_PASSWORD_ADMIN}'
    volumes:
      - type: bind
        source: ./volumes/api/kong.yml
        target: /home/kong/temp.yml
  supabase-studio:
    image: 'supabase/studio:20240514-6f5cabd'
    healthcheck:
      test:
        - CMD
        - node
        - '-e'
        - "require('http').get('http://127.0.0.1:3000/api/profile', (r) => {if (r.statusCode !== 200) process.exit(1); else process.exit(0); }).on('error', () => process.exit(1))"
      timeout: 5s
      interval: 5s
      retries: 3
    depends_on:
      supabase-analytics:
        condition: service_healthy
    environment:
      - HOSTNAME=0.0.0.0
      - 'STUDIO_PG_META_URL=http://supabase-meta:8080'
      - 'POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - 'DEFAULT_ORGANIZATION_NAME=${STUDIO_DEFAULT_ORGANIZATION:-Default Organization}'
      - 'DEFAULT_PROJECT_NAME=${STUDIO_DEFAULT_PROJECT:-Default Project}'
      - 'SUPABASE_URL=http://supabase-kong:8000'
      - 'SUPABASE_PUBLIC_URL=${SERVICE_FQDN_SUPABASEKONG}'
      - 'SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}'
      - 'SUPABASE_SERVICE_KEY=${SERVICE_SUPABASESERVICE_KEY}'
      - 'AUTH_JWT_SECRET=${SERVICE_PASSWORD_JWT}'
      - 'LOGFLARE_API_KEY=${SERVICE_PASSWORD_LOGFLARE}'
      - 'LOGFLARE_URL=http://supabase-analytics:4000'
      - NEXT_PUBLIC_ENABLE_LOGS=true
      - NEXT_ANALYTICS_BACKEND_PROVIDER=postgres
  supabase-db:
    image: 'supabase/postgres:15.1.1.41'
    healthcheck:
      test: 'pg_isready -U postgres -h 127.0.0.1'
      interval: 5s
      timeout: 5s
      retries: 10
    depends_on:
      supabase-vector:
        condition: service_healthy
    command:
      - postgres
      - '-c'
      - config_file=/etc/postgresql/postgresql.conf
      - '-c'
      - log_min_messages=fatal
    restart: unless-stopped
    environment:
      - POSTGRES_HOST=/var/run/postgresql
      - 'PGPORT=${POSTGRES_PORT:-5432}'
      - 'POSTGRES_PORT=${POSTGRES_PORT:-5432}'
      - 'PGPASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - 'POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - 'PGDATABASE=${POSTGRES_DB:-postgres}'
      - 'POSTGRES_DB=${POSTGRES_DB:-postgres}'
      - 'JWT_SECRET=${SERVICE_PASSWORD_JWT}'
      - 'JWT_EXP=${JWT_EXPIRY:-3600}'
    volumes:
      - 'supabase-db-data:/var/lib/postgresql/data'
      - type: bind
        source: ./volumes/db/realtime.sql
        target: /docker-entrypoint-initdb.d/migrations/99-realtime.sql
      - type: bind
        source: ./volumes/db/webhooks.sql
        target: /docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql
      - type: bind
        source: ./volumes/db/roles.sql
        target: /docker-entrypoint-initdb.d/init-scripts/99-roles.sql
      - type: bind
        source: ./volumes/db/jwt.sql
        target: /docker-entrypoint-initdb.d/init-scripts/99-jwt.sql
      - type: bind
        source: ./volumes/db/logs.sql
        target: /docker-entrypoint-initdb.d/migrations/99-logs.sql
      - 'supabase-db-config:/etc/postgresql-custom'
  supabase-analytics:
    image: 'supabase/logflare:1.4.0'
    healthcheck:
      test:
        - CMD
        - curl
        - 'http://127.0.0.1:4000/health'
      timeout: 5s
      interval: 5s
      retries: 10
    restart: unless-stopped
    depends_on:
      supabase-db:
        condition: service_healthy
    environment:
      - LOGFLARE_NODE_HOST=127.0.0.1
      - DB_USERNAME=supabase_admin
      - 'DB_DATABASE=${POSTGRES_DB:-postgres}'
      - 'DB_HOSTNAME=${POSTGRES_HOSTNAME:-supabase-db}'
      - 'DB_PORT=${POSTGRES_PORT:-5432}'
      - 'DB_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - DB_SCHEMA=_analytics
      - 'LOGFLARE_API_KEY=${SERVICE_PASSWORD_LOGFLARE}'
      - LOGFLARE_SINGLE_TENANT=true
      - LOGFLARE_SINGLE_TENANT_MODE=true
      - LOGFLARE_SUPABASE_MODE=true
      - LOGFLARE_MIN_CLUSTER_SIZE=1
      - 'POSTGRES_BACKEND_URL=postgresql://supabase_admin:${SERVICE_PASSWORD_POSTGRES}@${POSTGRES_HOSTNAME:-supabase-db}:${POSTGRES_PORT:-5432}/${POSTGRES_DB:-postgres}'
      - POSTGRES_BACKEND_SCHEMA=_analytics
      - LOGFLARE_FEATURE_FLAG_OVERRIDE=multibackend=true
```

And here is my .env file:
As you recommended, I removed the

Also, here is what I got when I tried to manually log into Postgres inside the supabase-db service:

```
$ psql -U supabase_admin -W
Password:
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  password authentication failed for user "supabase_admin"
```
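One way to check whether the role was ever created in the first place — a sketch, assuming you can exec into the container (the name `supabase-db` is taken from the compose file above; check `docker ps` for yours) and that the image allows superuser access over the local socket, as the supabase/postgres image typically does:

```shell
# List all roles; if supabase_admin is absent, the init scripts
# (roles.sql, jwt.sql, ...) never ran against this data directory,
# which points at a stale database volume rather than a wrong password.
docker exec -it supabase-db psql -U postgres -c '\du'
```

An auth failure for a user that does exist suggests a password mismatch instead, which loops back to the env-override issue discussed earlier in the thread.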
@deozza Try stopping the stack, removing the associated

You can find the volume by running

For example, my random stack string that is before my url etc. is

Once you've done that, you should be able to start the service again and hopefully the migrations will run correctly.
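The reset described above can be sketched as the following commands. The volume name comes from the compose file earlier in the thread (`supabase-db-data`), and `<stack-prefix>` is a placeholder for the random string Coolify prepends to the stack; check `docker volume ls` for your actual names:

```shell
# Stop the stack so nothing holds the database volume open.
docker compose down

# Find the database volume; it is prefixed with your stack's random string.
docker volume ls | grep supabase-db-data

# Remove it so the init scripts run again on a fresh database.
docker volume rm <stack-prefix>_supabase-db-data

# Start again; the roles (including supabase_admin) should now be created.
docker compose up -d
```

Note this destroys all data in that database, so it is only appropriate for a broken fresh install, not a stack holding real data.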
This worked perfectly for me. For future reference, here are the steps I took to resolve it:
Tried removing the volumes after double-checking the host values... Rest is still listed as unhealthy, and it also says the public schema is still not available for me.
Description

When attempting to deploy Supabase using Coolify v4.0.0-beta.306, the process fails, and the logs of the containers indicate an authentication-related error. Do note that it works fine on v4.0.0-beta.297, except for the fact that:

- the Minio Createbucket container fails to run and exits.
- Supabase Rest and Realtime Dev show running (unhealthy).
.Minimal Reproduction (if possible, example repository)
Exception or Error
No response
Version
v4.0.0-beta.306