Reload fails: "tsnet: listener already open for tailscale, :80" #6
I suspect this will interfere with plugins like https://github.com/lucaslorentz/caddy-docker-proxy which dynamically generate config.
I'm not able to reproduce this now, so I'm fairly certain it got fixed in #30, which added proper listener shutdown. Could you try again with the latest version of the plugin and see if you still run into this behavior?
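For readers landing here later: the shape of the fix described above (closing tsnet listeners when a config is unloaded, so a reload can reopen them) can be sketched in Go. This is a toy registry with hypothetical names that mimics tsnet's one-listener-per-address rule; it is an illustration of the lifecycle, not the plugin's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// listenerRegistry mimics (in spirit) tsnet's constraint that only one
// listener may be open per address at a time. All names are hypothetical.
type listenerRegistry struct {
	mu   sync.Mutex
	open map[string]bool
}

func newRegistry() *listenerRegistry {
	return &listenerRegistry{open: make(map[string]bool)}
}

// Listen fails if a listener for addr is already open, echoing the
// "tsnet: listener already open for tailscale, :80" error in this issue.
func (r *listenerRegistry) Listen(addr string) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.open[addr] {
		return fmt.Errorf("tsnet: listener already open for tailscale, %s", addr)
	}
	r.open[addr] = true
	return nil
}

// Close releases the listener so a later config load can reopen it.
func (r *listenerRegistry) Close(addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.open, addr)
}

func main() {
	r := newRegistry()
	_ = r.Listen(":80")    // initial config load
	err := r.Listen(":80") // reload without shutting down the old listener: fails
	fmt.Println(err)

	r.Close(":80") // proper shutdown of the old listener...
	fmt.Println(r.Listen(":80")) // ...lets the reload succeed
}
```

The point of the fix, on this model, is simply that `Close` must run for the old config's listeners before (or as) the new config calls `Listen` on the same addresses.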
I don't have anything presently using this plugin, so I can only agree with "likely fixed".
I've just run into this myself, using caddy-docker-proxy, as AstraLuma mentioned.

Dockerfile:

```dockerfile
ARG CADDY_VERSION=2.8

FROM caddy:${CADDY_VERSION}-builder AS builder
RUN xcaddy build \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with github.com/tailscale/caddy-tailscale

FROM caddy:${CADDY_VERSION}-alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
CMD ["caddy", "docker-proxy"]
```

Base Caddyfile (read by caddy-docker-proxy and merged with config from docker labels):

```
{
	acme_dns cloudflare {$CLOUDFLARE_DNS_API_TOKEN}
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
	tailscale {
		auth_key {$TS_AUTHKEY}
		state_dir /data/tailscale
	}
}

(internal) {
	abort
}

http://*.example.com:80 https://*.example.com:443 {
	import internal
	bind tailscale/caddy
	bind 127.0.0.1
}
```

Rendered/merged Caddyfile:

```
{
	acme_dns cloudflare {$CLOUDFLARE_DNS_API_TOKEN}
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
	tailscale {
		auth_key {$TS_AUTHKEY}
		state_dir /data/tailscale
	}
}

(internal) {
	@test {
		host test.example.com
	}
	@test2 {
		host test.example.com
	}
	abort
	route @test {
		reverse_proxy 172.18.0.33:8000
	}
	route @test2 {
		reverse_proxy 172.18.0.34:8000
	}
}

http://*.example.com:80 https://*.example.com:443 {
	import internal
	bind tailscale/caddy 127.0.0.1
}
```

Barebones docker-compose.yml:

```yaml
services:
  test:
    container_name: test
    image: crccheck/hello-world
    labels:
      caddy: (internal)
      caddy.@test.host: test.example.com
      caddy.route: "@test"
      caddy.route.reverse_proxy: "{{upstreams 8000}}"
  test2:
    container_name: test2
    image: crccheck/hello-world
    labels:
      caddy: (internal)
      caddy.@test2.host: test2.example.com
      caddy.route: "@test2"
      caddy.route.reverse_proxy: "{{upstreams 8000}}"
  caddy:
    container_name: caddy
    build: ./caddy/
    restart: unless-stopped
    ports:
      - "6080:80"
      - "6443:443"
    environment:
      - CLOUDFLARE_DNS_API_TOKEN=${CLOUDFLARE_DNS_API_TOKEN}
      - CADDY_DOCKER_CADDYFILE_PATH=/etc/caddy/Caddyfile
      - CADDY_DOCKER_POLLING_INTERVAL=2s
      - CADDY_DOCKER_PROCESS_CADDYFILE=false
      - TS_AUTHKEY=${TS_CADDY_PROXY_AUTHKEY}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./network/caddy/Caddyfile:/etc/caddy/Caddyfile
      - caddydata:/data

volumes:
  caddydata:
```

Error log entry:
After updating the base Caddyfile, it begins to fail. I wonder if it could be due to what looks like Caddy starting new apps before it stops the old ones?
So I've tried out a horrible proof of concept, basically just lifting caddy's … though I also haven't checked out what knock-on impacts there are for that.
Running into this issue today, as I use lucaslorentz/caddy-docker-proxy and it can't reload new config if there's a caddy-tailscale listener in the mix. My workaround so far is to run a second instance of Caddy that just runs caddy-tailscale with a fixed config. It's not ideal, but at least the rest of the system still works dynamically, and I only have to manage by hand the things I'm serving into the tailnet.
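The two-instance workaround described above might look roughly like this in compose terms; the service names, paths, and command here are hypothetical, not taken from the original setup:

```yaml
services:
  caddy-dynamic:
    # caddy-docker-proxy build without any tailscale bindings;
    # reloads freely as containers come and go.
    build: ./caddy/
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  caddy-tailscale:
    # separate instance with a fixed Caddyfile that holds all of the
    # "bind tailscale/..." directives; only restarted manually.
    build: ./caddy/
    command: ["caddy", "run", "--config", "/etc/caddy/Caddyfile"]
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}
    volumes:
      - ./tailscale/Caddyfile:/etc/caddy/Caddyfile
      - caddydata-ts:/data

volumes:
  caddydata-ts:
```

Since the tailscale instance's config never changes at runtime, it never hits the reload path that triggers the "listener already open" error.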
The usual systemd unit for caddy includes an `ExecReload=` option. When this is run on a caddy service that has already connected to Tailscale, we get this error:

```
loading config: loading new config: http app module: start: listening on tailscale/nitter:80: tsnet: listener already open for tailscale, :80
```

Restarting the service works fine, as expected.
Full Caddyfile (most of this is boilerplate from my usual template):