
Reload fails: "tsnet: listener already open for tailscale, :80" #6

Open
iliana opened this issue Dec 26, 2022 · 6 comments
Labels: likely-fixed (Likely fixed, but waiting for confirmation.)
Milestone: v1

Comments

iliana commented Dec 26, 2022

The usual systemd unit for caddy includes an ExecReload= option:

ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force

When this is run on a caddy service that has already connected to Tailscale, we get this error: loading config: loading new config: http app module: start: listening on tailscale/nitter:80: tsnet: listener already open for tailscale, :80

Dec 26 21:10:05 hydrangea systemd[1]: Reloading Caddy.
Dec 26 21:10:05 hydrangea caddy[2407]: {"level":"info","ts":1672089005.6320755,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":""}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"info","ts":1672089005.6348317,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"40046","headers":{"Accept-Encoding":["gzip"],"Content-Length":["203"],"Content-Type":["application/json"],"Origin":["http://localhost:2019"],"User-Agent":["Go-http-client/1.1"]}}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"info","ts":1672089005.6360798,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"warn","ts":1672089005.6363716,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":80}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"info","ts":1672089005.636495,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000354a10"}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"info","ts":1672089005.63655,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0xc000354a10"}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"error","ts":1672089005.6366794,"logger":"admin.api","msg":"request error","error":"loading config: loading new config: http app module: start: listening on tailscale/nitter:80: tsnet: listener already open for tailscale, :80","status_code":400}
Dec 26 21:10:05 hydrangea caddy[2407]: Error: sending configuration to instance: caddy responded with error: HTTP 400: {"error":"loading config: loading new config: http app module: start: listening on tailscale/nitter:80: tsnet: listener already open for tailscale, :80"}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"info","ts":1672089005.6377842,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
Dec 26 21:10:05 hydrangea systemd[1]: caddy.service: Control process exited, code=exited, status=1/FAILURE
Dec 26 21:11:35 hydrangea systemd[1]: caddy.service: Reload operation timed out. Killing reload process.

Restarting the service works fine, as expected.

Full Caddyfile (most of this is boilerplate from my usual template):
{
        admin localhost:2019
        email [email protected]
}

(global) {
        encode zstd gzip
        handle_errors {
                respond "{http.error.status_code} {http.error.status_text}"
        }
        header cache-control "public, max-age=0, must-revalidate"
        log {
                output file /var/log/caddy/access.log
        }
}

:80 {
        bind tailscale/nitter
        respond / "butts"
}

@AstraLuma

I suspect this will interfere with plugins like https://github.com/lucaslorentz/caddy-docker-proxy, which dynamically generate config.

willnorris added this to the v1 milestone on Jun 3, 2024
@willnorris (Member)

I'm not able to reproduce this now, so I'm fairly certain it got fixed in #30, which added proper listener shutdown. Could you try again with the latest version of the plugin, and see if you still run into this behavior?

willnorris added the likely-fixed label on Jun 3, 2024
iliana (Author) commented Jun 3, 2024

I don't have anything presently using this plugin so I can only agree with "likely fixed".

remos commented Jul 7, 2024

I've just run into this myself using caddy-docker-proxy, as AstraLuma mentioned.

Dockerfile
ARG CADDY_VERSION=2.8
FROM caddy:${CADDY_VERSION}-builder AS builder

RUN xcaddy build \
  --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
  --with github.com/tailscale/caddy-tailscale

FROM caddy:${CADDY_VERSION}-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

CMD ["caddy", "docker-proxy"]

Base Caddyfile

(read by caddy-docker-proxy and merged with config from docker labels)

{
	acme_dns cloudflare {$CLOUDFLARE_DNS_API_TOKEN}
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory

	tailscale {
		auth_key {$TS_AUTHKEY}
		state_dir /data/tailscale
	}
}

(internal) {
	abort
}

http://*.example.com:80 https://*.example.com:443  {
	import internal
	bind tailscale/caddy
	bind 127.0.0.1
}

Rendered/merged Caddyfile
{
        acme_dns cloudflare {$CLOUDFLARE_DNS_API_TOKEN}
        acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
        tailscale {
                auth_key {$TS_AUTHKEY}
                state_dir /data/tailscale
        }
}
(internal) {
        @test {
                host test.example.com
        }
        @test2 {
                host test.example.com
        }
        abort
        route @test {
                reverse_proxy 172.18.0.33:8000
        }
        route @test2 {
                reverse_proxy 172.18.0.34:8000
        }
}
http://*.example.com:80 https://*.example.com:443 {
        import internal
        bind tailscale/caddy 127.0.0.1
}

Barebones docker-compose.yml
services:
  test:
    container_name: test
    image: crccheck/hello-world
    labels:
      caddy: (internal)
      [email protected]: test.example.com
      caddy.route: "@test"
      caddy.route.reverse_proxy: "{{upstreams 8000}}"

  test2:
    container_name: test2
    image: crccheck/hello-world
    labels:
      caddy: (internal)
      [email protected]: test2.example.com
      caddy.route: "@test2"
      caddy.route.reverse_proxy: "{{upstreams 8000}}"

  caddy:
    container_name: caddy
    build: ./caddy/
    restart: unless-stopped
    ports:
      - "6080:80"
      - "6443:443"
    environment:
      - CLOUDFLARE_DNS_API_TOKEN=${CLOUDFLARE_DNS_API_TOKEN}
      - CADDY_DOCKER_CADDYFILE_PATH=/etc/caddy/Caddyfile
      - CADDY_DOCKER_POLLING_INTERVAL=2s
      - CADDY_DOCKER_PROCESS_CADDYFILE=false
      - TS_AUTHKEY=${TS_CADDY_PROXY_AUTHKEY}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./network/caddy/Caddyfile:/etc/caddy/Caddyfile
      - caddydata:/data

volumes:
  caddydata:

Error log entry
caddy                        | {"level":"error","ts":1720330017.807547,"logger":"admin.api","msg":"request error","error":"loading config: loading new config: http app module: start: listening on tailscale/caddy:443: tsnet: listener already open for tcp, :443","status_code":400}
caddy                        | {"level":"error","ts":1720330017.8076308,"logger":"docker-proxy","msg":"Error response from server","server":"localhost","status code":400,"body":"{\"error\":\"loading config: loading new config: http app module: start: listening on tailscale/caddy:443: tsnet: listener already open for tcp, :443\"}\n"}

After updating the base Caddyfile, it will begin to fail. On a fresh restart it works just fine, similar to what iliana mentioned.

I wonder if it could be due to what looks like caddy starting new apps before it stops the old ones:
https://github.com/caddyserver/caddy/blob/4ef360745dab1023a7d4c04aebca3d05499dd5e1/caddy.go#L342-L355
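
To make the ordering concrete, here's a toy illustration (plain net.Listen standing in for tsnet; hypothetical code, not caddy's actual reload path): while the old listener is still open, a second bind on the same address fails, which would explain why a full restart works but a reload doesn't.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Simulate the reload ordering: the "new config" tries to bind the
	// address before the "old config" has released it.
	old, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		panic(err)
	}

	// A plain listener can't be opened twice on the same address, so this
	// fails, analogous to tsnet's "listener already open" error.
	if _, err := net.Listen("tcp", "127.0.0.1:8080"); err != nil {
		fmt.Println("second listen failed:", err)
	}

	// Once the old listener is closed, listening succeeds again, which is
	// why a fresh restart of the service works fine.
	if err := old.Close(); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println("listen succeeded after the old listener was closed")
}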

willnorris reopened this on Jul 7, 2024

remos commented Jul 7, 2024

So I've tried out a horrible proof of concept, basically just lifting caddy's listenReusable (the non-unix version) to wrap the Listeners and PacketConns from tsnet, and it seems to resolve the issue.

Though I also haven't checked what knock-on impacts there are for that.
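
Roughly, the idea is something like this (a hypothetical sketch with made-up names, not the actual proof of concept; the PacketConns would need the same treatment):

// Package tsreuse is an illustrative, usage-counted listener wrapper in the
// spirit of caddy's reusable listeners: a reload can pick up the listener
// that the old config still holds open instead of failing to bind again.
package tsreuse

import (
	"net"
	"sync"
)

var (
	mu        sync.Mutex
	listeners = map[string]*sharedListener{}
)

// sharedListener wraps a net.Listener and only closes the underlying
// listener once every holder of the address has released it.
type sharedListener struct {
	net.Listener
	key  string
	refs int
}

// Listen returns the already-open listener for network/addr if one exists,
// otherwise it opens a new one via open (e.g. a tsnet.Server's Listen method).
func Listen(network, addr string, open func(network, addr string) (net.Listener, error)) (net.Listener, error) {
	mu.Lock()
	defer mu.Unlock()

	key := network + "/" + addr
	if sl, ok := listeners[key]; ok {
		sl.refs++
		return sl, nil
	}

	ln, err := open(network, addr)
	if err != nil {
		return nil, err
	}
	sl := &sharedListener{Listener: ln, key: key, refs: 1}
	listeners[key] = sl
	return sl, nil
}

// Close decrements the reference count and closes the wrapped listener only
// when the last holder releases it, so "listener already open" never has to
// surface during a config reload.
func (sl *sharedListener) Close() error {
	mu.Lock()
	defer mu.Unlock()

	sl.refs--
	if sl.refs > 0 {
		return nil
	}
	delete(listeners, sl.key)
	return sl.Listener.Close()
}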

@whitestrake

Running into this issue today as I use lucaslorentz/caddy-docker-proxy and it can't reload new config if there's a caddy-tailscale listener in the mix.

My workaround so far is to have a second instance of Caddy that just runs caddy-tailscale with a fixed config. It's not ideal, but at least the rest of the system still works dynamically, and I only have to manage by hand the things I'm serving into the Tailnet.
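
In case it helps anyone, a rough sketch of that layout (service names, build contexts, and file paths are illustrative only, not my actual setup):

services:
  caddy-dynamic:
    # xcaddy build with caddy-docker-proxy only; reloads freely on label changes
    build: ./caddy-docker-proxy/
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  caddy-tailnet:
    # xcaddy build with caddy-tailscale only; fixed Caddyfile, never reloaded
    build: ./caddy-tailscale/
    restart: unless-stopped
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}
    volumes:
      - ./Caddyfile.tailnet:/etc/caddy/Caddyfile:ro
      - caddy-tailnet-data:/data

volumes:
  caddy-tailnet-data:

The caddy-tailnet instance never sees the config that caddy-docker-proxy generates, so it never goes through the reload path that trips over the tsnet listener.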
