#caddy

Ok, so it took me more than 10 minutes to figure out the right Caddyfile syntax for a reverse-proxy with TLS using DNS challenge from Cloudflare.

Caddy is great, and generally it is super easy, but this particular case was not.

So in the interest of saving some other poor frazzled soul like myself from digging through the interwebs, I'm throwing an example up on my blog. Hope it saves someone a few minutes.

christopherbauer.org/blog/cadd

Caddy Reverse Proxy with TLS and Cloudflare DNS Challenge - A Caddyfile Example
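For reference, a minimal sketch of what such a Caddyfile usually looks like (the domain, upstream port, and API token variable here are placeholders rather than the post's actual values, and Caddy has to be built with the caddy-dns/cloudflare module):

example.com {
    # complete the ACME challenge via Cloudflare DNS instead of HTTP
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy 127.0.0.1:8080
}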

Trying out #Caddy, but not sure I like all the extra headache that comes from having to use statically compiled modules for core features…

So now I need to subscribe to the git repos of all my modules, their dependencies, and the main caddy repo just so I know when to rebuild and redistribute a new set of bits?
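For what it's worth, xcaddy is the usual way to produce those builds; the module path and version below are only examples:

# build a caddy binary with the Cloudflare DNS module compiled in
xcaddy build --with github.com/caddy-dns/cloudflare

# pin the Caddy version for reproducible rebuilds when a dependency updates
xcaddy build v2.8.4 --with github.com/caddy-dns/cloudflare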

I'm curious to hear what others are #SelfHosting! Here's my current setup:

Hardware & OS

Infrastructure & Networking

Security & Monitoring

Authentication & Identity Management

  • Authelia (Docker): Just set this up for two-factor authentication and single sign-on. Seems to be working well so far!
  • LLDAP (Docker): Lightweight LDAP server for managing authentication. Also seems to be working pretty well!
    #AuthenticationTools #IdentityManagement

Productivity & Personal Tools

Notifications & Development Workflow

  • Notifications via: #Ntfy (Docker) and Zoho's ZeptoMail (#Zoho)
  • Development Environment: Mostly using VSCode connected to my server via Remote-SSH extension. #VSCodeRemote

Accessibility Focus ♿🖥️

Accessibility heavily influences my choices—I use a screen reader full-time (#ScreenReader), so I prioritize services usable without sight (#InclusiveDesign #DigitalAccessibility). Always open to discussing accessibility experiences or recommendations!

I've also experimented with:

  • Ollama (#Ollama): Not enough RAM on my Pi.
  • Habit trackers like Beaver Habit Tracker (#HabitTracking): Accessibility issues made it unusable for me.

I don't really have a media collection, so no Plex or Jellyfin here (#MediaServer)—but I'm always open to suggestions! I've gotten a bit addicted to exploring new self-hosted services! 😄

What's your setup like? Any cool services you'd recommend I try?

#SelfHosted #LinuxSelfHost #OpenSource #TechCommunity #FOSS #TechDIY

@selfhost @selfhosted @selfhosting

Hi all. Hoping someone in the #SelfHosting community can help. I'm trying to set up #Linkwarden in #Docker behind #Caddy. The service is running, but I'm unable to create a user account. This is what I see in my browser console when I try:

register:1 [Intervention] Images loaded lazily and replaced with placeholders. Load events are deferred. See https://go.microsoft.com/fwlink/?linkid=2048113
register:1 [DOM] Input elements should have autocomplete attributes (suggested: "new-password"): (More info: https://www.chromium.org/developers/design-documents/create-amazing-password-forms)
<input data-testid="password-input" type="password" placeholder="••••••••••••••" class="w-full rounded-md p-2 border-neutral-content border-solid border outline-none focus:border-primary duration-100 bg-base-100" value="tyq5ghp!QVH-mva1agc">
register:1 [DOM] Input elements should have autocomplete attributes (suggested: "new-password"): (More info: https://www.chromium.org/developers/design-documents/create-amazing-password-forms)
<input data-testid="password-confirm-input" type="password" placeholder="••••••••••••••" class="w-full rounded-md p-2 border-neutral-content border-solid border outline-none focus:border-primary duration-100 bg-base-100" value="tyq5ghp!QVH-mva1agc">
Error
api/v1/users:1 Request unavailable in the network panel, try reloading the inspected page
Failed to load resource: the server responded with a status of 400 ()
Failed to load resource: the server responded with a status of 400 ()

compose file:

services:
  postgres:
    image: postgres:16-alpine
    container_name: linkwarden_postgres
    env_file: .env
    restart: always
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    networks:
      - linkwarden_net
  linkwarden:
    env_file: .env
    environment:
      - DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@linkwarden_postgres:5432/postgres
    restart: always
    # build: . # uncomment this line to build from source
    image: ghcr.io/linkwarden/linkwarden:latest # comment this line to build from source
    container_name: linkwarden
    ports:
      - 3009:3000
    volumes:
      - ./data:/data/data
    networks:
      - linkwarden_net
    depends_on:
      - postgres

networks:
  linkwarden_net:
    driver: bridge

Relevant part of .env file:

NEXTAUTH_URL=https://bookmarks.laniecarmelo.tech/api/v1/auth
NEXTAUTH_SECRET=x8az9q9w8ofAxnrVcer2vsPHeMmKSPbf

# Manual installation database settings
# Example: DATABASE_URL=postgresql://user:password@localhost:5432/linkwarden
DATABASE_URL=

# Docker installation database settings
POSTGRES_PASSWORD=redacted

# Additional Optional Settings
PAGINATION_TAKE_COUNT=
STORAGE_FOLDER=
AUTOSCROLL_TIMEOUT=
NEXT_PUBLIC_DISABLE_REGISTRATION=false
NEXT_PUBLIC_CREDENTIALS_ENABLED=true

Caddyfile snippet:

*.laniecarmelo.tech {
    tls redacted {
        dns cloudflare redacted
    }

    header {
        Content-Security-Policy "default-src 'self' https: 'unsafe-inline' 'unsafe-eval'; img-src https: data:; font-src 'self' https: data:; frame-src 'self' https:; object-src 'none'"
        Referrer-Policy "strict-origin-when-cross-origin"
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Xss-Protection "1; mode=block"
    }

    encode br gzip

    # Bookmarks
    @bookmarks host bookmarks.laniecarmelo.tech
    handle @bookmarks {
        reverse_proxy 127.0.0.1:3009
    }
}

Can anyone help? I have no idea how to fix this.
#SelfHosted #CaddyServer #Linux #Tech #Technology
@selfhost @selfhosted @selfhosting

Oh, ffs. I want to install #Seafile in #Docker with an #apache #Proxy inside a virtual machine (#virtuellenMaschine), because I want to test it first, and only Chuck Norris tests in prod. Why doesn't this work at least halfway out of the box?

It doesn't even work without the apache proxy. (Edit: there's still a #caddy in the mix, and it's not documented whether I actually need it when running behind apache, or what its settings have to be.)

Fun (actually not fun at all) fact about Caddy:

The conditions in this matcher will be merged with AND:

@matcher {
    path /foo
    header Header-Name value
}

But this one will be merged with OR, despite expressing exactly the same conditions:

@matcher {
    expression `path('/foo')`
    expression `header({'Header-Name': 'value'})`
}

Caddy has some cursed, barely-documented logic where matcher blocks always merge with AND unless two matchers of the same type are adjacent. In the latter case, they may be merged with AND or OR depending on matcher-specific logic, which is not publicly documented.

This results in completely different behavior depending on whether a matcher is defined using expression or directive syntax. Despite the docs implying that the two options are identical, they are not! You can have an existing, functional matcher with a mix of directives and expressions, and suddenly it breaks because one of the directives was replaced with an equivalent expression. It's extremely counter-intuitive.
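One workaround (my own reading, not from the post above): collapse everything into a single expression so the boolean logic is explicit instead of relying on how Caddy merges adjacent matchers:

@matcher expression `path('/foo') && header({'Header-Name': 'value'})`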

#Caddy #PSA #ServerAdmin #SelfHost

New blog post: how to pull web logs from #Caddy into #Clickhouse using #Vector.

scottstuff.net/posts/2025/02/2

Clickhouse is an open-source (plus paid, as usual) columnar DB. This lets you do ad hoc SQL queries to answer questions as well as create Grafana dashboards to show trends, etc.

scottstuff.net · Getting Caddy logs into Clickhouse via Vector

As mentioned before, I've been using the Caddy web server running on a couple of machines to serve this site. I've been dumping Caddy's access logs into Grafana's Loki log system, but I haven't been very happy with it for web logs. It's kind of a pain to configure for small uses (a few GB of data on one server), and it's slow for my use case. I'm sure I could optimize it one way or another, but even without the performance issues I'm still not very happy with it for log analysis. I've had a number of relatively simple queries that I've had to fight with both Loki and Grafana to get answers for. In this specific case, I was trying to understand how much traffic my post on the Minisforum MS-A2 was getting and where it was coming from, and it was easier for me to grep through a few GB of gzipped JSON log files than to get Loki to answer my questions. So maybe it's not the right tool for the job and I should look at other options.

I'd been meaning to look at Clickhouse for a while; it's an open-source (plus paid cloud offering) column-store analytical DB. You feed it data and then use SQL to query it. It's similar to Google BigQuery, Dremel, etc., and dozens of other similar systems. The big advantage of column-oriented databases is that queries that only hit a few fields can be really fast, because they can ignore all of the other columns completely. So a typical analytic query can just do giant streaming reads from a couple of columns without any disk seeks, which means your performance mostly just ends up being limited by your disks' streaming throughput. Not so hot when you want to fetch all of the data from a single record, but great when you want to read millions of rows and calculate aggregate statistics.

I managed to get Clickhouse reading Caddy's logs, but it wasn't quite as trivial as I'd hoped, and none of the assorted "how to do things like this" docs that I found online really covered this case very well, so I figured I'd write up the process that I used.
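(Not from the linked post, just a rough sketch of the Caddy half of that pipeline, with the hostname and paths as placeholders: the key part is emitting JSON access logs to a file that Vector can then tail and ship to Clickhouse.)

example.com {
    # structured access logs for a log shipper to pick up
    log {
        output file /var/log/caddy/access.log
        format json
    }
    root * /var/www/example.com
    file_server
}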

So I want to set up a #CI pipeline on my webserver to serve static sites.

I already have a @caddy setup that can serve static files, as well as a bunch of other stuff that all runs in #Docker containers. But I would like to have a CI pipeline that will pick up my repository changes, and build and deploy stuff to a directory that #Caddy can serve.

Now, how ridiculous would it be to have:

- an SSH server running in a Docker container
- @WoodpeckerCI, also in Docker

and get Woodpecker to build the site and use scp to copy the files over to the SSH server, which would have a shared volume with the Caddy container mapped to the /var/www directory?
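For the Caddy side of that idea, a site block along these lines would serve whatever the pipeline drops into the shared volume (the hostname and path are placeholders, not from the post):

static.example.com {
    # /var/www is the volume shared with the SSH/deploy container
    root * /var/www/static.example.com
    encode gzip
    file_server
}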

I am not ready to set up a whole @forgejo instance to serve from Forgejo Pages. Plus, why use the Pages thing when I have a perfectly good Caddy server running already, that would be serving the Forgejo instance anyway?

Why not some sort of S3 compatible service in a container?
Why not FTP?
How many containers can a guy run?
Am I losing my mind (probably)?