How to host your own (secure) blog...
...or the story of Let's Encrypt rate limits
This article is part of a series about setting up your own blog and hosting things in general.
So, after setting up my first blog, I was torn between my eagerness to start writing about my current adventures in code and the potential shame of hosting a blog without HTTPS support.
So I started looking around for a simple but secure solution. In the end it seemed obvious to use some kind of decoupled Let's Encrypt container, either combined with or integrated into a reverse proxy. This way the app, in this case Ghost, stays very simple while the complicated security work happens elsewhere.
The first thing I found was linuxserver/letsencrypt. This seemed promising: just spin up the container and point the integrated reverse proxy at a linked Ghost host. To make a long story short, this would have required me to dig into nginx configuration. I'm pretty sure that isn't very hard, and I will get back to it at some point, but for now I didn't want to try out too many new things at once.
The Solution
As is typical for my personal stereotype of a developer, I next turned to the very general solution to my problem. I learned that you can build a docker-compose configuration that serves every other container spun up at some later point, provided it is configured with some environment variables. This felt somewhat like building a framework that is only used by one application. Still, I found it quite interesting.
I ended up using ekkis/nginx-proxy-LE-docker-compose, as it had everything I wanted:
- isolated application container for easy setup
- abstracted nginx configuration
- automatic Let's Encrypt setup
- the possibility to host multiple secured containers on one host
The result can be found on GitHub. Let's go over the parts where I deviate from the original setup.
docker-compose.yml
nginx-proxy:
  ...
nginx-gen:
  build: ./patched_nginx_gen
  image: patched-nginx-gen
  ...
nginx-ssl:
  ...
This uses the following Dockerfile:
./patched_nginx_gen/Dockerfile
FROM jwilder/docker-gen
COPY ./nginx.tmpl /etc/docker-gen/templates/nginx.tmpl
This just fixes the problems ekkis describes at the end of his README: the nginx template gets copied to the appropriate position in the docker-gen container. The "gen" here stands for generation (of config files).
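If you ever want to check what docker-gen actually rendered, you can peek into the proxy container. A hedged example, assuming the proxy container keeps the name nginx-proxy and the default config path used by jwilder/nginx-proxy; adjust both if your setup differs:

docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf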
Now, with all this set up, we can start any other docker container, and with the right environment variables set, the cluster of containers gets a certificate for your application and serves it under the specified domain. This means, of course, there is one more prerequisite to this whole setup: you need your own domain, and it has to point at the address of the docker host that runs the reverse-proxy compose construct and the served applications.
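A quick way to verify the DNS part before starting anything, with blog.example.com as a placeholder for your own domain:

dig +short blog.example.com

The output should be the public IP address of your docker host; if it isn't, the certificate validation described below will fail.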
Now for the critical environment variables, as seen in my Ghost definition for this blog:
The actual App-Setup
./ghost/docker-compose.yml
version: '3'
services:
  ghost:
    image: ghost:alpine
    container_name: ghost-blog
    volumes:
      - $GHOSTCONTENT:/var/lib/ghost/content
    expose:
      - "2368"
    environment:
      - url=https://$DOMAIN
      - VIRTUAL_HOST=$DOMAIN
      - LETSENCRYPT_HOST=$DOMAIN
      - LETSENCRYPT_EMAIL=admin@antonherzog.com
    networks:
      - nginx-proxy
networks:
  nginx-proxy:
    external: true
Let's go over this file from top to bottom. It binds a folder on the docker host's filesystem as the content folder for the Ghost blog. In addition, it exposes port 2368, the port Ghost works on. I think this is redundant, because the ghost:alpine image already exposes this port, but it seemed clearer to have it explicitly in the definition of the app. The first environment variable is also related to Ghost: it uses url to know where links on the blog itself should point.
Now to the juicy part. VIRTUAL_HOST is used by the nginx proxy container to know which incoming requests should go to the app. In my case this is the value in $DOMAIN, and it is mapped to port 2368 of the ghost container.
The next two variables are for the Let's Encrypt setup. You have to specify your domain, and you can name an email address to be contacted at. As suggested in the original repo, you can now use docker logs -f nginx-ssl to see the certification process in action. This is still a nail-biter for me, because I have seen it fail so often when I tried this setup for the first time. The last thing here is the entry under networks. This definition ensures that every container in the general setup, as well as any app container, works in the same context. For that to work, there has to be a network named 'nginx-proxy' present. For that purpose I added a bash script under ./init_network.sh, sketched below.
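The script itself can be as small as one guarded command. A minimal sketch of what ./init_network.sh might look like:

#!/usr/bin/env bash
# Create the shared network once; ignore the error if it already exists.
docker network create nginx-proxy 2>/dev/null || true

With the network in place, you can supply the two variables from your environment and start the app. The values here are placeholders for your own domain and content path:

export DOMAIN=blog.example.com
export GHOSTCONTENT=/srv/ghost/content
docker-compose up -d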
The Pitfalls
Here I will list the biggest problems that kept me from deploying this not-so-complicated setup:
Config File Generation/Encoding
This should not be something you have to say, in my opinion, but keep in mind which OS you are working on and whether your encodings are compatible when generating files. In my case, I first downloaded the main docker-compose.yml file and then ./patched_nginx_gen/nginx.tmpl using a copy-pasted command from GitHub.
curl -O https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl
Seems easy enough, doesn't it? Because I was working in Windows/PowerShell at the time, I replaced the -O with -UseBasicParsing and appended > nginx.tmpl. Or so I thought. This was horribly wrong in multiple ways:
Firstly, curl under Linux returns the content of the web request you send, whereas the PowerShell alias curl is actually just Invoke-WebRequest. This means the return value in PowerShell is a PSObject with a lot of properties, like...
StatusCode : 200
StatusDescription : OK
Content : {{ $CurrentContainer := where $ "ID" .Docker.CurrentContainerID | first }}
{{ define "upstream" }}
{{ if .Address }}
{{/* If we got the containers from swarm and this container's port is published...
RawContent : HTTP/1.1 200 OK
Content-Security-Policy: default-src 'none'; style-src 'unsafe-inline'; sandbox
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
X-Frame-Options: deny
X...
Forms :
Headers : {[Content-Security-Policy, default-src 'none'; style-src 'unsafe-inline'; sandbox],
[Strict-Transport-Security, max-age=31536000], [X-Content-Type-Options, nosniff], [X-Frame-Options,
deny]...}
Images : {}
InputFields : {}
Links : {}
ParsedHtml :
RawContentLength : 16288
So I was happily writing this whole object dump into nginx.tmpl, and it took me longer than I'm willing to admit to notice that. I then fixed the command to:
Invoke-WebRequest -UseBasicParsing https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl `
| Select -ExpandProperty Content `
> nginx.tmpl
Sadly, I still got errors along the lines of can't read content of 'nginx.tmpl:4'. At this point I learned that Let's Encrypt rate-limits failed certificate acquisitions. This forced me to leave the system alone for a while and come back later, when I wasn't persona non grata anymore.
This, of course, helped me take a step back. In the end the problem is the encoding: in Windows PowerShell the little > redirection is effectively Out-File, which writes UTF-16LE by default, so the content returned by Invoke-WebRequest ended up in the file as UTF-16LE. When using curl you get UTF-8 encoded files. So, finally, I managed to get a working command to fetch my files from GitHub:
Invoke-WebRequest -UseBasicParsing https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl `
| Select -ExpandProperty Content `
| Set-Content -Encoding UTF8 nginx.tmpl
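If you want to verify which encoding actually landed on disk, the first two bytes give it away: a UTF-16LE file written by PowerShell starts with the byte order mark FF FE. A quick check for Windows PowerShell 5.x (the -Encoding Byte parameter was removed in later versions):

Get-Content nginx.tmpl -Encoding Byte -TotalCount 2 | ForEach-Object { '{0:X2}' -f $_ }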
This whole problem is, in my opinion, a good argument for scrapping all the alias definitions in PowerShell. The behaviour of curl and the impostor Invoke-WebRequest is not equal in any sense, not even without additional arguments. There is hope, though. For a while now I have tried to always use written-out PowerShell cmdlets, as they are more readable in scripts and you learn them better. The latest installment of PowerShell, PowerShell 6.x, which runs on dotnet-core, no longer defines an alias for curl. Instead the native binary is used, as you can see when calling Get-Command curl in PowerShell 6:
CommandType Name Version Source
----------- ---- ------- ------
Application curl.exe 7.55.1.0 C:\WINDOWS\system32\curl.exe
Using this curl you get plain UTF-8 output, so copy/paste would simply have worked.
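You can also sidestep the alias entirely, even on Windows PowerShell 5.x, by calling the binary explicitly, assuming a curl.exe is on your PATH:

curl.exe -O https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl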
Compose-up vs. Compose-build
This point is more of a thing to keep in mind. In my setup I use a custom-built docker image for the nginx-gen container, which just copies a config file to the appropriate path in the docker-gen image.
./patched_nginx_gen/Dockerfile
FROM jwilder/docker-gen
COPY ./nginx.tmpl /etc/docker-gen/templates/nginx.tmpl
This image gets built the first time you call docker-compose up (-d). So in my head there was this instant connection: docker-compose builds all referenced Dockerfiles. But that's just not the case. When you call docker-compose up (-d) again with an image patched-nginx-gen already on your machine, it just uses that. This can produce unexpected update problems when you are trying to get the image right. You either have to call docker-compose build or delete the current image with docker image rm patched-nginx-gen.
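In practice, that means rebuilding explicitly whenever you change the Dockerfile or the template, for example:

docker-compose build
docker-compose up -d

Newer docker-compose versions also accept --build on up, which rebuilds the referenced images in one step:

docker-compose up -d --build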
Human Error and Missing Research
The one thing I learned, again, is that a little more time spent on research before hacking away would have saved me a lot of time in the end. The hours and hours spent figuring out the first run with linuxserver/letsencrypt were completely unnecessary. I think this will be a recurring theme on this blog.
Summary
In the end, setting up your own blog is not that big an effort, especially when you have experience with webservers, domains and such things. That experience can partly be substituted with some knowledge of docker. At some point in the future I will get back to the setup of this blog and refine it a bit.
As always, I'd like to get some feedback. Just PM/@ me on Twitter. I'm happy to answer questions and take constructive criticism.