In this post, I will explain how to set up a low-maintenance, carbon-neutral, self-hosted Nextcloud instance that stores its data in Google Cloud Storage. To make the setup easier, we'll be using Docker.

For the last few years, all my important data was backed up in Google Drive. I chose it because I was already deep into the Google ecosystem, the first fifteen gigabytes of storage are free, and it's just easy to use.


Then, something weird started happening: the more I got in touch with software development, the more interested I became in free and open source software - I just like the idea of services that you can self-host. It gives me peace of mind that I'll still be able to access my data if the vendor goes out of business.

I can hear you screaming: "Geez, Google will sooner have eaten the world than gone out of business. Why care about it?" Yes, you're right. Google won't go out of business anytime soon, but still: they could become evil (some argue they already are), or they could just change their pricing model. Which would also be kind of evil.

So, in order to stay out of Google's walled gardens (beautiful, but still walled), I started to think about how I could switch to a self-hostable data and backup cloud. My requirements:

  1. Low Maintenance. If it were high maintenance, I would’ve surely fucked something up and all of my data would’ve been gone at some point.
  2. Regional redundancy. I don't want to lose all my data because some data center burns down or suffers water damage.
  3. Cross-Platform Support. I want to access my data on my phone, on all desktop systems and in the browser.

Introducing: Nextcloud

As it turns out, requirement #3 narrows the field to pretty much just Nextcloud - which is great, since it's widely supported and there are even dedicated Nextcloud hosting companies. I tried out Hetzner's Nextcloud hosting, which had two big flaws: it's not regionally redundant, and you cannot assign a custom domain to it. Since all the Nextcloud hosters I found were either expensive or didn't offer regional redundancy, I decided to look into self-hosting.

Since low maintenance was one of my requirements, I did some research to find out how to abstract away as much responsibility as possible and hand it to some cloud provider. This is how I found out about using object storage as a primary storage backend. You can imagine object storage as an API that models a file system and allows you to store arbitrary BLOBs of data in it. The beauty of it is that the specifics of storage, retention and backups are abstracted away by the provider, so you don't have to worry about them. Well-known object storage solutions are Amazon S3, Google Cloud Storage (GCS) and Azure Storage, which are cloud-hosted; you could also use OpenStack's Swift or Minio, which are both open source and can be self-hosted.

Using S3 was the most obvious choice since it's natively supported by Nextcloud. Nonetheless, I decided against it, since Amazon's AWS draws its power from coal plants, which I don't want to support. Both Microsoft's Azure and Google's GCP run on renewable energy and offset their carbon emissions by donating to e.g. rainforest projects, so in the end I decided to use GCP.

Remember: it's not about being free from Google, but being free from Google's products. Object storage APIs are pretty similar, so I could easily move my data from GCP to some other host in the future while still using my Nextcloud instance.

There's one problem, though: Nextcloud does not have built-in support for GCS's protocol, just for the S3 protocol (maybe that's a PR to be made? 😃). To translate between the two, I used Minio, which exposes an S3-compliant interface and can use GCS as its own storage backend. You can think of it as a mediator between the two protocols.

In the following chapter, I will explain how to set up all of this on a tiny VM.

Setting up Nextcloud with Minio

To follow this guide, you'll need a VM with Docker and Docker-Compose installed (How to install Docker on Ubuntu, How to install Docker-Compose). I personally use and recommend Hetzner's cloud offering, which is cheap and has servers located in the EU. For me, the smallest size (1 core, 2 GB RAM) definitely sufficed, but your mileage may vary.

To begin, rent a VM and SSH into it. Place this docker-compose.yml on your server. We will fill in the blanks in a moment.

If you don't know about Docker yet, take a moment to read up on it. Docker-Compose is a tool used to easily specify the deployment details of multiple Docker containers. If you're interested in how the file is structured, have a look at the documentation. I will try to explain the different services and their interdependencies using the following diagram:

```mermaid
graph TD
  Internet((Internet))
  pgbackup[Postgres Backup]
  Internet --makes HTTPS requests--> Traefik
  Traefik --proxies to, as HTTP traffic--> Nextcloud
  Nextcloud --stores files in--> Minio
  Nextcloud --stores user data in--> Postgres
  Nextcloud --uses as cache--> Redis
  Minio --uses as storage backend--> GCS
  Postgres --> pgbackup
  pgbackup --stores backups in--> Minio
```

Note that this diagram models data flow and service dependency, not control flow.

The docker-compose.yml file you copied earlier basically encodes this model and your deployment configuration.

Before deploying this, we'll need to make sure everything is configured, so let's go through all the required settings and fill in the placeholders in docker-compose.yml:

First, think of the administrator credentials that should initially be set for your user. Enter them in the fields NEXTCLOUD_ADMIN_USER and NEXTCLOUD_ADMIN_PASSWORD. Replace the placeholder for NEXTCLOUD_TRUSTED_DOMAINS with the domain you're deploying your server to. If you don't have a domain name that you want to use, you can also fill in the IP address of the server, so Nextcloud knows which requests to respond to (if you're using an IP address, have a look at the arguments of the traefik service to make sure it doesn't try to apply for SSL certificates). You'll also need to enter your domain/IP in one of the nextcloud service's labels; the corresponding place is marked as put_in_your_domain. This label tells Traefik (the reverse proxy that handles HTTPS certificates and SSL termination in our setup) to route incoming requests on this domain to the Nextcloud service.
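For orientation, that part of the nextcloud service might look roughly like this. This is a sketch, not the exact file: the label format assumes a Traefik v1-style rule, and cloud.example.com stands in for your own domain.

```yaml
nextcloud:
  image: nextcloud
  environment:
    NEXTCLOUD_ADMIN_USER: admin
    NEXTCLOUD_ADMIN_PASSWORD: some-strong-password
    NEXTCLOUD_TRUSTED_DOMAINS: cloud.example.com
  labels:
    # Route requests for this host to the Nextcloud container
    - "traefik.frontend.rule=Host:cloud.example.com"
```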

The next thing to configure is the Minio instance. Remember: since Minio has an S3-compliant interface, it handles the translation between S3 and GCS. Think of a good pair of credentials (you don't need to remember them, just use some random generator) and enter them into MINIO_ACCESS_KEY and MINIO_SECRET_KEY. To connect Postgres Backup, fill in the access and secret key you just created into S3_ACCESS_KEY_ID and S3_SECRET_ACCESS_KEY. Think of a good name for the bucket your backups should be stored in (it has to be globally unique amongst all of GCS's buckets) and enter it into S3_BUCKET.
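If you don't have a password generator at hand, openssl can produce suitable random credentials. The key lengths below are just a convention borrowed from AWS-style credentials; Minio accepts any sufficiently long strings.

```shell
# Generate a random 20-character access key and 40-character secret key
MINIO_ACCESS_KEY=$(openssl rand -hex 10)   # 10 random bytes -> 20 hex chars
MINIO_SECRET_KEY=$(openssl rand -hex 20)   # 20 random bytes -> 40 hex chars
echo "Access key: $MINIO_ACCESS_KEY"
echo "Secret key: $MINIO_SECRET_KEY"
```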

To connect Nextcloud to Minio, create the file s3.config.php in the same directory as your docker-compose.yml. It will be mounted into the config directory of the Nextcloud container, so it's taken into account on Nextcloud's startup. Fill out the key and secret fields using your access and secret keys. Also fill in another unique bucket name in the bucket field.
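As a sketch, s3.config.php will look something like the following. It uses Nextcloud's documented objectstore configuration format; the hostname minio assumes that's the Minio service's name in docker-compose.yml, and the bucket name and credentials are placeholders.

```php
<?php
$CONFIG = [
  'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
      'bucket'         => 'my-unique-nextcloud-bucket',
      'key'            => 'your-minio-access-key',
      'secret'         => 'your-minio-secret-key',
      // "minio" resolves to the Minio container on the Docker network
      'hostname'       => 'minio',
      'port'           => 9000,
      'use_ssl'        => false,
      'use_path_style' => true,
    ],
  ],
];
```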

The last configuration step is to hook up Minio with GCS (if you decided to use some other storage provider, have a look at the Minio documentation). To begin, go to console.cloud.google.com and create a GCP account if you don't already have one.

Once you have access to your account, go on to create the storage bucket. Search for "Storage", click on the corresponding result and then on "Create New Bucket". Give it a name, select "Multi-Region" as the location type and leave the remaining options as they are. Click "Create" to finish the setup.

Now you just need to obtain credentials for Minio to access GCS. Search for "Service accounts", click on the corresponding result, click on "Create Service Account" and fill in a service account name and description. Press "Continue" and select "Storage Object Admin" as its role. Create a key of type JSON and download it to your computer. This file contains the key that's needed to authenticate with GCS. Move it to your virtual machine and place it in the same directory as your docker-compose.yml. It needs to be named credentials.json.
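For reference, the minio service can then be wired up to GCS roughly like this. A sketch under two assumptions: Minio's GCS gateway mode was available at the time of writing (it has since been removed from newer Minio releases), and your-gcp-project-id is a placeholder for your actual GCP project ID.

```yaml
minio:
  image: minio/minio
  # Run Minio as an S3-compatible gateway in front of GCS
  command: gateway gcs your-gcp-project-id
  environment:
    GOOGLE_APPLICATION_CREDENTIALS: /credentials.json
    MINIO_ACCESS_KEY: your-minio-access-key
    MINIO_SECRET_KEY: your-minio-secret-key
  volumes:
    # The service account key you downloaded from GCP
    - ./credentials.json:/credentials.json
```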

Now that everything is configured properly, you can create the deployment. Run docker-compose up -d. You'll now see how your containers are starting. Run docker-compose logs -f to see the logs while they are starting. Once everything's settled, you can close the logs using CTRL-C.

Open your browser and go to the domain or IP address of your server. You'll be greeted by the Nextcloud login screen, where you can use the admin credentials you set earlier.

This concludes the setup! Have fun with your self-hosted, regionally redundant Nextcloud instance.

Conclusion

I've been running this Nextcloud setup for some months now. The cost is slightly higher than Google Drive's, but you get the benefits of an open-source solution and prevent vendor lock-in. I love my instance, because it's reliable and works flawlessly with all my devices. It also feels great to be in control of your own infrastructure 😄

Hopefully, this post inspired you to host your own Nextcloud instance using object storage as its backend. Tweet at me (@skn0tt) or let me know in the comments how it worked out for you! I'd love to hear your thoughts about this post, too.

What is Outline?

Outline is a content-first wiki. As opposed to more traditional wikis like Wiki.js and BookStack, Outline is about getting out of your way and actually letting you write. It's designed to be simple to use. Outline provides one of the smoothest and friendliest documentation platforms on the market, paid or otherwise.

Where Outline becomes complicated is hosting it yourself. Outline is designed to be a webscale solution and has multiple moving parts, including some that are AWS-specific (we'll get around to that later). For those of you who would prefer the simple option, Outline's cloud edition is quite affordable, especially for teams.

This guide (on the other hand) is all about self-hosted technology. Let's walk through setting up Outline in a smaller, self-hosted environment!

Requirements

  • Basic knowledge of Docker Compose
  • A working Linux Docker environment, with Docker Compose. This guide will be running Docker on openSUSE Leap 15.2
  • A domain and a public IP with the ability to forward ports
  • A way to remotely manage your docker environment. This guide will be using VSCode 1.53.2 with the remote SSH extension
  • A Slack account with your own workspace. This will be used for authentication purposes.

TL;DR

You can find the final configuration files on the blog repository here. You can jump to the final steps here.

Getting Started

The last requirement on that list might have been confusing. If this is a self-hosted guide, why are we involving Slack?

At the moment, Outline only supports two OAuth authentication providers: Slack and Google. On one hand, this makes sense, because authentication is hard to do right. Of course it makes sense to let a larger company do the work. On the flip side, this is a problem for people trying to disconnect from the cloud. You just can't, completely, if you're using Outline.

The good news is that there is work happening on allowing additional authentication providers, but it seems to be slow going. For now, it's just a sour part of the experience that we'll have to tolerate. Using Slack as an authentication provider is completely free, so that's something. Let's set that up now.

Setting up Slack

  1. Head over to Slack to create a new workspace, substituting your e-mail.
  2. Set up your domain for Slack if you haven't already. You will be prompted for a domain name, workspace name, and (optionally) adding teammates.
  3. Once you have a workspace, head over to the Slack app section and create a new app.
  4. And finally, create your new app (it's worth having a read through the Terms of Service, as you should when using any service).

Now that you have created your app, there's some information we'll need from the portal, but that comes later. With that out of the way, let's set up Outline!

Setting up Outline

Outline (conveniently enough) does come in a Docker package, which is great. After all, we're using a Docker host for this guide! However, it's not like any Docker package you'd have seen on linuxserver.io. Just look at the laundry list of dependencies:

  • Node.js >= 12
  • Postgres >=9.5
  • Redis >= 4
  • AWS S3 bucket or compatible API for file storage
  • Slack or Google developer application for authentication

Holy Moly. That’s an intimidating chain of requirements. The good news is we’ve already gotten the Slack requirement out of the way. Now, how about the rest of them?

Crafting the Outline Docker Compose File

Outline does helpfully provide a docker-compose file in their GitHub repo, but it's aimed purely at development. There are many settings in that compose file that will not apply to a production self-hosted environment (like the build argument, for example). However, it's something to work with. First off, let's strip out all of the environment data and clean up the content.
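After stripping, the skeleton might look roughly like this. This is a sketch rather than the exact file from the repository; the image tags and volume paths are assumptions.

```yaml
version: "3"
services:
  outline:
    image: outlinewiki/outline
    env_file: ./outline.env
    depends_on:
      - postgres
      - redis

  redis:
    image: redis

  postgres:
    image: postgres
    env_file: ./outline_postgres.env
    volumes:
      - ./pgdata:/var/lib/postgresql/data
```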

That's starting to look better. I've also stripped out the port mappings, as we don't plan to use them: instead we'll introduce Caddy to take care of that later.


However, I've left something out. The Docker image called fake-s3 (in the original docker-compose) hasn't been included. But what is it?

Well, Outline actually has two data stores, one of which is for file storage. In cloud terms this is an S3 bucket, but in reality it's just a database optimized for file storage. The S3 part refers to Amazon's AWS S3 service, which is a proprietary cloud format (my favourite). fake-s3 is supposed to emulate that.


Well, there's another service that emulates AWS S3, and it has over 100 million downloads on Docker Hub. Given the higher level of support, Minio seems like a much more obvious choice for S3 emulation. So let's massage a Minio image into our docker-compose file.
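A minimal Minio service could look like this — a sketch, with the env file name and volume path chosen to match the conventions used later in this guide:

```yaml
  minio:
    image: minio/minio
    env_file: ./outline_minio.env
    # Serve the local /data directory as S3-compatible storage
    command: server /data
    volumes:
      - ./miniodata:/data
```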

Excellent, we’re getting somewhere! However we’re only about halfway done with setting up our environment. Let’s have a look at networking!

Configuring the Networking in Docker Compose

As mentioned before, Outline is designed to be a cloud-first service. A consequence of this is that networking gets a bit complicated, as everything Outline does assumes that it'll do all communication through discrete, encrypted HTTPS services. Of course, if you're hosting it all on a single machine, that's not true by default.

For Postgres, we can get away with manually disabling the encryption check later on. Note that while this is fine to do when Postgres is on the same host as all services accessing the database, it’s not ok to do in a distributed environment like Kubernetes.

The Minio S3 bucket, however, has to be accessed through HTTPS, as the service will be delivering content straight to the end users. That also means the Outline service needs to contact Minio through HTTPS.

To make all this magic work, we're going to use Caddy to encrypt our HTTP services. To do so, though, we first need to define our networks in the docker-compose.yaml. Let's do that now.

This is the final revision of the outline docker-compose.yaml file, also found in the blog’s repository.

Rather than having Docker define our networks automatically, we have now explicitly defined two networks: outline-internal and reverseproxy-nw. The internal network allows Outline to communicate with Redis and Postgres directly, whereas reverseproxy-nw allows communication with our (soon to be set up) reverse proxy.

You may also notice that reverseproxy-nw is defined as external. This allows you to share the network between multiple docker-compose.yaml files (also known as stacks). The reverse proxy network will later be joined up with Caddy to provide HTTPS encryption. Speaking of which, let's get Caddy involved!
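In outline form, the network layout looks like this (a sketch of the relevant keys only): outline sits on both networks, postgres and redis stay internal, and minio joins the reverse proxy network because Caddy must reach it.

```yaml
services:
  outline:
    networks:
      - outline-internal
      - reverseproxy-nw
  postgres:
    networks:
      - outline-internal
  redis:
    networks:
      - outline-internal
  minio:
    networks:
      - reverseproxy-nw

networks:
  outline-internal:
  reverseproxy-nw:
    external: true
```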

Setting Up Caddy

Like most web apps, you will need HTTPS encryption for most browsers to find your site acceptable. Which means setting up a domain, setting up a static IP, getting certs from your CA, installing certs, letting certs expire and wreck your site while you scramble to fix it… ugh. No thanks. Let's let somebody else do most of that hard work.


Caddy is a webserver designed around sane defaults and being simple to set up. Simple is good. Caddy is also capable of acting as a reverse proxy: you can set up Caddy in front of your trusted, unencrypted services (preferably on a private or localhost network). Caddy then automatically negotiates certificates with Let's Encrypt and forwards your traffic to the right places. In fact, this blog is being served by Caddy right now!

Double Checking the Pre-requisites

Before going any further, you must have the following:

  • A domain you control with at least two subdomains configured to point at your public IP address. This guide will be using kb.<mydomain>.com.au and kbdata.<mydomain>.com.au.
  • An IP address that is publicly accessible and doesn’t change (or has DDNS configured).
  • Ports 443 and 80 forwarded to your docker host.

Note that more ISPs these days are doing Carrier Grade NAT (CGNAT), which basically breaks port forwarding. Double check that you are not on a plan that is NATed. Most ISPs can offer an option to avoid this.

If the above pre-requisites lost you, you may instead want to consider a VPS service, like Digital Ocean. A basic docker host on digital ocean is only £5 a month and should handle Outline fine. You also do not have to worry about port forwarding or NAT (but you do have to worry about securing your SSH connection).

Crafting Caddy’s Docker Compose

Reverse proxies are extremely useful. They let you run multiple services out of a single port and have them all routed by subdomain. Because of this, it's good practice to separate your reverse proxy from other docker-compose files. As such, we'll make a separate one for Caddy:

This is the final revision of the docker-compose.yaml file for caddy, also found in the blog repository.

Most of this is fairly standard docker-compose material. We are defining a service using the official Caddy Docker image. We define two volumes: a generic data folder for Caddy, and a Caddyfile.

A Caddyfile is a config file that defines how Caddy will operate.

We also bind ports 80 (HTTP) and 443 (HTTPS) to Caddy, as it will be our gateway for both the Minio service and the Outline service.

As said before, Caddy is known for its simplicity, which comes into play now. We need to define two sites (hence the two subdomains): one for Outline, and one for Minio. Let's create a Caddyfile now (change <mydomain> to your actual domain):

Yeah, seriously, that's all you need! Well, almost. S3 buckets (including Minio) use something called a "header signature". It's basically a way to determine if someone's been messing with the HTTP headers during transport (which most reverse proxies, including Caddy, do by default). If we don't fix this, communication to the bucket will break. Luckily, fixing it is not difficult:
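Put together, a Caddyfile along these lines should do. A sketch under some assumptions: Caddy v2 syntax, Outline on its default port 3000, and Minio on its default port 9000. The header_up line preserves the original Host header so Minio's signature check still passes.

```
kb.<mydomain>.com.au {
    reverse_proxy outline:3000
}

kbdata.<mydomain>.com.au {
    reverse_proxy minio:9000 {
        # Keep the original Host header intact so Minio's
        # S3 header-signature check still passes
        header_up Host {host}
    }
}
```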

And done! This is probably feeling like a marathon to get outline operational, but the good news is we’re almost there. Next up is the environment files and then we can put it all together.

Configuring the Environment Files

Something you'll notice about the Outline docker-compose.yaml is that all the images point to separate environment files. It is good practice in Docker to separate your environment data from your docker-compose.yaml file, to help prevent leaking secrets. It also helps keep the configs clean.

Let’s generate our three .env files now: outline_postgres.env, outline_minio.env, and outline.env.

outline_postgres.env:
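A minimal sketch — these variable names come from the official postgres image, and the values are placeholders you should change:

```
POSTGRES_USER=outline
POSTGRES_PASSWORD=change-me
POSTGRES_DB=outline
```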

outline_minio.env:
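Again a sketch: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are the (legacy) credential variables the minio/minio image reads, and MINIO_BROWSER stays commented out until the bucket has been created.

```
MINIO_ACCESS_KEY=change-me-access-key
MINIO_SECRET_KEY=change-me-secret-key
# Uncomment once the outline bucket exists:
# MINIO_BROWSER=off
```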

As for outline.env, we first need some data from Slack.

Getting the Slack Information

Open up your Slack app and get the Client ID and Client Secret.

Head to OAuth & Permissions in the sidebar and create a new redirect URL. The URL should be in the format https://kb.<yourdomain>.com.au/auth/slack.callback. Add the URL and save.

Now, let's take a heavily modified version of the sample.env file here. Remember to change the following:

  • SECRET_KEY to a randomly generated key. You can generate one on Linux by running openssl rand -hex 32
  • UTILS_SECRET to a randomly generated key, generated the same way
  • URL to your domain
  • DATABASE_URL to contain your new Postgres username and password
  • SLACK_KEY to your Slack Client ID
  • SLACK_SECRET to your Slack Client Secret
  • AWS_ACCESS_KEY_ID to your minio access key
  • AWS_SECRET_ACCESS_KEY to your minio secret key
  • AWS_S3_UPLOAD_BUCKET_URL to kbdata at your domain
  • (Optionally) fill out your slack integration details and SMTP provider information
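Putting those changes together, outline.env ends up looking roughly like this. A sketch based on Outline's sample.env: the exact variable set can differ between Outline versions, and all values shown are placeholders.

```
SECRET_KEY=<output of openssl rand -hex 32>
UTILS_SECRET=<output of openssl rand -hex 32>
URL=https://kb.<yourdomain>.com.au
# Username/password must match outline_postgres.env;
# "postgres" is the service name on the internal network
DATABASE_URL=postgres://outline:change-me@postgres:5432/outline
PGSSLMODE=disable
REDIS_URL=redis://redis:6379
SLACK_KEY=<your Slack Client ID>
SLACK_SECRET=<your Slack Client Secret>
AWS_ACCESS_KEY_ID=<your Minio access key>
AWS_SECRET_ACCESS_KEY=<your Minio secret key>
AWS_REGION=us-east-1
AWS_S3_UPLOAD_BUCKET_URL=https://kbdata.<yourdomain>.com.au
AWS_S3_UPLOAD_BUCKET_NAME=outline
FORCE_HTTPS=true
```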

Hooray! We have everything we need to actually generate a working instance of outline.

Putting it all Together

Getting close to the end goal now. You can find the generic folder structure at this github page for convenience.

Creating the Folder Structure

Open a remote connection to your docker host, and create the following structure wherever you plan to store your data:

  1. Two folders: outline and caddy
  2. Inside outline, place your outline docker-compose.yaml, and your three env files: outline_postgres.env, outline_minio.env, and outline.env (replacing any generic data with your data, including new passwords!)
  3. Inside caddy, place your caddy docker-compose.yaml and your Caddyfile (replacing the domain with your domain)

You can see me do so (with the generic files) here, using Visual Studio Code opened to /mnt/media/containers-testing:

Initializing the Containers

The first run is going to look a bit different than a normal run.

  1. Create the reverseproxy-nw using the command docker network create reverseproxy-nw (docker compose will complain if you try to include an external network before creating it).
  2. Navigate in the terminal to the caddy folder and run docker-compose up -d. If your domain is set up correctly, it should run without error. You can check for errors with docker-compose logs.
  3. Navigate in the terminal to the outline folder
  4. In the docker-compose.yaml file, comment out the normal outline command and uncomment yarn sequelize:migrate
  5. Run docker-compose up -d. You should see the database being initialized when you run docker-compose logs outline.

Creating the Bucket


We also have to initialize the bucket where image uploads will be stored.

  1. Navigate to https://kbdata.<yourdomain>.com.au. Log in with your Minio access key and secret key.
  2. Press the plus sign in the bottom right and choose "create new bucket". Name the bucket outline and press Enter to create it.

Finishing Up

  1. Head back to your outline_minio.env file and uncomment the MINIO_BROWSER=off line.
  2. Go to your docker-compose.yaml for outline, comment out the yarn sequelize command, and uncomment the yarn start command.
  3. Rerun docker-compose up -d
  4. Navigate to https://kb.<yourdomain>.com.au. If all’s gone well, you should have a working outline instance!

Troubleshooting

  • If you find that Slack kicks back an error when logging in, double-check that you set your callback URL in the Slack app properly.
  • If you find that you can't upload images, your Minio bucket might not be set up correctly.

Conclusion


Phew, that was not the easiest Docker container to deploy. It definitely has the feel of shoving a square peg into a round hole. However, if you've persevered, you are now self-hosting the (subjectively) best documentation platform available. Development on Outline is very active, so stay tuned and watch your new platform get better every month!