Hosting a Static Website on IPFS

Update 2022-07-17: This post is outdated. While a mirror of this website is still deployed to IPFS, the claims below that the page you are currently on is served by the IPFS network are no longer true unless you are viewing the IPFS mirror. The non-IPFS version is now hosted by SourceHut Pages. The automatic IPFS deployment has changed as well: instead of IPFS watching the domain, I have SourceHut Builds push the content to my IPFS gateway and pin it. (See .build.yml).

The page you are browsing right now is being served by IPFS. You can verify this yourself; any content available on IPFS is also available on this gateway¹. For example, you can visit the IPFS getting started directory or a previous version of this blog.

I'm not even assuming which gateway you're using to view this post. Any of the links on this page work whether you are reading this on my public gateway, on a gateway running locally, or even via the raw IPFS protocol handler. (For the last one you'll need a browser that can route the ipfs:/ipns: protocols; IPFS Companion does this on Firefox/Chrome/Opera/Edge.)

I set this up mostly as a technical POC for my own interest, but there are some advantages that come with this setup:

The content I am hosting is always available at its ipns:// address as long as at least one computer on the network has the content. Even if my public gateway goes down or is overloaded with traffic, backup nodes still host the content.

Updates to the content don't require touching any remote servers. I run ipfs add on the new version locally, then ipfs name publish {CID}, which signs the root hash of the website with my key. Alternatively, I can set a DNS TXT record that points directly to the IPFS hash of the content.
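A sketch of that publish flow; the CID and domain here are made-up placeholders, and the exact record value is what the pin script below expects to find:

```
# Add the new site build to the local node; -Q prints only the final CID
$ ipfs add -r -Q ./public
QmExampleCid

# Publish the CID under my IPNS key
$ ipfs name publish /ipfs/QmExampleCid

# Or, equivalently, set a dnslink TXT record on the domain:
#   example.com.  TXT  "dnslink=/ipfs/QmExampleCid"
```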

To keep the content seeded on the IPFS network, I have a cron job that runs something like this to pick up any updates and ensure they are pinned:

#!/usr/bin/env bash

DOMAINS_TO_PIN=(
  # Domains with dnslink TXT records to keep pinned
  # Others
)

PINNED_HASHES=$(ipfs pin ls -q --type recursive)

for DOMAIN in "${DOMAINS_TO_PIN[@]}"; do
  # Read the dnslink value out of the domain's TXT record
  DNSLINK=$(dig +short TXT "$DOMAIN" | sed 's/^"dnslink=\|"$//g')
  # Resolve it to a plain content hash
  HASH=$(ipfs resolve "$DNSLINK" | sed 's/^\/ipfs\///g')
  if [[ "$PINNED_HASHES" != *"$HASH"* ]]; then
    echo "Pinning $HASH for $DOMAIN"
    ipfs pin add -r "$HASH"
  fi
done
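The TXT-record parsing step of the script can be checked in isolation. The record value below is a made-up example of the quoted string dig +short returns for a dnslink record:

```shell
# Simulated `dig +short TXT` output for a dnslink record (hypothetical hash)
TXT_RECORD='"dnslink=/ipfs/QmExampleHash"'

# Same sed expression as the pin script: strip the leading "dnslink= and the trailing quote
DNSLINK=$(printf '%s' "$TXT_RECORD" | sed 's/^"dnslink=\|"$//g')

echo "$DNSLINK"  # → /ipfs/QmExampleHash
```

Note the \| alternation is a GNU sed extension, so this relies on GNU sed rather than BSD sed.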

TLS on my public gateway

IPFS doesn't require TLS on its own since consumers are able to check the integrity of the content by verifying its hash². However, since most browsers are not able to load content from IPFS directly yet, gateways exist to act as a glue layer between IPFS and HTTP. This means content can still be tampered with between the gateway and a visitor's computer. Therefore, I am using Let's Encrypt & Nginx to offer TLS on my public gateway.

Using proxy_pass, Nginx can sit in front of the IPFS server and terminate SSL. This is also useful for binding to restricted ports 80 & 443 without needing to run IPFS as root.
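A minimal sketch of that front end. The domain is a placeholder, the certificate paths are the standard Let's Encrypt locations, and 127.0.0.1:8080 is the default address of the local IPFS gateway:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Standard Let's Encrypt certificate locations (hypothetical domain)
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # Default local IPFS gateway address
        proxy_pass http://127.0.0.1:8080;
    }
}
```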

IPFS runs the value of the Host request header through an IPNS lookup to decide which hash to resolve when the path doesn't begin with /ipfs or /ipns. By default Nginx sends its own Host header when proxying, but we can forward the original one with proxy_set_header Host $host.

location / {
  # Forward the original Host header so IPFS can do the IPNS lookup
  proxy_set_header Host $host;
  proxy_pass http://127.0.0.1:8080;
}
location /ipfs {
  proxy_pass http://127.0.0.1:8080;
}
location /ipns {
  proxy_pass http://127.0.0.1:8080;
}
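With the Host header forwarded, the Host-based resolution can be exercised directly against the gateway; the domain is hypothetical and the port is the IPFS gateway default:

```
# Ask the gateway to resolve a site by Host header alone
$ curl -H "Host: example.com" http://127.0.0.1:8080/
```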

I've created a project to help automate all of this. It includes scripts to get this setup up and running quickly for any number of domains, all from a single host.

¹ Assuming the gateway isn't blacklisting the content
² There is still the issue of verifying that the ipfs/ipns hash in the DNS record hasn't been tampered with, but that is out of scope for this post