Gross

In the second grade my dad took me to my first Mariners game. It was in the Kingdome against the Blue Jays and I really had no interest in going. We got there after the game started and got to our seats to the right of the press box. At some point during the game a foul ball came up to us (hit by backup catcher Matt Sinatro). A guy near us caught it and handed me the ball. I was hooked.

That was my entry into sports. I loved the Mariners and couldn't get enough of them. Fast forward to today. Last season was a huge disappointment; blowing another lead to a historic A's team is old hat by now. Thankfully, we're embarking on a plan to tear things down and rebuild in a way that might actually make us competitive in a few years (I'm all aboard this plan, FWIW – it's the only way we're ever going to compete for a World Series).

But last week a story came out that is just gross, and it makes me feel gross about wanting to cheer them on to win it all.

For about a year, the Mariners employed Dr. Lorena Martin, who was brought on in a new role as the team's "high performance trainer". She was let go last month and now says that GM Jerry Dipoto, manager Scott Servais, and farm director Andy McKay demonstrated clear racist tendencies, especially toward Latin players. I don't know whether her claims are true; the Mariners have fervently denied them all.

A lot of the incidents she recounts seem to have happened in personal meetings, without third parties to back them up. So we're left in a he-said, she-said position with no real way to know what to think. If it's true, Dipoto, Servais, and McKay need to go. Full stop. I don't want them leading the Mariners, and I don't want to cheer for a team they lead. But if it's not true, then Dr. Martin should say so immediately.

As a fan I don't know what to think. I just want the truth to come out. And to root on my Mariners. One day they will win the World Series and I want to feel good about cheering them on to it.

The Ultimate Guide to iOS Notification Lifecycles

I work on the notifications team at Lyft for my day job. One of the things that comes up frequently is the question of how we can trigger our code from a push notification (or even, “what is a push notification?”). I’ve had these chats enough times that it seemed worth putting together what I hope is the definitive answer. I hope it’s useful to you.

Here we go.

What is a user notification?

Let’s start small. A user notification is a way for app developers to let their users know that something in the app warrants their attention. Users can be notified in any combination of three different ways: an alert (most commonly a banner that drops down from the top of the screen), a sound, and/or a badge.

Apps must ask users for permission before any notifications can be displayed. Oftentimes an app will show its own custom view asking for permission before displaying the system dialog, because an app only gets to show the system prompt once. After that, the user must go to the app’s preferences screen in the system Settings.app.
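Requesting that permission goes through UNUserNotificationCenter. A minimal sketch (the option set you request is up to you):

import UserNotifications

// Ask the user for permission to show alerts, sounds, and badges.
// The system will only ever show this prompt once for your app.
UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound, .badge]) { granted, error in
    // `granted` reflects the user's choice; after this point, changes
    // happen in Settings.app, not through another prompt.
    print("Notifications permitted: \(granted)")
}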

How can user notifications be triggered?

Notifications can be scheduled locally by the app, or delivered remotely via push. Both types can do the same things and appear exactly the same to the user, and as of iOS 10 Apple has unified them under the UNUserNotificationCenter suite of APIs, so both kinds are handled the same way.
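For example, here’s roughly what scheduling a local notification looks like through that API (the identifier and delay are arbitrary):

import UserNotifications

// Build the notification's content.
let content = UNMutableNotificationContent()
content.title = "Our fancy notification"
content.body = "It is for teaching purposes only."

// Fire once, 60 seconds from now.
let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 60, repeats: false)

// The identifier lets us update or cancel the request later.
let request = UNNotificationRequest(identifier: "demo", content: content, trigger: trigger)
UNUserNotificationCenter.current().add(request) { error in
    if let error = error { print("Scheduling failed: \(error)") }
}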

What is a push notification?

A push notification is a notification that comes in remotely. It may or may not display any content (ones that do not display content are referred to as silent pushes). It is important to note that apps do not have to ask the user’s permission to receive push notifications, only to display them; silent pushes, which display nothing, require no permission at all.

You must register your app with the system in order to receive pushes. This is done with UIApplication.shared.registerForRemoteNotifications(). You’ll be called back with the result by implementing application(_:didRegisterForRemoteNotificationsWithDeviceToken:) for success, and application(_:didFailToRegisterForRemoteNotificationsWithError:) for failure. When registration succeeds, you’ll get a device token that you send to your server; the server uses that token to address pushes to the device.
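Wired into an app delegate, that flow looks something like this sketch (the hex-string conversion is the conventional way to put the token on the wire):

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Kick off registration; the result comes back via the callbacks below.
        application.registerForRemoteNotifications()
        return true
    }

    func application(_ application: UIApplication,
                     didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
        // The token arrives as raw bytes; convert to hex and send it to your server.
        let token = deviceToken.map { String(format: "%02x", $0) }.joined()
        print("Device token: \(token)")
    }

    func application(_ application: UIApplication,
                     didFailToRegisterForRemoteNotificationsWithError error: Error) {
        print("Push registration failed: \(error)")
    }
}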

When does my code run via push?

This is going to get complicated. The short answer is: it depends. But you didn’t come here for short answers, so let’s dig in. We’ll start by looking at an actual notification payload. I’ll put in the stuff we need, but you can find the full payload content reference here.

{
  "aps": {
    "alert": {
      "title": "Our fancy notification",
      "body": "It is for teaching purposes only."
    },
    "badge": 13,
    "sound": "fancy-ding",
    "content-available": 1,
    "category": "CUSTOM_MESSAGE",
    "thread-id": "demo_chat",
    "mutable-content": 1
  }
}

If you were to send this payload to a device, the user would see an alert titled with the aps.alert.title value, the app icon would show the aps.badge value as its badge, and the aps.sound sound would play (assuming the user granted all of those permissions).

You might want to send a notification that wakes up your app to fetch some content in the background, and a silent push is perfect for that task. This is accomplished by setting the aps.content-available field like the example does. When the system receives this notification it may wake up your app. Wait, may wake up? Why won’t it always wake up the app?

This brings us to the uncomfortable reality: the payload contents, as well as the app state, determine when your code is activated. Here’s how it breaks down: if your app is running or has been backgrounded, you’ll be called. But if the user has killed your app, nothing happens. Bummer! The next question you may ask is whether there’s a way for your code to run every time a notification comes in. Indeed there is; we’ll get there in a moment. First, here’s what the background-wakeup entry point looks like when the system does call you.
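A minimal sketch, assuming your app has the remote-notification background mode enabled:

// In your UIApplicationDelegate. Called for pushes with content-available: 1
// when the app is running or backgrounded — not if the user has killed it
// from the app switcher.
func application(_ application: UIApplication,
                 didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                 fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
    // Kick off your background work, then tell the system how it went
    // (.newData, .noData, or .failed).
    completionHandler(.newData)
}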

Apple gives us two different extension points that let us run code when notifications arrive: UNNotificationServiceExtension and UNNotificationContentExtension.

Notification Service Extension

With the service extension, the intention is that you mutate the content of the notification before it is presented to the user. The typical use cases are things like decrypting an end-to-end encrypted message, or attaching a photo to the notification.

There are two main requirements your notification must meet in order for this extension to fire: the payload must contain both aps.alert.title and aps.alert.body values, and aps.mutable-content must be set to 1. The payload above satisfies both requirements and would trigger our app’s service extension, if we had one.

It’s important to note that you cannot trigger a service extension with a silent push. The intention is for the extension to mutate a notification’s content, and silent pushes have no content.
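For what it’s worth, the boilerplate for a service extension is small. A sketch (the emoji stands in for real work like decryption):

import UserNotifications

class NotificationService: UNNotificationServiceExtension {
    override func didReceive(_ request: UNNotificationRequest,
                             withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) {
        // Copy the content, mutate it, and hand it back before the system's deadline.
        guard let content = request.content.mutableCopy() as? UNMutableNotificationContent else {
            contentHandler(request.content)
            return
        }
        content.title = "\(content.title) 🔓" // e.g. the decrypted message title
        contentHandler(content)
    }

    override func serviceExtensionTimeWillExpire() {
        // The system is about to present the notification as-is; deliver
        // your best attempt here if you stashed the handler and content.
    }
}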

Notification Content Extension

The other extension type allows us to paint a custom view on top of a notification, instead of relying on the system to render the notification. Users get to see this custom view when they force press or long press a notification (or in Notification Center, they can swipe left on a notification and tap “View”).

When you include one of these extensions with your app, the extension’s Info.plist file declares an array called UNNotificationExtensionCategory. Each entry in that array maps to a notification’s aps.category value. Going back to our sample notification above, our app would have to register a content extension for the CUSTOM_MESSAGE category.
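The extension itself is a view controller conforming to UNNotificationContentExtension. A bare-bones sketch (the label outlet is hypothetical):

import UIKit
import UserNotifications
import UserNotificationsUI

class NotificationViewController: UIViewController, UNNotificationContentExtension {
    @IBOutlet private var messageLabel: UILabel!

    // Called with the notification whose aps.category matched one of the
    // UNNotificationExtensionCategory values in our Info.plist.
    func didReceive(_ notification: UNNotification) {
        messageLabel.text = notification.request.content.body
    }
}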

One more thing

There’s one last bit that I want to cover: the UNUserNotificationCenterDelegate protocol. A conformer will get called in two main cases: when your app is in the foreground and is about to present a notification, and when the user taps on a notification.

When the user taps on one of your notifications, the app fires up and comes to the foreground. It’s a good idea to register the notification center delegate at startup so that you get the userNotificationCenter(_:didReceive:withCompletionHandler:) call; that lets you handle the notification properly. Otherwise your app will just come to the foreground, and that’s it.
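Here’s a sketch of a conformer handling both cases (the routing is left as a comment since it’s app-specific):

import UserNotifications

class NotificationDelegate: NSObject, UNUserNotificationCenterDelegate {
    // Called when a notification arrives while the app is in the foreground.
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                willPresent notification: UNNotification,
                                withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) {
        completionHandler([.alert, .sound]) // show it even though we're foregrounded
    }

    // Called when the user taps one of our notifications.
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                didReceive response: UNNotificationResponse,
                                withCompletionHandler completionHandler: @escaping () -> Void) {
        // Inspect response.notification.request.content to route the user,
        // then let the system know we're done.
        completionHandler()
    }
}

Remember that the delegate has to be set on UNUserNotificationCenter.current() before a notification arrives, which is why startup is the right place for it.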

Wrap-up

We’ve come a long way, looking over the kinds of notifications we can have (visible and silent), the different transport mechanisms for them (local and push), and the ways each can interact with our code.

Securing an nginx Container with letsencrypt

One of the things I wanted to automate with my new website was deployment, including fetching SSL certificates and installing them in nginx. I came into this project knowing nothing about shell scripts or Docker. Now I know a _little_ about each. With the web going increasingly HTTPS, I knew I had to jump this hurdle to be production ready.

Thankfully I found a blog post that helped me get started; all I had to do was adapt it to my setup. While that post is very much about the author’s environment, I want this post to generalize to others (be it a static site or an app in a separate container like mine).

My sample project uses a Vapor web server, but you can sub in anything you wish. Here’s the sample on GitHub.
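If you haven’t seen Vapor before, the server side really is beside the point here; a route in Vapor 3 looks something like this (a hypothetical hello-world, not the sample’s actual code):

import Vapor

// Registers a single route; Vapor serves it on port 8080 by default,
// which is where our nginx proxy will point later.
public func routes(_ router: Router) throws {
    router.get("hello") { req in
        return "Hello, world!"
    }
}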

We start with a Dockerfile that will build up our nginx container:

# build from the official Nginx image
FROM nginx

# install essential Linux packages
RUN apt-get update -qq && apt-get -y install apache2-utils curl

# sets the environment domain variable for use in other files
ARG build_domain
ENV DOMAIN $build_domain

# where we store everything SSL-related
ENV SSL_ROOT /var/www/ssl
 
# where Nginx looks for SSL files
ENV SSL_CERT_HOME $SSL_ROOT/certs/live
 
# copy over the script that is run by the container
COPY ./nginx/web_cmd.sh /tmp/
RUN ["chmod", "+x", "/tmp/web_cmd.sh"]

# establish where Nginx should look for files
ENV WEB_ROOT /home/hello-world/Public/

# Set our working directory inside the image
WORKDIR $WEB_ROOT

# copy over static assets
COPY Public /home/hello-world/Public

# copy our Nginx config template
COPY ./nginx/server.conf /tmp/docker_example.nginx

# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$DOMAIN:$WEB_ROOT:$SSL_ROOT:$SSL_CERT_HOME' < /tmp/docker_example.nginx > /etc/nginx/conf.d/default.conf

# Define the script we want run once the container boots
# Use the "exec" form of CMD so Nginx shuts down gracefully on SIGTERM (i.e. `docker stop`)
CMD [ "/tmp/web_cmd.sh" ]

This file had to be modified a bit from the referenced blog post above. The script it invokes was renamed and updated to support current letsencrypt standards. I also wanted to inject the domain from the outside, so the build_domain argument comes into the Dockerfile and gets turned into an environment variable. Pretty handy!

The last line here calls a script which preps everything for the SSL procedure:

#!/usr/bin/env bash

# initialize the dehydrated environment
setup_letsencrypt() {

  # create the directory that will serve ACME challenges
  mkdir -p .well-known/acme-challenge
  chmod -R 755 .well-known

  # See https://github.com/lukas2511/dehydrated/blob/master/docs/domains_txt.md
  echo "$DOMAIN www.$DOMAIN" > domains.txt

  # See https://github.com/lukas2511/dehydrated/blob/master/docs/wellknown.md
  echo "WELLKNOWN=\"$WEB_ROOT.well-known/acme-challenge\"" >> config

  # fetch stable version of dehydrated
  curl "https://raw.githubusercontent.com/lukas2511/dehydrated/v0.6.2/dehydrated" > dehydrated
  chmod 755 dehydrated
}

# creates self-signed SSL files
# these files are used in development and get production up and running so dehydrated can do its work
create_pems() {
  openssl req \
      -x509 \
      -nodes \
      -newkey rsa:1024 \
      -keyout privkey.pem \
      -out $SSL_CERT_HOME/fullchain.pem \
      -days 3650 \
      -sha256 \
      -config <(cat <<EOF
[ req ]
prompt = no
distinguished_name = subject
x509_extensions    = x509_ext
 
[ subject ]
commonName = $DOMAIN
 
[ x509_ext ]
subjectAltName = @alternate_names
 
[ alternate_names ]
DNS.1 = localhost
IP.1 = 127.0.0.1
EOF
)
 
  openssl dhparam -out dhparam.pem 2048
  chmod 600 *.pem
}

# if we have not already done so initialize Docker volume to hold SSL files
if [ ! -d "$SSL_CERT_HOME" ]; then
  mkdir -p $SSL_CERT_HOME
  chmod 755 $SSL_ROOT
  chmod -R 700 $SSL_ROOT/certs
  cd $SSL_CERT_HOME
  create_pems
  cd $SSL_ROOT
  setup_letsencrypt
fi

# if we are configured to run SSL with a real certificate authority run dehydrated to retrieve/renew SSL certs
if [ "$CA_SSL" = "true" ]; then

  # Nginx must be running for challenges to proceed
  # run in daemon mode so our script can continue
  nginx

  # accept the terms of letsencrypt/dehydrated
  ./dehydrated --register --accept-terms

  # retrieve/renew SSL certs
  ./dehydrated --cron

  # copy the fresh certs to where Nginx expects to find them
  cp $SSL_ROOT/certs/$DOMAIN/fullchain.pem $SSL_ROOT/certs/$DOMAIN/privkey.pem $SSL_CERT_HOME

  # pull Nginx out of daemon mode
  nginx -s stop
fi

# start Nginx in foreground so Docker container doesn't exit
nginx -g "daemon off;"

The gist of what’s going on here is that we’re setting up SSL. The script can be invoked in either a dev environment or a production one (that variable is configured in the docker-compose file below). This is handy for local development, where a self-signed cert will do. In production it goes through the whole process of contacting the letsencrypt service and getting real certificates.

We then assemble our server config file that gets copied into the container. Here’s that template:

server {
	server_name $DOMAIN;
	listen 443 ssl;

	root $WEB_ROOT;

	# configure SSL
	ssl_certificate $SSL_CERT_HOME/fullchain.pem;
	ssl_certificate_key $SSL_CERT_HOME/privkey.pem;
	ssl_dhparam $SSL_CERT_HOME/dhparam.pem;

	try_files $uri @proxy;

	location @proxy {
		proxy_pass http://web:8080;
		proxy_pass_header Server;
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_connect_timeout 3s;
		proxy_read_timeout 10s;
	}
}

server {
  # many clients will send unencrypted requests
  listen 80;

  # accept unencrypted ACME challenge requests
  location ^~ /.well-known/acme-challenge {
    alias $SSL_ROOT/.well-known/acme-challenge/;
  }

  # force insecure requests through SSL
  location / {
    return 301 https://$host$request_uri;
  }
}

I’m no nginx expert, and this is largely copied from the other blog post. You’ll hopefully notice the environment variables throughout the file (like $DOMAIN); they get substituted out in our Dockerfile.

Another easily overlooked part of this file is the line proxy_pass http://web:8080. Docker treats the web host name as a token that resolves on its internal network; web will be set up as a service in our Docker compose file.

Speaking of the compose file, here it is:

version: "3.3"
services:
  web:
    image: jsorge/hello-world
    volumes:
      - ./Public:/app/Public
  nginx:
    build:
      context: .
      dockerfile: ./nginx/Dockerfile-nginx
      args:
        build_domain: hello.world
    environment:
      CA_SSL: "true"
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./Public:/home/hello-world/Public
      - ./nginx/ssl:/var/www/ssl
      - ./nginx/logs:/var/log/nginx
    depends_on:
      - web

The web service sets up our web server, and then the nginx service configures our container by running the Dockerfile. We expose the ports nginx needs and mount the volumes.

If you wanted to simplify and host a static site, you could mount your site’s whole directory as a volume and point the nginx server’s root at it. I’ve not done that myself, but it seems logical for static sites. In that case you wouldn’t need the web service at all, and in the server config file you’d probably use localhost for the proxy_pass value.

I hope this has been helpful, and taken some of the pain out of getting SSL up and running on your website. I’m happy to help with any questions you might have, via email or Twitter.

Nerd Snipe - 2 Apps, 1 Extension

I’m working on a project for Lyft that will bring a new app extension to both our passenger and driver apps. The functionality will be identical in both apps, so code sharing should be maximized. I’m trying to figure out how to build just a single extension that can be used by both apps. This is more difficult than you might imagine, because extensions can be fiddly. They need to share the same version and build strings as their host app, and building in Xcode revealed this hidden gem:

Embedded binary's bundle identifier is not prefixed with the parent app's bundle identifier.

Ugh. This raised the question of how I could possibly get it to work. Off to chase a snipe. Let’s start with a sample project, shall we?

I’m no shell script wizard, but it seemed that a run script build phase should do the trick. It needs to run in the host app prior to anything else happening. This means it’s the second build phase (dependencies always need to be built first). Here’s what I came up with:

#! /bin/sh

APPEX_DIR="$BUILT_PRODUCTS_DIR/SharedExtension.appex"
INFO_PLIST="$APPEX_DIR/Info.plist"
suffix=sharedextension

if [ "$IS_EMPLOYEE" = YES ]; then
  bundleID=io.taphouse.EmployeeApp.$suffix
  shortVersion=$VERSION_EMPLOYEE
  version=$VERSION_EMPLOYEE.$BUILD_NUMBER_EMPLOYEE
else
  bundleID=io.taphouse.CustomerApp.$suffix
  shortVersion=$VERSION_CUSTOMER
  version=$VERSION_CUSTOMER.$BUILD_NUMBER_CUSTOMER
fi

plutil -replace CFBundleIdentifier -string "$bundleID" "$INFO_PLIST"
plutil -replace CFBundleShortVersionString -string "$shortVersion" "$INFO_PLIST"
plutil -replace CFBundleVersion -string "$version" "$INFO_PLIST"

I’ve extracted out the short version and build values as build settings in each main app target and those get funneled into the script. In this sample I’ve hard-coded each app’s bundle ID to prefix the extension, but at Lyft these are build settings as well. (Pro-tip: make good use of xcconfig files because editing this stuff in Xcode is a huge pain, not to mention the possible merge conflicts that could arise in your project file.)

I’ve got a build setting in one of my apps indicating what kind of app it is (in this case, the IS_EMPLOYEE setting); checking it tells the script which app it’s running in. I set up three variables for each app and replace their values in the extension’s already-built Info.plist file. This matters because by this point it’s too late to change the file in my source directory.

So now I’ve got an Info.plist file with all the proper values. Each app will build and run in the simulator. 🎉!

But we don’t write apps for the simulator. We write them for phones. Let’s give that a shot.

0x16b257000 +MICodeSigningVerifier _validateSignatureAndCopyInfoForURL:withOptions:error:: 147: Failed to verify code signature of /private/var/installd/Library/Caches/com.apple.mobile.installd.staging/temp.KCRuNY/extracted/EmployeeApp.app/PlugIns/SharedExtension.appex : 0xe8008001 (An unknown error has occurred.)

Dang. Looking at the build steps for my extension reveals that there’s a code signature step at the very end and changing the Info.plist afterwards seems to break that signature. So how do we get this done?

Well, I don’t know. My best guess is that I might need to figure out how to re-sign the extension after manipulating its Info.plist. Is this possible? Reasonable to do?

What I’ll probably wind up doing is having individual extensions (one for each app) backed by a shared framework. I think that will be easier to get going and more resilient to changes in tooling down the road.

Unless you’ve got a way for me to get this done… 🙂

Finny Turns 2

Two years ago yesterday, I was unable to attend Xcoders. I got a frantic call from Emily saying her water had broken, and a good buddy gave me a ride to the hospital. Twenty-four hours later, Finnian was born.

Today he turns 2. It’s hard to believe how big he’s getting. Watching him grow from newborn to our floppy haired jolly boy has been a real blessing. Happy birthday, buddy!

This is My New Site; It Looks Like The Old Site

I have launched my new website. If you’ve been around here before you might not notice anything different (if you’re a new reader – welcome! 👋). But I assure you, it’s different.

I was running my site on Ghost. While it was great for me over the last few years, I’ve found it lacking lately (mostly, I want to combine microblogging and full-length blogging). So I decided to build my own engine.

I call it Maverick, and my ambitions for this project are pretty lofty. It’s not a finished product yet; far from it. But I realized it was close enough to replace Ghost that I should just pull the trigger. Real artists ship, after all. In fact, this is the first thing I’ve shipped on my own since Scorebook back in November 2014.

Maverick is powered by Vapor, a web framework written in Swift. It appealed to me on a few levels, but the main one is that I already know Swift, since I’m an iOS developer by trade. Not having to learn a whole new language is a big win for me (as I’ll explain in later posts, there were several other things I had to learn to get here).

I started writing the intro post with a fair amount of technical detail, but 1,100 words in I realized I’d maybe gone too far. So I’m going a different route and breaking things up into separate posts. I wanted to get this inaugural post out there to commemorate the occasion.

If you’re curious about Maverick, it’s fully open source. Things are a little confusing because there are two repos: the first is for the actual application, and the second is for my website. I’ll leave you with links to both. Cheers!

Proxying Vapor 3 Using nginx and Docker

Vapor is a framework for running server side code, all built in Swift. There’s a great post by bygri about how to build a Vapor app using Docker. If any of what I’ve just said is confusing, I highly recommend reading that post before getting started here.

During my building of a couple of Vapor apps, I’ve found getting the sites running against localhost:8080 to be really easy. The hard part is putting a site behind a domain locally and, even more so, getting it deployed to a server. So I’m going to put a huge disclaimer here: I’m no expert in nginx or Docker. I’ve pieced things together using web searches and a lot of conversation in the Vapor Discord room. There’s a great channel for Docker specifically, where I’ve been hugely helped by @bygri. He’s good people.

This post’s goal is purely to show how I got my Vapor app - which worked in development - deployed to production and successfully proxied by nginx. There will be a few extra bits I learned along the way as well (just to sweeten the deal for you).

My initial thought process was this:

  1. Everything for my web app lives inside of the git repository.
  2. I have a script to run for local development.
  3. When it’s time to deploy, I clone the repo to my server and run a script to boot it all up there.

Turns out I was a bit mistaken. If you read through the post I linked at the top, you’ll notice that he has separate Dockerfiles for development and production. I skipped over that part, much to my initial peril (so don’t do that).

What I did was create a repository on the Docker Hub which would hold a named, built image. That image is built with the following Dockerfile:

# Build image
FROM swift:4.1 as builder
RUN apt-get -qq update && apt-get -q -y install \
  && rm -r /var/lib/apt/lists/*
WORKDIR /app
COPY . .
RUN mkdir -p /build/lib && cp -R /usr/lib/swift/linux/*.so /build/lib
RUN swift build -c release && mv `swift build -c release --show-bin-path` /build/bin

# Production image
FROM ubuntu:16.04
RUN apt-get -qq update && apt-get install -y \
  libicu55 libxml2 libbsd0 libcurl3 libatomic1 \
  tzdata \
  && rm -r /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /build/bin/Run .
COPY --from=builder /build/lib/* /usr/lib/
COPY Resources/ ./Resources/
EXPOSE 8080
ENTRYPOINT ./Run serve -e prod -b 0.0.0.0

If you have discerning eyes, you’ll notice that this Dockerfile is almost exactly the same as the one in the post I referenced above (go read it and come back if you didn’t earlier). That’s because I copied it from him 😊. The one addition I had to make was COPY Resources/ ./Resources/, which copies the Resources directory from my local drive into the Docker image.

I was initially skeptical that I needed the EXPOSE directive, since nginx can reach the container over Docker’s internal network regardless. But EXPOSE doesn’t publish the port to the outside world (I’ve confirmed port 8080 isn’t reachable on my server); it mostly serves as documentation, so 🤷‍♂️.

I built that image using docker build -t jsorge/taphouse.io . and pushed it with docker push jsorge/taphouse.io. Those two commands build the image and tag it locally, then push it up to Docker Hub. I highly suggest scripting this stuff (I’m becoming partial to Makefiles to get it all done).

Now let’s flip our environment to the production server. I don’t need much to bring in the Vapor app since it’s already built and hosted. I just need to reference it and mount the Public folder.

But then comes the fun part. I’ll catch you on the flipside of the compose file:

version: "3.3"
services:
  web:
	image: jsorge/taphouse.io
	volumes:
	  - ./Public:/app/Public
  nginx:
	image: nginx:alpine
	restart: always
	ports:
	  - 80:80
	  - 443:443
	volumes:
	  - ./Public:/home/taphouse/Public
	  - ./nginx/server.conf:/etc/nginx/conf.d/default.conf
	  - ./nginx/logs:/var/log/nginx
	depends_on:
	  - web

Ok, so what’s going on here? We’re grabbing the official nginx image, in its Alpine Linux variant. The restart: always policy tells Docker to bring the container back up automatically if it ever exits; I’ll admit it mostly showed up in the examples my DuckDuckGo searches turned up. The rest I understand.

ports: I’m exposing ports 80 and 443 so the container listens for HTTP and HTTPS traffic (the syntax for ports is <external>:<container-internal>). I could pick other internal ports to listen to and update my nginx configs appropriately, but I didn’t feel like rocking the boat too much. I plan on adding TLS support via letsencrypt, but that’s a problem for another day.

volumes: I’m sure people familiar with Docker won’t need this explained (or any of this compose file, really), but this part blows my mind a bit. The volumes directive mounts a path from the local machine to one inside the container. It can be a file or a folder; that part doesn’t matter. So to get the root of my site working in nginx, I mount the Public folder that exists locally on my machine into the container at the specified path. The syntax for this is <local-path>:<container-path>.

I also created the path nginx/logs on my server and mounted it as the log directory in the container. That persists the container’s logs to my server volume, where I can easily read them as they come in. This is super cool!

depends_on: This one is pretty cool. It establishes the dependency chain between your containers and starts them up in the proper order to satisfy them. In this case, nginx depends on my Vapor app (called web).

The nginx configuration file is the last link in the chain to get us going. This is what mine looks like:

server {
	server_name taphouse.io;
	listen 80 default_server;

	root /home/taphouse/Public/;

	try_files $uri @proxy;

	location @proxy {
		proxy_pass http://web:8080;
		proxy_pass_header Server;
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_connect_timeout 3s;
		proxy_read_timeout 10s;
	}
}

I got this by finding the suggested configuration for Vapor 2. Curiously this page hasn’t made its way to the Vapor 3 docs but I’m guessing that has something to do with them wanting you to deploy on Vapor Cloud. I’m removing my tinfoil hat now.

As far as nginx configuration files go, this one looks pretty standard. I don’t know the exact ins and outs of everything going on, but the point of note is here: proxy_pass http://web:8080;. Docker Compose provides its own networking, so it turns out I can use my Vapor container’s service name as the host and it resolves inside the container network. Super cool!

From here I got these files on the server and ran my make server command (which aliases to docker-compose -f docker-compose-prod.yml up --build -d) and tried hitting http://taphouse.io. It worked! 🤯

The next thing on my list is to get acme.sh working so that TLS is up and running, and I can disable port 80. That’s my current holdup, but when I get that done I’ll be sure to write that up as well.

If I made any grievous errors or you have general feedback, I’d love to hear it. I’m still very green when it comes to Docker, nginx, and server admin stuff in general.

Building a Blog Engine

For the last few months I’ve been thinking about what to do with my blog setup. I experimented a bit with Jekyll, but it ended up not really appealing to me. While considering a switch of platforms, I realized I wanted something that would be easy to move (and that I wouldn’t be beholden to, should the need to move arise again). Markdown text files check that box; but what about images?

How would I move the images I’ve hosted on my own server? Where would they land in the new location? I had no good answer until I stumbled upon textbundles. A textbundle basically wraps up a markdown file and an assets folder inside a directory. Bundles are a very familiar concept for anyone on a Mac; we’ve had them since OS X became a thing.

I first decided to figure out how to make this format work in Jekyll. Would it be a hook I need to build? Where would that go in the pipeline and how does it work? Also, how do I even Ruby?

All of this started to make my new site feel like a distant goal. Learning a new language and framework was kind of daunting; not that I’m opposed to such things, but I don’t have a ton of extra time available, plus it could easily distract me from my goal.

In the meantime I decided to give Vapor another look. I’d played with v2 last year when trying to rebuild my company website. It was… ok. But Vapor 3 really clicked for me, and while writing with it I could see how a blog engine would work. So I figured I’d go for it.

Meet Maverick.

As of this writing I’m probably 80% done with the site. In local testing it does exactly what I need it to do on a basic level. Blog post and static page support. Everything served from a textbundle. That’s the easy part.

The hard part now is how to deploy. I’ve got a lot to learn about Docker and TLS; I tried adding TLS to the new Taphouse site and couldn’t get it figured out. And if I want to make this an easy system for someone else to adapt, how do I do that?

I know that posts should be separated from the engine itself, so the bare repo for new sites should have some script to grab the Maverick app and run it in a container. Since it’s just a self-contained Swift app, that should be straightforward enough, right?

All of this consideration has me kind of paralyzed at the moment. I was hoping to punt on deployment until I got Ulysses and Micro.blog support wrapped up, but those seem to depend on TLS connections, which brings me back to getting things deployed.

So that’s where I’m at. I’m super excited about getting Maverick onto my server and running it full time. But getting there means jumping a few more hurdles first.

Rebuilding taphouse.io

A few months ago, when I decided I needed to update JavaScript on the server that ran both my blog and my company website, I broke the company site. I had written it a few years ago using a very early iteration of the Sails.js framework. Fast forward to today, and I want to bring it back. I’ve also been wanting to dabble in server-side Swift. Enter Vapor.

I got the site up and running in Vapor 3 in a few days (last year I had started doing this in Vapor 2, but 3 was just released so the timing was good). I’m a big fan of what the Vapor team is doing and may yet take on a separate project based on Vapor. But that’s for another day.

Today is about Docker. I want my new company site to be housed in a single git repo (✅) and deployable using Docker: the Vapor app in one container, with nginx in front of it in a separate container. I also want SSL on the site, managed by letsencrypt, and I’m hoping to have both a dev environment and a production environment (all self-contained inside this repo).

This is where my holdup is. I’ve never used Docker and have been immersing myself in it, trying to figure it all out. I want to build this in the open, so I’ll be microblogging and regular blogging about my progress – however slow it may be.

My First Swift Package

I’m embarking on phase 1 of my blog rethink and have decided that whatever I do next will use the textbundle spec for file transport and storage. Storage will likely be some git repo, or perhaps a Dropbox-synced folder, but none of the options I’ve seen strongly links a post together with its assets. Textbundle solves that.

I think initially I might wrap Jekyll in something, so that the thing making the static site is really an implementation detail. The idea is that my server has an endpoint that accepts XML-RPC as well as Micropub input, transforms it into a textbundle, and then builds the static site from there.

So my first step was to download my posts stored in Ghost, as well as my assets folder. I wanted a way to build textbundles out of these for easy migration. My first inclination was a Ruby script, but I don’t know Ruby, so that became problematic from the start. I turned to a more familiar language instead.

I made a Swift package called textbundleify, and the code is up on GitHub if you’re interested. It’s not polished in the slightest, but it was a fun chance to try something new. You run it from a directory containing Markdown files, and if you give it a directory that contains pictures (or a directory of directories of pictures), it will pull the images each post references into that post’s textbundle.

I had to learn to use Process as well as NSRegularExpression (though I still don’t know how regular expressions really work). It was quite weird using an NSRange on a Swift string, which only works with Range&lt;String.Index&gt; ranges. It was fun to put together over a few hours, and should do the migration trick I’m wanting.
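For anyone hitting the same wall, the bridge between the two range types is Range(_:in:). A small sketch (the markdown-image pattern here is illustrative, not the package’s actual regex):

import Foundation

let text = "![A photo](images/photo.jpg)"
let pattern = try! NSRegularExpression(pattern: "!\\[.*?\\]\\((.*?)\\)")

// NSRegularExpression hands back NSRanges, which can't index a Swift
// String directly; Range(_:in:) converts them to Range<String.Index>.
let whole = NSRange(text.startIndex..., in: text)
if let match = pattern.firstMatch(in: text, range: whole),
   let pathRange = Range(match.range(at: 1), in: text) {
    print(text[pathRange]) // images/photo.jpg
}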

The next step is to figure out the proper order of operations for converting textbundles (which Jekyll doesn’t support) into the structure that Jekyll does support. I’ve still got Ruby learning on my horizon, but at least there are a couple of wonderful souls who have offered to help if I need it.

Onward!