Automation Revelation

When I relaunched my website last summer using my homegrown blog engine (Maverick), I adopted the textbundle format for writing posts and used Ulysses for my writing. But after launching the site and living with Ulysses for a while, I ran into some friction with its Markdown syntax. That led me to decide not to renew my subscription when the time came.

I'd since gotten a BBEdit license after hearing tons of people I highly respect speak glowingly about it, and decided to use it for writing my posts. It doesn't support textbundle natively like Ulysses does, but it does well enough for my needs. A textbundle is in fact just a folder containing a Markdown file and an optional assets directory. So I've been using BBEdit for writing on my Mac ever since.

But I've been running into a different kind of friction: making new posts was kind of painful. I'd been manually creating the directory structure, copying the info.json file, and creating the text file myself. I'd written an Alfred snippet to create the front matter that each post needs, but that's become problematic too because dates are hard (more on that in a minute).

Well, we've gotten deluged with snow in my neck of the woods, so I figured I'd dive into a couple of new tools and get this thing automated. I watched the NSScreencast episodes in which John Sundell explains his Marathon scripting tool, written in Swift, for Swift. I was able to take that knowledge and make a little script that creates the directory structure I need, fills in the front matter for the post, and launches me into BBEdit where I can write away.

Part of the spark for this was that I'd approached dates all wrong when I built Maverick. I stored dates in my local time zone, not in UTC. My posts automatically show up on my microblog via micro.blog's cross-posting functionality, and when they came over they were off by 8 hours or so (I'm in the Pacific time zone). So I had to go through all my posts and update the times, then switch to UTC-based dates (formatting them using an ISO8601DateFormatter with .withInternetDateTime).
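For reference, that kind of formatting amounts to a few lines; this is just a sketch, not Maverick's actual code:

import Foundation

let formatter = ISO8601DateFormatter()
formatter.formatOptions = [.withInternetDateTime]

// Produces a UTC-based string like "2019-02-09T18:30:00Z" for the post's front matter.
let publishedDate = formatter.string(from: Date())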

I should be all set now, with my posts showing up in the correct order on my microblog. Happy Saturday!

Update

Here's the script that I wrote, in case you're interested 🙂

CALayer Mask Inversion

I've got a view controller with a camera view, overlaid with a partially transparent dimming view. I need to punch a hole out of the dimming view to let the camera shine through in its full glory. My first crack was the code below, and it works:

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()

    let inset: CGFloat = 16.0
    let rectWidth = self.view.frame.width - (inset * 2)
    let rectHeight = rectWidth * 0.63
    let rectSize = CGSize(width: rectWidth, height: rectHeight)

    let rectOriginY = (self.view.frame.height / 2) - (rectHeight / 2)
    let rectOriginX = inset
    let rectOrigin = CGPoint(x: rectOriginX, y: rectOriginY)

    let maskRect = CGRect(origin: rectOrigin, size: rectSize)

    // Build a path containing both the full dimming view and the cutout rect;
    // the even-odd fill rule leaves the overlapping cutout rect unfilled, which punches the hole.
    let maskLayer = CAShapeLayer()
    let path = CGMutablePath()
    path.addRect(self.dimmingView.frame)
    path.addRect(maskRect)
    maskLayer.path = path
    maskLayer.fillRule = .evenOdd

    self.dimmingView.layer.mask = maskLayer
}

Which produces what I want:

But it was redrawing too often (especially during transitions). I decided to have the mask drawn only once, and for that I needed the view hierarchy set up, so I put it in viewWillAppear. Exact same body as above, just a different lifecycle method. Here's that result:

How in the world can the same code produce such different results?

Update

Thanks to Tom Bunch for asking me about the state of our view controller's superlayer. At viewWillAppear it has no superlayer, but by viewDidLayoutSubviews it has one. The class of that layer is UIWindowLayer (a private class). It turns out the superlayer must apply some transform that causes our inversion to happen.

To get around the problem of too many redraws, I've instead opted for a simple hasAppliedCutout boolean check in viewDidLayoutSubviews. It's not the cleanest solution, but it will work for what I need without too much extra fuss.
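That check amounts to something like the sketch below; hasAppliedCutout is the flag from above, while applyCutoutMask() is a hypothetical helper wrapping the masking code from earlier:

private var hasAppliedCutout = false

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()

    // Only punch out the cutout once; later layout passes leave the mask alone.
    guard !hasAppliedCutout else { return }
    applyCutoutMask()
    hasAppliedCutout = true
}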

On the Sharp Edges of UIView.mask

I'm working on implementing a new user-facing feature at work, and the designs I've been given call for a dark gray, semi-transparent view overlaying another view, with a rectangle punched out of it. Not having done this kind of thing before, I went searching the CALayer docs to see what options are available. I knew that layers could mask other layers, so this seemed like the right place to go.

While using a layer-based mask seemed promising initially, I wanted flexibility to move this mask around pretty easily using Auto Layout constraints. I stumbled upon a really helpful article from Paul Hudson outlining the UIView property called mask. From the docs, this property is:

An optional view whose alpha channel is used to mask a view’s content.

That's the entire description we're given. I ran the code from Paul's blog post in a playground, and sure enough I was able to mask the view (more on this later). So I adopted it in our app and ran it. I looked, and there was no mask. In fact, the entire overlay was not visible. Huh? Why would this work in a playground but not in the app?

Turns out there are a couple of undocumented gotchas that go along with using a mask, and if you don't know them you could spend a lot of time learning the hard way. I'm hoping this will spare you some lost hours. Here we go.

  1. The mask view cannot be part of a view hierarchy. This means no Auto Layout, and no constraint manipulation. It's all frames with this view.
  2. The mask cannot be held on to by anything except for the view it's masking. Don't try to store it as a property or you're in for a bag of hurt (as in it won't work at all, and the view it's masking will no longer be visible).
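With both gotchas in mind, here's a minimal sketch of a punched-out overlay using UIView.mask; overlayView and the cutout rect are illustrative stand-ins, not the app's actual code:

import UIKit

// Stand-in for the dimming overlay; in the app this sits over the content being dimmed.
let overlayView = UIView(frame: CGRect(x: 0, y: 0, width: 375, height: 667))
overlayView.backgroundColor = UIColor(white: 0.1, alpha: 0.6)

// Gotcha #1: the mask view is configured with frames only and never added to a hierarchy.
let maskView = UIView(frame: overlayView.bounds)
maskView.backgroundColor = .clear

// An even-odd shape layer makes the mask opaque everywhere except the cutout rect,
// so the overlay shows everywhere except the hole.
let shape = CAShapeLayer()
let path = CGMutablePath()
path.addRect(overlayView.bounds)
path.addRect(CGRect(x: 16, y: 226, width: 343, height: 216))   // illustrative cutout
shape.path = path
shape.fillRule = .evenOdd
shape.fillColor = UIColor.black.cgColor
maskView.layer.addSublayer(shape)

// Gotcha #2: assign it directly; don't keep the mask view around as a property.
overlayView.mask = maskView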

I suppose that if #2 were solved it could make #1 possible, but the reality is that if this property is going to have this many usability issues, the documentation needs to be updated to reflect that. If you've been in the Apple developer sphere for very long you've heard the phrase "Radar or GTFO". With that in mind, I submitted rdar://47809462 (OpenRadar link), as I do not want to GTFO 🙂.

🔗 No More Oppatoo

No More Oppatoo

But kids inspire love, such deep unconditional love. You love and treasure how they are, down to the smallest quirk. Then, suddenly, right in front of you, they change. While one might grow used to the slow, sad change of growing apart from an adult you love, this feels very different.

This post from Allen Pike has been sitting with me for a while. I've got 2 boys, 5 and 2.5 years old, and my wife and I get to see their personalities develop. Atticus (our 5 year old) used to be attached to his pacifier, calling for it before going to bed with adorable cries of "Paci!" and Finnian (the 2.5 year old) went through an adorable phase of saying "yep" to everything.

Right now Finnian is saying "That's awesome", endlessly singing songs like "Baby Shark" and "Jesus Loves Me", and going on and on about choosing the purple bath bomb. Atticus loves watching toy train videos on YouTube and hugging every stuffed animal he comes across and saying "nice!".

I know these phases will end someday, perhaps soon. For now they will be replaced with other super cute things. The day ~may~ will come when the new things are simply grammatically correct and no longer endearing in the way toddlers are. It seems parenting is a bittersweet endeavor.

Take pictures, shoot video, make audio recordings. The memories will only last in our minds for so long, but we can preserve them thanks to the pocket computers we all carry. I think that's one of the best parts about living in the time we do. We can always remember our kids as they were, while celebrating the people they become.

Scorebook 1.5 Retrospective

A couple of weeks ago I brought my app back from the dead. Scorebook was first released in November 2014, got a minor bugfix update a month later, and then was unceremoniously removed from the App Store by Apple in January 2018. Apple doesn't want apps that appear dead on its store anymore, and with good reason. By all external accounts my app wasn't under active development.

And it wasn't. Over the last couple of years my desire to play games had gone down because I didn't want to use my app in the state it was in. Mid-last year I even joined the beta for an app under development that was very close in functionality to Scorebook, and I got sad. I didn't want to use their app; I wanted to use mine. So I got to work.

This is going to be a retrospective of what it took to bring Scorebook back, culminating in the release a couple weeks ago of v1.5.

Grab Game Image From Web

Scorebook has a feature that enables finding box art on the internet and attaching it to a game. I built that against a web API (Bing) that performed the image search and returned the results. That API disappeared, and its replacement was migrated to Azure. I spun up an Azure account and set up their image search API. Past me would have called it a day from there.

Instead, with the experience I've gained over the past few years, I knew that if the app hit my own API I could prevent this from happening to future versions. I've been playing around with the Vapor framework (which is written in Swift) to power the web service, and using their cloud service I was able to bring my API online pretty easily.

The most interesting part of this was thinking through authentication. I don't store any user data – I don't want to, and don't plan on it – but I wanted only my app to be able to talk to the service. I also know that baking a key into the app is fragile. So I used the public database provided by CloudKit to store the key record. On launch, the app performs a query for the key, which it then uses to authenticate to the server. The server has the same key and uses a middleware class I wrote to validate the key sent with each request. I know it's not bulletproof, but it gets the job done for what I need and I'm happy with the solution.
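The client side of that amounts to something like this sketch, assuming a hypothetical "APIKey" record type with a "value" field in the public database:

import CloudKit

func fetchAPIKey(completion: @escaping (String?) -> Void) {
    let publicDB = CKContainer.default().publicCloudDatabase
    let query = CKQuery(recordType: "APIKey", predicate: NSPredicate(value: true))

    // Look up the single key record and hand its value back to the caller.
    publicDB.perform(query, inZoneWith: nil) { records, error in
        guard let record = records?.first, error == nil else {
            completion(nil)
            return
        }
        completion(record["value"] as? String)
    }
}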

On Memory and Images

When a friend of mine was using the app after it first launched, he told me about a crash he saw when taking pictures during a gameplay. Being the young developer I was, I looked at Crashlytics, didn't see anything, and moved on. It wasn't until later that I learned that out-of-memory crashes aren't picked up by crash reporting tools other than Apple's. I decided to look into it and hunt it down if it was indeed a problem. (Narrator: It was)

I went through and audited all the places in code where I display an image. Aside from searching for a game image, all images are loaded from files on disk (though that did change in this version; they used to be binary blobs in Core Data). When I needed to show a picture I'd simply create a UIImage from the data and put it in an image view. Done and done. Except that burns memory like crazy and can result in the system killing the app. Whoopsie.

Thankfully there were a couple great guiding resources that came out this year: WWDC 2018 session 219 (presented by Xcoder Kyle Sluder – once an Xcoder always an Xcoder), and a very helpful post by Jordan Morgan going into some extra detail on how to get it done.

I set about adding a downsampling method like the one shown at WWDC, and had to defer actually rendering the image until I knew the frame it needed to fill. Sometimes it's a small circle representing a person's image; sometimes it's a picture taken during a gameplay. I don't make assumptions about that until I know all the conditions I need. Images are stored inside the <Container>/Documents/Pictures/<ImageType> directory, so knowing the image name (which is stored in the Core Data entity) and the type of image I need, I can assemble the URL to the image file on disk.

From there I hand the URL, along with the size needed, to the downsampler, and it hands back the scaled image. I had to replace essentially all of my [UIImage imageWithData:] calls with this new method. It was a fair bit more work than I expected, but it wound up saving a ton of memory, so it was completely worth it. Plus it was the right thing to do.
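The downsampler follows the ImageIO pattern from that WWDC session; this is a sketch with my own names, not Scorebook's exact code:

import UIKit
import ImageIO

func downsampledImage(at url: URL, to pointSize: CGSize, scale: CGFloat) -> UIImage? {
    // Don't decode the full image up front.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else { return nil }

    let maxDimension = max(pointSize.width, pointSize.height) * scale
    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,      // decode the thumbnail now, not at render time
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxDimension
    ] as CFDictionary

    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else { return nil }
    return UIImage(cgImage: cgImage)
}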

Miscellaneous

Those are the two things that took the longest to get done, and there are a handful of other things that made it into this release from times over the last few years when I did some concentrated work on Scorebook. The big ones were architectural: splitting up the main storyboard into multiple smaller ones, and refactoring the app flow to use coordinators.

I've also been modularizing things a bit as I'm working to get sync turned back on. We take that modular approach at Lyft and I really have enjoyed the benefits.

Final Thoughts

One thing I wasn't expecting was how easily I slipped back into my old mindsets as I approached a problem I had set aside for a while. I have to consciously refuse to fall back into old habits, or to think that a problem is too big. If it's too big, it just needs to be broken down further into manageable chunks that I can tackle one at a time.

I'm incredibly happy to have Scorebook back on the App Store. Bringing it back to life has been a lot of work, but I've been able to look back at the decisions past me made (good and bad), and apply some of what I've learned at my day jobs as well.

Go forth and remember your games!

Mixing Swift and Objective-C in a Framework

If you've worked on an iOS app that has both Swift and Objective-C code you're likely well familiar with the rules for getting the two languages to talk to each other. A bridging header here, some @objc declarations there, and you're probably good to go.

But the rules change a little bit when you want to start breaking your code up into frameworks and suddenly there's no bridging header. So how are you supposed to get your mixed-language framework target working anyhow? Fear not, you've come to the right place.

For this example let's say we have a language learning app called BiLingual, and it has a framework target called LanguageKit.

Setting Things Up

The framework target needs to be configured to build as a module (the DEFINES_MODULE build setting). It also needs a module name (the PRODUCT_MODULE_NAME build setting). Usually you won't need a custom name, but that build setting is what you'd use if you want one.

Objective-C into Swift

Swift files that belong to your framework target implicitly import the target. Where app targets use a bridging header to talk to Objective-C code, frameworks rely on the umbrella header – LanguageKit.h in our case. The umbrella header imports the headers you want to be made public, like this: #import <LanguageKit/Language.h>. This style of import is a modular import.

Any header imported into the umbrella header can now be seen by Swift files belonging to your framework target. Sweet!

Swift into Objective-C

Like apps, frameworks rely on a generated header to expose Swift code to Objective-C. In our app we would import its generated header like so: #import "BiLingual-Swift.h". But in our framework's files we need a modular import: #import <LanguageKit/LanguageKit-Swift.h>. Please, please, please only import these generated headers in your implementation files. Importing them into headers is only asking for trouble.

From here the standard rules apply. Your Swift classes need to be NSObject subclasses, and members need to be decorated with @objc in order to be seen by Objective-C. The one other thing to note is that the Swift types need to be public in order to be visible to Objective-C. I honestly don't know why this is at the moment. I'm hoping that someone smarter than me can point me to the reasoning for this. If you know, please drop me a line.
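Putting those rules together, a Swift type exposed to Objective-C from the framework looks something like this sketch (the class itself is illustrative):

import Foundation

// public + NSObject subclass + @objc members = visible to Objective-C consumers of LanguageKit.
@objc public class LanguageCatalog: NSObject {
    @objc public func displayName(forLanguageCode code: String) -> String? {
        return Locale.current.localizedString(forLanguageCode: code)
    }
}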

Private Objective-C into Swift

If you're like me, you're a stickler for having clean API boundaries. This means you may have some Objective-C classes that you don't want framework consumers to see but your Swift files might need them. Thankfully there is a good way to do this. You'll create a special module that is only visible inside the framework. Don't worry though, it's not as intimidating as it sounds.

First, in your framework's root, create a file called module.map. I don't know why it has to be this exact name, but it does. Here are its contents:

module LanguageKit_Private {
    // declare your private headers here, e.g.:
    // header "LanguageParser.h"
    export *
}

We're making a module called LanguageKit_Private and declaring our private headers in it. There's an extra build setting we'll need for this to work: SWIFT_INCLUDE_PATHS = $(SRCROOT)/LanguageKit. This tells the build system where to look for additional module map files. From there, in our Swift code, we can import LanguageKit_Private, and the headers declared there are now accessible from Swift. 🎉

Wrap-Up

We've seen how to import public and private Objective-C into Swift, and public Swift into Objective-C. So go forth and worry no more about mixing your framework target languages!

Scorebook is Back!

TL;DR: Scorebook has returned to the App Store!

Scorebook got removed from sale by Apple in January of last year. I spent most of the year sad about that, and late in the year I asked myself a big question: Had the time come to let it go?

Screw that noise. Let’s get to work.

I went about updating it to require iOS 12. There were new devices it needed to support, safe areas to work with, and an entire web API to replace. I’ve done a lot of work calling web services over the last few years at my day jobs, but I’d never built one before.

After a lot of head down time over the last month I’m so happy to report that Scorebook 1.5 was released this morning. I’ve got a lot of work I want to do on it including finishing up the CloudKit sync I built out, updating the UI to be less 2014 and more 2019, and figuring out a different pricing strategy (hello in-app purchase!).

The most important thing is that Scorebook is back. I hope you check it out, play some games with your family and friends, and make memories over your games.

Check out Scorebook from Taphouse Software

Gross

In the second grade my dad took me to my first Mariners game. It was in the Kingdome against the Blue Jays and I really had no interest in going. We got there after the game started and got to our seats to the right of the press box. At some point during the game a foul ball came up to us (hit by backup catcher Matt Sinatro). A guy near us caught it and handed me the ball. I was hooked.

That was my entry into sports. I loved the Mariners and couldn't get enough of them. Fast forward to today. Last season was a huge disappointment; blowing another lead to a historic A's team is old hat. Thankfully we're embarking on a plan to tear things down and rebuild in a way that might actually make us competitive in a few years (I'm all aboard this plan, FWIW – it's the only way we're ever going to compete for a World Series).

But last week a story came out that is just gross. I feel gross in wanting to cheer them on to win it all.

For about a year, the Mariners employed Dr. Lorena Martin. She came to a new role with the team as a "high performance trainer". She was let go from the team last month and is now saying that GM Jerry Dipoto, Manager Scott Servais, and Farm Director Andy McKay demonstrated clear racist tendencies, especially against Latin players. I don't know if her claims are true or not; the Mariners have fervently denied all the claims.

A lot of the stories she recounts seem to be from personal meetings without third parties to back them up, so we're left in a he-said, she-said position with no real insight into what to think. If it's true, Dipoto, Servais, and McKay need to go. Full stop. I don't want them leading the Mariners, and I don't want to cheer for a team they lead. But if it's not true, then Dr. Martin should say so immediately.

As a fan I don't know what to think. I just want the truth to come out. And to root on my Mariners. One day they will win the World Series and I want to feel good about cheering them on to it.

The Ultimate Guide to iOS Notification Lifecycles

I work on the notifications team at Lyft for my day job. One of the things that comes up frequently is the question of how we can trigger our code from a push notification (or even, “what is a push notification?”). I’ve had these chats enough times that it seemed worth putting together what I hope is the definitive answer. I hope it’s useful to you.

Here we go.

What is a user notification?

Let’s start small. A user notification is a way for app developers to let their user know that something in the app warrants their attention. Users can be notified in any combination of three different ways: an alert (most commonly a banner that drops down from the top of the screen), a sound, and/or a badge.

Apps must ask users for permission before any of these notifications can be displayed. Oftentimes an app will show its own custom view asking for permission before displaying the system dialog, because an app can only prompt its users once to allow notifications. After that, the user must go to the app’s preference screen in the system Settings.app.
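Requesting that permission goes through UNUserNotificationCenter; a minimal sketch:

import UserNotifications

UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound, .badge]) { granted, error in
    // This system prompt is shown only once; afterwards the user has to
    // change the setting themselves in Settings.app.
    print("Notifications allowed: \(granted)")
}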

How can user notifications be triggered?

Notifications can be scheduled as a local notification or delivered via push. Both types can do the same things and appear exactly the same to the user, and as of iOS 10 Apple unified the notification system under the UNUserNotificationCenter suite of APIs, so handling them is the same.
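For example, scheduling a local notification looks like this (the content and trigger values are illustrative):

import UserNotifications

let content = UNMutableNotificationContent()
content.title = "Our fancy notification"
content.body = "It is for teaching purposes only."
content.sound = .default

// Fire once, 60 seconds from now.
let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 60, repeats: false)
let request = UNNotificationRequest(identifier: "demo", content: content, trigger: trigger)

UNUserNotificationCenter.current().add(request) { error in
    if let error = error { print("Scheduling failed: \(error)") }
}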

What is a push notification?

A push notification is a notification that comes in remotely. It may or may not display any content (ones that do not display content are referred to as silent pushes). It is important to note that apps do not have to ask the user’s permission to receive silent pushes, since there is nothing to display.

You must register your app with the system in order to receive pushes. This is done with UIApplication.shared.registerForRemoteNotifications(). You’ll be called back with the result by implementing application(_:didRegisterForRemoteNotificationsWithDeviceToken:) for success and application(_:didFailToRegisterForRemoteNotificationsWithError:) for failure. When registration succeeds, you’ll get a device token that needs to be registered with the server you’ll use to send pushes.
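In the app delegate, that registration flow looks roughly like this sketch (the token handling is illustrative):

// Inside your UIApplicationDelegate:
func application(_ application: UIApplication,
                 didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    application.registerForRemoteNotifications()
    return true
}

func application(_ application: UIApplication,
                 didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
    // Convert the token to a hex string and hand it to your server.
    let token = deviceToken.map { String(format: "%02x", $0) }.joined()
    print("Device token: \(token)")
}

func application(_ application: UIApplication,
                 didFailToRegisterForRemoteNotificationsWithError error: Error) {
    print("Failed to register: \(error)")
}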

When does my code run via push?

This is going to get complicated. The short answer is: it depends. But you didn’t come here for short answers, so let’s dig in. We’ll start by looking at an actual notification payload. I’ll put in the stuff we need, but you can find the full payload content reference here.

{
  "aps": {
	"alert": {
	  "title": "Our fancy notification",
	  "body": "It is for teaching purposes only."
	},
	"badge": 13,
	"sound": "fancy-ding",
	"content-available": 1,
	"category": "CUSTOM_MESSAGE",
	"thread-id": "demo_chat",
	"mutable-content": 1
  }
}

If you were to send this payload to a device, the user would see an alert with the title being the aps.alert.title value, the app icon would badge with the aps.badge value, and the sound would be the aps.sound value (assuming all the permissions were granted by the user).

You might want to send a notification that wakes up your app to fetch some content in the background, and a silent push is perfect for that task. This is accomplished by setting the aps.content-available field like the example above. When the system receives this notification it may wake up your app. Wait, may wake it up? Why won’t it always wake up the app?

This brings us to the uncomfortable reality: the payload contents as well as the app state determine when your code is activated. Let’s look at a chart:

| App State | Payload | Result |
| --- | --- | --- |
| Backgrounded | aps.content-available = 1 | application(_:didReceiveRemoteNotification:fetchCompletionHandler:) |
| Running | aps.content-available = 1 | application(_:didReceiveRemoteNotification:fetchCompletionHandler:) |
| Killed | aps.content-available = 1 | Nothing |
| Running | aps.content-available = 0 | Nothing |
| Backgrounded | aps.content-available = 0 | Nothing |

So, if your app is running or has been backgrounded, you’ll be called. But if the user has killed your app, nothing happens. Bummer! The next question you may ask is if there is a way for your code to run every time a notification comes in. Indeed there is.

Apple gives us two different extension points that let us run code when notifications arrive: UNNotificationServiceExtension and UNNotificationContentExtension.

Notification Service Extension

With the service extension, the intention is that you are going to mutate the content of the notification before it is presented to the user. Typical use cases are things like decrypting an end-to-end encrypted message or attaching a photo to a notification.

There are 2 main requirements that your notification must meet in order for this extension to fire: Your payload must contain both aps.alert.title and aps.alert.body values, and aps.mutable-content must be set to 1. The payload we have above satisfies both of those requirements and would trigger our app’s service extension if it had one.

It’s important to note that you cannot trigger a service extension with a silent push. The intention is for the extension to mutate a notification’s content, and silent pushes have no content.

| Payload | Result |
| --- | --- |
| aps.mutable-content = 1 && aps.alert.title != nil && aps.alert.body != nil | Extension Fires |
| Anything else | Nothing |
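The extension itself is a small UNNotificationServiceExtension subclass; here's a minimal sketch, with the mutation standing in for real work like decryption or attaching media:

import UserNotifications

class NotificationService: UNNotificationServiceExtension {
    private var contentHandler: ((UNNotificationContent) -> Void)?
    private var bestAttemptContent: UNMutableNotificationContent?

    override func didReceive(_ request: UNNotificationRequest,
                             withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) {
        self.contentHandler = contentHandler
        bestAttemptContent = request.content.mutableCopy() as? UNMutableNotificationContent

        if let content = bestAttemptContent {
            // Mutate the content here (decrypt a message, attach a photo, etc.).
            content.title = "\(content.title) 📬"
            contentHandler(content)
        }
    }

    override func serviceExtensionTimeWillExpire() {
        // The system is about to present the notification; hand back the best attempt so far.
        if let contentHandler = contentHandler, let content = bestAttemptContent {
            contentHandler(content)
        }
    }
}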

Notification Content Extension

The other extension type allows us to paint a custom view on top of a notification, instead of relying on the system to render the notification. Users get to see this custom view when they force press or long press a notification (or in Notification Center, they can swipe left on a notification and tap “View”).

When you include one of these with your app, the extension’s Info.plist file declares an array called UNNotificationExtensionCategory. Each of these categories maps to a notification’s aps.category value. To go back to our sample notification above, our app would have to register a content extension for the CUSTOM_MESSAGE category.

| Payload | Result |
| --- | --- |
| Extension’s UNNotificationExtensionCategory contains aps.category | Extension Fires |
| Anything else | Nothing |
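The content extension's view controller is equally small; a sketch, assuming it's registered for our sample CUSTOM_MESSAGE category and has a label wired up in its storyboard:

import UIKit
import UserNotifications
import UserNotificationsUI

class NotificationViewController: UIViewController, UNNotificationContentExtension {
    @IBOutlet private var bodyLabel: UILabel!

    func didReceive(_ notification: UNNotification) {
        // Render the notification's content into our custom view.
        bodyLabel.text = notification.request.content.body
    }
}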

One more thing

There’s one last bit that I want to cover: The UNUserNotificationCenterDelegate protocol. A conformer will get called in two main cases: when your app is in the foreground and will present a notification, and when a notification is tapped on.

When the user taps one of your notifications, the app will fire up and come to the foreground. It’s a good idea to register the notification center delegate at startup so you’ll get the userNotificationCenter(_:didReceive:withCompletionHandler:) call. That lets you properly handle the notification; otherwise your app will just come to the foreground and that’s it.
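A minimal sketch of such a delegate, set early in app startup:

import UserNotifications

class NotificationDelegate: NSObject, UNUserNotificationCenterDelegate {
    // Called when a notification arrives while the app is in the foreground.
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                willPresent notification: UNNotification,
                                withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) {
        completionHandler([.alert, .sound])
    }

    // Called when the user taps a notification.
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                didReceive response: UNNotificationResponse,
                                withCompletionHandler completionHandler: @escaping () -> Void) {
        // Route to the right screen based on the notification's content here.
        completionHandler()
    }
}

// Somewhere early in launch:
// UNUserNotificationCenter.current().delegate = notificationDelegate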

Wrap-up

We’ve come a long way, looking over the kinds of notifications we can have (interactive and silent), the different transport mechanisms for them (local and push), and how they all interact with our code.

Securing an nginx Container with letsencrypt

One of the things I wanted to automate with my new website was deployment, including fetching SSL certificates and installing them in nginx. I came into this project knowing nothing about shell scripts or Docker. Now I know a _little_ about each. With the web going increasingly https, I knew I had to jump this hurdle to be production ready.

Thankfully I found a blog post that helped me get started; all I had to do was adapt it to my setup. While that post is very much about the author's environment, I wanted this post to generalize to others (be it a static site or one running in a separate container, like mine).

My sample project uses a Vapor web server, but you can sub in anything you wish for that. Here’s the sample on GitHub.

We start with a Dockerfile that will build up our nginx container:

# build from the official Nginx image
FROM nginx

# install essential Linux packages
RUN apt-get update -qq && apt-get -y install apache2-utils curl

# sets the environment domain variable for use in other files
ARG build_domain
ENV DOMAIN $build_domain

# where we store everything SSL-related
ENV SSL_ROOT /var/www/ssl
 
# where Nginx looks for SSL files
ENV SSL_CERT_HOME $SSL_ROOT/certs/live
 
# copy over the script that is run by the container
COPY ./nginx/web_cmd.sh /tmp/
RUN ["chmod", "+x", "/tmp/web_cmd.sh"]

# establish where Nginx should look for files
ENV WEB_ROOT /home/hello-world/Public/

# Set our working directory inside the image
WORKDIR $WEB_ROOT

# copy over static assets
COPY Public /home/hello-world/Public

# copy our Nginx config template
COPY ./nginx/server.conf /tmp/docker_example.nginx

# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$DOMAIN:$WEB_ROOT:$SSL_ROOT:$SSL_CERT_HOME' < /tmp/docker_example.nginx > /etc/nginx/conf.d/default.conf

# Define the script we want run once the container boots
# Use the "exec" form of CMD so Nginx shuts down gracefully on SIGTERM (i.e. `docker stop`)
CMD [ "/tmp/web_cmd.sh" ]

This file had to be modified a bit from the referenced blog post above: the script it calls was renamed and updated to support current letsencrypt standards. I also wanted to inject the domain from the outside, so the build_domain argument comes into the Dockerfile and an environment variable is created from it. Pretty handy!

The last line here calls a script which preps everything for the SSL procedure:

#!/usr/bin/env bash

# initialize the dehydrated environment
setup_letsencrypt() {

  # create the directory that will serve ACME challenges
  mkdir -p .well-known/acme-challenge
  chmod -R 755 .well-known

  # See https://github.com/lukas2511/dehydrated/blob/master/docs/domains_txt.md
  echo "$DOMAIN www.$DOMAIN" > domains.txt

  # See https://github.com/lukas2511/dehydrated/blob/master/docs/wellknown.md
  echo "WELLKNOWN=\"$WEB_ROOT.well-known/acme-challenge\"" >> config

  # fetch stable version of dehydrated
  curl "https://raw.githubusercontent.com/lukas2511/dehydrated/v0.6.2/dehydrated" > dehydrated
  chmod 755 dehydrated
}

# creates self-signed SSL files
# these files are used in development and get production up and running so dehydrated can do its work
create_pems() {
  openssl req \
      -x509 \
      -nodes \
      -newkey rsa:1024 \
      -keyout privkey.pem \
      -out $SSL_CERT_HOME/fullchain.pem \
      -days 3650 \
      -sha256 \
      -config <(cat <<EOF
[ req ]
prompt = no
distinguished_name = subject
x509_extensions    = x509_ext
 
[ subject ]
commonName = $DOMAIN
 
[ x509_ext ]
subjectAltName = @alternate_names
 
[ alternate_names ]
DNS.1 = localhost
IP.1 = 127.0.0.1
EOF
)
 
  openssl dhparam -out dhparam.pem 2048
  chmod 600 *.pem
}

# if we have not already done so initialize Docker volume to hold SSL files
if [ ! -d "$SSL_CERT_HOME" ]; then
  mkdir -p $SSL_CERT_HOME
  chmod 755 $SSL_ROOT
  chmod -R 700 $SSL_ROOT/certs
  cd $SSL_CERT_HOME
  create_pems
  cd $SSL_ROOT
  setup_letsencrypt
fi

# if we are configured to run SSL with a real certificate authority run dehydrated to retrieve/renew SSL certs
if [ "$CA_SSL" = "true" ]; then

  # Nginx must be running for challenges to proceed
  # run in daemon mode so our script can continue
  nginx

  # accept the terms of letsencrypt/dehydrated
  ./dehydrated --register --accept-terms

  # retrieve/renew SSL certs
  ./dehydrated --cron

  # copy the fresh certs to where Nginx expects to find them
  cp $SSL_ROOT/certs/$DOMAIN/fullchain.pem $SSL_ROOT/certs/$DOMAIN/privkey.pem $SSL_CERT_HOME

  # pull Nginx out of daemon mode
  nginx -s stop
fi

# start Nginx in foreground so Docker container doesn't exit
nginx -g "daemon off;"

The gist of what’s going on here is that we are setting up SSL. This script can be invoked in either a dev environment or a production one (the CA_SSL variable, configured in the docker-compose file below, controls which). This is handy for local development, where a self-signed cert will do. In production it goes through the whole process of phoning the letsencrypt service and getting real certificates.

We then assemble our server config file that gets copied into the container. Here’s that template:

server {
	server_name $DOMAIN;
	listen 443 ssl;

	root $WEB_ROOT;

	# configure SSL
	ssl_certificate $SSL_CERT_HOME/fullchain.pem;
	ssl_certificate_key $SSL_CERT_HOME/privkey.pem;
    ssl_dhparam $SSL_CERT_HOME/dhparam.pem;

	try_files $uri @proxy;

	location @proxy {
		proxy_pass http://web:8080;
		proxy_pass_header Server;
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_pass_header Server;
		proxy_connect_timeout 3s;
		proxy_read_timeout 10s;
	}
}

server {
  # many clients will send unencrypted requests
  listen 80;

  # accept unencrypted ACME challenge requests
  location ^~ /.well-known/acme-challenge {
    alias $SSL_ROOT/.well-known/acme-challenge/;
  }

  # force insecure requests through SSL
  location / {
    return 301 https://$host$request_uri;
  }
}

I’m no nginx expert, and this is largely copied from the other blog post. You’ll hopefully notice the environment variables throughout the file (like $DOMAIN); they get substituted out in our Dockerfile.

Another easily overlooked part of this file is the line proxy_pass http://web:8080. Docker treats this host as a token; the web host will be set up as a service in our Docker compose file.

Speaking of the compose file, here it is:

version: "3.3"
services:
  web:
    image: jsorge/hello-world
    volumes:
      - ./Public:/app/Public
  nginx:
    build:
      context: .
      dockerfile: ./nginx/Dockerfile-nginx
      args:
        build_domain: hello.world
    environment:
      CA_SSL: "true"
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./Public:/home/hello-world/Public
      - ./nginx/ssl:/var/www/ssl
      - ./nginx/logs:/var/log/nginx
    depends_on:
      - web

The web service sets up our web server, and then the nginx service configures our container by running the Dockerfile. We expose the ports needed by nginx and also mount the volumes.

If you wanted to simplify and serve a static site, you could pass the whole directory of your site as a volume and put it in the nginx server’s root directory. I’ve not done that myself, but it seems logical for static sites. In that case you wouldn’t need the web service at all, and in the server config file you’d probably use localhost for the proxy_pass value.

I hope this has been helpful, and taken some of the pain out of getting SSL up and running on your website. I’m happy to help with any questions you might have, via email or Twitter.