
Embedding resources with rice.go in a Gin project

Introduction

For the past few weeks I've been playing around with gin, which pretty much covers all my needs when creating a web application. Still on that goploader project of mine, I wanted to make installing the server part painless for people who want to host it themselves. What I had in mind was letting people download a single binary that embeds all the static assets (js, css, html templates, icons) and makes setup easy by first serving a form to configure the server automatically (generating a conf.yml file).

I've also worked with go.rice, which does a nice job of embedding resources in a binary by generating Go source files that contain all the assets. So, how can we make gin use those resources? That's what the first part of this tutorial covers. I managed to do it and thought that was the end of the story. Except I didn't want to offer only a binary, but also an archive containing the static assets, which would let people customize the look, information and content of the served web pages. Problem is: when the resources aren't embedded, go.rice doesn't check relative paths, only absolute ones. So when someone downloaded that archive, they would get an error telling them the box wasn't found, even though they had the right files in the right places.

From r.LoadHTMLGlob() to r.SetHTMLTemplate()

Here we go: ready to switch from on-disk templates and static files to embedded ones. Let's say this was your previous code:

package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    r := gin.Default()
    r.LoadHTMLGlob("templates/*")
    r.Static("/static/", "assets")
    r.GET("/", func(c *gin.Context) {
        c.HTML(http.StatusOK, "index.html", gin.H{})
    })
    r.Run(":8080")
}

Here we have the classic gin project starter: loading all the templates in templates/ and serving the static assets from assets/ on the /static/ route. Let's add some rice to this thing! Adding support for static files is pretty easy:

// File : project/main.go
// Replace r.Static("/static/", "assets") with:
// (this needs the "github.com/GeertJohan/go.rice" import)
r.StaticFS("/static", rice.MustFindBox("assets").HTTPBox())

But things get more complicated when it comes to templates. As you may know, gin parses all the templates when it starts and doesn't load them dynamically when they are rendered. So you can't just hand it an HTTPBox() like we did earlier for the static files. Instead we need to tell the engine which templates to use, and there is no way around parsing them manually. We'll create a function called InitAssetsTemplates that does that for us (it is exported because you may want your main.go file to remain clean, so we'll put it inside a utils package)!

Let's first modify our main.go file :

// File : project/main.go
package main

import (
    "log"
    "net/http"

    "github.com/Depado/articles/rice-gin/utils"
    "github.com/GeertJohan/go.rice"
    "github.com/gin-gonic/gin"
)

func main() {
    var err error

    tbox, _ := rice.FindBox("templates")
    abox, _ := rice.FindBox("assets")

    r := gin.Default()
    if err = utils.InitAssetsTemplates(r, tbox, abox, "index.html"); err != nil {
        log.Fatal(err)
    }
    r.GET("/", func(c *gin.Context) {
        c.HTML(http.StatusOK, "index.html", gin.H{})
    })
    r.Run(":8080")
}

Now we're talking. Stop writing your comment about how bad it is to not handle errors, and wait for the end of the article. Please. You'll see why it doesn't matter at this point whether an error is returned or not. For now the interesting part is the template box (tbox), which is used to load the templates into the engine. Let's look at what that function does, shall we?

// File : project/utils/router.go
package utils

import (
    "html/template"

    "github.com/GeertJohan/go.rice"
    "github.com/gin-gonic/gin"
)

// InitAssetsTemplates initializes the router to use the rice boxes.
// r is our main router, tbox is our template rice box, abox is our assets box
// and names are the file names of the templates to load
func InitAssetsTemplates(r *gin.Engine, tbox, abox *rice.Box, names ...string) error {
    var err error
    var tmpl string
    // Parse every template into a single set: gin's SetHTMLTemplate replaces the
    // engine's template, so it must be called once with all of them.
    message := template.New("")

    for _, x := range names {
        if tmpl, err = tbox.String(x); err != nil {
            return err
        }
        if _, err = message.New(x).Parse(tmpl); err != nil {
            return err
        }
    }
    r.SetHTMLTemplate(message)
    r.StaticFS("/static", abox.HTTPBox())
    return nil
}

Quite a lot to note here. First, the function declaration: we need to tell gin which files to load into the engine, so we explicitly pass the names of the templates we want. Then we loop over the provided names, load each one from the template box, and parse them all into a single template set in which each template is keyed by its name, before handing that set to the engine with SetHTMLTemplate. That's what lets us do c.HTML(200, "index.html", gin.H{}) in our routes. Finally we add the static route using our trusty assets box abox.
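For instance, loading more than one template is just a matter of passing extra names. Here is a minimal sketch of what the call in main.go could look like; "about.html" is a hypothetical second template that is not part of this example project:

// File : project/main.go (excerpt)
// "about.html" is hypothetical here, it only illustrates passing several names.
if err = utils.InitAssetsTemplates(r, tbox, abox, "index.html", "about.html"); err != nil {
    log.Fatal(err)
}
r.GET("/about", func(c *gin.Context) {
    c.HTML(http.StatusOK, "about.html", gin.H{})
})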

“Yes but what happens if the boxes can't be loaded or found?!”
Hey, first of all, calm down. Like right now. I told you we would come to that later. But if creating embedded binaries is all you want to do, you're good to go; just keep in mind that you need to generate the Go source files with rice embed-go before compiling, otherwise the boxes will never be found. And of course handle the errors (ignoring them only makes sense for the next part).
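A small convenience I use (a personal habit, not something go.rice requires) is a go:generate directive in main.go, assuming the rice command is installed and on your PATH, so that go generate runs the embedding step for you:

// File : project/main.go (excerpt)
package main

// Assumes the rice CLI is installed and available on your PATH;
// `go generate` will then run the embedding step for us.
//go:generate rice embed-go

Running go generate ./... then drops the generated rice-box.go source file next to main.go, and a plain go build produces the embedded binary.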

Not embedded? No worries! Fallback!

If you've followed along, what I wanted is for the archive release to load the files from disk. The thing is, go.rice doesn't do that: it registers the absolute path of the boxes and, when a box can't be found in the binary, looks for it at that exact location and nowhere else. So let's handle the fallback ourselves!

// File : project/utils/router.go
package utils

import (
    "html/template"

    "github.com/GeertJohan/go.rice"
    "github.com/gin-gonic/gin"
)

// InitAssetsTemplates initializes the router to use the rice boxes.
// r is our main router, tbox is our template rice box, abox is our assets box
// and names are the file names of the templates to load
func InitAssetsTemplates(r *gin.Engine, tbox, abox *rice.Box, names ...string) error {
    var err error

    if tbox != nil {
        var tmpl string
        // Same trick as above: parse everything into one template set and call
        // SetHTMLTemplate a single time.
        message := template.New("")
        for _, x := range names {
            if tmpl, err = tbox.String(x); err != nil {
                return err
            }
            if _, err = message.New(x).Parse(tmpl); err != nil {
                return err
            }
        }
        r.SetHTMLTemplate(message)
    } else {
        r.LoadHTMLGlob("templates/*")
    }

    if abox != nil {
        r.StaticFS("/static", abox.HTTPBox())
    } else {
        r.Static("/static", "assets")
    }
    return nil
}

Now that's why errors didn't matter that much at that point: we check whether the boxes are nil pointers and, if so, fall back to serving the files and templates from disk. Embedded or not, your files will be served. Keep in mind that LoadHTMLGlob will panic if the templates aren't embedded and the templates directory doesn't exist, whereas no error is raised when the assets directory is missing or can't be found.
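If you would rather get an error than a panic when neither the embedded box nor the on-disk templates are available, a possible extra guard (my own addition, not part of the original code, and it needs the "os" and "errors" imports) could look like this:

// File : project/utils/router.go (possible addition)

// templatesOnDisk reports whether the on-disk fallback directory exists,
// so the caller can return an error instead of letting LoadHTMLGlob panic.
func templatesOnDisk(dir string) bool {
    info, err := os.Stat(dir)
    return err == nil && info.IsDir()
}

// Then, in the else branch of InitAssetsTemplates:
//     if !templatesOnDisk("templates") {
//         return errors.New("templates are neither embedded nor present on disk")
//     }
//     r.LoadHTMLGlob("templates/*")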

Buffer-less Multipart POST in Golang

Introduction

For the client of my goploader I started with a simple POST. Posting raw data was effective, but there was a small problem: I couldn't name the file when serving it, so you ended up downloading things named aefa3d32-c222-437e-4d6b-5181bca2d3d1 without even knowing the type of the file. Of course, when the content type can be determined it's not really a problem, but it's still inconvenient for users. Around that time I had the idea of using multipart. My first idea was to have two fields, file and name, which the server could understand.

Then I realised that a multipart file upload contains the name of the file anyway. I kept the name field in case the data source isn't actually a file but comes from os.Stdin, for example. It also allows setting a name that is different from the file name.
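Once the request body is built with a multipart.Writer (that happens later in this post), adding that extra name field is a one-liner with WriteField. A quick sketch, where mpw is the multipart writer building the body and displayName is a hypothetical value coming from a command-line flag:

// Hypothetical fragment: mpw is the *multipart.Writer used for the request body,
// displayName would come from a command-line flag and is not defined here.
if displayName != "" {
    // WriteField adds a plain "name" form field next to the file part.
    if err := mpw.WriteField("name", displayName); err != nil {
        log.Fatal(err)
    }
}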

I had a rough time understanding what was going on. A simple http.Post is pretty easy to do when you write raw data into it. A multipart post is somewhat more complicated, and I ended up loading the whole file into RAM, which is... bad. Also, I'm using the github.com/cheggaaa/pb progress bar, and it made no sense to monitor the speed at which the file is read from disk into memory. (“Wow, my connection is blazing fast, 350MB/s!”)

Enter io.Pipe()

“Pipe creates a synchronous in-memory pipe. It can be used to connect code expecting an io.Reader with code expecting an io.Writer. Reads on one end are matched with writes on the other, copying data directly between the two; there is no internal buffering. It is safe to call Read and Write in parallel with each other or with Close. Close will complete once pending I/O is done. Parallel calls to Read, and parallel calls to Write, are also safe: the individual calls will be gated sequentially.” - Godoc about io.Pipe

io.Pipe() looks like exactly what we need: we're going to use a multipart.Writer to write the content of our file as the request body, but http.Post() takes an io.Reader as argument, not an io.Writer. The basic approach would be to write the entire body into a byte buffer and then hand that buffer to the request. What if we simply read the content while it's being written? That's the role of io.Pipe().
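Before plugging it into the upload code, here is a stripped-down sketch of the pattern on its own: one goroutine writes into the pipe while the other end reads the bytes as they arrive.

package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
)

func main() {
    r, w := io.Pipe()
    go func() {
        // Closing the write end signals EOF to the reader; without it,
        // ReadAll below would block forever.
        defer w.Close()
        fmt.Fprint(w, "hello through the pipe")
    }()
    // The reader consumes the data as the goroutine writes it,
    // nothing accumulates in an intermediate buffer.
    out, err := ioutil.ReadAll(r)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(out))
}

The real client uses exactly the same structure: the goroutine feeds the multipart body while http.Post consumes the read end. Let's build it step by step.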

package main

import (
    "log"
    "os"
    "time"

    "github.com/cheggaaa/pb"
)

const service = "https://url.of.your.service"

func main() {
    var err error
    var f *os.File
    var fi os.FileInfo
    var bar *pb.ProgressBar

    if f, err = os.Open("test.txt"); err != nil {
        log.Fatal(err)
    }
    if fi, err = f.Stat(); err != nil {
        log.Fatal(err)
    }
    bar = pb.New64(fi.Size()).SetUnits(pb.U_BYTES).SetRefreshRate(time.Millisecond * 10)
    bar.Start()
}

Here we start by declaring a few variables and initializing them. We open a file (test.txt) and store its information in an os.FileInfo so that we can get the size when we initialize the bar. This program doesn't do much yet, nothing complicated here. Let's head to the multipart part.

package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "mime/multipart"
    "net/http"
    "os"
    "time"

    "github.com/cheggaaa/pb"
)

const service = "https://url.of.your.service"

func main() {
    var err error
    var f *os.File
    var fi os.FileInfo
    var bar *pb.ProgressBar

    if f, err = os.Open("test.txt"); err != nil {
        log.Fatal(err)
    }
    if fi, err = f.Stat(); err != nil {
        log.Fatal(err)
    }
    bar = pb.New64(fi.Size()).SetUnits(pb.U_BYTES).SetRefreshRate(time.Millisecond * 10)
    bar.Start()

    r, w := io.Pipe()
    mpw := multipart.NewWriter(w)
    go func() {
        var part io.Writer
        // Use an err local to this goroutine so it doesn't race with the err
        // used by http.Post in the main goroutine.
        var err error
        defer w.Close()
        defer f.Close()

        if part, err = mpw.CreateFormFile("file", fi.Name()); err != nil {
            log.Fatal(err)
        }
        part = io.MultiWriter(part, bar)
        if _, err = io.Copy(part, f); err != nil {
            log.Fatal(err)
        }
        if err = mpw.Close(); err != nil {
            log.Fatal(err)
        }
    }()

    resp, err := http.Post(service, mpw.FormDataContentType(), r)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    ret, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Print(string(ret))
}

First of all we create our pipe and our multipart.Writer, which writes to the “write” end of the pipe. The next thing we do is start a goroutine. It first creates the file field, giving it the name of our file using the os.FileInfo we gathered earlier. The role of this goroutine is to write the content of our file into the pipe while http.Post reads the other end at the same time, so nothing is buffered. As we also want to update the progress bar during this process, we turn part into a multi-writer with io.MultiWriter (it writes both to the form part and to bar). We then copy the content of our file into part and don't forget to close the multipart writer at the end, otherwise the closing boundary is never written and the server won't be able to parse the request.

The rest of the program is pretty classic, we read the response of the server and print it to stdout.

Hope this helps!

Setting up Caddy Server on Debian

Note: Feedback is appreciated if you give it a try on Ubuntu.

Caddy Server looks like the next-gen web server. Here are some of Caddy's features that might be relevant:

  • HTTP/2 & HTTP
  • IPv6 & IPv4
  • Out of the Box Let's Encrypt support
  • Markdown
  • Websockets
  • Proxy and load balancer
  • FastCGI
  • Dead simple configuration files
  • ... And a lot more !

For a complete list of the directives Caddy supports, head to the official documentation.

Let's Encrypt Integration

As of version 0.8, Caddy integrates Let's Encrypt. See Caddy 0.8 Released with Let's Encrypt Integration on the Caddy Blog.

Why is that such a big deal? What does it mean? Let's Encrypt could be the subject of a whole blog post. In short, it lets you obtain and use free SSL/TLS certificates, so anyone can add a secure layer to their website. For a long time, having a valid certificate (signed and trusted everywhere) was complicated and expensive: if you wanted to secure your website you had to pay, and on top of paying you had to prove you were actually the owner of your domain by giving out a lot of information about yourself. Let's Encrypt breaks this wall. It brings security to any site owner, even those without the funds to pay for a valid certificate. It's the end of the self-signed certificate era.

Now what does Caddy have to do with that? Well, Caddy... automatically serves your sites over HTTPS using Let's Encrypt. You don't have to do anything, you don't have to worry about the certificates, and you don't even have to give out any personal information about yourself. It abstracts away the process of requesting a certificate, using it, and renewing it. Which means that with a configuration file as short as this:

depado.eu {
    proxy / localhost:8080
}

Caddy will automatically request a valid certificate from Let's Encrypt and serve depado.eu over HTTPS by default. Isn't that just great? To get more information about this feature of Caddy, head over to the documentation about automatic HTTPS.

Installation

First of all, go to the Caddy Server Download Page and select the features you want, your architecture and your operating system. Instead of clicking the button, right-click it and copy the URL. Time to ssh into your server and start having fun. Let's download Caddy to a sensible location.

# mkdir /etc/caddy/
# wget "https://caddyserver.com/download/build?os=linux&arch=amd64&features=" /etc/caddy/caddy.tar.gz
# tar xvf /etc/caddy/caddy.tar.gz

You can give Caddy the right to bind to privileged ports (80 and 443) without running it as root. To do so, here is the command to execute:

# setcap cap_net_bind_service=+ep /etc/caddy/caddy

Let's create our first Caddyfile in /etc/caddy/. Edit /etc/caddy/Caddyfile and add something like this:

yourdomain.com {
    proxy / localhost:5000
}

Assuming something is running on port 5000, Caddy will proxy every request for the yourdomain.com domain to the application listening on that port. If you already have a website you want to serve with Caddy, head over to the full documentation and see which directives are useful for you. Let's start Caddy for the first time so that it can register with the Let's Encrypt service.

# cd /etc/caddy/
# ./caddy

Caddy will then ask you for an email address to give to the Let's Encrypt service. If you don't wish to give that out, then don't, but keep in mind that you won't be able to recover your keys if you lose them. Our initial setup is done. Let's move on to the supervisor section.

Supervisor configuration

In this guide I'll assume you have a functioning supervisor installation. It will allow us to run Caddy as a daemon. First of all we'll edit /etc/supervisor/supervisord.conf and add this line under the [supervisord] section:

minfds=4096

Why is that? In a production environment, Caddy will complain that the number of open file descriptors is too low; supervisor's default value is 1024, instead of the 4096 recommended by Caddy. Now let's add a new program to our supervisor configuration. Create the file /etc/supervisor/conf.d/caddy.conf:

[program:caddy]
directory=/etc/caddy/
command=/etc/caddy/caddy -conf="/etc/caddy/Caddyfile"
user=www-data
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/caddy_stdout.log
stderr_logfile=/var/log/supervisor/caddy_stderr.log

You can change the user to whichever one you want. As I said earlier, thanks to setcap Caddy doesn't need root privileges to bind to low ports, so any user will do (prefer a user with few rights). Caddy is now ready to be started by supervisor! Simply add the program to supervisor and enjoy.

# supervisorctl reread
# supervisorctl add caddy
# supervisorctl start caddy