How to Sync a Static Website to Amazon S3 with Rclone

After using WordPress for years, I decided to switch this website to a static one so it's cheaper, safer, and easier to manage. After reading and participating in the Golang subreddit for about two years, I noticed a lot of people were using static website generators. Their websites loaded faster and were much cleaner than websites in the past. I wanted in on the trend, and now I'm using Hugo.

Once I got my website up, I needed an easy way to keep my files synced. After looking at the paid solutions, I stumbled upon the ncw/rclone repository on GitHub. I loved that it was written in Go and decided to download it.

After following the instructions for Amazon S3 with the binary, I noticed a strange problem. The initial sync worked beautifully, but after I modified my web files and synced again, the files would download in the browser instead of rendering. I downloaded the source and found a small problem in the SetModTime() func: if you update the modification time of a file in S3 without specifying the Content-Type, it falls back to the S3 default of binary/octet-stream, which tells your browser to download the file instead of rendering it.
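To illustrate the issue, here is a minimal sketch using the AWS SDK for Go. This is not rclone's actual code, and the function and metadata names are just for illustration. S3 cannot edit an object's metadata in place, so the usual trick is to copy the object onto itself with the REPLACE metadata directive, but REPLACE drops every header you do not resend, including Content-Type. Reading the current Content-Type first and sending it back with the copy preserves it:

package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// setModTime updates the modification-time metadata on an S3 object by
// copying the object onto itself with the REPLACE metadata directive.
// Hypothetical helper for illustration only.
func setModTime(svc *s3.S3, bucket, key, modTime string) error {
	// Look up the object's current Content-Type so it can be carried over.
	head, err := svc.HeadObject(&s3.HeadObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return err
	}

	_, err = svc.CopyObject(&s3.CopyObjectInput{
		Bucket:            aws.String(bucket),
		Key:               aws.String(key),
		CopySource:        aws.String(bucket + "/" + key),
		MetadataDirective: aws.String("REPLACE"),
		Metadata: map[string]*string{
			"mtime": aws.String(modTime), // store the new modification time
		},
		// Without this line, S3 resets the type to binary/octet-stream
		// and browsers download the file instead of rendering it.
		ContentType: head.ContentType,
	})
	return err
}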

I added a little code to fix the problem and sent a pull request. Nick was quick to merge it, and now you can easily sync your files from a local folder to S3.

I'm on Windows, so here is the batch script I now use to sync my files with Rclone:

@ECHO OFF

REM Add the Rclone binary folder to the PATH environment variable
SET PATH=C:\Users\User\Desktop\hugo\rclone;%PATH%

REM Sync the files to S3
rclone.exe sync C:\Users\User\Desktop\hugo\Sites\josephspurrier.com\public remote:www.josephspurrier.com

PAUSE
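
The "remote:" prefix in that sync command refers to a remote you define by running "rclone config". For reference, a minimal S3 entry in the rclone config file looks roughly like this; the remote name, keys, and region below are placeholders, so substitute your own:

[remote]
type = s3
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region = us-east-1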

If you’re looking for an efficient and free solution for syncing files to S3, head over to Nick’s repository and give him a star for his work! You can download the latest binary from his downloads page.

#aws #code