Running a Web Application Offline - Part 1

Authors
  • Lauren Jarrett

Introduction

I get to work on some pretty incredible problems at my work. One of them was to take an entire web application and run it offline, enabling remote communities to engage with the platform locally. Australia is a vast country, and internet access in these remote communities is patchy and unreliable. The need was for a completely offline solution where content captured locally could be synced into the cloud platform later, if required.

The platform itself is designed to help our First Nations communities record their language. This includes text, audio, image and video content to be recorded, reviewed and maintained by the community, ensuring they have sovereignty over their data.

Let’s talk technical details. What exactly was I trying to do?

Firstly, take the existing Kubernetes cluster running the web application and run it locally on a Raspberry Pi. The web application consisted of multiple Postgres databases with backup solutions, a Django backend, an Nginx reverse proxy and a Vue front end.

Secondly, I needed to make basic changes to the platform to better suit the local setup. For example, shutting down the platform safely when the device was shut down, and ensuring any content still being processed was saved and not corrupted.

Thirdly, I needed additional hardware to ensure users could interact with the platform. I would need a router, and to bundle everything together while taking into account the harsh outback conditions that the device would be exposed to when in use.

Finally, solve any bugs and hurdles as they came up along the way.

First things first: getting the hardware selection right.

The original idea was to run it on a Raspberry Pi, as the device is both small and portable, meaning users can access the platform on their own devices. The simple goal was to "just run what we had in the cloud which was Kubernetes managing the platform".

This raised the first question: while Kubernetes was used within our cloud setup, was it really needed on the Pi? As technology professionals, we often get stakeholders coming to us with their solution for a problem and asking us to implement it. This felt like that situation.

While Kubernetes could be run locally, it would consume precious memory and resources without utilising any of its advantages. We estimated a maximum of 50 connections to a device at any one time, so autoscaling containers seemed like an over-engineered solution. The decision was reached to run everything within Docker, and I went back to fine-tune and simplify our environment.

The initial ideas entertained a cluster of Pis, but those broke down against practical considerations. One device is hard enough to keep track of, let alone two or three. To be the most user friendly, everything needed to run on a single device.

Even simplifying our builds to Docker alone, I was already hitting the system limits of the Pi. I reduced all our image sizes with multi-stage builds, stripped the images down to only the essential files needed to run the code, and re-examined all the processes running within the code to see what I could improve. With a few more tweaks, the complex environment was mostly running. I say mostly because the root cause of the remaining bug was hard to identify: the Pi would freeze when using Celery to asynchronously encode videos above 100 MB.

This bug was confusing because both the Docker logs and Ubuntu memory utilisation showed that we weren’t reaching the Pi’s limits. After a tonne of research and way too much time spent down Raspberry Pi blogs, Reddit and Stack Overflow, I realised that the Pi’s power supply couldn’t deliver enough power to the NVMe drive the videos were being saved to. Smaller videos didn’t trigger the bug, but larger ones did. With a laptop charger rather than the Pi charger, things were a bit more stable.

This was a reflection point. Should we continue with the Pis and purchase the additional power supplies, or switch to a more powerful device?

After all the reconfiguring and resetting, when the Pis failed completely due to the Raspberry Pi image becoming corrupted on the SD card, the decision was made. The device switched over to the Intel NUC. The NUCs have a bit more processing capability and wouldn’t have the OS mounted from an SD card, making them more reliable in the long term.

Thankfully, after all the hard work and bug fixing, everything was set up pretty quickly and the project was finally on its way.

Other changes to the platform

Running on a local server meant I needed to make several changes to better suit the local serving of the platform.

Firstly, I needed to set up scripts to run when the device was turned on, to start all the containers and ensure that they came online in the right dependency order. I needed proper health checks in the Docker Compose file rather than just depends_on: I realised depends_on only waits until a container has started, not until the service inside it is actually ready. After updating these, everything started consistently and in the right order.
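The core of that startup logic can be sketched in Python: start each service only after its dependencies report healthy, polling a health check rather than assuming readiness at container start. The `start` and `is_healthy` callbacks here are hypothetical stand-ins for shelling out to Docker and the Compose health checks; this is a sketch of the ordering logic, not the platform's actual script.

```python
import time

def wait_until_healthy(name, check, timeout=60.0, interval=1.0):
    """Poll a health check until it passes, rather than assuming the
    service is ready the moment its container starts."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    raise TimeoutError(f"{name} did not become healthy within {timeout}s")

def start_in_order(services, start, is_healthy):
    """Start services in dependency order, waiting for each to become
    healthy before starting anything that depends on it.
    `services` maps a service name to its list of dependencies."""
    started = []
    remaining = dict(services)
    while remaining:
        # Services whose dependencies are all up can start now.
        ready = [n for n, deps in remaining.items()
                 if all(d in started for d in deps)]
        if not ready:
            raise RuntimeError("circular dependency between services")
        for name in sorted(ready):
            start(name)
            wait_until_healthy(name, lambda n=name: is_healthy(n),
                               interval=0.1)
            started.append(name)
            del remaining[name]
    return started
```

In practice, Docker Compose can do this itself via `healthcheck:` blocks plus `depends_on` with `condition: service_healthy`, which is the fix the paragraph above describes.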

Secondly, I needed a way to shut down the platform and the device from the web application. Using the power button to turn the device off ran a high probability of corrupting data, especially if it was still encoding videos. To get this working, I built an API endpoint so the front end could shut everything down. On the backend, I had to get a bit more creative, as you can't stop Docker from inside a container; it doesn't have the commands.

To get around this, I had to install the Docker CLI into the containers and mount the host's Docker socket inside them, so a container could check whether the async containers were still running and wait until they had stopped before safely stopping Docker and shutting down the device.
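The shutdown sequencing might look like the sketch below: poll until no async workers are busy, then stop the platform and power off. The three callbacks are hypothetical stand-ins for what the endpoint would actually shell out to (roughly `docker ps` over the mounted socket, `docker compose down`, and `shutdown -h now`); they are not the platform's real code.

```python
import time

def graceful_shutdown(list_busy_workers, stop_platform, power_off,
                      timeout=600.0, interval=5.0):
    """Wait for async workers (e.g. Celery video encoders) to drain,
    then stop the containers and power the device off.

    list_busy_workers: returns a list of still-busy worker names.
    stop_platform:     stops the Docker containers.
    power_off:         shuts the device down.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        busy = list_busy_workers()
        if not busy:
            break          # nothing left encoding; safe to stop
        time.sleep(interval)
    else:
        raise TimeoutError(f"workers still busy after {timeout}s")
    stop_platform()
    power_off()
```

Raising on timeout, rather than forcing the stop, keeps the "never corrupt an in-flight encode" guarantee and lets the front end surface the failure to the user instead.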

Finally, a big concern on any local device was backups and not losing data. I set up backup scripts to run on the databases within Docker, and on the media and other content saved to the file system, copying everything to different hard drives so that, in the event the device was dropped, I could recover as much data as possible. I set these up to run every time the device was turned on, and every day if it was left running.
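The file-system side of that can be sketched as a small Python routine that copies each source tree to a timestamped folder on every destination drive, so losing a single drive doesn't lose the data. The paths and function name are illustrative, not the platform's real layout.

```python
import shutil
import time
from pathlib import Path

def backup(sources, destinations):
    """Copy each source directory to a timestamped backup folder on
    every destination drive. Returns the paths of the copies made."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    copied = []
    for dest in destinations:
        target = Path(dest) / f"backup-{stamp}"
        for src in map(Path, sources):
            # dirs_exist_ok lets a re-run within the same second merge
            # into the existing timestamped folder instead of failing.
            shutil.copytree(src, target / src.name, dirs_exist_ok=True)
            copied.append(target / src.name)
    return copied
```

Database backups would be a separate step (e.g. `pg_dump` run against the Postgres containers), with the resulting dump files fed through the same copy routine.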

Almost there - Additional hardware

Having a router and a computer meant that, for practical ease, they needed to be bundled together. Naturally, as a technology person, the conversation went straight to 3D printing a case and picking the right filament to withstand the heat and the outback conditions.

Given these communities are remote, the bundle needed to work within the harsh Australian outback to record content. These conditions include withstanding dirt, dust, and 40-degree temperatures outdoors (or higher if left in a car), and either having a remote power supply or being able to be charged by the communities' existing portable supplies.

The decision was made to pick a fluorescent filament so that the bundle would always stand out and be easy to find if put down. After trying ABS, Nylon and PLA filaments, PLA was selected for its printing ease, stiffness and strength, with the strict instruction to never leave it in direct sunlight given its poor ability to withstand heat.

And with that, I had the prototype hardware bundle set up! With the hardware finally sorted, let’s move to the software side of things in Part 2.