
Running a Web Application Offline - Part 2

Authors
  • Lauren Jarrett

Introduction

This is part 2 of a series on taking a cloud application and transforming it into a completely offline solution that runs locally.

DNS Dilemma

After downloading all existing content from the cloud environment, I needed to make a few tweaks within the software to get everything running locally. As a software engineer, this was the side of things I felt a bit better about.

Updating the front end, pointing the Django URLs to the file system and reconfiguring Nginx as the reverse proxy were all pretty straightforward. But once again, if only it were that simple.
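As a rough illustration of that reverse-proxy setup, here is a minimal Nginx sketch: serve the built front end from disk and forward API traffic to the Django app. The hostname, paths, and ports are all assumptions, not the project's actual configuration.

```nginx
# Minimal sketch only — server_name, paths and ports are assumptions.
server {
    listen 443 ssl;
    server_name platform.lan;

    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Serve the built front end straight from the file system
    location / {
        root /var/www/frontend;
        try_files $uri /index.html;
    }

    # Forward API requests to the Django application server
    location /api/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}
```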

Given the platform is designed to record a language, accessing the microphone via Chrome (or any browser) was pretty important, and browsers only allow microphone access on secure (HTTPS) origins, which requires SSL certificates. Here was the issue: self-signed certificates naturally gave the users a horrible warning about the website trying to steal their data.
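For context, a self-signed certificate like the one described can be generated with a single OpenSSL command. This is a generic sketch, not the project's actual command; the hostname platform.lan is a placeholder.

```shell
# Generate a self-signed certificate and key for a local server
# (platform.lan is a hypothetical hostname; valid for one year)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt \
  -days 365 -subj "/CN=platform.lan" \
  -addext "subjectAltName=DNS:platform.lan"
```

Because no browser-trusted authority signed this certificate, every browser will show exactly the kind of warning described above.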

This was made worse by how users had to access the platform via their browser. Going through the router, the browser would always error out if we used a normal web address, complaining about DNS resolution. So to start with, users had to type the IP address to reach the platform. With an IP-address URL and the self-signed certificates, the user is presented with a horrid warning, which in Chrome you can only bypass by typing thisisunsafe on the keyboard and pressing enter. No text box, no way to click around it. This was a less than ideal user experience for even the most tech-literate audience.

So the first issue to resolve was replacing the IP address with a normal-looking URL and fixing the DNS resolution complaint. Given that we had effectively set up a local server, with unknown devices accessing the system, we couldn't simply edit the /etc/hosts file on every device. We needed to set up a DNS server or resolver to point the router to. As a data analyst turned software engineer, I can safely say I was well outside my comfort zone.

After a bit of research, we settled on an Unbound resolver rather than a BIND9 DNS server, as we simply wanted to resolve a few URLs rather than anything more complicated. Step 1 done.
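Resolving a handful of local names in Unbound comes down to a few lines of configuration. A minimal sketch, assuming a hypothetical hostname platform.lan and server IP 192.168.1.10:

```conf
# Sketch of unbound.conf — hostname and IP are assumptions.
server:
    # Listen on all interfaces so LAN devices can query us
    interface: 0.0.0.0
    # Answer this name directly instead of forwarding upstream
    local-zone: "platform.lan." static
    local-data: "platform.lan. IN A 192.168.1.10"
```

Pointing the router's DHCP-advertised DNS server at the machine running Unbound then lets every device on the network resolve the name without touching its hosts file.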

The next question was where to run it: in Docker or on Ubuntu? Within Docker, it could talk neatly to the reverse proxy and would be monitored and restarted if needed. On Ubuntu it was simpler to set up, but it was less likely to integrate neatly with Docker and would be outside the containers' supervision if it fell over.

Initially, we tried Docker, but even with the port exposed the router refused to connect to it. So the decision was made: we moved it onto the Ubuntu system itself and simply updated the access list to include Docker.
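The access-list change amounts to allowing queries from the Docker network in Unbound's access-control rules. A sketch, assuming a typical LAN subnet and Docker's default bridge network (172.17.0.0/16 — both subnets are assumptions):

```conf
# Sketch: allow queries from the LAN and the Docker bridge network.
# Subnets are assumptions; check `docker network inspect bridge`.
server:
    access-control: 192.168.1.0/24 allow
    access-control: 172.17.0.0/16 allow
```

After editing the configuration, the service needs a restart (e.g. sudo systemctl restart unbound) before containers can resolve through it.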

After some very painful and tedious troubleshooting (it turns out .local names are reserved for multicast DNS, so many resolvers handle them specially and they only resolved locally), the resolver was set up and voila, I had finally managed to get the configuration to work. We could type a normal URL into Chrome. It still triggered a warning, but one where the user could at least click proceed to get past!

Part 3 Coming Soon!

Now to the next part: getting trusted security certificates to completely get rid of the warning presented to the user.