Technical Information

How is this site set up?

During development, all components of this site will be hosted on my personal web server.

Each component of the system (Ghost, RShiny, QuestDB, Apache) is containerized with Docker, which makes it easy to spin up new services. When the time comes to transfer the site to an NSSTC/UAH system, data can be exported into the new containers with little effort.
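
As a rough illustration of that export step, a named Docker volume can be archived with a throwaway container and carried to the new host (the volume name and paths below are placeholders, not the final layout):

    # Snapshot a named Docker volume into a tarball for transfer
    docker run --rm -v ghost-content:/data -v "$(pwd)":/backup alpine \
        tar czf /backup/ghost-content.tar.gz -C /data .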

My server employs HAProxy as a reverse proxy and load balancer (currently there is only one node). I use Let's Encrypt as my HTTPS certificate authority.

Note that creating posts titled "API" or "Data" will render them inaccessible, as HAProxy prevents their default URIs from ever reaching Ghost.
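
For reference, the routing in question looks roughly like the sketch below (the backend names are placeholders, and the certificate path assumes a Let's Encrypt bundle):

    frontend https_in
        bind *:443 ssl crt /etc/haproxy/certs/swirll.pem
        # Anything under /api or /data is routed away before Ghost sees it,
        # which is why posts titled "API" or "Data" become unreachable.
        acl is_api  path_beg /api
        acl is_data path_beg /data
        use_backend apache_api  if is_api
        use_backend rshiny_data if is_data
        default_backend ghost_cms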


Recommendations for Code

As you write code that links to the RESTful API and other resources in this system, I recommend using #define directives or similar constants to hold the domain name (swirll.km4yhi.com) rather than hard-coding it throughout.

When services are transferred to NSSTC/UAH systems, the domain name will become something like "nsstc.uah.edu/swirll" or "swirll.edu" (swirll.edu is my personal preference, but budgetary concerns may prohibit this change).
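
For example, a minimal sketch in C (the /api/latest endpoint is invented for illustration):

    /* Define the service root once so the eventual domain change is a
       one-line edit. After the transfer, this becomes something like
       "https://nsstc.uah.edu/swirll". */
    #define SWIRLL_BASE_URL "https://swirll.km4yhi.com"

    #include <stdio.h>

    int main(void) {
        char url[256];
        /* Build an endpoint URL from the single definition */
        snprintf(url, sizeof url, "%s/api/latest", SWIRLL_BASE_URL);
        printf("%s\n", url);
        return 0;
    }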


Notes for NSSTC/UAH IT

This system will likely need to come online very quickly to support operations in the Fall 2020 semester. Here are some of my thoughts on transitioning away from my server.

AWS

If IT feels comfortable running these Docker containers on Vortex (or another machine), I have no qualms with that method.

My personal recommendation, simply to ease the setup process for IT, would be to host this entire system in AWS, as there will be fairly high storage requirements. Distancing the system from Vortex would also avoid potential network security issues related to remote uploads.

Containers

Here is a list of the necessary containers for IT's reference:

  • Ghost (content management system at the root domain) - 1 TCP port
  • Apache/PHP (RESTful API and file server... this could potentially be hosted directly on the Vortex web server, though I am not sure whether that runs Apache or Nginx) - 1 TCP port
  • RShiny (live data views) - 2 TCP ports... must account for websockets in the proxy
  • QuestDB (time series database) - likely only 1 TCP port
  • I may also run an RStudio cloud instance - 1 TCP port with websockets
  • AWIPS/CAVE cloud instance (I currently run this on a personal server, but would like one or two instances on UAH hardware if possible) - also 1 TCP port with websockets

I am working to create custom containers for each of these services. The containers will be available on my server, but I can push them to Docker Hub or GitHub if needed.

Each container will need at least one directory mounted from the host; some will need more. I am still working out how data will be imported into new containers if one fails. That decision will likely depend on how IT would like to handle storage and backups.
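
To make the port and mount requirements concrete, here is a hedged docker-compose sketch for two of the services (image tags, host paths, and port bindings are illustrative defaults, not final choices):

    # Illustrative compose file; adapt ports, paths, and versions as needed.
    version: "3"
    services:
      ghost:
        image: ghost
        ports:
          - "127.0.0.1:2368:2368"   # localhost-only; the proxy fronts port 443
        volumes:
          - ./ghost-content:/var/lib/ghost/content
      questdb:
        image: questdb/questdb
        ports:
          - "127.0.0.1:9000:9000"   # QuestDB HTTP endpoint
        volumes:
          - ./questdb-data:/root/.questdb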

Network

Note: I am not implying that the container ports should be exposed to the Internet. Only Vortex itself should be able to access these ports (i.e. they should only accept traffic from localhost). Traffic to and from the Internet will be secured over HTTPS on the standard port 443.

At this time, only HTTP and websocket traffic runs within this system, so I do not presently see a need for a TCP-level proxy (e.g. HAProxy or Nginx in TCP mode; I assume one of these solutions is already employed on Vortex or some other load-balancing machine).

I will provide the rewrite rules from my HAProxy configuration. Several rewrites are needed to handle path proxying (e.g. /api/* or /data/* directing to a different server than /insert-document-name). Note: path proxying assumes we will run under the "nsstc.uah.edu/swirll" domain. If we acquire "swirll.edu" or similar, simpler proxying may be achieved with subdomains (data.swirll.edu, api.swirll.edu, etc.).
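
If we do end up under a path prefix, the handling would look roughly like this (requires HAProxy 2.1 or newer for replace-path; backend names are placeholders as above):

    # Strip a /swirll prefix before handing requests to the containers
    acl under_swirll path_beg /swirll
    http-request replace-path ^/swirll/(.*) /\1 if under_swirll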


Other Notes

At some point, we will need a method for uploading data to this system from platforms in the field... in real time. We would also like to host several cameras around the region that will upload imagery in real time as well.

My personal preference is to use the REST API with simple token-based authentication. Uploaded files could be written without execute permissions to guard against accidental malware injection. I may be missing other security concerns with this method, but it is my first thought.
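
A minimal client-side sketch with libcurl, assuming a hypothetical /api/upload endpoint and bearer token (neither is final):

    /* Hypothetical field-platform upload: PUT a file with a bearer token. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        CURL *curl = curl_easy_init();
        FILE *fp = fopen("obs_20200801.nc", "rb");
        if (!curl || !fp) return 1;

        struct curl_slist *headers = NULL;
        headers = curl_slist_append(headers,
            "Authorization: Bearer EXAMPLE_TOKEN");

        curl_easy_setopt(curl, CURLOPT_URL,
            "https://swirll.km4yhi.com/api/upload");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);   /* HTTP PUT */
        curl_easy_setopt(curl, CURLOPT_READDATA, fp); /* stream the file */

        CURLcode res = curl_easy_perform(curl);

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        fclose(fp);
        return res == CURLE_OK ? 0 : 1;
    }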


Contact

I can be reached at nick.perlaky@uah.edu at any time, and I am happy to meet remotely on any platform if needed.
