Laravel Jetstream Workouts App

I recently joined a large bodybuilding challenge program. The plans they provide only work in Google Sheets and have very bad UI/UX.

I wrote a CSV scraper and loaded the data from the sheets into a MySQL database with the following tables/relations:

I then used Laravel Jetstream with Livewire to create a quick and basic web app that displays the workouts that are part of the plan for that day.

Not all of the features have been implemented yet, but within a few days I was able to get the web app running and at least usable.

It currently has the following features:

  • Change timezone
  • Select plan and select start date
  • Displays current workout on dashboard
  • Select substitute exercises

CCNP/DEVCOR – 1.1 Describe distributed applications related to the concepts of front-end, back-end, and load balancing

What are the front end and the back end

Generally speaking:

  1. The front end refers to what a user/client sees and interacts with directly
  2. The back end refers to what runs on the company's servers, hidden from the user

The front end may refer to:

  1. A client's browser
  2. HTML/CSS/JS on a web page
  3. Images/Audio/Video a user sees

The back end may refer to:

  1. Load balancers, proxies, firewalls not directly accessible by a user
  2. Databases, caches, servers and object stores not directly accessible by a user
  3. Back end technologies and languages (server side languages)


What is load balancing

Without load balancing there is a direct connection between the client and the server.


This limits performance and reliability, as availability depends on a single server.

A load balancer is a middleman that appears to the client as the "server" and directs front-end traffic to back-end servers.


Load balancing features

An example of a load balancer that I use is NGINX:

NGINX offers the following load balancing methods [1]:

  1. Round Robin – Requests are distributed evenly across the servers, with server weights taken into consideration.
  2. Least Connections – A request is sent to the server with the least number of active connections, again with server weights taken into consideration.
  3. IP Hash – The server to which a request is sent is determined from the client IP address. In this case, either the first three octets of the IPv4 address or the whole IPv6 address are used to calculate the hash value. The method guarantees that requests from the same address get to the same server unless it is not available.
  4. Generic Hash – The server to which a request is sent is determined from a user‑defined key which can be a text string, variable, or a combination. For example, the key may be a paired source IP address and port, or a URI.
  5. Least Time (NGINX Plus only) – For each request, NGINX Plus selects the server with the lowest average latency and the lowest number of active connections, where the average latency is calculated from either the time to receive the response header or the time to receive the full response, depending on configuration.
  6. Random – Each request will be passed to a randomly selected server. If the "two" parameter is specified, NGINX first randomly selects two servers (taking server weights into account) and then chooses one of them.

There are other types of load balancing:

  1. Cookie marking – adds a field in the HTTP cookies which is used for the balancing calculation
  2. Consistent IP-Hash – adds or removes servers without affecting a user's session or cache
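As a rough sketch, these methods are selected per upstream pool in the NGINX configuration. The server addresses, weights, and pool name below are made up for illustration:

```nginx
# Hypothetical back-end pool; addresses and weights are placeholders.
upstream app_pool {
    # Round Robin is the default; enable ONE directive to change method.
    least_conn;                       # Least Connections
    # ip_hash;                        # IP Hash
    # hash $request_uri consistent;   # Generic Hash (consistent variant)
    # random two;                     # Random, weight-aware pick of two

    server 10.0.0.11 weight=3;
    server 10.0.0.12;
    server 10.0.0.13 backup;          # only used when the others are down
}

server {
    listen 80;
    location / {
        proxy_pass http://app_pool;   # front-end traffic fans out here
    }
}
```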

Reverse proxy features (extra credit)


I use NGINX as a reverse proxy only (not as a load balancer). It acts as a reverse proxy, serving web apps running on localhost as well as websites served from directories.

While it is possible to serve my website directly from the web app that runs it, that allows only a single web server to run at a time. NGINX adds server multiplexing.
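A minimal sketch of that multiplexing, with two hypothetical sites sharing one server (the hostnames and the localhost port are made up):

```nginx
# Two sites on one IP address; names and ports are placeholders.
server {
    listen 80;
    server_name app.example.com;
    location / {
        proxy_pass http://127.0.0.1:8000;   # web app listening on localhost
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name static.example.com;
    root /var/www/static;                   # site served from a directory
}
```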

HTTP proxies are commonly used with web applications for gzip encoding, static file serving, HTTP caching, SSL handling, load balancing and spoon feeding clients. — [2]

Scalability and Flexibility

As you can see in the diagram above, when the "Cheap VPC" gets a request for "", it proxies the request to the appropriate server or folder based on its configuration.

This allows multiple sites to run on a single IP address and the same server. I currently run 7 websites and 4 microservices on a single server/IP address.

The web app for "" runs on localhost and is not directly accessible from the internet. On this server, NGINX is configured to proxy requests directed at "" to that local port.

NGINX can also be used to redirect websites and ports in any number of ways. For example, all my websites force HTTPS and redirect all requests on port 80 to port 443.
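A minimal sketch of that redirect, assuming certificates issued by certbot in the default Let's Encrypt paths (the domain is a placeholder):

```nginx
# Answer on port 80 only to redirect everything to HTTPS.
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;    # placeholder domain
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    # ... proxy or serve content here ...
}
```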


NGINX manages all my SSL certificates using certbot. Reverse proxies can also serve other security functions:

  • DDoS protection
  • Blacklist/Whitelists
  • Packet sniffing (SSL termination is performed by the proxy, so plaintext traffic can be inspected)
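For example, blacklisting/whitelisting in NGINX is only a couple of directives (the address range below is an example, not my actual configuration):

```nginx
# Restrict a sensitive path inside a server block.
location /admin {
    allow 203.0.113.0/24;   # trusted range (example addresses)
    deny  all;              # everyone else receives 403 Forbidden
}
```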


I currently use two projects to monitor NGINX performance:

The first one [6] provides insight into many areas of server performance, such as:

  • Requested files
  • Static requests
  • Not found URLs
  • Visitor hostnames/IPs
  • Operating Systems
  • Time Distribution
  • Referrers URLs/Sites
  • Google keywords
  • HTTP status codes

This data is scraped from the NGINX log files.

Ntopng provides real-time data about traffic flows to a website. It was incredibly useful when I was trying to diagnose spikes in internet traffic to my website.

Reverse proxies can also improve performance by handling SSL termination, caching web pages and compressing responses.

Caching pages reduces the need to regenerate dynamic content every time it is requested, which is resource intensive.

Gzip compression reduces network bandwidth and speeds up the page load time.
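Both optimisations can be sketched in NGINX like this (the cache path, sizes, and upstream port are illustrative values, not my actual configuration):

```nginx
# Response compression (http context).
gzip on;
gzip_types text/css application/javascript application/json;
gzip_min_length 1024;     # skip tiny responses where gzip adds overhead

# A small proxy cache for generated pages.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pages:10m max_size=100m;

server {
    listen 80;
    location / {
        proxy_cache pages;
        proxy_cache_valid 200 10m;   # serve cached pages for 10 minutes
        proxy_pass http://127.0.0.1:8000;
    }
}
```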


Reverse proxies can also act similarly to firewalls, with access control and filtering rules to prevent misconfigured servers from leaking sensitive data.


Reverse proxies can be configured to perform authentication. This allows a developer to put static websites or unprotected web apps behind password protection.
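For instance, HTTP Basic authentication in front of a static site might look like this (the hostname and file paths are placeholders):

```nginx
# Password-protect a static site with HTTP Basic auth.
# The credentials file can be created with: htpasswd -c /etc/nginx/.htpasswd someuser
server {
    listen 80;
    server_name private.example.com;    # placeholder hostname
    root /var/www/private;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```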

What are distributed applications

Distributed applications refer to applications which split their functionality or resources across multiple servers.


For example, a server may store its images and videos in an AWS S3 bucket. This decouples the local file system from the web server, allowing servers to be stood up and torn down without having to migrate data.

As another example a website might use separate services (microservices) for things such as authentication.

The website might use different servers for "" than it does for "", but the authentication is shared through a single authentication microservice.



1.5 Evaluate an application design and implementation considering maintainability

This is not a formal blog entry, just a musing about my own poor design choices after reviewing some of the material for this course.

My web app:

…. was my first serious attempt at creating a web app with some complexity. I learnt a lot of dos and don'ts.

Original Design

The structure of my website in the beginning used two SQL tables called <<poultry>> and <<batch>> to signify a batch of quails or an individual bird.

The only differences were the following added fields (in <<batch>>):

  • is_batch: Boolean
  • total_incubated: Int
  • total_hatched: Int
  • dead_birds: Int

And the following added fields (in <<poultry>>):

  • sex: Boolean
  • alive: Boolean
  • batch: Int
  • tag: Int
  • culled: Timestamp

Here is a UML diagram of the original system tables:

This is a poor design because the <<batch>> schema has multiple roles:

  1. A source of data about quail batches
  2. Forms a relational link with <<poultry>> (tight coupling)
  3. Forms a dependency as well as an inheritance

As you can see, “batch is a batch” but “poultry is part of a batch”. This makes the programming difficult because the roles of each class are not well defined and do not follow the “single responsibility principle”.

This design breaks SOLID design principles 1, 2, and 3:

  1. Single responsibility principle: One class should have only a single responsibility.
  2. Open-closed principle: Components (classes, methods, etc.) should be open for extension but closed for modification.
  3. Liskov’s substitution principle: Derived types must be completely substitutable for their base types.
  4. Interface segregation principle: Clients should not be forced to depend upon the interfaces they do not use.
  5. Dependency inversion principle: Program to an interface, not to an implementation.

Current Design

I rewrote the website to rely on a single table <<poultry>> with an extra column called “is_batch”. This was an attempt at polymorphism but still broke the first rule of SOLID.

Here is a class diagram to show roughly how that worked.

As you can see the design still overloads the measurement table which breaks the first rule of SOLID.

Future Design

I have redesigned the system in UML form to better conform to SOLID design principles:

In this new design the following improvements have been made:

  1. Animal now inherits from Stock, which allows Stock to have as few attributes as possible
  2. Animal can now have a type, allowing the program to be expanded to handle more types of animals
    • generic language is used to allow for birds/mammals and other animals
  3. The Measurement class is now loosely coupled
  4. Measurements can now have unlimited fields, as measurements are referenced by a UUID rather than an ID
  5. Measurement types are unlimited as well, due to types being implemented for them

Final Thoughts

After building a few projects I can definitely see the importance of OOP and SOLID design principles. I will be creating an Anki deck to cement the concepts better.

I am at the early stages of studying for this certification but can already see the usefulness of the subjects taught.


You can download my final UML doc here. To open it use UMLet.


I got my CCNA certification a few years ago and it expires shortly, so I have decided to go for the CCNP DEVCOR certification as it aligns with what I am interested in.

I will put my study notes up on this blog and I will be building a GitHub repository with cheats/notes for the course.

Here is my study plan:

  1. Follow CCNP DEVCOR core study materials
  2. Learn Python 3 programming
  3. Study programming design patterns (OOP)
  4. Learn Git basics

I will generate the following content:

  1. Blog posts
  2. Anki flash cards
  3. Sample/example code
  4. Cheat sheets

I will be doing this part time, so check in every now and then. There is heaps of better content out there; I am just doing this to help me learn the concepts.

ESP8266 + 4 Digit Display = Dam Percentage Monitor

We have recently got a lot of rain where I live in Central Queensland. I was returning home and only just made it past Theresa Creek before it overflowed. This rain is welcome as the local dam (Fairbairn Dam) has been at historically low levels.

Emerald is prone to flooding and the town has flooded before, although recently our main concern has been running out of water, with the dam recording its lowest level of 7.39% on the 16th of December, 2020.

Over a year ago I made an IOT Clock and built and printed a case for it. It had been lying around unused and I decided to repurpose it.

It consists of a MAX7219 4×32 Dot Matrix Display Module and an ESP8266 module. In this case I used a Wemos D1 Mini that I had lying around.

I discovered that Sun Water has an internal API for querying storage levels.

Sun Water
var Url = '' + model_file + '/data?startDate=' + start_date;	//2019-03-24T02%3A24%3A44.003Z

function getData() {
    $.get(Url, function (data, result) {
        for (var i = 0; i < data.value.length; i++) {
            var temp = data.value[i];
            var temp_time = new Date(temp.time); // date field name assumed; lost from the original snippet
            var temp_time2 = temp_time.getTime() - (temp_time.getTimezoneOffset() * 60000);
            series_0.push({x: temp_time2, y: temp.storageLevelMetres});
            series_1.push({x: temp_time2, y: (temp.cubicMetersPerSecond * 86.4)});
            series_2.push({x: temp_time2, y: temp.percentageFull});
        }

        if (data.continuationToken) {
            token = encodeURIComponent(data.continuationToken);
            Url = '' + model_file + '/data?startDate=' + start_date + '&continuationToken=' + token;
            getData(); // keep querying until no continuation token is returned
        } else {
            chart2 = new Highcharts.Chart(chartOptions);
        }
    });
}

This is a recursive function that queries the Sun Water API and adds data to arrays. The “continuationToken” indicates if successive queries should be made to retrieve more data.

Using this code snippet, and another section of the code that deals with timestamps for the queries, I was able to quickly throw together a microservice running on NodeJS and ExpressJS.

I used JavaScript to pad the percentage level and remove the period between the numbers. A setInterval() call polls the Sun Water API every minute and updates the internal variable that holds the percentage.
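That formatting step can be sketched like this; the function name and exact padding rule are my assumptions, not the original code:

```javascript
// Convert a percentage such as 7.39 or 19.36 into the 4-character
// string sent to the ESP8266's 4-digit display ("0739", "1936"):
// drop the decimal point and pad to four digits.
function toDisplayString(percent) {
  const hundredths = Math.round(percent * 100); // 19.36 -> 1936
  return String(hundredths).padStart(4, '0');   // 739 -> "0739"
}
```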

The API call is here:

It returns a plain text response such as “1936”. This equates to 19.36%.

The code for the ESP8266 is very simple and just iterates over each number and draws it on the dot matrix display. WiFi connection is done using a WiFi manager library, and I used the ESP HTTP Client to connect to my microservice.

Here is the result:

It’s all quite simple, but it’s a cool example of a full-stack project that includes manufacturing, electronics, networking and back-end programming.

The Git repository is here.