Monitoring network speeds

Every once in a while, my network goes sideways on me. Performance drops to the point where it's unusable, and I'm never sure whether it's the router or the modem. To help figure out what's going on, I set up some scripts that run a speed test and store the results. I run these scripts on a MacBook that uses Wi-Fi and on a Linux system that is hard-wired into the network. I also wrote a script that alerts me if my bandwidth drops below 100 Mbps for more than 30 minutes, and I created a web page so I can see the results of the tests.

🔍 Key Features

  Speedtest runs every 30 minutes from a wired Linux box and a Wi-Fi MacBook, stored in SQLite
  fping latency and packet-loss polling against a list of targets
  An email alert when download speed stays below a threshold for 30+ minutes
  A Flask web page, running in Docker, that charts it all

Initial Setup

Create a directory for all this. Since I'm running a Docker container for the web page, I'll store everything for this in my docker directory:

mkdir -p ~/docker/speedtest/templates

Here's what we're going to create on the Linux (Ethernet-attached) system:

  docker-compose.yml and Dockerfile - the container for the web page
  netspeed_web.py - the Flask app behind the web page
  templates/index.html, templates/fping.html, templates/fping_target.html - the web pages themselves
  speed_test.py - runs speedtest and stores the results
  fping_poll.py - runs fping against a list of targets and stores the results
  poll_speedtest.py - the low-bandwidth alert script
  targets - the list of hosts to ping

Here's what we're going to create on the Mac (Wi-Fi-attached) system:

  speed_test.py - runs speedtest and stores the results
  fping_poll.py - runs fping against a list of targets and stores the results
  targets - the list of hosts to ping

🚀 Docker Installation Steps

On the system where you want to host the webpage, install Docker

  1. Update your system
    sudo apt update && sudo apt upgrade -y
  2. Install Docker
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh
  3. Install Docker Compose
    sudo apt install docker-compose-plugin
  4. Add your user to the Docker group
    sudo usermod -aG docker $USER
  5. Enable Docker at boot
    sudo systemctl enable docker
  6. Reboot
    sudo reboot
  7. Test Docker
    docker run hello-world

🧾 Docker Compose File

In ~/docker/speedtest, create docker-compose.yml:

services:
  netspeed:
    build: .
    container_name: netspeed-web
    ports:
      - "9099:8080"
    volumes:
      - /mnt/speedtest/speedtest_results.db:/app/speedtest_results.db:ro
      - /mnt/speedtest/mac_speedtest_results.db:/app/mac_speedtest_results.db:ro
      - /mnt/speedtest/fping_results.db:/app/fping_results.db:ro
      - /mnt/speedtest/mac_fping_results.db:/app/mac_fping_results.db:ro
    restart: unless-stopped

I use a shared NFS export to store the databases from both the Mac and the Linux system, which lets me run the web container anywhere. You MUST use NFS for this; CIFS won't let SQLite lock the database correctly. If you know a workaround for this, please let me know. Here's the entry in /etc/fstab on the Linux system:

192.168.86.234:/volume1/speedtest /mnt/speedtest nfs rw,_netdev,hard,intr,noatime 0 0
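
The mount point needs to exist before that entry will mount; you can create it and test the mount by hand (assuming the fstab line above is already in place):

sudo mkdir -p /mnt/speedtest
sudo mount /mnt/speedtest
df -h /mnt/speedtest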

And here's what I used on the Mac:

192.168.86.234:/volume1/speedtest ~/speedtest/database nfs rw,resvport 0 0
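
Note that fstab won't expand the ~, so you may need the full path to the mount point there, and depending on your macOS version the entry may not mount on its own. A one-off manual mount (assuming the same export and a mount point under your home directory) looks like this:

mkdir -p ~/speedtest/database
sudo mount -o resvport -t nfs 192.168.86.234:/volume1/speedtest ~/speedtest/database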

Now create Dockerfile in ~/docker/speedtest on the Linux system:

FROM python:3.12-slim

WORKDIR /app

# Install Flask
RUN pip install --no-cache-dir flask

# Copy app and templates
COPY netspeed_web.py /app/
COPY templates /app/templates

# Expose port
EXPOSE 8080

# Run the app
CMD ["python", "netspeed_web.py"]

Other tools

We now need to install Python, SQLite, speedtest, and fping.

Python

We need Python on the Mac and the Linux system. First, check if it's already installed by running:

python3 --version

or:

which python3

If nothing comes back, install it with these commands:

Linux:

sudo apt update
sudo apt install -y python3 python3-pip python3-venv

Mac:

brew install python

Test that it worked:

python3 --version
pip3 --version

SQLite

Much like Python, we need SQLite on both the Linux and Mac systems. Test if it's already installed:

sqlite3 --version

If nothing comes back, install it:

Linux:

sudo apt update
sudo apt install -y sqlite3 libsqlite3-dev

Mac:

brew install sqlite

Test it:

sqlite3 --version

speedtest

Now we need speedtest on the Linux system and the Mac. Run this:

Linux:

curl -s https://packagecloud.io/install/repositories/ookla/speedtest-cli/script.deb.sh | sudo bash
sudo apt install speedtest

Now run:

which speedtest

It should return /usr/bin/speedtest

You need to run it once to accept the license so that when we automate it, it can run unattended. Run:

speedtest

Answer all the prompts. It will run a test and, the next time you run it, it won't prompt for anything.
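
If you'd rather skip the interactive step, newer builds of the Ookla CLI also accept license flags; this is worth a try, but fall back to the interactive run if your version doesn't support them:

speedtest --accept-license --accept-gdpr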

On the Mac, it's a little different. You install it with this command:

Mac:

brew install speedtest

There are other ways to install it, but the other versions I found don't output the data in the same format.

Like above, run it once to answer the prompts.
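
fping

The fping_poll.py scripts below also need fping, which isn't installed by default on either system. It's available in the standard package repos:

Linux:

sudo apt install -y fping

Mac:

brew install fping

Test it:

fping -C 3 -q 8.8.8.8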

The Scripts

On the Mac, create a directory for the script:

mkdir ~/speedtest

Create speed_test.py in that directory. This is the script that runs speedtest and stores the results in the database.

#!/usr/bin/env python3

import subprocess
import json
import os
import sqlite3
from datetime import datetime

# Wherever you mounted your shared directory in /etc/fstab;
# expanduser() resolves the ~ because sqlite3 won't expand it itself
DB_PATH = os.path.expanduser("~/speedtest/database/mac_speedtest_results.db")


def run_speedtest():
    result = subprocess.run(
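        # /opt/homebrew/bin is Homebrew's Apple Silicon prefix; on an Intel Mac the binary is typically /usr/local/bin/speedtest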
        ["/opt/homebrew/bin/speedtest", "--format", "json"],
        capture_output=True,
        text=True,
        check=True
    )
    return json.loads(result.stdout)

def save_results(data):
    conn = sqlite3.connect(
        DB_PATH,
        timeout=30,
        isolation_level=None  # autocommit
    )
    cursor = conn.cursor()

    # Enable WAL to prevent locking (optional if local disk)
    cursor.execute("PRAGMA journal_mode=WAL;")

    cursor.execute("""
        CREATE TABLE IF NOT EXISTS results (
            timestamp TEXT PRIMARY KEY,
            download REAL,
            upload REAL,
            ping REAL
        )
    """)

    download_mbps = data["download"]["bandwidth"] * 8 / 1_000_000
    upload_mbps   = data["upload"]["bandwidth"]   * 8 / 1_000_000

    cursor.execute(
        "INSERT OR REPLACE INTO results VALUES (?, ?, ?, ?)",
        (
            datetime.now().isoformat(timespec="seconds"),
            download_mbps,
            upload_mbps,
            data["ping"]["latency"]
        )
    )

    conn.close()

if __name__ == "__main__":
    speedtest_data = run_speedtest()
    save_results(speedtest_data)

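Before wiring it into cron, it's worth running the script once by hand and checking that rows land in the shared database (paths as above):

python3 ~/speedtest/speed_test.py
sqlite3 ~/speedtest/database/mac_speedtest_results.db "SELECT * FROM results ORDER BY timestamp DESC LIMIT 5;"
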
Now, on the Linux system, in ~/docker/speedtest create speed_test.py:

#!/usr/bin/env python3

import subprocess
import json
import sqlite3
from datetime import datetime

DB_PATH = "/mnt/speedtest/speedtest_results.db"


def run_speedtest():
    result = subprocess.run(
        ["speedtest", "--format", "json"],
        capture_output=True,
        text=True,
        check=True
    )
    return json.loads(result.stdout)

def save_results(data):
    conn = sqlite3.connect(
        DB_PATH,
        timeout=30,
        isolation_level=None  # autocommit
    )
    cursor = conn.cursor()

    # Enable WAL to prevent locking (optional if local disk)
    cursor.execute("PRAGMA journal_mode=WAL;")

    cursor.execute("""
        CREATE TABLE IF NOT EXISTS results (
            timestamp TEXT PRIMARY KEY,
            download REAL,
            upload REAL,
            ping REAL
        )
    """)

    download_mbps = data["download"]["bandwidth"] * 8 / 1_000_000
    upload_mbps   = data["upload"]["bandwidth"]   * 8 / 1_000_000

    cursor.execute(
        "INSERT OR REPLACE INTO results VALUES (?, ?, ?, ?)",
        (
            datetime.now().isoformat(timespec="seconds"),
            download_mbps,
            upload_mbps,
            data["ping"]["latency"]
        )
    )

    conn.close()

if __name__ == "__main__":
    speedtest_data = run_speedtest()
    save_results(speedtest_data)

On the Mac, create ~/speedtest/fping_poll.py. This is the script that runs fping and stores the results in the database.

#!/usr/bin/env python3

import os
import subprocess
import sqlite3
from datetime import datetime

# expanduser() resolves the ~ so these paths also work when run from cron
DB_PATH = os.path.expanduser("~/speedtest/database/mac_fping_results.db")
TARGET_FILE = os.path.expanduser("~/speedtest/targets")
COUNT = 20

def run_fping(target):
    """
    Runs fping -C <count> -q <target> and parses the output.
    Works on macOS and Linux.
    """

    result = subprocess.run(
        ["fping", "-C", str(COUNT), "-q", target],
        capture_output=True,
        text=True
    )

    # macOS sometimes prints to stdout, Linux to stderr
    line = result.stderr.strip() or result.stdout.strip()

    # If fping fails (DNS failure, host unreachable)
    if ":" not in line:
        return None, None, None, 100.0

    # Example:
    # 8.8.8.8 : 12.3 12.1 - 12.5 12.4 12.2
    samples = line.split(":")[1].strip().split()

    values = [float(x) for x in samples if x != "-"]
    loss = (samples.count("-") / COUNT) * 100

    if values:
        min_rtt = min(values)
        max_rtt = max(values)
        avg_rtt = sum(values) / len(values)
    else:
        min_rtt = max_rtt = avg_rtt = None

    return min_rtt, avg_rtt, max_rtt, loss


def save_results(target, min_rtt, avg_rtt, max_rtt, loss):
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    cursor.execute("""
        CREATE TABLE IF NOT EXISTS results (
            timestamp TEXT,
            target TEXT,
            min REAL,
            avg REAL,
            max REAL,
            loss REAL
        )
    """)

    cursor.execute("""
        INSERT INTO results (timestamp, target, min, avg, max, loss)
        VALUES (?, ?, ?, ?, ?, ?)
    """, (
        datetime.now().isoformat(),
        target,
        min_rtt,
        avg_rtt,
        max_rtt,
        loss
    ))

    conn.commit()
    conn.close()


def load_targets():
    with open(TARGET_FILE) as f:
        for line in f:
            t = line.strip()
            if t and not t.startswith("#"):
                yield t


if __name__ == "__main__":
    for target in load_targets():
        min_rtt, avg_rtt, max_rtt, loss = run_fping(target)
        save_results(target, min_rtt, avg_rtt, max_rtt, loss)

And on the Linux system, create ~/docker/speedtest/fping_poll.py:

#!/usr/bin/env python3

import os
import subprocess
import sqlite3
from datetime import datetime

DB_PATH = "/mnt/speedtest/fping_results.db"
COUNT = 20
# expanduser() resolves the ~; the targets file lives in ~/docker/speedtest (created below)
TARGET_FILE = os.path.expanduser("~/docker/speedtest/targets")

def run_fping(target):
    result = subprocess.run(
        ["fping", "-C", str(COUNT), "-q", target],
        capture_output=True,
        text=True
    )

    line = result.stderr.strip() or result.stdout.strip()

    # If fping fails (host unreachable, DNS failure), skip
    if ":" not in line:
        return None, None, None, 100.0

    samples = line.split(":")[1].strip().split()
    values = [float(x) for x in samples if x != "-"]
    loss = (samples.count("-") / COUNT) * 100

    if values:
        min_rtt = min(values)
        max_rtt = max(values)
        avg_rtt = sum(values) / len(values)
    else:
        min_rtt = max_rtt = avg_rtt = None

    return min_rtt, avg_rtt, max_rtt, loss


def save_results(target, min_rtt, avg_rtt, max_rtt, loss):
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    cursor.execute("""
        CREATE TABLE IF NOT EXISTS results (
            timestamp TEXT,
            target TEXT,
            min REAL,
            avg REAL,
            max REAL,
            loss REAL
        )
    """)

    cursor.execute("""
        INSERT INTO results (timestamp, target, min, avg, max, loss)
        VALUES (?, ?, ?, ?, ?, ?)
    """, (
        datetime.now().isoformat(),
        target,
        min_rtt,
        avg_rtt,
        max_rtt,
        loss
    ))

    conn.commit()
    conn.close()


def load_targets():
    with open(TARGET_FILE) as f:
        for line in f:
            t = line.strip()
            if t and not t.startswith("#"):
                yield t


if __name__ == "__main__":
    for target in load_targets():
        min_rtt, avg_rtt, max_rtt, loss = run_fping(target)
        save_results(target, min_rtt, avg_rtt, max_rtt, loss)

Now, on the Linux system, in ~/docker/speedtest, create netspeed_web.py. This is the brains behind the web page.

#!/usr/bin/env python3
from flask import Flask, jsonify, render_template
import sqlite3
import logging
from datetime import datetime

# Setup logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s"
)

# Database paths inside Docker container
DBS = {
    "LINUX": "/app/speedtest_results.db",
    "Mac": "/app/mac_speedtest_results.db"
}

app = Flask(__name__)

def fetch_data(db_path):
    """Fetch timestamp, download, upload from SQLite DB with logging."""
    try:
        conn = sqlite3.connect(db_path)
        cursor = conn.cursor()
        cursor.execute("SELECT timestamp, download, upload FROM results ORDER BY timestamp ASC")
        rows = cursor.fetchall()
        conn.close()
        logging.info(f"Fetched {len(rows)} rows from {db_path}")
        return [{"timestamp": ts, "download": dl, "upload": ul} for ts, dl, ul in rows]
    except Exception as e:
        logging.error(f"Error reading {db_path}: {e}")
        return []

@app.route("/data")
def get_data():
    """Return JSON data for LINUX and Mac with logging."""
    logging.info("Serving /data request")
    nas_data = fetch_data(DBS["LINUX"])
    mac_data = fetch_data(DBS["Mac"])
    logging.info(f"LINUX rows: {len(nas_data)}, Mac rows: {len(mac_data)}")
    return jsonify({
        "nas": nas_data,
        "mac": mac_data
    })

@app.route("/")
def index():
    logging.info("Serving / (index) request")
    return render_template("index.html")

if __name__ == "__main__":
    logging.info("Starting netspeed_web Flask server on 0.0.0.0:8080")
    app.run(host="0.0.0.0", port=8080, debug=True)

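The fping templates below (fping.html and fping_target.html) expect a few more routes than this listing provides: /fping, /fping/data, /fping/<target>, and /fping/<target>/data. Here's a sketch of what those could look like; it assumes the results table created by fping_poll.py and the container paths from the compose file, and it should go alongside the existing routes (above the __main__ block at the bottom), not after app.run():

FPING_DBS = {
    "linux": "/app/fping_results.db",
    "mac": "/app/mac_fping_results.db"
}

def fetch_fping(db_path, target=None):
    """Return fping rows (optionally for one target) as a list of dicts."""
    try:
        conn = sqlite3.connect(db_path)
        cursor = conn.cursor()
        if target:
            cursor.execute(
                "SELECT timestamp, min, avg, max, loss FROM results WHERE target = ? ORDER BY timestamp ASC",
                (target,)
            )
        else:
            cursor.execute("SELECT timestamp, min, avg, max, loss FROM results ORDER BY timestamp ASC")
        rows = cursor.fetchall()
        conn.close()
        return [
            {"timestamp": ts, "min": mn, "avg": av, "max": mx, "loss": loss}
            for ts, mn, av, mx, loss in rows
        ]
    except Exception as e:
        logging.error(f"Error reading {db_path}: {e}")
        return []

def list_targets():
    """Collect the distinct targets seen in either fping database."""
    targets = set()
    for db_path in FPING_DBS.values():
        try:
            conn = sqlite3.connect(db_path)
            targets.update(t for (t,) in conn.execute("SELECT DISTINCT target FROM results"))
            conn.close()
        except Exception as e:
            logging.error(f"Error reading targets from {db_path}: {e}")
    return sorted(targets)

@app.route("/fping")
def fping_index():
    return render_template("fping.html", targets=list_targets())

@app.route("/fping/data")
def fping_data():
    return jsonify({
        "linux": fetch_fping(FPING_DBS["linux"]),
        "mac": fetch_fping(FPING_DBS["mac"])
    })

@app.route("/fping/<target>")
def fping_target(target):
    return render_template("fping_target.html", target=target, targets=list_targets())

@app.route("/fping/<target>/data")
def fping_target_data(target):
    return jsonify({
        "linux": fetch_fping(FPING_DBS["linux"], target),
        "mac": fetch_fping(FPING_DBS["mac"], target)
    })

With these in place, the combined latency view lives at /fping and each target gets its own page at /fping/<target>.
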
The final script runs from cron and detects whether the bandwidth drops below 100 Mbps for 30+ minutes. On the Linux system, in ~/docker/speedtest, create poll_speedtest.py:

#!/usr/bin/env python3

import sqlite3
import smtplib
from email.mime.text import MIMEText
from datetime import datetime, timedelta
import json
import os

# --- CONFIG ---
LINUX_DB = "/mnt/speedtest/speedtest_results.db"
MAC_DB = "/mnt/speedtest/mac_speedtest_results.db"

THRESHOLD = 100  # Mbps - set this to whatever you want
PERSIST_MINUTES = 30

STATE_FILE = os.path.expanduser("~/docker/speedtest/alert_state.json")  # expanduser() resolves the ~ when run from cron

# --- EMAIL CONFIG ---
GMAIL_USER = "your_email@gmail.com"
GMAIL_PASS = "your_app_password"   # A Gmail app password (Google Account > Security > App passwords), NOT your normal login password.
TO_EMAIL = "your_email@gmail.com"


# --- ALERTING ---
def send_alert(message):
    msg = MIMEText(message)
    msg["Subject"] = "Speedtest Alert"
    msg["From"] = GMAIL_USER
    msg["To"] = TO_EMAIL

    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls()
        server.login(GMAIL_USER, GMAIL_PASS)
        server.send_message(msg)


# --- STATE HANDLING ---
def load_state():
    if not os.path.exists(STATE_FILE):
        return {}
    with open(STATE_FILE, "r") as f:
        return json.load(f)


def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)


# --- DB HELPERS ---
def get_latest_download(db_path):
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    cursor.execute("SELECT timestamp, download FROM results ORDER BY timestamp DESC LIMIT 1")
    row = cursor.fetchone()
    conn.close()
    return row  # (timestamp, download_mbps)


# --- CHECK LOGIC ---
def check_source(name, db_path, state):
    row = get_latest_download(db_path)
    if not row:
        return

    ts_str, download = row
    ts = datetime.fromisoformat(ts_str)

    low_key = f"{name}_low_since"
    alert_key = f"{name}_alert_sent"

    if download < THRESHOLD:
        if low_key not in state:
            state[low_key] = ts_str
        else:
            low_since = datetime.fromisoformat(state[low_key])
            if datetime.now() - low_since >= timedelta(minutes=PERSIST_MINUTES):
                if not state.get(alert_key):
                    send_alert(f"{name} download has been below {THRESHOLD} Mbps for {PERSIST_MINUTES} minutes")
                    state[alert_key] = True
    else:
        state.pop(low_key, None)
        state.pop(alert_key, None)


# --- MAIN ---
def main():
    state = load_state()

    check_source("Linux", LINUX_DB, state)  
    check_source("Mac", MAC_DB, state)

    save_state(state)


if __name__ == "__main__":
    main()

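Fill in the Gmail settings, then give the script a dry run by hand. The state file is how it remembers that an outage is already in progress, so you can also peek at it to see what's being tracked:

python3 ~/docker/speedtest/poll_speedtest.py
cat ~/docker/speedtest/alert_state.json
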
On the Mac (in ~/speedtest) and on the Linux system (in ~/docker/speedtest), create a file called targets:

login.microsoftonline.com
facebook.com
8.8.8.8
1.1.1.1
google.com
cloudflare.com
192.168.86.1

Use any list of targets that makes sense to you. I included my router so I can detect any issues on my LAN.

The webpage

On the Linux system, in ~/docker/speedtest/templates, create index.html:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Internet Speed History</title>
    
    <!-- Chart.js -->
    <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
    
    <!-- Luxon for time parsing -->
    <script src="https://cdn.jsdelivr.net/npm/luxon@3/build/global/luxon.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/chartjs-adapter-luxon@1"></script>

    <style>
        body {
            font-family: sans-serif;
            background: #111;
            color: #eee;
            padding: 20px;
        }
        canvas {
            max-width: 100%;
        }
        #error-log {
            margin-top: 20px;
            color: red;
            white-space: pre-wrap;
        }
        #latest-speed {
            margin-bottom: 20px;
            font-size: 1.1em;
        }
        label, select {
            margin-bottom: 10px;
            display: block;
            font-size: 1em;
        }
    </style>
</head>
<body>

<h2>Download / Upload Speed (Mbps) - LINUX vs Mac</h2>

<div id="latest-speed"></div>

<label for="time-range">Time Range:</label>
<select id="time-range">
    <option value="6">Last 6 hours</option>
    <option value="12">Last 12 hours</option>
    <option value="24">Last 24 hours</option>
    <option value="168">Last 1 week</option>
    <option value="all" selected>All</option>
</select>

<canvas id="speedChart"></canvas>
<div id="error-log"></div>

<script>
function logError(message) {
    console.error(message);
    const logDiv = document.getElementById("error-log");
    logDiv.textContent += message + "\n";
}

// Global storage for original data and chart instance
let originalData = null;
let chartInstance = null;

// Filter dataset by hours
function filterDataByHours(data, hours) {
    if (hours === "all") return data;
    const cutoff = Date.now() - hours * 60 * 60 * 1000;
    return data.filter(r => new Date(r.x).getTime() >= cutoff);
}

// Update chart based on selected time range
function updateChart(hours) {
    if (!originalData || !chartInstance) return;

    chartInstance.data.datasets[0].data = filterDataByHours(originalData.nasDownload, hours);
    chartInstance.data.datasets[1].data = filterDataByHours(originalData.nasUpload, hours);
    chartInstance.data.datasets[2].data = filterDataByHours(originalData.macDownload, hours);
    chartInstance.data.datasets[3].data = filterDataByHours(originalData.macUpload, hours);

    chartInstance.update();
}

// Fetch data and initialize chart
fetch("/data")
    .then(res => {
        if (!res.ok) throw new Error("HTTP error " + res.status);
        return res.json();
    })
    .then(data => {
        if (!data.nas || !data.mac) {
            logError("Missing LINUX or Mac data in response.");
            return;
        }

        // Show latest speeds
        const latestLINUX = data.nas[data.nas.length - 1];
        const latestMac = data.mac[data.mac.length - 1];
        const latestDiv = document.getElementById("latest-speed");
        latestDiv.textContent = `Latest LINUX: Download ${latestLINUX.download.toFixed(2)} Mbps, Upload ${latestLINUX.upload.toFixed(2)} Mbps | `
                              + `Latest Mac: Download ${latestMac.download.toFixed(2)} Mbps, Upload ${latestMac.upload.toFixed(2)} Mbps`;

        // Store original data for filtering
        originalData = {
            nasDownload: data.nas.map(r => ({ x: new Date(r.timestamp), y: r.download })),
            nasUpload:   data.nas.map(r => ({ x: new Date(r.timestamp), y: r.upload })),
            macDownload: data.mac.map(r => ({ x: new Date(r.timestamp), y: r.download })),
            macUpload:   data.mac.map(r => ({ x: new Date(r.timestamp), y: r.upload }))
        };

        // Initialize Chart.js
        const ctx = document.getElementById("speedChart").getContext("2d");
        chartInstance = new Chart(ctx, {
            type: "line",
            data: {
                datasets: [
                    { label: "LINUX Download", data: originalData.nasDownload, borderColor: "lime", fill: false, tension: 0.3 },
                    { label: "LINUX Upload", data: originalData.nasUpload, borderColor: "cyan", fill: false, tension: 0.3 },
                    { label: "Mac Download", data: originalData.macDownload, borderColor: "#007bff", borderDash: [5,5], fill: false, tension: 0.3 },
                    { label: "Mac Upload", data: originalData.macUpload, borderColor: "red", borderDash: [5,5], fill: false, tension: 0.3 }
                ]
            },
            options: {
                responsive: true,
                scales: {
                    x: {
                        type: "time",
                        time: {
                            tooltipFormat: "MMM d, yyyy HH:mm:ss",
                            displayFormats: { minute: "HH:mm", hour: "MMM d HH:mm", day: "MMM d" }
                        },
                        title: { display: true, text: "Timestamp" }
                    },
                    y: {
                        beginAtZero: true,
                        title: { display: true, text: "Mbps" }
                    }
                },
                interaction: { mode: 'nearest', intersect: false }
            }
        });

        // Handle dropdown changes
        document.getElementById("time-range").addEventListener("change", (e) => {
            updateChart(e.target.value);
        });
    })
    .catch(err => {
        logError("Failed to load or parse /data: " + err.message);
    });
</script>

</body>
</html>

Now, in the same templates directory, create fping.html:

<!DOCTYPE html>
<html>
<head>
    <title>FPing Combined Latency</title>
    <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/chartjs-adapter-date-fns"></script>
    <style>
        body { font-family: Arial; margin: 20px; background: #f4f4f4; }
        h1 { margin-bottom: 20px; }
        .chart-container {
            width: 100%;
            height: 500px;
            background: white;
            padding: 20px;
            box-shadow: 0 0 5px rgba(0,0,0,0.1);
        }
        .selector {
            margin-bottom: 20px;
            padding: 10px;
            background: white;
            display: inline-block;
            box-shadow: 0 0 5px rgba(0,0,0,0.1);
        }
    </style>
</head>
<body>

<h1>FPing Combined Latency (Linux + Mac)</h1>
<p><a class="nav-link" href="/">Back to home โ†’</a></p>


<!-- Target selector -->
<div class="selector">
    <label for="targetSelect"><strong>Select Target:</strong></label>
    <select id="targetSelect" onchange="goToTarget()" style="padding: 5px; margin-left: 10px;">
        <option value="">-- choose a target --</option>
        {% for t in targets %}
            <option value="{{ t }}">{{ t }}</option>
        {% endfor %}
    </select>
</div>

<script>
function goToTarget() {
    const t = document.getElementById("targetSelect").value;
    if (t) {
        window.location.href = "/fping/" + encodeURIComponent(t);
    }
}
</script>

<div class="chart-container">
    <canvas id="combinedChart"></canvas>
</div>

<script>
async function loadData() {
    const res = await fetch("/fping/data");
    return await res.json();
}

function buildDataset(rows, label, field, color, dash = []) {
    return {
        label: label + " " + field,
        data: rows.map(r => ({ x: r.timestamp, y: r[field] })),
        borderColor: color,
        borderDash: dash,
        fill: false,
        tension: 0.1
    };
}

async function drawChart() {
    const data = await loadData();

    const linux = data.linux;
    const mac = data.mac;

    new Chart(document.getElementById("combinedChart"), {
        type: "line",
        data: {
            datasets: [
                buildDataset(linux, "Linux", "min", "green"),
                buildDataset(linux, "Linux", "avg", "blue"),
                buildDataset(linux, "Linux", "max", "red"),
                buildDataset(mac, "Mac", "min", "purple"),
                buildDataset(mac, "Mac", "avg", "orange"),
                buildDataset(mac, "Mac", "max", "brown")
            ]
        },
        options: {
            responsive: true,
            scales: {
                x: {
                    type: "time",
                    time: { unit: "minute" },
                    title: { display: true, text: "Time" }
                },
                y: {
                    title: { display: true, text: "Latency (ms)" }
                }
            },
            plugins: {
                legend: {
                    labels: {
                        usePointStyle: true
                    }
                }
            }
        }
    });
}

drawChart();
setInterval(drawChart, 30000);
</script>

</body>
</html>

and, finally, create fping_target.html:

<!DOCTYPE html>
<html>
<head>
    <title>FPing: {{ target }}</title>

    <!-- Chart.js + date adapter -->
    <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/chartjs-adapter-date-fns"></script>

    <style>
        body {
            font-family: Arial, sans-serif;
            margin: 20px;
            background: #f4f4f4;
        }
        h1 {
            margin-bottom: 20px;
        }
        .chart-container {
            width: 100%;
            background: white;
            padding: 20px;
            margin-bottom: 40px;
            box-shadow: 0 0 5px rgba(0,0,0,0.1);
        }
        .selector {
            margin-bottom: 20px;
            padding: 10px;
            background: white;
            display: inline-block;
            box-shadow: 0 0 5px rgba(0,0,0,0.1);
        }
    </style>
</head>

<body>

<h1>FPing Target: {{ target }}</h1>
<p><a class="nav-link" href="/">Back to home โ†’</a></p>

<!-- Target selector -->
<div class="selector">
    <label for="targetSelect"><strong>Select Target:</strong></label>
    <select id="targetSelect" onchange="goToTarget()" style="padding: 5px; margin-left: 10px;">
        {% for t in targets %}
            <option value="{{ t }}" {% if t == target %}selected{% endif %}>{{ t }}</option>
        {% endfor %}
    </select>
</div>

<!-- Time-range selector -->
<div class="selector">
    <label for="timeRange"><strong>Time Range:</strong></label>
    <select id="timeRange" onchange="drawCharts()" style="padding: 5px; margin-left: 10px;">
        <option value="6h" selected>Last 6 hours</option>
        <option value="12h">Last 12 hours</option>
        <option value="24h">Last 24 hours</option>
        <option value="1w">Last 1 week</option>
        <option value="all">All</option>
    </select>
</div>

<script>
function goToTarget() {
    const t = document.getElementById("targetSelect").value;
    if (t) {
        window.location.href = "/fping/" + encodeURIComponent(t);
    }
}

// -----------------------------
// Data loading
// -----------------------------
async function loadData() {
    const res = await fetch("/fping/{{ target }}/data");
    return await res.json();
}

// -----------------------------
// Time-range filtering for ISO timestamps
// -----------------------------
function rangeToMs(range) {
    const now = Date.now();
    switch(range) {
        case "6h":  return now - 6 * 60 * 60 * 1000;
        case "12h": return now - 12 * 60 * 60 * 1000;
        case "24h": return now - 24 * 60 * 60 * 1000;
        case "1w":  return now - 7 * 24 * 60 * 60 * 1000;
        case "all": return 0;
        default:    return 0;
    }
}

function filterByRange(rows) {
    const minTime = rangeToMs(document.getElementById("timeRange").value);
    if (minTime === 0) return rows; // "all" selected

    return rows.filter(r => {
        const ts = new Date(r.timestamp).getTime(); // parse ISO string โ†’ ms
        return ts >= minTime;
    });
}

// -----------------------------
// Choose x-axis unit based on range
// -----------------------------
function xAxisUnit() {
    const range = document.getElementById("timeRange").value;
    switch(range) {
        case "6h":
        case "12h":
            return "minute";
        case "24h":
            return "hour";
        case "1w":
        case "all":
            return "day";
        default:
            return "hour";
    }
}

// -----------------------------
// Dataset builders
// -----------------------------
function jitterBand(rows, color) {
    return {
        label: "Jitter Band",
        data: rows.map(r => ({ x: new Date(r.timestamp).getTime(), y: r.max })),
        backgroundColor: color,
        borderWidth: 0,
        fill: {
            target: {
                data: rows.map(r => ({ x: new Date(r.timestamp).getTime(), y: r.min }))
            },
            above: color,
            below: color
        }
    };
}

function line(rows, label, field, color) {
    return {
        label: label,
        data: rows.map(r => ({ x: new Date(r.timestamp).getTime(), y: r[field] })),
        borderColor: color,
        borderWidth: 2,
        fill: false,
        tension: 0.1
    };
}

// -----------------------------
// Draw charts
// -----------------------------
async function drawCharts() {
    const data = await loadData();
    const linux = filterByRange(data.linux);
    const mac = filterByRange(data.mac);

    // Destroy previous charts if they exist
    if(window.latencyChartInstance) window.latencyChartInstance.destroy();
    if(window.lossChartInstance) window.lossChartInstance.destroy();

    // LATENCY CHART
    window.latencyChartInstance = new Chart(document.getElementById("latencyChart"), {
        type: "line",
        data: {
            datasets: [
                jitterBand(linux, "rgba(0,128,0,0.15)"),
                line(linux, "Linux Avg", "avg", "green"),
                jitterBand(mac, "rgba(128,0,128,0.15)"),
                line(mac, "Mac Avg", "avg", "purple")
            ]
        },
        options: {
            responsive: true,
            scales: {
                x: { type: "time", time: { unit: xAxisUnit() } },
                y: { title: { display: true, text: "Latency (ms)" } }
            }
        }
    });

    // PACKET LOSS CHART
    window.lossChartInstance = new Chart(document.getElementById("lossChart"), {
        type: "bar",
        data: {
            datasets: [
                { label: "Linux Loss (%)", data: linux.map(r => ({ x: new Date(r.timestamp).getTime(), y: r.loss })), backgroundColor: "rgba(255,0,0,0.5)" },
                { label: "Mac Loss (%)",   data: mac.map(r => ({ x: new Date(r.timestamp).getTime(), y: r.loss })),   backgroundColor: "rgba(255,165,0,0.5)" }
            ]
        },
        options: {
            responsive: true,
            scales: {
                x: { type: "time", time: { unit: xAxisUnit() } },
                y: { title: { display: true, text: "Loss (%)" }, min: 0, max: 100 }
            }
        }
    });
}

// Initial draw
drawCharts();
</script>

<div class="chart-container">
    <h3>Latency (Linux + Mac)</h3>
    <canvas id="latencyChart"></canvas>
</div>

<div class="chart-container">
    <h3>Packet Loss (%)</h3>
    <canvas id="lossChart"></canvas>
</div>

</body>
</html>

Pulling it all together

Now that we have all the pieces in place, let's get everything working. Start by using cron on the Mac and Linux systems to gather speedtest stats. On the Mac, run crontab -e and add this:

1,31 * * * * /usr/bin/python3 ~/speedtest/speed_test.py

And on the Linux system, add this:

*/30 * * * * /usr/bin/python3 ~/docker/speedtest/speed_test.py

These two lines will run approximately every 30 minutes and start populating the Mac and Linux databases with performance numbers. I stagger them by a minute so they don't both run at the same time.

Add these lines to gather fping stats. Again, I stagger them so they don't interfere with each other:

Linux

*/6 * * * * /usr/bin/python3 ~/docker/speedtest/fping_poll.py

Mac

*/7 * * * * /usr/bin/python3 ~/speedtest/fping_poll.py

Now, on the Linux system, add this line. It will run every 5 minutes and alert you if the bandwidth drops below the THRESHOLD you set in the script:

*/5 * * * * /usr/bin/python3 ~/docker/speedtest/poll_speedtest.py
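
A quick way to confirm the cron jobs are firing is to list the crontab and watch the row count grow in the databases (Linux paths shown; the Mac databases sit on the same NFS share):

crontab -l
sqlite3 /mnt/speedtest/speedtest_results.db "SELECT COUNT(*), MAX(timestamp) FROM results;"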

Bring up the webpage

Now that you're collecting stats, you can bring up the webpage. You may want to wait until a bunch of stats are collected. To start up the webpage, from within ~/docker/speedtest run:

docker compose up --build -d

You can now browse to http://<YOUR_LINUX_SYSTEM>:9099 to see your stats.

📜 View Logs

docker logs netspeed-web

If you find my content useful, please consider supporting this page:

☕ Buy Me a Coffee