
ShinyProxy Monitoring

Summary

This repository provides all resources required for setting up comprehensive monitoring of ShinyProxy on Kubernetes. The setup uses Loki (together with Alloy) for collecting logs of ShinyProxy, the ShinyProxy Operator and any app running in ShinyProxy. Prometheus is used for gathering metrics of ShinyProxy and the apps (i.e. the resources used by the apps). The setup also includes Grafana, together with nine dashboards for visualizing all logs and metrics.

The retention of both Loki and Prometheus is set to 90 days.
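For reference, a 90-day retention corresponds to settings of the following shape (a sketch only; where exactly these options live in this stack's overlays may differ, so treat the keys below as illustrative of Loki's `limits_config`/`compactor` options and the prometheus-operator's `Prometheus` custom resource):

```yaml
# Loki: retention is enforced by the compactor.
limits_config:
  retention_period: 90d
compactor:
  retention_enabled: true

# Prometheus (spec of the Prometheus custom resource):
spec:
  retention: 90d
```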

Overview of dashboards

Starting with version 3.2.0 of the stack, all documentation has been added to the dashboards. See the Description panel at the top of every dashboard and the info icons of every panel for detailed information.

ShinyProxy Usage


  • Datasource: Prometheus
  • Goal: provide insight into the current usage and performance of ShinyProxy.

Note: the last three panels of this dashboard can sometimes show a value that is too high, e.g. the app crashes panel could list two app crashes while in reality only a single app crashed. This is caused by a limitation in Prometheus.

ShinyProxy Aggregated Usage


  • Datasource: Prometheus
  • Goal: provide insight into the long-term usage and performance of ShinyProxy.

ShinyProxy logs


  • Datasource: Loki
  • Goal: show the logs of the ShinyProxy server

Note: This requires ShinyProxy to log using the JSON format.

ShinyProxy Operator Logs


  • Datasource: Loki
  • Goal: show the logs of the ShinyProxy Operator

Note: Promtail is configured to recognize when Java outputs a stack trace, so the entire trace is collected as a single log message. This could be further improved by adding an option to the ShinyProxy Operator to log in JSON format.

ShinyProxy App Logs


  • Datasource: Loki
  • Goal: show the logs of any app started by ShinyProxy.

Note: this dashboard also works when apps are run in different namespaces than the namespace of the ShinyProxy server. As an example, the Dash application in ShinyProxy 1 runs in a different namespace.

Note: this dashboard also shows parts of the ShinyProxy log that are relevant for this app.
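In Grafana's Explore view, the same logs can also be queried directly with LogQL. A minimal sketch, assuming standard Kubernetes labels; the `shinyproxy` namespace and the `sp-.*` pod-name pattern are illustrative, and the actual labels depend on the Alloy configuration:

```logql
{namespace="shinyproxy", pod=~"sp-.*"} |= "error"
```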

ShinyProxy App Resources


  • Datasource: Prometheus
  • Goal: show the resources (CPU, Memory, Network) used by any app started by ShinyProxy.

Note: this dashboard also works when apps are run in different namespaces than the namespace of the ShinyProxy server. As an example, the Dash application in ShinyProxy 1 runs in a different namespace.

ShinyProxy Delegate App Logs


ShinyProxy Delegate App Resources


ShinyProxy Seats


How it works

Loki + Alloy

Both Loki and Alloy are used to collect the logs for all relevant dashboards. The upstream Loki helm chart is used. No tweaks are needed to make it work with ShinyProxy. In contrast, the configuration of Alloy must be changed such that it collects the correct metadata. See the overlays/alloy/configs/config.alloy file.
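As an illustration of the kind of configuration involved (not the actual contents of `config.alloy`, and with an assumed Loki push URL), a minimal Alloy pipeline that discovers pods via the Kubernetes API, tails their logs, and pushes them to Loki looks roughly like:

```alloy
// Discover all pods through the Kubernetes API.
discovery.kubernetes "pods" {
  role = "pod"
}

// Tail the logs of the discovered pods.
loki.source.kubernetes "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [loki.write.default.receiver]
}

// Push collected logs to Loki (URL is an assumption for this sketch).
loki.write "default" {
  endpoint {
    url = "http://loki-gateway.loki.svc/loki/api/v1/push"
  }
}
```

The real configuration additionally attaches the metadata (labels) that the dashboards rely on.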

Prometheus

The Prometheus setup is based on the kube-prometheus stack.

Grafana

The following changes are made to the configuration of Grafana:

Prometheus

The changes to the Prometheus config are:

Getting started

This section demonstrates how to set up this stack in minikube.

  1. Start minikube

    minikube start --addons=metrics-server,ingress
  2. Configure web access to the cluster. First get the IP of minikube using:

    minikube ip

    Next, add the following entries to /etc/hosts, replacing MINIKUBE_IP with the output of the previous command:

    MINIKUBE_IP       grafana.shinyproxy-demo.local
    MINIKUBE_IP       shinyproxy-demo.local
    
  3. Set up Loki

    cd overlays/loki
    kustomize build | kubectl apply --server-side -f - 
    cd ../..

    Note: re-run the command if it fails because it cannot find some CRDs.

  4. Set up Alloy

    cd overlays/alloy
    kustomize build | kubectl apply --server-side -f - 
    cd ../..
  5. Set up Prometheus and Grafana

    cd overlays/monitoring
    kustomize build | kubectl apply --server-side -f - 
    cd ../..

    Note: re-run the command if it fails because it cannot find some CRDs.

  6. Set up the demo ShinyProxy Operator deployment:

    cd overlays/shinyproxy
    kustomize build | kubectl apply --server-side -f - 
    cd ../shinyproxy1-app
    kustomize build | kubectl apply --server-side -f - 

    Note: re-run the command if it fails because it cannot find some CRDs.
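The "re-run if it cannot find some CRDs" notes above can be handled with a small retry helper; a sketch in plain POSIX shell (the overlay paths and commands are the ones from the steps above):

```shell
# Run a command, retrying once after a short delay if it fails.
# Useful because `kubectl apply` can race CRDs created by the same apply,
# which causes the "cannot find some CRDs" failures mentioned above.
retry_once() {
  "$@" || { sleep "${RETRY_DELAY:-10}"; "$@"; }
}

# Usage against one overlay (assumes kustomize and kubectl on PATH):
#   cd overlays/loki
#   retry_once sh -c 'kustomize build | kubectl apply --server-side -f -'
```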

You can now log in to ShinyProxy at http://shinyproxy-demo.local/shinyproxy1 and http://shinyproxy-demo.local/shinyproxy2 with the users jack and jeff (both have password as password). You can log in to Grafana at http://grafana.shinyproxy-demo.local with admin as both username and password.

Upgrading

This repository uses the same version numbers as ShinyProxy. Always use the same version of ShinyProxy and this repository.

Upgrade to 3.1.0

In release 3.1.0 of this repository, all components were upgraded. In order to maintain your logs and metrics, it's important to take the following steps when updating:

  • edit line 50 of overlays/loki/configs/config.yaml: change the day to be one day after you upgrade Loki. E.g. if you update this on 2024-03-25 (25 March 2024), change the date to 2024-03-26. If you do not modify this line, you will no longer be able to access logs from before the upgrade. See the Loki docs for more information.

Upgrade to 3.2.0

This release contains the following improvements:

  • replace Promtail with Alloy (for collecting logs); Alloy runs as a single pod instead of a daemonset, which reduces the overall resource consumption of the monitoring stack
  • include documentation in all dashboards
  • allow selecting multiple apps/instances in app logs and resources dashboards
  • allow searching in log dashboards
  • make it easier to use the app resources dashboard
  • show user, app id and instance in dashboards (only when a single id is selected)
  • show app state in dashboards
  • remove dependency on kube-state-metrics
  • update all components

In order to upgrade an existing stack:

  • edit line 105 of overlays/loki/configs/config.yaml: change the day to be one day after you upgrade Loki. E.g. if you update this on 2025-07-03 (3 July 2025), change the date to 2025-07-04. If you do not modify this line, you will no longer be able to access logs from before the upgrade. See the Loki docs for more information.

  • apply the manifests of all components:

    cd overlays/loki
    kustomize build | kubectl apply --server-side -f -
    cd ../..
    cd overlays/alloy
    kustomize build | kubectl apply --server-side -f -
    cd ../..
    cd overlays/monitoring
    kustomize build | kubectl apply --server-side -f -
    cd ../..
  • once everything is running, remove Promtail:

    kubectl delete -n loki ds promtail
    kubectl delete -n loki sa promtail
    kubectl delete -n loki secret promtail
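The date edit in the first step above changes the `from:` date of the newest period in Loki's `schema_config`. A sketch of what such a section looks like; the `store`, `object_store`, `schema`, and `index` values are illustrative and must match what your existing config.yaml already uses:

```yaml
schema_config:
  configs:
    - from: "2020-01-01"   # existing period: leave untouched
      store: boltdb-shipper
      object_store: filesystem
      schema: v12
      index:
        prefix: index_
        period: 24h
    - from: "2025-07-04"   # new period: one day after the upgrade
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h
```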

After following these steps, both the old and the new logs are available through Grafana.

Docker support

The resources in this repository can only be used on Kubernetes. However, a very similar stack can be deployed on plain Docker hosts by using the ShinyProxy Operator.

(c) Copyright Open Analytics NV, 2022-2025.