
Organization of ShinyDeploy

(Martijn Schuemie) #1

In the past few months the number of apps on OHDSI’s Shiny server has grown from 5 to 18. Although that is a welcome development, I’m not sure all these apps have similar life spans, or similar requirements over those life spans. Some apps, like SystematicEvidence and AhasHfBkleAmputation, are supplements to published papers and must be kept alive for eternity. For other apps, like oxfordStudyathonData1, I wonder whether they need to be kept running for very long.

I see several issues with allowing unlimited unorganized growth of the number of apps:

  1. Hard to find a specific app (although this is probably not a big issue, since, for example, papers should link directly to a specific app).

  2. Increased resource requirements (some of these apps are really big in terms of disk space or memory requirements). I think it is even possible for one app to bring the entire server down, knocking out all others.

  3. Incompatible requirements on dependencies. All these apps run in a single R instance, so if packages are updated, the apps may need to be updated too. For example, I already had to make changes to SystematicEvidence to accommodate changes in ggplot2. With hundreds of apps that doesn’t seem feasible.

Perhaps we should have several Shiny servers? One for long-term apps, one for those supporting ongoing studies, and one sandbox for development purposes?

@lee_evans: any thoughts?

(Vojtech Huser) #2

I like very much that there is an OHDSI shiny server. I agree with putting some rules around it.

Apps that fit within the free tier can use shinyapps.io
(like this app here, which didn’t make the cut when I tried to use the OHDSI server: too much tidyverse and other problems).

(Lee Evans) #3

@schuemie we could implement the open source Shinyproxy solution on the OHDSI cloud to address points 2 and 3.

Shinyproxy uses Docker containers to isolate the server resources and R/R package version dependencies for each Shiny app.

Shinyproxy requires a Dockerfile for each Shiny app to specify its server resources & dependencies. I could create Dockerfiles for the existing OHDSI Shiny apps as working examples. The existing Shiny apps would retain their current URLs.
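For illustration, a per-app Dockerfile could be as small as the following. This is a hypothetical sketch, not the actual OHDSI setup: the app name, base image, and package list are all assumptions.

```dockerfile
# Hypothetical Dockerfile for one Shiny app (app name "MyShinyApp" is made up)
FROM rocker/shiny:3.6.1

# Pin exactly the R packages this app needs, isolated from every other app
RUN R -e "install.packages(c('shiny', 'ggplot2'), repos = 'https://cloud.r-project.org')"

# Copy the app code into the image
COPY ./MyShinyApp /srv/shiny-server/MyShinyApp

# Shinyproxy launches a container per user session and proxies to this port
EXPOSE 3838
CMD ["R", "-e", "shiny::runApp('/srv/shiny-server/MyShinyApp', port = 3838, host = '0.0.0.0')"]
```

Because each image pins its own base R version and package versions, updating ggplot2 for one app can never break another.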

I would also need to update the OHDSI Jenkins build process to use the Shiny app Dockerfiles as part of the automated build/deployment from the ShinyDeploy GitHub repo.

Additional benefits of deploying Shinyproxy are the ability to support more simultaneous users for each Shiny app, and optional support for LDAP/TLS authentication if we need some future Shiny app to be limited access.

Regarding point 1 we could use the Shinyproxy landing page to provide an attractive HTML page with a brief introduction to the data.ohdsi.org site and the URL link (and short description) for each OHDSI Shiny app.
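For a concrete idea of what drives that landing page: the per-app entries come from Shinyproxy’s application.yml configuration. A hypothetical sketch (app IDs, descriptions, and image names are made up for illustration):

```yaml
# Hypothetical Shinyproxy application.yml fragment
proxy:
  title: OHDSI Shiny Apps
  landing-page: /    # show the app index at data.ohdsi.org
  specs:
    - id: SystematicEvidence
      display-name: Systematic Evidence
      description: Supplement to a published paper
      container-image: ohdsi/systematic-evidence:1.0
```

Each entry in `specs` yields one link (with its short description) on the landing page.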

If you think implementing Shinyproxy on data.ohdsi.org is worth pursuing, I can follow up with @hripcsa and @Patrick_Ryan about funding for the additional OHDSI cloud infrastructure & the deployment/support activities.

(Vojtech Huser) #4

This is a great proposal, Lee.
Isolating the apps from each other is a good approach.

(Martijn Schuemie) #5

I certainly like the idea of separate containers for the long-term apps. Would it make sense to have a mix? Docker containers for ‘frozen’ apps that are not expected to ever change, and a shared Shiny server (perhaps in a single docker container) for the shorter-term apps to allow for more flexibility when making changes?

(Lee Evans) #6


I’d say we should aim to support the following scenarios:

  • A ‘frozen’ app still requires an older version of an R package/OS dependency.
  • Different shorter-term deployed Shiny apps require conflicting versions of an R package/OS dependency (e.g. Java versions).
  • Shiny app developers need to install their own R packages/OS dependencies in a Shiny server, for development agility, but without impacting other developers/existing apps.
  • The Shiny server(s) need to be able to serve multiple Shiny apps to multiple users simultaneously without blocking/queuing user requests.
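The conflicting-versions scenario in particular falls out naturally from one image per app: each spec pins its own container image, so two apps can happily require different Java or R versions. A hypothetical sketch (app IDs and image tags invented for illustration):

```yaml
# Hypothetical: each app pins its own image, so dependencies never conflict
proxy:
  specs:
    - id: frozen-paper-app
      container-image: ohdsi/frozen-paper-app:r3.4-java8    # old, frozen stack
    - id: ongoing-study-app
      container-image: ohdsi/ongoing-study-app:r3.6-java11  # current stack
```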

I’ll follow up with @hripcsa & @Patrick_Ryan on this topic after we complete the OHDSI cloud upgrade of Atlas to v2.7.0, which I expect will happen later this week.