Thanks for sharing the article with the Holoviz community. I didn’t know it existed!
I’ll caveat my response by noting that I have less than one year of experience and that my applications have never had heavy traffic.

For use cases without steady traffic, Cloud Run is far cheaper than something like a Heroku dyno, with a generous free tier; I haven’t yet exceeded the free tier in my usage. The downside of Cloud Run is a bit of latency (1-2 seconds, in my experience) on the first request after the application has scaled down to zero instances. You can set the minimum number of instances to 1, but that negates the pricing advantage. The upside is that, depending on your average container uptime, you can generally provision much higher vCPU and memory to improve performance, since billing is based on vCPU-seconds (CPU) and GiB-seconds (memory).

For a large use case where you expect the container instances to rarely scale down to zero, it’s considerably cheaper to have a dedicated VM on GCP or Heroku, especially since a data science-related application will typically require significant RAM/CPU. GCP has a very useful calculator if you are curious to run some numbers through it: Google Cloud Platform Pricing Calculator
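To make the vCPU-second/GiB-second billing concrete, here is a back-of-the-envelope sketch. The per-unit rates and free-tier allowances below are illustrative placeholders, not authoritative prices; check the pricing calculator for current numbers.

```python
# Illustrative Cloud Run cost sketch. All rates and free-tier
# allowances are assumptions for demonstration -- check the GCP
# pricing calculator for the real, current figures.
CPU_RATE = 0.000024      # USD per vCPU-second (assumed)
MEM_RATE = 0.0000025     # USD per GiB-second (assumed)
FREE_CPU_S = 180_000     # free vCPU-seconds per month (assumed)
FREE_MEM_S = 360_000     # free GiB-seconds per month (assumed)


def monthly_cost(vcpus: float, mem_gib: float, busy_seconds: float) -> float:
    """Billable monthly cost for a container serving traffic
    busy_seconds per month; idle (scaled-to-zero) time is free."""
    billable_cpu = max(0.0, vcpus * busy_seconds - FREE_CPU_S)
    billable_mem = max(0.0, mem_gib * busy_seconds - FREE_MEM_S)
    return billable_cpu * CPU_RATE + billable_mem * MEM_RATE


# A bursty app busy roughly an hour a day stays inside the free tier...
print(monthly_cost(vcpus=1, mem_gib=2, busy_seconds=30 * 3600))    # 0.0
# ...while the same container kept busy around the clock does not.
print(monthly_cost(vcpus=1, mem_gib=2, busy_seconds=30 * 86400))
```

The same function also shows why a dedicated VM wins once uptime approaches 100%: the second call lands well above typical small-VM pricing.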
When I started researching for this article, I had hoped to end up with a few commands that easily deploy an application, so I am definitely interested in taking a stab at that soon. I should note that once the initial GCP project setup is complete, each subsequent deployment is a single command: `gcloud builds submit`. I foresee a few challenges in automating the initial setup: I haven’t yet found a way to set the required project permissions from a script, which is why the first couple of steps rely on the GCP console. I’m not entirely sure whether that is by design, since those permissions have implications on billing. I’ll update here when I have something more streamlined!
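For reference, the repeat-deployment step can be sketched like this. The project ID, service name, and region are placeholders, and the `gcloud run deploy` flags assume the Cloud Run service was already created during the initial console setup:

```shell
# Build the container with Cloud Build and push it to the project's
# registry (reads the Dockerfile / cloudbuild.yaml in the current dir).
gcloud builds submit --tag gcr.io/my-project/my-panel-app

# Roll out the freshly built image. --min-instances 0 keeps the
# scale-to-zero pricing, at the cost of the cold-start latency
# mentioned above.
gcloud run deploy my-panel-app \
  --image gcr.io/my-project/my-panel-app \
  --region us-central1 \
  --allow-unauthenticated \
  --min-instances 0
```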