Plotting millions of points is unusably slow

The page Accelerating with WebGL — Bokeh 2.4.2 Documentation shows a “large” example of 10K points, while I want to plot millions. At that scale it completely breaks down: simply run the listed example with a million points to see what I mean. In my own code performance is even worse.
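For reference, what I ran is essentially the docs example scaled up by 100x (a sketch, assuming the standard noisy-sine scatter from that page; only `N` is changed):

```python
import numpy as np
from bokeh.plotting import figure, show

N = 1_000_000  # the docs example uses 10_000
x = np.random.normal(0, np.pi, N)
y = np.sin(x) + np.random.normal(0, 0.2, N)

p = figure(output_backend="webgl", title=f"{N} points")
p.scatter(x, y, alpha=0.1)
# show(p)  # opens the plot in a browser; panning/zooming is where it chokes
```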

Is there anything that can be done to improve the situation? I’m totally fine sacrificing any alpha blending or antialiasing it might be doing for speed.

I tried the same in Plotly. It was not without issues, but after disabling some interactive elements its WebGL backend was fairly performant (see Evaluating GUI toolkits: Flutter, React, Elm, Yew, ClojureScript, Kotlin - Wishful Coding). I’m not sure where the difference in performance comes from.

I’ve tried level-of-detail scaling (Configuring plot tools — Bokeh 2.4.2 Documentation), but it looks like the WebGL backend doesn’t support it, and with the default backend it still slows to a crawl even when downsampling to 10 points.
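The LOD settings I tried were along these lines (the `lod_*` properties come from the Bokeh `Plot` model; the specific values here are just what I experimented with):

```python
from bokeh.plotting import figure

p = figure(
    lod_threshold=1000,  # apply LOD decimation above this many points
    lod_factor=100,      # decimation factor while an interactive tool is active
    lod_interval=300,    # ms window after a tool event during which LOD stays on
    lod_timeout=500,     # how often (ms) to check whether interaction has stopped
)
```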

The documentation also mentions somewhere that server-side downsampling could be the way to go, in which case my question is: how do I provide additional level of detail depending on the zoom level? When zoomed out you don’t care about all 1M points, but they are there for a reason, so zooming in should reveal them at full resolution.

I believe Datashader (https://datashader.org/) is the de facto go-to for this.

Also note that HoloViews is a good option for integrating Datashader and Bokeh together at a high level: Working with large data using datashader — HoloViews v1.14.8

> The documentation also mentions somewhere that server-side downsampling could be the way to go, in which case my question is: how do I provide additional level of detail depending on the zoom level?

This is not built in; the comment is only meant to indicate that the necessary events and features are present for anyone to implement their own downsampling scheme suited to their specialized use case.
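To make that concrete, here is one possible hand-rolled scheme for a Bokeh server app. Nothing here is a built-in Bokeh feature beyond the range `on_change` events; `downsample`, `MAX_POINTS`, and the stride strategy are all illustrative choices:

```python
import numpy as np
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure, curdoc

N = 1_000_000
x = np.sort(np.random.uniform(0, 100, N))  # sorted so we can slice by x-range
y = np.random.normal(size=N)

MAX_POINTS = 10_000  # point budget for any single view

def downsample(x0, x1):
    """Return at most ~MAX_POINTS points from the visible x-range."""
    i0, i1 = np.searchsorted(x, [x0, x1])
    step = max(1, (i1 - i0) // MAX_POINTS)
    return x[i0:i1:step], y[i0:i1:step]

xs, ys = downsample(0, 100)
source = ColumnDataSource(data={"x": xs, "y": ys})
p = figure(x_range=(0, 100))
p.line("x", "y", source=source)

def update(attr, old, new):
    # Re-slice at the new zoom level: zooming in narrows [start, end],
    # so the stride shrinks and full resolution comes back.
    xs, ys = downsample(p.x_range.start, p.x_range.end)
    source.data = {"x": xs, "y": ys}

p.x_range.on_change("start", update)
p.x_range.on_change("end", update)
curdoc().add_root(p)
```

Run it with `bokeh serve`; each zoom or pan triggers the range callbacks on the server, which swap in a slice of the data decimated to the current view.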