Resolve axis screen coordinates?

I need to be able to resolve the offset from figure edge to the main axis lines in screen pixels, so I can make some accurate transforms between data and screen coords.

Are the internal functions for data to screen space conversion public?

Or is there a way to tell how much of the overall figure size is used as border by the axes and axis text?

Hi,


It's possible from JavaScript or CustomJS, though most of those APIs aren't stable or are private. In Python there are Plot.inner_width and Plot.inner_height, which give you the plot frame dimensions, but nothing more. It would be much better to know what actual problem you are trying to solve before giving any advice. There may already be tools in Bokeh that let you align things the way you want; for example, annotations can usually be positioned in both data and screen space.

Mateusz

You received this message because you are subscribed to the Google Groups “Bokeh Discussion - Public” group.

To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].

To post to this group, send email to [email protected].

To view this discussion on the web visit https://groups.google.com/a/continuum.io/d/msgid/bokeh/d04d2aba-a6a3-4e8a-a51e-244b9e6f6e17%40continuum.io.

For more options, visit https://groups.google.com/a/continuum.io/d/optout.

It should also be noted that Plot.inner_width and Plot.inner_height only function in the context of a Bokeh server application. I agree with Mateusz that we would have a much better chance of providing helpful guidance if you described what you are actually trying to accomplish at a high level, rather than jumping to a particular implementation assumption.

Thanks,

Bryan


I greatly appreciate the replies to my question here.

TL;DR: so I can build a data-to-screen-space transform and natively render data onto an exported PNG of the base axis image, for maximum frame-generation performance.

Long version…

As I am implementing a standalone Python application, I don't think the APIs available on Bokeh server apply.

I will, however, give the context of what I am up to, if only to sate curiosity. It's a wall of text, but I am quite happy with my end result, so I'm happy to share.

I have a working implementation now, so I was just looking for a more graceful solution rather than anything on the critical path.

As discussed previously, I am using Bokeh to generate frames of an animation from data sets extracted from logs. Thousands of frames may be necessary for each video.

I am quite familiar with Bokeh, using it for various use cases where interactive HTML-hosted graphs are the end target, so I likely used what I was used to rather than what was most suitable.

My pipeline: export_png → Selenium → webdriver → headless Chrome → PNG file → ffmpeg over the collection of PNGs.

I had already been through a performance pass, going multi-process and ensuring that each worker process established only one webdriver over its lifetime, to avoid create/destroy overheads on each frame render.

That got my test data set's processing time from ~330 seconds to ~40 seconds, which in itself was a great improvement and meant the pipeline was practical.
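The one-driver-per-worker pattern can be sketched with a multiprocessing pool initializer. This is only a sketch: the "driver" here is a placeholder dict, where the real pipeline would create a Selenium Chrome webdriver once per worker and hand it to Bokeh's export_png (the exact export_png keyword depends on your Bokeh version, so treat that as an assumption):

```python
import multiprocessing as mp

# Module-level slot for the per-worker resource. In the real pipeline this
# would hold a Selenium webdriver created once per worker process.
_driver = None

def init_worker():
    """Runs once per worker process; creates the expensive resource once."""
    global _driver
    # Placeholder resource; swap in e.g. webdriver.Chrome(options=...) here.
    _driver = {"created": True}

def render_frame(frame_index):
    """Reuses the per-worker resource instead of recreating it per frame."""
    # In the real pipeline: export_png(plot, filename=..., webdriver=_driver)
    return (frame_index, _driver is not None)

if __name__ == "__main__":
    with mp.Pool(processes=2, initializer=init_worker) as pool:
        results = pool.map(render_frame, range(8))
    print(all(ok for _, ok in results))  # True: every frame saw a live driver
```

The key point is that the create/destroy cost is paid once per worker lifetime, not once per frame.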

I will have a relatively large number of animation jobs being submitted to a backend for processing into video, so I need to care about system load and aggregate time to render.

Once I had that in production, I thought about the pipeline some more and took another pass at the optimization.

I wanted to move to a more native rendering pipeline so I didn't have to push each frame through the whole Selenium/Chrome stack for image generation.

But I didn't want to lose Bokeh's ability to render nice axes for my data set ranges, which vary significantly.

So I ended up with a key-frame approach using Pillow, which feels like trying to paint with a chisel, but it got me there.

I need a transform to get my glyph renders from data space into screen space, so I need to know the axis bounds in both screen and data space, noting that the screen-space bounds shift depending on the rendering of the axis labels etc.

I know the data space because I set it, but I need to derive the screen-space bounds.
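Once both sets of bounds are known, the transform is just two independent linear maps, with the y-axis flipped because screen coordinates grow downward. A minimal sketch, where all the bound values are made-up examples:

```python
def make_data_to_screen(data_x, data_y, screen_x, screen_y):
    """Build a data->screen transform from axis bounds in both spaces.

    data_x, data_y: (min, max) bounds in data space.
    screen_x: (left, right) pixel bounds of the plot frame.
    screen_y: (top, bottom) pixel bounds; y is flipped because screen
    coordinates grow downward while data coordinates grow upward.
    """
    sx = (screen_x[1] - screen_x[0]) / (data_x[1] - data_x[0])
    sy = (screen_y[1] - screen_y[0]) / (data_y[1] - data_y[0])

    def to_screen(x, y):
        px = screen_x[0] + (x - data_x[0]) * sx
        py = screen_y[1] - (y - data_y[0]) * sy  # flip y
        return px, py

    return to_screen

# Example with assumed bounds: data 0..10 mapping onto pixels 80..680 / 40..440.
t = make_data_to_screen((0, 10), (0, 10), (80, 680), (40, 440))
print(t(0, 0))    # (80.0, 440.0): data origin lands at the frame's bottom-left
print(t(10, 10))  # (680.0, 40.0): data max lands at the frame's top-right
```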

I render a key frame with Bokeh that has the axes but no internal axis ticks. Once it is exported to a PNG and loaded into a Pillow image, I "sniff" the pixel colors from inside the bounding box of the graphable space until I hit the axis edges and the outer light-grey bounding box.

This gives me the screen-space bounds (not absolutely accurate, but good enough; there appears to be a little internal whitespace between the axes and the screen space) and lets me implement a transform function.
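The sniffing step can be sketched without Pillow at all; here the "image" is a plain 2D grid of RGB tuples, and we walk outward from the center until the pixel color stops matching the plot background. The colors and sizes are assumptions for illustration; a real frame would supply pixels via something like Image.open(...).getpixel(...):

```python
BACKGROUND = (255, 255, 255)  # assumed plot-background color

def sniff_frame_bounds(pixels, background=BACKGROUND):
    """Walk out from the image center until a non-background pixel is hit
    on each side; returns (left, right, top, bottom) in pixel coords."""
    height, width = len(pixels), len(pixels[0])
    cx, cy = width // 2, height // 2

    def walk(dx, dy):
        x, y = cx, cy
        while (0 <= x + dx < width and 0 <= y + dy < height
               and pixels[y + dy][x + dx] == background):
            x, y = x + dx, y + dy
        return x, y

    left, _ = walk(-1, 0)
    right, _ = walk(1, 0)
    _, top = walk(0, -1)
    _, bottom = walk(0, 1)
    return left, right, top, bottom

# Synthetic 9x9 frame: grey border, white interior from (2,2) to (6,6).
grey = (200, 200, 200)
img = [[grey] * 9 for _ in range(9)]
for y in range(2, 7):
    for x in range(2, 7):
        img[y][x] = BACKGROUND
print(sniff_frame_bounds(img))  # (2, 6, 2, 6)
```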

I can then repeatedly reuse the cached key frame (actually another instance, with a complex, data-space-accurate background image applied) and render my glyphs (filled polygons) onto it with Pillow, saving natively out to PNG.

It is very effective, though Pillow's polygon fill has no anti-aliasing, so I use some sub-sampling tricks there.
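The sub-sampling trick amounts to rendering at an integer multiple of the target size and box-averaging blocks back down, so hard polygon edges come out as intermediate grey values. A stdlib-only sketch of the downscale step (the factor of 4 is an assumption; Pillow users could get a similar effect by drawing at scale and calling Image.resize with a smoothing filter):

```python
def box_downscale(mask, factor):
    """Average each factor x factor block of a high-res 0/255 mask down to
    one anti-aliased output pixel."""
    h, w = len(mask), len(mask[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [mask[y + j][x + i]
                     for j in range(factor) for i in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

# A hard vertical edge rendered at 4x resolution: the boundary cuts through
# the middle of one target pixel, which should come out as a mid grey.
hi = [[255] * 6 + [0] * 10 for _ in range(8)]  # 16 wide, 8 tall at 4x
lo = box_downscale(hi, 4)
print(lo[0])  # [255, 127, 0, 0]: the straddled pixel becomes a mid grey
```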

When all is said and done, using a new data set as a baseline that my Bokeh-only pipeline took 330 seconds to process (it's a coincidence that this matches the first data set's time before the first-pass optimizations; this one is much bigger), it now takes 33 seconds to generate my video animation.

That 10x performance improvement basically means I can stop caring about back-end processing power for the foreseeable future.

I know Bokeh is probably not the right tool to use in the first place for the axis generation, and Pillow is probably a caveman library, but in the end it's an impressively fast, working solution.
