Bokeh is slower at each iteration

Hi,

I am using a Jupyter notebook to make a Bokeh map.

Each time I re-run my cell, the computation takes longer.

Here is the idea of my map: it is built in four steps:

```python
total_times = []
import time

p = figure()
n_row = 100000

output_file("france_zdc_et_unites.html")

d = time.time()

glyph = p.patches(source=ColumnDataSource(france_hors_zdc[:n_row]), xs="lat", ys="lng", fill_color="white", line_color="white")
p.add_tools(HoverTool(renderers=[glyph], tooltips=[("cp", "@cp")], name="hors_zdc"))

source = ColumnDataSource(nb_unite_by_cp[['lat', 'lng', 'cp_y', 'unite', 'unites', 'color']][:n_row])
glyph = p.patches(source=source, xs="lat", ys="lng", fill_color="color")
p.add_tools(HoverTool(renderers=[glyph], tooltips=[("cp", "@cp_y"), ("nb unite", "@unite"), ("liste unites", "@unites")], name="zdc"))

source = ColumnDataSource(unite_hors_zdc[[u'unit', u'pool', u'X_lambert93', u'Y_Lambert93']][:n_row])
glyph = p.circle(source=source, x=u'X_lambert93', y=u'Y_Lambert93', color="red")
p.add_tools(HoverTool(renderers=[glyph], tooltips=[("pool", "@pool"), ("unite", "@unit")], name="pool_units"))

source = ColumnDataSource(unite_dans_zdc[[u'unit', u'pool', u'X_lambert93', u'Y_Lambert93']][:n_row])
glyph = p.circle(source=source, x=u'X_lambert93', y=u'Y_Lambert93', color="orange")
p.add_tools(HoverTool(renderers=[glyph], tooltips=[("pool", "@pool"), ("unite", "@unit")], name="pool_units"))

show(p)

d5 = time.time()
total_times.append(d5 - d)
print total_times
```

The total time gets bigger at each run: the first iteration took about 10 s, and after 3 or 4 runs I get: [29.3, 35.6, 42.9].

The size of the generated HTML file also keeps growing: [26.7 MB, 33.4 MB, 40.1 MB].

I tried a `del p`; it didn't change anything.

My activity monitor also tells me the Python process gets bigger at each iteration.

Has anyone else had the same issue? Is it a known issue? Any idea for a solution?

Thx,

romain.


···

On Monday, July 11, 2016 at 22:46:45 UTC+2, Romain Jouin wrote:


I thought creating the Bokeh plot in a subprocess using multiprocessing could be an idea, but I am new to this module and still have to try (and mainly debug) it.
Here is what I tried, but it fails:

(1) Creating a mapping function

```python
def map_bokeh(inputs):
    print inputs

    import sys
    import pandas as pd
    import numpy as np
    import shapefile
    import bokeh
    from bokeh.io import output_file, output_notebook
    from bokeh.models import HoverTool
    from bokeh.plotting import figure, ColumnDataSource, show
    from sortedcontainers import SortedSet
    import matplotlib.pylab

    france_hors_zdc = inputs[0]
    nb_unite_by_cp = inputs[1]
    unite_hors_zdc = inputs[2]
    unite_dans_zdc = inputs[3]

    import time
    total_times = []
    p = figure()
    n_row = 100000

    d = time.time()
    output_file("france_zdc_et_unites_%s.html" % d)

    source = ColumnDataSource(france_hors_zdc[:n_row])
    glyph = p.patches(source=source, xs="lat", ys="lng", fill_color="white", line_color="white")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("cp", "@cp")], name="hors_zdc"))

    source = ColumnDataSource(nb_unite_by_cp[['lat', 'lng', 'cp_y', 'unite', 'unites', 'color']][:n_row])
    glyph = p.patches(source=source, xs="lat", ys="lng", fill_color="color")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("cp", "@cp_y"), ("nb unite", "@unite"), ("liste unites", "@unites")], name="zdc"))

    source = ColumnDataSource(unite_hors_zdc[[u'unit', u'pool', u'X_lambert93', u'Y_Lambert93']][:n_row])
    glyph = p.circle(source=source, x=u'X_lambert93', y=u'Y_Lambert93', color="red")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("pool", "@pool"), ("unite", "@unit")], name="pool_units"))

    source = ColumnDataSource(unite_dans_zdc[[u'unit', u'pool', u'X_lambert93', u'Y_Lambert93']][:n_row])
    glyph = p.circle(source=source, x=u'X_lambert93', y=u'Y_Lambert93', color="orange")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("pool", "@pool"), ("unite", "@unit")], name="pool_units"))

    show(p)

    d5 = time.time()
    print "total time = ", d5 - d
    total_times.append(d5 - d)
    print total_times
```

(2) Calling multiprocessing and mapping the Bokeh creation to one process (hoping the stack will be flushed by itself when the process dies):

```python
from multiprocessing import Pool

pool = Pool(1)
inputs = [[france_hors_zdc, nb_unite_by_cp, unite_hors_zdc, unite_dans_zdc]]
pool.map(map_bokeh, inputs)
```

But it never gives back any output.

I am not even sure the kernel can be interrupted (I can't).

Any insight?


I am now trying to create a figure in the subprocess and to get it back to the main program, like this:

```python
def map_bokeh(inputs):
    # Function called by the subprocess (make a bokeh plot and (try to) return it)
    import sys
    import pandas as pd
    import numpy as np
    import shapefile
    import bokeh
    from bokeh.io import output_notebook
    from bokeh.models import HoverTool
    from bokeh.plotting import figure, ColumnDataSource, show

    france_hors_zdc = inputs[0]
    nb_unite_by_cp = inputs[1]
    unite_hors_zdc = inputs[2]
    unite_dans_zdc = inputs[3]

    import time
    p = figure()
    n_row = 100000

    source = ColumnDataSource(france_hors_zdc[:n_row])
    glyph = p.patches(source=source, xs="lat", ys="lng", fill_color="white", line_color="white")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("cp", "@cp")], name="hors_zdc"))

    return p
```

Then I call it:

```python
from multiprocessing import Pool

pool = Pool(1)
inputs = [[france_hors_zdc, nb_unite_by_cp, unite_hors_zdc, unite_dans_zdc]]
pool.map(map_bokeh, inputs)
```

and I get an error:



```
---------------------------------------------------------------------------
MaybeEncodingError                        Traceback (most recent call last)
<ipython-input-30-13ced425a2b5> in <module>()
      1 inputs = [[france_hors_zdc, nb_unite_by_cp, unite_hors_zdc, unite_dans_zdc]]
----> 2 pool.map(map_bokeh, inputs)

/Users/romain/anaconda/lib/python2.7/multiprocessing/pool.pyc in map(self, func, iterable, chunksize)
    249         '''
    250         assert self._state == RUN
--> 251         return self.map_async(func, iterable, chunksize).get()
    252
    253     def imap(self, func, iterable, chunksize=1):

/Users/romain/anaconda/lib/python2.7/multiprocessing/pool.pyc in get(self, timeout)
    565             return self._value
    566         else:
--> 567             raise self._value
    568
    569     def _set(self, i, obj):

MaybeEncodingError: Error sending result: '[<bokeh.plotting.figure.Figure object at 0x109f53e90>]'. Reason: 'PicklingError("Can't pickle <class 'bokeh.core.enums.Enumeration'>: it's not the same object as bokeh.core.enums.Enumeration",)'
```


Which version of Bokeh are you using?

Sarah Bird
[email protected]

···


Bokeh 0.12.0.
I am on a Mac (latest OS).
Python 2.7 (from Anaconda).

I've noticed the same issue: Bokeh performance decreases and, more significantly, the HTML pages grow in size.

Bokeh 0.11.1 and 0.12.0 have the same issue.

I have a bokeh python script which generates a single HTML page output.

If I run this script in Spyder, it works fast and produces the output. If, in the same instance of spyder, I make modifications and re-run the script, the file gets bigger. This continues to get worse unless I close and restart Spyder, wherein the HTML files go back to their correct size.

Are you also running in Spyder or another IDE? Or just executing the script from the command line?


That is exactly the same issue.
I am in a Jupyter notebook, so I tend to run and re-run tens of times. My Python server went up to 1 GB of memory and plots took up to 3 min to display, awful. I have to restart the kernel.

I'm experiencing the same issue using a Jupyter notebook and output_notebook(). The notebook size and execution time grow on each iteration of full-notebook execution. I think it might be related to this PR: https://github.com/bokeh/bokeh/pull/4563


Hi,

Bokeh is a large and complicated cross-language, cross-runtime library, and the same can be said of the Jupyter notebook. Combining them (well, in all cases) is a challenging technical proposition and presents an enormous test surface. There are also intrinsic trade-offs that will almost certainly mean not everyone can be 100% happy simultaneously.

Confession time: I am not much of a notebook user, at all. I almost never "run all" to begin with, and running "run all" over and over again is simply not a use case that had even occurred to me. Duly noted.

Constructive feedback is always appreciated. But rather than simply offering dismissive statements like "awful", may I suggest that instead you offer your expertise as a heavy notebook user to help improve Bokeh for everyone? The true value of Open Source is that it can be a collaboration.

Regarding this issue, my suspicion is that it is related to the accumulation of notebook comms objects. The notebook comms feature is great, and lets any bokeh plot in any cell be modified and updated in place with a simple `push_notebook` call. This affords things like streaming data and updating plots with Jupyter interactors. But it also evidently does not interact well with this particular "re-run all many times" use case. There are a variety of possible avenues for improvement that I can imagine, ranging from documentation changes, to making the notebook_comms feature be off by default, to investigating technical improvements in implementation. But they will take time and effort to investigate.

In the meantime, adding a call to `bokeh.io.reset_output` at the top of your notebook should help prevent the memory growth in this situation.
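The failure mode described above can be sketched in pure Python: a module-level "current document" that keeps accumulating across re-runs unless it is explicitly reset. The names here (`_current_doc`, `make_plot`, the local `reset_output`) are illustrative stand-ins, not Bokeh's actual internals:

```python
# Illustrative stand-in for module-level plotting state: an implicit
# "current document" that every re-run of a cell appends new models to.
_current_doc = []

def make_plot(n_glyphs):
    # Each "re-run" adds its models to the same implicit document; the
    # returned size stands in for the amount of HTML that gets serialized.
    _current_doc.extend(range(n_glyphs))
    return len(_current_doc)

def reset_output():
    # Clearing the implicit state makes each run start from scratch.
    del _current_doc[:]

# Without resetting, every run serializes more than the last:
sizes_without_reset = [make_plot(3) for _ in range(3)]

# Resetting at the top of the "cell" keeps runs independent:
sizes_with_reset = []
for _ in range(3):
    reset_output()
    sizes_with_reset.append(make_plot(3))

print(sizes_without_reset)  # grows: [3, 6, 9]
print(sizes_with_reset)     # stable: [3, 3, 3]
```

In the real notebook the analogue is simply calling `bokeh.io.reset_output()` (and then `output_file(...)` or `output_notebook()` again) at the top of the cell before rebuilding the figure.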

Regards,

Bryan


I am sorry for that "awf..." word, which I didn't expect to be read this way; I am more than happy with Bokeh and grateful for all the work done on this library.

Giving feedback with some code examples was for me a first step in a collaborative approach to the library.

I would be glad to help with Bokeh development, but I am not sure I have the knowledge and time to do so. Getting used to it from a user's point of view seems to me the first step before helping to develop features.

For now I am just trying to get Bokeh to work on a project I have to finish quite soon... I will try your reset idea.

Thanks and best regards.
Romain.

Victory!

@bryan-VandeVen: adding bokeh.io.reset_output() at the top of my IPython cell did it!! With incredible performance improvements:

  1. process time no longer grows at each iteration

  2. 2.5 times faster => from 10 sec down to 4.5 seconds

  3. the output HTML is lighter => from 20 MB down to 6 MB!!

```python
import time

p = figure()
n_row = 100000
d = time.time()

bokeh.io.reset_output()
output_file("couverture_des_unites_par_typologie_%s.html" % d)

if True:
    source = ColumnDataSource(france_hors_zdc[:n_row])
    glyph = p.patches(source=source, xs="lat", ys="lng", fill_color="white", line_color="white")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("cp", "@cp")], name="hors_zdc"))

    source = ColumnDataSource(nb_unite_by_cp[['lat', 'lng', 'cp_y', 'unite', 'unites', 'color']][:n_row])
    glyph = p.patches(source=source, xs="lat", ys="lng", fill_color="color")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("cp", "@cp_y"), ("nb unite", "@unite"), ("liste unites", "@unites")], name="zdc"))

    source = ColumnDataSource(unite_hors_zdc[[u'unit', u'pool', u'X_lambert93', u'Y_Lambert93']][:n_row])
    glyph = p.circle(source=source, x=u'X_lambert93', y=u'Y_Lambert93', color="red", legend=" unités hors zdc")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("pool", "@pool"), ("unite", "@unit")], name="pool_units"))

    source = ColumnDataSource(unite_dans_zdc[[u'unit', u'pool', u'X_lambert93', u'Y_Lambert93']][:n_row])
    glyph = p.circle(source=source, x=u'X_lambert93', y=u'Y_Lambert93', color="orange", legend=" unités dans zdc")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("pool", "@pool"), ("unite", "@unit")], name="pool_units"))

show(p)

d5 = time.time()
total_time = d5 - d
print "total time = ", total_time
times.append(total_time)
```

With output:

```
total time =  4.08737492561
[4, 4.289460897445679, 4.141924858093262, 4.087374925613403]
```

Thanks, perfect :=))


Romain,

Thanks very much for the update; I am glad to hear that this is working for you. If you have thoughts on common workflows, your input could be valuable. One option would be to make asking for notebook comms explicit, either with a flag to `output_notebook`, or possibly at a more fine-grained level, to `show`.

Alternatively, if it's more valuable to have notebook comms "always on" then perhaps there is a way to better document and socialize this technique with `reset_output` more widely.

Either of those paths could be taken fairly quickly. I'd like to look into technical avenues as well, but that will require more time and effort to dedicate for investigating what if anything could be improved.

Thanks,

Bryan


Paul,

Can you give some more feedback please?

That PR (https://github.com/bokeh/bokeh/pull/4563) was explicitly designed to improve notebook rendering speeds, and in all my tests it made things better; obviously the caveat there is "my tests", which may well have missed something.

If you can provide other test cases that would be extremely helpful.

Best,

Bird

Sarah Bird
Developer, Bokeh
Continuum Analytics

Thanks Romain,

Is it possible to provide the data you're using (perhaps just email it to me directly if it's not public data) so that we can test your use case and find out what's going wrong?

Sincerely,

Sarah Bird


I think I have an idea.

You're using output_file from a notebook. We definitely didn't handle that use case when doing the work to improve notebook rendering - it's not a small thing to do.

I'm almost certain that if you just ran that Python script over and over again from the command line, you would not see the increasing HTML size.

Can you please confirm that if you use output_notebook, you don't see a speed decrease?

I would suggest opening a GitHub issue about this.

Thanks for sharing the code.

Best,

Bird
···

On 7/12/16 8:58 AM, Romain Jouin wrote:

Victory !

      @bryan-VandeVen : adding the [ bokeh.io.reset_output() ] at

the top of my ipython cell made it !! With incredible
performance improvements :

  1. linear process time linear
  1. 2.5 time faster => From 10 sec down to 4.5 secondes
      3) the output html is lighter => from 20 M down to 6 Mb

!!

```python
import time
import bokeh.io
from bokeh.plotting import figure
from bokeh.io import output_file, show
from bokeh.models import ColumnDataSource, HoverTool

p = figure()

n_row = 100000

d = time.time()

bokeh.io.reset_output()

output_file("couverture_des_unites_par_typologie_%s.html" % d)

if True:
    source = ColumnDataSource(france_hors_zdc[:n_row])
    glyph = p.patches(source=source, xs="lat", ys="lng", fill_color="white", line_color="white")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("cp", "@cp")], name="hors_zdc"))

    source = ColumnDataSource(nb_unite_by_cp[['lat', 'lng', 'cp_y', 'unite', 'unites', 'color']][:n_row])
    glyph = p.patches(source=source, xs="lat", ys="lng", fill_color="color")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("cp", "@cp_y"), ("nb unite", "@unite"), ("liste unites", "@unites")], name="zdc"))

    source = ColumnDataSource(unite_hors_zdc[[u'unit', u'pool', u'X_lambert93', u'Y_Lambert93']][:n_row])
    glyph = p.circle(source=source, x=u'X_lambert93', y=u'Y_Lambert93', color="red", legend=" unités hors zdc")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("pool", "@pool"), ("unite", "@unit")], name="pool_units"))

    source = ColumnDataSource(unite_dans_zdc[[u'unit', u'pool', u'X_lambert93', u'Y_Lambert93']][:n_row])
    glyph = p.circle(source=source, x=u'X_lambert93', y=u'Y_Lambert93', color="orange", legend=" unités dans zdc")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("pool", "@pool"), ("unite", "@unit")], name="pool_units"))

show(p)

d5 = time.time()

total_time = d5 - d

print "total time = ", total_time

times.append(total_time)  # `times` is defined in an earlier cell
```

With output:

```
total time =  4.08737492561
[4, 4.289460897445679, 4.141924858093262, 4.087374925613403]
```
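The timing pattern used above - a module-level list that survives cell re-runs, appended to after each run - can be isolated into a small helper. This is a pure-Python sketch; the names and the dummy workload are illustrative:

```python
import time

times = []  # persists across cell re-runs within one kernel session

def timed(fn):
    """Run fn, record the elapsed wall-clock time, and return it."""
    start = time.time()
    fn()
    elapsed = time.time() - start
    times.append(elapsed)
    return elapsed

# Stand-in workload for "build and show the map"; run it a few times.
for _ in range(3):
    timed(lambda: sum(x * x for x in range(100000)))

# If per-run cost is constant, the recorded times stay roughly flat;
# steadily growing entries are the symptom discussed in this thread.
print(times)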

      Thanks, perfect :=))
On Tuesday, 12 July 2016 at 17:21:35 UTC+2, Romain Jouin wrote:

I am sorry for this awf… word, which I didn’t expect to be read this way. I am more than happy with Bokeh, and grateful for all the work done on this library. Giving feedback with a code example was, for me, a first step in a collaborative approach to the library.

I would be glad to help with Bokeh development, but I am not sure I have the knowledge and time to do so. Getting used to it from a user’s point of view seems to me the first step before helping to develop features.

For now I am just trying to get Bokeh to work on a project I have to finish quite soon… I will try your reset idea.

Thanks and best regards.
Romain.

You received this message because you are subscribed to the Google Groups “Bokeh Discussion - Public” group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
To view this discussion on the web visit https://groups.google.com/a/continuum.io/d/msgid/bokeh/62433710-a348-43b1-80aa-4da3db7ebaa1%40continuum.io.
For more options, visit https://groups.google.com/a/continuum.io/d/optout.


Sarah Bird
Developer, Bokeh


Hi Sarah / Bryan,

First of all, thank you for all your hard work making such a fantastic product! I’m learning something new everyday about Bokeh and am absolutely awed by the capability.

I’m still on 0.12 and it looks like this PR ( https://github.com/bokeh/bokeh/pull/4563 ) is still on master and hasn’t yet made it into a release. I’ll try pulling the latest master to see whether it resolves the issue I’ve been experiencing.

My use case is actually exactly what Bryan described above: “Run All” on a notebook with many plots (50ish?) using output_notebook().

My symptoms are a steadily increasing memory footprint (Chrome 51, Windows 7, Anaconda 4.1) with each “Run All” in a notebook. A kernel restart does not affect the memory footprint, which leads me to believe it’s something on the browser side. Closing and reopening the notebook within the same kernel instance does bring the memory footprint back to normal.

Unfortunately, the reset_output() suggestion didn’t seem to have an impact.

Thanks again!

Paul

···

On Tue, Jul 12, 2016 at 12:59 PM, Sarah Bird - Continuum [email protected] wrote:


Changing to output_notebook() did improve things, but it is still not as good as reset_output:

```python
import time
import bokeh.io
from bokeh.plotting import figure
from bokeh.io import output_notebook, show
from bokeh.models import ColumnDataSource, HoverTool

p = figure()

n_row = 100000

d = time.time()

bokeh.io.reset_output()

# output_file("couverture_des_unites_par_typologie_%s.html" % d)
output_notebook()

if True:
    source = ColumnDataSource(france_hors_zdc[:n_row])
    glyph = p.patches(source=source, xs="lat", ys="lng", fill_color="white", line_color="white")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("cp", "@cp")], name="hors_zdc"))

    source = ColumnDataSource(nb_unite_by_cp[['lat', 'lng', 'cp_y', 'unite', 'unites', 'color']][:n_row])
    glyph = p.patches(source=source, xs="lat", ys="lng", fill_color="color")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("cp", "@cp_y"), ("nb unite", "@unite"), ("liste unites", "@unites")], name="zdc"))

    source = ColumnDataSource(unite_hors_zdc[[u'unit', u'pool', u'X_lambert93', u'Y_Lambert93']][:n_row])
    glyph = p.circle(source=source, x=u'X_lambert93', y=u'Y_Lambert93', color="red", legend=" unités hors zdc")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("pool", "@pool"), ("unite", "@unit")], name="pool_units"))

    source = ColumnDataSource(unite_dans_zdc[[u'unit', u'pool', u'X_lambert93', u'Y_Lambert93']][:n_row])
    glyph = p.circle(source=source, x=u'X_lambert93', y=u'Y_Lambert93', color="orange", legend=" unités dans zdc")
    p.add_tools(HoverTool(renderers=[glyph], tooltips=[("pool", "@pool"), ("unite", "@unit")], name="pool_units"))

show(p)

d5 = time.time()

total_time = d5 - d

print "total time = ", total_time

times.append(total_time)  # `times` is defined in an earlier cell

print times
```

We lose 2-3 seconds at each iteration:

```
total time =  11.9990158081
[6.672179937362671, 9.095952987670898, 11.999015808105469]
```




···

On Tuesday, 12 July 2016 at 19:09:36 UTC+2, Paul Morrison wrote:


Hi Bryan,

Thanks for that - the call to bokeh.io.reset_output at the top of my scripts solves this problem when running in Spyder with IPython 5. The files are now the correct size every time.

Cheers,

- Will
···

On Tuesday, 12 July 2016 16:05:59 UTC+1, Bryan Van de Ven wrote:

Hi,

Bokeh is a large and complicated cross-language, cross-runtime library, and the same can be said of the Jupyter notebook. Combining them (well, in all cases) is a challenging technical proposition and presents an enormous test surface. There are also intrinsic trade-offs that almost certainly mean not everyone can be 100% happy simultaneously.

Confession time: I am not much of a notebook user, at all. I almost never “run all” to begin with, and running “run all” over and over again is simply not a use-case that even occurred to me. Duly noted.

Constructive feedback is always appreciated. But rather than simply offering dismissive statements like “awful”, may I suggest that instead you offer your expertise as a heavy notebook user to help improve Bokeh for everyone? The true value of Open Source is that it can be a collaboration.

Regarding this issue, my suspicion is that it is related to the accumulation of notebook comms objects. The notebook comms feature is great, and lets any bokeh plot in any cell be modified and updated in place with a simple push_notebook call. This affords things like streaming data and updating plots with Jupyter interactors. But it also evidently does not interact well with this particular “re-run all many times” use case. There are a variety of possible avenues for improvement that I can imagine, ranging from documentation changes, to making the notebook_comms feature be off by default, to investigating technical improvements in implementation. But they will take time and effort to investigate.

In the meantime, adding a call to bokeh.io.reset_output at the top of your notebook should help prevent the memory growth in this situation.
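The suggested pattern, as a minimal runnable sketch (file output is used here so the script also runs outside a notebook; the filename and dummy data are illustrative):

```python
import os
import tempfile

import bokeh.io
from bokeh.plotting import figure

# Clear any output state (current document, file/notebook hooks, comms)
# accumulated by previous runs of this cell or script.
bokeh.io.reset_output()

# Choose the output destination fresh each run.
path = os.path.join(tempfile.mkdtemp(), "plot.html")
bokeh.io.output_file(path)

p = figure()
p.line([0, 1, 2], [0, 1, 4])

# save() writes the HTML without opening a browser tab.
bokeh.io.save(p)

print(os.path.getsize(path) > 0)
```

In a notebook one would typically call reset_output() followed by output_notebook() at the top of the cell instead; the key point is that the reset happens before any output mode is chosen.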

Regards,

Bryan

On Jul 12, 2016, at 3:57 AM, Romain Jouin [email protected] wrote:

That is exactly the same issue
I am on a Jupyter notebook, so I tend to run and rerun tens of times. My Python server went up to 1 GB of memory, plots took up to 3 min to display - awful. I have to restart the kernel.



I'm glad to hear it. I think we could expose this in the docs better. Do folks have any suggestions for the best place(s) to note this use case and remedy, until/if there is some way to make it better by "default"?

Thanks,

Bryan

···

On Jul 13, 2016, at 10:42 AM, Will Kew <[email protected]> wrote:

