Memory Release Problems

Hi all,

I’m working on a small data dashboard site built on Bokeh, but I’m having a problem where, after a tab is closed and the session destroyed, local memory is not freed. I’m using a Bokeh server embedded in Flask, mostly put together from bits of the User Guide and various examples, but the same behaviour can be seen by just taking the sliders.py example and importing a big dataframe at the start.
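
For context, the embedded setup is roughly the standard pattern from the User Guide (a sketch; the route name and ports here are placeholders, not my actual code):

  # sketch of a Bokeh server embedded alongside Flask (placeholder names/ports)
  from bokeh.application import Application
  from bokeh.application.handlers.function import FunctionHandler
  from bokeh.plotting import figure
  from bokeh.server.server import Server

  def dashboard(doc):
      # the big dataframe load happens here, once per session
      p = figure(title="dashboard")
      p.line([1, 2, 3], [1, 2, 3])
      doc.add_root(p)

  app = Application(FunctionHandler(dashboard))
  # allow the Flask-served page (assumed on localhost:8000) to connect
  server = Server({'/dashboard': app}, allow_websocket_origin=["localhost:8000"])
  server.start()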

So more explicitly, I’m looking at the memory used by the ‘bokeh’ process in my system monitor while running:

  bokeh serve sliders.py

If I open a few tabs with the modified file that includes the big dataframe load (either from a pickle or from SQL), memory quickly shoots up to several hundred MB. Closing the tabs and waiting for the timeout does nothing to lower that number. If I change it to a directory-style app, include a server_lifecycle.py file, and have it print when it reaches on_session_destroyed, I can see that the session is destroyed but memory still isn’t released. Both in this form and when embedded in the Flask app, memory is only released by killing the Python process.
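
The lifecycle file itself is minimal, something along these lines:

  # server_lifecycle.py -- minimal sketch of the hook described above
  def on_session_destroyed(session_context):
      # the Bokeh server calls this once the unused-session timeout expires
      print("session destroyed:", session_context.id)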

Is this something I’ll have to work around with some sort of process control, or is there a way to force it to release memory?

In terms of usage, I don’t expect it to run that frequently, so I can regularly restart the process. But the problem as it stands is that multiple people could use it at the same time to compare results, jumping between pages, with those pages never clearing out when clicked away from, eventually leading to a crash.

bokeh: 0.12.14
python: 3.6.4
os: linux mint 18

Thanks very much,
Jack

And it looks like this is essentially issue 7468, oh well.

Hi,

Possibly, yes. Can you provide a complete script to reproduce what you are seeing? We are careful to ensure that modules created for sessions only have the expected number of referrers, and therefore should be deleted:

  https://github.com/bokeh/bokeh/blob/master/bokeh/document/document.py#L403-L440

But evidently references to things inside the modules can linger, perhaps via cycles; it will take some investigation.
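
As an illustration of the kind of bookkeeping involved (not the actual Bokeh code), you can watch the referrers on a throwaway module directly:

  import gc
  import sys
  import types

  # create a module object the way per-session code does, then inspect
  # who holds it (illustrative only, not Bokeh's implementation)
  mod = types.ModuleType("fake_session_module")
  print(sys.getrefcount(mod))        # includes the temporary argument reference
  print(len(gc.get_referrers(mod)))  # objects the gc knows refer to the module
  del mod                            # with no lingering referrers, it is freed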

Thanks,

Bryan

Hi Bryan,

I’ve attached a zip of a working example. Run it with:

  bokeh serve --unused-session-lifetime 2000 bokeh_memory_test

Essentially it’s an over-the-top simple example where I create a 50MB dataframe inside the sliders example and do nothing with it. I’ve included on_session_destroyed as well, so it will print when the session is actually destroyed. I had some problems getting memory-profiler to plot, so I was looking at the memory used by the ‘bokeh’ process instead. It steps up in roughly 50MB chunks and doesn’t drop back down when the session is destroyed.
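
In outline, main.py is just the sliders example plus a large unused load, something like:

  # main.py -- sketch of the reproducer (not the exact attached file):
  # a ~48MB DataFrame created at session start and never used
  import numpy as np
  import pandas as pd
  from bokeh.io import curdoc
  from bokeh.plotting import figure

  df = pd.DataFrame(np.random.random((1_000_000, 6)))  # 6M float64 values, ~48MB

  # the actual sliders/plot code goes here; a trivial stand-in:
  p = figure(title="sliders example")
  p.line([0, 1], [0, 1])
  curdoc().add_root(p)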

I also played around with the linked method, delete_modules, and followed the reference count. It shows the expected behaviour of the 3 stated references initially, and there does seem to be only the frame reference left at the end.

Thanks,
Jack

bokeh_memory_test.zip (1.65 KB)

Thanks, that's a helpful script. I am at a bit of a loss at the moment. I've made some modifications that help clean up what appeared to be lingering documents and models (at least if I force a gc collection). But with your code the RSS still seems to increase without decreasing, and, more confusing, gc.get_objects does show that all the large objects and data frames from the previous sessions *are* collected:

BEFORE (14 sessions opened from tab reloads)

  2018-03-05 14:51:51,637 [pid 49792] Memory usage: 713.000000 MB
  GARBAGE
  <class 'pandas.core.frame.DataFrame'> 14 672001456.0

AFTER (13 sessions destroyed by timeout):

  2018-03-05 14:52:21,610 [pid 49792] Memory usage: 712.000000 MB
  GARBAGE
  <class 'pandas.core.frame.DataFrame'> 1 48000104.0

There's only one DataFrame left (and no other large objects hanging around; I have truncated a larger output), but the RSS still stays at 712MB (reported by psutil). There is also no garbage list reported by the gc. I am not sure what's going on at this point, or what is holding on to that extra memory.
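
For reference, the numbers above come from instrumentation of roughly this shape (a sketch, not the exact script):

  import gc
  import os

  import pandas as pd
  import psutil

  def report_memory():
      # process RSS as the OS sees it
      rss = psutil.Process(os.getpid()).memory_info().rss
      print("Memory usage: %f MB" % (rss / 1e6))
      # DataFrames the gc can still reach, with their total byte size
      dfs = [o for o in gc.get_objects() if isinstance(o, pd.DataFrame)]
      total = float(sum(df.memory_usage(index=True).sum() for df in dfs))
      print("GARBAGE")
      print(pd.DataFrame, len(dfs), total)
      print(gc.garbage)  # uncollectable cycles would show up here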

Bryan

OK, this is interesting. If I swap out the DataFrames for very large lists, then the problem drastically improves (with my WIP changes). There seems to be a one-time increase of about ~40MB, and thereafter a <1MB leak per destroyed session.
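
(The swap was just replacing the DataFrame in the reproducer with something along these lines:)

  # roughly comparable memory held in plain Python list storage instead
  # of numpy-backed DataFrame buffers (size is illustrative)
  big = [0.0] * 6_000_000  # ~48MB of list pointer storage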

I wonder if this is exposing some usage issue or weird Bokeh interaction with pandas or numpy, given that they use the Python C API, which provides an avenue for memory allocations that the Python gc does not know about. I will post some of this information on the issue you linked, as well as make a WIP PR soon. Can we continue the discussion there? Also, if you have the ability to help test out some patches or PRs, that would be extremely valuable.
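
As an aside, one glibc-only way to test whether the stuck RSS is just freed memory being retained by the C allocator (a diagnostic assumption to check, not a fix):

  import ctypes

  # ask glibc to return free heap pages to the OS; if RSS drops after this,
  # the memory was already freed at the C level but retained by malloc
  libc = ctypes.CDLL("libc.so.6")
  libc.malloc_trim(0)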

Thanks,

Bryan

Weird. Not a problem, I’ll help how I can.

Jack

Great, here is a PR:

  Address leaking memory by bryevdv · Pull Request #7604: https://github.com/bokeh/bokeh/pull/7604

If you could do some testing of it with your own code and report back in the PR, it would be a great help.

Thanks,

Bryan
