question about bokeh-server performance

My team has been developing an Angular app that allows access to a bunch of Bokeh apps. The Angular app lets users select apps, which get represented as cards on their dashboard. Clicking on a card opens an iframe, which points to the corresponding Bokeh app. Based on a couple threads in this list as well as my understanding of the examples such as stock_app.py, I import all apps from their respective files into a single file called run.py. For example, our “scholar attrition” app looks like this in run.py:

```python
@bokeh_app.route("/apps/scholar_attrition/")
@object_page("scholar_attrition")
def make_scholar_attrition():
    app = ScholarAttrition.create()
    return app
```

I have six apps in there right now. I then run it on bokeh-server (on Debian GNU/Linux 7.8) this way:

```shell
bokeh-server -m --ip xxx.xxx.xxx.xxx --port 5006 --backend redis --start-redis --script run.py
```

That serves all of my apps at their respective paths (so the scholar_attrition app is at http://xxx.xxx.xxx.xxx:5006/apps/scholar_attrition).

Problem: The apps worked fine until we did a demo today to about 30 people. I showed them the app, they all promptly logged on, and an app that normally takes less than 2 seconds to load took 45 seconds.

We envision a situation in a few months where we’ll have a couple hundred people accessing the app at the same time, with about 2000 unique users. I would very much like to have the performance issue resolved by then, but I’m not sure how to proceed. I know that bokeh-server is still very much under active development. Is this expected behavior at this point? Can anyone suggest a workaround? We have a small team with primarily analytic skills - we’re learning the web development stuff as we go along, so I’m not even sure which questions to ask. Could the number of apps running on the same bokeh-server instance cause a bottleneck? The number of users? Could this be resolved by spinning up a new bokeh-server instance for each user (which would then necessitate opening more ports)? I know you’re working on a big refactor of bokeh-server, but any advice you could give me in the interim would be really appreciated.

Thanks,

Schaun

You should look into

https://github.com/bokeh/bokeh/blob/0c78cf646ec95d3c003c714c35a388db4b4ad834/examples/deploy/README.md

to run bokeh-server on multiple cores.
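For anyone reading along, the setup in that README amounts to three cooperating processes: a shared Redis instance, a single forwarder, and gunicorn spawning several bokeh-server workers. A rough sketch of the shape of it (the script name, module path, ports, and worker count below are illustrative placeholders; follow the README for the exact commands):

```shell
# 1. One Redis instance shared by all workers (document state lives here).
redis-server --port 6379 &

# 2. One forwarder process, so an update made in any worker is
#    rebroadcast over websockets to clients connected to the others.
python forwarder.py &   # placeholder name; see the README

# 3. gunicorn runs N bokeh-server workers behind a single port
#    (-w 4 means four worker processes, roughly one per core).
gunicorn -w 4 -b 0.0.0.0:5006 "run:app"   # placeholder module:app
```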

···

On Fri, Jul 10, 2015 at 5:18 PM, Schaun Wheeler [email protected] wrote:


You received this message because you are subscribed to the Google Groups “Bokeh Discussion - Public” group.

To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].

To post to this group, send email to [email protected].

To view this discussion on the web visit https://groups.google.com/a/continuum.io/d/msgid/bokeh/3e09b30f-6067-4f2c-90e6-cfee1d5d463c%40continuum.io.

For more options, visit https://groups.google.com/a/continuum.io/d/optout.

Thanks for your very quick response, Hugo! Forgive my unfamiliarity with gunicorn, tornado, redis, and web development in general: am I correct in understanding that, if I run the three processes outlined in that README, I can then run bokeh-server as I normally do (the way I showed in my original post) and it will run on multiple cores?

···

On Friday, July 10, 2015 at 5:36:47 PM UTC-4, Hugo Shi wrote:


Can anyone confirm that I’ve understood Hugo’s instructions correctly? Run the three commands and then run bokeh-server as normal, feeding the script in through the --script flag? Or do I have to merge my “run.py” file with the config.py file and feed it into the second command? Or something else?

···

On Friday, July 10, 2015 at 6:13:13 PM UTC-4, Schaun Wheeler wrote:


I don’t know about running on multiple cores, but I will say that, in the past, I had to set run_forwarder to True, not False, in my config.py file.

···

On Tue, Jul 14, 2015 at 2:21 PM, Schaun Wheeler [email protected] wrote:


gunicorn will run bokeh-server: its web server will spawn N instances of the bokeh server and forward requests to them.

In this configuration you don’t want run_forwarder=True.

If you have one bokeh process, an update comes in and a message goes out over websockets about the change.

If you have N bokeh processes, an update that comes in needs to go out via websockets to all of the other N processes.

The forwarder broadcasts the message from one bokeh process so that the other N can see it.

run_forwarder=True is the config for the single-process case - just run the forwarder in the same process, because we don’t need to broadcast to anyone else.

In a multi-process configuration, however, that is incorrect: you only want one forwarder for all of your bokeh processes.
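In other words, the forwarder is a pub/sub fan-out: one broker receives each update and rebroadcasts it to every other worker process. A toy in-process sketch of that pattern (the class and method names here are made up for illustration; the real implementation goes through Redis and websockets across separate processes):

```python
class Forwarder:
    """Toy stand-in for the single forwarder process: every update
    published by one worker is rebroadcast to all the other workers."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, worker):
        self.subscribers.append(worker)

    def publish(self, sender, message):
        # Fan the message out to every worker except the one that sent it.
        for worker in self.subscribers:
            if worker is not sender:
                worker.receive(message)


class Worker:
    """Stand-in for one bokeh-server process."""

    def __init__(self, name, forwarder):
        self.name = name
        self.seen = []
        forwarder.subscribe(self)

    def receive(self, message):
        self.seen.append(message)


forwarder = Forwarder()
workers = [Worker(f"worker-{i}", forwarder) for i in range(3)]

# An update arrives on worker-0; the forwarder lets the other two see it.
forwarder.publish(workers[0], "plot updated")
print([w.seen for w in workers])  # → [[], ['plot updated'], ['plot updated']]
```

This also shows why you only want one forwarder in the multi-process setup: a second broker would deliver every update twice.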

···

On Tue, Jul 14, 2015 at 11:19 AM, Sarah Bird [email protected] wrote:


That is super helpful - thank you, Hugo! I had been wondering :smiley:

···

On Tue, Jul 14, 2015 at 5:24 PM, Hugo Shi [email protected] wrote:


I am very interested in this functionality as well. I’m curious what syntax you would use to pass the script (for example stock_app.py), and how you might then embed the app in a Flask webapp as shown in the stock_app example.
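On the embedding side, the approach described at the top of this thread is just an iframe pointing at the app’s bokeh-server URL, which any web framework’s template can emit. A minimal helper sketch (the function name is invented here, and the host, port, and app path are placeholders for whatever your bokeh-server instance actually uses):

```python
def bokeh_app_iframe(host, port, app_path, width=900, height=600):
    """Build an iframe tag pointing at a Bokeh app served by bokeh-server.

    host/port/app_path mirror how the server was launched, e.g.
    host="xxx.xxx.xxx.xxx", port=5006, app_path="/apps/scholar_attrition".
    """
    src = f"http://{host}:{port}{app_path}"
    return (f'<iframe src="{src}" width="{width}" height="{height}" '
            f'frameborder="0"></iframe>')


html = bokeh_app_iframe("localhost", 5006, "/apps/scholar_attrition")
print(html)
```

In a Flask view you would return this string (marked safe) from a template; the multi-process questions above are independent of how the frame is embedded.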

···

On Friday, July 10, 2015 at 11:36:47 AM UTC-10, Hugo Shi wrote:
