16:00:32 <mlavalle> #startmeeting neutron_performance
16:00:32 <openstack> Meeting started Mon Aug 26 16:00:32 2019 UTC and is due to finish in 60 minutes.  The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:33 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:36 <openstack> The meeting name has been set to 'neutron_performance'
16:00:55 <njohnston> o/
16:01:10 <jrbalderrama> hi !
16:01:16 <mlavalle> hey, njohnston
16:01:53 <mlavalle> hey jrbalderrama. How's the heat treating you this summer?
16:02:39 <jrbalderrama> it is better now. Thanks for asking, here everybody is coming back from vacations.
16:03:44 <mlavalle> good to know ;-)
16:04:01 <mlavalle> are you going to Shanghai?
16:04:09 <njohnston> We're having highs around 75F/22C which is unseasonably mild for this time of year, it is lovely
16:04:58 <jrbalderrama> Nobody from Inria is going to Shanghai
16:05:15 <mlavalle> wow, that's nice! that's the typical summer weather in Mexico City. That's one thing I miss about my birth place, the mild weather
16:05:31 <mlavalle> we'll miss you
16:06:11 <mlavalle> ok let's get going...
16:06:29 <mlavalle> I know slaweq is off, and haleyb might still be off
16:06:58 <njohnston> haleyb is back I believe, but he may still be drowning in thousands of emails
16:07:22 <bcafarel> late hi o/
16:07:33 <haleyb> i'm back, like njohnston said, deep in email, but here now
16:08:09 <mlavalle> yeap.... I come from a generation that started professional life before email was used for business.... you could go off on vacation without worrying about messages piling up
16:08:51 <mlavalle> ok....
16:09:01 <mlavalle> #topic Updates
16:09:22 <mlavalle> does anybody (other than me) have an update?
16:10:22 <njohnston> no
16:11:01 <mlavalle> ok then I'll give my update. I think I had a productive couple of afternoons on this...
16:12:15 <mlavalle> as discussed with bcafarel last meeting, the problem with osprofiler reports is that the traces are very difficult to relate to the code
16:12:51 <mlavalle> so I sent a message to the ML about 3 or 4 weeks ago asking if it was possible to improve that....
16:13:01 <mlavalle> and I got no answer
16:13:35 <njohnston> not sure there is anyone working on osprofiler these days
16:14:14 <bcafarel> no answer either on direct email?
16:14:37 <mlavalle> so next step I sent an email directly to Andrey Kurilin, who has been helping us with Rally and osprofiler
16:14:57 <mlavalle> and I got no answer
16:15:13 <mlavalle> as a consequence I've been exploring other alternatives
16:15:24 <mlavalle> I am not saying we scrap osprofiler traces
16:16:02 <mlavalle> It's still a valid perspective but we need to complement it with other view(s)
16:16:47 <mlavalle> so last meeting I discussed with bcafarel this https://julien.danjou.info/guide-to-python-profiling-cprofile-concrete-case-carbonara/
16:17:21 <mlavalle> which was posted by Julien Danjou, a compatriot of bcafarel and jrbalderrama
16:18:11 <mlavalle> you can read it carefully later. The summary is that he proposes using cProfile together with KCacheGrind
16:19:02 <mlavalle> if you scroll down that blogpost you can see an example of the call graphs generated by KCacheGrind from cProfile data
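[For reference, the basic cProfile workflow the blog post builds on can be sketched with the standard library alone (this snippet is illustrative and is not taken from the blog post itself):]

```python
import cProfile
import io
import pstats

# Profile an arbitrary piece of code under measurement
pr = cProfile.Profile()
pr.enable()
total = sum(i * i for i in range(10000))
pr.disable()

# Print the five most expensive entries by cumulative time
out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

[The raw pstats data this produces is what tools like KCacheGrind can visualize once converted to callgrind format.]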
16:20:19 <njohnston> that's really nice
16:20:22 <mlavalle> I dug a little more and came across this: https://gist.github.com/ionelmc/5735205
16:21:26 <mlavalle> in that gist, a decorator is defined to automatically generate traces ready to be consumed by KCacheGrind
16:21:58 <mlavalle> so I adapted that code a bit and came up with this PoC: https://review.opendev.org/#/c/678438/
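[A minimal sketch of a decorator in the spirit of that gist — the function and parameter names here are hypothetical, and the actual PoC patch may differ; it assumes only the standard library:]

```python
import cProfile
import functools
import os
import time
from datetime import datetime

def profile_to_file(output_dir="/tmp"):
    """Profile the wrapped callable with cProfile and dump the raw
    pstats data to output_dir. The file name is composed of the
    method path in dot notation, the elapsed milliseconds, and a
    timestamp, mirroring the naming scheme described in the meeting.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            profiler = cProfile.Profile()
            start = time.monotonic()
            try:
                # Run the wrapped call under the profiler
                return profiler.runcall(func, *args, **kwargs)
            finally:
                elapsed_ms = int((time.monotonic() - start) * 1000)
                stamp = datetime.now().strftime("%Y%m%d-%H%M%S%f")
                filename = "%s.%s.%dms.%s.profile" % (
                    func.__module__, func.__qualname__, elapsed_ms, stamp)
                profiler.dump_stats(os.path.join(output_dir, filename))
        return wrapper
    return decorator
```

[Note that dump_stats writes pstats format, not callgrind format; the third-party pyprof2calltree tool (e.g. `pyprof2calltree -i foo.profile -o callgrind.out.foo`) can convert such files for KCacheGrind.]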
16:23:28 <mlavalle> I can now generate KCacheGrind input files directly from create_port and update_port requests
16:24:04 <njohnston> This is really great stuff mlavalle
16:25:03 <njohnston> just have to apply that change and you can get the KCacheGrind files?  I can't wait to try it out!
16:25:28 <mlavalle> given Zuul's allergic reaction, I think the next step for that PoC is to make the profiling data gathering configurable with a config option
16:26:12 <mlavalle> njohnston: yes, just decorate whichever method you want to trace and it will dump files in the /tmp folder
16:26:30 <mlavalle> the files are ready to be consumed by KCacheGrind
16:27:30 <mlavalle> the only small nit is that I haven't figured out what files types KCacheGrind expects by default, so in the open dialog make sure you select all file types
16:27:59 <mlavalle> Then I did the following
16:28:52 <mlavalle> with this patch applied to my dvr 3 node devstack I ran our create_port_and_bind rally scenario
16:29:17 <mlavalle> 80 iterations, concurrency 20
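[A Rally task file along these lines would express such a run; the scenario name below follows the conversation and is assumed, not verified against the actual task definition used:]

```yaml
---
NeutronNetworks.create_port_and_bind:
  -
    runner:
      type: "constant"
      times: 80          # total iterations
      concurrency: 20    # parallel workers
    context:
      users:
        tenants: 1
        users_per_tenant: 1
```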
16:31:19 <mlavalle> I generated a bunch of profiling files: https://github.com/miguellavalle/neutron-perf/tree/august-25th/threadripper/devstack/25-August-2019
16:32:37 <mlavalle> if you look at the name of the files, they are composed of the method path in dot notation + how long the request took in milliseconds + date and time
16:33:04 <mlavalle> and then under https://github.com/miguellavalle/neutron-perf/tree/august-25th/threadripper/devstack/25-August-2019/reports
16:33:21 <mlavalle> I left the Rally run report
16:34:23 <mlavalle> and in the pictures folder a couple of examples of KCacheGrind reports generated for those runs
16:34:50 * mlavalle hasn't figured out how to generate a report from KCacheGrind
16:35:33 <mlavalle> and this is where I ran out of time last night
16:35:51 <mlavalle> what do you guys think?
16:37:06 <njohnston> I am very excited by this, and I want to try it out on the bulk port code, as well as seeing how the ipam code looks in this
16:37:46 <njohnston> Looking at your screenshots I think this is going to be a very intuitive tool for evaluating where performance bottlenecks are
16:38:24 <mlavalle> the only thing you need is to install KCacheGrind in your workstation
16:38:34 <bcafarel> yup that looks really promising
16:38:49 <mlavalle> and apply the decorator to whatever code method you want to profile
16:39:07 <mlavalle> so next steps are:
16:39:57 <mlavalle> 1) Refine the PoC patch so zuul doesn't barf with it. I think making this a configurable option will be good.
16:41:12 <mlavalle> 2) I am going to try to correlate the results in that Rally report with the profiling files generated so we can select the files better suited to identify bottlenecks due to concurrency
16:42:08 <mlavalle> that's it from me for this meeting
16:42:22 <mlavalle> anything else we should talk about today?
16:43:20 <njohnston> nothing I can think of
16:43:21 <jrbalderrama> nothing on my side
16:43:39 <mlavalle> cool.
16:43:44 <mlavalle> thanks for attending
16:43:52 <mlavalle> have a great week!
16:43:53 <njohnston> thank you mlavalle for all the hard work!
16:43:57 <bcafarel> thanks for that kcachegrind work :)
16:44:05 <mlavalle> :-)
16:44:11 <mlavalle> it's been fun
16:44:18 <mlavalle> #endmeeting