16:01:04 <mlavalle> #startmeeting neutron_performance
16:01:05 <openstack> Meeting started Mon Dec 17 16:01:04 2018 UTC and is due to finish in 60 minutes.  The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:09 <openstack> The meeting name has been set to 'neutron_performance'
16:01:30 <rubasov> hello
16:01:42 <bcafarel> hi
16:01:55 <slaweq> hi
16:04:45 <mlavalle> #topic Updates
16:05:48 <mlavalle> slaweq: you were working on adding osprofiler to the Rally job. Do you want to update the team on that?
16:05:57 <slaweq> yes, sure
16:06:22 <slaweq> I still have this DNM patch https://review.openstack.org/#/c/615350/
16:06:56 <slaweq> and I'm working with andreykurilin on it
16:07:15 <slaweq> basically there is a problem on the rally/osprofiler side
16:07:46 <slaweq> they made some progress recently
16:08:16 <slaweq> but in the last run there was a problem: rally was killed by the OOM killer while generating osprofiler reports
16:08:44 <slaweq> I'm still in touch with andreykurilin on it and I'm slowly moving forward with this
16:10:22 <mlavalle> just in case, I know you are doing all you can. Didn't mean to put you on the spot ;-)
16:10:44 <rubasov> I also have a tiny update
16:10:54 <mlavalle> let me ask you a question. if you don't know that's ok
16:11:01 <rubasov> about the floating ip tests in rally
16:11:03 <mlavalle> rubasov: will get to it in a minute
16:11:07 <rubasov> ok
16:11:09 <slaweq> mlavalle: sure
16:11:42 <mlavalle> does this mean that no other OpenStack project has osprofiler enabled in their Rally jobs?
16:12:08 <slaweq> yes, it looks like we are pioneers in this area
16:12:25 <slaweq> at least in CI jobs :)
16:12:32 <mlavalle> ahhh, I see, we are the bleeding edge
16:13:12 <slaweq> yes, but fortunately andreykurilin is very helpful and very responsive, so it's going somehow :)
16:13:12 <mlavalle> is andreykurilin with Mirantis?
16:13:33 <mlavalle> is he close to your timezone?
16:13:56 <slaweq> I don't know where he works
16:14:06 <slaweq> but he is in the same TZ as me, I think
16:14:16 <mlavalle> that's great
16:15:34 <mlavalle> so, is the next step to wait for more progress to be made on the rally / osprofiler side?
16:16:07 <slaweq> yes
16:16:32 <mlavalle> ok, let's do that then. Thanks for your update and your work on this slaweq :-)
16:16:50 <slaweq> andreykurilin told me today that he has an idea of how we might lower rally's memory usage, and we will check that
16:17:09 <mlavalle> let us know if there is anything we can do to help
16:17:50 <slaweq> mlavalle: sure, thx
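For context on what the job is enabling: once osprofiler is configured on the Neutron side (the [profiler] section of neutron.conf with enabled=True, an hmac key, and a collector connection string), a client triggers a trace by sending signed trace headers with a request. Below is a minimal Python sketch of that flow; the hmac key, token, and endpoint URL are illustrative assumptions, not values from the DNM patch.

    import requests
    from osprofiler import profiler, web

    # Must match one of the hmac_keys configured in neutron.conf.
    profiler.init(hmac_key="SECRET_KEY")

    # get_trace_id_headers() returns the signed X-Trace-Info and
    # X-Trace-HMAC headers that the server-side middleware validates.
    headers = web.get_trace_id_headers()
    headers["X-Auth-Token"] = "TOKEN"  # hypothetical keystone token

    resp = requests.get("http://controller:9696/v2.0/networks",  # assumed URL
                        headers=headers)

    # The base id identifies the whole trace; it is what gets passed to
    # "osprofiler trace show --html <base_id>", the report-generation
    # step during which rally was hitting the OOM killer.
    print(profiler.get().get_base_id())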
16:18:05 <mlavalle> rubasov: please go ahead with your update
16:18:31 <rubasov> it's all about the floating ip tests in rally
16:18:46 <rubasov> here: https://review.openstack.org/#/q/topic:rally-neutron-fip
16:19:18 <rubasov> now the floating ip association and disassociation operations are covered too
16:20:46 <mlavalle> are these patches ready for review?
16:21:14 <rubasov> yes they are
16:21:42 <rubasov> the Zuul -1 is just because of the unmerged dependency
16:21:46 <mlavalle> Great!
16:21:57 <mlavalle> thanks rubasov
16:22:50 <rubasov> yw, I may need to ping rally folks later
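For readers not following the patches: the new scenarios exercise floating IP create, associate, disassociate, and delete through the Neutron API. A rough openstacksdk sketch of the same operations follows; the cloud entry and network name are assumptions, and the actual scenario code lives in the patches linked above.

    import openstack

    conn = openstack.connect(cloud="devstack")        # assumed cloud entry
    ext_net = conn.network.find_network("public")     # assumed external network
    port = next(conn.network.ports(status="ACTIVE"))  # any active port

    # Create a floating IP, associate it, disassociate it, clean up.
    fip = conn.network.create_ip(floating_network_id=ext_net.id)
    conn.network.update_ip(fip, port_id=port.id)      # associate
    conn.network.update_ip(fip, port_id=None)         # disassociate
    conn.network.delete_ip(fip)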
16:23:06 <mlavalle> I also have an update
16:23:41 <rubasov> go ahead
16:23:44 <mlavalle> 1) I think you all have seen the email exchange with the EnOS team, haven't you?
16:23:51 <rubasov> I did
16:24:22 <slaweq> yes
16:24:33 <mlavalle> So we are going to have a conversation with them at our meeting on the 14th
16:25:00 <mlavalle> I am going to start an etherpad where we create an agenda for that conversation
16:25:15 <mlavalle> I'll send a message to you all later this week
16:25:25 <mlavalle> please add your ideas there
16:26:00 <mlavalle> let's try to make the most out of that meeting
16:26:13 <mlavalle> keep in mind that it will be a high-bandwidth meeting
16:26:16 <mlavalle> not IRC
16:27:06 <mlavalle> any questions, comments on this?
16:27:44 <rubasov> not from me
16:28:27 <mlavalle> ok, next
16:29:01 <mlavalle> 2) Last week I had a very interesting conversation with folks from a company called Platform9
16:29:23 <mlavalle> I was sharing with them the fact that we have formed this Neutron performance sub-team
16:29:35 <mlavalle> and that our goal is to improve performance and scalability
16:30:04 <mlavalle> They were very interested in the effort. They might even participate
16:30:29 <mlavalle> But for the time being, the interesting part was some points they raised:
16:31:35 <mlavalle> They have customers of all sizes for whom they operate OpenStack deployments
16:32:54 <mlavalle> based on their experience, improving the performance of Neutron in the "stable state" is very good
16:33:11 <mlavalle> but they also recommend that we should pay attention to the transitions
16:33:23 <mlavalle> because those transitions can be very costly
16:33:30 <mlavalle> Transitions mean:
16:33:36 <mlavalle> 1) Re-starting agents
16:34:34 <mlavalle> 2) Simulations of network outages
16:35:15 <mlavalle> 3) Situations where there is a lot of port churn, i.e. VMs are constantly being created and deleted (a sketch of such a load follows below)
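A minimal sketch of what a port-churn load could look like against a test deployment, using openstacksdk; the cloud entry and network name are assumptions, and real measurements would also watch agent CPU and RPC load rather than just API latency.

    import time
    import openstack

    conn = openstack.connect(cloud="devstack")   # assumed cloud entry
    net = conn.network.find_network("private")   # assumed tenant network

    # Constantly create and delete ports, timing each cycle, to
    # approximate the port-churn transition scenario above.
    for i in range(100):
        start = time.monotonic()
        port = conn.network.create_port(network_id=net.id, name=f"churn-{i}")
        conn.network.delete_port(port)
        print(f"cycle {i}: {time.monotonic() - start:.2f}s")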
16:35:47 <mlavalle> I know we are taking initial steps in our performance improvement efforts
16:36:07 <mlavalle> But I wanted to bring these scenarios to your attention, so we start thinking also about them
16:36:44 <mlavalle> does it make sense?
16:36:51 <slaweq> yes, I agree, and I know that e.g. restarting agents may be very slow if there are a lot of ports/networks handled by the agent
16:37:12 <rubasov> it absolutely does
16:37:34 <mlavalle> ok, that is what I wanted to update the team with
16:37:53 <mlavalle> is there anything else we should discuss today?
16:38:27 <slaweq> not from me
16:38:34 <rubasov> me neither
16:38:44 <mlavalle> ok, thanks for your hard work
16:38:53 <mlavalle> and thanks for attending
16:38:57 <mlavalle> #endmeeting