15:00:38 #startmeeting neutron_qos
15:00:39 Meeting started Tue Mar 24 15:00:38 2020 UTC and is due to finish in 60 minutes. The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:42 The meeting name has been set to 'neutron_qos'
15:00:42 hello
15:00:45 hi :)
15:01:03 yeah, the ofctl removal is not easy
15:01:09 there is no 1:1 conversion
15:01:18 let's wait 30 secs
15:02:02 #topic RFEs
15:02:11 #link https://bugs.launchpad.net/neutron/+bug/1858610
15:02:12 Launchpad bug 1858610 in neutron "[RFE] Qos policy not supporting sharing bandwidth between several nics of the same vm." [Undecided,New]
15:02:27 we are still waiting for a spec or a POC
15:02:41 and that seems to be pretty complex
15:02:47 I'll ping the author of the RFE today just to know the status
15:02:51 yeah, it is
15:03:25 so let's wait for it, but I don't know if we'll see something landed this cycle
15:03:45 and that's all for today in this section!
15:04:16 I don't think it will be this cycle
15:04:23 I dropped the classifier RFE because there are no volunteers to continue with the work
15:04:46 so I encourage anyone to continue with it
15:05:04 reference:
15:05:06 #link https://bugs.launchpad.net/neutron/+bug/1476527
15:05:09 Launchpad bug 1476527 in neutron "[RFE] Add common classifier resource" [Wishlist,Triaged] - Assigned to Igor D.C. (igordcard)
15:05:35 is something missing here?
15:05:58 #topic Bugs
15:06:03 #link https://bugs.launchpad.net/neutron/+bug/1864630
15:06:05 Launchpad bug 1864630 in neutron "Hard Reboot VM with multiple port lost QoS " [Undecided,Fix released] - Assigned to Nguyen Thanh Cong (congnt95)
15:06:16 well addressed by https://review.opendev.org/#/c/709687/
15:06:19 already merged
15:06:24 ++
15:06:36 and there is a backport https://review.opendev.org/#/q/If8edd29dd741f1688ffcac341fd58173539ba000
15:06:51 but this patch in Train should be rebased on top of
15:06:56 (one sec)
15:07:04 https://review.opendev.org/#/c/714417/
15:07:26 (now in Train we have a small issue with rally)
15:07:33 so we need to wait for it
15:07:37 not so small :)
15:07:39 https://bugs.launchpad.net/neutron/+bug/1868691
15:07:41 Launchpad bug 1868691 in neutron "neutron-rally-task fails 100% on stable branches" [Critical,In progress] - Assigned to Bernard Cafarelli (bcafarel)
15:07:45 I know, I know...
15:07:46 but bcafarel is on it already
15:07:52 bcafarel++
15:08:25 so the OVS QoS bug is almost solved, even in T
15:08:33 next one
15:08:35 #link https://bugs.launchpad.net/neutron/+bug/1863852
15:08:36 Launchpad bug 1863852 in neutron "[OVN]Could not support more than one qos rule in one policy" [Medium,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
15:08:49 ralonsoh: should this be backported also to Stein and older?
15:08:55 or does it apply only to Train?
15:08:57 hmmmmm
15:09:03 let me check that
15:09:14 I'll check that after this meeting
15:09:32 no, I can check it on my own later, I just thought that maybe you would know :)
15:09:34 thx
15:09:38 no, sorry
15:10:14 that bug was opened against master and I know this is also happening in T
15:10:22 but I don't know about previous versions
15:10:26 ok
15:10:37 ok, the OVN QoS bug
15:10:48 maciejjozefczyk found today some issues in the patch
15:11:01 #link https://review.opendev.org/#/c/711317/
15:11:09 and I still need to reply to haleyb
15:11:25 ralonsoh, slaweq this should be cherry-picked to train (qos)
15:11:31 but it is almost ready, probably in the next PS
15:11:46 yes, this QoS refactor should be backported to networking-ovn
15:11:57 (this is going to be funny...)
15:11:57 actually everything related to qos should be in train, because in train we have limited qos functionality, and it's mainly broken
15:12:23 yeah, once this patch is merged, I'll push a patch for n-ovn in T
15:13:04 ralonsoh, thanks!
15:13:09 yw
15:13:31 and so far, this is everything I have for today in the meeting backlog!
15:13:43 do you have any other bug?
15:13:50 nope
15:14:23 ok, so next section
15:14:27 #topic Open Discussion
15:14:39 something to add?
15:14:46 not from me
15:14:49 or do you want 45 mins back?
15:14:50 heheeh
15:15:06 Yes, I want to add one thing
15:15:15 please
15:15:38 We have a bug in Core OVN related to qos, described here
15:15:40 #link https://mail.openvswitch.org/pipermail/ovs-discuss/2020-March/049866.html
15:15:45 yeah...
15:15:59 Actually, if there are for example 2 ports from the same network on the same chassis (same compute)
15:16:07 those 2 ports share one 'qos bucket'
15:16:17 that means the limit is shared between those ports
15:16:53 so if one port is noisy, the other port from the same network could have a problem
15:17:20 or just transmitting one flow
15:17:22 The solution would be to create a meter per each OVS inport, and I'm discussing it with the OVN folks to address that
15:17:26 the BW will be shared
15:17:42 but why per inport?
15:17:49 * slaweq needs to leave for a few minutes, sorry
15:17:52 why not one meter per OVN QoS rule?
15:17:53 sorry, inport or outport
15:17:58 slaweq, bye
15:18:28 slaweq, bb!
15:18:31 in the future, we'll have "match" fields other than inport/outport
15:18:38 for example, filtering by ip or mac
15:18:41 ralonsoh, ah yes
15:18:49 (for FIP, for example)
15:19:02 ralonsoh, yes you're right
15:19:13 the point is, this is not like classful HTB in TC
15:19:26 where you have a qdisc and classes and filters
15:19:37 everything organized in a tree
15:19:53 there is no dependency between OVN qos rules
15:20:05 so, IMO, there should be a meter per rule
15:20:15 another topic could be the performance...
15:20:35 but imagine you have 100 ports in one chassis
15:20:40 each one shaped
15:20:47 and 100 FIPs
15:20:58 so you'll need those 200 meters, each one different
15:21:22 yes
15:22:30 what I don't understand is the current OVN qos implementation
15:22:41 maybe there is a reason for that
15:22:51 and we need to implement the qos in another way
15:22:58 neutron qos for ovn
15:23:58 ok, something else to add?
15:24:06 from me nope
15:24:18 thank you all and see you online!
15:24:24 #endmeeting
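The shared-meter problem discussed in Open Discussion can be illustrated with OVN's own CLI. This is a hedged sketch, not the fix itself: the logical switch and port names (`neutron-net1`, `port-a`, `port-b`) are hypothetical, rates are arbitrary, and whether the two rules end up sharing one datapath meter depends on the OVN version in use. It is a config/CLI fragment that requires a running OVN deployment.

```shell
# Two per-port bandwidth-limit QoS rules on the same logical switch
# (ovn-nbctl qos-add SWITCH DIRECTION PRIORITY MATCH rate=KBPS burst=KBITS):
ovn-nbctl qos-add neutron-net1 from-lport 2002 'inport == "port-a"' rate=10000 burst=8000
ovn-nbctl qos-add neutron-net1 from-lport 2002 'inport == "port-b"' rate=10000 burst=8000

# Inspect the QoS rows created in the northbound database:
ovn-nbctl list qos

# On the chassis, check how many meters were actually installed on br-int.
# With the behavior described above, both rules may map to one shared meter,
# so the 10 Mbps limit is split between the two ports instead of applied per port:
ovs-ofctl -O OpenFlow13 dump-meters br-int
```

A meter-per-rule layout, as proposed in the meeting, would instead show one OpenFlow meter per QoS row in the `dump-meters` output, so a noisy neighbor on `port-a` could not consume `port-b`'s allowance.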