17:18:55 #startmeeting ovn_community_development_discussion
17:18:56 Meeting started Thu Aug 27 17:18:55 2020 UTC and is due to finish in 60 minutes. The chair is imaximets. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:18:57 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:18:59 The meeting name has been set to 'ovn_community_development_discussion'
17:19:25 I do not have much to say. Who wants to go first?
17:19:56 I can start
17:20:18 dceara, ok.
17:20:36 thanks zhouhan for reviewing the conntrack bypass patch, I'll probably not have time for a v3 until next week though.
17:21:01 except for that I wanted to schedule a run on our scale setup with zhouhan
17:21:09 's I-P patches
17:21:23 hopefully I can manage to do that tomorrow.
17:21:26 dceara: thanks
17:21:53 And I sent a couple of bug fix patches today. One of them should probably be backported all the way back to 20.03.
17:22:08 That's it on my side, thanks!
17:22:59 dceara, thanks!
17:23:06 I can go next
17:23:12 zhouhan, sure.
17:23:37 I sent the series of incremental processing for flow installation: #link https://patchwork.ozlabs.org/project/openvswitch/list/?series=197009
17:24:24 The CPU cost is reduced by around 40% at a scale of 1200 HVs with 12K ports.
17:24:58 It also solves a bug when a conjunction combination is used.
17:25:16 (that may need a backport as well)
17:25:42 I also did more scale testing for 3k HVs with 30k ports.
17:26:36 zhouhan: regarding the conjunction bug (I didn't look at the patches yet), would it be possible to move it earlier in the series to make backporting easier?
17:27:10 It ran successfully. However, the current ovn-nbctl --wait=hv mechanism is not accurate for measuring the end-to-end latency, because the updates of nb_cfg from all HVs actually contribute most of the cost.
17:28:07 dceara: I think the earlier patches are required by the bug fix. (The bug fix is actually a big part of the series)
17:28:40 zhouhan: ack, thanks, I'll try to have a closer look too.
17:29:40 To measure the latency more accurately, I think I need to improve the nb_cfg mechanism to include a timestamp field. I will work on it.
17:30:21 But overall, by manually checking the latency, it seems a port binding can finish within 4-5 sec at that scale.
17:31:19 In addition, I did some code reviews. imaximets: could you take a look at this one as well?
17:31:21 https://patchwork.ozlabs.org/project/openvswitch/patch/20200813205259.5036-1-zhewang@nvidia.com/
17:31:42 That's it from me
17:32:20 zhouhan: in our tests we wait until the port can ping its gateway (or an external host) and we see it taking >10 sec in some cases. I didn't try with your patches yet though.
17:33:07 zhouhan, yeah, I looked at this patch and I'm wondering whether it's possible to fix the issues from inside idl/jsonrpc, without requiring the CMS to call special functions.
17:33:28 dceara: do you ping from all the VMs? I guess that action itself may add a lot of overhead.
17:34:07 zhouhan: only from the new fake VM (netns) until it is successful
17:34:21 imaximets: that would be better, if it can be supported.
17:35:04 dceara: but there will be 30k of them? And we need to make sure the slowest one can ping ...
17:36:06 dceara: or do you just ping from a random VM and assume most of the VMs got the flow installed at similar latency?
17:36:11 zhouhan: in our tests we don't advance to create the next port until the current one can ping its own gateway.
17:37:32 zhouhan: We also don't batch port add operations. This is in order to see the worst-case latency for flow installation.
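[Editor's note: the "ping until it succeeds" measurement dceara describes above could be sketched roughly as below. This is a minimal illustration under stated assumptions, not the actual test harness: the switch name "ls1", the namespace and port names, and the assumption that an interface with the matching iface-id is already wired into the namespace are all hypothetical.]

    import subprocess
    import time

    def run_ok(cmd):
        # Run a command quietly; True on zero exit status.
        return subprocess.run(cmd, stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL).returncode == 0

    def bind_to_ping_latency(port, netns, gateway_ip, timeout=60.0):
        # Time from logical port creation until the fake VM (netns)
        # behind it can reach its gateway. Assumes the namespace and its
        # iface-id wiring already exist, so binding starts immediately.
        start = time.monotonic()
        subprocess.run(["ovn-nbctl", "lsp-add", "ls1", port], check=True)
        while time.monotonic() - start < timeout:
            if run_ok(["ip", "netns", "exec", netns,
                       "ping", "-c", "1", "-W", "1", gateway_ip]):
                return time.monotonic() - start
            time.sleep(0.1)
        raise TimeoutError(f"{port}: no connectivity after {timeout}s")

    # Ports are added strictly one at a time (no batching), so each
    # sample reflects full end-to-end latency, not amortized batch cost.
    print(bind_to_ping_latency("lp-new", "fake-vm-new", "10.0.0.1"))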
17:37:37 dceara: Oh, I think that's a different scenario. I am testing when the whole scale is built up, then creating and binding a new port, and seeing how long it takes for this new change to get processed on all the HVs (meaning the new port can reach all other ports).
17:38:31 zhouhan: I see, ok, I can try to set our scenario up in a similar way too, thanks.
17:39:05 zhouhan, I do not know yet how to make re-balancing of connections work from inside the idl; I will likely reply to the ML with some ideas a bit later, if any.
17:39:23 dceara: to make sure *all* HVs have processed the new change, I am utilizing the --wait=hv feature. Now I realize that this mechanism itself was a bottleneck (even after solving the flooding problem).
17:40:05 zhouhan: ack, we decided to go for the ping approach exactly to avoid --wait=hv
17:41:05 dceara: So I am thinking about posting a timestamp from each ovn-controller while reporting the nb_cfg number it processed, so that nbctl can finally rely on the timestamps to calculate the time spent by the slowest HV.
17:42:04 imaximets: ok, thanks! But do you think that could be a follow-up improvement, independent of the command provided by that patch?
17:42:51 (of course, with that improvement, the current patch provided won't be as useful any more)
17:43:23 zhouhan, in general, I'd like to avoid introducing new commands if possible, especially if we can fix the issue in general.
17:44:45 zhouhan, how are these commands supposed to be used? Will the CMS just re-distribute all the clients by itself, or will it nominate only part of them for re-connection?
17:45:00 imaximets: agree in general. But I feel this command does provide some value for operational needs.
17:45:48 Hi
17:46:32 imaximets: I think the typical case is when a failover happened and the node recovered; the newly recovered node has no connections. So the operator can use the command to instruct some of the HVs to connect back to the recovered node.
17:47:07 hi
17:47:44 imaximets: but it may also be useful if someone wants to adjust (fine tune) the load of different servers by moving clients from one server to another.
17:48:28 zhouhan, I see. Let me think a little bit. I will reply on the ML, or just apply the patch if there are no clever ideas from my side. :)
17:48:46 thanks imaximets
17:49:13 zhouhan: I think there were some similar discussions on the ML at some point about adding a timestamp to the sync mechanism. We can probably continue the chassis.nb_cfg discussion there.
17:49:43 dceara: sure
17:50:14 dceara: maybe I will try a POC first
17:50:21 zhouhan: cool
17:53:51 OK. Anyone else want to share some updates?
17:55:33 So, I think we can call it now.
17:56:31 #endmeeting
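[Editor's note: the timestamp improvement zhouhan proposes at 17:41:05 could look roughly like the sketch below. The nb_cfg_timestamp column, its placement on Chassis_Private, and bumping NB_Global.nb_cfg directly are all assumptions for illustration (the idea was only a proposal at the time of this meeting). The sketch also naively compares timestamps taken on different hosts, so it assumes synchronized clocks; a real implementation would have to deal with clock skew.]

    import subprocess
    import time

    def slowest_hv_latency(expected, sent_ms, poll=0.5):
        # Poll the southbound DB until every chassis reports the expected
        # nb_cfg, then compute the slowest chassis's latency from the
        # (hypothetical) timestamp it recorded when it finished
        # processing, instead of timing a full --wait=hv round trip.
        while True:
            out = subprocess.check_output(
                ["ovn-sbctl", "--format=csv", "--data=bare",
                 "--no-headings", "--columns=nb_cfg,nb_cfg_timestamp",
                 "list", "Chassis_Private"], text=True)
            rows = [line.split(",") for line in out.splitlines() if line]
            if rows and all(int(cfg) >= expected for cfg, _ in rows):
                return (max(int(ts) for _, ts in rows) - sent_ms) / 1000.0
            time.sleep(poll)

    sent_ms = int(time.time() * 1000)
    # Bump the sequence number every hypervisor must acknowledge
    # (assumes nb_cfg can be set directly, for illustration only).
    subprocess.run(["ovn-nbctl", "set", "NB_Global", ".", "nb_cfg=42"],
                   check=True)
    print("slowest HV latency: %.3fs" % slowest_hv_latency(42, sent_ms))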