18:15:21 #startmeeting ovn_community_development_discussion
18:15:21 Meeting started Thu Nov 19 18:15:21 2020 UTC and is due to finish in 60 minutes. The chair is mmichelson. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:15:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:15:24 The meeting name has been set to 'ovn_community_development_discussion'
18:16:27 hi
18:16:34 Hopefully y'all saw my email about delaying branching until 4 December.
18:16:34 hi dceara
18:17:11 And it turns out this is pretty timely. We've been seeing some test failures based on some recent commits.
18:17:29 I'm personally working on fixing a test failure I caused with the SNAT CT zone selection patch.
18:17:55 yeah, there are a few tests which fail for me when run in parallel
18:18:06 I usually run with RECHECK=yes, which helps.
18:18:10 In short, the test is failing because of the change to no longer associate LOG_TO_PHY flows with a port binding
18:18:17 But it's better to make the tests reliable.
18:19:04 The result is that a gateway router that is bound to one chassis (hv3) sometimes still has flows installed on another chassis (hv1), and the test fails because it explicitly tries to ensure that hv1 is not seeing the traffic that should be handled by the gateway chassis
18:19:19 I have a plan for how to fix it. The problem is it's making the code really ugly.
18:20:01 mmichelson: Why isn't this a matter of making sure that all the chassis are caught up with nb?
18:20:07 In addition to the LOG_TO_PHY flows though, I found that even without my patch, there are gateway router flows on the non-gateway chassis, which seems wrong. It's like the gateway router is being set as a local datapath even though it shouldn't be.
18:20:45 mmichelson, is having these flows on another chassis a problem?
18:20:47 blp, they are, but the incremental processing logic in ovn-controller got updated due to my CT zone patch. And the logic can cause a failure
18:20:57 if not, is it ok to change the test case?
18:20:59 mmichelson: incremental processing is so difficult
18:21:20 numans, Well, if ALL gateway router flows were removed from the non-gateway chassis then the problem I'm seeing wouldn't happen.
18:21:24 blp, yes it is
18:21:44 numans, the test case is valid
18:22:02 numans, it's perfectly reasonable for someone to set up the same scenario and expect it to work properly
18:22:05 mmichelson, ok. I'm just wondering whether it's ok to have those flows vs making the code uglier to fix it :)
18:22:32 But if it's wrong behavior, sure, we should fix it.
18:22:59 numans, so my proposed fix is some ugly code that makes it so that when CT zones are changed, we need to iterate over all port bindings and find if the datapath that those port bindings belong to was updated, then erase all physical flows having to do with both those bindings and their peers
18:23:27 numans, but another possible fix might be to detect the gateway router as not being local and remove all its flows. That might be slightly less ugly.
18:23:38 maybe
18:23:50 ok
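(A rough, self-contained sketch of the fix outlined above: when CT zone assignments change, walk every port binding, check whether its datapath was affected, and erase the physical flows for that binding and its peer. All of the types and helper names below are invented for illustration only; the real ovn-controller code operates on Port_Binding rows and the installed OpenFlow tables.)

    #include <stdbool.h>
    #include <stdio.h>

    struct binding {
        const char *name;
        int datapath_key;   /* Key of the datapath this binding belongs to. */
        int peer;           /* Index of the peer binding, or -1 if none. */
    };

    /* Stand-in for "the CT zones of this datapath were updated". */
    static bool
    ct_zones_changed(int datapath_key, const int *changed, int n_changed)
    {
        for (int i = 0; i < n_changed; i++) {
            if (changed[i] == datapath_key) {
                return true;
            }
        }
        return false;
    }

    /* Stand-in for erasing the installed physical (OpenFlow) flows
     * associated with one binding. */
    static void
    erase_physical_flows(const struct binding *b)
    {
        printf("erase physical flows for %s\n", b->name);
    }

    int
    main(void)
    {
        struct binding bindings[] = {
            { "gw-router-port", 1,  1 },  /* Peered with the patch port below. */
            { "patch-to-gw",    2,  0 },
            { "vm1",            2, -1 },
        };
        int changed[] = { 1 };            /* Datapaths whose CT zones changed. */

        for (int i = 0; i < 3; i++) {
            if (ct_zones_changed(bindings[i].datapath_key, changed, 1)) {
                erase_physical_flows(&bindings[i]);
                if (bindings[i].peer >= 0) {
                    erase_physical_flows(&bindings[bindings[i].peer]);
                }
            }
        }
        return 0;
    }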
18:24:50 Anyways, getting the CT zone fix in and getting this test fixed has taken up way too much of my time this week. I haven't had proper time to do reviews, which is what I wanted to spend the majority of the week doing
18:25:19 mmichelson: anything else?
18:25:25 At this moment, I've read through imaximets' dp groups patch series, and on the surface it seems good. I haven't put an ack on the series yet because I wanted to do some testing of my own
18:25:35 I think that's all from me.
18:25:46 may i go next?
18:26:16 sure
18:26:26 numans: Thanks for the test script. I'll try it. My other priority for v8 is to try to use ovsdb-idl, or important parts of it.
18:26:43 numans: Also, I posted a tentative schedule for ovscon: https://ovscon.site/
18:26:43 welcome
18:27:14 I tried to make the page amenable to adjusting to anyone's local time zone. If there are bugs in that, please let me know.
18:27:28 that's cool.
18:27:33 I'll check it out.
18:27:36 I didn't put in a time zone selector because I'm really confused about DST in the southern hemisphere (especially NZ and AU)
18:27:44 so you just have to type your GMT offset.
18:28:24 Does anyone want to comment on my ddlog patches? That's about all I've got for today.
18:28:59 no additional comments from me other than what I reported earlier today.
18:29:08 Yes, thanks again for those numans
18:29:09 thanks for fixing the probe intervals.
18:29:22 welcome
18:29:28 There was more going wrong with probe intervals than I thought! I did fix them though.
18:29:57 OK then.
18:29:59 Who's next?
18:30:04 I can go next
18:30:24 I worked on a new version of the load balancer hairpin patches.
18:30:32 thanks dceara for the review and the suggestions.
18:30:56 mmichelson, If you could take a look at the hairpin patches that would be great. dceara has acked patches 3-7.
18:31:19 numans, will do
18:31:30 I submitted a patch to pin ovn-controller to a specific version of ovn-northd to minimize downtime during upgrades.
18:31:49 Apart from that I did some code reviews.
18:32:18 I'm reviewing Anton's parallel patches and testing them out. Reading and learning OVS's hmap implementation as I go.
18:32:22 That's it from me.
18:32:30 <_lore_> can I go next?
18:33:08 <_lore_> this week I worked on the bfd implementation in ovn, the solution is stable now
18:33:34 <_lore_> I discussed with openshift folks about the scale
18:33:44 <_lore_> they are targeting ~30 connections per gw node
18:34:12 <_lore_> I tested it at this scale and the solution seems to work
18:34:16 numans: If you have any questions about hmap, that's all me.
18:34:38 <_lore_> I think we should test it in a real deployment
18:34:59 <_lore_> I will post an rfc v2 soon
18:35:20 <_lore_> then I addressed dceara's comments on my raft debug patch, posted v3
18:35:35 <_lore_> and I added the capability to log commands on ovn-scale-test
18:35:40 <_lore_> that's all from me
18:36:00 blp, sure. I'll ask you. Thanks.
18:36:34 In case you have bandwidth to review - https://patchwork.ozlabs.org/project/ovn/list/?series=212710
18:36:59 I'm certainly not qualified enough in that area to do a proper review and give comments.
18:37:27 There's already cmap for highly parallel access to a hash table; is there a discussion of why another data structure is needed?
18:38:32 I don't think we had any discussion about it in the mail thread.
18:39:34 I'll check out cmap too during the review.
18:39:36 thanks.
18:39:48 I have to admit I didn't know about cmap until you just mentioned it
18:40:04 Yeah, it's not used anywhere in OVN. That's probably why
18:41:32 mmichelson: what's the plan after ddlog is merged? Do we update both C northd and ddlog northd for the same feature changes?
18:41:54 That's the hope.
18:41:56 zhouhan, yes
18:42:24 When ddlog is proven stable/production ready, can we then abandon the C version?
18:42:40 That's *my* hope, at least.
18:43:05 zhouhan, yes, though determining stability and production readiness is going to be tough.
18:43:18 Would ddlog benefit from the parallel northd change?
18:43:36 I haven't read the parallel northd patches. How do they work?
18:43:38 I don't think they would.
18:44:24 Or I guess we wouldn't need parallel processing for ddlog, right? (since there won't be any recompute at all?)
18:44:45 blp, he needs a separate hmap implementation since he wants to slice data, i.e. process slices of the hmap in different threads.
18:44:50 zhouhan, right, parallelization isn't as useful with incremental processing. It could be useful for full recomputes I guess?
18:45:21 ddlog has built-in support for multithreading, although we haven't enabled it. It is not clear whether it would be beneficial in this case.
18:45:24 from what I understand so far, it builds the flows in parallel by creating a thread for each datapath hmap bucket
18:45:42 blp, the implementation divides an hmap up between threads. So thread 1 processes items 0, 0+N, 0+2N, etc. Thread 2 processes items 1, 1+N, 1+2N, etc.
18:45:47 Where N is the number of threads.
18:46:13 OK.
18:46:14 And then the results are merged together. That's the general idea
18:47:00 imaximets: cmap can do that, I think, although the details are important.
18:48:33 blp, cmap is mostly about working on the same data from different threads, but he wants to work on different data and wants this data to have no intersections.
18:49:10 blp, it's just a matter of API extension, I think.
18:49:22 OK.
18:49:26 to have access to some internals.
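(A minimal, self-contained illustration of the striding scheme described above: thread i visits elements i, i+N, i+2N, ... of shared read-only data and the per-thread results are merged afterwards. Plain pthreads and an integer array stand in for the real hmap of datapaths/lflows; this is not the actual patch code.)

    #include <pthread.h>
    #include <stdio.h>

    #define N_THREADS 4
    #define N_ITEMS   1000

    static int items[N_ITEMS];   /* Stand-in for the shared hmap contents. */

    struct slice {
        int thread_idx;          /* Which stripe this thread owns. */
        long partial_sum;        /* Per-thread result, merged at the end. */
    };

    static void *
    process_slice(void *arg)
    {
        struct slice *s = arg;
        /* Thread i handles items i, i + N_THREADS, i + 2 * N_THREADS, ... */
        for (int i = s->thread_idx; i < N_ITEMS; i += N_THREADS) {
            s->partial_sum += items[i];   /* Stand-in for building flows. */
        }
        return NULL;
    }

    int
    main(void)
    {
        pthread_t threads[N_THREADS];
        struct slice slices[N_THREADS];

        for (int i = 0; i < N_ITEMS; i++) {
            items[i] = i;
        }
        for (int i = 0; i < N_THREADS; i++) {
            slices[i] = (struct slice) { .thread_idx = i, .partial_sum = 0 };
            pthread_create(&threads[i], NULL, process_slice, &slices[i]);
        }

        long total = 0;
        for (int i = 0; i < N_THREADS; i++) {
            pthread_join(threads[i], NULL);
            total += slices[i].partial_sum;   /* Merge step. */
        }
        printf("total = %ld\n", total);
        return 0;
    }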
18:50:49 Ok, I've lost track of whose turn it is. Who'd like to go next?
18:51:03 I can go next if that's ok.
18:51:34 dceara: i think so
18:51:50 blp: Thanks for the discussion on the nb_cfg ovn-controller patch and for the conditional ack. I'll wait a bit though before sending a v3 to think more about potential other solutions.
18:52:32 blp: This also means that the patch in your ddlog series that removes "sleep" in most tests might have to wait. I'd say it's better to make tests (more) reliable before merging that one.
18:53:22 dceara: OK.
18:53:30 blp: Except for that I've been doing some reviews here and there and quite a lot of debugging.
18:53:38 blp: That's it from me I guess, thanks.
18:54:23 OK, anybody else?
18:54:32 I have a quick update.
18:55:15 Last week I prepared a first version of the "logical datapath group" implementation. This is to reduce the number of lflows by combining them.
18:56:06 First results are good. In my test it reduced the size of the db by a factor of 6.
18:56:21 However, there are a few issues uncovered during unit testing.
18:56:41 ovsdb-idl seems to not handle row deletions correctly, causing memory leaks and use-after-free conditions.
18:57:10 I wonder if that got introduced in the last year or two. It was correct once.
18:57:28 I wrote pretty thorough tests for it, I thought.
18:57:36 we debugged that a lot with dceara and seem to have one solution, but will test it further.
18:58:03 The mapping from uuids to pointers in the idl code is complex.
18:58:15 I don't like that code, but it makes using the database from C bearable.
18:58:42 blp, it's hard to say. The SB db has a lot of dependencies between rows and my patch-set seems to add some more.
18:59:26 one more issue is that for some reason ovn-controller operates on freed port_binding rows.
18:59:50 If you find a general problem in the idl code, then it might be wise to add a new test to ovs to find it.
19:00:05 There are idl tests there for row dependencies already, so it might be easy to add another.
19:00:21 blp, yep. that's a good point.
19:00:30 (I liked building the Python IDL better, it does not build row dependency graphs.)
19:01:34 The ovn-controller issue could be worked around by receiving all updates for the port_binding table, but that is not a good solution. Need to figure out what happens so that conditional monitoring works as needed.
19:02:16 imaximets: 6 times sounds great. How many datapaths and lflows-per-DP in this testing?
19:02:36 To get the ddlog code to use the idl code, instead of its ad hoc code, I might end up refactoring the idl into a couple of layers. That might make it easier to understand.
19:02:40 blp: we have some feature-set differences between the C and Python idl, unfortunately.
19:03:07 zhouhan, it was 1.1 million flows in total, reduced to 170 thousand.
19:03:25 zhouhan, 100 logical switches in this test.
19:03:45 imaximets: Great! thx
19:04:38 blp, layers might be good to have. Dependency tracking is hard to understand in the current idl.
19:05:15 imaximets: We'll see. I haven't read enough code yet to figure it out.
19:05:39 So, once we figure out what is going on with the idl, I'll prepare a v2 with fixes and some performance optimizations for the code.
19:05:55 that's it from my side.
19:06:53 Anyone else?
19:08:55 OK, I guess that's all then. Thanks everyone
19:09:00 #endmeeting