17:17:00 <mmichelson> #startmeeting ovn_community_development_discussion
17:17:01 <openstack> Meeting started Thu Jun 25 17:17:00 2020 UTC and is due to finish in 60 minutes.  The chair is mmichelson. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:17:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:17:04 <openstack> The meeting name has been set to 'ovn_community_development_discussion'
17:17:42 <mmichelson> Hi everyone
17:17:52 <imaximets> Hi.
17:18:02 <flaviof> o/
17:18:08 <dceara> Hi
17:18:20 <mmichelson> I can go first
17:18:54 <mmichelson> I spent some time yesterday fixing the case-sensitivity/normalization issues in ovn-nbctl commands for MACs and IPv6 addresses
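(A hypothetical Python sketch of the kind of issue being fixed — ovn-nbctl itself is C, and the function names below are made up for illustration: the same MAC or IPv6 address can be spelled several equivalent ways, so comparing raw strings mismatches unless both sides are normalized first.)

```python
# Hypothetical sketch, not ovn-nbctl code: canonicalize MAC and IPv6
# spellings before comparing or storing them.
import ipaddress

def normalize_mac(mac: str) -> str:
    # Canonical form: lowercase hex digits.
    return mac.lower()

def normalize_ipv6(addr: str) -> str:
    # Canonical RFC 5952 form: lowercase, longest zero run compressed.
    return ipaddress.ip_address(addr).compressed

print(normalize_mac("AA:BB:CC:00:11:22"))      # aa:bb:cc:00:11:22
print(normalize_ipv6("2001:DB8:0:0:0:0:0:1"))  # 2001:db8::1
```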
17:19:15 <mmichelson> I also wrote a script to be able to send the email to ovs-dev about this particular meeting. We'll see how that goes once this meeting is over :)
17:19:28 <mmichelson> Did a bunch of reviews Monday.
17:19:32 <mmichelson> And I think that about covers it.
17:20:56 <_lore_> hi all
17:21:05 <aconole> I have a quick thing to bring up: OVS+OVN conf 2020 is going to be virtual.  We're looking at the week of Dec 7th for it.  Still open to ideas on implementation.  The big thing is making sure the dates aren't already booked.
17:21:33 <mmichelson> If people have plans for that week this far out, I'm impressed
17:22:24 <mmichelson> I, for one, can tell you that I do not have plans yet for that week :)
17:22:25 <aconole> Well, we targeted a week in Nov, but KubeCon is already scheduled then.
17:22:32 <imaximets> #link https://mail.openvswitch.org/pipermail/ovs-discuss/2020-June/050275.html
17:22:55 <imaximets> A relevant thread on ovs-discuss. ^
17:25:51 <mmichelson> I'd suggest people look into what may be going on that week for them and comment on the email thread.
17:26:00 <mmichelson> Thanks, aconole for bringing it up
17:27:24 <mmichelson> Does anyone else wish to share? This could be a quick meeting...
17:27:32 <dceara> I have a couple updates
17:28:18 <dceara> 1. I respinned the IDL recovery patch as discussed during the last meeting. I did make it retry in all cases when an inconsistency is detected:
17:28:23 <dceara> #link https://patchwork.ozlabs.org/project/openvswitch/patch/1592513144-25095-1-git-send-email-dceara@redhat.com/
17:29:10 <dceara> 2. I sent a series implementing zhouhan's suggestion to avoid high number of flows in the IP_INPUT stage due to ARP responders for DNAT IPs:
17:29:18 <dceara> #link https://patchwork.ozlabs.org/project/openvswitch/list/?series=185580
17:29:32 <zhouhan> dceara: thanks, I will review them.
17:29:55 <dceara> As far as I know gmg is already testing the second series on his setup.
17:30:18 <dceara> zhouhan: thanks.
17:31:10 <dceara> I'm also working on a potential optimization suggested by numans: to split port groups per datapath in order to avoid reinstalling all referring logical flows when ports are changed in a port group.
17:31:24 <dceara> That's it on my side for this week.
17:31:49 <zhouhan> That's cool, but I haven't heard from gmg on the RFC patches I sent earlier for the ARP_RESOLVE stage flow explosion problem.
17:32:17 <dceara> zhouhan: as far as I understood gmg has both series applied in his current test.
17:32:21 * zhouhan seeing gmg left
17:32:32 <dceara> zhouhan: connectivity issues? :)
17:32:46 <zhouhan> dceara: that's great :)
17:34:19 <flaviof> dceara: can you briefly describe how splitting pgs per dp avoids the re-installing of the referring logical flows? I'm just curious.
17:34:48 <dceara> flaviof: I meant, splitting the port groups in the SB db. So for the CMS this change would be transparent.
17:35:28 <flaviof> dceara: right, I'd think this is a sb thing only. still, how does it change the behavior?
17:37:25 <dceara> flaviof: Right now if a port group contains ports from different logical switches and X logical flows refer to the port group and a port P is added to the port group, we reinstall all X logical flows, while we could reinstall only the Y (< X) flows that are defined on the logical datapath corresponding to the switch where P is connected.
17:37:35 <dceara> (sorry, really long sentence, I hope it makes sense)
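(A toy Python model of the optimization dceara describes — hypothetical numbers, not ovn-northd code: with one SB port group spanning several switches, any membership change touches all X referring flows; split per datapath, only the affected datapath's Y flows are redone.)

```python
# Toy model of flow-reinstallation cost when a port is added to a port
# group. Numbers are invented for illustration.

# Logical flows referring to the port group, keyed by logical switch.
flows_per_switch = {"ls1": 40, "ls2": 35, "ls3": 25}  # X = 100 in total

def reinstalled_single_pg(flows, new_port_switch):
    # One SB port group for all switches: every referring flow is redone.
    return sum(flows.values())

def reinstalled_split_pg(flows, new_port_switch):
    # One SB port group per datapath: only flows on the datapath where
    # the new port lives are recomputed.
    return flows[new_port_switch]

print(reinstalled_single_pg(flows_per_switch, "ls2"))  # 100
print(reinstalled_split_pg(flows_per_switch, "ls2"))   # 35
```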
17:38:01 <flaviof> it does. thanks!
17:38:38 <zhouhan> dceara: wow, you made it by describing this in one sentence!
17:38:46 <dceara> zhouhan: :)
17:39:20 <flaviof> dceara++ and yes, that would be huge for openstack, bc we have a pg that has all ports from all ls
17:39:55 <dceara> flaviof: yes, this issue came up during scale testing done by anilvenkata and dalvarez and team for OpenStack
17:41:27 <zhouhan> flaviof: "a pg that has all ports from all ls" sounds strange. I guess it would not need a pg in this case if we know that it would be all ports, right?
17:41:55 <flaviof> dceara yup. I had that side of the story, just did not know how you were solving it.
17:41:56 <dceara> zhouhan: yes, it's a default deny PG, afaiu
17:42:08 <zhouhan> flaviof: it would be much much more scalable if not using PG in this case.
17:42:28 <flaviof> zhouhan: yeah, it is a 'special' pg. We need it to give the 'drop' by default behavior that openstack expects.
17:42:35 <dceara> zhouhan: but the problem is with port groups in general, not only with this 'special' pg
17:42:58 <dceara> zhouhan: there might still be reasonably large port groups that don't contain all ports from all LSs
17:43:20 <zhouhan> dceara: yes I agree with the problem in general.
17:43:55 <dceara> zhouhan: I have the code almost ready, will send it out once I do more benchmarking.
17:44:25 <zhouhan> great!
17:45:00 <zhouhan> flaviof: I think I misunderstood here. The default PG doesn't have any rules that reference the group itself, right?
17:45:29 <zhouhan> flaviof: if so, that's fine. And the change mentioned by dceara should help.
17:45:55 <flaviof> zhouhan: right. no such rules, just many many ports. ;)
17:46:04 <zhouhan> ok
17:46:41 * numans joining late.
17:47:18 <numans> zhouhan, Also openstack normally would have a default security group (and sg rule) for a tenant.
17:47:57 <numans> So in a way it's not just the drop one. Anyway I see that you already understood what dceara is saying :)
17:49:05 <numans> Can I go real quick, or am I interrupting someone in the middle of an update?
17:49:16 <dceara> numans: zhouhan: there are more potential optimizations we should look at for this: like I-P for port group members
17:49:21 <gmoodalbail> Han, Dumitru: This is Girish here. We are testing your patches. It has definitely reduced the logical flow explosion that was captured in the email thread. We haven't tested the `dynamic neighbor cache` thing yet since we need to write code to move to a single join switch
17:49:23 <zhouhan> numans: yeah, I recalled it. flaviof mentioned the default pg that contains all ports, and I thought that if this group has rules that reference itself, it would generate O(N^2) flows due to the self-reference address-set.
17:50:03 <zhouhan> Usually tenant's default PG has such self-reference rules (to allow port in a group to talk to each other)
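(A back-of-the-envelope sketch of zhouhan's O(N^2) point — a hypothetical model, not actual northd output: an ACL on a port group that references the group's own address set expands to roughly one flow per (member port, member address) pair.)

```python
# Toy scaling model: a self-referencing port-group rule yields about
# one flow per port/address pair, so flows grow quadratically with
# the number of members N.

def self_ref_flows(n_members):
    # one flow per (member port, member address) combination
    return n_members * n_members

for n in (10, 100, 1000):
    print(n, self_ref_flows(n))
```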
17:50:32 <numans> Yeah
17:51:10 <zhouhan> gmoodalbail: thanks for the update!
17:51:45 <zhouhan> gmoodalbail: so the ARP_RESOLVE stage flows are not yet verified, right?
17:52:58 <zhouhan> gmoodalbail: that problem happens when you have a single join switch, so I will wait for your confirmation when you move to a single join switch.
17:53:15 <zhouhan> I can go next quickly
17:53:56 <zhouhan> I don't have much of an update except rebuilding our scale test env in the lab.
17:54:46 <zhouhan> I found that latest ovn-scale-test is not working as expected for our old scenarios. I am working on the changes.
17:55:00 <zhouhan> that's it
17:55:10 <mmichelson> That was succinct :)
17:56:32 <numans> I can go real quick.
17:56:42 <numans> Last Friday I applied the I-P patches to master.
17:56:51 <numans> Thanks to zhouhan dceara and mmichelson for the reviews.
17:57:04 <numans> I applied those patches today to branch-20.06
17:57:13 <zhouhan> great!
17:57:18 <numans> mmichelson, It would be great if we could release 20.06.1
17:57:55 <numans> I submitted a few bug fix patches and a couple of small patches.
17:58:03 <numans> thanks for the reviews.
17:58:09 <flaviof> #link https://github.com/ovn-org/ovn/commit/ade4e779d3fb5cfe601a0da2bf73a0ed90696c38 I-P patches
17:58:14 <numans> I just have 2 patches in the queue now.
17:58:17 <numans> #link https://patchwork.ozlabs.org/project/openvswitch/list/?submitter=77669
17:58:34 <numans> It would be great if someone could take a look.
17:58:39 <mmichelson> numans, sure, that makes sense. Would it make sense to wait for dceara's improvements to make it in before making 20.06.1?
17:58:56 <numans> mmichelson, I think it may take some time.
17:59:01 <numans> dceara, what do you think ?
17:59:17 <numans> I'd suggest for 20.06.1.
17:59:34 <numans> That's it from me.
17:59:38 <mmichelson> numans, ack
17:59:53 <numans> I plan to look into mmichelson and dceara's patches tomorrow.
17:59:56 <gmoodalbail> zhouhan: correct. we should be able to finish the single join switch case fairly soon.
18:00:15 <dceara> mmichelson: numans: I can have the PG patch on the ML in a couple of days.
18:00:20 * zhouhan have to drop off for another meeting. ttyl
18:00:26 <numans> dceara, wow. That's cool :)
18:00:54 <mmichelson> dceara numans OK, it would be nice to have all the performance improvements in 20.06.1 if possible
18:00:59 <dceara> numans: just the splitting of the PG. Nothing fancy.
18:01:01 <numans> mmichelson, If you want to wait for the PG patches that's fine
18:01:05 <mmichelson> OK cool
18:01:43 <numans> I guess with those PG patches in, we can close the 20.06 branch for any further non-bug-fix patches.
18:02:09 <mmichelson> numans, hopefully it's mostly bug fixes going in there anyway
18:02:43 <mmichelson> Does anybody else wish to share?
18:02:59 <imaximets> I have one note
18:03:14 <imaximets> We likely need a stable OVS release, at least on the 2.13 branch.  People out there are using the latest stable tag, which is 2.13.0, and complain about raft issues. :)
18:03:31 <imaximets> We had quite a lot of raft fixes and some other patches, so it seems like a good point to make a stable release, for OVS 2.13.1 at least.
18:03:31 <mmichelson> I didn't realize there hadn't been a point release of OVS since 2.13.0
18:03:51 <mmichelson> I guess we should contact blp about that.
18:04:03 <mmichelson> We can bring it up on the dev list.
18:04:13 <imaximets> mmichelson, sure, I'll send an email on a list.
18:04:19 <imaximets> One point raised yesterday during the OVS+DPDK public meeting: Intel is going to finish verification of the latest stable DPDK releases in a couple of weeks and submit related patches for OVS. So it would be good to cut the OVS stable release right after that.
18:04:24 <mmichelson> Also it would be good to be sure all the RAFT changes are in the 2.13 branch
18:04:50 <imaximets> mmichelson, they are, AFAIK.  But we could re-check
18:04:57 <mmichelson> imaximets, ack
18:05:34 <imaximets> So, I'll send an email soon about that.
18:05:41 <imaximets> That's it from my side.
18:06:55 <mmichelson> OK, anybody else?
18:08:10 <mmichelson> All right, I suppose that's it. Bye everyone!
18:08:21 <imaximets> Bye.
18:08:31 <mmichelson> #endmeeting