14:00:07 #startmeeting neutron_drivers
14:00:07 Meeting started Fri Jun 30 14:00:07 2023 UTC and is due to finish in 60 minutes. The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:07 The meeting name has been set to 'neutron_drivers'
14:00:10 hello all!
14:00:14 o/
14:00:21 o/
14:00:29 o/
14:00:31 just before starting: please check https://review.opendev.org/c/openstack/neutron-specs/+/885324
14:00:45 next week will be the last one to merge a neutron spec for this cycle
14:00:53 that's all (before starting)
14:01:18 \o/
14:01:20 frickler?
14:01:43 I think we have quorum, let's start
14:01:51 1) [RFE] Formalize use of subnet service-type for draining subnets
14:01:57 #link https://bugs.launchpad.net/neutron/+bug/2024921
14:02:20 is frickler here?
14:02:30 hi
14:02:56 ok, let's move to the second one for now
14:02:58 2) [RFE] Caching instance id in the metadata agent to have less RPC messages sent to server
14:03:03 https://bugs.launchpad.net/neutron/+bug/2024581
14:03:07 slaweq, please
14:03:24 I opened it based on some feedback from the forum in Vancouver
14:04:05 basically someone in the large scale deployments room was saying that adding caching of the instance_id in the metadata agent lowered the load on rabbitmq significantly
14:04:44 and as we checked with ralonsoh, it seems that we are asking the neutron server for the instance id with every request to the metadata service
14:05:16 maybe we should ask for it once and then cache it for a short time locally
14:05:44 so when a guest vm is booted and cloud-init is doing many requests to the metadata server, it will do just one rpc query to neutron-server
14:05:46 so, each agent in each compute would have a cache?
14:05:48 sahid added a comment that they use caching for metadata in their env, if I understand well
14:05:54 that's the whole idea
14:06:11 the OVS RPC cache implementation is better than using oslo_cache
14:06:11 without a code change, it would be interesting to try it (I never tried it personally but happy to test it)
14:06:29 the RPC cache will subscribe to the resource events (ports in this case)
14:06:38 and will always have the updated information
14:06:54 lajoskatona I didn't know we can use that memcache there
14:07:08 maybe that's the solution then, I will need to test it
14:07:12 and it will run faster because it won't issue any RPC call (regardless of whether it would be caught by oslo cache)
14:07:15 neither did I, so something new to check for me at least :-)
14:07:19 thx a lot
14:07:37 yeah, let's give it a try
14:07:55 so I will check it and we can come back to that rfe later
14:09:36 ok, I'm against using the oslo cache, just for the record
14:09:53 why?
14:10:05 because that won't have the most updated information
14:10:19 oslo cache catches the RPC calls and stores the info
14:10:34 but it doesn't store the latest DB info
14:10:46 as we can achieve with the OVS RPC cache implementation
14:10:52 OVS agent
14:11:28 in any case, the oslo cache is just configuration, no code is needed
14:11:33 I think if it is documented as an option with all the effects, it can be a choice for the operator, of course without knowing now if it really works
14:11:48 yep
14:12:07 that's ok, but are we going to go further?
14:12:15 it has some (small) cons, so if we test it and document it properly, it can work IMO
14:12:45 I will test this oslo cache thing and will then see if we need any changes in docs or somewhere else
14:12:54 ok, so the output of this RFE is documentation, right?
14:13:07 if I need any other discussion about it, I will get back to you to bring it here :)
14:13:12 perfect
14:13:15 +1
14:13:20 +1
14:13:25 thanks slaweq for bringing it here
14:13:48 so we have 2 votes in favor of this RFE
14:13:50 +1 mine too
14:13:58 please, vote for this RFE
14:14:14 +1 from me
14:14:21 obondarev?
14:15:11 ok, we have enough votes I think
14:15:17 so RFE approved
14:15:20 +1
14:15:23 which one? sorry, got disconnected
14:15:24 he seems to have dropped off
14:15:47 obondarev__, https://bugs.launchpad.net/neutron/+bug/2024581
14:16:06 thanks ralonsoh, lgtm, +1
14:16:09 thanks
14:16:14 I'll update the LP bug
14:16:40 ok, let's go for the third one
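To make the proposal above concrete: the RFE's idea is that the metadata agent would resolve the instance id once and reuse it for the burst of cloud-init requests, instead of issuing an RPC call to neutron-server on every request. The snippet below is only an illustrative sketch of that pattern (a short-lived local TTL cache wrapped around an expensive lookup), not Neutron's actual code; the class, function, and parameter names are made up for the example, and whether the existing oslo.cache configuration already covers this path is exactly what slaweq agreed to verify.

    # Illustrative sketch only; names are hypothetical, not Neutron's real API.
    import time

    class InstanceIdCache:
        """Cache port -> instance_id lookups for a short time.

        The expensive_lookup callable stands in for the RPC call that the
        metadata agent currently makes to neutron-server on every request.
        """

        def __init__(self, expensive_lookup, ttl=30):
            self._lookup = expensive_lookup
            self._ttl = ttl
            self._entries = {}  # port_id -> (instance_id, expiry timestamp)

        def get(self, port_id):
            now = time.monotonic()
            cached = self._entries.get(port_id)
            if cached and cached[1] > now:
                return cached[0]                     # served locally, no RPC
            instance_id = self._lookup(port_id)      # one RPC per TTL window
            self._entries[port_id] = (instance_id, now + self._ttl)
            return instance_id

The trade-off raised in the discussion applies here too: a plain TTL cache (like oslo.cache) can serve stale data until the entry expires, whereas the OVS-agent-style RPC cache stays current by subscribing to port resource events.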
14:16:43 3) [rfe][ml2] Add a new API that supports cloning a specified security group
14:16:51 #link https://bugs.launchpad.net/neutron/+bug/2025055
14:17:04 I don't know the nickname of Liu Xie
14:17:27 in any case, did you check the proposal?
14:17:42 what is proposed is to have an API to clone SG+rules
14:18:45 any feedback? comment?
14:19:12 not sure how common this use case is
14:19:40 when would one need an SG with the same rules?
14:19:48 In principle I side with Luyong, comment 2
14:20:00 IMO this can be easily scripted using the existing API, and having that in neutron would be overcomplicating things
14:20:01 +1 https://bugs.launchpad.net/neutron/+bug/2025055/comments/2
14:20:13 but that's just my opinion
14:20:28 luyong and I agree
14:20:36 with slaweq I mean
14:20:41 +1 for Liu's comment
14:20:56 and I agree too, the Neutron API should be "atomic"
14:21:33 to me it seemed somewhat possibly related to the default SG template work, i.e. admin wants to define the default set of SG rules? I didn't get a response to my question though
14:21:34 I liked the way Yulong put it: "concise and fundamental"
14:21:58 haleyb, that could help, for sure
14:22:01 haleyb: well noted
14:22:06 and slaweq is working on it
14:22:46 ok, I think we all think the same here, if I'm not wrong
14:23:04 let's vote first
14:23:06 -1
14:23:11 -1
14:23:13 -1
14:23:13 -1
14:23:16 -1
14:23:30 I'll update the LP bug with the feedback provided in this conversation
14:23:31 +1
14:23:39 -1
14:23:40 =1
14:23:41 ahh
14:23:45 I mean (sorry)
14:23:47 +1
14:24:15 mlavalle, +1?
14:24:52 anyway, the RFE is not approved, I'll update the LP today
14:24:55 thanks
14:24:56 he would need a +7
14:25:00 +1
14:25:08 haha
14:25:32 already +3 :)
14:25:36 even +2+W is just kind of +3 in total, so far from +7 :P
14:26:02 think about people reading these logs 10 years later
14:26:15 LOL
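As a side note on the "easily scripted using the existing API" argument above, cloning a security group from outside Neutron can look roughly like the following openstacksdk sketch. It only illustrates the existing API, not the proposed new endpoint; the cloud entry and group names are placeholders.

    # Rough sketch of cloning an SG with the existing API (openstacksdk).
    # Cloud name and security group names are placeholders.
    import openstack
    from openstack import exceptions

    conn = openstack.connect(cloud='mycloud')

    source = conn.network.find_security_group('web-servers', ignore_missing=False)
    clone = conn.network.create_security_group(
        name='web-servers-copy',
        description='Clone of %s' % source.name)

    # Copy every rule from the source group to the new one.
    for rule in conn.network.security_group_rules(security_group_id=source.id):
        try:
            conn.network.create_security_group_rule(
                security_group_id=clone.id,
                direction=rule.direction,
                ethertype=rule.ether_type,
                protocol=rule.protocol,
                port_range_min=rule.port_range_min,
                port_range_max=rule.port_range_max,
                remote_ip_prefix=rule.remote_ip_prefix,
                remote_group_id=rule.remote_group_id)
        except exceptions.ConflictException:
            pass  # e.g. the default egress rules Neutron already added to the clone

The non-atomicity of such a script (the clone can be half-populated if a call fails) is exactly the kind of orchestration the drivers preferred to keep out of the Neutron API.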
14:26:16 ok, let's jump again to the first RFE
14:26:21 1) [RFE] Formalize use of subnet service-type for draining subnets
14:26:27 #link https://bugs.launchpad.net/neutron/+bug/2024921
14:26:35 I'll try to summarize it
14:26:42 but I think you know this proposal
14:26:53 it's just documentation, isn't it?
14:26:58 not really
14:27:05 this one is basically a doc update, right?
14:27:07 IIUC, this rfe is about making officially supported something that already works in Neutron
14:27:18 doc and some testing maybe
14:27:24 we need to add this service type to the IPAM module
14:27:36 in order to avoid it when assigning IPs
14:27:46 so no, it is not only documentation
14:28:08 but we need to agree on the constant to be used as the service type
14:28:32 but apart from this, what do you think about this proposal?
14:28:38 apart from the implementation
14:28:56 sounds reasonable to me
14:29:01 so +1
14:29:04 seems the use case is justified
14:29:11 i thought any "unknown" service type would prevent IPAM from allocating IPs from this subnet. The author said they are already using it like this
14:29:16 yeah, some testing (tempest maybe) is necessary to keep the functionality working
14:29:16 am I missing something?
14:29:37 that's what I understood
14:29:41 same as obondarev
14:29:46 obondarev, yes, but that's the point: not using any random value
14:30:04 got it, fair enough, thanks
14:30:16 +1
14:30:27 but yes, now IPAM will skip any service type that is not null
14:30:35 we can bikeshed on the name later :)
14:30:43 exactly
14:30:54 so +1 from me, makes sense to have this feature
14:31:01 +1
14:31:07 +1, especially to get it documented on how to use it
14:31:39 ok, that was quite productive today!
14:31:43 3 RFEs in 30 mins
14:31:52 I'll update the LP bugs now
14:32:03 anything else you want to bring here?
14:32:26 so thank you all for attending the meeting
14:32:32 and have a nice weekend
14:32:37 #endmeeting
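For context on the draining workflow discussed in the first RFE: the existing behaviour is that IPAM only allocates from a subnet whose service_types match the requesting port's device_owner, so tagging a subnet with a service type that no port will ever match effectively stops new allocations from it while existing ports keep their addresses. A rough openstacksdk sketch of that operator workflow follows; the 'network:drained' string is purely a placeholder, since the exact constant was explicitly left open in the meeting ("we can bikeshed on the name later"), and the cloud and subnet names are placeholders too.

    # Rough sketch of the draining workflow discussed above (openstacksdk).
    # 'network:drained' is a placeholder; standardizing the value is the RFE.
    import openstack

    conn = openstack.connect(cloud='mycloud')

    subnet = conn.network.find_subnet('old-subnet', ignore_missing=False)

    # Setting a service type that no port device_owner will match makes IPAM
    # skip this subnet for new allocations; existing ports are unaffected and
    # can be migrated away at the operator's own pace.
    conn.network.update_subnet(subnet, service_types=['network:drained'])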