14:00:36 <slaweq> #startmeeting neutron_drivers
14:00:37 <openstack> Meeting started Fri Jan 10 14:00:36 2020 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:38 <slaweq> hi
14:00:40 <openstack> The meeting name has been set to 'neutron_drivers'
14:02:43 <amotoki> hi
14:02:59 <slaweq> hi amotoki
14:03:17 <amotoki> it looks like a slower start than usual :)
14:03:19 <slaweq> for now there are only 2 of us, so let's wait for the others
14:03:24 <slaweq> yeah
14:03:31 <mlavalle> hi, sorry for the delay
14:03:38 <slaweq> mlavalle: welcome
14:03:55 <slaweq> let's wait 2 more minutes for haleyb and yamamoto, maybe they will join too
14:04:02 <mlavalle> ok
14:04:07 <amotoki> no worries. it seems we are recovering from the holiday weeks
14:04:12 <haleyb> hi, was sitting here doing reviews
14:04:23 <slaweq> hi haleyb :)
14:05:35 <slaweq> ok, let's start as we have quorum already
14:05:47 <slaweq> welcome back to the drivers meetings in 2020 :)
14:05:49 <slaweq> #topic RFEs
14:05:59 <slaweq> we have couple of RFEs to discuss today
14:06:08 <slaweq> let's start with an easy one (I hope)
14:06:16 <slaweq> https://bugs.launchpad.net/neutron/+bug/1843165
14:06:16 <openstack> Launchpad bug 1843165 in neutron "RFE: Adding support for direct ports with qos in ovs" [Wishlist,In progress] - Assigned to waleed mousa (waleedm)
14:06:29 <slaweq> this one seems pretty simple, a patch has already been proposed as well
14:06:37 <slaweq> and amotoki even +2'ed this patch already
14:06:55 <ralonsoh> (hi, sorry for being late)
14:07:05 <amotoki> I forgot to check the RFE status :-(
14:07:21 <slaweq> amotoki: no problem :)
14:07:27 <slaweq> hi ralonsoh
14:07:42 <slaweq> so I'm ok with this proposal and this patch also
14:08:04 <slaweq> patch is here: https://review.opendev.org/#/c/611605/
14:08:56 <njohnston> o/
14:09:28 <mlavalle> I am also ok with it. Just +2ed the patch
14:09:35 <ralonsoh> is the QoS API for offloaded ports the same as for regular OVS ports?
14:09:52 <slaweq> maybe we should add some new sanity checks to check ovs and kernel version
14:10:36 <slaweq> ralonsoh: from https://github.com/openvswitch/ovs/commit/e7f6ba220e10c0b560da097185514b6e33e2dc71 seems that it's the same
14:10:47 <ralonsoh> yes, I was reading it
14:10:50 <ralonsoh> perfect for me!
14:10:51 <slaweq> You need to set ingress_policing_rate
14:11:00 <ralonsoh> as regular OVS ports
14:11:11 * slaweq will be back in 1 minute
14:11:16 <ralonsoh> (actually it's not qos, but policing)
14:11:25 <ralonsoh> but it's the same as with normal OVS ports now
14:12:03 <ralonsoh> (I'll try to ask in my company for some HW with offload capabilities, to check this)
14:12:18 <amotoki> It looks like OVS provides the same abstraction for hw-offload ports
14:12:28 <amotoki> so neutron support looks simple.
14:12:36 <ralonsoh> sure!
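For context, a minimal sketch (not from the meeting or the patch itself) of what "the same as regular OVS ports" means here: OVS ingress rate limiting (policing) is configured through the ingress_policing_rate and ingress_policing_burst columns of the Interface table, and per the linked OVS commit the same columns apply to hw-offloaded (direct) ports. The interface name below is a placeholder.

```python
# Sketch only: set OVS ingress policing on an interface via ovs-vsctl,
# the same mechanism used for regular OVS ports.
import subprocess


def set_ingress_policing(iface: str, rate_kbps: int, burst_kbits: int) -> None:
    """Apply ingress rate limiting (policing) to an OVS interface."""
    subprocess.run(
        ["ovs-vsctl", "set", "interface", iface,
         f"ingress_policing_rate={rate_kbps}",
         f"ingress_policing_burst={burst_kbits}"],
        check=True,
    )


# e.g. limit ingress to ~10 Mbps with a 1 Mbit burst on a placeholder
# representor interface name
set_ingress_policing("eth0_0", rate_kbps=10000, burst_kbits=1000)
```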
14:13:22 <slaweq> ok, I'm back
14:13:51 <slaweq> amotoki: yes, from neutron pov it looks very simple
14:14:21 <slaweq> so I think we can approve this rfe, or do haleyb or amotoki have any concerns?
14:14:30 <haleyb> yes, +1 from me
14:14:41 <amotoki> I'm fine with this proposal. The ovs version needs to be clarified.
14:15:00 <slaweq> ok, so I will approve it
14:15:27 <slaweq> amotoki: where would You like the ovs version to be clarified? in the docs somewhere?
14:15:50 <amotoki> sanity check or document would work
14:16:03 <slaweq> ahh, ok
14:16:21 <slaweq> I was thinking that You have something else in mind
14:16:38 <slaweq> I will add a note about that in the rfe and in the patch review after the meeting
14:16:44 <amotoki> cool
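A rough illustration of the kind of sanity check amotoki suggests (not the actual neutron check, and the minimum version below is a placeholder, not a confirmed requirement): parse the installed OVS version and compare it against the minimum needed for policing on offloaded ports.

```python
# Sketch only: compare the installed Open vSwitch version against an
# assumed minimum before enabling QoS on hw-offloaded ports.
import re
import subprocess

MIN_OVS_VERSION = (2, 12)  # placeholder assumption, not a verified minimum


def ovs_version() -> tuple:
    out = subprocess.run(["ovs-vsctl", "--version"],
                         capture_output=True, text=True, check=True).stdout
    # first line looks like: "ovs-vsctl (Open vSwitch) 2.12.0"
    match = re.search(r"(\d+)\.(\d+)", out.splitlines()[0])
    if not match:
        raise RuntimeError("could not parse ovs-vsctl --version output")
    return tuple(int(x) for x in match.groups())


if ovs_version() < MIN_OVS_VERSION:
    print("Open vSwitch is too old for QoS on hw-offloaded ports")
```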
14:17:23 <slaweq> thx, so one is done
14:17:25 <slaweq> :)
14:17:27 <slaweq> next one
14:17:36 <slaweq> https://bugs.launchpad.net/neutron/+bug/1855854
14:17:36 <openstack> Launchpad bug 1855854 in neutron "[RFE] Dynamic DHCP allocation pool" [Wishlist,New]
14:17:53 <slaweq> this one is a bit long so I will give You some time to read it :)
14:22:33 <njohnston_> Interesting
14:24:20 <haleyb> one data point is that the dnsmasq maintainer might be warming to the patch, at least he responded this week with "If I've understood correctly, this looks like it might be a viable solution."
14:24:38 <haleyb> dnsmasq patch that is
14:24:54 <haleyb> i don't see hjensas here
14:26:16 <haleyb> one question i have is will this help support privacy addresses?
14:26:34 <amotoki> if this proposal is implemented, we cannot know IP address(es) assigned to a port if the port uses the dynamic allocation pool, right?
14:26:45 <haleyb> although it is just for dhcpv6, not slaac
14:27:35 <slaweq> amotoki: right, neutron will not have the IP address associated with the port in its db
14:27:49 <haleyb> amotoki: looks like it, could possibly do it with a callback script
14:28:17 <haleyb> a provider might see that as a security issue if they can't look up a port by IP
14:28:33 <amotoki> thanks for clarification
14:28:42 <njohnston_> that also affects other things, like security groups
14:29:33 <slaweq> but reading this part: "In the Ironic example, a dynamic address will be used during instance provisioning. Once Ironic is done provisioning, a port with a static address, i.e a "standard" neutron port with a neutron ipam address allocation etc is bound to the final baremetal instance." it seems that the address would be unknown only for some time, right?
14:29:58 <amotoki> it seems this feature needs to be used for a specific subnet.
14:30:45 <amotoki> we seem to need some assumptions about this feature, as it affects SGs as njohnston_ commented.
14:32:04 <mlavalle> slaweq: yes, it is only during the boot process
14:32:26 <amotoki> I think it is specific to the provisioning network(s) in ironic.
14:32:42 <amotoki> it sounds reasonable for that case.
14:34:17 <slaweq> amotoki: for ironic probably yes (if the dnsmasq patch is not merged soon)
14:34:42 <frickler> maybe this could also be solved by running an additional boot-specific dhcp server?
14:34:46 <njohnston> I agree, with the asterisk that until it is transitioned to a regular neutron port, unpredictable results might occur.
14:37:05 <slaweq> njohnston: I agree with that; if we have this, it should be very well documented to make operators aware of what the goal is and that it shouldn't always be used
14:37:13 <amotoki> frickler: that can work. we can run a DHCP server for a specific purpose in a subnet by specifying enable_dhcp=False in the neutron subnet.
14:38:20 <slaweq> amotoki: sorry but I'm not sure if I understood correctly
14:38:41 <slaweq> You want to run something other than dnsmasq for subnets with disabled dhcp?
14:39:20 <frickler> amotoki: even without setting enable_dhcp=False, if there is a range of addresses not covered by the allocation-pool, it should be possible to add some other dhcp server just handing those out to the pxeboot clients maybe
14:39:42 <ralonsoh> folks, why don't we ask hjensas to propose a solution for this? maybe he has something in mind
14:39:56 <amotoki> I haven't tested it yet, but I think the neutron DHCP server is not launched if enable_dhcp is False, so an extra DHCP server could serve DHCP requests from instances.
14:39:59 <mlavalle> he is even proposing to write a spec
14:40:14 <ralonsoh> spawning a dhcp server in a dhcp=false subnet... is not something desirable
14:41:46 <slaweq> amotoki: a dnsmasq server is spawned per network if there is at least one subnet with enable_dhcp=True
14:42:01 <slaweq> and then the various subnets are added to its config file
14:42:34 <slaweq> and I agree with ralonsoh that it wouldn't be super clear to run a dhcp server (some other server) for subnets with dhcp disabled
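For context, a rough illustration (not neutron's actual code) of the dnsmasq options behind this discussion: today the DHCP agent runs one dnsmasq per network with "static" ranges, so every lease comes from the per-port hosts file that neutron IPAM writes, while the RFE's dynamic pool would hand out leases from a start/end range with no corresponding neutron port (hence the DB lookup and SG concerns above). The subnet, range, and file paths below are placeholders.

```python
# Sketch only: the two styles of dnsmasq --dhcp-range configuration.

# Current neutron-style static allocation: addresses come only from the
# per-port MAC->IP entries in the hosts file written by neutron IPAM.
static_style = [
    "dnsmasq",
    "--dhcp-range=set:subnet-1,10.0.0.0,static,255.255.255.0,86400s",
    "--dhcp-hostsfile=/var/lib/neutron/dhcp/NETWORK_ID/host",  # placeholder path
]

# Dynamic pool as proposed in the RFE: leases in this range are handed out
# on demand and never recorded as neutron ports in the DB.
dynamic_style = [
    "dnsmasq",
    "--dhcp-range=10.0.0.200,10.0.0.250,255.255.255.0,12h",
]
```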
14:42:52 <mlavalle> I think we should request a spec and while it is written and reviewed, see if any of the 3 alternatives listed in note #3 bear fruit. Those can be analyzed in the alternatives section of the spec
14:43:00 <njohnston> +1
14:43:03 <ralonsoh> perfect
14:43:13 <amotoki> slaweq: you're right. I should say 'network' rather than 'subnet'.
14:43:17 <slaweq> mlavalle: I agree with that
14:43:33 <amotoki> +1 for a spec
14:43:38 <slaweq> let's ask for a spec now
14:44:10 <slaweq> ok, so let's move on
14:44:29 <slaweq> I will summarize this discussion in a comment on the rfe after the meeting
14:44:36 <mlavalle> in this case, as a team we should place special focus on alternatives evaluation as part of the review process, IMO
14:45:10 <slaweq> next one
14:45:13 <slaweq> https://bugs.launchpad.net/neutron/+bug/1759790
14:45:13 <openstack> Launchpad bug 1759790 in neutron "[RFE] metric for the route" [Wishlist,In progress] - Assigned to Bin Lu (369283883-o)
14:45:26 <slaweq> this was also discussed some time ago
14:46:55 <slaweq> based on the last comments it seems that this rfe is only about an API change, and that different backends will be able to implement this (or not) on their own
14:47:30 <njohnston> previous discussion: http://eavesdrop.openstack.org/meetings/neutron_drivers/2019/neutron_drivers.2019-10-11-14.00.log.html#l-13
14:47:51 <slaweq> thx njohnston
14:48:58 <ralonsoh> slaweq, can you ping xiaodan again in the bug? just to clarify this
14:49:06 <ralonsoh> but seems that you are right
14:49:27 <amotoki> there is a reply from the RFE author, it said "just add metric for user configuration and record info in neutron db". I am not sure how metric information should be applied to a neutron router.
14:49:41 <ralonsoh> the parameter will be there in the route, and the backend will use it
14:49:52 <ralonsoh> amotoki, perfect
14:51:33 <slaweq> yes, so this rfe is not about implementing it in the in-tree L3 agent
14:52:03 <slaweq> only about adding a new API extension and a parameter to the router's extra route entries
14:52:21 <slaweq> which is fine for me in general
14:52:24 <ralonsoh> I think so, yes
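A hypothetical sketch of the API change being discussed (the "metric" key is not an existing neutron API attribute, it is only an illustration of the proposal): today's extraroute extension takes destination/nexthop pairs on a router update, and the RFE would add a per-entry metric.

```python
# Sketch only: possible body for PUT /v2.0/routers/{router_id} if the
# proposed extension were added. Addresses are placeholders.
router_update_body = {
    "router": {
        "routes": [
            {"destination": "192.168.100.0/24",
             "nexthop": "10.0.0.11",
             "metric": 100},   # proposed new attribute, not yet in neutron
            {"destination": "192.168.101.0/24",
             "nexthop": "10.0.0.12",
             "metric": 200},
        ]
    }
}
```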
14:53:08 <slaweq> the only concern I have is that it may be confusing for users if someone sets various metrics in the api and they are not applied on the backend
14:53:25 <amotoki> but, don't we need to clarify how the added parameter is interpreted in the reference implementation?
14:54:29 <amotoki> slaweq says the same thing
14:56:04 <slaweq> amotoki: my understanding was that, if we implemented that in the reference implementation, it would be something like "ip route add ... metric X" for each extra route entry in the router's namespace
14:56:59 <amotoki> it is what we expect :)
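A rough sketch of what slaweq describes for a possible reference implementation (not an agreed design): the L3 agent would install each extra route with its metric inside the router namespace. The router id, prefix, and nexthop below are placeholders.

```python
# Sketch only: add an extra route with a metric inside a router namespace.
import subprocess


def add_extra_route(router_id: str, destination: str,
                    nexthop: str, metric: int) -> None:
    """Equivalent of `ip route add <dst> via <gw> metric <N>` in qrouter-<id>."""
    subprocess.run(
        ["ip", "netns", "exec", f"qrouter-{router_id}",
         "ip", "route", "add", destination, "via", nexthop,
         "metric", str(metric)],
        check=True,
    )


add_extra_route("ROUTER_UUID", "192.168.100.0/24", "10.0.0.11", 100)
```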
14:57:57 <slaweq> so maybe we should ask there that the implementation for the reference backend also be part of this rfe?
14:58:17 <amotoki> agree
14:58:27 <slaweq> and whether the author of this proposal is going to implement that
14:58:39 <mlavalle> or the opposite: if it is not implemented in the reference implementation, do it completely out of tree
14:58:56 <slaweq> mlavalle: +1
14:59:05 <slaweq> I will add such a comment to summarize this discussion there
14:59:18 <slaweq> haleyb: any last minute thoughts on that?
14:59:44 <mlavalle> if the functionality is going to be out of tree, what do they need us for?
15:00:12 <haleyb> slaweq: no extra thoughts, i'm fine with it
15:00:15 <slaweq> mlavalle: probably nothing, but I will ask about that in the comment too
15:00:20 <slaweq> haleyb: ok, thx
15:00:24 <amotoki> mlavalle: perhaps we need to ask it together
15:00:25 <slaweq> we are out of time already
15:00:28 <ralonsoh> bye!
15:00:33 <slaweq> thx a lot for attending
15:00:35 <njohnston> o/
15:00:36 <amotoki> thanks!
15:00:37 <slaweq> have a great weekend
15:00:40 <slaweq> #endmeeting