14:00:12 <slaweq> #startmeeting neutron_drivers
14:00:13 <openstack> Meeting started Fri May 15 14:00:12 2020 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:16 <openstack> The meeting name has been set to 'neutron_drivers'
14:00:19 <njohnston> o/
14:00:23 <mlavalle> o/
14:00:24 <slaweq> Good Morning/Afternoon/Evening :)
14:00:25 <amotoki> hi
14:00:31 <jamesdenton> hi
14:01:15 <slaweq> we are waiting for haleyb and yamamoto still
14:01:21 <haleyb> hi
14:01:24 <njohnston> https://media.giphy.com/media/K8KtAeRA1w44E/source.gif
14:01:41 <slaweq> njohnston: LOL
14:01:49 <slaweq> good morning is missing there :P
14:02:06 <njohnston> he only gets 80% code coverage for that
14:02:12 <slaweq> :D
14:03:17 <ralonsoh> hi
14:05:18 <slaweq> ok, lets start
14:05:29 <slaweq> maybe yamamoto will join us in the meantime
14:05:33 <slaweq> #topic RFEs
14:05:44 <slaweq> we have one rfe for today
14:05:48 <slaweq> https://bugs.launchpad.net/neutron/+bug/1877301
14:05:48 <openstack> Launchpad bug 1877301 in neutron "[RFE] L3 Router support ndp proxy" [Wishlist,Confirmed]
14:06:16 <slaweq> haleyb already triaged it - thx :)
14:06:34 <njohnston> yamamoto sent email that he would be unable to make it but he left a question in the LP
14:06:52 <slaweq> thx njohnston I probably missed this email somehow
14:12:45 <haleyb> so i'm guessing people are still reading my long notes in the bug? :)
14:12:51 <mlavalle> I liked Brian's contrast of L3 vs. L2. Thanks, nice way to put it
14:13:15 <mlavalle> haleyb's explanation is very good
14:13:32 <haleyb> mlavalle: yeah, about half way through i started to remember about router caches and memory consumption
14:14:05 <haleyb> but IPv6 ND is actually over L3, even if it doesn't look like it
14:14:14 <mlavalle> so the upstream router thinks it is a neighboring address
14:14:43 <haleyb> right, similar to proxy ARP
14:15:42 <haleyb> the potential number of addresses is just much larger given the IPv6 address space
14:15:46 <slaweq> haleyb: and this proxying can be done by the kernel on the network node, so we don't need any new software like e.g. radvd for RA
14:15:58 <slaweq> or is there any additional software required to do that?
14:16:42 <slaweq> ok, I see something like "ip -6 neigh add proxy 2001:400:1234:567:ffff::8 dev qg-733bd76b-62" in description
14:16:45 <mlavalle> I think it is all kernel
14:16:51 <haleyb> slaweq: that's a good question.  with proxy ARP there is usually an ARP transmit, I'm not sure there's an equivalent utility for IPv6 ND
14:16:52 <slaweq> so should be possible without any additional tools
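For context on the kernel approach discussed above, here is a minimal sketch of what the agent-side plumbing could look like; it only wraps the "ip -6 neigh add proxy ... dev qg-..." command quoted from the bug. The helper names, the namespace handling and the proxy_ndp sysctl step are assumptions for illustration, not existing Neutron code.

    # Minimal sketch, assuming the kernel proxy-NDP approach from the bug
    # description. Helper names and the namespace argument are hypothetical,
    # not Neutron's actual agent API.
    import subprocess

    def _run(cmd, namespace=None):
        # Neutron routers live in network namespaces (e.g. qrouter-<uuid>),
        # so the command may need to be wrapped with "ip netns exec".
        if namespace:
            cmd = ['ip', 'netns', 'exec', namespace] + cmd
        subprocess.run(cmd, check=True)

    def enable_proxy_ndp(device, namespace=None):
        # Proxy entries are only answered once the proxy_ndp sysctl is on.
        _run(['sysctl', '-w', 'net.ipv6.conf.%s.proxy_ndp=1' % device],
             namespace=namespace)

    def add_ndp_proxy(address, device, namespace=None):
        # Answer Neighbor Solicitations for `address` on `device`.
        _run(['ip', '-6', 'neigh', 'add', 'proxy', address, 'dev', device],
             namespace=namespace)

    def del_ndp_proxy(address, device, namespace=None):
        # Stop proxying for `address` on `device`.
        _run(['ip', '-6', 'neigh', 'del', 'proxy', address, 'dev', device],
             namespace=namespace)

    # Example (root required), reusing the values from the bug description;
    # the qrouter name is a placeholder:
    # enable_proxy_ndp('qg-733bd76b-62', namespace='qrouter-<uuid>')
    # add_ndp_proxy('2001:400:1234:567:ffff::8', 'qg-733bd76b-62',
    #               namespace='qrouter-<uuid>')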
14:17:43 <haleyb> so if an instance is migrated there is still the possibility of lost packets, so it's a good question
14:18:09 <mlavalle> and yeah, when triaging this RFE during my bug deputy week, I read through https://tools.ietf.org/html/rfc4389 as yamamoto suggested
14:18:40 <mlavalle> but it wasn't as crystal clear as haleyb's explanation
14:18:42 <haleyb> although that RFC is Experimental, most vendors have implemented some form of IPv6 proxy ND
14:19:29 <haleyb> mlavalle: I've had IPv6 on the brain since 1995
14:19:47 <mlavalle> it shows ;-)
14:20:08 <haleyb> or was it 1996 i forget :)
14:20:44 <mlavalle> beyond 20 years, at our age, we probably want to forget... at least I do LOL
14:21:53 <haleyb> https://www.ipv4.sixxs.net/archive/docs/ipv6-history/6bone-mailinglist/1996-September/000232.html
14:21:54 <mlavalle> and yes IPv6 ND is over L3, but the effect of it goes to L2
14:22:34 <haleyb> anyways...
14:23:14 <slaweq> ok, but getting back to nowadays :P
14:23:37 <haleyb> slaweq: you raised a good point about the Neighbor Advertisement, does the router need to send one when it adds the proxy entry?
14:24:44 <slaweq> haleyb: and another question which I have - how will it work with L3HA routers during failover?
14:24:44 <haleyb> it's maybe not necessary, but without it there are edge cases like migration taking 30 seconds, etc
14:25:14 <haleyb> slaweq: seems like the same problem
14:25:20 <slaweq> haleyb: yes
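One way the Neighbor Advertisement question could be handled is to send an unsolicited NA from the gateway port whenever a proxy entry is (re)installed, roughly the IPv6 counterpart of the gratuitous ARPs Neutron already sends for new IPv4 addresses. A hedged sketch with scapy follows; the addresses, MAC and interface below are made-up example values and this is not a statement of how the spec will solve it.

    # Sketch only: send an unsolicited Neighbor Advertisement so the upstream
    # router's neighbor cache is refreshed after a proxy entry is added
    # (e.g. after a migration or an L3HA failover).
    from scapy.all import (Ether, IPv6, ICMPv6ND_NA, ICMPv6NDOptDstLLAddr,
                           sendp)

    def send_unsolicited_na(proxied_addr, gw_src, gw_mac, gw_iface):
        # gw_src: an address configured on the gateway port (link-local works);
        # RFC 4861 wants the NA source to be an address of the sending iface.
        pkt = (Ether(src=gw_mac, dst='33:33:00:00:00:01') /   # all-nodes mcast MAC
               IPv6(src=gw_src, dst='ff02::1') /              # all-nodes multicast
               ICMPv6ND_NA(tgt=proxied_addr, R=1, S=0, O=1) / # unsolicited, override
               ICMPv6NDOptDstLLAddr(lladdr=gw_mac))           # target link-layer addr
        sendp(pkt, iface=gw_iface)  # needs root, run inside the router namespace

    # Hypothetical example values:
    # send_unsolicited_na('2001:400:1234:567:ffff::8',
    #                     'fe80::f816:3eff:fe00:1',
    #                     'fa:16:3e:00:00:01', 'qg-733bd76b-62')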
14:26:56 <haleyb> there is a daemon for linux, npd6, but i don't know how it works or if it applies
14:27:43 <slaweq> haleyb: are You talking about https://github.com/npd6/npd6 ?
14:27:52 <slaweq> if so, it seems like dead project
14:28:07 <haleyb> i was only looking at packages
14:28:12 <slaweq> :)
14:29:27 <haleyb> https://github.com/DanielAdolfsson/ndppd
14:29:48 <slaweq> thx, this one looks better indeed
14:30:24 <haleyb> i thought we had looked at code to send proxy ND packets a while ago, thought maybe there was a way using packet generator code
14:30:35 <slaweq> I think that those are in fact some implementation details which can be tested during implementation
14:31:05 <haleyb> but i guess we need to add this to the spec since something is needed to support HA
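To make the HA point concrete, a purely hypothetical sketch of the kind of hook the spec would have to define: when an L3HA router transitions to primary, the proxy entries need to be reinstalled on the newly active node and unsolicited NAs sent so the upstream neighbor cache follows the failover. It reuses the helper sketches above; none of this is existing Neutron code.

    # Hypothetical failover hook, building on the sketches above. In real
    # Neutron this would presumably hang off the keepalived state-change
    # handling; the function and parameter names are invented for illustration.
    def on_router_becomes_primary(proxied_addresses, gw_device, gw_src,
                                  gw_mac, namespace):
        enable_proxy_ndp(gw_device, namespace=namespace)
        for addr in proxied_addresses:
            add_ndp_proxy(addr, gw_device, namespace=namespace)
            send_unsolicited_na(addr, gw_src, gw_mac, gw_device)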
14:32:00 <slaweq> hmm, now I see one more thing which confuses me a bit
14:32:06 <slaweq> in description there is sentence:
14:32:08 <slaweq> In external router we just need to add a static direct route, like this: 'ip route add 2001:400:1234:567:ffff::/80 dev fake-gw-port'.
14:32:25 <slaweq> does it mean that we will need to interact somehow with some external routers?
14:32:47 <slaweq> or what? haleyb do You know maybe?
14:32:58 <haleyb> was that in the bug?
14:33:03 <mlavalle> yes
14:33:10 <slaweq> in the description
14:35:02 <haleyb> hmm, i don't know if they meant the upstream router as the command was 'ip route add...'
14:35:33 <haleyb> i think the point is to not have to rely on the upstream router doing anything but learn neighbors and forward packets
14:35:45 <mlavalle> I agree
14:36:43 <haleyb> he would have to write up a spec to make it clear
14:37:02 <mlavalle> yes
14:37:28 <amotoki> in case of L3HA, I think we can assume the two neutron routers and the upstream router belong to the same subnet, so I don't think the upstream router needs to reconfigure its routing entry (via fake-gw-port)
14:37:33 <slaweq> I agree that relying on the upstream router wouldn't be good in that case
14:37:49 <amotoki> agree. it needs to be clarified in the spec.
14:37:49 <njohnston> +1
14:38:06 <haleyb> i know i played with this years ago based on the patch I linked, in a production environment, and it did work, but it was just a manual test, and PD (prefix delegation) came along...
14:38:31 <mlavalle> so it made it not worthwhile
14:39:09 <haleyb> right, i jumped on the PD train so to speak since it took care of the entire /64
14:40:05 <slaweq> so should we approve this rfe with a spec as the next step to discuss technical details of the implementation? or should we wait for the spec before we approve (or not) the rfe? what do You think?
14:40:32 <mlavalle> yes, in my opinion experimenting is good for Neutron and OpenStack in general
14:40:41 <slaweq> IMO option 1 is ok but I want to know what others think :)
14:40:45 <mlavalle> and I like this experiment
14:41:03 <mlavalle> I'm even willing to help
14:41:21 <mlavalle> to write the spec and implement a PoC
14:41:43 <amotoki> sounds good to approve the RFE with spec and try PoC
14:42:24 <haleyb> yes, i'm ok given we write a good spec outlining the issues we've raised here, so someone can go look at solving them
14:42:39 <mlavalle> yeap, experiment
14:43:37 <slaweq> njohnston: any thoughts?
14:44:37 <njohnston> This aspect of IPv6 networking is not my strong suit but I am very interested to see how it works out
14:45:40 <slaweq> ok, so I will mark this rfe as approved, with a comment that we first need a spec which will answer the questions we had here
14:45:57 <mlavalle> cool
14:46:33 <slaweq> also thx mlavalle for volunteering to help with this one :) I will include that in my summary also :)
14:46:46 <mlavalle> slaweq: cool, thanks
14:47:02 <slaweq> ok, so I think we are good for today
14:47:16 <slaweq> anyone wants to discuss anything else today?
14:47:21 <amotoki> what I could do today was just to follow the discussion and try to understand it well.... thanks all!
14:48:59 <haleyb> now we can get back to fixing the gate :-o
14:49:50 <slaweq> ok, thx a lot for attending the meeting, have a great day and weekend :)
14:49:55 <slaweq> see You all next week
14:49:58 <mlavalle> you too
14:50:00 <mlavalle> o/
14:50:01 <ralonsoh> bye
14:50:01 <slaweq> #endmeeting