22:03:09 <kevinbenton> #startmeeting neutron_drivers
22:03:10 <openstack> Meeting started Thu Aug 31 22:03:09 2017 UTC and is due to finish in 60 minutes.  The chair is kevinbenton. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:03:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:03:13 <openstack> The meeting name has been set to 'neutron_drivers'
22:03:53 <kevinbenton> #topic announcements
22:04:06 <kevinbenton> armax has pushed the patch to open the queens specs directory
22:04:12 <kevinbenton> armax: do you have the link handy?
22:04:42 <armax> hang on
22:04:49 <armax> https://review.openstack.org/#/c/498953/
22:06:00 <kevinbenton> armax: and a link to the specs build to show what went into pike? :)
22:06:39 <armax> sure
22:06:48 <armax> http://docs-draft.openstack.org/53/498953/2/check/gate-neutron-specs-docs-ubuntu-xenial/7f40f42//doc/build/html/
22:07:15 <ihrachys> hm. doesn't it show same things you moved to backlog in Pike?
22:07:43 <ihrachys> ah wait, I misread
22:07:46 <armax> it’s at the bottom
22:07:49 <kevinbenton> well the things under Pike in that build there should be the specs that landed in pike
22:08:00 <armax> http://docs-draft.openstack.org/53/498953/2/check/gate-neutron-specs-docs-ubuntu-xenial/7f40f42//doc/build/html/#backlog
22:08:03 <ihrachys> is classification framework done? why is it there?
22:08:19 <armax> ihrachys: that’s the point of the patch, it needs reviewing
22:08:26 <armax> I don’t believe half of the stuff got done
22:08:38 <kevinbenton> yes, so please review that patch and the stuff there so we can get other stuff carried forward
22:08:43 <ihrachys> oh ok, I thought we rubber stamp :p
22:08:48 <kevinbenton> nope
22:09:05 <kevinbenton> I don't think egress is done either, but it would be good to confirm with who is working on things
22:09:43 <ihrachys> other features are good I believe
22:09:51 <armax> OK
22:11:40 <kevinbenton> ihrachys: is the gate ok now?
22:12:01 <ihrachys> I rechecked the grenade revert just now
22:12:09 <ihrachys> because of functional failure (unrelated)
22:12:21 <kevinbenton> ok
22:12:33 <ihrachys> once it's in, we'll also need https://review.openstack.org/#/c/499585/2
22:12:42 <ihrachys> and probably https://review.openstack.org/#/c/499725/3 when ready
22:12:47 <ihrachys> and of course, all that is to backport
22:12:55 <kevinbenton> ack, thanks
22:13:11 <kevinbenton> the PTG is coming up quickly
22:13:22 <kevinbenton> has everyone who is going to attend added their name to the etherpad?
22:13:39 <ihrachys> remind me of the link?
22:13:45 <kevinbenton> #link https://etherpad.openstack.org/p/neutron-queens-ptg
22:14:25 <ihrachys> RH list is complete
22:14:46 <kevinbenton> ok, cool. I will put the schedule together next week
22:15:50 <kevinbenton> anyone else have any announcements before we start looking at some RFEs?
22:16:07 <ihrachys> I won't join next week
22:16:20 <ihrachys> I have some conf here in SF Thu-Fri
22:16:32 <kevinbenton> ack
22:16:52 <armax> which one?
22:17:09 <ihrachys> http://www.opendevconf.com/
22:17:18 <armax> get out of town!
22:17:26 <armax> :)
22:17:35 <ihrachys> yeah it's pretty hot
22:17:38 <mlavalle> ihrachys: enjoy!
22:17:59 <kevinbenton> ok
22:18:02 <kevinbenton> #topic RFEs
22:18:17 <kevinbenton> mlavalle, you mentioned that some folks wanted to discuss some specific ones before we dig into the list
22:18:38 <mlavalle> I think alisanhaji and davidsha
22:18:57 <mlavalle> I know alisanhaji has an RFE for sure
22:18:58 <alisanhaji> Hi, yes, this one https://bugs.launchpad.net/neutron/+bug/1692951
22:18:59 <openstack> Launchpad bug 1692951 in neutron "[RFE] DSCP mark on the outer header" [Wishlist,Confirmed]
22:19:46 <alisanhaji> We discussed it in the QoS meeting and they said it should be discussed here
22:20:07 <ihrachys> ok. question: is there a reason not to reuse the existing rule?
22:20:56 <kevinbenton> +1, why do we need all of the API work to enable overrides?
22:21:05 <kevinbenton> i can see this being much simpler if we just carry inner to outer
22:21:06 <alisanhaji> Only so that we can have outer without inner, but I think we could do it anyway
22:21:25 <ihrachys> I would say, if you want DSCP preference, you probably want it across all network types transparently
22:21:58 <alisanhaji> kevinbenton: yes, the OVS ports can be created with inherit option, or updated
22:22:11 <kevinbenton> The problem here is that the DSCP mark for the underlay should be completely transparent to the tenants
22:22:16 <amotoki> copying inner dscp to outer sounds reasonable.
22:22:36 <amotoki> alisanhaji: can we update existing vxlan ovs port?
22:23:20 <alisanhaji> yes, for OVS a simple set can work. For LB it doesn't if it is not created that way AFAIK
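(For context, a sketch of the "simple set" being discussed: OVS tunnel ports accept `options:tos=inherit`, which copies the inner packet's TOS/DSCP bits into the outer encapsulation header. The bridge, port name, and remote IP below are illustrative, not from the meeting.)

```shell
# Create a VXLAN tunnel port whose outer header inherits the inner
# packet's TOS/DSCP bits (names and addresses are illustrative):
ovs-vsctl add-port br-tun vxlan-0a000001 -- \
    set interface vxlan-0a000001 type=vxlan \
    options:remote_ip=10.0.0.1 options:tos=inherit

# Or update an existing tunnel port in place with a simple set:
ovs-vsctl set interface vxlan-0a000001 options:tos=inherit
```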
22:23:31 <kevinbenton> (so adding a special qos flag for tenants to set when they don't even know the infra or encap type isn't good)
22:23:53 <kevinbenton> alisanhaji: ok, so would you be okay trimming out basically all of the API stuff and just have inherit automatically happen?
22:24:06 <kevinbenton> or does that prevent the critical use cases?
22:24:41 <alisanhaji> kevinbenton: sorry?
22:24:56 <kevinbenton> alisanhaji: no new rule types
22:25:13 <kevinbenton> alisanhaji: just automatically inherit dscp marking from the inner encap
22:25:15 <alisanhaji> Well we can extend the old rule
22:25:16 <mlavalle> alisanhaji: not modify the API, just copy the DSCP mark
22:25:18 <alisanhaji> Ah
22:25:43 <alisanhaji> You mean without the tenant involvement?
22:25:49 <mlavalle> right
22:26:23 <kevinbenton> alisanhaji: the issue is that a tenant would have no idea what DSCP marking is appropriate for the underlay
22:26:33 <kevinbenton> so I don't see how they would know what to put in these rules
22:26:47 <armax> if we copy blindly from inner to outer is there a possibility that this would go against settings as specified by the admin?
22:27:03 <armax> I mean the network admin, not even the openstack admin
22:27:45 <amotoki> so in most cases qos rules are defined by the openstack admin (or someone who knows the network infra) and shared with tenant users
22:28:02 <ihrachys> in general, qos policies are managed by openstack admin. they will need to negotiate with network admin if using dscp is somehow a bad thing for the network.
22:28:15 <ihrachys> if there is concern that it breaks something, maybe a conf option will do (enabled by default)
22:28:26 <alisanhaji> Can't the openstack admin know what rules suit the underlay network?
22:28:34 <mlavalle> yeap, that is what I was thinking
22:28:35 <kevinbenton> ok, well then maybe this should be an admin-only API
22:28:50 <ihrachys> kevinbenton, qos policies are admin-only
22:28:54 <ihrachys> unless you modify policy
22:29:08 <alisanhaji> yes they are admin-only by default
22:29:15 <amotoki> the DSCP rule itself is intended to be admin-only
22:29:17 <kevinbenton> ihrachys: ah, sorry
22:29:27 <kevinbenton> i was thinking users could create policies on their own networks
22:29:39 <amotoki> if operators allow tenants to create it, they should be careful
22:30:15 <armax> we should definitely preserve a way to keep the existing behavior where the inner rule doesn’t propagate to the outer header
22:30:25 <mlavalle> ++
22:30:28 <armax> though that doesn’t seem that interesting
22:30:45 <armax> in the sense that it might be seen as an oversight
22:30:52 <amotoki> so we need a conf option for this, right?
22:31:03 <armax> what happens with vlan as seg type?
22:31:17 <kevinbenton> well maybe a new API does make sense if qos is already admin-only...
22:32:10 <amotoki> do we need a new API? QoS API is generally neutral to which backend is used
22:32:29 <alisanhaji> armax: There is an RFE on VLAN 802.1Q tagging: https://bugs.launchpad.net/neutron/+bug/1505631
22:32:30 <openstack> Launchpad bug 1505631 in neutron "[RFE] QoS VLAN 802.1p Support" [Wishlist,Confirmed] - Assigned to Reedip (reedip-banerjee)
22:32:32 <kevinbenton> amotoki: well new rule type i mean as suggested in RFE
22:32:44 <davidsha> As in adding a flag to the existing rule --propagate=True or such?
22:33:40 <kevinbenton> continuing on armax's question, is this relevant for vlan/flat?
22:34:05 <armax> memory is slowly coming back
22:34:06 <amotoki> there is no outer IP header for vlan/flat, so it is unrelated
22:34:07 <alisanhaji> For VLAN, we need to be able to tag with 802.1Q first
22:34:33 <ihrachys> that's a completely different feature
22:34:37 <alisanhaji> and have a suitable mapping between DSCP and 802.1Q
22:34:44 <amotoki> I think syncing dscp mark to 802.1q priority is a different thing
22:35:02 <alisanhaji> Well, it's priority tagging
22:36:07 <kevinbenton> we probably could get by with a config option to disable/enable inheritance on the agent
22:36:49 <alisanhaji> kevinbenton: Yes we could. There is already a config option in LB to set TOS
22:37:10 <alisanhaji> I don't know if it works with inherit
22:37:40 <kevinbenton> shall we start with that and get all of the implementation kinks worked out in the agents and then reconsider API-controlled behavior later?
22:37:57 <amotoki> +1 to a config option. it is not clear to me what the real value is of having a different qos rule vs. a flag in the dscp-marking rule
22:38:05 <kevinbenton> because this lands in the intersection of rules and network encap types
22:38:18 <kevinbenton> it doesn't cleanly fit as part of a qos rule by itself
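(A sketch of what the agent-side knobs under discussion could look like. The LinuxBridge `tos` option under `[vxlan]` is the existing one alisanhaji refers to; the `dscp_inherit` flag and its section are purely hypothetical, since the name and placement were not decided in this meeting.)

```ini
# linuxbridge_agent.ini / ml2_conf.ini (illustrative)

[vxlan]
# Existing LinuxBridge agent option: fixed TOS value for the
# outer VXLAN header.
tos = 0

[agent]
# Hypothetical knob from this discussion: copy the inner packet's
# DSCP mark to the outer encapsulation header.
dscp_inherit = true
```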
22:38:23 <alisanhaji> That's a good start
22:38:45 <ihrachys> + works for me
22:38:58 <mlavalle> +1
22:39:50 <kevinbenton> armax: are you okay with that as a start? you said it didn't seem interesting :)
22:40:52 <armax> I meant the dscp tag only on the inner packet
22:41:36 <kevinbenton> armax: ok, then are you okay with starting out with an agent config to disable/enable inheritance?
22:42:04 <ihrachys> btw do we even need that inheritance knob? if a dscp rule is created, the underlay at least supports it for flat/vlan, no?
22:42:08 <ihrachys> why not tunneled?
22:42:17 <armax> did I drop?
22:42:22 <mlavalle> you did
22:42:31 <ihrachys> <kevinbenton> armax: ok, then are you okay with starting out with an agent config to disable/enable inheritance?
22:42:53 <kevinbenton> ihrachys: the knob might not even be necessary, we can figure that out in the review
22:43:09 <armax> meant the dscp tag only on the inner packet
22:44:00 <ihrachys> ok, so we agree that it's useful to have it on outer; we need to reword RFE; we will figure out config knob need during review.
22:44:05 <ihrachys> is it a good tl;dr?
22:44:12 <kevinbenton> ok, then i'll mark this as rfe-approved and put comments capturing the changes
22:44:15 <kevinbenton> ihrachys: +1
22:44:18 <armax> sounds good to me
22:44:30 <kevinbenton> excellent
22:44:31 <alisanhaji> great
22:44:36 <kevinbenton> mlavalle: was there another?
22:44:57 <mlavalle> maybe davidsha wants to pitch something?
22:45:09 <davidsha> This RFE: https://bugs.launchpad.net/neutron/+bug/1705536
22:45:11 <openstack> Launchpad bug 1705536 in neutron "[RFE] L3-agent agent-mode dvr bridge." [Wishlist,Triaged]
22:45:30 <armax> no more DVR features!
22:45:37 <armax> go away! :)
22:45:40 <davidsha> :P
22:45:59 <armax> davidsha: no offense :)
22:46:15 <davidsha> armax: None taken :)
22:46:44 <davidsha> I have a basic PoC just showing it working with IPv4, I just have to enable floating IP
22:46:46 <davidsha> https://review.openstack.org/#/c/472289/
22:47:02 <ihrachys> "compatible with DPDK and Windows"
22:47:09 * ihrachys runs away screaming
22:48:33 <armax> actually the impact doesn’t look too bad
22:48:36 <armax> I am intrigued
22:48:40 <armax> ihrachys: come back!!
22:50:18 <armax> davidsha: have you given any thought to the comment I made on the RFE?
22:50:35 <armax> like, how to handle migration if this is something we’d even want to consider?
22:50:47 <armax> and how would this L3 mode work with HA and VRRP
22:51:37 <davidsha> armax: I'd have to talk more with the L3 team about it, mlavalle would it be ok to discuss this next week?
22:51:52 <mlavalle> davidsha: of course
22:51:55 <ihrachys> I know I am a PITA, but I honestly think it's not the right time to touch this code. we haven't managed to keep it from breaking into pieces every time we touch it. we never delivered on making DVR+HA work and voting in the gate, and I think stabilization and bug fixing is the thing to focus on on the L3 side, not rearchitecture. that's my 2 cents.
22:52:00 <mlavalle> let's put it in the agenda
22:52:26 <armax> ihrachys: you’re stealing my lines
22:52:28 <davidsha> mlavalle: thanks
22:52:31 <armax> just saying
22:52:34 <armax> :)
22:53:10 <armax> that said, the way davidsha seems to have approached the problem looks like it’s nearly a 100% non-overlap
22:53:20 <armax> now, I am not saying I support this RFE in principle
22:53:40 <kevinbenton> yeah, this approach is nice in that it's not invasive
22:53:45 <armax> but if we managed to bring the non-overlap to 100% or (in reverse logic) 0% overlap
22:54:00 <armax> I can’t see why we could not consider having this available for users to play with...
22:54:03 <armax> so long as..
22:54:05 <ihrachys> it may exist as an experimental feature I guess
22:54:13 <armax> we are clear upfront what the scope of the whole effort is
22:54:17 <ihrachys> that will need to prove itself in gate and whatnot to move to supported
22:54:19 <kevinbenton> maybe we can revisit this one in a couple of weeks after the PTG once we have a better big picture of development for this cycle?
22:54:26 <mlavalle> so we can continue maturing it as a POC, the way it's been done so far
22:54:29 <armax> like no migration required etc
22:54:31 <kevinbenton> (and after the PTSD has worn off from the gate disaster :) )
22:54:41 <ihrachys> if I can have it not enabled/dropped on all my customers, I would be happy
22:55:00 <armax> there’s literally no coverage of the new code from davidsha, am I right?
22:55:15 <armax> ihrachys: I feel you
22:55:28 <ihrachys> armax, I've heard that about the mode we just landed
22:56:10 <ihrachys> but I am not blocking. I think it's up to l3 team to figure out how to isolate it as much as possible
22:56:20 <armax> ihrachys: totally agree
22:56:21 <ihrachys> I just ask not to enable it by default, give it time, gate...
22:56:31 <armax> if I see one line of code where it shouldn’t be
22:56:40 <armax> I make changes to gerrit to apply a -3
22:56:51 <davidsha> armax: ack
22:56:57 <mlavalle> davidsha: we can take this guidance and refine next week in the meeting
22:57:12 <kevinbenton> i wonder if there is a way this can be architected as a drop-in extension
22:57:24 <armax> kevinbenton: go away
22:57:30 <kevinbenton> ok
22:57:33 <kevinbenton> :)
22:57:33 <armax> kevinbenton: but seriously, care to elaborate?
22:57:41 <ihrachys> :)))
22:57:41 <mlavalle> yeah please
22:57:44 <armax> not sure I fully understand what you mean
22:58:02 <ihrachys> pluggable agent mode implementation. we don't seem to have enough pluggable things.
22:58:12 <kevinbenton> well i was thinking some way to load up this code using the extension stuff added for fwaas/vpnaas
22:58:24 <kevinbenton> but this looks much more tightly coupled to internal router processing
22:58:33 <kevinbenton> so that's probably too much to put into the extension framework
22:58:37 <armax> um like if it were a router type?
22:58:39 <kevinbenton> we would need to have hooks everywhere
22:58:43 <armax> dvr_kickass?
22:58:58 <armax> or kickass_dvr?
22:59:05 <kevinbenton> not a different flavor
22:59:06 <davidsha> armax: I'm switching dvr_bridge to that...
22:59:09 <ihrachys> kevinbenton, ...which means changing architecture for existing users, which is no go
22:59:14 <armax> davidsha: please do!
22:59:18 <armax> :)
22:59:22 <davidsha> :)
22:59:36 <armax> kevinbenton: I think we’ll have to take that offline or somewhere else
22:59:40 <kevinbenton> ihrachys: yeah, it would require more hooks
22:59:42 <kevinbenton> ok
22:59:43 <armax> we’re 1 min to the top of the hour
22:59:53 <kevinbenton> my clock says time's up
22:59:55 <armax> but my suggestion would be to minimize the invasiveness
22:59:59 <kevinbenton> so thanks everyone
23:00:02 <mlavalle> o/
23:00:05 <armax> davidsha: good work btw!
23:00:08 <ihrachys> armax, amen
23:00:10 <amotoki> o/
23:00:13 <davidsha> Thanks!
23:00:13 <kevinbenton> #endmeeting