22:03:47 <amuller> #startmeeting neutron_drivers
22:03:47 <kevinbenton> you want "neutron_drivers"
22:03:48 <openstack> Meeting started Thu Jun 16 22:03:47 2016 UTC and is due to finish in 60 minutes.  The chair is amuller. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:03:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:03:51 <openstack> The meeting name has been set to 'neutron_drivers'
22:03:56 <ihrachys_> what a mess
22:03:58 <amuller> Hi everyone
22:04:03 <ihrachys_> ahoj!
22:04:15 <amuller> We are gathered here today to go over https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Confirmed&field.tag=rfe
22:04:28 <amuller> please run the query again I updated some states just a minute ago
22:04:35 <amotoki> hi
22:04:43 <amuller> First on my list is https://bugs.launchpad.net/neutron/+bug/1505631
22:04:43 <openstack> Launchpad bug 1505631 in neutron "[RFE] QoS VLAN 802.1p Support" [Wishlist,Confirmed] - Assigned to Reedip (reedip-banerjee)
22:04:57 <carl_baldwin> Not Triaged?
22:05:30 <amuller> I'm looking at comment 8
22:05:40 <amuller> If Miguel our QoS czar thinks this makes sense for N
22:06:02 <amuller> I think we should approve it, and if resources become an issue we flip it back to our backlog
22:06:19 <ihrachys_> I don't think it will happen in N
22:06:21 <ihrachys_> but he
22:06:26 <ihrachys_> *but hey..
22:06:33 <amuller> No I don't either
22:06:39 <amuller> But the work should start
22:06:42 <njohnston> In the QoS meeting we talked about it, and we said interest is low
22:06:58 <amuller> njohnston: Is there any interest from anyone to actually push it?
22:07:00 <ihrachys_> the general idea is fine, so I guess it's the matter of volunteers.
22:07:41 <njohnston> amuller: not AFAIK
22:08:02 <amuller> njohnston: OK, I'll clear the assignee
22:08:40 <amuller> So we'll keep this in its current state, I'll attach the eavesdrop log to make it clear we're looking for an assignee for this one
22:08:46 <njohnston> sounds good
22:09:21 <amuller> next up, bypassing BGP, is https://bugs.launchpad.net/neutron/+bug/1566520
22:09:21 <openstack> Launchpad bug 1566520 in neutron "[RFE] Upgrade controllers with no API downtime" [Wishlist,Confirmed]
22:09:21 <njohnston> tornado warning just got issued for this area, so if I drop it's OK, I either lost power or I'm dead
22:09:56 <amuller> ihrachys_: ^
22:10:02 <ihrachys_> oh well, I missed some of Armando's comments there and was not responsive :-x
22:10:19 <ihrachys_> amuller: should we skip so that I can give some details for the next meeting?
22:10:36 <amuller> Isn't this the case of the work already being underway with both contributors and reviewers?
22:11:02 <ihrachys_> amuller: partially. it's a multi-faceted feature though, and we started work on some pieces only
22:11:06 <amuller> Do we want to approve this, or would further discussions / a spec be required?
22:11:14 <amuller> ihrachys_: Do you want to scope the work for the N cycle explicitly?
22:11:14 <ihrachys_> there is a map of the effort as an .svg file attached there
22:11:18 <amuller> ok
22:11:39 <ihrachys_> amuller: I think armax wants me to describe strategy for API/HA
22:11:56 <ihrachys_> the API thingy, it won't be a piece to choke on in N for sure
22:12:12 <ihrachys_> but I guess it's a huge topic to start discussing it right now
22:12:27 <ihrachys_> I just need to sit and give it some thought and reply to armax
22:12:36 <amuller> ihrachys_: I'll flip the bug to 'Triaged' to signal it's under active discussion
22:12:42 <ihrachys_> amuller: ok
22:12:48 <amuller> Make sense?
22:12:56 <ihrachys_> amuller: btw for objects stuff, there is a separate RFE
22:13:01 <ihrachys_> amuller: and that one is long approved
22:13:15 <amotoki> makes sense for triaged
22:13:44 <ihrachys_> fyi, the objects RFE: https://bugs.launchpad.net/neutron/+bug/1541928
22:13:44 <openstack> Launchpad bug 1541928 in neutron "Adopt oslo.versionedobjects for core resources (ports, networks, subnets, ...)" [Medium,In progress] - Assigned to Rossella Sblendido (rossella-o)
22:13:44 <carl_baldwin> +1
22:14:05 <ihrachys_> hm, doesn't seem like it's marked as rfe, whatever
22:14:27 <amuller> Action item is on ihrachys_ to provide more details on the 'no downtime' RFE
22:14:30 <amuller> next up is https://bugs.launchpad.net/neutron/+bug/1578989
22:14:31 <openstack> Launchpad bug 1578989 in neutron "[RFE] Strict minimum bandwidth support (egress)" [Wishlist,Confirmed]
22:14:38 <amuller> Reported by ajo
22:14:51 <amuller> He told me he might be awake for this meeting, let's see if he succeeded :)
22:15:26 <ihrachys_> I think that requires nova scheduler work. did it happen?
22:15:48 <amuller> It also requires https://bugs.launchpad.net/neutron/+bug/1560963 which is still in progress in Neutron
22:15:48 <openstack> Launchpad bug 1560963 in neutron "[RFE] Minimum bandwidth support (egress)" [Wishlist,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
22:15:50 <amuller> so, nothing to do about it yet
22:16:01 <njohnston> and it's also approved
22:16:09 <njohnston> I think there is some progress there
22:16:20 <amuller> njohnston: it's two separate RFEs
22:16:29 <amuller> anyway really nothing to discuss here as it's blocked by ongoing efforts
22:16:32 <amuller> moving on
22:16:49 <amuller> https://bugs.launchpad.net/neutron/+bug/1579068
22:16:49 <openstack> Launchpad bug 1579068 in neutron "[RFE] VXLAN multicast groups distribution in linuxbridge" [Wishlist,Confirmed]
22:17:36 <amuller> Seems to me like Sean is OK with the general idea and we should iterate on the patch. Thoughts?
22:18:49 <kevinbenton> i'm not sure if this should just be config driven or if we should have an API
22:19:12 <kevinbenton> i'm okay with the feature
22:19:25 <amuller> Ahh, that is a detail that should be decided outside of the patch :)
22:19:29 <kevinbenton> i can see why you might need more fine-grained control to match config of network devices
22:19:38 <kevinbenton> amuller: well not necessarily
22:19:47 <kevinbenton> amuller: if its API, it moves beyond linux bridge
22:20:12 <kevinbenton> and sort of makes the target vxlan address for multicast more 'first class'
22:20:21 <kevinbenton> right now it's just a small linux bridge config option
22:20:30 <kevinbenton> which is maybe okay to continue with
22:20:38 <kevinbenton> i would say approve it
22:20:43 <kevinbenton> we can start with config option
22:20:47 <kevinbenton> will be small and non-invasive
22:21:06 <amuller> new config options aren't free though, if there is a possible intention to deprecate the option and introduce a new API...
22:21:15 <amuller> we  should probably know ahead of time and go with either an option or an API
22:21:20 <amuller> unless the option is required in both cases?
22:21:50 <kevinbenton> no, i'm not sure i see the value in a new API quite yet
22:22:12 <dougwig> o/, sorry i'm late
22:22:15 <kevinbenton> the OVS community has shown no interest in optimizing broadcast traffic via multicast
22:22:32 <kevinbenton> so it's mainly linux bridge specific right now
22:22:40 <amuller> kevinbenton: I understand
22:22:44 <HenryG> I don't see the usefulness of an API for this
22:22:46 <amotoki> with a config option, can the requested feature be achieved? is there any need to specify some information from outside?
22:23:10 <kevinbenton> amotoki: RFE can be fulfilled with just config for linux bridge
22:23:49 <amuller> any objections to move forward then?
22:23:50 <amotoki> kevinbenton: so i agree to start it with config
22:23:53 <kevinbenton> HenryG: i think the main thing would be getting a data model describing it for interoperability between ML2 drivers
22:24:13 <kevinbenton> HenryG: API itself isn't too useful
22:24:41 <HenryG> An API kind of implies you can turn it on and off. I'd think you just want to enable it and be done.
22:25:06 <kevinbenton> HenryG: API would allow you to set a VNI's multicast address
22:25:30 <HenryG> kevinbenton: Then add an API for just that.
22:25:50 <kevinbenton> HenryG: what else did you think we were talking about?
22:25:56 <amuller> HenryG: it would only be applicable to the LB agent though, kevinbenton was pointing out that there is no ongoing work for OVS for this
22:26:24 * HenryG rereads the rfe
22:26:49 <kevinbenton> use case is they want to set multicast addresses for VNIs
22:26:49 <carl_baldwin> It seems strange to only add it for LB.  Doesn't it?
22:27:01 <kevinbenton> carl_baldwin: right
22:27:14 <kevinbenton> carl_baldwin: which is why i was thinking a better generic approach would be useful
22:27:18 <carl_baldwin> Shouldn't it be scoped to a network instead of an agent?
22:27:21 <kevinbenton> carl_baldwin: but OVS won't be able to use it
22:27:56 <kevinbenton> carl_baldwin: yes, it's really a property of the segment
22:28:15 <kevinbenton> carl_baldwin: or maybe the port binding to that segment
22:28:27 <carl_baldwin> This is starting to make a little more sense.
22:28:44 <amotoki> looking at the description of the RFE, it looks like this wants to configure multicast groups for VNI *ranges*.
22:28:49 <carl_baldwin> "OVS won't be able to use it" ??
22:29:04 <kevinbenton> carl_baldwin: OVS does not support multicast targets for VXLAN
22:29:18 <kevinbenton> carl_baldwin: and there was no interest from the community last time i emailed the list about it
22:29:34 <carl_baldwin> kevinbenton: ok
22:30:02 <kevinbenton> #link http://openvswitch.org/pipermail/discuss/2016-January/019897.html
22:30:07 <kevinbenton> ovs thread about that
22:30:13 <kevinbenton> but getting side tracked a bit now
22:30:20 <amuller> since this is LB specific for now, we can start with a config option
22:30:25 <amuller> if this becomes generally applicable we can revisit
22:30:32 <kevinbenton> i don't think we have the bandwidth to try to include it in the data model anyway this cycle
22:30:38 <kevinbenton> so i'm okay with config option
22:30:50 <carl_baldwin> Right, I'm over on that side (okay with config option) now.
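For context on the config-option approach the team settled on: the ML2 linuxbridge agent already exposes a VXLAN multicast group setting, and the RFE is about distributing groups across VNIs. A hedged sketch of what the agent-side configuration might look like — the `[vxlan] group` option matches the then-current linuxbridge agent, while the range-style value is an assumption about the proposed extension, not a confirmed design:

```ini
# linuxbridge_agent.ini (illustrative sketch, not authoritative)
[vxlan]
enable_vxlan = true
# Existing behaviour: every VNI shares a single multicast group.
group = 224.0.0.1
# Proposed direction (assumption): accept a prefix so each VNI maps to a
# distinct group within the range, spreading broadcast traffic.
# group = 224.0.0.0/16
```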
22:31:23 <amuller> Going in 3...
22:31:42 <amuller> alright
22:32:02 <amuller> next up https://bugs.launchpad.net/neutron/+bug/1580149
22:32:02 <openstack> Launchpad bug 1580149 in neutron "[RFE] Rename API options related to QoS bandwidth limit rule" [Wishlist,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
22:32:20 <ihrachys_> oh that one is very controversial
22:32:37 <amuller> heh
22:32:44 <ihrachys_> ajo wants to consider the following solution: https://bugs.launchpad.net/neutron/+bug/1586056
22:32:44 <openstack> Launchpad bug 1586056 in neutron "[RFE] Improved validation mechanism for QoS rules with port types" [Wishlist,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
22:32:57 <amuller> Yeah, I love that one
22:33:05 <njohnston> Yes, the one ihrachys_ just linked to would solve a number of issues
22:33:16 <ihrachys_> specifically, introducing a qos specific API to determine which API features are available
22:33:36 <ihrachys_> which seems to me like reimplementing /extensions API endpoint, in a way
22:34:08 <ihrachys_> though it is not exactly the mapping of /extensions since it's per-port type.
22:35:14 <amuller> ihrachys_: I think part of the problem is that with ML2, you can't have mech drivers expose extensions, only the plugin itself
22:35:29 <ihrachys_> amuller: right. that's why the port-type part.
22:35:44 <amuller> ihrachys_: If you could expose extensions per mech driver, we could have SRIOV / LB / OVS expose higher resolution QoS related extensions
22:35:47 <amuller> without a new API
22:35:56 <ihrachys_> amuller: the original approach we took in Liberty for qos was that supported rule types == common subset of rules supported by all drivers
22:35:57 <kevinbenton> i thought they could?
22:36:33 <kevinbenton> ml2 has a whole extension manager thing
22:37:17 <ihrachys_> kevinbenton: but not to define API extensions available to specific ports?
22:37:39 <amuller> ihrachys_: to specific ports or to the sriov/ovs/lb mech drivers?
22:37:42 <kevinbenton> ihrachys_: no, just to define API extensions in general
22:37:59 <ihrachys_> kevinbenton: yes. the problem is, some drivers may support rule X while others don't
22:38:24 <ihrachys_> afaik atm unknown rules are just ignored
22:38:54 <amuller> this reminds me of an old bug I reported https://bugs.launchpad.net/neutron/+bug/1450067
22:38:54 <openstack> Launchpad bug 1450067 in neutron "Server with ML2 & L3 service plugin exposes dvr extension even if OVS agent is unused" [Low,Confirmed]
22:38:56 <ihrachys_> amuller: ports bound to drivers. I guess that's the right wording?
22:39:16 <amuller> If you run the L3 service plugin, it exposes DVR as supported, even if you're running ML2 with LB
22:39:38 <amuller> here, the QoS service plugin exposes the 'qos' extension, without regard to specific rule types and the loaded ML2 mech drivers' compatibility
22:39:40 <ihrachys_> I honestly prefer keeping API simple even if it means not exposing some features to users. just allow those rules that are supported by all drivers (as was originally intended during L)
22:39:58 <ihrachys_> amuller: we have /rule_types API for qos that returns the list of supported rules
22:40:11 <ihrachys_> it's just that we don't enforce the set when creating rules
22:40:17 <amuller> right
22:40:18 <ihrachys_> and I believe we should just enforce that
22:40:26 <kevinbenton> so you would need like a /rule_types?port_id=<port>
22:40:40 <ihrachys_> as for the issue of additional attributes, that is a usual case of API extensions as for any other resources
22:40:48 <ihrachys_> kevinbenton: sort of
22:41:25 <amuller> dougwig: Did you handle something similar in LBaaS? I remember specific health checks not being supported by haproxy but the API happily returning 200
22:41:28 <ihrachys_> or better, ?vnic_type=
22:41:42 <kevinbenton> well vnic_type wouldn't even be enough
22:41:54 <kevinbenton> linux bridge and OVS are same VNIC type, aren't they?
22:41:58 <kevinbenton> 'normal'
22:42:42 <ihrachys_> yeah. maybe ajo assumed the port is bound, so you know the driver.
22:43:23 <kevinbenton> who calls this API, just admins?
22:43:32 <ihrachys_> kevinbenton: users of the API
22:43:32 <kevinbenton> because regular users don't know anything about the port binding details
22:43:45 <ihrachys_> kevinbenton: API is usually admin_only, but that can be changed with policy.json
22:43:46 <kevinbenton> so don't expect them to query based on what they can't see :0
22:43:48 <kevinbenton> :)
22:44:25 <kevinbenton> https://github.com/openstack/neutron/blob/496f1f75e8b97c486ebdcb1e505d71656398026c/etc/policy.json#L83-L86
22:44:36 <kevinbenton> can't depend on any of that
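The policy.json lines kevinbenton links are, to the best of recollection, the port-binding attribute policies; they are reproduced here from memory as an assumption (the linked commit is authoritative), since they illustrate his point that regular users cannot see binding details:

```json
{
    "get_port:binding:vif_type": "rule:admin_only",
    "get_port:binding:vif_details": "rule:admin_only",
    "get_port:binding:host_id": "rule:admin_only",
    "get_port:binding:profile": "rule:admin_only"
}
```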
22:45:19 <ihrachys_> kevinbenton: it would be great if you can point that out in RFE
22:45:31 <amuller> ihrachys_: you're suggesting that the API will be better about validation and NACKing requests, right? In which case users don't have to query
22:46:31 <ihrachys_> amuller: yes. expose only things that will consistently work for all ports. if backend does not support a thing, just don't allow it. then it's only a matter of detecting if the rule type is available, and then using it for anything
22:47:12 <amuller> ihrachys_: what happens if you configure both the OVS and LB mech drivers but only have OVS agents?
22:47:18 <amuller> like devstack does
22:47:35 <dougwig> amuller: in lbaas, we raise Unsupported() exceptions in drivers for API calls that are valid, but the backend doesn't support it.
22:47:40 <ihrachys_> amuller: you will have access to rule types that are common for LB and OVS. which afaik is atm both of the rule types we have
22:48:12 <amuller> ihrachys_: right, I'm saying in a general manner that if something is applicable to OVS but not to LB, your suggestion would be to deny that request, even if LB is not actually used
22:48:27 <amuller> ihrachys_: assuming we're interested in a long term solution we will not have to revisit next cycle
22:48:46 <ihrachys_> amuller: well, I say it's a wrong configuration if you load a driver that is not used, and hence it's ok to kick you in *aas
22:48:52 <amuller> hah
22:49:24 <amuller> ihrachys_: the other counter example would be to have OVS on some nodes and SRIOV on others
22:49:44 <kevinbenton> yeah, i'm not sure least common denominator makes sense
22:49:50 <ihrachys_> that's a better example
22:49:52 <amuller> ihrachys_: and trying to apply a rule on an OVS port, with that rule not applicable to SRIOV
22:50:13 <amuller> ihrachys_: I was trying to build up
22:51:15 <amuller> ihrachys_: In that case we would want to mark the rule type as generally A-OK, but deny the request in case the rule is added to a policy that is bound to an SRIOV port
22:51:26 <amuller> ihrachys_: which is similar to what is done in LBaaS land
22:52:17 <kevinbenton> and then refuse to bind a port if it has unenforceable qos policies
22:53:16 <ihrachys_> how does the user know the driver that handles the port? or it's a matter of calling to API and check the result?
22:53:27 <ihrachys_> 'if it fails, apparently it's not supported'?
22:53:35 <kevinbenton> yep :/
22:54:04 <ihrachys_> nice
22:54:23 <amuller> ihrachys_: I say that phase 1 is better validation, 'if it fails, it's not supported' as you put it, and a possible phase 2 is to add a query-API
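The "phase 1" validation amuller describes — enforcing the supported rule-type set at creation time instead of silently ignoring unknown rules — could be sketched roughly like this. Illustrative only: the function and exception names are hypothetical, and the rule-type set is an assumption about Neutron's then-current drivers, not actual Neutron code:

```python
# Sketch: NACK a QoS rule request up front when the rule type is not
# supported by all loaded drivers, rather than accepting it and having
# agents ignore it later.

# Assumed to come from the /rule_types API (common subset across drivers).
SUPPORTED_RULE_TYPES = {"bandwidth_limit", "dscp_marking"}


class RuleTypeNotSupported(Exception):
    """Raised when a requested QoS rule type is not backed by all drivers."""


def validate_rule_type(rule_type):
    # Phase 1: reject unsupported rule types at the API layer.
    if rule_type not in SUPPORTED_RULE_TYPES:
        raise RuleTypeNotSupported(
            "QoS rule type %r is not supported by all loaded drivers"
            % rule_type)
    return rule_type
```

A later "phase 2" query-API (e.g. rule types per bound port) could reuse the same check against a per-driver set instead of the global one.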
22:54:51 <ihrachys_> I think we have some thoughts on the matter, and we should capture something in RFE.
22:55:10 <ihrachys_> amuller: I am definitely for postponing the query-API if that's an option.
22:55:16 <amuller> I'll dump this eavesdrop log in the RFE, I'll let Miguel respond
22:55:28 <amuller> We don't have all stakeholders here
22:56:22 <kevinbenton> steakholder -> http://i.imgur.com/IxMX0sR.jpg
22:56:34 <ihrachys_> :))
22:56:42 <amuller> I thought we could sneak in a potentially simple one for last
22:56:46 <amuller> https://bugs.launchpad.net/neutron/+bug/1580327
22:56:46 <openstack> Launchpad bug 1580327 in neutron "[RFE] Neutron Feature Classification" [Wishlist,Confirmed] - Assigned to Ankur (ankur-gupta-f)
22:57:19 <amuller> It's essentially more docs
22:57:31 <kevinbenton> NOT RELATED TO NEUTRON CLASSIFIER
22:57:43 <kevinbenton> was very confused at first reading the problem statement :)
22:57:59 <ihrachys_> amuller: I think it's good as long as it's up to date
22:58:17 <ihrachys_> amuller: updating the page should be a part of pre-release preparation steps
22:58:18 <kevinbenton> so Sam-I-Am was talking about something like this in the main meeting
22:58:28 <amotoki> we can start it and re-spin to improve it.
22:58:32 <kevinbenton> i would rather have this than it being an ad-hoc decision in the docs
22:59:11 <amuller> Who is the target audience for this?
22:59:11 <ihrachys_> I think it's safe to approve and allow folks to code
22:59:21 <amuller> It isn't develoeprs it? Why is it in dev-ref then?
22:59:26 <amuller> is it*?
22:59:30 <amuller> oh wow I cannot spell
22:59:39 <ihrachys_> amuller: I suspect just because it's easier to update it
22:59:49 <amotoki> i think it is better to catch up what happens in nova
23:00:06 <kevinbenton> i think devref is okay and then docs can pull directly from it
23:00:22 <kevinbenton> because docs is talking about classifying feature stability as well
23:00:34 <amuller> sounds like double work :(
23:01:08 <amuller> why not contribute an initial version to the network-guide, then add feature stability or whatever columns people want?
23:01:18 <ihrachys_> time!
23:01:24 <amuller> ihrachys_: party pooper
23:01:44 <kevinbenton> amuller: docs cores != neutron cores
23:01:52 <amuller> anyway, I think we should approve as we seem to be ok with the high level intent, the destination of the docs can be discussed in the patch
23:01:55 <kevinbenton> amuller: hard to have reviewers vouch for stability claims
23:02:01 <kevinbenton> SGTM
23:02:13 <amuller> Thank you everyone
23:02:18 <amuller> #sendmeeting
23:02:21 <amuller> ok
23:02:24 <amuller> #endmeeting