22:01:59 #startmeeting neutron_drivers
22:02:00 Meeting started Thu Mar 30 22:01:59 2017 UTC and is due to finish in 60 minutes. The chair is kevinbenton. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:02:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:02:04 The meeting name has been set to 'neutron_drivers'
22:02:29 #topic meeting time
22:02:40 armando has disappeared
22:03:05 he is in the Neutron channel
22:03:48 amotoki: going any earlier around this time is too early for you, correct?
22:04:06 amotoki: we would need to go at least 8 hours backwards?
22:04:13 backwards == earlier
22:04:20 kevinbenton: 7 or 8 hours
22:04:43 which is fine by me ;)
22:04:51 and I think 8AM was an issue for Armando
22:04:56 8AM PST
22:05:24 6 hours earlier than now.
22:05:27 On Thursdays I chair the L3 meeting at 1500UTC, that is my only constraint
22:05:34 6h is 9am
22:06:01 so we would need either 1400UTC on Thu or 1500UTC otherwise
22:06:07 amotoki: or is 1500UTC too late?
22:06:14 no
22:06:19 it is 12am
22:06:32 I am okay with 16UTC too
22:06:33 kevinbenton: 1500 UTC doesn't work for me
22:06:42 mlavalle: right, we would change the day
22:06:51 kevinbenton: perfect
22:06:58 let's come back to this once armax is back
22:07:18 #topic RFE's
22:07:30 i think we're getting close to the end of the list
22:07:32 #link #link https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe
22:07:34 #link #link https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe
22:07:35 #link https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe
22:07:49 paste disaster :)
22:08:15 #link https://bugs.launchpad.net/neutron/+bug/1633280
22:08:16 Launchpad bug 1633280 in neutron "[RFE] need a way to disable anti-spoofing rules and yet keep security groups" [Wishlist,Triaged]
22:08:23 * ihrachys sets some RFEs to Triaged ;)
22:08:28 yello
22:08:29 I reckon we should transition this one to incomplete?
22:08:45 armax: hello!
22:08:58 kevinbenton: yes, to signal we wait on the answer
22:08:59 sorry I am late
22:09:06 armax: at the end of the meeting we are going to try to reschedule drivers, so get your calendar ready
22:09:32 aye
22:09:42 at the moment my calendar is a huge mess
22:09:46 you don't wanna know
22:09:49 anyhow
22:09:50 #link https://bugs.launchpad.net/neutron/+bug/1639566
22:09:50 Launchpad bug 1639566 in neutron "[RFE] Add support for local SNAT" [Wishlist,Triaged]
22:09:51 regarding bug
22:09:52 1633280
22:09:58 #undo
22:09:59 Removing item from minutes: #link https://bugs.launchpad.net/neutron/+bug/1639566
22:10:08 armax: go ahead
22:10:11 yeah, I was gonna say incomplete sounds good :)
22:10:21 armax: ok :)
22:10:23 #link https://bugs.launchpad.net/neutron/+bug/1639566
22:10:24 I mean if the use case is unclear
22:10:28 we can't do much about it
22:10:59 me too on use case. I also wonder whether allowed-address-pairs would not work here, as kevin mentioned
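For context on the allowed-address-pairs workaround mentioned just above: a minimal sketch using python-neutronclient of relaxing anti-spoofing for extra prefixes on one port while leaving security groups in place. The endpoint, credentials, and port UUID are placeholders, and whether this actually covers the submitter's use case is exactly the open question on the bug.

```python
# Sketch only: relax anti-spoofing for an extra prefix on a single port via
# allowed-address-pairs, keeping security groups intact.
# Auth values and the port UUID below are placeholders.
from keystoneauth1 import identity, session
from neutronclient.v2_0 import client

sess = session.Session(auth=identity.Password(
    auth_url='http://controller:5000/v3', username='admin', password='secret',
    project_name='admin', user_domain_name='Default',
    project_domain_name='Default'))
neutron = client.Client(session=sess)

neutron.update_port('PORT-UUID', {'port': {
    'allowed_address_pairs': [
        {'ip_address': '192.0.2.0/24'},   # extra prefix the VM may send from
    ],
}})
```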
22:11:19 so for DVR SNAT at compute I think we are going to need some kind of devref or something to see how it could be done in an uninvasive fashion
22:11:28 regarding bug 1639566 I would rather not conflate the fast exit and the distributed snat case
22:11:28 bug 1639566 in neutron "[RFE] Add support for local SNAT" [Wishlist,Triaged] https://launchpad.net/bugs/1639566
22:11:36 right
22:11:42 fast exit is not the same thing
22:11:43 because they are not the same thing
22:11:52 as for the distributed SNAT
22:12:06 kevin and I 'talked' about this
22:12:09 talked as in argued
22:12:20 LOL
22:12:44 from the comments on the bug report
22:12:52 it seems the submitter has code he put together
22:12:52 well i think we agree that the use case is clear but the implementation might be a mess with the current code
22:12:53 I recollect there was some fist fight prev meeting
22:13:06 armax: do you agree?
22:13:07 kevinbenton is warm to the idea, I am not
22:13:13 kevinbenton: on that I agree
22:13:36 armax: but you're only against it because of the implementation detail questions, not because of the use case
22:13:38 the problem stems from the fact that the code integration may look messy or based off an old version of neutron
22:13:43 well
22:13:48 not just that to be honest
22:14:27 I think this boils down to the nature of the recommendation one makes to solve the SNAT problem
22:14:51 and the SPOF that comes with it
22:14:58 there's more than one solution
22:15:01 also resource utilization
22:15:02 no perfect one
22:15:18 local SNAT is the only way to avoid bottlenecks
22:15:22 over the various iterations there was large consensus to stick to a single recommendation
22:15:25 which was HA+DVR
22:15:30 the SPOF being the local SNAT?
22:15:35 mlavalle: right
22:15:43 armax: what?
22:15:51 what what?
22:15:56 armax: if local SNAT fails, it's only for the instances on that same node
22:16:00 armax: which would have failed as well
22:16:01 correct
22:16:11 shared fate
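For readers not following the DVR details: "local SNAT" as debated here roughly means each compute node translating its own VMs' outbound traffic to a node-local external address, instead of routing it to the centralized SNAT namespace on a network node. A purely hypothetical sketch of the kind of rule that implies; the namespace name and addresses are placeholders, and this is not the submitter's patch.

```python
# Hypothetical sketch only: the essence of "local SNAT" is a per-compute-node
# POSTROUTING rule translating the local VMs' traffic to an address owned by
# that node, rather than sending everything to the network node.
import subprocess

ROUTER_NS = 'qrouter-ROUTER-UUID'   # local DVR router namespace (placeholder)
TENANT_CIDR = '10.0.0.0/24'         # tenant subnet hosted on this compute node
NODE_SNAT_IP = '203.0.113.45'       # external address owned by this node

subprocess.check_call([
    'ip', 'netns', 'exec', ROUTER_NS,
    'iptables', '-t', 'nat', '-A', 'POSTROUTING',
    '-s', TENANT_CIDR, '-j', 'SNAT', '--to-source', NODE_SNAT_IP,
])
```

This is also where the shared-fate argument comes from: if that compute node dies, only its own instances lose SNAT.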
22:16:33 I mean you can address the SPOF by either distributing the SNAT over the compute nodes involved or the network nodes involved
22:16:53 the failure domain is still different between the two
22:17:21 in one case it affects all VMs behind the routers running on the network node affected
22:17:28 right, with HA you depend on HA and the compute node not failing
22:17:38 with local SNAT, you just depend on the compute node not failing
22:17:39 in the other it affects all the VMs running on the compute node affected
22:18:29 the other issue is bandwidth utilization efficiency
22:18:42 which is really poor with routing to a network node
22:18:44 I'd say let's not spend the entire hour on this bug
22:18:54 bottom line is:
22:19:06 if the author of the ticket feels like proposing the change against master
22:19:17 we can look at the damage and assess what to do
22:19:18 ok
22:19:39 past experience tells me this is going to be a full bag of hairy
22:20:01 so I feel it's gonna be a frustrating experience for everybody involved
22:20:04 ok, i left a comment
22:20:31 but I am usually pessimistic so don't take me too literally
22:21:31 #link https://bugs.launchpad.net/neutron/+bug/1649909
22:21:32 Launchpad bug 1649909 in neutron "[RFE] Domain-defined RBAC" [Wishlist,Triaged]
22:21:41 kevinbenton: so long as you're aware that throwing a patch over the wall is only the first femto step
22:22:11 based on the feedback in that patch, it sounds like domain_ids are effectively static
22:22:21 and that to see the whole thing through the finishing line is a giant effort
22:22:43 we should warn the contributor before he busts his rear
22:23:26 armax: please comment on the RFE if you don't think my comment communicated that well enough
22:23:58 I am
22:24:29 is anyone against https://bugs.launchpad.net/neutron/+bug/1649909
22:24:29 Launchpad bug 1649909 in neutron "[RFE] Domain-defined RBAC" [Wishlist,Triaged]
22:24:48 looks good
22:25:14 we basically need a new column
22:25:40 +, I think we could unblock that without a spec or anything and just allow the contributor to tinker
22:25:58 this is ofc a new extension, but we know how to do them right now
22:26:15 +1
22:26:24 looks good to me as well
22:26:38 ihrachys: can you trigger the magic state machine to approve it? :)
22:26:39 new column in networkrbacs, right?
22:26:50 mlavalle: yes
22:26:56 kevinbenton: sure, I will consult the Old Books first
22:26:58 mlavalle: we should probably do it for QoS too
22:27:09 ok
22:27:09 since the logic will be the same and they use the same API
22:27:19 #link https://www.youtube.com/watch?v=xsZSXav4wI8
22:27:20 new column for RBAC table (perhaps either type or domain_id)
22:27:23 spacex rocket launch
22:27:26 :)
22:27:52 ah live!
22:28:10 is it the re-used rocket?
22:29:03 yep
22:29:05 first one
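Back to bug 1649909: the reason the change looks small is that the network RBAC API already exists and today only targets a single project; the RFE amounts to a new column so the target can be a Keystone domain. A hedged sketch with python-neutronclient of today's call; credentials and UUIDs are placeholders, and the domain-targeting attribute named in the comment is hypothetical, it is what the RFE would add, not an existing field.

```python
# Sketch only: today's network RBAC API shares a network with one project.
# Auth values and UUIDs are placeholders.
from keystoneauth1 import identity, session
from neutronclient.v2_0 import client

sess = session.Session(auth=identity.Password(
    auth_url='http://controller:5000/v3', username='admin', password='secret',
    project_name='admin', user_domain_name='Default',
    project_domain_name='Default'))
neutron = client.Client(session=sess)

neutron.create_rbac_policy({'rbac_policy': {
    'object_type': 'network',
    'object_id': 'NETWORK-UUID',
    'action': 'access_as_shared',
    'target_tenant': 'PROJECT-UUID',
    # The RFE would add a way to target a whole Keystone domain instead,
    # e.g. a new column/attribute such as 'target_domain' (hypothetical name).
}})
```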
22:29:08 #link https://bugs.launchpad.net/neutron/+bug/1650678
22:29:08 Launchpad bug 1650678 in neutron "[RFE] Allow specifying dns_domain when creating a port" [Wishlist,Triaged]
22:29:19 so mlavalle, you're familiar with the dns code :)
22:29:25 how difficult would that be to implement
22:29:26 The submitter clarified his use case
22:29:36 it seems like a clear use case at this point
22:29:47 he wants all VMs on a provider network
22:29:57 so it makes sense to allow dns_domain by port
22:30:07 we do it for fips already
22:30:21 so it is not a very big deal
22:30:39 my thought is the same. I don't see any negative side so far
22:30:49 do we need a new DB column to store this?
22:31:01 or is it just a matter of carrying info from the API straight to the DB
22:31:21 we already have a table to carry the port's dns information
22:31:26 so we can add it there
22:31:47 ok
22:31:54 and then update the interaction with the external DNS driver to take that into consideration
22:32:17 so i think we can approve the RFE, but is the submitter going to work on it?
22:32:19 The port's dns_domain would have priority over the network's
22:32:41 we can ask him
22:33:14 armax: what's the final state of an RFE bug after rfe-approved is set? it's not clear from the docs.
22:33:17 if not, maybe this would be a good feature for a newer contributor to pick up since it sounds relatively isolated
22:33:33 yep and I can supervise
22:33:40 let's set to rfe-approved for this one as well
22:34:20 +
22:34:34 hang on
22:35:11 got distracted
22:35:13 which bug?
22:35:30 oh
22:35:35 I had a question about rbac https://bugs.launchpad.net/neutron/+bug/1649909
22:35:35 Launchpad bug 1649909 in neutron "[RFE] Domain-defined RBAC" [Wishlist,In progress]
22:35:52 ihrachys: go ahead
22:35:53 and the proposal is to approve the dns_domain one https://bugs.launchpad.net/neutron/+bug/1650678
22:35:53 Launchpad bug 1650678 in neutron "[RFE] Allow specifying dns_domain when creating a port" [Wishlist,Triaged]
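If bug 1650678 is approved as discussed, the per-port override might end up looking roughly like the following. dns_name already exists via the DNS integration extension, while dns_domain on a port is precisely what the RFE asks for, so treat this as a forward-looking sketch rather than the current API; credentials and UUIDs are placeholders.

```python
# Forward-looking sketch of bug 1650678: a per-port dns_domain overriding the
# network's. 'dns_domain' on a port is the attribute the RFE proposes, not
# something released at meeting time. Placeholders throughout.
from keystoneauth1 import identity, session
from neutronclient.v2_0 import client

sess = session.Session(auth=identity.Password(
    auth_url='http://controller:5000/v3', username='admin', password='secret',
    project_name='admin', user_domain_name='Default',
    project_domain_name='Default'))
neutron = client.Client(session=sess)

neutron.create_port({'port': {
    'network_id': 'PROVIDER-NET-UUID',
    'dns_name': 'web0',              # existing dns-integration attribute
    'dns_domain': 'example.org.',    # proposed per-port override
}})
```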
22:35:53 ihrachys: once an RFE is approved
22:36:06 we create a blueprint and we start tracking the effort
22:36:22 armax: who's creating it?
22:36:38 ihrachys: it's in here
22:36:39 https://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements
22:37:15 "A member of the Neutron release team (or the PTL)"
22:37:15 ok
22:37:17 a member of the release team or drivers team
22:37:21 but I can do it
22:37:37 leave it with me
22:37:44 armax: and so we keep the bug in Triaged once we set rfe-approved?
22:38:21 #link https://bugs.launchpad.net/neutron/+bug/1658682
22:38:21 Launchpad bug 1658682 in neutron "port-security can't be disabled if security groups are not enabled" [Wishlist,Triaged]
22:38:31 ihrachys: it doesn't matter
22:38:45 the bug will go to in progress as soon as patches are being filed against it
22:39:23 it's not clear to me what the ask is here
22:39:34 is it just to allow disabling port security if security groups are shut off?
22:39:34 personally here I think it's time to kill the config option
22:39:40 enable_security_group
22:39:49 it has outlived its purpose
22:39:53 armax: it sounds like that's the opposite of what they want
22:40:00 tough luck
22:40:18 disabling security groups globally is an abuse
22:40:33 and it was a stop gap when nova-net was the 800 lbs gorrila
22:40:36 gorilla
22:40:51 well it would still be a conforming implementation; it's just that we may not want to support that for ml2
22:40:55 today we must API-drive all the things
22:41:05 port-security and the config option conflict. IIRC the option was introduced when the security group feature was introduced. we had enough confidence in that
22:41:30 then i think we should close this RFE as "won't fix"
22:41:42 because any fix will be invalid right when we remove the option to disable it
22:41:46 it == security groups
22:42:02 kevinbenton: it depends on whether you agree with me
22:42:25 I mean, I don't think that a global flag to disable security is a reasonable option anymore
22:42:58 +1 to Won't Fix. also do we deprecate the enable_sg option?
22:43:10 yeah, from a cross cloud behavior standpoint disabling SGs isn't good
22:43:22 amotoki: deprecating the option is gonna be interesting
22:43:48 so I'm okay with "Won't Fix"
22:43:52 because the alternative once the option goes away is gonna be to use the API to disable security groups on your port
22:44:27 i suspect this will bring back the "allow changing default security group" discussion :)
22:44:31 which is something the admin can't be involved in
22:45:04 that's a good point
22:45:07 the problem is
22:45:10 ok, armax please mark Won't Fix
22:45:15 the answer to that is FWaaS
22:45:22 are we ok silently enabling that for all ports on upgrade? sounds like a tough thing to do for ops.
22:45:23 but there's no model to relax security then
22:45:41 kevinbenton: hang on, let's discuss a bit more about this
22:45:54 kevinbenton: that's probably an excellent topic for the forum
22:45:56 ihrachys: good point
22:45:58 wink wink
22:46:04 make note
22:46:21 #action kevinbenton to propose forum topic about security groups
22:46:46 we should probably mumble on this one a tad longer
22:46:48 that note will die in vain here
22:46:48 at least users can know whether sg is enabled or not through the extension list
22:47:11 amotoki: true, it is discoverable
22:47:13 ihrachys: will or will not?
22:47:51 it probably will, no one follows up on actions in irc logs :)
22:48:30 ihrachys: come on, have faith
22:48:35 :)
22:48:57 shall we move on then?
22:49:00 yes
22:49:14 yeah, this needs more thinking
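For reference, this is the per-port path that would remain once enable_security_group goes away: a minimal sketch with python-neutronclient that clears a port's security groups and then disables port security, which also drops the anti-spoofing rules for that one port. Credentials and the port UUID are placeholders, and the port-security extension must be loaded for the call to succeed.

```python
# Sketch only: the per-port alternative to a global enable_security_group=False.
# Placeholders throughout.
from keystoneauth1 import identity, session
from neutronclient.v2_0 import client

sess = session.Session(auth=identity.Password(
    auth_url='http://controller:5000/v3', username='admin', password='secret',
    project_name='admin', user_domain_name='Default',
    project_domain_name='Default'))
neutron = client.Client(session=sess)

neutron.update_port('PORT-UUID', {'port': {
    'security_groups': [],           # must be cleared before port security can go off
    'port_security_enabled': False,  # requires the port-security extension
}})
```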
22:49:52 #link https://bugs.launchpad.net/neutron/+bug/1658682
22:49:52 Launchpad bug 1658682 in neutron "port-security can't be disabled if security groups are not enabled" [Wishlist,Triaged]
22:50:00 kevinbenton: oops?
22:50:11 should be mtu right
22:50:17 yes
22:50:18 muuu
22:50:23 #link https://bugs.launchpad.net/neutron/+bug/1671634
22:50:23 Launchpad bug 1671634 in neutron "[RFE] Allow to set MTU for networks" [Wishlist,Triaged] - Assigned to Ihar Hrachyshka (ihar-hrachyshka)
22:50:29 +1 from me on this one
22:50:40 it's my baby so I better listen to others
22:50:49 Ihar removed the column, he can re-add it :)
22:50:53 tl;dr allow to POST mtu on networks
22:51:00 exercise our migration scripts
22:51:04 hang on
22:51:08 there are follow up items in MTU world that are not covered by that RFE
22:51:10 me too. one thing to note is we need to check that the MTU specified by the user is smaller than the system max
22:51:16 you want to allow POST to tenants?
22:51:30 amotoki: that's ofc a must
22:51:37 to me this is a huge breach of abstraction
22:51:38 armax: yes, as long as you stay below the max
22:52:00 armax: MTUs are directly exposed to tenants
22:52:08 armax: abstraction as in - you don't know what mtu you will get?
22:52:21 ihrachys: you have not elaborated why this is useful
22:52:48 armax: 2nd sentence and beyond
22:52:52 why would a user care?
22:52:56 in lp
22:53:09 even if we don't allow users to specify MTU, users can set the MTU manually if they want to use a larger MTU (something like 8950)
22:53:20 what are these custom workloads?
22:53:40 two cases 1) the workload assumes some MTU for its unknown reasons (maybe their VNF is picky); 2) users want to get the same MTU for all their networks
22:53:40 so using an MTU larger than 1500 on the internet can bring pain
22:53:43 don't get me wrong
22:53:58 sometimes we cannot get full throughput with 1500 MTU
22:53:58 I am not against the idea
22:54:13 some people don't want web servers to have an 8000 MTU
22:54:19 I am just thinking that more choice and transparency is not necessarily a good thing
22:54:28 but for an internet connection a 1500 MTU is needed, so we cannot advertise a larger MTU thru dhcp
22:54:29 because if there are dropped ICMP "size exceeded" messages in the network, it leads to bad performance
22:54:49 armax: there is no more transparency
22:54:52 armax: same as before
22:54:58 MTU is user visible
22:55:02 if I am the only one on the fence here, then it's fine
22:55:02 custom workloads being e.g. something VNF-y that chews frames in some hardware offloaded way that relies on specific limitations of NICs exposed into it.
22:55:05 kevinbenton: true
22:55:14 but the user can't do anything about it
22:55:15 amotoki: we can, the l3 layer will deal with fragmentation
22:55:16 right?
22:55:35 armax: yes, that's the issue. we force the MTU on all users
22:55:45 armax: I hear from customers the same "silly" question - how do I get 1500 for all my networks?
22:55:47 it's mainly informational in case the guest needs to do something related to the MTU
22:55:49 armax: and we are even trying to get rid of the ability to stop advertising
22:56:06 armax: every guest needs to use the correct MTU
22:56:07 ihrachys: ah, the router reassembles packets to a smaller MTU if larger packets come in?
22:56:17 armax: it's not only informational. the l3 agent will fragment incoming traffic into chunks as per the calculated MTU
22:56:28 if your instance is not ready to consume those, tough life
22:56:38 ihrachys: fragmentation sucks if it happens somewhere else in the network
22:56:42 the L3 agent is not something the user even sees
22:56:49 amotoki: yes, it does fragment
22:56:55 ihrachys: see my point about dropped ICMP messages
22:57:25 armax: right. neither do they see traffic fragmented using an MTU that is larger than you configured your instance for :)
22:57:40 I suppose as a user and owner of a logical network I may want to control the whole thing, MTU included
22:57:53 yes
22:57:55 but then why was it admin only in the first place?
22:58:10 because it allowed going beyond what the infrastructure supports
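If bug 1671634 lands, the end state is simply that a regular user can pass mtu when creating a network, with the server validating it against the maximum derived from global_physnet_mtu and the network's encapsulation overhead. A forward-looking sketch; credentials are placeholders, and a tenant-settable mtu is the RFE itself, not the API as of this meeting.

```python
# Forward-looking sketch of bug 1671634: a tenant requesting an explicit MTU
# at network creation, subject to the server-side maximum. Placeholders throughout.
from keystoneauth1 import identity, session
from neutronclient.v2_0 import client

sess = session.Session(auth=identity.Password(
    auth_url='http://controller:5000/v3', username='demo', password='secret',
    project_name='demo', user_domain_name='Default',
    project_domain_name='Default'))
neutron = client.Client(session=sess)

net = neutron.create_network({'network': {
    'name': 'jumbo-net',
    'mtu': 8950,   # would be rejected if larger than the calculated maximum
}})
print(net['network']['mtu'])
```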
22:58:18 we need to move on to figure out the meeting time for next week
22:58:26 ok let's discuss time
22:58:26 kevinbenton: actually we have time
22:58:32 because I am gonna have to skip the next two anyway
22:58:49 armax: you're permanently skipping the rest?
22:58:59 if not, you better provide feedback
22:59:02 #topic meeting time
22:59:05 no, I mean I can't attend the next two occurrences
22:59:25 1400 UTC on Thursday or 1500 UTC another day
22:59:29 armax: do either of those work?
22:59:34 armax: for you
22:59:41 right now I can do 11am PDT
22:59:58 not even close
23:00:11 can you do any early mornings any day of the week?
23:00:18 nah
23:00:33 ok, then i think we're at a deadlock and will need to keep the current time for now
23:00:38 this early is difficult for me
23:00:41 ihrachys: can you survive for now?
23:00:49 survive is a good word
23:00:50 I have other meetings or have to drop my daughter off at school
23:00:53 and I can't type
23:01:10 I am used to survival, since Europe.
23:01:14 ok
23:01:25 morning slots are pretty much booked for me at the moment
23:01:29 so for now i think the only time that people can all show up is now
23:01:34 like 8-10am
23:01:34 ok
23:01:36 local time
23:01:38 #info meeting time to stay the same
23:01:47 #endmeeting