14:01:01 #startmeeting networking
14:01:02 Meeting started Tue Jul 28 14:01:01 2015 UTC and is due to finish in 60 minutes. The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:05 The meeting name has been set to 'networking'
14:01:14 #link https://wiki.openstack.org/wiki/Network/Meetings Agenda
14:01:19 #topic Announcements
14:01:29 #link https://wiki.openstack.org/wiki/Liberty_Release_Schedule Liberty Release Schedule
14:01:38 #info Liberty-2 will be released this week
14:01:53 Unless there is a bug blocking this, we'll likely cut it later today or tomorrow.
14:02:24 #info The DVR Job is now voting
14:02:26 #link https://review.openstack.org/#/c/180230/15
14:02:44 Hopefully this will help us grind out any additional issues that are lurking in DVR.
14:02:58 Thanks to Swami for driving this and to ihrachyshka for his work to remove the job from stable branches
14:03:17 #info Voting for the Tokyo Conference is open now
14:03:19 #link https://www.openstack.org/summit/tokyo-2015/vote-for-speakers
14:03:23 Go forth and vote
14:03:27 I would be glad if there is no work to remove each new voting job from stable...
14:03:35 ihrachyshka: You and me both
14:03:52 how do you conceive of that happening?
14:04:11 \o
14:04:15 We stabilize something during a cycle and make any new job voting during that cycle.
14:04:27 ah I'm all for stability
14:04:32 anteaya, nothing except review attention and maybe a heads-up for potentially affected parties.
14:04:44 okay thanks
14:05:51 #info For networking sub-projects (networking-foo), please remember to follow code merge requirements (e.g. 2 +2 votes)
14:05:53 #link http://docs.openstack.org/infra/manual/developers.html#code-review
14:06:11 If you're in the Neutron Stadium, you need to be following those guidelines
14:06:28 If you have questions, please reach out to me, anteaya, ihrachyshka or anyone else who has been merging code for a while for help
14:06:59 Any other announcements for the team from anyone else before we move along?
14:08:24 #topic Liberty-3 and our giant backlog of things to merge
14:08:29 #link https://launchpad.net/neutron/+milestone/liberty-3
14:08:43 As is typically the case, we have a lot of things to merge in Liberty-3.
14:09:05 I would strongly, highly beg of reviewers to focus on things that are on that list
14:09:08 Instead of just opening gerrit and reviewing what's at the top
14:09:24 There are many things in there that need some review love, and the earlier we can merge many of these, the better
14:10:14 * HenryG thought we no longer had deadlines ;)
14:10:23 * mestery slaps HenryG
14:10:24 (I don't see qos there at all)
14:10:39 #action ihrachyshka to add QoS LP BPs to Liberty-3 milestone
14:10:41 :)
14:10:47 ouch :)
14:10:50 lol
14:10:52 mestery: what is the L3 date?
14:10:57 slapping party!
14:11:04 Any prioritization between bugs and non-bugs?
14:11:22 mlavalle: Liberty-3 date is the week of August 31
14:11:25 That's also FF
14:11:44 neiljerram: I'd encourage reviewing of critical and high priority bugs in parallel with those specs
14:12:11 As you can see, Liberty-3 is going to drop on us very fast. :)
14:12:22 Any other questions on Liberty-3?
14:12:36 I wonder if all vendors will get their code out in time?
14:12:49 * regXboi grabs the popcorn and program
14:12:51 HenryG: They won't
14:13:04 HenryG: what does that refer to?
14:13:05 HenryG: You, me and armax need to send an email to the ML on that
14:13:22 #action mestery HenryG and armax to email list about the impending purge of drivers from neutron during Mitaka
14:13:30 OK, and should we relax the wording in the contrib devref?
14:13:40 HenryG: If you submit a patch, add me there and I'll review :)
14:14:01 neiljerram: http://docs.openstack.org/developer/neutron/devref/contribute.html
14:14:11 I prepared the decomp for my one and the good news is it was easy :-)
14:14:23 amotoki: Nice! :)
14:14:50 Any other Liberty-3 questions from folks before we move on?
14:15:35 #topic Where should the Macvtap agent land
14:15:36 scheuran: You're up! :)
14:15:41 #link https://review.openstack.org/#/c/195907/
14:15:47 thanks
14:15:49 That's the review in question for folks who are not following along here
14:16:01 The plan is to have an ml2 driver & l2 agent for supporting macvtap guest attachments (independent from sriov)
14:16:04 scheuran: Can you summarize for the team?
14:16:21 The big question is where such code should land
14:16:25 The agent reuses a lot of code of the linuxbridge agent - especially the main loop, the mechanism for detecting plugged tap (macvtap) devices.
14:16:35 and so on
14:16:44 scheuran: So one option is to add the code into the existing LB agent, right?
14:16:49 right
14:16:51 sc68cal: ^^^
14:16:58 sc68cal: Bringing you in because this is LB related
14:17:10 scheuran: If you do that, you don't need a repo and you just do it in-tree.
14:17:19 right
14:17:32 * sc68cal looks
14:17:33 In case it helps: this sounds similar to my current situation with the DHCP agent
14:17:37 extension drivers for agents? in qos, we have that: http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l2/extensions?h=feature/qos
14:17:38 but that would mean extracting new superclasses and moving methods up and down between them
14:17:42 not sure whether fully applicable
14:17:46 so a larger restructuring
14:18:17 the LB agent has no tests, a "large restructuring" scares me
14:18:20 it's several lines in your base agent (lb) and then you are free to do whatever you want with port updates in your extension.
14:18:35 ihrachyshka: That sounds like the best option forward.
14:18:45 amuller: Duly noted on the LB testing situation :)
14:18:59 it obviously depends on whether you replace or extend the agent.
14:19:08 scheuran: So, it sounds like perhaps you should work offline to understand the approach ihrachyshka is extending to you here.
14:19:13 Because that sounds like it may be the best way forward
14:19:38 ok, I'll talk to ihrachyshka and have a look at his approach
14:19:55 amuller: that's not true
14:20:36 sc68cal: oh?
14:20:55 amuller: https://github.com/openstack/neutron/blob/master/neutron/tests/unit/plugins/ml2/drivers/linuxbridge/agent/test_linuxbridge_neutron_agent.py
14:20:59 amuller sc68cal: Shall we take the LB test situation to #openstack-neutron post meeting?
14:21:13 I'd hate to bog down the already packed meeting with that here, because I sense it could go south quickly.
14:21:29 Fair?
14:21:31 sure
14:21:35 Cool :)
14:21:42 scheuran: So, you have what you need to move forward here?
14:21:57 scheuran: And if so, you can mark your governance patch as WIP for now and reference the meeting once it's on eavesdrop (or I can do that for you post meeting too)
14:22:14 yes, I'll talk to ihrachyshka and then I'll come back to you
14:22:15 ok
14:22:37 Great! Thanks scheuran and ihrachyshka.
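[Editor's sketch of the "agent extension" pattern ihrachyshka describes above: the base L2 agent exposes one small hook, and feature code (e.g. macvtap handling) plugs in as an extension, with no restructuring of the agent itself. All class and method names here are illustrative, not the actual Neutron feature/qos API.]

```python
# Minimal sketch of an L2 agent extension mechanism. Hypothetical names;
# the real API lived under neutron/agent/l2/extensions on feature/qos.
import abc


class AgentExtension(abc.ABC):
    """Interface each L2 agent extension implements."""

    @abc.abstractmethod
    def initialize(self):
        """One-time setup when the agent starts."""

    @abc.abstractmethod
    def handle_port(self, port):
        """Called by the agent for every port update it processes."""


class ExtensionManager:
    """Owned by the base agent; fans port events out to all extensions."""

    def __init__(self, extensions):
        self.extensions = list(extensions)

    def initialize(self):
        for ext in self.extensions:
            ext.initialize()

    def handle_port(self, port):
        for ext in self.extensions:
            ext.handle_port(port)


class MacvtapExtension(AgentExtension):
    """Toy extension that just records which ports it was told about."""

    def __init__(self):
        self.seen = []

    def initialize(self):
        self.seen = []

    def handle_port(self, port):
        self.seen.append(port["id"])


# The "several lines in your base agent" amount to creating the manager
# and calling it from the existing port-update loop:
manager = ExtensionManager([MacvtapExtension()])
manager.initialize()
manager.handle_port({"id": "port-1"})
```

This is why the approach avoids amuller's "large restructuring" concern: the linuxbridge agent's main loop is untouched except for the manager calls, and the macvtap-specific behaviour lives entirely in the extension.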
14:22:51 Next up on our weekly smorgasbord of an agenda
14:23:05 #topic Ironic Provider Networks
14:23:10 I am not sure who added this to the agenda
14:23:17 Does anyone want to step forward to claim their topic?
14:23:28 #link https://review.openstack.org/#/c/152703/
14:23:40 Oh wait, now I recall!
14:23:42 This was Josh
14:23:46 From the nova mid-cycle last week
14:23:52 I don't recall his IRC handle though ....
14:24:20 let's shelve it and move on - i'll ping the ironic channel
14:24:41 if they get someone to come in before end of meeting we'll pull it back in
14:24:41 mestery: hi
14:24:47 jim, not josh
14:24:55 jroll: Sorry about that :)
14:24:58 no worries
14:25:01 so!
14:25:06 jroll: OK, so let's get everyone up to speed
14:25:08 JoshNang ping
14:25:34 at the midcycle, and on the list, we basically decided that neutron's provider network plugin thing should indicate if the vlan should be passed to the host
14:25:45 but never decided how that should actually work
14:26:16 * med_ should have listened in on that discussion...
14:27:10 jroll: I'm still digesting the patch in question a bit, by chance do you also have a link to the ML discussion?
14:27:18 jroll: that starts to sound like vlan transparent to me - http://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html
14:27:20 so I'm thinking only the ML2 mechanism can really know
14:27:53 mestery: http://lists.openstack.org/pipermail/openstack-dev/2015-July/069783.html
14:28:04 #link http://lists.openstack.org/pipermail/openstack-dev/2015-July/069783.html
14:28:07 Thanks jroll
14:28:21 sc68cal: Elaborate further please :)
14:28:27 sc68cal: yeah, it sounds like it... there are ML2 (or maybe not ml2?) plugins that require this today
14:28:38 https://github.com/rackerlabs/ironic-neutron-plugin
14:29:02 so maybe we do want to wait for that, maybe we don't, I'm not entirely sure
14:29:05 sc68cal: I see it now, NM
14:29:13 mestery: :)
14:29:35 jroll: VLAN transparency was released as a part of Kilo already, so it's there; the issue is which driver/plugin supports how you want to use it, I guess
14:29:44 I'm aware of the ML controversy on this - but no idea as yet about what _this_ meeting is being asked to decide or discuss
14:30:03 mestery: oh, I thought this was a liberty thing
14:30:24 neiljerram: so as someone who uses that nova patch and the above neutron plugin in production today, I would like to upstream this work
14:30:26 jroll: Nope, it's already in Kilo, made it at the very end
14:30:31 jroll: I wish http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/metadata-service-network-info.html explained the use case better, why an Ironic host needs this information in the first place and what it wants to do with it
14:30:52 neiljerram: and I don't know a ton about neutron, so I'm trying to figure out the best way to do this.
14:31:24 amuller: it's not just ironic hosts that might use this, it's really any instance in a dhcp-less context
14:31:42 jroll: You just want to know the VLAN tag so you can set the guest/host up to use that tag, right?
14:31:56 correct.
14:32:02 jroll: OK, cool. The discussion appeared quite contentious to me, so hopefully someone can see a way to bring the sides together...
14:32:07 So, that's what VLAN transparency was for :)
14:32:20 welp
14:32:31 mestery: this is the first time I'm seeing this
14:32:35 I think there may be some gaps here
14:32:55 I'll have to look at it, I suppose
14:32:59 I think what you need, in addition to the indication that you can handle passing VLAN traffic, is the actual tag as well
14:33:05 jroll: Yes, please :)
14:33:36 mestery: oh, right, the instance needs to know the vlan to tag
14:34:01 I think it's more like the vlan-aware-vms spec
14:34:07 which is why I was thinking liberty.
14:34:08 jroll: o_O
14:34:18 Yes, I think a bit like that too
14:34:42 jroll: OK, let me try and see where things are here and work with you on this to see what we can do
14:34:44 jroll: does such an instance want to send multiple networks based on vlan tags?
14:34:53 This reminds me I need to try and understand where the vlan-aware-vms work is ....
14:34:57 amotoki: sometimes :)
14:35:17 there's been no progress on vlan aware VMs AFAIK since the spec was merged (so, no code proposed), it's pretty much guaranteed to miss L I think at this point
14:35:21 jroll: if so, it looks like the vlan-aware-vms work, as mestery said.
14:35:35 sounds like we have at least a way forward though
14:35:55 ++
14:36:15 amuller: I agree, and to be honest, it was questionable even with code already proposed, so it's not looking good.
14:36:26 #action mestery to try and dig out some status on the vlan-aware-vms spec
14:36:35 jroll: We'll sort this and get back to you soon, sound good?
14:36:41 mestery: exactly, even with a full set of patches proposed today it'll probably not make it
14:36:55 mestery: cool, thank you sir
14:37:08 mestery: loop me in since I suggested it too
14:37:09 amuller: I like to think of myself as an optimist against a constant backlog of challenges, so let's see. :)
14:37:15 sc68cal: ++
14:37:35 OK, let's move on to the next topic with 23 minutes left
14:37:48 #topic Neutron Mitaki mid-cycle
14:37:54 This has come up recently
14:37:57 From a few folks
14:38:11 The reason for discussing this now is that a plan has been floated to have this in Galway, Ireland
14:38:22 I wanted input from everyone on a few things:
14:38:23 not Mtaki but Mitaka (which ends with "a")
14:38:29 1) Is Galway ok?
14:38:30 Mitaki -> Mitaka? : )
14:38:36 2) Do we even need a mid-cycle still?
14:38:43 hichihara amotoki: Thank you for your correction :)
14:38:54 mestery, at least for drinking Irish beer, so yes
14:39:05 ihrachyshka: lol :)
14:39:07 mestery: going forward, I believe we need to figure out a way to make mid-cycles more remote friendly
14:39:16 1) It's nice and close to me; 2) Don't know as I haven't been to one yet, but would like to.
14:39:19 ihrachyshka: +1 many times
14:39:34 i like ireland :)
14:39:35 regXboi: Exactly my point! If we did a virtual sprint, it may make it better for everyone, but less personal for those who attend ... in person.
14:39:59 there's a better chance I join that one; that said, those gatherings are sometimes painful for outsiders.
14:40:05 mestery: What dates are being considered for the mid-cycle?
14:40:16 john-davidge: We're looking at either early December, or early January
14:40:18 john-davidge brings up a great point
14:40:27 Which is another point.
14:40:32 it will be cold in Ireland that time!
14:40:40 I guess I want the team to really think about whether or not we want to keep doing mid-cycles
14:40:50 Before we move forward with planning the next one.
14:40:54 Early Jan would be better for me
14:41:04 Ireland in January, fantastic
14:41:09 amuller: lol ;)
14:41:20 amuller: more reason to hold the midcycle in a pub
14:41:35 If someone can get us a place in Cuba in January, I'm all for that too.
14:41:47 How many people typically attend?
14:41:54 neiljerram: 20-30
14:42:05 I've personally never attended a mid-cycle as they've been too far away (and Ireland will be as well), but I can see the value. I feel like the Liberty mid-cycle was a bit soon after the summit though
14:42:05 We've made our mid-cycles coding sprints
14:42:09 for my taste, we are good to keep everyone on the same level of participation, which suggests online is fine. summits are already quite frequent to get together
14:42:22 ihrachyshka: Exactly my thinking too
14:42:22 (but I haven't been to any)
14:42:30 Interesting, sounds like that's enough to be considered a strategic part of the dev cycle - as opposed to mostly a social thing
14:42:42 My other suggestion is to try a 3 day virtual sprint for Mitaka
14:42:44 And see how that goes
14:43:03 neiljerram: Actually, it's a coding sprint, nothing strategic
14:43:07 We've worked hard to make it only a coding sprint
14:43:10 So attendance not required
14:43:12 mestery: hmm... could we try the virtual sprint first and if that doesn't work, fall back to the mid-cycle coding sprint?
14:43:15 There is too much travel already
14:43:27 Ah, OK.
14:43:36 regXboi: We'd lose all the benefits of skipping the mid-cycle as we'd have to plan it, but may be worth thinking about
14:43:38 mestery: let's ask Fidel
14:43:43 a virtual-only coding sprint would be a really interesting experiment; like Ihar and Kyle said, travelling twice a year to the summits is already a lot
14:43:44 mlavalle: lol :)
14:43:51 amuller: ++
14:43:51 In that case, sounds like it would be very interesting to see if we can get the same group dynamic virtually
14:43:54 amuller: ++
14:44:05 I think getting the team used to collaborating more on IRC and online would be a good move personally
14:44:27 OK
14:44:33 mestery: are you thinking a 3 day coding sprint or a 72 hour coding sprint?
14:44:38 I'll email the ML to get broader participation from folks not at this meeting
14:44:43 i.e. run the coding sprint round the globe
14:44:44 regXboi: 72 hours :P
14:44:59 #action mestery to solicit input from everyone on the Neutron mid-cycle coding sprint from the ML
14:45:07 mestery: got it
14:45:09 Anything else on the mid-cycle from anyone?
14:45:13 We could also have per-feature sprints?
14:45:35 HenryG: If they're virtual, yes. In person, no way :)
14:45:44 hemna, we did have one for qos, it was a nice one
14:45:56 yep, virtual is what I meant.
14:46:03 ihrachyshka: QoS was special
14:46:10 I'd like to keep feature specific things virtual
14:46:14 Or this will get out of hand really quickly
14:46:26 it isn't already?
14:46:35 OK, I have 2 more items to cover (including neiljerram's work), so let's move on.
14:46:38 #topic Concrete plan to merge back pecan and QoS work
14:46:43 I'm time-boxing this for 4 minutes
14:46:44 :)
14:46:50 qos was in person though, remote was tough tz-wise
14:46:52 ihrachyshka blogan kevinbenton: We need a plan to bring these back
14:47:05 And as soon as possible.
14:47:07 I think QoS is more ready than pecan at this point
14:47:16 So I propose we bring back QoS first, followed by pecan.
14:47:18 Thoughts?
14:47:23 mestery, well, we're collecting pieces, but we are not exactly there yet.
14:47:29 as for order, yay for that
14:47:35 cool
14:47:51 we'll work hard to get there sometime next week though
14:47:53 #info QoS to merge back to master first, followed by pecan
14:48:01 how bad is pecan?
14:48:05 ihrachyshka: Let's keep this on the agenda for Monday next week
14:48:08 ihrachyshka: It just needs reviews :(
14:48:18 old story
14:48:27 can somebody point me at the pecan review(s)?
14:48:28 yes
14:48:33 I've held off while tests stabilized
14:48:37 ryanpetrello: https://review.openstack.org/#/q/project:openstack/neutron+branch:feature/pecan+status:open,n,z
14:48:37 :)
14:48:40 but I'd like to take another look
14:48:41 thanks!
14:48:50 ryanpetrello: Your reviews there would be AWESOME! Thanks!
14:49:02 k, I'll take a look over this today
14:49:08 thanks!
14:49:12 mestery: I'll put it on my list to look tomorrow am
14:49:14 OK
14:49:17 regXboi: Thanks!
14:49:22 Let's move on to the last item
14:49:44 #topic neiljerram's routed network and DHCP changes
14:49:50 neiljerram: The floor is yours for 10 minutes ;)
14:49:55 Thanks - https://review.openstack.org/198439
14:50:06 #link https://review.openstack.org/198439
14:50:15 OK, so on the one hand there's lots of great discussion about how best to model routed networking
14:50:36 Thanks to everyone participating there - in review and ML threads.
14:51:09 It's looking, though, like that will take lots more time to think through.
14:51:44 Somewhat independently, though, there's a set of DHCP agent changes that I've put up for review, and I'd really like to get a general feeling on those.
14:52:18 neiljerram: do these DHCP agent changes stand independently of the spec?
14:52:22 link?
14:52:23 https://review.openstack.org/206078, plus its 3 prerequisites
14:52:28 #link https://review.openstack.org/206078
14:52:35 neiljerram: Same question as regXboi
14:52:54 Technically yes - because variation in the DHCP agent is driven by an interface_driver config
14:53:31 Long term, I wonder if that's correct - seems that maybe it should be dynamic based on network_type. But for now we have interface_driver.
14:54:19 neiljerram: I think you've stumbled into a quagmire here, as this also relates to the work that carl_baldwin is doing, as you know :)
14:54:21 Therefore, given a few meaningful customization points in the DHCP agent, I can write a custom interface_driver that uses a certain combination of those to produce the behaviour that I'm looking for.
14:54:36 At this point in the cycle, it's looking like this may end up being shelved for Mitaka, but let's see
14:54:52 mestery: Oh, absolutely, yes. But that's all on the modelling side, which I think can be separated from the DHCP agent
14:54:52 neiljerram: The DHCP changes may end up being ok, but let's see what happens during review
14:55:03 neiljerram: Exactly, we're on the same page :)
14:55:04 I think an interface driver should represent how to connect to a network and it should not represent a different behavior....
14:55:06 Cool, thanks.
14:55:35 The big practical benefit, for my project, if we could get the DHCP changes agreed, would be working with vanilla upstream Neutron...
14:56:20 neiljerram: Yup, agreed, and I think that helps you out, so let's see if we can figure those out.
14:56:23 :)
14:56:30 I have a lot of detail work to do on the DHCP reviews, but it's nice to have a somewhat positive feeling, so thanks.
14:56:35 I encourage folks to review neiljerram's patches posted above
14:56:42 neiljerram: Absolutely :)
14:56:44 OK
14:56:46 3 minutes or so left
14:56:49 I'm done - thanks for your attention!
14:56:49 #topic Open Discussion
14:57:07 Just one final note to encourage reviews of Liberty-3 specs alongside Critical/High priority bugs
14:57:16 We have a neutron ml2 agent for powervm out in stackforge (https://github.com/stackforge/neutron-powervm). At the nova mid-cycle meetup it was brought up that we should look at moving it under openstack/networking-powervm to fit the new third-party drivers decomp model. How should we handle proposing the change? Are there steps beyond the required changes to governance, project-config, and working with the infra team on the rename we should be aware of?
14:57:51 HenryG: Do you have a link for adreznec off hand?
14:58:14 adreznec: You basically need to propose a project-config change to move it to the openstack namespace, and make that dependent on a governance change adding it to the list of neutron repos
14:58:20 Add me as a reviewer to both of those and I'll ACK them.
14:58:24 It's that simple :) 14:58:56 mestery: awesome, I have the patches already written up. Just wanted to make sure I brought it up here first. I'll get those proposed here today 14:59:03 kevinbenton: I have a question about implementation plan of Distributed SNAT. Could you talk after the meeting? 14:59:06 mestery, thank you for your great ACK and review for my ML2 plugin patch. 14:59:17 adreznec: Awesome! 14:59:24 mestery: Would love to see #link https://review.openstack.org/#/c/158697 merge soon. Recent reviews have been largely positive and/or nit-picking. Getting this patch in will help us concentrate on the agent-side changes during the L-3 timeframe. 14:59:25 yushiro: yw 14:59:28 OK, thanks folks! 14:59:29 mestery, I've posted the patch for updating sub_project.rst. https://review.openstack.org/#/c/206293/ Would you please review it? 14:59:35 We'll see you all in #openstack-neutron 14:59:36 yushiro: Will look, yes. 14:59:39 #endmeeting
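[Editor's note on the repo move mestery outlines for adreznec: the two dependent reviews amount to small YAML entries. A sketch of roughly what they contained — the exact schema of `gerrit/projects.yaml` in project-config and `reference/projects.yaml` in governance varied by cycle, so field names here are illustrative, not authoritative:]

```yaml
# project-config: gerrit/projects.yaml — create the repo in the openstack
# namespace, importing history from the old stackforge location.
- project: openstack/networking-powervm
  upstream: https://github.com/stackforge/neutron-powervm
  description: PowerVM ML2 agent for Neutron

# governance: reference/projects.yaml — add the repo under the Neutron
# team's deliverables (structure illustrative).
# neutron:
#   deliverables:
#     networking-powervm:
#       repos:
#         - openstack/networking-powervm
```

The project-config change is then marked dependent on the governance change (e.g. via a Depends-On footer in the commit message), matching the order mestery describes: governance first, then the namespace move, with the infra team handling the actual rename.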