22:00:19 <armax> #startmeeting neutron_drivers
22:00:20 <openstack> Meeting started Thu Mar 31 22:00:19 2016 UTC and is due to finish in 60 minutes.  The chair is armax. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:23 <openstack> The meeting name has been set to 'neutron_drivers'
22:00:44 <armax> welcome to this exciting episode of neutron drivers, Newton Series
22:00:45 <kevinbenton> hi
22:00:48 <haleyb> hi
22:00:51 <sidk> hi
22:01:04 <armax> you ready for some action?
22:01:10 <HenryG> no
22:01:20 <dougwig> get off my lawn.
22:01:31 <Sukhdev> HenryG : :-)
22:01:31 <armax> whose lawn? yours?
22:01:32 <armax> pff
22:01:40 <kevinbenton> remember to register for the austin party!
22:01:46 <dougwig> link?
22:01:46 <armax> whose party?
22:01:48 <armax> yours?
22:01:49 <armax> pff
22:02:04 <kevinbenton> https://www.eventbrite.com/e/stackcity-austin-a-community-festival-for-stackers-tickets-24174378216
22:02:21 <armax> #action kevinbenton to buy beer for anyone who registers for this party
22:03:03 <armax> ok, let’s dive in, but before we do that, I’d like to share a few reminders
22:03:46 <armax> Reviewing neutron-specs changes is just as important as reviewing neutron ones
22:04:06 <armax> don’t forget that as drivers members you’re the custodians of the +A
22:04:16 <armax> without that, specs stall
22:04:22 <armax> and no-one wants that
22:05:02 <armax> so please take some time out of your busy schedule to go over the backlog, nudge contributors, and review and approve pending changes in that repo
22:05:28 <carl_baldwin> Not a long list
22:05:28 <armax> the backlog is rather small now, so it’s pretty easy to clear
22:05:29 <carl_baldwin> #link https://review.openstack.org/#/q/status:open+project:openstack/neutron-specs
22:05:31 <dougwig> oh boy, stackalytics /90 on that repo is grim.
22:05:45 <armax> carl_baldwin: thanks, you beat me to it
22:05:57 <armax> carl_baldwin: there’s a link on the drivers page for your convenience
22:06:02 <armax> #link https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
22:06:19 <kevinbenton> well i personally de-prioritized all spec reviews during the last part of the cycle to work on stability and bugs
22:06:34 <armax> kevinbenton: fair enough, but now it’s time to switch gear again
22:06:36 <armax> hence the reminder
22:06:49 <armax> #link https://review.openstack.org/#/c/286413/ needs some love
22:07:02 * amuller slotting spec/RFE reviews into his calendar
22:07:03 <armax> #link https://review.openstack.org/#/c/286413/ is close
22:07:14 <armax> sorry
22:07:16 <armax> #link https://review.openstack.org/#/c/225384/
22:07:24 <armax> #link https://review.openstack.org/#/c/190285/
22:07:40 <armax> as for the latter, we’d need to nudge the submitter to respin
22:07:55 <armax> bear in mind that we need to repurpose specs for Newton
22:08:17 <armax> #link https://github.com/openstack/neutron-specs/tree/master/specs/backlog/mitaka
22:08:49 <armax> during the next team meeting we’ll go over N-1/Mitaka backlog and remind people to nudge these to the right release
22:09:12 <armax> but if you’re proactive (for the stuff you’re approver of) then we might save a day or two
22:09:16 <dougwig> armax: are we fast-tracking missed and backlog specs again, or do they need to go through this meeting?
22:09:33 <armax> they can be fast-tracked
22:09:56 <armax> no need to go back to the end of the queue, but it’d be good to understand if they need new owners/approvers
22:10:12 <armax> hence I’d like to have a recorded conversation
22:10:33 <armax> dougwig: does that answer your question?
22:10:36 <dougwig> yep
22:11:03 <dougwig> well, no.
22:11:08 <dougwig> your answers are kind of contradictory.  :)
22:11:24 <armax> so are yours :)
22:11:32 <armax> ok
22:11:36 <armax> let me give you an example
22:11:44 <armax> vlan-aware-vms
22:11:50 <armax> the spec is in the backlog
22:12:10 <armax> it needs to be resubmitted to Newton
22:12:33 <armax> that doesn’t require a full blown spec approval process
22:12:59 <armax> but I’d like to understand if the original approvers/owners assigned in Mitaka intend to continue working on it in Newton
22:13:09 <armax> if not, then before fast tracking we’d need to find new owners
22:13:18 <armax> otherwise what’s the point in fast tracking?
22:13:22 <armax> you with me now?
22:13:47 <dougwig> so, simple +2/+A, but bring it up briefly in the meeting to verify owner/approver before the +A?
22:13:53 <armax> correct
22:13:57 <dougwig> with you
22:13:57 <Sukhdev> armax : good explanation - reasonably clear now
22:14:10 <amotoki> sounds fair
22:14:13 <armax> dougwig, Sukhdev sorry I wasn’t crystal clear
22:14:15 <carl_baldwin> armax: speaking of vlan-aware-vms, does it currently have an active approver?
22:14:37 <armax> rossella_s was in charge of it
22:14:46 <armax> we’d need to check with her if her priorities have changed
22:15:07 <armax> bence too, I haven’t seen him being super active on his patch
22:15:38 <armax> the last spin of https://review.openstack.org/#/c/273954/ dates back to Feb 2
22:15:42 <armax> *sigh
22:15:50 <Sukhdev> armax : Ironic needs it as well - if no one comes forward, I will try to find a taker for this
22:15:54 * armax hurts himself
22:16:12 <armax> Sukhdev: I am sure we have many people interested
22:16:16 <kevinbenton> whoever takes over this really needs to understand the l2 agent well for the implementation
22:16:19 <armax> but no-one with the right endurance :)
22:16:27 <kevinbenton> otherwise it will stall
22:16:33 <armax> kevinbenton: exactly
22:16:41 <armax> but this isn’t just l2 agent alone
22:16:46 <armax> pitfalls may be all over the place
22:16:51 <dougwig> and all of those people committed themselves.
22:16:59 <armax> anyhow we’ll discuss this in the right venue
22:17:07 <armax> dougwig: to a mental institution?
22:17:20 <HenryG> lol
22:17:26 * armax is unclear
22:17:26 <Sukhdev> :-)
22:17:38 <armax> I am funny aren’t I?
22:17:40 <armax> anyhoo
22:17:45 * kevinbenton thinks dougwig is the active peanut gallery
22:17:50 <armax> if there’s nothing else, shall we dive in?
22:17:58 <HenryG> I'd like to quickly revisit an RFE from last week: bug 1560003
22:18:00 <openstack> bug 1560003 in neutron "[RFE] Creating a new stadium project for BGP Dynamic Routing effort" [Wishlist,Triaged] https://launchpad.net/bugs/1560003 - Assigned to vikram.choudhary (vikschw)
22:18:10 <armax> HenryG: you bully
22:18:12 <HenryG> It was pointed out to me that there is an existing repo, networking-bgpvpn
22:18:15 <armax> HenryG: go ahead
22:18:22 <armax> HenryG: aye
22:18:25 <HenryG> Do we want a different repo for BGP dynamic routing?
22:19:07 <armax> it’s my understanding that the two are not quite the same
22:19:11 <HenryG> Some in the team indicated that they do want it separate
22:19:15 <mickeys> There are two different architectures. Many discussions in the past about the differences and the need for both, which has always resulted in both moving ahead so far
22:19:35 <dougwig> i wouldn't think they are the same.
22:19:44 <kevinbenton> bgpvpn is more closely related to vpnaas than it is to the BGP dynamic routing that recently merged
22:19:47 <armax> but to be fair, I did make the point repeatedly that neutron-bgp is probably not appropriate as a name
22:19:58 <dougwig> i assumed it'd be networking-bgp
22:20:26 <HenryG> That's all I wanted cleared up
22:20:26 <armax> HenryG: actually I just reviewed both bug and patch from vikram
22:20:31 <armax> HenryG: thanks boss
22:20:44 <armax> let’s keep an eye on this
22:20:45 <HenryG> The naming can be discussed in the bug
22:21:05 <dougwig> the ultimate bikeshed topic.
22:21:12 <armax> HenryG, dougwig: ack
22:21:20 <dougwig> openstack/networking-BigGreenPenis
22:21:25 * salv-orlando did anyone say bikeshedding?
22:21:45 <armax> oh my, the painter awoke
22:21:46 <dougwig> ok, too many chocolate covered espresso beans.
22:21:50 <armax> salv-orlando: go back to bed
22:21:58 * kevinbenton awkward silence
22:22:02 <armax> :)
22:22:04 <armax> ok
22:22:05 <armax> bug 1507499
22:22:06 <openstack> bug 1507499 in neutron "[RFE] Centralized Management System for testing the environment" [Wishlist,Triaged] https://launchpad.net/bugs/1507499
22:22:35 <armax> this is not going away easily
22:22:41 <kevinbenton> seems to be disagreement on what people want from this
22:22:42 <armax> we had a new related proposal on bug 1563538
22:22:44 <openstack> bug 1507499 in neutron "duplicate for #1563538 [RFE] Centralized Management System for testing the environment" [Wishlist,Triaged] https://launchpad.net/bugs/1507499
22:23:08 <armax> we punted to the mid-cycle, we’re close enough to the summit that we could punt there easily
22:23:33 <dougwig> extension or separate command-line tool, this could proceed in a separate repo, to see if it goes anywhere.
22:23:41 <kevinbenton> yeah, maybe a session on what kind of debugging we want built in
22:23:47 <amuller> or a friday session
22:24:17 <armax> I still think that we’d need to augment our API to report more sophisticated health information on a resource basis
22:24:22 <armax> to start off
22:24:37 <armax> then we can worry about how we provide the toolkit to implement remedy actions
22:25:09 <salv-orlando> I think both armax and dougwig provided good arguments to close the discussion on this rfe
22:25:09 <boden> I added some notes to #link https://etherpad.openstack.org/p/neutron-troubleshooting
22:25:16 <amuller> as long as we actually come to an agreement for this cycle I'm game
22:25:23 <armax> I’ll make the point on the bug report again, see if I can find proselytes
22:25:32 * salv-orlando on this note goes back to his sleep
22:25:36 <armax> amuller: follow me and I’ll show you the light
22:25:50 <armax> salv-orlando: have a good one, we love you
22:26:08 <armax> bug 1520719
22:26:09 <openstack> bug 1520719 in neutron "[RFE] Use the new enginefacade from oslo_db" [Wishlist,Triaged] https://launchpad.net/bugs/1520719 - Assigned to Ann Kamyshnikova (akamyshnikova)
22:26:20 <armax> HenryG: I look at you to get this resolved
22:26:33 <HenryG> Approve it and it shall be done
22:26:40 <dougwig> yep
22:26:45 <armax> HenryG: I’d like to see a plan first
22:26:47 <armax> but aye
22:27:06 <armax> like what to expect in the course of the release
22:27:12 <armax> but thanks for taking ownership
22:27:16 <armax> you shall be rewarded
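For context on what the enginefacade switch involves: the new oslo_db enginefacade scopes database transactions with decorators and context managers instead of hand-managed sessions. A minimal sketch of the documented oslo_db pattern follows; the model and function names are hypothetical, not actual Neutron code.

```python
# Sketch of the "new enginefacade" usage pattern from oslo_db.
# ExampleModel and the functions below are illustrative only.
from oslo_db.sqlalchemy import enginefacade, models
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

BASE = declarative_base()


class ExampleModel(BASE, models.ModelBase):
    """Hypothetical table used only to show the pattern."""
    __tablename__ = 'example'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))


@enginefacade.reader
def get_objects(context):
    # The decorator opens a read-only transaction and exposes it as
    # context.session for the duration of the call (the context must be
    # enginefacade-aware, e.g. an oslo.context RequestContext).
    return context.session.query(ExampleModel).all()


def create_object(context, name):
    # Writer transactions can also be scoped explicitly.
    with enginefacade.writer.using(context):
        obj = ExampleModel(name=name)
        context.session.add(obj)
    return obj
```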
22:27:23 <armax> bug 1530331
22:27:24 <openstack> bug 1530331 in neutron "[RFE] [ipv6] Advertise tenant prefixes from router to outside" [Wishlist,Triaged] https://launchpad.net/bugs/1530331
22:28:36 <carl_baldwin> Hi
22:28:37 <armax> shall we consider something within the scope?
22:28:57 <carl_baldwin> It would be a third way to get IPv6 routes back in to tenant networks.
22:29:07 <carl_baldwin> It could be the simplest way though.
22:29:29 <haleyb> yes, if you could stuff a /128 in there
22:29:54 <carl_baldwin> haleyb: I wasn't even thinking about host routes yet.
22:30:00 <carl_baldwin> haleyb: That would take some more thinking.
22:30:05 <armax> to be honest, I’d rather choose a single way to address the specific use case
22:30:47 <haleyb> oh, the CVR doing this
22:31:26 <carl_baldwin> The thing is, I haven't heard much demand for this.
22:31:33 <carl_baldwin> Maybe asking operators would be good.
22:31:37 <armax> carl_baldwin, haleyb do I read that this needs more baking until it is formalized into an actionable proposal?
22:31:55 <carl_baldwin> armax: I think so.
22:32:06 <armax> carl_baldwin: most likely we’ll have an operators-sponsored session at the summit
22:32:13 <haleyb> yes, more baking
22:32:15 <armax> carl_baldwin: let’s make sure we get the time to talk about this
22:32:22 <carl_baldwin> armax: ++
22:32:28 <armax> haleyb: you can’t leave the cake in the oven unattended
22:32:30 <armax> it’ll burn!
22:32:42 <armax> haleyb: can I trust you and carl_baldwin will see this through?
22:32:44 <carl_baldwin> armax: set a timer.  ;)
22:32:44 * haleyb is already hungry and that didn't help
22:32:55 <armax> haleyb: but it’s gonna burn
22:33:01 <armax> it’s not gonna taste good
22:33:07 <armax> bug 1552631
22:33:09 <openstack> bug 1552631 in neutron "[RFE] Bulk Floating IP allocation" [Wishlist,Triaged] https://launchpad.net/bugs/1552631
22:33:13 <carl_baldwin> armax: I'll take it.
22:33:22 <armax> carl_baldwin: don’t oversubscribe
22:33:26 <armax> I hear haleyb is a slacker
22:33:39 * armax spreads false rumors
22:33:40 <haleyb> i can help too, only one RFE on my plate
22:33:50 <dougwig> honestly, i think this is a horizon rfe, and not appropriate for neutron api
22:33:52 * carl_baldwin passes it on to haleyb
22:34:14 <carl_baldwin> I added one comment to this one about contiguous fip allocations.
22:34:21 <armax> carl_baldwin: ack
22:34:29 <carl_baldwin> But, that might be a different request altogether.
22:34:31 <armax> talking about FIPs
22:35:08 <armax> and contiguous space
22:35:23 <armax> I wonder if we
22:35:37 <armax> will end up causing users to stampede on each other
22:36:23 <armax> I appreciate some customers may indeed ask for this, but at the same time, I think it’s right to say no if that means taking the rope away from users
22:36:24 <carl_baldwin> armax: In what way?
22:36:58 <armax> if you have concurrent requests asking for the same contiguous space
22:37:02 <dougwig> if there's a contiguous fip rfe, we should deal with that separately.  if this is really just the horizon use case, the ui should be driving the api.
22:37:18 <armax> dougwig: fair point
22:37:25 <carl_baldwin> dougwig: +1 I fear I've expanded the scope of this rfe with my comment.
22:37:44 <armax> carl_baldwin: but that’s an important aspect nonetheless
22:38:01 <armax> amotoki: what do you reckon?
22:38:12 <armax> you’re the resident Horizon SME
22:38:14 <amotoki> I think we can just call the API multiple times as needed.
22:38:57 <armax> meaning we can provide a client side binding that accepts # of FIPs
22:38:57 <amotoki> if someone wants 2 FIPs in the GUI, it is tougher than running the CLI two times...
22:38:58 <armax> ?
22:39:19 <armax> and return a list of FIPs?
22:39:28 <haleyb> armax: i just added a comment there as well, i'm remembering a customer wanting say a /28 contiguous, so they had one SG rule.  I think they were using the IPs on their end as well so it impacted their VPN config
22:39:39 <dougwig> amotoki: django does two api calls instead of one, in that case, i think, right?
22:39:41 <amotoki> there are two candidate places to implement it: horizon API wrappers or the neutron client library.
22:39:54 <amotoki> dougwig: yes
22:39:55 <armax> haleyb: ok
22:40:16 <armax> amotoki: I can see value in having the client library binding
22:40:22 <haleyb> we wrote a "trawler" to scoop up all the blocks and keep them just for them
22:40:49 <dougwig> armax: you hate orchestration, but you're in favor of a binding that a simple for loop can handle?
22:40:52 <amotoki> armax: agree to some extent.
22:41:18 <armax> dougwig I do hate orchestration server side
22:41:18 <amotoki> if we support it in the client library, how about the openstacksdk side? do we need to support it in openstacksdk?
22:41:39 <armax> dougwig: but not all orchestration
22:41:45 <armax> if I can’t live without
22:41:48 <dougwig> i'm opposed to an api change, mildly opposed to a client orchestration, but will lose sleep over neither.
22:42:24 <armax> dougwig: good, then we’re on the same page
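To make the client-side option concrete, here is a rough sketch of what such a binding could look like on top of python-neutronclient. The helper name, rollback behaviour, and connection details are assumptions rather than an agreed design, and it does nothing about allocating contiguous addresses.

```python
# Hypothetical client-side "bulk" floating IP helper: a plain loop over the
# existing create_floatingip call, with best-effort cleanup on failure.
from neutronclient.v2_0 import client as neutron_client


def allocate_floatingips(neutron, external_network_id, count):
    """Allocate `count` floating IPs from the given external network."""
    allocated = []
    try:
        for _ in range(count):
            body = {'floatingip': {'floating_network_id': external_network_id}}
            fip = neutron.create_floatingip(body)['floatingip']
            allocated.append(fip)
    except Exception:
        # Roll back whatever was allocated so the caller gets all or nothing.
        for fip in allocated:
            neutron.delete_floatingip(fip['id'])
        raise
    return allocated


# Usage sketch (credentials/endpoint values are placeholders):
# neutron = neutron_client.Client(username='demo', password='...',
#                                 tenant_name='demo',
#                                 auth_url='http://controller:5000/v2.0')
# fips = allocate_floatingips(neutron, ext_net_id, count=4)
```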
22:42:34 <armax> bug 1552680
22:42:35 <openstack> bug 1552680 in neutron "[RFE] Add support for DLM" [Wishlist,Triaged] https://launchpad.net/bugs/1552680 - Assigned to John Schwarz (jschwarz)
22:42:44 <armax> see if you lose sleep over this one
22:42:51 * jschwarz ^_^
22:43:55 <dougwig> this could either be awesome or a disastrous nightmare of epic proportions.  i don't think it has a middle ground.
22:44:27 <armax> I wonder if this is an opportunity for us to take and experiment with this for ‘new’ stuff and leave stuff we built up until now alone
22:44:32 <kevinbenton> without adopting it in every place that touches an object protected by a lock, it's not buying us much
22:44:42 <jschwarz> if it does end up being a disastrous nightmare as dougwig says, we can easily revert back
22:44:43 <armax> at least until we’ve given ourselves enough time to master the space
22:45:36 <jschwarz> I agree with kevinbenton - the key is identifying specific places that can benefit from it greatly.
22:45:39 <armax> so I wonder if we can make a recommendation that for new work items this be considered
22:46:17 <armax> potentially overlapping with the ovo work that ihar, rossella et al are working on
22:46:36 <armax> not sure if I am making any sense
22:46:47 <amuller> armax: you want to lock an object every time it's mutated, at the OVO layer?
22:46:52 <armax> I’ll need to give this some more thought
22:47:47 <armax> amuller: I am saying that I am not sure I am prepared to warrant refactoring to adopt DLM
22:47:51 <amuller> kevinbenton: Did you give any thought to locking at that low of a layer vs locking at higher layers like jschwarz's PoC?
22:48:06 <dougwig> aside from our ability to make use of this well, is tooz stable enough to wrap our stuff around?
22:48:07 <armax> but I am definitely open to seeing how this may play out in the context of a new effort
22:48:16 <amotoki> do we have the first real need in the L3 area? if so, we can try a PoC without overlapping efforts
22:48:27 <amuller> jschwarz: are you aware of any users of tooz's DLM API?
22:48:30 <kevinbenton> amuller: locking that low won't help when related objects are involved (e.g. a router and its HA network)
22:48:41 <amuller> kevinbenton: aye
22:48:48 <jschwarz> amuller, dougwig, ceilometer uses it
22:49:11 <amuller> jschwarz: the locking API or grouping API?
22:49:29 <jschwarz> amuller, grouping iirc
22:49:56 <jschwarz> amuller, https://github.com/openstack/ceilometer/blob/master/ceilometer/coordination.py
22:50:21 <amuller> so we'd probably be adding tooz and contributing to it in parallel
22:50:38 <armax> let’s keep brainstorming on this one
22:51:16 <amuller> dougwig: So I don't know if anyone can make that determination at this point
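For those unfamiliar with tooz, a minimal sketch of its distributed lock API applied to the router/HA-network example kevinbenton mentioned; the backend URL, member id, and lock name are placeholders, not a proposed design.

```python
# Minimal tooz DLM sketch: serialize work on a router and its HA network
# across multiple neutron-server workers/hosts.  Backend and names are
# illustrative only.
from tooz import coordination


def update_ha_router(router_id):
    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'neutron-server-1')
    coordinator.start()
    try:
        lock = coordinator.get_lock(('router-%s' % router_id).encode())
        with lock:
            # Mutate the router and its HA network here; any other worker
            # taking the same lock name blocks until this one releases it.
            pass
    finally:
        coordinator.stop()
```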
22:51:40 <armax> bug 1554869
22:51:42 <openstack> bug 1554869 in neutron "[RFE] Make API errors conform to API working group schema" [Wishlist,Triaged] https://launchpad.net/bugs/1554869 - Assigned to xiexs (xiexs)
22:51:50 <jschwarz> dougwig, I think that part of the stability of tooz is the stability of the backend we choose
22:51:55 <dougwig> that means the answer is unknown, so no.  i personally don't want to risk neutron's adoption momentum for something like that, and if we do it, i'd want to see it behind a config/enable toggle.
22:52:50 <dougwig> are these api error changes additive, or will we be breaking backwards compat to switch?
22:53:00 <armax> dougwig: that’s what I am questioning too
22:53:27 <armax> from the spec proposal it looks like they are not backwards compatible
22:54:25 <armax> has anyone given this any thought?
22:54:49 <dougwig> just my opinion, but i'm in favor of applying those standards to additions and any new v3 api, but not to potentially breaking users.
22:55:15 <carl_baldwin> +1
22:55:19 <armax> yes, but before doing that we’d need to see what that v3 api looks like and who is interested in working on one
22:55:20 <armax> :)
22:55:25 <amotoki> v3 or versioned API
22:55:38 <dougwig> "next major rev, if", then.  :)
22:55:58 <armax> amotoki: rather the latter I’d say, but a bump needs to happen nonetheless
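For a sense of why this is a compatibility concern: Neutron's current error body and the API working group style error body differ in shape. Roughly, with field names paraphrased and values purely illustrative (the API-WG errors guideline is the authoritative reference):

```python
# Current Neutron error body (simplified):
current_error = {
    "NeutronError": {
        "type": "NetworkNotFound",
        "message": "Network 1234 could not be found.",
        "detail": "",
    }
}

# API working group style error body (paraphrased from the errors guideline;
# field names and values here are illustrative):
api_wg_error = {
    "errors": [{
        "request_id": "req-xxxx",
        "code": "neutron.network.not_found",
        "status": 404,
        "title": "Network not found",
        "detail": "Network 1234 could not be found.",
        "links": [{"rel": "help", "href": "https://developer.openstack.org"}],
    }]
}
```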
22:56:11 <armax> bug 1557290
22:56:12 <openstack> bug 1557290 in neutron "[RFE]DVR FIP agent gateway does not pass traffic directed at fixed IP" [Wishlist,Triaged] https://launchpad.net/bugs/1557290
22:56:26 <armax> I agree with carl_baldwin that this should be treated as a regular bug
22:56:33 <carl_baldwin> +1
22:57:01 <armax> carl_baldwin: do you see many architectural changes involved?
22:57:22 <carl_baldwin> armax: I don't think so.
22:57:26 <armax> ok
22:57:29 <armax> good to know
22:57:38 <armax> bug 1558812
22:57:39 <openstack> bug 1558812 in neutron "[RFE] Enable adoption of an existing subnet into a subnetpool" [Wishlist,Triaged] https://launchpad.net/bugs/1558812
22:58:18 <armax> this is interesting, and I think it makes sense; how that is going to end up materializing I am not quite sure, so I’d advise us to iterate on a spec
22:58:29 <dougwig> goes against pets vs cattle, but i guess this is the real world we're living in.
22:58:38 <carl_baldwin> My biggest concern here is how to make an API that ensures consistency on a network.
22:59:14 <armax> perhaps iterating on a spec may help us shed some light on these types of issues
22:59:19 <carl_baldwin> dougwig: Imagine an external network with lots of things on it.  You want to tear them all down?  You can't kill your whole herd.
22:59:36 <armax> too bad we still have one bug left from the lot
22:59:47 <armax> not sure if we can cover it in the last few seconds left
22:59:49 <armax> bug 1563069
22:59:50 <openstack> bug 1563069 in neutron "[RFE] Centralize Configuration Options" [Wishlist,Triaged] https://launchpad.net/bugs/1563069
23:00:14 <armax> I see no point in rushing on this, but right now I have a kneejerk reaction to say ‘go away’
23:00:19 <armax> and on that note
23:00:22 <armax> thank you all
23:00:22 <dougwig> carl_baldwin: there's a place for herds of pets.
23:00:24 <armax> #endmeeting