20:00:06 <johnsom> #startmeeting Octavia
20:00:06 <openstack> Meeting started Wed Feb 24 20:00:06 2016 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:10 <openstack> The meeting name has been set to 'octavia'
20:00:11 <sbalukoff> Hello, hello!
20:00:13 <madhu_ak> o/
20:00:13 <bharathm> Hii o/
20:00:13 <xgerman> o/
20:00:14 <johnsom> Hi there
20:00:16 <blogan_mobile> Hi
20:00:21 <ajmiller> o/
20:00:27 <evgenyf> o/
20:00:33 <xgerman> blogan_mobile? another towage?
20:00:38 <minwang2> o/
20:00:48 <johnsom> #topic Announcements
20:00:48 <TrevorV> o/
20:00:57 <johnsom> L7 merged in Octavia!
20:00:59 <blogan_mobile> Nope, just out and about
20:01:02 <xgerman> yeah!!!
20:01:03 <TrevorV> WOOT
20:01:05 <sbalukoff> YAAAAAY!
20:01:20 <xgerman> blogan_mobile I would say the same when I am towed ;-)
20:01:28 <markvan> o/
20:01:29 <sbalukoff> Now we need to get it merged in neutron-lbaas!  (And yes, I'm working on that.)
20:01:29 <johnsom> Thanks to sbalukoff, rm_work, johnsom for hanging out into the evening last night to get that done
20:01:44 <sbalukoff> Thanks to y'all for the extensive testing and code review.
20:01:54 <sbalukoff> I didn't know this beforehand, but apparently it was 6000+ lines of code.
20:01:56 <xgerman> yeah, you guys rock!!
20:02:17 <TrevorV> Yes, sbalukoff, twas a feat :D
20:02:22 <johnsom> It was fun to see 7+ patches chained up in the merge gate
20:02:45 <johnsom> Priority patches needing review
20:02:58 <johnsom> L7 tracking etherpad
20:03:05 <johnsom> #link https://etherpad.openstack.org/p/lbaas-l7-todo-list
20:03:24 <TrevorV> I may add the single-create review before this meeting is over. :)
20:03:26 <johnsom> There are still neutron-lbaas and CLI patches that need to merge.  I think both have open bugs.
20:03:28 <xgerman> I think we have more than L7
20:03:40 <xgerman> #link https://review.openstack.org/#/c/282587/5
20:03:42 <sbalukoff> Only a little left on that--  but note that post-L7 bugs have been captured in launchpad and aren't on that etherpad.
20:03:45 <johnsom> TrevorV Throw it in here
20:03:53 <xgerman> #link https://review.openstack.org/#/c/282113/2
20:03:58 <TrevorV> Will do
20:04:03 <sbalukoff> johnsom: You are correct.
20:04:27 <xgerman> #link  https://review.openstack.org/#/c/284340/2
20:04:28 <johnsom> There are still some open horizon patches too: https://review.openstack.org/#/q/project:openstack/neutron-lbaas-dashboard+status:open
20:04:37 <xgerman> #link https://review.openstack.org/#/c/268237/7
20:04:41 <madhu_ak> #link https://review.openstack.org/#/c/172199/
20:04:41 <minwang2> this patch has been there for a while, it is good to go #link https://review.openstack.org/#/c/272344/
20:04:45 <johnsom> I saw a demo of TLS via horizon panels today, so again, good progress.
20:04:57 <xgerman> yep, horizon looks real good
20:05:07 <sbalukoff> johnsom: Given the feature freeze deadline next Monday, what are the high priorities to review to get in before the freeze?
20:05:23 <sbalukoff> TrevorV's single-create patch, and Min's anti-affinity patch?
20:05:37 <bharathm> sbalukoff: and cascade-delete as well
20:05:50 <sbalukoff> Oh! Cool-- that's ready for review? Great!
20:05:54 <madhu_ak> octavia scenario tests with tempest plugin here: #link https://review.openstack.org/#/c/172199/
20:05:55 <xgerman> cascading delete (see my patches) - Horizon needs that
20:05:58 <johnsom> Priorities off my head are: L7, horizon panels, get-me-an-LB, delete-me-an-LB, anti-affinity
20:06:22 <xgerman> s/delete-me-an-LB/cascading-delete/g
20:06:41 <sbalukoff> And the n-lbaas L7 and L7 CLI stuff. :)
20:06:42 <johnsom> Sorry for taking artistic license....
20:06:52 <sbalukoff> :)
20:07:03 <rm_work> o/ (sorry late)
20:07:07 <sbalukoff> I think that's all doable.
20:07:08 <xgerman> no worries — cascading delete is especially critical since we won’t get an extension per dougwig
20:07:24 <sbalukoff> xgerman: Ok, good to know.
20:07:41 <johnsom> Yeah, all of that is in-flight so definitely possible
20:07:44 <xgerman> yeah “orchestration” ...
20:08:13 <johnsom> #topic Octavia/glance coupling for images (ihar)
20:08:24 <xgerman> ok, thanks for picking that up
20:08:39 <johnsom> Ugh, it looks like we just lost him
20:08:39 <xgerman> basically uses glance-tags instead of image ids
20:08:58 <xgerman> so you can change the image WITHOUT having to reboot the octavia control plane
20:09:10 <johnsom> Yeah, so this is around being able to swap amphora images without restarts
20:09:19 <sbalukoff> Ok.
20:09:27 <xgerman> downside you would couple closely with glance
20:09:31 <blogan_mobile> Couldn't we just store the image ID in the dB to get that?
20:09:35 <johnsom> I had originally thought we would handle a signal to reload the octavia config.
20:09:36 <xgerman> so if you store your images elsewhere
20:09:43 <sbalukoff> Doesn't seem like a bad idea. Though I will note that restarting the Octavia controller worker is not disruptive in most cases.
20:10:06 <rm_work> yeah
20:10:08 <rm_work> since queue
20:10:10 <sbalukoff> But this is probably important if we want to realistically support multiple controller workers.
20:10:15 <xgerman> blogan_mobile yes, but that would be reinventing the wheel
20:10:15 <rm_work> but a signal wouldn't be bad either
20:10:23 <johnsom> I'm not sure if the oslo service stuff makes signals easy or not
20:10:24 <dougwig> sbalukoff: it's likely a horizon error dialog down the road...
20:10:46 <sbalukoff> johnsom: A signal to reload the octavia config is also probably a good idea.
20:10:46 <rm_work> but yeah, restarting the controller-worker isn't horrible -- except if it's mid-operation on something
20:10:57 <xgerman> task board?
20:11:03 <blogan_mobile> Job board
20:11:03 <rm_work> i don't know whether it immediately acks the queue, or if the job would go back on
20:11:07 <rm_work> JOB BOARD
20:11:11 <sbalukoff> job bored.
20:11:11 <johnsom> The signal would buy us being able to reload for reasons beyond image ID
20:11:12 <rm_work> >_< yes that'd do it
20:11:15 <sbalukoff> ;)
20:11:25 <rm_work> yeah i agree though johnsom, reload signal seems ideal
20:11:37 <xgerman> #action johnsom file reload in LP
20:11:37 <rm_work> this can't be uncommon
20:11:42 <sbalukoff> johnsom: +1
20:11:48 <rm_work> xgerman: i'm sure he already is doing it right now
20:11:49 <rm_work> :P
20:11:55 <xgerman> :-)
20:12:08 <sbalukoff> HAha!
20:12:09 <johnsom> Ok.  We will work on a signal.
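[Editor's note: a minimal sketch of the reload-on-signal idea agreed above, assuming oslo.config; the handler and its wiring are illustrative, not the patch johnsom filed.]

```python
# Reload octavia.conf on SIGHUP so settings such as the amphora image
# ID take effect without restarting the controller worker. A sketch
# using oslo.config; how this hooks into the real service is
# illustrative.
import signal

from oslo_config import cfg

CONF = cfg.CONF


def _reload_config(signum, frame):
    # Re-reads the config files that were registered at startup.
    CONF.reload_config_files()


signal.signal(signal.SIGHUP, _reload_config)
```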
20:12:22 <xgerman> ok, I also like the glance idea — but if we think it would be too tight a coupling we can add a driver...
20:12:35 <johnsom> Maybe revisit image tags in the future, but they make me nervous as I'm not sure I trust glance to pick the right one
20:12:46 <blogan_mobile> I'd hate to see another driver interface for this
20:13:04 <sbalukoff> And...  well... maybe we don't need an 'image service' interface just yet?
20:13:04 <xgerman> why? we love driver interfaces… let’s vote on that
20:13:16 <sbalukoff> blogan_mobile: +1
20:13:26 <johnsom> Well, technically *we* don't talk to glance today.  That would be new...
20:13:32 <sbalukoff> Yep.
20:13:35 <sbalukoff> Exactly.
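[Editor's note: a sketch of the glance-tag approach discussed above -- resolving the amphora image by tag at boot time instead of a fixed image ID. Assumes python-glanceclient and a keystoneauth session; the tag name and error handling are illustrative.]

```python
# Resolve the amphora image by glance tag instead of a fixed image ID,
# so operators can rotate images by re-tagging -- no controller restart
# needed. Assumes python-glanceclient; tag name is illustrative.
from glanceclient import client as glance_client


def lookup_amphora_image(session, tag='amphora'):
    glance = glance_client.Client('2', session=session)
    # Server-side filter: only active images carrying the tag,
    # newest first.
    images = list(glance.images.list(
        filters={'tag': [tag], 'status': 'active'},
        sort='created_at:desc'))
    if not images:
        raise RuntimeError('no active glance image tagged %r' % tag)
    return images[0]['id']
```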
20:13:59 <johnsom> #topic octavia-health-manager requires a host-wise plugged interface to the lb-mgmt-net
20:14:10 <johnsom> #link https://bugs.launchpad.net/octavia/+bug/1549297
20:14:10 <openstack> Launchpad bug 1549297 in octavia "octavia-health-manager requires a host-wise plugged interface to the lb-mgmt-net" [Undecided,New]
20:14:19 <sbalukoff> I'm not sure I understand the issue here.
20:14:45 <johnsom> I'm not sure if there is someone to talk to this here.  They said they might not be able to join today.
20:14:52 <sbalukoff> Granted, I spend most of the day in devstack, where that's set up already.
20:15:17 <sbalukoff> But it's true that in production: The controller worker and health monitor need to able to talk to the amphorae.
20:15:21 <johnsom> Basically they are proposing/asking about separation for the health manager receiver.  I.e. namespace or such
20:15:38 <sbalukoff> That means either a host-plugged interface, or routing to the lb-mgmt-net.
20:15:55 <johnsom> For both security and IP conflicts with other "host" networks where health manager might live.
20:15:59 <sbalukoff> That's not a bad idea...
20:16:03 <johnsom> That is my reading....
20:16:21 <sbalukoff> Ok. Maybe we talk about this when they can be here?
20:16:22 <xgerman> mine as well
20:16:30 <sbalukoff> Just so we understand exactly what they mean?
20:16:31 <xgerman> I think it makes sense
20:16:47 <xgerman> they provide a picture
20:17:13 <sbalukoff> Aah.
20:17:30 <sbalukoff> Sorry-- I feel like I'm just not prepared to talk about it this week.
20:17:38 <xgerman> ok, we can punt it
20:17:44 <johnsom> Ok, so I'll tag it RFE.  Please have a look and comment.
20:17:48 <TrevorV> 030091
20:17:54 <TrevorV> RIP sorry wrong chat
20:17:54 <sbalukoff> (Not that my preparedness on this needs to be a deciding factor on whether we discuss it...)
20:18:03 <johnsom> sbalukoff +1, I need to read more
20:18:04 <sbalukoff> johnsom: Sounds good.
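[Editor's note: background on why this RFE exists -- the health manager binds a plain UDP socket for amphora heartbeats, so the host running it needs an address plugged into (or routed to) the lb-mgmt-net. A minimal sketch; the bind address/port mirror octavia.conf [health_manager] options, and the payload handling is illustrative.]

```python
# The health manager listens for amphora heartbeats over UDP, which is
# why its host needs reachability to the lb-mgmt-net.
import socket

BIND_IP = '0.0.0.0'   # in production, an address on the lb-mgmt-net
BIND_PORT = 5555

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((BIND_IP, BIND_PORT))

while True:
    data, addr = sock.recvfrom(4096)
    # Real code HMAC-verifies the payload before trusting it.
    print('heartbeat from %s: %d bytes' % (addr[0], len(data)))
```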
20:18:19 <johnsom> #topic Mitaka blueprints/rfes/m-3 bugs for neutron-lbaas and octavia
20:18:20 <fnaval> im in TrevorV!
20:18:31 <johnsom> dougwig Any you want to cover?
20:18:58 <johnsom> #link https://bugs.launchpad.net/octavia/+bugs?field.tag=target-mitaka
20:19:18 <sbalukoff> We've got a lot to do there, though most of it doesn't seem too hard.
20:19:30 <johnsom> I have tagged bugs I think we should try to get fixed for Mitaka.  Most aren't too hard/big.  Some are L7 related
20:19:39 <sbalukoff> And we have two weeks after the feature freeze to squash as many of those as possible, right?
20:19:48 <dougwig> distracted by midcycle, but i'd like to hear the 2/29 status of get-me-an-lb and horizon ?
20:20:21 <johnsom> #link http://releases.openstack.org/mitaka/schedule.html
20:20:24 <sbalukoff> dougwig: both are under review now, looks like a good chance of landing by 2/29
20:20:29 <johnsom> March 14th would be RC1
20:20:44 <xgerman> dougwig: cascade delete is close as well
20:21:25 <ajmiller> dougwig: regarding horizon, basic LB create workflow is in place, there are still several patches filling out functionality, fixing bugs, improving defaults, etc.
20:21:42 <dougwig> how are we defining close?  for get-me-an-lb, the devil is in the corner cases. how are we testing that?  for horizon, do we have an analysis of gaps from the old UI?
20:21:55 <rm_work> well, the corner cases would be bugs, right?
20:21:55 <dougwig> e.g., last i looked at horizon, it didn't support multiple providers.
20:21:59 <rm_work> and we have until RC1 :)
20:22:09 <dougwig> rm_work: in orchestration, the corner cases is the feature.
20:22:10 <sbalukoff> ajmiller:  So, mostly "bugs" at this point? No major missing functionality (except L7, which we never had planned to have in the GUI by mitaka anyway)
20:22:40 <rm_work> dougwig: eh, what's a bug vs. a feature, REALLy? :P
20:22:52 <xgerman> so does Horizon need to land in sync with us?
20:23:01 <xgerman> they are their own project
20:23:19 <dougwig> do we want the packager's to include them?
20:23:59 <xgerman> we should probably check with doug-fish what they think
20:24:04 <xgerman> so let’s table it
20:24:25 <markvan> doug-fish is at mid cycle for horizon this week
20:24:47 <markvan> but they are making good progress...and would always like more reviewers
20:24:55 <sbalukoff> Cool!
20:24:59 <dougwig> ass
20:25:04 <xgerman> yep, but do they want to be shipped on 2/29?
20:25:04 <dougwig> kevinbenton: typed that ^^
20:25:18 <sbalukoff> Haha
20:25:36 <markvan> you can create a LB via the new panels now...so add it to your devstack and kick the tires
20:25:55 <sbalukoff> markvan: will do!
20:25:56 <xgerman> markvan so you want us to say when it’s ready?
20:26:07 <xgerman> for packaging?
20:26:26 <xgerman> just trying to figure out who makes that call
20:26:39 <dougwig> it's up to us to ping them. when do we think that'll be?
20:27:13 <xgerman> From what I have seen we likely need an extension
20:27:35 <xgerman> but would like their input — let’s ping them Monday after their midcycle
20:27:37 <markvan> yeah, doug-fish will have to answer that one
20:27:43 <dougwig> ok
20:27:46 <sbalukoff> Ok.
20:27:49 <markvan> I'll remind him as well...
20:28:00 <xgerman> thanks markvan
20:28:08 <blogan_mobile> dougwig the get me a lb is not orchestration
20:28:11 <johnsom> #action xgerman to ping doug-fish on Mitaka-3 for dashboard
20:28:21 <xgerman> payback?
20:28:34 * xgerman is the one who volun-tells
20:28:35 <dougwig> blogan_mobile: it's a single unit in octavia?
20:28:39 <rm_work> dougwig: yes
20:28:45 <sbalukoff> xgerman: Haha!
20:28:49 <rm_work> well there will be a neutron-lbaas side too
20:28:49 <dougwig> right, ok, good. that one is easy, then.
20:28:53 <rm_work> but
20:28:58 <rm_work> they're technically independent
20:29:43 <blogan_mobile> dougwig it's a single driver call
20:29:53 <johnsom> Ok, any other Mitaka-3 discussion?
20:30:09 <dougwig> blogan_mobile: right, because it's a single template splat for haproxy.  got it.
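[Editor's note: for context, a hypothetical single-call ("get-me-an-lb") request body -- the whole object graph arrives in one POST and is handed to the driver as one unit, which for haproxy means a single template render. Field names are illustrative, not the final API.]

```python
# A hypothetical get-me-an-lb payload: one call carries the full graph,
# mapping to a single driver call. Names are illustrative.
graph = {
    'loadbalancer': {
        'name': 'web-lb',
        'vip_subnet_id': 'SUBNET-UUID',
        'listeners': [{
            'protocol': 'HTTP',
            'protocol_port': 80,
            'default_pool': {
                'protocol': 'HTTP',
                'lb_algorithm': 'ROUND_ROBIN',
                'members': [
                    {'address': '10.0.0.4', 'protocol_port': 80},
                    {'address': '10.0.0.5', 'protocol_port': 80},
                ],
            },
        }],
    },
}
```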
20:30:52 <johnsom> #topic Magnum with Neutron based networking
20:30:59 <johnsom> I wanted to bring awareness to a kuryr spec:
20:31:00 <blogan_mobile> well since nlbaas is calling Octavia, Octavia API has to support it
20:31:06 <sbalukoff> Nice!
20:31:09 <johnsom> #link https://review.openstack.org/#/c/269039/5/doc/source/specs/mitaka/nested_containers.rst
20:31:26 <sbalukoff> So is Magnum the particular container controller y'all are going to be going with? (I haven't looked at it closely yet)
20:31:32 <johnsom> This would be a good time to comment on hot-plugging neutron networks for the kuryr folks.
20:31:52 <sbalukoff> Sweet!
20:31:53 <johnsom> No, I can't speak to Magnum
20:32:03 <sbalukoff> Ok.
20:32:24 <johnsom> They had some questions for octavia team and brought up the spec, so I figured I would share.
20:32:44 <johnsom> #topic Converting LBaaS v1 objects to LBaaS v2 (neela)
20:32:48 <sbalukoff> Hot-plugging container interfaces would be rad.
20:33:11 <johnsom> neelashah1 Would you like to talk to this topic?
20:33:27 <johnsom> It was added to the agenda today
20:34:15 <johnsom> neelashah1 are you there?
20:34:20 <neelashah1> johnsom:yes
20:34:27 <johnsom> Ah, great
20:34:39 <johnsom> Would you like to talk to your agenda item?
20:34:53 <neelashah1> wondering if lbaas v1 and v2 can run in parallel? or if we would run into any conflicts with ips, etc?
20:35:17 <neelashah1> essentially, can we convert v1 to v2 by bringing up both in parallel
20:35:31 <dougwig> i'd personally like to be able to run both, as it would let me run fewer test jobs.
20:35:36 <dougwig> it's not currently supported
20:35:54 <sbalukoff> I think our docs say we don't support it. Does anyone remember the exact reasons that breaks?
20:36:07 <dougwig> did we insanely reuse some db tables or something?
20:36:16 <xgerman> we did
20:36:22 <sbalukoff> Riiiight.
20:36:34 <xgerman> and somebody told me "nobody is running LBaaS v1"
20:36:59 <neelashah1> xgerman : more like nobody is running lbaas v2 (yet) ?
20:37:01 <neelashah1> :)
20:37:04 <sbalukoff> At the time, that was essentially true.
20:37:22 <xgerman> I guess releasing v2 spurred the adoption of v1?
20:37:25 <sbalukoff> Can't be helped if people write code to interface with obsolete, deprecated interfaces. :/
20:37:56 <blogan> dougwig: no we didn't reuse tables
20:38:01 <sbalukoff> xgerman: I think it was a timing thing: It took a while to implement v2. In the mean time, people moved forward with v1.
20:38:15 <dougwig> lots of people are running v1
20:38:25 <blogan> the problems came because when running the v1 agent and v2 agent at the same time, there were conflicts
20:38:59 <dougwig> not an issue now, right?
20:39:02 <sbalukoff> blogan: Do you recall what the nature of those conflicts was?
20:39:20 <blogan> plus v1 and v2 both have the resource pools, and even though they're under different paths /lb/pools vs /lbaas/pools, the wsgi code would validate against the v1 pool structure as well
20:39:21 <johnsom> Just a reminder, we deprecated v1 in liberty
20:39:30 <blogan> if you made a v2 pool create call, and vice versa
20:39:38 <sbalukoff> johnsom: +1
20:39:56 <sbalukoff> johnsom: I think Neela is asking because she's trying to move off v1.
20:40:09 <blogan> dougwig: the agent stuff would be an issue if the namespace driver is being run in v2 i believe, and the v1 and v2 conflicts would be an issue as well
20:40:23 <johnsom> Yeah, I understand, it just seemed like we were heading down the path of engineering a way to run both....
20:40:24 <markvan> basic question will be how to get from running active v1 to v2.  step 1 shutdown/delete v1 objects, step 2, build new v2 objects?
20:40:27 <neelashah1> johnsom sbalukoff - understand - but for people who are on kilo and already using v1 (since v2 was just introduced in kilo) and now need to move to v2
20:40:55 <blogan> sbalukoff: i don't recall specifics for the agent, but it had to do with the v1 agent trying to process a v2 load balancer
20:41:15 <sbalukoff> markvan: That should be script-able. But nobody has written this script yet, and it is disruptive in any case.
20:41:19 <xgerman> now as we removed the namespace agent… it might work?
20:41:31 <blogan> when did we remove the namespace agent?
20:41:47 <johnsom> We didn't remove it, just disabled it in the devstack scripts
20:41:55 <blogan> yeah
20:42:12 <xgerman> sorry, wrong wording… but it might be that you can now run both together?
20:42:24 <xgerman> though the database is still a hack...
20:42:26 <blogan> xgerman: if v2 is running octavia it'll get around the agent issues
20:42:29 <sbalukoff> Nobody has tried in a while?
20:42:35 <neelashah1> xgerman - so perhaps someone has to just try it out and see what happens?
20:42:43 <xgerman> yep
20:42:47 <blogan> but the pool requests being combined by the neutron wsgi layer will still be a problem
20:42:54 <xgerman> mmh
20:42:57 <markvan> so something like: delete v1 objects, shutdown v1 agents, start v2 agents and build the v2 objects.   disruptive, but doable?
20:43:02 <sbalukoff> blogan: +1
20:43:05 <blogan> and what about the database is a hack?
20:43:17 <johnsom> markvan do-able
20:43:18 <sbalukoff> markvan: Yes.
20:43:34 <xgerman> blogan, I think v1 can run on a v2 database but not vice versa
20:43:48 <xgerman> and objects in v1 mean different things in v2
20:43:52 <blogan> xgerman: they're totally different tables, so the db doesn't matter
20:44:12 <blogan> all v2 tables are prepended with lbaas_
20:44:17 <xgerman> oh, ok
20:44:27 <blogan> v1 tables are just vips, pools, members, healthmonitors
20:44:53 <xgerman> well, so it might work to some degree
20:45:01 <blogan> not creating pools
20:45:09 <blogan> if v1 and v2 are both enabled
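[Editor's note: a quick way to check blogan's point that v1 and v2 use disjoint tables, assuming SQLAlchemy and access to the neutron database; the connection URL is a placeholder.]

```python
# Verify that LBaaS v1 and v2 use disjoint tables: v2 tables carry an
# lbaas_ prefix, v1 tables are the bare names.
from sqlalchemy import create_engine, inspect

engine = create_engine('mysql+pymysql://user:pass@localhost/neutron')
tables = inspect(engine).get_table_names()

v2_tables = sorted(t for t in tables if t.startswith('lbaas_'))
v1_tables = sorted(t for t in tables
                   if t in ('vips', 'pools', 'members', 'healthmonitors'))
print('v2:', v2_tables)
print('v1:', v1_tables)
```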
20:45:35 <johnsom> neelashah1 So, in summary, we don't know or have a tested upgrade path.
20:45:45 <neelashah1> ok, great - thanks for the discussion… johnsom sbalukoff blogan xgerman, we will see if it's possible for us to try it out
20:45:56 <sbalukoff> Yep. the wsgi "pools" problem is a show-stopper for running both at the same time. There might be others.
20:45:56 <johnsom> Cool, let us know
20:46:09 <johnsom> #topic Security gate - Bandit
20:46:24 <sbalukoff> What is Bandit?
20:46:25 <johnsom> So, I had the pleasure of doing an internal security review of Octavia.
20:46:35 <sbalukoff> I'm so sorry.
20:46:36 <johnsom> #link https://wiki.openstack.org/wiki/Security/Projects/Bandit
20:46:37 <blogan> by pleasure do you mean misery?
20:47:00 <sbalukoff> Thanks, blogan-google.
20:47:01 <johnsom> You will see there are a few bugs recently added of things we should look at.  I have started that already
20:47:13 <blogan> what?
20:47:18 <xgerman> they only asked him easy questions like "how does it come you are so awesome"
20:47:29 <johnsom> One recommendation they had was to add the bandit gate to our project.
20:47:33 <sbalukoff> johnsom: Oh cool! Are you going to transfer actionable stuff in there to launchpad?
20:47:51 <johnsom> sbalukoff They are in launchpad now
20:48:05 <johnsom> The HMAC timing thing was one of them
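[Editor's note: for context, the textbook fix for an HMAC timing finding like the one johnsom mentions -- not necessarily the exact Octavia patch.]

```python
# Constant-time digest comparison: the standard fix for HMAC timing
# issues. Shown for context; not necessarily the exact Octavia patch.
import hashlib
import hmac


def verify(payload, received_digest, key):
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    # compare_digest takes time independent of where the inputs differ,
    # so the digest can't be probed byte by byte via response timing.
    return hmac.compare_digest(expected, received_digest)
```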
20:48:10 <sbalukoff> I don't have a problem adding a bandit gate--  non-voting for now and let's see how it goes?
20:48:20 <xgerman> same here
20:48:33 <johnsom> Right, they offered to help us setup a non-voting bandit gate.  I wanted to run it by you folks first.
20:48:41 <sbalukoff> Do eet!
20:48:51 <xgerman> +1
20:48:55 <minwang2> +1
20:49:04 <johnsom> They also mentioned there is a fine guide here:
20:49:06 <johnsom> #link http://docs.openstack.org/security-guide/
20:49:16 <sbalukoff> I like the idea of not having obvious security problems in Octavia. We're a ways away from getting there, but we've got to start somewhere, right?
20:49:18 <johnsom> For common issues, etc.
20:49:45 <johnsom> We actually came out in pretty decent shape.  We do have work to do, but not bad.
20:49:54 <sbalukoff> Nice!
20:50:02 <johnsom> Ok, so if no objections, I will work with them to get the non-voting gate setup.
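[Editor's note: for readers unfamiliar with Bandit, an example of the kind of pattern it flags -- subprocess with shell=True, its check B602 -- and the safer form it nudges you toward. The snippet is illustrative.]

```python
# The sort of thing a Bandit gate flags (B602: subprocess call with
# shell=True) and the preferred alternative.
import subprocess

iface = 'eth1'

# Flagged: the shell parses a command line built from data.
subprocess.check_call('ip link set %s up' % iface, shell=True)

# Preferred: pass an argument vector; no shell involved.
subprocess.check_call(['ip', 'link', 'set', iface, 'up'])
```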
20:50:24 <sbalukoff> I'm totally going to pre-emptively override blogan's objections.
20:50:25 <sbalukoff> Do it!
20:50:28 <johnsom> I'm going to skip progress reports, I think we covered that already.
20:50:43 <johnsom> #topic Open Discussion
20:50:47 <fnaval> review requested pls: https://review.openstack.org/#/c/172199/
20:50:53 <johnsom> Did blogan object?  I didn't see that
20:50:55 <fnaval> once it passes all gates
20:51:01 <blogan> i guess i object a lot :(
20:51:10 <fnaval> it's the reworked-reworked-reworked scenario LB test
20:51:18 <johnsom> Ok
20:51:22 <sbalukoff> No, I was just pre-emptively overriding his objects in case he objects.
20:51:30 <sbalukoff> objections.
20:51:33 <fnaval> it should be ok - sbalukoff I'll address the deprecated urls in subsequent changes
20:51:39 <blogan> i was actually thinking of objecting, bc or our current gate and job instability
20:51:48 <blogan> but its non-voting and we'll see what happens
20:52:07 <xgerman> well, our enemies will see all opera security bugs
20:52:12 <sbalukoff> See? Aren't you glad I pre-emptively gave you all permission to ignore that?
20:52:14 <sbalukoff> ;)
20:52:16 <blogan> we don't have opera security bugs!
20:52:23 <xgerman> our
20:52:24 <johnsom> Yeah, but non-voting should be ok.  Plus the gate issues are high priority bugs.  I think I have found some issues in the tests that may fix some of this
20:52:50 <sbalukoff> johnsom: I know I have. Am reeeeeally close to fixing some of that.
20:53:03 <sbalukoff> (Like, if I had 10 more minutes before this meeting.)
20:53:19 <johnsom> sbalukoff I was going to update the httplib/urllib stuff.  Is that the same you are working on?
20:53:32 <sbalukoff> fnaval: Good to know on the tempest test work!
20:53:51 <sbalukoff> For what it's worth... tempest testing isn't a "feature" right? So, we can potentially merge that at any time, right?
20:53:56 <fnaval> cool thanks please review when you get a chance
20:53:59 <fnaval> I hope so!
20:54:08 <fnaval> blogan said that we should be able to
20:54:18 <blogan> sbalukoff: i'd ask supreme overlord dougwig on that
20:54:18 <fnaval> since it's tests
20:54:20 <xgerman> yeah, no deadline for that
20:54:20 <sbalukoff> johnsom: Nope, that's not what I'm working on. Feel free!
20:54:23 <blogan> fnaval: i said i wasn't sure
20:54:41 <johnsom> sbalukoff cool, I will put up a patch for neutron_lbaas/tests/tempest/v2/scenario/base.py this afternoon
20:54:46 <xgerman> I think we did it that way before
20:54:47 <fnaval> k check with dougwig - but please take a look at the tests anyway
20:55:03 <fnaval> johnsom: cool thanks johnsom
20:55:06 <sbalukoff> fnaval: +1
20:55:19 <markvan> Just a reminder:
20:55:21 <markvan> #link for orchestration/heat the LBv2 resources ready for final push/reviews https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/lbaasv2-suport
20:55:34 <madhu_ak> with the same patch like fnaval said, we can run the tests using tempest-plugin.
20:55:39 <johnsom> Oh, sorry, I thought that had landed already
20:55:49 <sbalukoff> markvan: Oh, good to know!
20:55:54 <sbalukoff> We should try to get those in.
20:56:01 <sbalukoff> Or add our +1's.
20:56:05 <fnaval> madhu_ak: yep, that's thanks to your changes madhu_ak
20:56:13 <sbalukoff> ... it's going to be a busy few days here before Monday. :P
20:57:11 <dougwig> sorry, midcycle distraction. what do i need to look at?
20:57:16 <johnsom> Yep.  I will give the heat patches another pass.  I have reviewed those once
20:57:18 <neelashah1> sbalukoff : yes, +1 from the lbaas team will be appreciated to land the heat support for v2
20:57:32 <neelashah1> +1 johnsom, thanks
20:57:34 <fnaval> https://review.openstack.org/#/c/172199/ dougwig
20:57:36 <sbalukoff> dougwig: Any deadline on merging tempest testing code in Octavia?
20:57:49 <sbalukoff> dougwig: It's not a "feature" right?
20:57:59 <dougwig> sbalukoff: if i say yes, i get a false sense of pressure and more commits.  if i say no, it's the truth.
20:58:11 <fnaval> ah ha
20:58:14 <xgerman> lol
20:58:19 <sbalukoff> dougwig: Haha!
20:58:36 <johnsom> Along those lines, do we want/need to cut an M3 octavia?
20:58:38 <sbalukoff> dougwig: Thank you for your honesty, eh!
20:58:49 <xgerman> johnsom no
20:58:54 <xgerman> I think final is good
20:59:02 <sbalukoff> xgerman: +1
20:59:18 <johnsom> Works for me
20:59:20 <sbalukoff> I think my people are waiting on the final and probably wouldn't use M3 per se.
20:59:41 <xgerman> my people are three months behind...
21:00:07 <sbalukoff> Yeah...
21:00:10 <xgerman> ok, times out
21:00:16 <xgerman> o/
21:00:18 <sbalukoff> Thanks, folks!
21:00:23 <fnaval> thanks!
21:00:29 <johnsom> #endmeeting