20:00:06 #startmeeting Octavia
20:00:06 Meeting started Wed Feb 24 20:00:06 2016 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:10 The meeting name has been set to 'octavia'
20:00:11 Hello, hello!
20:00:13 o/
20:00:13 Hii o/
20:00:13 o/
20:00:14 Hi there
20:00:16 Hi
20:00:21 o/
20:00:27 o/
20:00:33 blogan_mobile? another towage?
20:00:38 o/
20:00:48 #topic Announcements
20:00:48 o/
20:00:57 L7 merged in Octavia!
20:00:59 Nope, just out and about
20:01:02 yeah!!!
20:01:03 WOOT
20:01:05 YAAAAAY!
20:01:20 blogan_mobile I would say the same when I am towed ;-)
20:01:28 o/
20:01:29 Now we need to get it merged in neutron-lbaas! (And yes, I'm working on that.)
20:01:29 Thanks to sbalukoff, rm_work, johnsom for hanging out into the evening last night to get that done
20:01:44 Thanks to y'all for the extensive testing and code review.
20:01:54 I didn't know this beforehand, but apparently it was 6000+ lines of code.
20:01:56 yeah, you guys rock!!
20:02:17 Yes, sbalukoff, twas a feat :D
20:02:22 It was fun to see 7+ patches chained up in the merge gate
20:02:45 Priority patches needing review
20:02:58 L7 tracking etherpad
20:03:05 #link https://etherpad.openstack.org/p/lbaas-l7-todo-list
20:03:24 I may add the single-create review before this meeting is over. :)
20:03:26 There are still neutron-lbaas and CLI patches that need to merge. I think both have open bugs.
20:03:28 I think we have more than L7
20:03:40 #link https://review.openstack.org/#/c/282587/5
20:03:42 Only a little left on that-- but note that post-L7 bugs have been captured in launchpad and aren't on that etherpad.
20:03:45 TrevorV Throw it in here
20:03:53 #link https://review.openstack.org/#/c/282113/2
20:03:58 Will do
20:04:03 johnsom: You are correct.
20:04:27 #link https://review.openstack.org/#/c/284340/2
20:04:28 There are still some open horizon patches too: https://review.openstack.org/#/q/project:openstack/neutron-lbaas-dashboard+status:open
20:04:37 #link https://review.openstack.org/#/c/268237/7
20:04:41 #link https://review.openstack.org/#/c/172199/
20:04:41 this patch has been there for a while, it is good to go #link https://review.openstack.org/#/c/272344/
20:04:45 I saw a demo of TLS via horizon panels today, so again, good progress.
20:04:57 yep, horizon looks real good
20:05:07 johnsom: Given the feature freeze deadline next Monday, what are the high priorities to review to get in before the freeze?
20:05:23 TrevorV's single-create patch, and Min's anti-affinity patch?
20:05:37 sbalukoff: and cascade-delete as well
20:05:50 Oh! Cool-- that's ready for review? Great!
20:05:54 octavia scenario tests with tempest plugin here: #link https://review.openstack.org/#/c/172199/
20:05:55 cascading delete (see my patches) - Horizon needs that
20:05:58 Priorities off my head are: L7, horizon panels, get-me-an-LB, delete-me-an-LB, anti-affinity
20:06:22 s/delete-me-an-b/cascading-delete/g
20:06:41 And the n-lbaas L7 and L7 CLI stuff. :)
20:06:42 Sorry for taking artistic license....
20:06:52 :)
20:07:03 o/ (sorry late)
20:07:07 I think that's all doable.
20:07:08 no worries — cascading delete is especially critical since we won't get an extension per dougwig
20:07:24 xgerman: Ok, good to know.
20:07:41 Yeah, all of that is in-flight so definitely possible
20:07:44 yeah "orchestration" ...
20:08:13 #topic Octavia/glance coupling for images (ihar)
20:08:24 ok, thanks for picking that up
20:08:39 Ugh, it looks like we just lost him
20:08:39 basically uses glance-tags instead of image ids
20:08:58 so you can change the image WITHOUT having to reboot the octavia control plane
20:09:10 Yeah, so this is around being able to swap amphora images without restarts
20:09:19 Ok.
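[Editor's note: the glance-tag idea above — resolving "which amphora image?" by tag at boot time instead of pinning an image ID in config — amounts to picking the newest tagged image. The sketch below is illustrative only, not Octavia's actual code; the function name and the dict fields mirror glance v2 image records but are assumptions here.]

```python
from datetime import datetime

def pick_amphora_image(images, tag="amphora"):
    """Return the ID of the most recently created image carrying `tag`.

    `images` is a list of dicts shaped like glance v2 image records
    (each with 'id', 'tags', and an ISO-8601 'created_at' field).
    """
    tagged = [img for img in images if tag in img.get("tags", [])]
    if not tagged:
        raise LookupError("no image tagged %r" % tag)
    newest = max(
        tagged,
        key=lambda img: datetime.strptime(img["created_at"], "%Y-%m-%dT%H:%M:%SZ"),
    )
    return newest["id"]
```

The operator uploads a new image, tags it, and the next load balancer build picks it up — the concern voiced below ("I'm not sure I trust glance to pick the right one") is exactly about whether this newest-wins rule is safe.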
20:09:27 downside you would couple closely with glance
20:09:31 Couldn't we just store the image ID in the dB to get that?
20:09:35 I had originally thought we would handle a signal to reload the octavia config.
20:09:36 so if you store your images elsewhere
20:09:43 Doesn't seem like a bad idea. Though I will note that restarting the Octavia controller worker is not disruptive in most cases.
20:10:06 yeah
20:10:08 since queue
20:10:10 But this is probably important if we want to realistically support multiple controller workers.
20:10:15 blogan_mobile yes, but that would be reinventing the wheel
20:10:15 but a signal wouldn't be bad either
20:10:23 I'm not sure if the oslo service stuff makes signals easy or not
20:10:24 sbalukoff: it's likely a horizon error dialog down the road...
20:10:46 johnsom: A signal to reload the octavia config is also probably a good idea.
20:10:46 but yeah, restarting the controller-worker isn't horrible -- except if it's mid-operation on something
20:10:57 task board?
20:11:03 Job board
20:11:03 i don't know whether it immediately acks the queue, or if the job would go back on
20:11:07 JOB BOARD
20:11:11 job bored.
20:11:11 The signal would buy us being able to reload for reasons beyond image ID
20:11:12 >_< yes that'd do it
20:11:15 ;)
20:11:25 yeah i agree though johnsom, reload signal seems ideal
20:11:37 #action johnsom file reload in LP
20:11:37 this can't be uncommon
20:11:42 johnsom: +1
20:11:48 xgerman: i'm sure he already is doing it right now
20:11:49 :P
20:11:55 :-)
20:12:08 HAha!
20:12:09 Ok. We will work on a signal.
20:12:22 ok, I also like the glance idea — but if we think it would be too tight a coupling we can add a driver...
20:12:35 Maybe revisit image tags in the future, but they make me nervous as I'm not sure I trust glance to pick the right one
20:12:46 I'd hate to see another driver interface for this
20:13:04 And... well... maybe we don't need an 'image service' interface just yet?
20:13:04 why?
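[Editor's note: the reload-on-signal approach agreed above (the #action for johnsom) conventionally means catching SIGHUP and re-reading configuration without restarting the service. A minimal, self-contained sketch follows; it is not Octavia's or oslo.service's implementation — the class and the `read_config` callable are assumptions for illustration. SIGHUP is POSIX-only.]

```python
import signal

class ConfigReloader:
    """Mark the config stale on SIGHUP; re-read it lazily on next access."""

    def __init__(self, read_config):
        self._read_config = read_config  # callable returning a fresh config dict
        self._config = read_config()
        self._stale = False
        # Register the HUP handler; handlers should do minimal work,
        # so we only flip a flag here and defer the actual re-read.
        signal.signal(signal.SIGHUP, self._on_hup)

    def _on_hup(self, signum, frame):
        self._stale = True

    @property
    def config(self):
        if self._stale:
            self._config = self._read_config()
            self._stale = False
        return self._config
```

With something like this, `kill -HUP <pid>` would pick up a changed amphora image ID (or any other setting) without bouncing the controller worker mid-operation.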
we love driver interfaces… let's vote on that
20:13:16 blogan_mobile: +1
20:13:26 Well, technically *we* don't talk to glance today. That would be new...
20:13:32 Yep.
20:13:35 Exactly.
20:13:59 #topic octavia-health-manager requires a host-wise plugged interface to the lb-mgmt-net
20:14:10 #link https://bugs.launchpad.net/octavia/+bug/1549297
20:14:10 Launchpad bug 1549297 in octavia "octavia-health-manager requires a host-wise plugged interface to the lb-mgmt-net" [Undecided,New]
20:14:19 I'm not sure I understand the issue here.
20:14:45 I'm not sure if there is someone to talk to this here. They said they might not be able to join today.
20:14:52 Granted, I spend most of the day in devstack, where that's set up already.
20:15:17 But it's true that in production: The controller worker and health monitor need to be able to talk to the amphorae.
20:15:21 Basically they are proposing/asking about separation for the health manager receiver. I.e. namespace or such
20:15:38 That means either a host-plugged interface, or routing to the lb-mgmt-net.
20:15:55 For both security and IP conflicts with other "host" networks where health manager might live.
20:15:59 That's not a bad idea...
20:16:03 That is my reading....
20:16:21 Ok. Maybe we talk about this when they can be here?
20:16:22 mine as well
20:16:30 Just so we understand exactly what they mean?
20:16:31 I think it makes sense
20:16:47 they provide a picture
20:17:13 Aah.
20:17:30 Sorry-- I feel like I'm just not prepared to talk about it this week.
20:17:38 ok, we can punt it
20:17:44 Ok, so I'll tag it RFE. Please have a look and comment.
20:17:48 030091
20:17:54 RIP sorry wrong chat
20:17:54 (Not that my preparedness on this needs to be a deciding factor on whether we discuss it...)
20:18:03 sbalukoff +1, I need to read more
20:18:04 johnsom: Sounds good.
20:18:19 #topic Mitaka blueprints/rfes/m-3 bugs for neutron-lbaas and octavia
20:18:20 im in TrevorV!
20:18:31 dougwig Any you want to cover?
20:18:58 #link https://bugs.launchpad.net/octavia/+bugs?field.tag=target-mitaka
20:19:18 We've got a lot to do there, though most of it doesn't seem too hard.
20:19:30 I have tagged bugs I think we should try to get fixed for Mitaka. Most aren't too hard/big. Some are L7 related
20:19:39 And we have two weeks after the feature freeze to squash as many of those as possible, right?
20:19:48 distracted by midcycle, but i'd like to hear the 2/29 status of get-me-an-lb and horizon ?
20:20:21 #link http://releases.openstack.org/mitaka/schedule.html
20:20:24 dougwig: both are under review now, looks like a good chance of landing by 2/29
20:20:29 March 14th would be RC1
20:20:44 dougwig: cascade delete is close as well
20:21:25 dougwig: regarding horizon, basic LB create workflow is in place, there are still several patches filling out functionality, fixing bugs, improving defaults, etc.
20:21:42 how are we defining close? for get-me-an-lb, the devil is in the corner cases. how are we testing that? for horizon, do we have an analysis of gaps from the old UI?
20:21:55 well, the corner cases would be bugs, right?
20:21:55 e.g., last i looked at horizon, it didn't support multiple providers.
20:21:59 and we have until RC1 :)
20:22:09 rm_work: in orchestration, the corner cases is the feature.
20:22:10 ajmiller: So, mostly "bugs" at this point? No major missing functionality (except L7, which we never had planned to have in the GUI by mitaka anyway)
20:22:40 dougwig: eh, what's a bug vs. a feature, REALLy? :P
20:22:52 so does Horizon need to land in sync with us?
20:23:01 they are their own project
20:23:19 do we want the packagers to include them?
20:23:59 we should probably check with doug-fish what they think
20:24:04 so let's table it
20:24:25 doug-fish is at mid cycle for horizon this week
20:24:47 but they are making good progress...and would always like more reviewers
20:24:55 Cool!
20:24:59 ass
20:25:04 yep, but do they want to be shipped on 2.29?
20:25:04 kevinbenton: typed that ^^
20:25:18 Haha
20:25:36 you can create a LB via the new panels now...so add it to your devstack and kick the tires
20:25:55 markvan: will do!
20:25:56 markvan so you want us to say when it's ready?
20:26:07 for packaging?
20:26:26 just trying to figure out who makes that call
20:26:39 it's up to us to ping them. when do we think that'll be?
20:27:13 From what I have seen we likely need an extension
20:27:35 but would like their input — let's ping them Monday after their midcycle
20:27:37 yeah, doug-fish will have to answer one
20:27:43 ok
20:27:46 Ok.
20:27:49 I'll remind him as well...
20:28:00 thanks markvan
20:28:08 dougwig the get me a lb is not orchestration
20:28:11 #action xgerman to ping doug-fish on Mitaka-3 for dashboard
20:28:21 payback?
20:28:34 * xgerman is the one who volun-tells
20:28:35 blogan_mobile: it's a single unit in octavia?
20:28:39 dougwig: yes
20:28:45 xgerman: Haha!
20:28:49 well there will be a neutron-lbaas side too
20:28:49 right, ok, good. that one is easy, then.
20:28:53 but
20:28:58 they're technically independent
20:29:43 dougwig it's a single driver call
20:29:53 Ok, any other Mitaka-3 discussion?
20:30:09 blogan_mobile: right, because it's a single template splat for haproxy. got it.
20:30:52 #topic Magnum with Neutron based networking
20:30:59 I wanted to bring awareness to a kuryr spec:
20:31:00 well since nlbaas is calling Octavia, Octavia API has to support it
20:31:06 Nice!
20:31:09 #link https://review.openstack.org/#/c/269039/5/doc/source/specs/mitaka/nested_containers.rst
20:31:26 So is Magnum the particular container controller y'all are going to be going with? (I haven't looked at it closely yet)
20:31:32 This would be a good time to comment on hot-plugging neutron networks for the kuryr folks.
20:31:52 Sweet!
20:31:53 No, I can't speak to Magnum
20:32:03 Ok.
20:32:24 They had some questions for octavia team and brought up the spec, so I figured I would share.
20:32:44 #topic Converting LBaaS v1 objects to LBaaS v2 (neela)
20:32:48 Hot-plugging container interfaces would be rad.
20:33:11 neelashah1 Would you like to talk to this topic?
20:33:27 It was added to the agenda today
20:34:15 neelashah1 are you there?
20:34:20 johnsom: yes
20:34:27 Ah, great
20:34:39 Would you like to talk to your agenda item?
20:34:53 wondering if lbaas v1 and v2 can run in parallel? or if we would run into any conflicts with ips, etc?
20:35:17 essentially, can we convert v1 to v2 by bringing up both in parallel
20:35:31 i'd personally like to be able to run both, as it would let me run fewer test jobs.
20:35:36 it's not currently supported
20:35:54 I think our docs say we don't support it. Does anyone remember the exact reasons that breaks?
20:36:07 did we insanely reuse some db tables or something?
20:36:16 we did
20:36:22 Riiiight.
20:36:34 and somebody told me "nobody is running LBaaS V1"
20:36:59 xgerman : more like nobody is running lbaas v2 (yet) ?
20:37:01 :)
20:37:04 At the time, that was essentially true.
20:37:22 I guess releasing v2 spurred the adoption of v1?
20:37:25 Can't be helped if people write code to interface with obsolete, deprecated interfaces. :/
20:37:56 dougwig: no we didn't reuse tables
20:38:01 xgerman: I think it was a timing thing: It took a while to implement v2. In the meantime, people moved forward with v1.
20:38:15 lots of people are running v1
20:38:25 the problems came because when running the v1 agent and v2 agent at the same time, there were conflicts
20:38:59 not an issue now, right?
20:39:02 blogan: Do you recall what the nature of those conflicts was?
20:39:20 plus v1 and v2 both have the resource pools, and even though they're under different paths /lb/pools vs /lbaas/pools, the wsgi code would validate against the v1 pool structure as well
20:39:21 Just a reminder, we deprecated v1 in liberty
20:39:30 if you made a v2 pool create call, and vice versa
20:39:38 johnsom: +1
20:39:56 johnsom: I think Neela is asking because she's trying to move off v1.
20:40:09 dougwig: the agent stuff would be an issue if the namespace driver is being run in v2 i believe, and the v1 and v2 conflicts would be an issue as well
20:40:23 Yeah, I understand, it just seemed like we were heading down the path of engineering a way to run both....
20:40:24 basic question will be how to get from running active v1 to v2. step 1 shutdown/delete v1 objects, step 2, build new v2 objects?
20:40:27 johnsom sbalukoff - understand - but for people who are on kilo and already using v1 (since v2 was just introduced in kilo) and now need to move to v2
20:40:55 sbalukoff: i don't recall specifics for the agent, but had to do with the v1 agent trying to process a v2 load balancer
20:41:15 markvan: That should be script-able. But nobody has written this script yet, and it is disruptive in any case.
20:41:19 now as we removed the namespace agent… it might work?
20:41:31 when did we remove the namespace agent?
20:41:47 We didn't remove it, just disabled it in the devstack scripts
20:41:55 yeah
20:42:12 sorry, wrong wording… but it might be that you can now run both together?
20:42:24 though the database is still a hack...
20:42:26 xgerman: if v2 is running octavia it'll get around the agent issues
20:42:29 Nobody has tried in a while?
20:42:35 xgerman - so perhaps someone has to just try it out and see what happens?
20:42:43 yep
20:42:47 but the pool requests being combined by the neutron wsgi layer will still be a problem
20:42:54 mmh
20:42:57 so something like: delete v1 objects, shutdown v1 agents, start v2 agents and build the v2 objects.
disruptive, but doable?
20:43:02 blogan: +1
20:43:05 and what about the database is a hack?
20:43:17 markvan do-able
20:43:18 markvan: Yes.
20:43:34 blogan, I think v1 can run on a v2 database but not vice versa
20:43:48 and objects in v1 mean different things in v2
20:43:52 xgerman: they're totally different tables, so the db doesn't matter
20:44:12 all v2 tables are prepended with lbaas_
20:44:17 oh, ok
20:44:27 v1 tables are just vips, pools, members, healthmonitors
20:44:53 well, so it might work to some degree
20:45:01 not creating pools
20:45:09 if v1 and v2 are both enabled
20:45:35 neelashah1 So, in summary, we don't know or have a tested upgrade path.
20:45:45 ok, great - thanks for the discussion….johnsom sbalukoff blogan xgerman, we will see if its possible for us to try it out
20:45:56 Yep. the wsgi "pools" problem is a show-stopper for running both at the same time. There might be others.
20:45:56 Cool, let us know
20:46:09 #topic Security gate - Bandit
20:46:24 What is Bandit?
20:46:25 So, I had the pleasure of doing an internal security review of Octavia.
20:46:35 I'm so sorry.
20:46:36 #link https://wiki.openstack.org/wiki/Security/Projects/Bandit
20:46:37 by pleasure do you mean misery?
20:47:00 Thanks, blogan-google.
20:47:01 You will see there are a few bugs recently added of things we should look at. I have started that already
20:47:13 what?
20:47:18 they only asked him easy questions "like how does it come you are so awesome"
20:47:29 One recommendation they had was to add the bandit gate to our project.
20:47:33 johnsom: Oh cool! Are you going to transfer actionable stuff in there to launchpad?
20:47:51 sbalukoff They are in launchpad now
20:48:05 The HMAC timing thing was one of them
20:48:10 I don't have a problem adding a bandit gate-- non-voting for now and let's see how it goes?
20:48:20 same here
20:48:33 Right, they offered to help us set up a non-voting bandit gate. I wanted to run it by you folks first.
20:48:41 Do eet!
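[Editor's note: the "HMAC timing thing" mentioned above is a classic finding in security reviews — comparing a received MAC to the expected one with `==` short-circuits on the first mismatched byte, leaking timing an attacker can exploit. The fix in Python is `hmac.compare_digest` (stdlib, 2.7.7+/3.3+). The sketch below illustrates the pattern only; the key and payload format are placeholders, not Octavia's actual heartbeat protocol.]

```python
import hashlib
import hmac

KEY = b"example-shared-secret"  # placeholder; a real deployment uses a configured key

def sign(payload):
    """Compute an HMAC-SHA256 digest over a message payload."""
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def verify(payload, digest):
    """Constant-time comparison of the expected and received digests.

    Using `==` here instead would leak, via response timing, how many
    leading bytes of a forged digest were correct.
    """
    return hmac.compare_digest(sign(payload), digest)
```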
20:48:51 +1
20:48:55 +1
20:49:04 They also mentioned there is a fine guide here:
20:49:06 #link http://docs.openstack.org/security-guide/
20:49:16 I like the idea of not having obvious security problems in Octavia. We're a ways away from getting there, but we've got to start somewhere, right?
20:49:18 For common issues, etc.
20:49:45 We actually came out in pretty decent shape. We do have work to do, but not bad.
20:49:54 Nice!
20:50:02 Ok, so if no objections, I will work with them to get the non-voting gate set up.
20:50:24 I'm totally going to pre-emptively override blogan's objections.
20:50:25 Do it!
20:50:28 I'm going to skip progress reports, I think we covered that already.
20:50:43 #topic Open Discussion
20:50:47 review requested pls: https://review.openstack.org/#/c/172199/
20:50:53 Did blogan object? I didn't see that
20:50:55 once it passes all gates
20:51:01 i guess i object a lot :(
20:51:10 it's the reworked-reworked-reworked scenario lb test
20:51:18 Ok
20:51:22 No, I was just pre-emptively overriding his objects in case he objects.
20:51:30 objections.
20:51:33 it should be ok - sbalukoff I'll address the deprecated urls in subsequent changes
20:51:39 i was actually thinking of objecting, bc of our current gate and job instability
20:51:48 but its non-voting and we'll see what happens
20:52:07 well, our enemies will see all opera security bugs
20:52:12 See? Aren't you glad I pre-emptively gave you all permission to ignore that?
20:52:14 ;)
20:52:16 we don't have opera security bugs!
20:52:23 our
20:52:24 Yeah, but non-voting should be ok. Plus the gate issues are high priority bugs. I think I have found some issues in the tests that may fix some of this
20:52:50 johnsom: I know I have. Am reeeeeally close to fixing some of that.
20:53:03 (Like, if I had 10 more minutes before this meeting.)
20:53:19 sbalukoff I was going to update the httplib/urllib stuff. Is that the same you are working on?
20:53:32 fnaval: Good to know on the tempest test work!
20:53:51 For what it's worth... tempest testing isn't a "feature" right? So, we can potentially merge that at any time, right?
20:53:56 cool thanks please review when you get a chance
20:53:59 I hope so!
20:54:08 blogan said that we should be able to
20:54:18 sbalukoff: i'd ask supreme overlord dougwig on that
20:54:18 since it's tests
20:54:20 yeah, no deadline for that
20:54:20 johnsom: Nope, that's not what I'm working on. Feel free!
20:54:23 fnaval: i said i wasn't sure
20:54:41 sbalukoff cool, I will put up a patch for neutron_lbaas/tests/tempest/v2/scenario/base.py this afternoon
20:54:46 I think we did it that way before
20:54:47 k check with dougwig - but please take a look at the tests anyway
20:55:03 johnsom: cool thanks johnsom
20:55:06 fnaval: +1
20:55:19 Just a reminder:
20:55:21 #link for orchestration/heat the LBv2 resources ready for final push/reviews https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/lbaasv2-suport
20:55:34 with the same patch like fnaval said, we can run the tests using tempest-plugin.
20:55:39 Oh, sorry, I thought that had landed already
20:55:49 markvan: Oh, good to know!
20:55:54 We should try to get those in.
20:56:01 Or add our +1's.
20:56:05 madhu_ak: yep, that's for your changes to make that happen madhu_ak
20:56:13 ... it's going to be a busy few days here before Monday. :P
20:57:11 sorry, midcycle distraction. what do i need to look at?
20:57:16 Yep. I will give the heat patches another pass. I have reviewed those once
20:57:18 sbalukoff : yes, +1 from the lbaas team will be appreciated to land the heat support for v2
20:57:32 +1 johnsom, thanks
20:57:34 https://review.openstack.org/#/c/172199/ dougwig
20:57:36 dougwig: Any deadline on merging tempest testing code in Octavia?
20:57:49 dougwig: It's not a "feature" right?
20:57:59 sbalukoff: if i say yes, i get a false sense of pressure and more commits. if i say no, it's the truth.
20:58:11 ah ha
20:58:14 lol
20:58:19 dougwig: Haha!
20:58:36 Along those lines, do we want/need to cut an M3 octavia?
20:58:38 dougwig: Thank you for your honesty, eh!
20:58:49 johnsom no
20:58:54 I think final is good
20:59:02 xgerman: +1
20:59:18 Works for me
20:59:20 I think my people are waiting on the final and probably wouldn't use M3 per se.
20:59:41 my people are three months behind...
21:00:07 Yeah...
21:00:10 ok, times out
21:00:16 o/
21:00:18 Thanks, folks!
21:00:23 thanks!
21:00:29 #endmeeting
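[Editor's note: for readers wondering what "adding the bandit gate" discussed above involves in practice — OpenStack projects commonly wire bandit in as a tox environment that the CI job then invokes. The fragment below is an illustrative sketch only; the environment name, severity level, and excluded path are assumptions, not Octavia's actual gate configuration.]

```ini
; Hypothetical tox.ini fragment for a non-voting bandit check.
; -r scans recursively, -ll reports medium severity and up,
; -x excludes the test tree from scanning.
[testenv:bandit]
deps = bandit
commands = bandit -r octavia -ll -x octavia/tests
```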