20:00:03 #startmeeting Octavia
20:00:04 Meeting started Wed Apr 4 20:00:03 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:07 The meeting name has been set to 'octavia'
20:00:11 Hi folks
20:00:18 hi
20:00:35 #topic Announcements
20:00:42 Check your e-mail for a foundation survey about the PTGs
20:01:01 o/
20:01:28 The foundation sent out a survey about the PTGs that I know some of us would be happy to give feedback on. So check your e-mail in case you missed your link to the survey
20:01:46 Also of note, the "S" release will be "Stein"
20:01:49 o/
20:02:02 Solar won the vote, but had a legal conflict, so Stein it is...
20:02:40 I have a number of other OpenStack activity type things later on the agenda, but other than those, any announcements I missed?
20:03:00 Rocky MS-1 is in two weeks
20:03:38 #topic Brief progress reports / bugs needing review
20:05:09 Ok, moving on. It's been a busy stretch for me. I did another spin of the tempest plugin and got more feedback I hope I can address today. After that I started on the provider driver work. I have a base class and a no-op driver up for review (marked WIP as I am still making some adjustments, but feedback is still welcome).
20:06:13 Lots-O-reviews too. Reviewed a bunch of great dashboard stuff (L7 support!) (sorry it took this long to get some reviews on it) and some other recent patches with new features for Rocky.
20:06:22 Anyone else have updates?
20:06:32 Spring break vacations?
20:06:56 I have a couple of things up still
20:07:07 i think maybe timeouts will merge soon?
but, the Usage API could use some attention
20:07:16 not sure everyone is aware of the work i'm trying to do there
20:07:17 rm_work: you couldn't help yourself :D
20:07:29 but would like to get people's general approval on the concept and how it's organized
20:07:51 This is an open comment section for everyone to share what they are working on with the team
20:07:56 cgoncalves: usually that's true ;)
20:07:57 johnsom: thanks a lot (!) for the work on tempest and providers. really looking forward to having them merged
20:08:59 rm_work, sup?
20:09:05 There is still a bunch of work to do on tempest, so if we have volunteers...
20:09:08 eandersson: lol
20:09:23 :D
20:09:25 How is the grenade gate going BTW?
20:09:27 eandersson: wanted you to review something, maybe it's fine now
20:10:02 I see - let me know =]
20:10:17 johnsom: no updates from my side. I know you reviewed it. I've been busy finalizing octavia integration in tripleo and containerizing neutron-lbaas
20:10:31 plus kolla
20:10:35 Ok, looking forward to that too!
20:10:49 we are getting there :-)
20:11:03 I would love to be able to declare an upgrade tag in Rocky
20:11:28 Maybe even two....
20:11:34 #topic Other OpenStack activities of note
20:12:14 These are just some things going on in the wider OpenStack community I think you all might want to know about. Sorry if it's a duplicate from the dev mailing list (let me know if this is of value or not).
20:12:22 No more stable phases; welcome Extended Maintenance
20:12:30 #link https://review.openstack.org/548916
20:12:35 johnsom: the first 2 upgradability tags are achievable with grenade as-is, I think
20:12:38 #link https://review.openstack.org/#/c/552733/
20:12:58 cgoncalves I think there is still a bug or two there, thus the comments
20:13:05 But close!
20:14:01 Based on packager feedback at the PTG and other forums, there is a change in how stable branches are going to be handled. It is still up for review if you want to comment.
20:14:20 A quick note on recent IRC trolling/vandalism
20:14:27 #link http://lists.openstack.org/pipermail/openstack-dev/2018-April/129024.html
20:14:45 Just an FYI that work is being done to try to help with the IRC spammers
20:14:58 A plan to stop syncing requirements into projects
20:15:05 #link http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html
20:15:47 The way global requirements are handled is changing. You have probably seen the lower-constraints gates, but you will see fewer proposal bot GR updates.
20:16:22 And finally, there are some upstream package changes coming we should be aware of in case they break things.
20:16:47 Pip and PBR are both doing major releases in the next few weeks.
20:17:00 Replacing pbr's autodoc feature with sphinxcontrib-apidoc
20:17:07 #link http://lists.openstack.org/pipermail/openstack-dev/2018-April/128986.html
20:17:20 oh yeah, pip 10 is a big one right?
20:17:36 We probably need to investigate that work to update our docs gates. Looking for volunteers there too.
20:18:37 Yeah, the big pip 10 release is out in beta/rc or something. It bit the ansible project since they were pulling from git. It will hit the rest of us in a week or two I think. There have been some mailing list threads on that too.
20:19:06 Even one of the pip 9 dot releases broke the Ryu project and neutron-agents already
20:19:32 So, FYIs. It might get bumpy here soon.
20:19:47 can we, if it makes sense at all, add non-voting/experimental pip 10 jobs?
20:20:44 Probably, yes. If someone has cycles to do that. I think infra/qa is working on some experimental gates to see what is coming.
20:22:40 Sadly I don't think I can carve off that time right now. But I would support others
20:22:42 #topic Octavia deleted status vs. 404
20:22:44 I'm not up to speed on pip 10 and don't have much time this week, but I could try next week if it's not too late
20:23:42 Ok, sure. I think the plan is to land it the week of the 14th. So, you can help with cleanup...
grin Maybe we will be fine. I just wanted to give a heads up on places to look in case things break.
20:24:11 I know we don't import pip modules, so we are ahead of the game there!
20:24:29 Ok, the 404 carry-over from the last few meetings.
20:24:34 #link https://review.openstack.org/#/c/545493/
20:24:47 This is still open
20:25:11 argh, looks like it has a gate issue.
20:25:24 Any other updates about the libraries or other comments on this?
20:26:02 Though that failure, only 8 minutes in, must be a devstack failure. Our code wouldn't be running yet.
20:27:03 Ok, moving on then.
20:27:14 #topic Open Discussion
20:27:21 Other topics for today?
20:27:30 one small topic we started last week
20:27:41 #link https://etherpad.openstack.org/p/storyboard-issues
20:27:56 Right!
20:28:05 I plan to contact the storyboard team next week (we are on holiday this week)
20:28:14 They just finished their meeting I think
20:28:24 so please, if you didn't add your stuff just yet, please do it.
20:28:39 I guess I can just ping ppl in the channel :)
20:28:45 see how that goes..
20:29:11 Yeah, or next week when you are back in the office, plan to join the meeting
20:29:47 I saw they either have moved or are in the process of moving a few more projects
20:30:35 Other topics today?
20:30:39 oh i had a thing
20:31:08 * rm_work dies
20:31:23 I feel like I should...
20:31:33 umm yeah, so
20:31:35 * johnsom revives rm_work with potion of health
20:31:43 thanks ^_^
20:31:59 pratrik brought up the concept of AZs again
20:32:14 (I wish they'd stick around or read scrollback so we could chat about it)
20:32:34 but anyway, it seems more than just GD are doing multi-az stuff in the nova scheduler
20:32:45 and it seems to be done in a compatible way
20:33:08 yep, we should add support
20:33:11 so I'm wondering if I cleaned up and split out the work I did around multi-AZ / AZ-anti-affinity, if people think it would be possible to merge
20:33:11 So upstreaming it into nova?
20:33:23 grin
20:33:26 the reason i didn't do this in the past is that enabling it requires custom nova patches
20:33:26 lol
20:33:26 GD?
20:33:30 GoDaddy
20:33:55 I would love to get nova to properly support this but i think that may be a losing battle that would stretch over multiple years
20:34:04 mmh, can't we just make it so that you can supply a list of AZs when you create an LB and we put them there
20:34:18 so what i'm talking about is octavia support, assuming people use approximately the same compatible custom nova scheduler stuff
20:34:38 mmh, how would we test that in the gate?
20:34:45 that is a good question
20:34:57 i guess that might be the closest to a firm answer, actually
20:35:05 Yeah, and AZs would be octavia driver specific...
20:35:05 until it's gate testable, it'd be hard to run it
20:35:14 yes, that also
20:35:20 it'd be specific to the amphora driver
20:35:20 well, as I said we could introduce an AZ parameter when you create an LB
20:35:26 *amphora provider
20:35:35 I'm open to the idea for sure, it makes sense in some cases.
20:35:37 instead of having one per installation
20:36:08 xgerman_: the sort of thing i do now is to have octavia transparently support multi-az, and handle anti-affinity transparently
20:36:19 the user shouldn't need to know anything about this
20:36:41 I just worry about putting in a bunch of code to become a nova scheduler that each implementation wants to do a different way (some have cross-AZ subnets, some don't, etc.)
20:36:54 right, the other complication is networking
20:37:02 I mean our dream would be nova server groups that are AZ aware (I think)
20:37:06 yes
20:37:14 so, i honestly don't care that much
20:37:17 and the network spanning AZs
20:37:37 or we cop out and tell them to use kosmos
20:37:41 but we got some interest
20:37:47 and i have the code ready
20:37:55 it just needs to be split out from my monolith patch
20:38:09 this is a preliminary query about whether it's worth my time
20:38:45 Right. So, I guess since you know what code you have: is it small enough, simple enough, and configuration-enabled that we could add it, with operators enabling it at their own risk?
20:39:06 yep, like “unstable”
20:39:55 I'm not a fan of adding it to the API for users to enter AZs, I think that gets a bit too driver specific / how does a user know the right answer?
20:40:14 But something via flavors or config...
20:41:54 netsplit?
20:42:05 approval
20:42:09 It got quiet
20:42:23 we need flavors for so many things - someone ought to write the code for it
20:42:34 Ha, trying.....
20:42:57 yeah so
20:42:59 Well, to be fair, there are parts of it in a patch already. I'm just adding the glue
20:43:02 that would essentially be it
20:43:08 rm_work: have you probed the nova team about such a feature?
others may also want that, you never know
20:43:29 just an "experimental feature" that can be enabled by using config a certain way
20:43:46 it may or may not be gate-able, other than making sure it doesn't break the normal paths
20:43:53 * rm_work shrugs
20:44:02 it would be neat to not have to carry this patch myself
20:44:03 but
20:44:05 Yeah, I think if it's not like 1000s of lines of code, lots-o-warnings, etc.
20:44:19 alright, i'll look at splitting it out
20:44:33 i think i can get that done effectively
20:44:55 +1
20:45:21 Ok, but if we see a bunch of "add nova placement API to AZ scheduler" patches I'm going to be like... Why isn't this just in nova....
20:45:54 lol
20:46:29 BTW, we can't really use that code from pastebin. Posted that way, we don't know the code was licensed for use in OpenStack, so I am assuming we are talking about your code and patch.....
20:47:34 Ok, other topics today?
20:48:44 Ok, thanks folks! Have a good week (vacation)
20:48:54 #endmeeting
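
The transparent multi-AZ anti-affinity idea discussed in Open Discussion (spread a load balancer's amphorae across a config-supplied list of availability zones, without the user knowing) could be sketched roughly as below. This is a hypothetical illustration only: the function and parameter names are invented for this sketch and are not from Octavia, nor from the patch mentioned in the meeting.

```python
# Hypothetical sketch of operator-configured AZ anti-affinity:
# when booting a new amphora for a load balancer, choose the
# configured AZ that currently hosts the fewest of that LB's
# amphorae. All names here are illustrative assumptions.

from collections import Counter


def pick_az(candidate_azs, existing_az_placements):
    """Return the AZ with the fewest amphorae already placed for this LB.

    candidate_azs: AZ names an operator enabled in config.
    existing_az_placements: AZ names of this LB's current amphorae.
    """
    counts = Counter(existing_az_placements)
    # min() keeps the first minimal entry, so ties break toward the
    # order the operator listed the AZs in config.
    return min(candidate_azs, key=lambda az: counts[az])


# Example: two amphorae already in az1, none in az2 -> choose az2.
azs = ["az1", "az2"]
print(pick_az(azs, ["az1", "az1"]))  # -> az2
print(pick_az(azs, []))              # -> az1 (first configured AZ)
```

As raised in the meeting, gating such behavior is the hard part: a single-node devstack has one AZ, so upstream tests could at best confirm the feature does not break the normal single-AZ path.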