20:00:05 #startmeeting Octavia
20:00:07 Meeting started Wed Mar 1 20:00:05 2017 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:10 The meeting name has been set to 'octavia'
20:00:11 o/
20:00:33 o/
20:00:37 Hi folks
20:01:21 o/
20:01:40 #topic Announcements
20:01:52 The first OpenStack PTG was last week
20:02:28 We had pretty good representation for Octavia with five regular team members present
20:02:46 +1
20:03:09 I tried to keep notes on the etherpad as I attended meetings or we discussed items from the etherpad
20:03:17 #link https://etherpad.openstack.org/p/octavia-ptg-pike
20:03:27 I might clean that up a bit if I get time later today.
20:03:39 Highlights...
20:04:07 We met with barbican and it seems like the cascading ACLs can happen. I opened a bug to track it
20:04:22 #link https://bugs.launchpad.net/barbican/+bug/1666963
20:04:22 Launchpad bug 1666963 in Barbican "Enable cascading ACLs based on container ID" [Undecided,New]
20:04:45 Lots of folks are interested in using octavia in container environments
20:04:57 We also had a lot of interest in amphora in containers
20:05:36 We had a good discussion about the state of testing in octavia and a path forward there
20:06:04 We also highlighted some much needed documentation and put names to those
20:06:34 I want to discuss the OpenStack client part later in the agenda so I will wait on that.
20:06:42 Any questions about the PTG or the notes?
20:07:04 not at the moment
20:07:10 maybe after your cleanup :)
20:07:19 Hahaha, yeah, it got messy
20:07:25 indeed :D
20:07:31 So many discussions going on....
20:07:46 yeah.. sorry i couldn't be there
20:08:31 py35 is a pike goal, or queens?
20:08:40 pike
20:08:47 Other announcements, I have put in a request for new git repositories: octavia-dashboard (for renaming and migrating the dashboard), python-octaviaclient (OSC plugin), and octavia-tempest-plugin (tempest plugin)
20:08:50 I didn’t see that in the notes, though we discussed it
20:09:13 #link https://governance.openstack.org/tc/goals/pike/index.html
20:09:13 johnsom: how about the namespace driver
20:09:17 ?
20:09:23 thx
20:10:06 I have also put in to migrate octavia over to the cycle-with-milestones release model. This will make i18n, packaging, and some of our end-of-cycle steps easier.
20:10:55 diltram Good point, I will need to do that as well
20:11:26 and how about this dashboard?
20:11:32 * johnsom thinks he really should clean up the notes....
20:11:32 we need to move the code on our own?
20:11:45 johnsom, if I may expand on diltram's first question: what about 3rd party driver support in general?
20:11:47 diltram Yes, I will take care of that
20:12:22 nmagnezi: we gonna support drivers
20:12:49 nmagnezi We did talk about that. It is an open rfe/bug for the lbaas-merge work. We just need to get the base API merged before that task can start.
20:12:52 probably I will be responsible for delivering that drivers api
20:13:13 got it. thanks :)
20:13:45 One item that did come out of the PTG for the drivers is we will be adding an endpoint to the health manager to allow the drivers to submit status and stats. This should allow for good scalability
20:15:01 Ok, let's move on
20:15:12 #topic Brief progress reports / bugs needing review
20:15:32 I think we have cleared up the two or three gate issues that were bugging us.
20:16:03 It sounds like the qa folks are starting to work on the devstack issue again, so maybe we can pull out that workaround soon.
20:16:59 I am also continuing to work on our API-REF docs
20:17:18 Any other notable progress?
20:17:35 is this the time to mention new bugs?
20:17:57 Sure, if there are bugs you would like to bring attention to, please do
20:18:04 yup
20:18:18 just one bug i have found today
20:18:19 https://bugs.launchpad.net/octavia/+bug/1669019
20:18:19 Launchpad bug 1669019 in octavia "The gates are not testing the latest amphora-agent code" [Undecided,New]
20:18:50 i basically gave as many details as i could. and IMHO it is important to resolve this
20:18:52 I saw that this morning, but haven't yet looked deeper into it
20:19:03 if we can agree on "how" I can submit the patch
20:19:35 in short, the agent that is being run inside the amp instance does not include the patch that should be tested
20:19:41 nmagnezi We did get switched over to Python 3.5 in the amphora due to the DIB changes in Ocata
20:20:10 ah. ok so that is expected. just brought it up because i wasn't sure
20:20:18 so good to know.
20:21:14 can we support one more parameter to decide how we're gonna install the amphora-agent code?
20:21:17 Yeah, we were not expecting it, but it happened, so we adapted. You will see we have a number of py3x gates now too (more needed actually). Pike has a goal for full py3x testing and support.
20:22:17 johnsom, is devstack going to use python3 as well? (when it is starting the openstack services)
20:22:26 We do override the amphora-agent install to pick up the checked-out version. It's a bit strange to follow. The element says master, but in reality for the devstack, we override it to the current patch
20:23:34 nmagnezi yes, there is the USE_PYTHON3=True setting for localrc. But note, if you set this you cannot just un-set it and have devstack work with python2.7 again
20:23:39 It is not clean
20:24:07 johnsom, noted. thank god for snapshots :)
20:24:21 +1 to that
20:24:44 I will dig a bit deeper on that bug after the meeting.
20:24:54 great
20:24:57 thank you
20:25:34 Any other progress to discuss or bugs to highlight?
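For reference, the devstack python3 switch nmagnezi asked about is a single localrc setting; a minimal sketch (the version pin is illustrative for the Python 3.5 era discussed here):

```ini
# devstack local.conf / localrc fragment -- run OpenStack services under python3.
# As johnsom warns above, un-setting this on an existing devstack does not
# cleanly revert to python2.7; rebuild the environment (or restore a snapshot).
USE_PYTHON3=True
PYTHON3_VERSION=3.5
```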
20:25:54 #topic Octavia team mascot
20:26:05 #link https://etherpad.openstack.org/p/octavia-mascot
20:26:23 Well, we got a little "input" from our designate friends.....
20:26:56 * nmagnezi reads
20:27:26 I am thinking we leave this open for ideas for another week, then next week I will ask for votes
20:27:39 +1
20:28:41 I will see if I can come up with a ranked voting thing that isn't overcomplicated. Otherwise it will be +1s on the etherpad
20:29:37 #topic OpenStack Client (OSC) commands for octavia
20:30:14 At the PTG I spent some time in the OSC room discussing our client needs for Pike
20:30:49 Dean Troyer was very helpful and supportive
20:31:17 It was agreed that we must put our commands in an OSC plugin repo
20:31:52 which I don't like or agree with. But he's the big boss
20:31:58 I think ankur-gupta-f4 was thinking we could do as neutron did and put them in tree, but that sounded like a no-go from the room
20:32:23 I kind of like having it under our control
20:32:36 yeah, having our own repo shields us from being accidentally deleted
20:32:42 +1
20:32:42 +1
20:32:58 * johnsom thinks "security groups"
20:33:32 We also talked about the new "terminology" that is being used for OSC
20:33:44 do we have an octavia namespace in github to at least create a fork from the OSC repo?
20:34:05 The command layout folks in the room liked (myself included) is:
20:34:15 this is actually the first time i hear about the octavia client. I'm not sure I'm following on the disagreement ankur-gupta-f4 had with Dean
20:34:26 the question then becomes do you want our python-octaviaclient to just contain OSC plugin commands, or should we also bring up a pure octavia client, so users can run 'octavia * create' and 'openstack * create'
20:34:30 m-greene I have put in to create our OSC plugin repo. It will not be a fork however, just a plugin.
20:35:28 i.e. something like this https://github.com/openstack/python-neutronclient/tree/master/neutronclient/osc
20:35:38 got it. I was thinking that we’ve had this problem too, and have had to either restore from someone’s fork, or contact github and ask them to “undelete” (which sometimes works)
20:35:39 ankur-gupta-f4 For Pike I am mostly interested in just the OSC plugin. I guess we could consider a native client in the future, but not sure that is the direction OpenStack is going.
20:36:00 okay sounds good and achievable for Pike
20:36:36 Excellent
20:36:56 Anyway, the commands we discussed and I would like the team's feedback on:
20:37:43 "openstack loadbalancer create ..."
20:37:44 "openstack loadbalancer listener create ..."
20:37:44 "openstack loadbalancer pool create ..."
20:38:01 It has tab completion, so that at least speeds it up
20:38:06 +1
20:38:28 we would own the loadbalancer namespace within OSC
20:38:29 basically our stuff would live under the "loadbalancer" namespace
20:38:35 Yep
20:38:47 +1
20:38:49 are the old commands (lbaas-loadbalancer-create for example) going to be deprecated in Pike?
20:38:58 those don't exist in OSC
20:39:15 and the neutronclient is ALREADY deprecated, right?
20:39:19 yep
20:39:20 yea
20:39:22 yes
20:39:35 Yes, as soon as we have a replacement available we can mark the old commands deprecated (though neutron kind of already did that to us).
20:40:40 Any other thoughts/comments on that?
20:41:12 I wish we could alias an "lb" namespace too
20:41:19 but i guess if tab-complete ALWAYS works?
20:41:39 i assume it's only if it installs the bash-completion stuff correctly (and you are using bash?)
20:42:03 Yeah, the alias is an interesting question. I'm not sure about that. Though "lb" might be confusing for the neutron lbaas v1 holdouts
20:42:41 Just to be clear, there will not be support for the LBaaS v1 API in the OSC
20:43:02 rm_work you can run "openstack" and get an interactive environment as an option...
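For context on how these commands would plug in: an OSC plugin registers its commands through setuptools entry points, where underscores in the entry-point name map to spaces in the command. A hypothetical sketch of what a python-octaviaclient setup.cfg could look like (module paths and class names are illustrative, not the actual repo layout):

```ini
[entry_points]
openstack.cli.extension =
    load_balancer = octaviaclient.osc.plugin

openstack.load_balancer.v2 =
    loadbalancer_create = octaviaclient.osc.v2.load_balancer:CreateLoadBalancer
    loadbalancer_listener_create = octaviaclient.osc.v2.listener:CreateListener
    loadbalancer_pool_create = octaviaclient.osc.v2.pool:CreatePool
```

With this installed, `openstack loadbalancer create` dispatches to CreateLoadBalancer, which is how the team would own the "loadbalancer" namespace without forking OSC.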
20:43:25 ah, true
20:43:53 Ok, one last item on my agenda and then the open discussion
20:44:01 #topic Proposed Health Manager endpoint for provider health/stats reporting
20:44:40 At the PTG we discussed adding another endpoint to the health manager processes so that the drivers can post status/stats updates.
20:45:09 This would be similar to how the amps report in their health heartbeats.
20:45:48 I am thinking something simple like another UDP message format, similar signing.
20:46:01 Any comments on that?
20:46:10 can we see a spec?
20:46:14 Are there any other vendor driver folks here?
20:46:26 is that something the vendors are going to want to deal with?
20:46:32 Yeah, it would need a spec. Plus I want to write up a "how to write a driver" doc
20:46:40 rm_work yes
20:46:43 I mean UDP
20:46:57 they asked for functionality similar to neutron agent-list, which tells you the status of agents
20:47:04 ah, yeah ok
20:47:18 rm_work Well, currently they are reaching into the neutron DB to post this. I want a scalable way for them to do it that doesn't mean reaching into the DB
20:47:59 ah, ok, got confused… the one I mentioned was health
20:48:01 i just wonder if they'd be more happy about something like just another REST call
20:48:14 If we just expose a callback in the API process for the drivers, it is limited to the number of API processes deployed, which I would expect to be a smaller number than the HM processes
20:48:33 since they wouldn't need it to deal with failover, the timing would be less of an issue, and it might also be batched
20:48:37 We could do a full REST
20:48:38 most installations I know scale them linearly
20:48:46 Might be a hammer for a fly though
20:48:57 well, not like we've been accused of that before :P
20:49:16 I'd just like to hear from some vendors first, before we go and implement something
20:49:30 Yep, thus the agenda item here....
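The "similar signing" johnsom mentions refers to the HMAC-signed UDP heartbeats the amphorae already send to the health manager. A minimal sketch of that idea applied to driver status messages (the key, payload fields, and helper names are all hypothetical, not Octavia's actual wire format):

```python
import hashlib
import hmac
import json

# Hypothetical shared key; in a real deployment this comes from configuration
# shared between the driver and the health manager.
HEARTBEAT_KEY = b"insecure-example-key"


def build_status_message(payload: dict) -> bytes:
    """Serialize a driver status payload and append an HMAC-SHA256 signature."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    digest = hmac.new(HEARTBEAT_KEY, body, hashlib.sha256).digest()
    return body + digest  # receiver splits off the trailing 32-byte digest


def verify_status_message(message: bytes) -> dict:
    """Check the trailing signature and return the decoded payload."""
    body, digest = message[:-32], message[-32:]
    expected = hmac.new(HEARTBEAT_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(digest, expected):
        raise ValueError("bad status message signature")
    return json.loads(body.decode("utf-8"))
```

Sending is then a single `socket.sendto()` of the built message to the health manager's UDP port; a forged or corrupted datagram fails verification and can simply be dropped, which keeps the endpoint cheap enough to scale with the HM processes.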
20:49:38 plus rm_work, m-greene told us that they're really interested in using our internals
20:49:59 not just plain use of the drivers api
20:50:19 k, i just don't see a lot of vendors at our meetings :P
20:50:29 might be a good thing for the ML
20:50:40 Yeah, sometimes people lurk. I figured this is a good start
20:50:52 yeah, in my opinion
20:51:07 we can start with supporting one really interested vendor
20:51:20 right.. we need to evaluate how much of the octavia guts we can leverage to not reinvent the wheel.
20:52:10 I prefer to help someone who is interested in this rather than all of the people who are completely not interested in it
20:52:11 o/ kk
20:52:12 and companies
20:52:13 probably health, but not housekeeping.. hence a way to post status/health to allow an operator to self-diagnose
20:52:15 Ok, at least the topic was brought up. Next steps would be the ML or a spec for people to comment on
20:52:26 +1
20:53:00 Anyone volunteering to start a spec?
20:53:18 * johnsom thinks it can't hurt to ask.....
20:53:43 i don’t know enough, plus hoping to join in on flavors and possibly gui
20:53:49 Don't trip stepping backward.... Grin
20:54:01 Ok, I will get to it soon-ish
20:54:20 m-greene those would be great
20:54:37 #topic Open Discussion
20:54:43 Rich and I posted comments on the flavors spec, not sure of next steps
20:54:47 depending on my osa adventures I might be able to help
20:54:50 Since we have a few minutes left, any other items?
20:55:20 should we talk flavors?
20:55:24 m-greene Yeah, I'm not sure if the original poster is still able to work on that or not.
20:55:53 We have five minutes left. Let's comment on the spec and put it on next week's agenda.
20:55:59 +1
20:56:10 also ACTIVE-ACTIVE
20:56:17 #link https://review.openstack.org/392485
20:56:36 Yeah Act/Act is another good one for next week
20:56:42 k
20:58:07 Done, on next week's agenda
20:58:09 I am planning/re-planning my team’s work through June. Is GUI or flavors more important to the community?
20:58:23 GUI always increases adoption
20:58:29 so I would vote GUI
20:58:42 Yeah, GUI is the mass-market appeal
20:58:43 +1
20:59:02 both are “high” value to me, but not sure we’d be able to tackle both technically.
20:59:05 ok
20:59:40 well, once the spec is ironed out we can see if somebody else can pick up flavors…
21:00:02 I would like to see progress on the flavors spec though. We need some level of "flavors" in the API
21:00:18 +1
21:00:19 Ok, we are out of time today. Thanks folks!
21:00:23 o/
21:00:27 thx, cu
21:00:29 o/
21:00:30 #endmeeting