20:00:06 #startmeeting Octavia
20:00:07 Meeting started Wed Oct 25 20:00:06 2017 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:10 The meeting name has been set to 'octavia'
20:00:11 o/
20:00:24 Hi folks
20:00:56 םץ.
20:00:59 o/
20:01:02 (sorry :D )
20:01:02 hi
20:01:10 hi
20:01:17 Ah, there are some people.
20:01:21 #topic Announcements
20:01:27 o/
20:02:03 The Queens MS1 release has not yet gone out. We are working through gate/release system issues from the zuul v3 changes. I expect it will happen in the next day or two.
20:02:33 The newton release has officially gone EOL and the git branches removed. This happened last night.
20:03:04 Just a reminder if you need to reference that code, there is a tag newton-eol you can pull, but we can't put any more patches on newton.
20:03:30 And finally, I am still working through all of the zuul v3 changes.
20:04:01 Octavia and neutron-lbaas should no longer have duplicate jobs, but I'm still working on the dashboard repos.
20:04:21 I still have more cleanup to finish and I need to fix the stable branches.
20:04:37 for zuul v3?
20:04:39 However, I think we have zuul v3 functional at this point.
20:04:46 yes
20:05:18 stable branch gates aren't running at the moment (just failing).
20:06:18 Oh, we do have some TC changes:
20:06:20 #link https://governance.openstack.org/election/results/queens/tc.html
20:06:40 Any other announcements today?
20:07:23 #topic Brief progress reports / bugs needing review
20:07:39 Please continue reviewing and commenting on the provider driver spec:
20:07:47 #link https://review.openstack.org/509957
20:08:06 back to asking folks to consider THIS option for amphora az caching:
20:08:08 #link https://review.openstack.org/#/c/511045/
20:08:29 longstaff Are you planning patchset version 3 soon?
20:08:33 I added a topic to the agenda
20:08:36 for that
20:08:41 oh k
20:09:19 Yes. I plan to commit an update to the provider driver spec tomorrow.
20:09:30 Excellent, thank you!
20:10:17 I have been focused on zuul v3 stuff and some bug fixes to issues that came up over the last week.
20:10:33 I still have more patches for bug fixes coming.
20:10:51 And many more zuul v3 patches... sigh
20:11:12 #link https://storyboard.openstack.org/#!/story/2001258
20:11:21 Any other progress updates to share or should we jump into the main event: AZs
20:11:24 johnsom and I put up some patches to improve LBaaS v2 <-> Octavia
20:12:00 johnsom: saw your notes about HM failover after the listener fails. Is there a way to not trigger failover unless the original listener was in a known good state?
20:12:12 since this is a new provision
20:12:17 jniesz Yeah, one of those has a patch up for review. The secondary outcome is around the network driver being dumb. I haven't finished that patch yet
20:13:21 I have taken my eyes off the ball in OpenStackAnsible and so there is some cruft we are tackling: https://review.openstack.org/#/c/514767/ —
20:14:02 good news I will remain core for the Q cycle over there…
20:14:14 in Octavia OSA
20:14:38 jniesz Well, the failover we saw was a valid failover. The amp should have a working listener on it, but it didn't, so IMO Octavia did "the right thing" and ended in the right state with the load balancer in error as well because we could not resolve the problem with the amp. (notes for those that haven't read my novel: A bad local jinja template change caused listeners to fail to deploy)
20:15:09 Yeah, Octavia OSA is getting some more attention
20:15:37 so wouldn't we want to not fail over unless the original provisioned ok?
20:15:38 that seems right -- if the listeners fail to deploy, even on an initial create, why wouldn't it be an error?
20:15:55 oh, failover -- ehh... maybe? I mean, it could have been a one-time issue
20:16:02 go to error -- definitely
20:16:06 but was it failover-looping?
20:16:09 jniesz No, it could have failed for reasons related to that host, like the base host ran out of disk
20:16:37 we might want to have something that tracks failover-count (I wanted this anyway) and at some point if it fails too many times quickly detect something is wrong
20:16:58 No, it saw the listener had failed, attempted a failover, which also failed (same template), so it gave up and marked the LB in error too
20:17:07 ah
20:17:10 yeah that seems right
20:17:23 is failover the same as re-create listener?
20:17:26 is that the same code path?
20:17:31 LB in ERROR will stop the failover attempts
20:18:16 No, failover is rebuild the whole amphora, which, because there was a listener deployed, did attempt to deploy a listener again.
20:18:57 Fundamentally the "task" that deploys a listener is shared between create and failover
20:20:28 and we trigger the failover of the amp because at that point it is in an unknown state I assume
20:20:33 because of error
20:21:52 It triggered failover because the timeout expired for the health checking on the amp. The amp was showing no listeners up but the controller knew there should be a listener deployed. It gives it some time and then starts the failover process.
20:22:17 because the listener create responded back with an invalid request
20:23:39 No, this health checking was independent of that error coming back and the listener going into "ERROR".
20:24:06 Two paths, both figuring out there was a problem, with one escalating to a full amp failover
20:24:33 so if listener create goes from PENDING -> ERROR, HM will still trigger?
20:24:42 or if it goes from PENDING -> RUNNING -> failure
20:26:41 If the LB goes ACTIVE and then some part of the load balancing engine is not working it will trigger
20:27:52 ok, because from logs octavia.controller.worker.tasks.lifecycle_tasks.ListenersToErrorOnRevertTask went to RUNNING from PENDING
20:28:08 and then it failed on octavia.controller.worker.tasks.amphora_driver_tasks.ListenersUpdate
20:28:10 that task always runs I think?
20:28:30 which is when it hit the template issue
20:28:33 it's so when the chain reverts, it hits the revert method in there
20:29:37 Yeah, in o-cw logs you will see ListenersUpdate went to RUNNING, then failed on the jinja so went to REVERTED, then reverted ListenersToErrorOnRevertTask which put the listener into ERROR.
20:29:49 yep
20:30:24 Independently, the o-hm (health manager) was watching the amp and noticed there should be a working listener but there was not.
20:30:46 that is when o-hm would trigger a failover in an attempt to recover the amp
20:31:06 ok, but should ListenersUpdate have ever gone to RUNNING?
20:31:14 Yes
20:31:45 ListenersUpdate task should have gone: PENDING, RUNNING, REVERTING, REVERTED
20:32:02 I think there is a scheduling state in the beginning somewhere too
20:32:19 ListenersUpdate definitely started, but failed due to the bad jinja
20:33:18 so the jinja check is not done until post ListenersUpdate
20:33:24 Ok, we are at the halfway point in the meeting time, I want to give the next topic some time. We can continue to discuss the bug after the meeting if you would like.
20:33:35 ok that is fine
20:33:40 the jinja check is part of the ListenersUpdate task
20:33:47 yeah basically, i think it went exactly as it should
20:33:55 from what i've heard
20:34:16 Yeah, me too aside from the bug and patch I have already posted.
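[For anyone tracing the flow described above, a minimal taskflow-style sketch of the "mark ERROR on revert" pattern being discussed. Class and helper names are simplified illustrations, not Octavia's actual implementation.]

    # Minimal sketch of the revert pattern discussed above. Names are
    # illustrative only and do not match Octavia's real tasks or helpers.
    from taskflow import task

    class ListenersToErrorOnRevertSketch(task.Task):
        """Succeeds immediately on execute; marks listeners ERROR on revert."""

        def execute(self, listeners):
            # Runs (and completes) early in the flow, so if a later task,
            # such as the listener update that renders the jinja config,
            # fails, the flow unwinds back through this task's revert().
            pass

        def revert(self, listeners, *args, **kwargs):
            # Only reached when a downstream task fails and the flow reverts.
            for listener in listeners:
                _mark_listener_error(listener)

    def _mark_listener_error(listener):
        # Hypothetical stand-in for the repository/DB update that would
        # persist the provisioning status change.
        listener['provisioning_status'] = 'ERROR'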
20:34:31 #topic AZ in Octavia DB (xgerman_)
20:35:05 So, I wonder if there are highlights from our PM discussion on this that would be good to summarize
20:35:14 Sure,
20:35:42 Basically, there are cases where we need to be able to quickly filter by AZ ... and doing a full query to nova is not feasible (at scale)
20:35:43 My summary is rm_work wants to cache the AZ info from an amphora being built by nova
20:36:20 and my argument is if somebody wants to query by data center, rack — I don’t want to add those columns
20:36:39 so this feels like metadata and needs a more generalized solution
20:36:39 so we can compromise by storing it clearly labeled as a "cache" that can be used by processes that need quick-mostly-accurate, and we can query nova in addition for cases that need "slow-but-exact"
20:36:52 well, that's not something that's in nova is it?
20:36:57 rack?
20:37:07 and DC would be... well outside the scope of Octavia, since we run in one DC
20:37:11 as does ... the cloud
20:37:20 so it's kinda obvious which DC you're in :P
20:37:23 The use case is getting a list of amps that live in an AZ and to be able to pull an AZ-specific amp from the spares pool.
20:37:51 i mean, the scope limit here is "data nova provides us"
20:37:52 so we are doing scheduling by AZ
20:38:06 I'd like to enable operators to do that if they want to
20:38:11 Well, other deployers have different ideas of what an AZ is.... for many, an AZ is a different DC
20:38:23 right which does make sense
20:38:28 so, that's still fine IMO
20:38:32 yeah, so if it is for scheduling it should be some scheduling hint
20:38:47 yeah, we can do scheduling hints in our driver
20:39:05 yea, I would like the scheduling control as well
20:39:07 but they rely on knowing what AZ stuff is in already
20:39:38 my worry is keeping it generalized enough that we can add other hints in the future
20:39:42 and querying from nova for that isn't feasible at scale and in some cases
20:39:52 i'm just not sure what other hints there ARE in nova
20:39:54 maybe HV
20:39:58 err, "host"
20:40:08 I don’t want to limit it to nova hints
20:40:10 which ... honestly i'm OK with cached_host as well, if anyone ever submits it
20:40:14 Really the issue is OpenStack has a poor definition of AZ and nova doesn't give us the tools we want without modifications or sorting through every amphora the service account has.
20:40:17 well, that's what we get back on the create
20:40:38 we're just talking about adding stuff we already get from a compute service
20:40:50 and it's hard to talk about theoretical compute services
20:40:52 johnsom, does it not allow getting a filtered list by AZ?
20:41:03 i mean, just the instances in a specific AZ?
20:41:03 but in general, we'd assume some concept that fits generally into the category of "az"
20:41:43 I am saying we should take a step back and look at scheduling — is adding an az column the best/cleanest way to achieve scheduling hints
20:41:56 nmagnezi Yes, but we would have to take that list, deal with paging it, and then match to our spares list. For example.
20:42:16 nmagnezi: basically, if we want to see what AZs we have in the spares pool, the options are: call nova X times, where X is the number of amps in the spares pool; or: call nova-list and match stuff up, which will cause significant paging issues at scale
20:42:36 or you can pull out of nova db : )
20:42:39 Don't get me wrong, I'm not a huge fan of this, but thinking through it with rm_work led me to feel this might be the simplest solution
20:42:46 jniesz: lol no we cannot :P
20:42:52 yeah sounds problematic..
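[To make the trade-off above concrete, a rough novaclient sketch of the two paths being compared. nova_client is assumed to be an authenticated novaclient instance and save_cached_az is a hypothetical persistence helper; neither is Octavia code.]

    # Rough sketch, not Octavia code. The "cache" path reads the AZ nova
    # already reports for the instance and stores it with the amphora record.
    def cache_amphora_az(nova_client, compute_id, save_cached_az):
        server = nova_client.servers.get(compute_id)
        # The AZ is exposed through the OS-EXT-AZ extension attribute.
        az = getattr(server, 'OS-EXT-AZ:availability_zone', None)
        save_cached_az(compute_id, az)
        return az

    # The "slow-but-exact" path: list everything the service account owns and
    # match it against the spares pool, which means paging through the full
    # server list at scale.
    def azs_for_spares(nova_client, spare_compute_ids):
        found = {}
        for server in nova_client.servers.list():
            if server.id in spare_compute_ids:
                found[server.id] = getattr(
                    server, 'OS-EXT-AZ:availability_zone', None)
        return found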
20:43:10 xgerman_: well, our driver actually *does* do scheduling on AZ already
20:43:16 * johnsom slaps jniesz's had for even considering getting into the nova DB....
20:43:18 it's just not reliant on existing AZs
20:43:24 so if I run different nova flavors and want to pull one out based on flavor and AZ — how would I do that?
20:43:34 had->hand
20:43:41 would I add another column?
20:43:54 xgerman_: do we not track the flavor along with the amp?
20:43:59 I guess not? actually we probably should
20:44:18 I even kinda want to track the image_ref we used too...
20:44:47 we can add image_ref and compute_flavor IMO
20:44:53 alongside cached_az
20:44:58 i would find that data useful
20:45:19 I see xgerman_'s point, this is getting a bit tightly coupled to the specific nova compute driver
20:45:38 should we store this in flavor metadata?
20:45:38 (I was also looking at a call to allow failover of amps by image_id so we could force-failover stuff in case of security concerns on a specific image)
20:45:51 that would only be initial though and wouldn't account for changes
20:45:54 johnsom, i was just thinking the same. we need to remember we do want to allow future support for other compute drivers
20:45:54 jniesz: i was talking about in the amphora table
20:45:54 yep, that’s what I was getting at + I don’t know if I want to write python code for different scheduling hints
20:46:12 i think the DB columns should absolutely be agnostic
20:46:18 but we can use them in the compute drivers
20:46:48 I think the concepts of "compute_flavor" and "compute_image" and "compute_az" are pretty agnostic though -- won't most things need those?
20:47:09 even amazon and GCE have AZs AFAIU
20:47:17 and flavors and images...
20:47:43 kubernetes does too
20:47:46 what about vmware
20:47:49 so I would be in favor of adding *all three* of those columns to the amphora table
20:47:51 aside from flavors (which I just don't know if it exists as a concept in other alternatives) I agree
20:48:19 i'm not saying i disagree about flavors, just not really sure
20:48:20 :)
20:48:35 i don't know much about vmware -- i would assume they'd NEED something like that, no?
20:49:27 we're all about being generic enough here to support almost any use-case -- and i'm totally on board with that, and in fact that's the reason for this
20:49:30 we can either use some metadata concept where we go completely configurable or pick sinner and throw them in the DB
20:49:38 winners
20:49:40 rm_work, google says they do have it.
20:49:41 not sinners
20:49:44 it should be possible to write a custom compute driver that can take advantage of these fields
20:50:05 i think it's pretty clear that those fields are something shared across most compute engines
20:50:08 I mean at a base level, I wish nova server groups (anti-affinity) just "did the right thing"
20:50:15 +1
20:50:17 yeah, that'd be grand, lol
20:50:40 vmware I thought only had flavors with OpenStack, not native
20:50:49 you clone from template or create vm
20:51:20 jniesz: in that case, flavor would be ~= "template"?
20:51:41 well that included the image
20:51:48 and I think you still specify vcpu, mem
20:51:51 So, some thoughts:
20:51:51 Cache this now
20:51:51 Work on nova to make it do what we need
20:51:51 Set up each compute driver to have its own scheduling/metadata table
20:51:51 ....
20:51:56 it has been a while for me with VMware
20:52:20 different drivers might care about different metadata, so I like that idea
20:52:25 jniesz the vmware integration has a "flavor" concept
20:52:31 johnsom +1
20:53:03 I think keeping it flexible and not putting it straight into the amp table gives us space for future compute drivers/schedulers
20:53:13 Those were options/ideas, not an ordered list BTW
20:53:26 i think it *is* flexible enough in the amphora table to allow for future compute drivers
20:54:16 I just worry if the schema varies between compute drivers, how is the API going to deal with it...
20:54:27 i think rm_work has a point here. and for example even if a compute driver does not have the concept of AZs we can just see it as a single AZ for all instances
20:55:04 Yes
20:55:10 yep
20:55:17 and yes, exactly, worried about schema differences
20:55:40 We have five minutes left in the meeting BTW
20:55:50 I am not too worried about those — they are mostly for scheduling and the API doesn't know about AZs right now anyway
20:56:23 We have the amphora API now, which I expect rm_work wants the AZ exposed in
20:56:48 it's been three years and what we have for compute is "nova", haven't even managed to get the containers stuff working (which also would have these fields), and every compute solution we can think of to mention here would also fit into these ...
20:57:00 so blocking something useful like this because of "possible future whatever" seems lame to me
20:57:02 just saying
20:57:10 so what’s bad about another table?
20:57:18 Well, let's all comment on the patch:
20:57:20 #link https://review.openstack.org/#/c/510225/
20:57:21 just, why
20:57:30 we could do another table -- but we'd want a concrete schema there too
20:57:43 and the models would just auto-join it anyway
20:57:45 I will mention, AZ is an extension to nova from what I remember and is optional
20:57:48 yes, but it would be for nova and if we do vmware they would do their own table
20:58:00 the models need to do the join
20:58:07 we can't have a variable schema, can we?
20:58:12 +1
20:58:20 +1 to which, lol
20:58:28 I really don't want a variable schema. It has ended poorly for other projects
20:59:24 isn't it hard to have a single schema for multiple drivers that would have completely different concepts?
20:59:36 Final minute folks
20:59:43 jniesz: yes, but i don't think they HAVE different concepts
21:00:02 please name a compute system where we couldn't somehow fill in az/flavor/image_ref in a meaningful way
21:00:07 i can't think of one
21:00:09 Review, comment early, comment often... grin
21:00:22 #endmeeting
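[For readers following the column debate above, a rough Alembic sketch of what adding the discussed fields to the amphora table could look like. The column names, types, and revision identifiers are assumptions for illustration and are not necessarily what the linked patch (510225) implements.]

    # Illustrative migration sketch only; not the patch under review.
    from alembic import op
    import sqlalchemy as sa

    revision = 'add_compute_cache_columns'      # placeholder revision id
    down_revision = '<previous revision id>'    # placeholder

    def upgrade():
        # Best-effort values reported by the compute driver at build time
        # (the "quick-mostly-accurate" cache path discussed in the meeting).
        op.add_column('amphora', sa.Column('cached_zone', sa.String(255),
                                           nullable=True))
        op.add_column('amphora', sa.Column('compute_flavor', sa.String(255),
                                           nullable=True))
        op.add_column('amphora', sa.Column('image_id', sa.String(36),
                                           nullable=True))

[Keeping the columns nullable matches the point raised in the meeting that a given compute driver may have no AZ or flavor concept at all, in which case everything can be treated as a single implicit zone.]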