21:00:09 #startmeeting nova
21:00:10 Meeting started Thu Sep 27 21:00:09 2018 UTC and is due to finish in 60 minutes. The chair is melwitt. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:12 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:14 The meeting name has been set to 'nova'
21:00:18 hi everyone, welcome to the nova meeting
21:00:20 o/
21:00:22 o/
21:00:54 ō/
21:00:54 is it just the 3 of us? this is going to be a quiet meeting
21:01:05 well, there goes that
21:01:12 there goes the neighborhood
21:01:17 ok, let's start
21:01:20 #topic Release News
21:01:31 #link Stein release schedule: https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule
21:01:50 we had the PTG a week and a half ago
21:01:51 \o
21:01:57 milestone 1 is Oct 25
21:02:29 #link NovaScheduler minutes http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-09-24-14.00.html
21:02:29 mriedem gave an update on grenade stuff for placement extraction. It's proceeding apace.
21:02:29 We talked about the consumer generation series
21:02:31 shit
21:02:33 sorry
21:02:40 heh, that's ok
21:02:58 * efried slinks off to compose in an editor like he ought
21:03:02 I was gonna say, be on the lookout for specs to review and propose specs at this stage
21:03:26 spec freeze is milestone 1 or 2?
21:03:31 help remind people to re-propose specs from last cycle if applicable
21:03:39 https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule says milestone 2
21:04:23 I think we're planning for milestone 2. I'm starting on the analysis of the RC-time regressions here https://etherpad.openstack.org/p/nova-rocky-rc-regression-analysis. so far, I haven't done anything other than copy/paste the regressions themselves
21:04:37 wfm since i'm not really reviewing specs atm
21:04:59 if anyone wants to help, that would be appreciated. the idea was, let's find out if the regression fest was related to the later spec freeze date
21:05:37 so I'm going to look at when each regression was found, when the change related to it landed, etc. and see if there's anything there to suggest an earlier freeze date would be helpful in reducing regressions
21:06:24 I'll add notes to the analysis etherpad and ask everyone to take a look, and we'll decide as a group whether to move the spec freeze back to milestone 1
21:06:43 I'm going to start adding notes after this meeting
21:07:29 anybody have any other questions on the release schedule?
21:07:50 #link Stein runway etherpad: https://etherpad.openstack.org/p/nova-runways-stein
21:08:03 runways are open for stein and we have two blueprints currently in runways
21:08:10 #link runway #1: https://blueprints.launchpad.net/nova/+spec/use-nested-allocation-candidates (gibi) [END: 2018-10-04] next patch is https://review.openstack.org/583667
21:08:16 #link runway #2: https://blueprints.launchpad.net/nova/+spec/vmware-live-migration (rgerganov) [END: 2018-10-12] one patch https://review.openstack.org/270116
21:08:30 both of these are getting good review attention
21:09:11 we did releases of stable branches last week/this week
21:09:20 can't remember exactly when all of them went out
21:09:38 anything else on release news?
21:09:52 #topic Bugs (stuck/critical)
21:10:05 no critical bugs in the link
21:10:13 #link 65 new untriaged bugs (same since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
21:10:20 #link 21 untagged untriaged bugs (down 3 since the last meeting): https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
21:10:31 bug counts have crept up since everyone was busy with the PTG
21:10:42 thanks to those who have been helping with triage since getting back
21:11:02 let's continue to get more triage done to get the numbers down
21:11:08 #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
21:11:15 #help need help with bug triage
21:11:22 Gate status
21:11:26 #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
21:11:35 gate status has been especially bad lately
21:11:43 clarkb had an email to the ML about the situation
21:11:46 * melwitt gets link
21:12:20 #link zuul job backlog http://lists.openstack.org/pipermail/openstack-dev/2018-September/134867.html
21:12:52 see this email for how you can help ^
21:13:19 I must admit I'm not clear on whether there are any specific gate bugs we can focus on fixing to help alleviate the strain on the gate
21:13:28 categorization rate is good now http://status.openstack.org/elastic-recheck/data/integrated_gate.html
21:13:37 problem is dropped node providers
21:13:40 mdbooth was working on parallel_evacuate_...
21:13:50 that one's been a persistent PITA
21:13:56 that's a drop in the bucket
21:14:02 fine
21:14:03 ok, so at this point it's the loss of cloud provider mainly
21:14:12 that's my understanding,
21:14:15 mriedem: eh, there are still a ton of flaky jobs in openstack
21:14:17 would be nice if clarkb did an update on that thread
21:14:21 I wouldn't pin this all on lack of cloud resources
21:14:23 but yes, mdbooth is working on a gate bug we've seen cropping up. and the patch has been receiving review
21:14:32 http://status.openstack.org/elastic-recheck/index.html#1686542
21:14:34 glance, neutron, cinder in particular have some really flaky jobs
21:14:40 http://status.openstack.org/elastic-recheck/index.html#1708704
21:14:42 functional testing mostly
21:14:48 http://status.openstack.org/elastic-recheck/index.html#1783405
21:14:57 those are all extremely bad,
21:15:06 i don't know what to do about the mirrors one
21:15:19 http://status.openstack.org/elastic-recheck/gate.html#1449136
21:15:23 0 fails in 24 hrs / 899 fails in 10 days
21:15:24 mriedem: the "no more mirrors to try" one?
21:15:33 I think that's mostly tripleo talking to their centos prerelease repos
21:15:48 I'm not sure there is anything we can do for that other than having them improve the reliability of those services
21:15:55 if it's tripleo, meh
21:15:59 i mostly care about integrated gate
21:16:14 ya, they consume some random rpm repos that are for centos pre-release etc
21:16:31 http://status.openstack.org/elastic-recheck/gate.html#1449136 is the killer
21:16:36 ok, I guess those numbers confuse me because they have quite low fail numbers per 24 hours. how can they be causing the massive backlog?
21:16:52 900 failures in 10 days is extreme
21:16:55 on ^
21:17:03 and that's hitting a bunch of the tox unit test jobs
21:17:04 no, it totally is, I just don't understand the 0 in 24 hours part
21:17:12 which have doubled recently b/c of python3-mageddon
21:17:14 does that mean it's gone as of today?
21:17:24 no
21:17:27 indexing is backed up
21:17:29 by like 12 hours
21:17:37 Delay in Elastic Search: Indexing behind by 10 hours
21:17:41 ok, I see
21:18:11 ok, maybe we can do a ML email about that particular bug and see if we can get some help tracking it down
21:18:30 something specific to focus on
21:18:57 ok, moving on for now
21:18:59 3rd party CI
21:19:11 #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
21:19:49 I had been seeing a lot of 3rd party CI failures anecdotally this week but things seem to be turning green
21:20:04 seeing more jobs pass today than yesterday, it seems
21:20:21 anyone have anything else on bugs or gate status or third party CI before we move on?
21:20:40 #topic Reminders
21:20:50 #link high level nova PTG summary: http://lists.openstack.org/pipermail/openstack-dev/2018-September/135122.html
21:21:03 summary to the ML posted yesterday ^
21:21:26 forum session topic submissions deadline was yesterday as well
21:21:53 please see https://etherpad.openstack.org/p/nova-forum-stein for a list of topics we've submitted and a link to how you can view all of the forum topic submissions
21:22:01 #link Stein Subteam Patches n Bugs: https://etherpad.openstack.org/p/stein-nova-subteam-tracking
21:22:23 etherpad for subteams to organize patches ^ gmann has added API related patches to the API subteam section
21:22:36 anyone else have any reminders to call out?
21:22:59 #topic Stable branch status
21:23:05 #link stable/rocky: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/rocky,n,z
21:23:11 #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
21:23:15 #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
21:23:20 #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
21:23:27 we have lots of stable reviews to do, all around
21:23:56 and I think we're looking at doing stable releases again after merging these patches we have. lots of bug fixes to be had
21:24:14 we also need to cut ocata-em
21:24:39 http://lists.openstack.org/pipermail/openstack-dev/2018-September/134915.html
21:25:01 ah, ok. I will look at that
21:25:03 thanks
21:25:05 tl;dr if we want to cut a final ocata release before tagging the ocata branch as extended maintenance, we need to flush the queue and then do that,
21:25:18 which means getting a lot of the rocky/queens/pike stuff merged so we can merge the remaining ocata backports
21:25:28 b/c once ocata is em, we don't do any more releases
21:26:12 so,
21:26:14 ok, ok... I guess that feels the same as EOL. I need to read up on how it's different again
21:26:18 maybe we need to coordinate a review day next week
21:26:35 it's not eol b/c we don't delete the branch
21:26:40 but we don't release it,
21:26:49 and it can start to bit rot if no one cares about it
21:27:04 https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html
21:27:10 ah. so we can keep merging patches to it but we just never release it
21:27:16 correct
21:27:43 thank you. that's the crash course (of learning it, not that EM is a crash course)
21:27:49 alrighty
21:28:05 anything else for stable branch status before we move to subteams?
21:28:20 #topic Subteam Highlights
21:28:36 we didn't have a cells v2 meeting because dansmith and tssurya weren't around
21:29:04 my cross-cell resize patch is fattening up nicely
21:29:10 ~1450+ LOC so far
21:29:23 it should be ready by thanksgiving
21:29:44 haha, nice
21:29:56 we had some talk in the channel about a way to return per-cell exceptions from a scatter-gather, and sean-k-mooney has proposed an idea on how to do that using a Result class that can hold an exception object
21:30:35 this came out of some of the down cell work tssurya's doing, where it would help if she could differentiate between an InstanceNotFound exception and other exceptions from a cell scatter-gather
21:30:50 so, just FYI
21:31:00 anything else to mention for cells?
21:31:42 moving on to the scheduler subteam, efried
21:31:49 #link NovaScheduler minutes http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-09-24-14.00.html
21:31:57 mriedem gave an update on grenade stuff for placement extraction. It's proceeding apace.
21:32:04 We talked about the
21:32:04 #link consumer generation series starting at https://review.openstack.org/#/c/591597
21:32:14 In particular about the 'delete' patch. It isn't buying us a whole lot, but we agreed to do it anyway, with a note acknowledging that it's not buying us a whole lot. That change has since merged, as well as several subsequent ones in the series.
21:32:25 FYI we haven't yet gotten past the consumer-gen part of the series and into the use-nrp part.
21:32:33 We agreed gibi's any-traits specs could wait, but since they're merged, they'll just be considered low priority and likely to slide to Train.
21:32:45 #link Libvirt vgpu reshape https://review.openstack.org/#/c/599208/
21:32:45 was mentioned. Discussed using it as a vehicle to establish a pattern for how reshaper code should be positioned - like in a separate module where we can track which reshapes correspond to which virt drivers and versions, kind of like how we do db migrations.
21:33:01 We started discussing how to handle belmoreira's min_unit boggle generically in an NRP world. Goes back to the idea of a generic inventory/provider descriptor file with customizations. Led to some good
21:33:01 #link post-meeting talk about how to standardize provider names http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-09-24.log.html#t2018-09-24T15:02:05
21:33:09 END
21:33:21 thanks
21:33:36 no notes from gibi on notifications subteam
21:33:45 there wasn't a meeting
21:34:01 ok, cool
21:34:15 and gmann left a note for the API subteam: "No office hour. Updated the stein-nova-subteam-tracking etherpad for API items." which is the etherpad linked earlier on in Reminders
21:34:28 does anyone have anything else for subteams?
21:34:44 #topic Stuck Reviews
21:34:57 no items in the agenda. anyone in the room have anything for stuck reviews?
21:35:25 ok
21:35:26 #topic Open discussion
21:35:36 one item in the agenda from takashin
21:35:42 #link Remove the unused instance-name https://review.openstack.org/#/c/602520/
21:35:49 It is a fix for the '--instance-name' argument in the 'nova list' command.
21:35:57 On the nova side, the 'instance_name' query parameter is just ignored currently.
21:36:04 Since https://review.openstack.org/#/c/10917/ (merged in August 2012).
21:36:10 So it is useless.
21:36:23 Should we just remove the '--instance-name' argument in the 'nova list' command immediately, or make the argument deprecated and then remove it?
21:36:46 deprecate the option
21:36:59 that's what we've done in the past, regardless of how useful or not the thing is
21:37:20 okay. will make the argument deprecated, then remove it
21:37:25 at the time that we remove it, we would have to do a major version bump, fwiw
21:37:38 yeah, but we probably have other stuff we can drop at the same time
21:37:52 which is pretty much what johnthetubaguy already said in the review, as I read this comment
21:37:56 yeah
21:37:57 https://docs.openstack.org/python-novaclient/latest/reference/deprecation-policy.html
21:38:26 Thank you.
21:38:42 yeah, ok, so deprecate now and then wait a cycle before removing
21:38:51 ^ that doc is old btw
21:39:41 so... no waiting a cycle?
21:40:14 it's been broken for 6 years,
21:40:16 we can wait a cycle
21:40:26 deprecate in S, remove in T
21:40:45 ok. wasn't sure what you were getting at by saying the doc is old
21:41:21 I think it's usual to let the deprecation message be there for a cycle before removing
21:41:35 ok. does anyone have anything else for open discussion before we wrap up?
21:41:54 like we don't do our own releases of novaclient anymore
21:42:11 "2. Once the change is approved, have a member of the nova-release team release a new version of python-novaclient."
21:42:19 oh, I see
21:42:29 anywho
21:42:35 thanks
21:42:41 ok, I guess that's all folks
21:42:49 thanks everyone
21:42:52 #endmeeting
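
[Editor's note: the cells v2 update above mentions sean-k-mooney's idea of a Result class so callers of a cell scatter-gather can tell InstanceNotFound apart from other per-cell failures. Below is a minimal, self-contained sketch of that idea. All names here (Result, scatter_gather, lookup) are hypothetical; nova's real scatter_gather_cells implementation and exception types differ, and this is only an illustration of the pattern discussed, not the proposed patch.]

```python
# Sketch: wrap each per-cell outcome in a Result that holds either the
# return value or the exception the cell raised, so no single failure
# masks the others. Hypothetical names; not nova's actual API.
import concurrent.futures


class InstanceNotFound(Exception):
    """Stand-in for nova.exception.InstanceNotFound."""


class Result:
    """Holds either a per-cell return value or the exception raised."""

    def __init__(self, value=None, error=None):
        self.value = value
        self.error = error

    @property
    def failed(self):
        return self.error is not None


def scatter_gather(cells, fn):
    """Run fn against every cell in parallel; capture exceptions
    per cell instead of letting them propagate."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(fn, cell): cell for cell in cells}
        for future in concurrent.futures.as_completed(futures):
            cell = futures[future]
            try:
                results[cell] = Result(value=future.result())
            except Exception as exc:  # capture, don't propagate
                results[cell] = Result(error=exc)
    return results


def lookup(cell):
    """Toy per-cell lookup used to demonstrate the three outcomes."""
    if cell == 'cell1':
        raise InstanceNotFound()
    if cell == 'cell2':
        raise TimeoutError('cell is down')
    return 'instance-record'


# The caller can now distinguish "instance genuinely not in this cell"
# from "this cell errored out (e.g. a down cell)":
for cell, result in sorted(scatter_gather(['cell0', 'cell1', 'cell2'],
                                          lookup).items()):
    if not result.failed:
        print(cell, '->', result.value)
    elif isinstance(result.error, InstanceNotFound):
        print(cell, '-> not found here')
    else:
        print(cell, '-> cell unreachable:', result.error)
```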
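[Editor's note: the open-discussion outcome above was "deprecate the --instance-name option in S, remove in T, with a major version bump at removal." As a rough illustration of that deprecate-then-remove pattern, here is a standalone argparse sketch. python-novaclient's actual CLI plumbing uses its own argument decorators, so none of this is novaclient code; the class and option names are illustrative only.]

```python
# Sketch: accept a deprecated CLI option for one more cycle while
# warning that it is ignored and scheduled for removal.
import argparse
import warnings


class DeprecatedAction(argparse.Action):
    """Warn when a deprecated option is used, but still accept it."""

    def __call__(self, parser, namespace, values, option_string=None):
        warnings.warn(
            '%s is deprecated and will be removed in a future major '
            'release; the server has ignored it since 2012.'
            % option_string,
            DeprecationWarning)
        setattr(namespace, self.dest, values)


parser = argparse.ArgumentParser(prog='nova')
parser.add_argument('--instance-name', action=DeprecatedAction,
                    help='DEPRECATED: ignored by the server.')

# Using the option still parses, but emits a DeprecationWarning;
# the option itself would be deleted entirely in the next cycle.
args = parser.parse_args(['--instance-name', 'foo'])
print(args.instance_name)
```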