21:00:04 #startmeeting nova
21:00:05 Meeting started Thu May 10 21:00:04 2018 UTC and is due to finish in 60 minutes. The chair is melwitt. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:08 The meeting name has been set to 'nova'
21:00:11 o/
21:00:12 o/
21:00:13 hi everybody
21:00:15 o/
21:00:19 o/
21:00:27 \o
21:00:47 let's get started
21:00:49 ō/
21:00:52 #topic Release News
21:00:58 #link Rocky release schedule: https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule
21:01:13 we've got the summit coming up soon, the week after next
21:01:27 hmm. I totally shouldn't be here. Oh well.
21:01:30 * cdent settles in
21:01:34 r-2 is June 7 which is also the spec freeze
21:01:53 current runway status:
21:01:57 #link Rocky review runways: https://etherpad.openstack.org/p/nova-runways-rocky
21:02:03 #link runway #1: XenAPI: Support a new image handler for non-FS based SRs [END DATE: 2018-05-11] series starting at https://review.openstack.org/#/c/497201
21:02:11 #link runway #2: Add z/VM driver [END DATE: 2018-05-15] spec amendment needed at https://review.openstack.org/562154 and implementation starting at https://review.openstack.org/523387
21:02:17 #link runway #3: Local disk serial numbers [END DATE: 2018-05-16] series starting at https://review.openstack.org/526346
21:03:10 anyone have anything to add for release news or runways?
21:03:48 #topic Bugs (stuck/critical)
21:03:59 no critical bugs
21:04:07 #link 34 new untriaged bugs (up 3 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
21:04:15 #link 9 untagged untriaged bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
21:04:24 damn was down to 28 last night
21:04:42 wow, 6 new overnight then. that's a lot
21:04:52 https://bugs.launchpad.net/nova/+bug/1718439 isn't new but...
21:04:53 Launchpad bug 1718439 in OpenStack Compute (nova) "Apparent lack of locking in conductor logs" [Undecided,New]
21:05:47 oh, okay got added to nova today
21:05:57 added back, i tried to pawn it off on oslo a year ago
21:06:02 anyway
21:06:21 okay
21:06:43 #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
21:07:00 please lend a helping hand with bug triage if you can ^ and thanks to all who have been helping
21:07:28 Gate status:
21:07:32 #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
21:07:45 there have been a lot of job timeouts lately, I've noticed
21:08:26 one cool thing is that e-r (elastic recheck) is working again and commenting on reviews if a known gate bug is hit
21:08:49 3rd party CI:
21:08:55 #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
21:09:28 the zKVM third party job is currently broken because of a consoles related devstack change of mine
21:10:02 ML thread http://lists.openstack.org/pipermail/openstack-dev/2018-May/130331.html and a fix is proposed https://review.openstack.org/567298
21:10:24 does anyone have anything else on bugs, gate status or third party CI?
21:11:04 #topic Reminders
21:11:14 #link Forum session moderators, create your etherpads and link them at https://wiki.openstack.org/wiki/Forum/Vancouver2018
21:11:20 #link ML post http://lists.openstack.org/pipermail/openstack-dev/2018-May/130316.html
21:11:45 friendly reminder to get your etherpads created and link them on the wiki so people can find em
21:11:59 #link Rocky Review Priorities https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
21:12:26 reminder that the subteam and bugs etherpad is there for finding patches
21:12:33 anything else for reminders?
21:12:57 #topic Stable branch status
21:13:04 #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
21:13:15 #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
21:13:20 #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
21:13:35 #link We're going to do some stable releases to get some regression fixes out: https://etherpad.openstack.org/p/nova-stable-branch-status
21:14:15 looks like "libvirt: check image type before removing snapshots in _cleanup_resize" is the last thing we need before we do the releases
21:14:33 anything else on stable branch status?
21:14:56 #topic Subteam Highlights
21:15:16 we skipped the cells v2 meeting again because we didn't need a meeting. anything interesting to mention dansmith?
21:15:23 negative ghostrider
21:15:28 k
21:15:38 edleafe, have some scheduler subteam news?
21:15:42 Gave an update on Nested Resource Providers progress to a Cyborg team rep
21:15:45 Bauzas remembered that he owes me a beer, to be paid in Vancouver
21:15:47 Confirmed the importance of merging the conversion of libvirt's get_inventory() method to update_provider_tree()
21:15:51 #link https://review.openstack.org/#/c/560444
21:15:53 This will be my last update. After running the Scheduler meeting for over two years, I'm stepping down.
21:15:56 EOM
21:16:58 glad to hear the collab with the cyborg team is going along smoothly it sounds like
21:17:22 yeah, I want to keep them in the loop
21:17:34 and sorry to hear that edleafe. thanks for chairing the meeting all these years
21:18:17 gibi has left some notes about the notifications meeting
21:18:26 "We talked about #link https://review.openstack.org/#/c/563269 Add notification support for trusted_certs and agreed that it is going to a good direction"
21:18:45 "We agreed that the notification requirement in #link https://review.openstack.org/#/c/554212 Add PENDING vm state can be fulfilled with existing notifications. But now I see that additional requirements are popping up in the spec so I have to go back and re-review it."
21:19:19 anything else for subteams?
21:19:45 #topic Stuck Reviews
21:19:57 nothing in the agenda. anyone have anything for stuck reviews?
21:20:21 #topic Open discussion
21:20:27 couple of agenda items in here,
21:20:33 first one:
21:20:41 (wznoinsk): I wanted to raise a concern about not running some advanced tempest tests (they're part of 'slow' type hence are skipped): https://github.com/openstack/tempest/search?utf8=%E2%9C%93&q=%22type%3D%27slow%27%22&type=
21:21:05 are you around wznoinsk?
21:21:14 we do run the slow tests in a job in the experimental queue
21:21:19 i always have to look up which one though
21:21:39 so they only run on-demand then? I wonder if that should be a daily periodic
21:21:48 for example, the encrypted volume tests are marked as slow
21:21:52 well,
21:22:01 i think i've also floated the idea of a job that only runs slow tests
21:22:14 since there are relatively few of them
21:22:35 melwitt, yes
21:22:37 they were removed from the main tempest-full job because the time it was taking to run api, scenario and slow tests
21:22:45 to run on every change, the slow tests you mean?
21:22:48 yes
21:22:57 ah, okay. so they used to be in tempest-full
21:23:09 i believe so, until sdague went on a rampage
21:23:13 yeah, I think they should be run once a day at the very least
21:23:22 we could even have a nova-slow job in-tree that just runs compute api and all slow scenario tests
21:23:36 i could put up something to see what that looks like and how long it takes
21:23:39 having a separate slow-only job on every change sounds fine too being that we used to run them anyway as part of tempest-full
21:23:48 if it's only like 45 minutes that's about half of a normal tempest-full run
21:23:50 melwitt, mriedem I don't think some of them are that slow... I can compare times but IIRC there wasn't anything time-exploding
21:23:53 okay, I think that would be a good idea
21:24:24 i think it's this one legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend
21:24:39 melwitt, mriedem basically I found that because we don't run these slow upstream at all (or I couldn't find any check/gate job with them) some of them are basically broken
21:24:41 in nova experimental i mean
21:24:56 yeah, and we could be regressing encrypted volume support w/o knowing it
21:25:03 wznoinsk: gotcha. thanks for bringing it up, I didn't know about this
21:26:05 okay, so let's try out a nova-slow in-tree job and see if it's reasonable to run on all changes to guard against regressions
21:26:30 anything else on that topic?
21:26:33 #action mriedem to add nova-slow job
21:27:01 do I have to type that to make it go on the minutes?
21:27:02 mriedem, there are a few networking related slows too
21:27:15 wznoinsk: sure but nova changes don't care about slow provider network tests
21:28:01 we can argue in the review
21:28:03 mriedem, would I bring up this topic with networking guys too then? or can it be somehow discussed between teams internally?
21:28:20 cinder would also likely want this
21:28:28 or we just do a generic tempest-slow job
21:28:33 should talk to gmann
21:28:41 mriedem++
21:28:55 wznoinsk: how about you talk to gmann :)
21:29:09 mriedem, I don't mind, will do
21:29:45 cool. let's move to the next agenda item:
21:30:00 (takashin): Abort Cold Migration #link https://review.openstack.org/#/c/334732/
21:30:22 About the "Abort Cold Migration" function(spec), I got feedbacks from operators in openstack-operators mailing list.
21:30:36 #link http://lists.openstack.org/pipermail/openstack-operators/2018-May/015209.html
21:30:44 #link http://lists.openstack.org/pipermail/openstack-operators/2018-May/015237.html
21:30:53 I would like to proceed "List/show all server migration types" implementation that the function depends on.
21:31:00 #link https://review.openstack.org/#/c/430608/
21:31:16 takashin: you got replies from some people from your company,
21:31:21 and i asked them some follow up questoins
21:31:23 *questions
21:31:47 Yes.
21:32:27 so i personally would like to at least wait to hear those responses
21:32:58 okay.
21:33:08 IIRC, this has been discussed long ago and the usual use case is, cold migrating an instance with a large disk that will take forever, and wanting the ability to cancel it
21:33:30 sure, on non-shared storage
21:33:45 which was the one other reply in the ops list, they are rolling out a new deployment and before investing in it, they use cheap local storage
21:33:45 or it is stalled out
21:33:48 not ceph
21:34:03 right
21:34:21 i'd like more details in the ops list reply about the stall out case
21:35:03 i dont really want to put a bunch of time into new apis and plumbing for one operator that hits some stall issue without details
21:36:19 fwiw, I've thought the use case sounds reasonable but that the implementation would be really error prone and racy. I don't know if that could be solved with a simple task_state lockout
21:37:04 we also can't reliably test this
21:37:12 without stubs
21:37:24 by test i'm talking tempest
21:37:41 cleanup from stuff like this is a pita
21:37:46 and if we can't test it (agree we can't), then it's going to be ugly
21:37:59 I know it sounds useful in principle,
21:38:13 but I think most people that care about this kind of thing use shared storage,
21:38:38 because allowing your users to move terabytes of data around your network and disks every time they want to resize is kinda ugly
21:38:55 the cancel and cleanup case for ceph, by the way,
21:39:20 is potentially even worse than the "stop a copy and roll back" case of copying the disk manually
21:39:49 so without some really compelling reason I'd love to not introduce this complication
21:39:50 you mean non-shared ceph?
21:40:22 no
21:41:10 okay, I thought in the shared storage case, you don't have to copy anything
21:41:20 right,
21:41:54 you've got a vastly smaller window of time where it makes sense to abort
21:42:38 i would expect anyone using ceph to not need this
21:42:40 so they won't use it
21:42:53 right, but, it'll be there.. a button to push
21:43:18 yeah ... I think that's a good summary of why we haven't gone forward with this in the past. really complex and anyone with large storage isn't going to be doing local storage
21:43:25 (if they want to migrate things)
21:44:31 so (1) wait for replies, (2) talk some more
21:44:40 yeah
21:44:41 (3) we're <2 weeks from the forum with real live operators in the room to harass
21:44:57 there will be a public cloud forum session,
21:44:58 okay
21:45:01 where they talk about gaps and such,
21:45:07 we should get some feedback in that session for sure
21:45:16 i'll find their etherpad and add this
21:45:32 cool, thank you
21:45:39 Thanks.
21:46:16 https://etherpad.openstack.org/p/YVR-publiccloud-wg-brainstorming btw
21:46:40 okay, so we'll await the replies to the ML thread and gain more understanding of what the "stall out" problem is, whether large disk local storage is being used, and why shared storage or boot-from-volume isn't being used
21:47:52 anything else on that topic? anyone have any other topics for open discussion?
21:48:14 END IT
21:48:27 :)
21:48:34 going once
21:48:44 going twice
21:48:56 ...
21:49:13 #endmeeting
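
Context for the 'slow' tagging discussed in the open-discussion item above (the GitHub search for type='slow' and the proposed nova-slow job): the sketch below shows roughly how a tempest scenario test gets marked slow. It is a minimal illustration only, not a verbatim tempest test; the class and test names are made up, while decorators.attr and the ScenarioTest base class are assumed to be the existing tempest.lib / tempest.scenario mechanisms the discussion refers to.

    # Minimal sketch (hypothetical test names): how a test is tagged so that
    # tempest-full can exclude it and a slow-only job can select it.
    from tempest.lib import decorators
    from tempest.scenario import manager


    class EncryptedVolumeScenarioSketch(manager.ScenarioTest):
        """Illustrative scenario test tagged as slow."""

        @decorators.attr(type='slow')
        def test_boot_server_with_encrypted_volume(self):
            # Tests carrying type='slow' are not run by the regular
            # tempest-full job and would only execute in a job that
            # explicitly selects slow-tagged tests, such as the nova-slow
            # job proposed in the meeting.
            pass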