14:00:14 <melwitt> #startmeeting nova
14:00:16 <openstack> Meeting started Thu Jul 12 14:00:14 2018 UTC and is due to finish in 60 minutes.  The chair is melwitt. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:20 <openstack> The meeting name has been set to 'nova'
14:00:22 <mriedem> o/
14:00:23 <gibi> o/
14:00:25 <melwitt> hello everyone
14:00:27 <dansmith> o/
14:00:29 <efried> ō/
14:00:34 <takashin> o/
14:00:46 <edleafe> \o
14:00:59 <melwitt> #topic Release News
14:01:06 <melwitt> #link Rocky release schedule: https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule
14:01:24 <melwitt> we're just two weeks out from feature freeze r-3 July 26
14:01:48 <melwitt> we have a blueprint status tracking etherpad to help with organizing review for completing blueprints
14:01:50 <melwitt> #link https://etherpad.openstack.org/p/nova-rocky-blueprint-status
14:02:30 * alex_xu waves late
14:02:31 <melwitt> the placement reshaper and cells v2 handling of a down cell are high priority things I expect to go past feature freeze
14:03:45 <melwitt> if there are any other FFE candidates, I'd like to collect those somewhere, maybe by adding comments to the blueprint status etherpad. so please add comments there if there are things that need an FFE
14:04:24 <melwitt> #link Rocky review runways: https://etherpad.openstack.org/p/nova-runways-rocky
14:04:36 <tssurya> o/
14:04:41 <melwitt> runways got populated with new blueprints on tuesday
14:04:48 <melwitt> #link runway #1: Allow abort live migrations in queued status https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status (Kevin Zheng) [END DATE: 2018-07-25] starts here https://review.openstack.org/563505
14:04:53 <melwitt> #link runway #2: Add z/VM driver https://blueprints.launchpad.net/nova/+spec/add-zvm-driver-rocky (jichen) [END DATE: 2018-07-25] starts here https://review.openstack.org/523387
14:05:00 <melwitt> #link runway #3: Support traits in Glance https://blueprints.launchpad.net/nova/+spec/glance-image-traits (arvindn05) [END DATE: 2018-07-25] last patch https://review.openstack.org/569498
14:05:34 <melwitt> does anyone have any questions or comments to add about release news or runways?
14:06:25 <melwitt> #topic Bugs (stuck/critical)
14:06:33 <melwitt> no critical bugs in the link
14:06:40 <melwitt> #link 50 new untriaged bugs (down 1 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
14:06:46 <melwitt> #link 6 untagged untriaged bugs (down 1 since the last meeting): https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:06:53 <melwitt> #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
14:06:58 <melwitt> #help need help with bug triage
14:07:28 <melwitt> we're keeping the bug count at bay the past week, which is good, but we do need to do more triage
14:07:42 <melwitt> and try to whittle that down
14:07:48 <melwitt> Gate status:
14:07:53 <melwitt> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:07:57 <melwitt> gate seems to have been ok
14:08:03 <melwitt> 3rd party CI:
14:08:08 <melwitt> #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
14:08:36 <melwitt> anyone have anything else for bugs or gate status or third party CI?
14:09:25 <melwitt> #topic Reminders
14:09:32 <melwitt> #link Rocky Subteam Patches n Bugs: https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
14:09:40 <melwitt> #link Stein PTG planning has commenced: https://etherpad.openstack.org/p/nova-ptg-stein
14:09:55 <melwitt> we have an etherpad for collecting topic ideas for the denver PTG ^
14:10:19 <melwitt> so please feel free to start using it
14:10:44 <melwitt> that's all I have for reminders. anyone have anything to add for reminders?
14:11:18 <melwitt> #topic Stable branch status
14:11:24 <melwitt> #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
14:12:00 <melwitt> we could use some stable reviews on queens, there are 8 proposed
14:12:07 <melwitt> #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:12:25 <melwitt> similar on pike, several already have one +2 there
14:12:34 <melwitt> #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
14:13:06 <melwitt> and finally, there are a couple of reviews to do on ocata, patches that are not -W
14:13:28 <melwitt> does anyone have anything else for stable branch status?
14:13:47 <melwitt> #topic Subteam Highlights
14:14:14 <melwitt> cells v2, we skipped having a meeting this week. and work on handling a down cell is underway. anything else to highlight dansmith?
14:14:20 <dansmith> nay
14:14:28 <melwitt> cool
14:14:40 <melwitt> scheduler/placement, efried?
14:14:43 <efried> #link NovaScheduler meeting log http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-07-09-14.01.log.html
14:14:43 <efried> => Discussed the relative merits of having Project Management for the reshaper effort, since there are so many moving parts & different people on the hook for different parts.  I don't think we landed on anything solid there.
14:14:43 <efried> => We did discuss the scheduling/pacing of the work and determined that
14:14:43 <efried> - sql-fu (jaypipes) needs to be done by Wednesday (yesterday)
14:14:43 <efried> - POST /reshaper API (cdent) by end of this week (not sure this was well planned, since cdent is out today & tomorrow)
14:14:44 <efried> - client-side bits (resource tracker (efried) and management CLI (dansmith)) by FF (7/26)
14:14:44 <efried> => We discussed the various consumer gen bugs & blockers. Most of that work has since merged.
14:14:45 <efried> => We had a question from Jack1 about placement affordance for PCI_DEVICE (there isn't any) which we continued discussing in #openstack-placement after the meeting.
14:14:45 <efried> => cdent asked to stop being on the hook to run the scheduler meetings; efried agreed to take over.
14:14:46 <efried> <EOM>
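For context on the POST /reshaper work mentioned in the summary above: below is a rough Python sketch of the kind of request body the endpoint was being designed to accept, moving inventory from a root provider to a child and re-pointing the matching allocations in one atomic call. The provider/consumer UUIDs, resource classes, values, and exact field names are illustrative assumptions; the API was still under review at the time of this meeting.

```python
# Illustrative sketch only: the reshaper API was still in flight at the time
# of this meeting, so the field names follow the in-progress design and may
# differ from what eventually merged.
reshape_request = {
    # New view of each affected provider's inventory, keyed by provider UUID.
    "inventories": {
        "<compute-node-rp-uuid>": {              # hypothetical UUIDs
            "resource_provider_generation": 5,
            "inventories": {},                   # e.g. VGPU moves off the root provider
        },
        "<vgpu-child-rp-uuid>": {
            "resource_provider_generation": 1,
            "inventories": {"VGPU": {"total": 4}},
        },
    },
    # Replacement allocations for every consumer touched by the move.
    "allocations": {
        "<instance-uuid>": {
            "allocations": {
                "<vgpu-child-rp-uuid>": {"resources": {"VGPU": 1}},
            },
            "project_id": "<project>",
            "user_id": "<user>",
            "consumer_generation": 2,
        },
    },
}
# The whole structure would be POSTed to /reshaper in a single request, so
# inventories and allocations change together or not at all.
```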
14:15:33 <melwitt> nice summary, thanks
14:16:14 <efried> I think the main takeaway there is that we're getting dangerously close to being behind on reshaper stuff...
14:16:46 <melwitt> AFAIK the sql-fu hasn't been proposed yet but we're on the lookout for it
14:17:42 <melwitt> we expect this work to continue past FF but of course it's best to get it done sooner than later
14:17:55 <mriedem> we only have 2 weeks between FF and rc1 don't we?
14:18:21 <mriedem> yeah 7/26 to 8/9
14:18:39 <mriedem> so i guess we should re-assess as we get toward FF, but there really isn't any window post-FF
14:18:56 <mriedem> if it's like 1 change, then maybe that's ok
14:19:15 <melwitt> hm, okay
14:19:19 <mriedem> and dansmith is out next week so no reviews from him
14:19:23 <dansmith> yup
14:19:25 <dansmith> and agree,
14:19:34 <dansmith> it's pretty scary stuff to be doing far into FF
14:20:06 <mriedem> jaypipes: are you around this week to work on the sql part of reshaper?
14:20:51 <mriedem> i guess we should move on, but that worries me
14:21:09 <melwitt> aye
14:21:38 <melwitt> okay, so it sounds like it would be a bad idea to continue work on reshaper past FF but we'll re-assess where we are closer to FF
14:22:03 <melwitt> moving on, notification subteam highlights, gibi?
14:22:25 <gibi> melwitt: sure
14:22:43 <gibi> there was no meeting but there is a status mail #link http://lists.openstack.org/pipermail/openstack-dev/2018-July/132068.html
14:23:08 <gibi> the last notification-only bp is closed (bp add-action-initiator-to-instance-action-notifications)
14:23:29 <gibi> that is all
14:23:44 <melwitt> coolness, thanks gibi
14:24:02 <melwitt> we have a new addition to the subteam agenda, API subteam from gmann
14:24:44 <melwitt> he said he might not be able to attend this week but will in the future; for this week we have a link to a highlights mail sent to the ML
14:24:50 <melwitt> #link API highlights post from the ML: http://lists.openstack.org/pipermail/openstack-dev/2018-July/132148.html
14:25:09 <melwitt> that's all I have for subteams. anyone have anything else for subteams?
14:25:38 <melwitt> #topic Stuck Reviews
14:25:54 <melwitt> no items in the agenda, does anyone in the room have anything for stuck reviews?
14:26:14 <melwitt> #topic Open discussion
14:26:26 <melwitt> I have one item that's a follow up from last meeting
14:26:42 <melwitt> about the specless blueprint, proof-of-concept code has been uploaded (jmlowe) https://blueprints.launchpad.net/nova/+spec/rbd-erasure-coding
14:26:47 <melwitt> #link proof-of-concept code https://review.openstack.org/581055
14:27:43 <melwitt> last week we agreed that approval of the blueprint would be contingent on 1) proof-of-concept code being uploaded 2) two core reviewers committing to reviewing it
14:28:05 <dansmith> just curious, but EC implies some amount of overhead for the parity data.. does ceph hide that in what it tells us is free space if enabled?
14:28:40 <melwitt> from what I understand about what jmlowe said, ceph will not hide that in what it reports for usage
14:28:51 <dansmith> usage or free space?
14:29:24 <dansmith> if usage then our allocations don't really line up with the inventory and we need to have some reserved amount to cover the overhead I guess?
14:30:07 <melwitt> oh, yeah that's a good point to bring up
14:30:26 <melwitt> he had said usage but I don't know if that applies to free space too
14:30:27 <jaypipes> mriedem: sorry, was out. yes, I am currently underway on the reshaper sql series.
14:30:54 <dansmith> melwitt: those kind of questions are what I'd ask in a spec, FWIW
14:31:09 <mriedem> i'm personally not interested in being distracted by the EC stuff at this point in the cycle
14:32:19 <melwitt> if it's more involved than the POC that's been posted, then I agree it's too late in the cycle to mess with it
14:32:47 <mriedem> the libvirt driver can report disk_gb overhead to the RT, but that's only used for claims which we don't do if you've disabled the DiskFilter
14:33:16 <mriedem> our workaround for overhead from the driver is increasing reserved on the provider inventory
14:33:26 <mriedem> which we don't do automatically
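To make the "increase reserved on the provider inventory" workaround mentioned above concrete, here is a minimal sketch of doing it by hand against the placement API, assuming the provider UUID and an auth token are already known. The endpoint URL, token, and the 20% reserve fraction are placeholders, not recommendations from the meeting.

```python
import requests

PLACEMENT = "http://placement.example.com/placement"  # placeholder endpoint
HEADERS = {"X-Auth-Token": "<token>"}                  # placeholder token
RP_UUID = "<resource-provider-uuid>"                   # provider backing the ceph pool

# Fetch the current DISK_GB inventory; the response includes the provider
# generation, which must be sent back unchanged on update.
url = f"{PLACEMENT}/resource_providers/{RP_UUID}/inventories/DISK_GB"
inv = requests.get(url, headers=HEADERS).json()

# Bump 'reserved' to cover the erasure-coding parity overhead. The fraction
# here is purely illustrative; the real number depends on the EC profile.
inv["reserved"] = int(inv["total"] * 0.2)

resp = requests.put(url, headers=HEADERS, json=inv)
resp.raise_for_status()
```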
14:33:32 <dansmith> yeah,
14:33:34 <melwitt> okay, that's helpful info
14:33:40 <dansmith> but what is not straightforward,
14:33:46 <dansmith> is if one compute has this enabled but another doesn't,
14:34:07 <dansmith> does that mean the allocations for just that compute node will be 20% off?
14:34:08 <dansmith> also, the scheduler makes those and doesn't know how the compute is configured
14:34:27 <dansmith> I know 20% is likely high, just using it as an example
14:34:32 <efried> dansmith: Are we talking about a shared disk here?
14:34:47 <dansmith> efried: it's ceph, so shared pool
14:34:55 <efried> And who's managing the rp for that?
14:34:58 <dansmith> which we don't do correctly right now anyway, granted, but..
14:35:38 <efried> If the op does it, like for nfs shared storage, then the driver will never touch the reserved value, because the driver doesn't report it at all.
14:35:49 <efried> So the op gets to set the reserved value (and allocation ratio and total, and....)
14:36:05 <dansmith> melwitt: anyway, I think in five minutes we've justified some spec-worthy Q&A
14:36:42 <melwitt> yep, that's fair. I'll chat with jmlowe and let him know this should indeed be a spec because of the usage/allocations piece
14:36:57 <melwitt> thanks everyone for the discussion
14:37:06 <melwitt> does anyone else have anything for open discussion before we wrap up?
14:37:51 <melwitt> alrighty, thanks everyone
14:37:54 <melwitt> #endmeeting