21:00:04 <melwitt> #startmeeting nova
21:00:05 <openstack> Meeting started Thu Jun  7 21:00:04 2018 UTC and is due to finish in 60 minutes.  The chair is melwitt. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:09 <melwitt> hello everybody
21:00:09 <openstack> The meeting name has been set to 'nova'
21:00:12 <edmondsw> o/
21:00:15 <mriedem> o/
21:00:22 <takashin> o/
21:00:25 <edleafe> \o
21:00:39 <melwitt> #topic Release News
21:00:46 <melwitt> #link Rocky release schedule: https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule
21:00:49 <efried> ō/
21:01:04 <melwitt> today is the r-2 milestone, so we're proposing a release by EOD
21:01:24 <melwitt> is there a particular patch or patches to wait for, for the release tag?
21:02:12 <melwitt> I had been trying to get this one squared away https://review.openstack.org/540258 but I'm stuck on the functional test: I found an issue in the cells fixture, don't know the root cause yet, and it's going to take more work
21:02:49 <melwitt> so my plan is to just take the current HEAD of the tree at EOD my time and propose the release with that
21:03:12 <melwitt> so if anyone has an important bug patch that should make it in, let me know; otherwise I'll go with HEAD
21:03:13 <mriedem> i've also been meaning to write a functional test related to something in that patch we talked about
21:03:22 <mriedem> but...time
21:03:28 * melwitt nods
21:03:39 <mriedem> "TODO: We need a new bug and test for the multi-cell affinity scenario where two instances are scheduled at the same time in the same affinity group. We need 2 cells with 1 host each, with exactly enough capacity to fit just one instance so that placement will fail the first request and throw it into the other host in the other cell. The late affinity check in the compute won't fail because it can't see the other member in the other cell, so it will think it's fine."
21:03:54 <mriedem> ftr
21:04:29 <melwitt> k. yeah, really similar to what I'm doing, except mine doesn't do the parallel request
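
[A self-contained toy sketch of the scenario in mriedem's TODO above, added for clarity. This is not nova code: the Cell class and the schedule/late_affinity_check functions are invented for illustration, and a real functional test would use nova's fixtures and API helpers instead. It only shows why the compute-side check cannot catch the violation when the second group member lands in another cell.]

# Toy model of the cross-cell affinity gap: two cells, one host each,
# room for exactly one instance per host. Illustration only, not nova code.

class Cell:
    def __init__(self, name):
        self.name = name
        self.instances = []          # what this cell's database can "see"

    def has_capacity(self):
        return len(self.instances) < 1


def schedule(cells, vm):
    """Placement-style capacity filter: the first cell with room wins, so
    the second request for the group spills into the other cell."""
    for cell in cells:
        if cell.has_capacity():
            return cell
    raise RuntimeError('NoValidHost')


def late_affinity_check(cell, group, placed):
    """What the compute host can enforce: it only sees group members
    recorded in its *own* cell database, so members placed in another
    cell are invisible and the check passes."""
    visible = [vm for vm in group if placed.get(vm) is cell]
    return all(placed[vm] is cell for vm in visible)


def global_affinity_check(cell, group, placed):
    """The check that would catch the violation if it could see all cells."""
    return all(placed[vm] is cell for vm in group if vm in placed)


cells = [Cell('cell1'), Cell('cell2')]
placed = {}
group = ['vm-a', 'vm-b']             # same affinity group, booted "in parallel"

for vm in group:
    cell = schedule(cells, vm)
    assert late_affinity_check(cell, group, placed)   # passes in both cells
    print(vm, '->', cell.name,
          '| a cross-cell check would pass:',
          global_affinity_check(cell, group, placed))
    cell.instances.append(vm)
    placed[vm] = cell

# Result: vm-a lands in cell1, vm-b in cell2; the affinity policy is silently
# violated, which is what the proposed functional test needs to capture.
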
21:04:50 <melwitt> #link Rocky review runways: https://etherpad.openstack.org/p/nova-runways-rocky
21:05:01 <melwitt> #link runway #1: Certificate Validation - https://blueprints.launchpad.net/nova/+spec/nova-validate-certificates (bpoulos) [END DATE: 2018-06-15] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nova-validate-certificates
21:05:08 <melwitt> #link runway #2: Neutron new port binding API for live migration: https://blueprints.launchpad.net/nova/+spec/neutron-new-port-binding-api (mriedem) [END DATE: 2018-06-20] Starts here: https://review.openstack.org/#/c/558001/
21:05:14 <melwitt> #link runway #3: XenAPI: improve the image handler configuration: https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement (naichuans) [END DATE: 2018-06-20] starts here: https://review.openstack.org/#/c/486475/
21:05:28 <melwitt> please lend your eyeballs to runways blueprint patch reviews
21:06:36 <melwitt> thanks to all who have been helping out there
21:06:52 <melwitt> anyone have anything else for release news or runways?
21:07:01 <mriedem> people need to put stuff in the runways queue
21:07:04 <mriedem> don't wait for subteams
21:07:21 <melwitt> yeah, that's a good reminder
21:07:51 <melwitt> folks shouldn't feel pressured to do a lot of pre-review. if the implementation is done and no longer in a WIP state, it's a good idea to add it to the runways queue
21:09:07 <melwitt> #topic Bugs (stuck/critical)
21:09:20 <melwitt> no critical bugs in the link
21:09:26 <melwitt> #link 44 new untriaged bugs (up 2 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
21:09:32 <melwitt> #link 13 untagged untriaged bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
21:09:39 <melwitt> #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
21:10:01 <melwitt> I know things have been really busy lately, but hopefully soon we can get some more triage done. the how-to guide is linked above ^
21:10:10 <melwitt> Gate status
21:10:19 <melwitt> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
21:10:36 <melwitt> gate has seemed okay
21:10:42 <melwitt> #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
21:10:45 <mriedem> http://status.openstack.org/elastic-recheck/index.html#1775491 was big and new
21:10:47 <mriedem> but there is a fix in the gate
21:10:55 <mriedem> https://review.openstack.org/#/c/573107
21:11:12 <melwitt> a-ha, cool. I saw that one a few times but didn't realize it was that big
21:11:44 <melwitt> anyone have anything else for bugs, gate status or third party CI?
21:12:17 <melwitt> #topic Reminders
21:12:24 <melwitt> #link Rocky Subteam Patches n Bugs https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
21:12:31 <melwitt> #info Spec Freeze Day today Thursday June 7
21:12:57 <melwitt> that said, I think we're looking at a couple of exceptions for major issues whose specs are still under review,
21:13:14 <melwitt> one is the placement resource providers => nested resource providers migration
21:13:25 <mriedem> #link https://review.openstack.org/#/c/572583/
21:13:26 <melwitt> that will affect anyone upgrading to rocky
21:13:55 <melwitt> the other is the handling of a down cell, related to resiliency in a multiple cells deployment
21:14:00 <mriedem> #link https://review.openstack.org/#/c/557369/
21:14:15 <mriedem> i've asked for user and ops feedback on ^
21:14:22 <mriedem> so far it's just me and gibi on the spec review
21:14:46 <mriedem> but it's pretty huge in what's being proposed
21:15:05 <melwitt> our friends at CERN have been working with us on this one. they're running queens with multiple cells and have run into resiliency issues with down or poorly performing cells/databases
21:15:33 <melwitt> so we really need to do something to deal with those issues
21:15:47 <melwitt> yes, there's been a post to the ML by mriedem on that asking for input
21:16:28 <melwitt> #link http://lists.openstack.org/pipermail/openstack-dev/2018-June/131280.html
21:16:50 <melwitt> okay, I think that's all I have. anyone else have anything for reminders?
21:17:42 <melwitt> #topic Stable branch status
21:17:49 <melwitt> #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
21:17:54 <melwitt> #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
21:17:58 <melwitt> #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
21:18:34 <melwitt> we released queens and pike pretty recently and I can't remember if I linked those in a previous meeting
21:19:10 <melwitt> #link queens 17.0.5 released on 2018-06-04 https://review.openstack.org/571494
21:19:34 <melwitt> #link pike 16.1.4 released on 2018-06-04 https://review.openstack.org/571521
21:19:55 <melwitt> #link ocata 15.1.3 soon to be released https://review.openstack.org/571522
21:20:13 <melwitt> does anyone have anything else for stable branch status?
21:20:55 <melwitt> #topic Subteam Highlights
21:21:04 <melwitt> we had a cells v2 meeting this week
21:21:14 <melwitt> main topics were the handling of a down cell spec mentioned earlier
21:21:38 <melwitt> and the go-ahead on nova-network removal: we'll remove the REST API bits now but keep the core functionality intact until Stein
21:22:20 <melwitt> CERN is in the middle of a nova-network => neutron migration and keeping nova-network functioning underneath is a really helpful safety net for them. so we're deferring removal of the core functionality until Stein
21:22:51 <melwitt> but we are in the clear to remove the REST API bits and one change has merged for that and others are proposed
21:23:18 <melwitt> scheduler subteam, jaypipes or efried?
21:23:31 <efried> cdent chaired
21:23:39 <efried> rats, lemme look up the logs quick...
21:23:57 <melwitt> I was there too but forgot everything
21:25:15 <efried> right, so the summary is that nrp-in-alloc-cands is the priority (we've merged the bottom four or five patches since then; progress is being made), but the upgrade business is a close second.
21:25:46 <melwitt> yeah, upgrade/migration issue is very high priority
21:25:57 <efried> ...and it blocks blueprints that change their tree structures by *moving* existing resource classes, but *not* ones that just *add* new inventories to child providers.
21:25:57 <melwitt> okay, cool
21:26:16 <efried> upgrade depends on nrp-in-alloc-cands and consumer generations.
21:26:20 <efried> upgrade spec was linked earlier.
21:26:39 <melwitt> k thanks
21:26:50 <melwitt> gibi left some notes for notifications,
21:26:59 <melwitt> "We had a meeting with Matt and talked about the possible need of a major bump of the ServerGroupPayload due to the renaming and retyping of the policies field of the InstanceGroup ovo."
21:27:05 <melwitt> "We agreed to aim for keeping both the deprecated policies and adding the new policy field with a minor version bump if possible."
21:27:10 <melwitt> #link https://review.openstack.org/#/c/563401/3/doc/notification_samples/common_payloads/ServerGroupPayload.json@10
21:27:33 <melwitt> anything else for subteams?
21:27:45 <mriedem> that reminds me,
21:27:52 <mriedem> i need to talk to dansmith about yikun's changes there at some point
21:27:54 <mriedem> but low priority atm
21:28:06 <mriedem> the InstanceGroup.policies field is being renamed
21:28:10 <mriedem> which is weird with objects
21:28:44 <melwitt> didn't know about that. curious why the rename is needed, but I'll go look it up later
21:28:45 <mriedem> https://review.openstack.org/#/c/563375/11/nova/objects/instance_group.py@165
21:28:50 <mriedem> it's a different format
21:28:55 <mriedem> details are in the spec
21:29:02 <melwitt> k, will check that out
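
[For context, a rough illustration of the format change being discussed, based on the spec direction: the list-valued policies field is replaced by a single policy string plus a rules mapping, with the deprecated field kept alongside the new one so the notification payload only needs a minor version bump. The values and the rules key below are assumptions for illustration, not the actual ServerGroupPayload sample linked above.]

# Illustrative only: rough shape of a server group before and after the
# rename, not the reviewed notification sample.

legacy_group = {
    'name': 'my-group',
    'policies': ['anti-affinity'],         # old: a list of policy names
}

transitional_group = {
    'name': 'my-group',
    'policies': ['anti-affinity'],         # deprecated, kept for compatibility
    'policy': 'anti-affinity',             # new: a single policy string
    'rules': {'max_server_per_host': 2},   # new: extra constraints (assumed key)
}
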
21:29:19 <melwitt> #topic Stuck Reviews
21:29:34 <melwitt> nothing in the agenda. anyone in the room have any stuck reviews they need to bring up?
21:30:07 <melwitt> #topic Open discussion
21:30:22 <mriedem> i've got 2 things
21:30:31 <melwitt> k
21:30:44 <mriedem> from the tc meeting today
21:30:51 <mriedem> 1. mnaser is our guidance counselor now https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Project_Teams
21:31:03 <mriedem> so if you are mad at your parents or gf/bf, you can talk to him
21:31:04 <melwitt> \o/
21:31:10 <melwitt> awe-some
21:31:17 <mnaser> hi
21:31:35 <efried> mnaser: have I told you lately how thick and luscious your beard is?
21:31:38 <mriedem> 2. as part of the goals selection stuff and the "finally do stuff in OSC" goal, i've started an etherpad to list the gaps for compute API microversions in OSC https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc
21:31:50 <mriedem> i haven't gotten far,
21:31:58 <mriedem> and i'll post a more formal call for help to the ML later
21:32:00 <mriedem> but fyi
21:32:01 <melwitt> #link https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc
21:32:13 <melwitt> good to know
21:32:14 <mnaser> efried: hah :p thanks, I guess?
21:32:21 * mnaser lets meeting go back to its order
21:32:29 <mriedem> efried: have you seen gmann?
21:32:32 <mriedem> no contest
21:32:35 <efried> true story
21:32:37 <melwitt> lol
21:32:42 <efried> But gmann is not our guidance counselor.
21:32:47 <mriedem> true
21:32:58 <mriedem> that's it
21:33:07 <mnaser> 🙄
21:33:17 <melwitt> since I'm thinking about it: there's a post to the ML about increasing the number of volumes that can be attached to a single instance beyond 26, and some possible approaches. lend your thoughts on the thread if you're interested
21:33:25 <melwitt> #link http://lists.openstack.org/pipermail/openstack-dev/2018-June/131289.html
21:33:47 <melwitt> that's all I have. anyone have anything else for open discussion?
21:33:53 <mnaser> But anyways, all jokes aside, we're trying to be more proactive as the TC about checking in on project health, so feel free to reach out
21:34:16 <melwitt> super
21:34:51 <melwitt> okay, if no one has anything else, we can call this a wrap
21:34:59 <melwitt> thanks everyone
21:35:02 <melwitt> #endmeeting