14:00:58 <efried> #startmeeting nova-scheduler
14:00:59 <openstack> Meeting started Mon Nov 26 14:00:58 2018 UTC and is due to finish in 60 minutes.  The chair is efried. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:02 <openstack> The meeting name has been set to 'nova_scheduler'
14:01:03 <cdent> o/
14:01:04 <gibi> o/
14:01:04 <takashin> o/
14:01:05 <mriedem> o/
14:01:05 <edleafe> \o
14:01:31 <alex_xu> o/
14:01:42 <efried> #link agenda https://wiki.openstack.org/wiki/Meetings/NovaScheduler#Agenda_for_next_meeting
14:02:20 <tetsuro> o/
14:02:47 <tssurya> o/
14:02:52 <efried> I was out most of last week, and at summit the week before, and DST failed the week before that, so I apologize if the agenda is a bit light and I don't seem to know what's going on. It's not an illusion.
14:03:17 <efried> #topic last meeting
14:03:17 <efried> #link last minutes: http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-11-05-14.00.html
14:04:07 <efried> This was the one cdent covered; he and jaypipes chatted for a few minutes while takashin looked on.
14:04:18 <cdent> pretty much
14:04:32 <efried> #topic specs and review
14:04:32 <efried> #link latest pupdate: http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000189.html
14:04:51 <efried> I have not read the pupdate yet. cdent, care to summarize and/or highlight the important bits?
14:05:51 <cdent> mostly pointed out the changes related to gpu reshaping and efried's cleanup of the resource tracker to make it less chatty
14:06:13 <efried> ight
14:06:17 <cdent> that there are a lot of specs that need updates from authors
14:06:29 <efried> I know I'm on that list
14:06:44 <cdent> that there is a lot of extraction-related code to review, within placement, nova, devstack
14:06:58 <mriedem> did the devstack change merge yet?
14:07:01 <mriedem> i'm guessing no
14:07:12 <cdent> as usual: reading the pupdate has lots of handy links and can drive your day
14:07:19 <mriedem> nope https://review.openstack.org/#/c/600162/
14:07:38 <mriedem> oh just need frickler back on that
14:07:41 <cdent> mriedem: not yet, and that was going to be one of my "opens" questions: is there anything we're waiting for or can we just go for it
14:07:54 <efried> what's the worst that could happen?
14:08:20 <cdent> we break other services with custom devstack jobs that weren't paying attention? that's unlikely though
14:08:39 <mriedem> i don't really have a good answer, but i don't remember anyone saying "we really shouldn't merge the devstack change until x"
14:08:40 <efried> and revertible
14:08:55 <mriedem> merging it earlier is better for burn in time
14:08:58 <cdent> and that would be a good kind of breakage as it would reveal things we need revealed
14:09:00 <cdent> yes, definitely
14:09:42 <cdent> tetsuro found an issue with data migrations that could impact some situations, but we already have a fix designed, see discussion on
14:09:48 <cdent> #link https://review.openstack.org/#/c/619126/
14:10:14 <cdent> (basically after dan's migration script runs we need to 'stamp' to have the alembic table present)
14:10:36 <cdent> so yeah: let's merge the devstack change asap (the above won't impact that)
14:10:58 <cdent> (well, actually it might, but we can see)
14:11:23 <cdent> (and if it does better to do the fix, asap)
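(A minimal sketch of the "stamp" step cdent describes above, using the stock alembic Python API; the config path and database URL are placeholders for illustration, not what the review actually does.)

    # Sketch only: after the external migration script has copied the placement
    # data, mark the new database as being at the latest alembic revision so the
    # alembic_version table exists and later schema syncs behave correctly.
    from alembic import command
    from alembic.config import Config

    cfg = Config("alembic.ini")  # placeholder path to placement's alembic config
    cfg.set_main_option(
        "sqlalchemy.url",
        "mysql+pymysql://root:secret@127.0.0.1/placement",  # placeholder URL
    )
    command.stamp(cfg, "head")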
14:11:26 <efried> it looks like frickler was +2 one PS ago, so that should be a relatively easy sell?
14:11:40 <cdent> yes
14:11:57 <efried> Cool. Whose action to go camp outside his house?
14:12:17 * cdent has a tent
14:12:18 <mriedem> i did it
14:12:44 * cdent hopes to see the world burn
14:12:51 <efried> cool. Anything else for reviews or specs?
14:13:16 <cdent> so there is a big pile of mostly trivial, or non-functional changes to placement that need generic review
14:13:48 <cdent> cleanups that keep us well positioned for "next"
14:14:05 <cdent> approx 15 or so changes in placement, eager for eyes
14:14:12 <efried> https://review.openstack.org/#/q/project:openstack/placement+status:open
14:14:15 <efried> kind of thing?
14:14:26 <cdent> ya
14:14:36 <efried> or rather
14:14:37 <cdent> of those
14:14:49 <cdent> #link integrated template https://review.openstack.org/#/c/617565/
14:14:51 <efried> #link placement changes needing review, mostly trivial https://review.openstack.org/#/q/project:openstack/placement+status:open
14:14:59 <cdent> is probably most important
14:15:25 <mriedem> ooo yeah https://review.openstack.org/#/c/617565/ is very important if we land that devstack change
14:16:15 <cdent> there are several others in that collection which depends-on the devstack change
14:16:35 <cdent> if they all go in in a swell foop we have some pretty nice testing
14:16:38 <cdent> but also a slower gate :(
14:17:25 <mriedem> i need to dig into this ENABLED_PYTHON3_PACKAGES variable
14:18:06 <cdent> mriedem: there's some back and forth (to myself) about that on a grenade change: https://review.openstack.org/#/c/619552/
14:18:07 <mriedem> efried is rebooting
14:18:17 <cdent> it took me 4ever to find
14:18:23 <cdent> even though I was pretty sure what the problem was
14:19:16 <mriedem> and this is only seen in the grenade py3 job right?
14:19:25 <cdent> yeah
14:19:32 <cdent> it's because swift _doesn't_ py3
14:19:34 <mriedem> and why wasn't it a problem before?
14:19:46 <mriedem> i mean, i had passing runs on the grenade py3 job
14:19:58 <cdent> because we had a depends-on in there that was changing something
14:20:12 <efried> o/
14:20:37 <cdent> there are sadly too many variables involved that control which code you are looking at at any given time, and zuul and devstack/grenade aren't entirely behaving properly
14:20:50 <cdent> in any case: the thing I added to devstack doesn't fix the grenade problem
14:21:00 <cdent> that's just there as a completeness/precaution thing
14:21:11 <cdent> the fix in grenade was to reinstall python-openstackclient
14:21:17 <cdent> so that the right one is on the path
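(A rough sketch of the reinstall cdent describes, assuming pip under python3 is what grenade drives on the target side; the actual invocation in the grenade change may differ.)

    # Sketch only: force-reinstall python-openstackclient under python3 so its
    # console script, rather than a leftover python2 one, is found first on PATH.
    import subprocess

    subprocess.check_call([
        "pip3", "install", "--force-reinstall", "--no-deps",
        "python-openstackclient",
    ])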
14:21:39 <mriedem> yeah i'll dig in after the meeting
14:22:04 <cdent> if we were in the golden age of grenade we'd spend some time making this cleaner, but unclear if we have the cycles :(
14:22:26 <mriedem> we don't
14:22:37 <mriedem> it's dan and i keeping the lights on
14:23:11 <efried> Sorry for the hitch there, folks. Eavesdrop hasn't caught up yet, so apologies if we're past this, but I was saying that https://review.openstack.org/#/c/617565/ is another example of a patch that changes zuul config, but doesn't run the things being configured, right?
14:23:15 <cdent> woot. I have asked (internally) and may be able to get some, but given I'm already something like 150% over time...
14:23:39 <mriedem> efried: it does
14:23:45 <mriedem> it's running tempest py3 and grenade py3
14:24:16 <efried> ahcrap, I pulled up another random patch for comparison, but it just happened to be https://review.openstack.org/#/c/619299/
14:25:19 <cdent> I'm confused what your question actually is/was efried ?
14:26:23 <efried> It's cool. I had noticed a week or two ago that, for some .zuul.yaml changes, the patch in which you make the change doesn't actually run the job you added. It may have been a reno or other doc-ish change in that case.
14:28:02 <efried> So +2 on https://review.openstack.org/#/c/617565/ - but it looks like the deps still have some way to go.
14:28:19 <mriedem> deps are light actually, should have those done soonish
14:28:29 <mriedem> i'll update the zuul template one after the meeting
14:28:32 <mriedem> and i'm looking at grenade now
14:29:33 <efried> Okay, any other reviews or specs to highlight before we move on?
14:29:35 <cdent> It would probably be good/best if we can merge all this stuff this week so that we have a couple weeks before xmas to clear up any chaos, so that we don't leave a mess. I'm not expecting a huge mess or anything, but better safe than sorry.
14:29:49 <cdent> I think we can probably move on, as long as people are going to be good people and read the pupdate
14:29:59 * efried pinky swears
14:30:33 <efried> Anything else related to
14:30:33 <efried> #topic Extraction
14:30:33 <efried> which we seem to have mostly covered already?
14:31:57 <efried> I'll throw these in here
14:31:58 <efried> libvirt/xenapi reshaper series
14:31:58 <efried> #link libvirt reshaper https://review.openstack.org/#/c/599208/
14:31:58 <efried> #link xen reshaper (middle of series) https://review.openstack.org/#/c/521041
14:32:06 <efried> still needing reviews
14:32:19 <efried> (from at least me, for sure)
14:32:19 <mriedem> yeah i rebased the libvirt one before i left for thanksgiving,
14:32:22 <mriedem> but that's as far as i got
14:32:30 <cdent> yeah, we had a ml post earlier today asking for review on the xen ones
14:32:44 <cdent> artom left a comment I absolutely loved on the libvirt one
14:33:54 <mriedem> heh artom is in for a treat in his numa-aware live migration bp then
14:34:09 <mriedem> welcome to nova/virt/hardware.py
14:34:34 <cdent> yeah, treat
14:34:53 <mriedem> comments on the grenade fix btw for the swift/osc thing
14:35:01 <cdent> yeah, just saw that, making a bug now
14:35:50 <efried> Any update on the FFU framework script for reshaper?
14:36:53 <cdent> If we don't have people who are trying to make FFU happen clamoring for that, do "we" need to care? As in: we can't do everything, but we can coordinate other people if they come along.
14:37:38 <efried> is it something where we can afford to wait until we really need it?
14:38:54 <cdent> Which we do you mean? Which deployment tools do FFU?
14:38:57 <efried> yes
14:39:05 <mriedem> tripleo claims support for FFU,
14:39:06 <efried> oh, "which"
14:39:13 <mriedem> and yes i tend to agree that those who care about it need to step up
14:40:30 <efried> okay.
14:41:04 <efried> #action us keep kicking reshaper FFU can down road
14:41:12 <efried> #topic bugs
14:41:12 <efried> #link Placement bugs https://bugs.launchpad.net/nova/+bugs?field.tag=placement
14:41:31 <efried> anyone?
14:42:01 <efried> #topic opens
14:43:28 <efried> Anything else before we close?
14:43:36 <cdent> I got nothing other than to say: I'm glad we're trying to build in some burn in time. In my personal testing things are working well, but every now and again things like what tetsuro finds come up
14:44:19 <efried> tetsuro has a flamethrower
14:44:30 <cdent> he's good like that
14:44:32 <efried> ...or something cleverly implying he's good at burn-in
14:44:41 <cdent> close enough
14:44:48 * tetsuro googles flamethrower
14:45:10 * efried waits for it
14:45:10 <cdent> we're never going to see tetsuro again as he disappears down a google hole
14:45:24 <mriedem> https://www.google.com/search?q=the+thing+flamethrower&safe=off&client=firefox-b-1-ab&source=lnms&tbm=isch&sa=X&ved=0ahUKEwi0vZKOpvLeAhVKQ6wKHZVeCpkQ_AUIDigB&biw=1536&bih=738
14:45:32 <mriedem> now tetsuro needs to watch the thing
14:45:39 <mriedem> it's a staple for winter
14:45:42 <efried> "burn in" would be a much harder colloquialism to explain
14:45:47 <cdent> the perfect holiday film
14:46:34 <efried> okay, thanks all, go do work.
14:46:36 <efried> #endmeeting