14:00:58 #startmeeting nova-scheduler
14:00:59 Meeting started Mon Nov 26 14:00:58 2018 UTC and is due to finish in 60 minutes. The chair is efried. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:02 The meeting name has been set to 'nova_scheduler'
14:01:03 o/
14:01:04 o/
14:01:04 o/
14:01:05 o/
14:01:05 \o
14:01:31 o/
14:01:42 #link agenda https://wiki.openstack.org/wiki/Meetings/NovaScheduler#Agenda_for_next_meeting
14:02:20 o/
14:02:47 o/
14:02:52 I was out most of last week, and at summit the week before, and DST failed the week before that, so I apologize if the agenda is a bit light and I don't seem to know what's going on. It's not an illusion.
14:03:17 #topic last meeting
14:03:17 #link last minutes: http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-11-05-14.00.html
14:04:07 This was where cdent covered, and he and jaypipes chatted for a few minutes while takashin looked on.
14:04:18 pretty much
14:04:32 #topic specs and review
14:04:32 #link latest pupdate: http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000189.html
14:04:51 I have not read the pupdate yet. cdent, care to summarize and/or highlight the important bits?
14:05:51 mostly pointed out the changes related to gpu reshaping and efried's cleaning up the resource tracker to be less chatty
14:06:13 ight
14:06:17 that there are a lot of specs that need updates from authors
14:06:29 I know I'm on that list
14:06:44 that there is a lot of extraction-related code to review, within placement, nova, devstack
14:06:58 did the devstack change merge yet?
14:07:01 i'm guessing no
14:07:12 as usual: reading the pupdate has lots of handy links and can drive your day
14:07:19 nope https://review.openstack.org/#/c/600162/
14:07:38 oh just need frickler back on that
14:07:41 mriedem: not yet, and that was going to be one of my "opens" questions: is there anything we're waiting for or can we just go for it
14:07:54 what's the worst that could happen?
14:08:20 we break other services with custom devstack jobs that weren't paying attention? that's unlikely though
14:08:39 i don't really have a good answer, but i don't remember anyone saying "we really shouldn't merge the devstack change until x"
14:08:40 and revertible
14:08:55 merging it earlier is better for burn-in time
14:08:58 and that would be a good kind of breakage as it would reveal things we need revealed
14:09:00 yes, definitely
14:09:42 tetsuro found an issue with data migrations that could impact some situations, but we have a fix designed already, see discussion on
14:09:48 #link https://review.openstack.org/#/c/619126/
14:10:14 (basically after dan's migration script runs we need to 'stamp' to have the alembic table present)
14:10:36 so yeah: let's merge the devstack change asap (the above won't impact that)
14:10:58 (well, actually it might, but we can see)
14:11:23 (and if it does, better to do the fix asap)
14:11:26 it looks like frickler was +2 one PS ago, so that should be a relatively easy sell?
14:11:40 yes
14:11:57 Cool. Whose action to go camp outside his house?
14:12:17 * cdent has a tent
14:12:18 i did it
14:12:44 * cdent hopes to see the world burn
14:12:51 cool. Anything else for reviews or specs?
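[Editor's aside on the 'stamp' mentioned at 14:10:14: below is a minimal, illustrative sketch of what stamping a database with alembic looks like from Python. It is not the actual fix under review in https://review.openstack.org/#/c/619126/; the config file path and database URL are assumptions for the example.]

```python
# Minimal sketch, not the placement fix itself: after an external migration
# script has already created and populated the schema, alembic has no
# alembic_version table, so a later "upgrade" would try to re-run every
# migration from scratch. Stamping records the head revision (and creates the
# version table) without running any migrations.
from alembic import command
from alembic.config import Config

# Assumed paths/URL for illustration only; adjust to the real deployment.
cfg = Config("alembic.ini")
cfg.set_main_option(
    "sqlalchemy.url", "mysql+pymysql://user:pass@localhost/placement")

# Mark the database as already being at the latest revision.
command.stamp(cfg, "head")
```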
14:13:16 so there is a big pile of mostly trivial or non-functional changes to placement that need generic review
14:13:48 cleanups that keep us well positioned for "next"
14:14:05 approx 15 or so changes in placement, eager for eyes
14:14:12 https://review.openstack.org/#/q/project:openstack/placement+status:open
14:14:15 kind of thing?
14:14:26 ya
14:14:36 or rather
14:14:37 of those
14:14:49 #link integrated template https://review.openstack.org/#/c/617565/
14:14:51 #link placement changes needing review, mostly trivial https://review.openstack.org/#/q/project:openstack/placement+status:open
14:14:59 is probably most important
14:15:25 ooo yeah https://review.openstack.org/#/c/617565/ is very important if we land that devstack change
14:16:15 there are several others in that collection which depends-on the devstack change
14:16:35 if they all go in in a swell foop we have some pretty nice testing
14:16:38 but also a slower gate :(
14:17:25 i need to dig into this ENABLED_PYTHON3_PACKAGES variable
14:18:06 mriedem: there's some back and forth (to myself) about that on a grenade change: https://review.openstack.org/#/c/619552/
14:18:07 efried is rebooting
14:18:17 it took me 4ever to find
14:18:23 even though I was pretty sure what the problem was
14:19:16 and this is only seen in the grenade py3 job right?
14:19:25 yeah
14:19:32 it's because swift _doesn't_ py3
14:19:34 and why wasn't it a problem before?
14:19:46 i mean, i had passing runs on the grenade py3 job
14:19:58 because we had a depends-on in there that was changing something
14:20:12 o/
14:20:37 there are sadly too many variables involved that control which code you are looking at at any given time, and zuul and devstack/grenade aren't entirely behaving properly
14:20:50 in any case: the thing I added to devstack doesn't fix the grenade problem
14:21:00 that's just there as a completeness/precaution thing
14:21:11 the fix in grenade was to reinstall python-openstackclient
14:21:17 so that the right one is on the path
14:21:39 yeah i'll dig in after the meeting
14:22:04 if we were in the golden age of grenade we'd spend some time making this cleaner, but unclear if we have the cycles :(
14:22:26 we don't
14:22:37 it's dan and i keeping the lights on
14:23:11 Sorry for the hitch there, folks. Eavesdrop hasn't caught up yet, so apologies if we're past this, but I was saying that https://review.openstack.org/#/c/617565/ is another example of a patch that changes zuul config, but doesn't run the things being configured, right?
14:23:15 woot. I have asked (internally) and may be able to get some but given I'm already something like 150% over time
14:23:39 efried: it does
14:23:45 it's running tempest py3 and grenade py3
14:24:16 ah crap, I pulled up another random patch for comparison, but it just happened to be https://review.openstack.org/#/c/619299/
14:25:19 I'm confused what your question actually is/was, efried?
14:26:23 It's cool. I had noticed a week or two ago that, for some .zuul.yaml changes, the patch in which you make the change doesn't actually run the job you added. It may have been a reno or other doc-ish change in that case.
14:28:02 So +2 on https://review.openstack.org/#/c/617565/ - but it looks like the deps still have some way to go.
14:28:19 deps are light actually, should have those done soonish
14:28:29 i'll update the zuul template one after the meeting
14:28:32 and i'm looking at grenade now
14:29:33 Okay, any other reviews or specs to highlight before we move on?
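[Editor's aside on the "right one on the path" problem discussed at 14:21:11-14:21:17: this is a rough sketch of the symptom and the general shape of the workaround, not the actual grenade patch at https://review.openstack.org/#/c/619552/. The shebang check and the pip3 invocation are illustrative assumptions.]

```python
# Rough sketch only: after a service is switched to python3, the "openstack"
# console script found first on $PATH may still be the stale python2 one.
# Reinstalling python-openstackclient under python3 puts a py3 entry point
# back in front, which is the gist of the grenade fix described above.
import shutil
import subprocess

osc = shutil.which("openstack")
if osc is None:
    raise RuntimeError("python-openstackclient is not installed")

with open(osc) as script:
    shebang = script.readline().strip()

if "python3" not in shebang:
    # Force a reinstall with the python3 pip so its console script wins.
    subprocess.check_call(
        ["pip3", "install", "--force-reinstall", "python-openstackclient"])
```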
14:29:35 It would probably be good/best if we can merge all this stuff this week so that we have a couple weeks before xmas to clear up any chaos, so that we don't leave a mess. I'm not expecting a huge mess or anything, but better safe than sorry.
14:29:49 I think we can probably move on, as long as people are going to be good people and read the pupdate
14:29:59 * efried pinky swears
14:30:33 Anything else related to
14:30:33 #topic Extraction
14:30:33 which we seem to have mostly covered already?
14:31:57 I'll throw these in here
14:31:58 libvirt/xenapi reshaper series
14:31:58 #link libvirt reshaper https://review.openstack.org/#/c/599208/
14:31:58 #link xen reshaper (middle of series) https://review.openstack.org/#/c/521041
14:32:06 still needing reviews
14:32:19 (from at least me, for sure)
14:32:19 yeah i rebased the libvirt one before i left for thanksgiving,
14:32:22 but that's as far as i got
14:32:30 yeah, we had a ml post earlier today asking for review on the xen ones
14:32:44 artom left a comment I absolutely loved on the libvirt one
14:33:54 heh artom is in for a treat in his numa-aware live migration bp then
14:34:09 welcome to nova/virt/hardware.py
14:34:34 yeah, treat
14:34:53 comments on the grenade fix btw for the swift/osc thing
14:35:01 yeah, just saw that, making a bug now
14:35:50 Any update on the FFU framework script for reshaper?
14:36:53 If we don't have people who are trying to make FFU happen clamoring for that, do "we" need to care? As in: we can't do everything, but we can coordinate other people if they come along.
14:37:38 is it something where we can afford to wait until we really need it?
14:38:54 Which "we" do you mean? Which deployment tools do FFU?
14:38:57 yes
14:39:05 tripleo claims support for FFU,
14:39:06 oh, "which"
14:39:13 and yes i tend to agree that those who care about it need to step up
14:40:30 okay.
14:41:04 #action us keep kicking reshaper FFU can down road
14:41:12 #topic bugs
14:41:12 #link Placement bugs https://bugs.launchpad.net/nova/+bugs?field.tag=placement
14:41:31 anyone?
14:42:01 #topic opens
14:43:28 Anything else before we close?
14:43:36 I got nothing other than to say: I'm glad we're trying to build in some burn-in time. In my personal testing things are working well, but every now and again things like what tetsuro finds come up
14:44:19 tetsuro has a flamethrower
14:44:30 he's good like that
14:44:32 ...or something cleverly implying he's good at burn-in
14:44:41 close enough
14:44:48 * tetsuro googles flamethrower
14:45:10 * efried waits for it
14:45:10 we're never going to see tetsuro again as he disappears down a google hole
14:45:24 https://www.google.com/search?q=the+thing+flamethrower&safe=off&client=firefox-b-1-ab&source=lnms&tbm=isch&sa=X&ved=0ahUKEwi0vZKOpvLeAhVKQ6wKHZVeCpkQ_AUIDigB&biw=1536&bih=738
14:45:32 now tetsuro needs to watch The Thing
14:45:39 it's a staple for winter
14:45:42 "burn in" would be a much harder colloquialism to explain
14:45:47 the perfect holiday film
14:46:34 okay, thanks all, go do work.
14:46:36 #endmeeting