14:00:13 <edleafe> #startmeeting nova_scheduler
14:00:14 <openstack> Meeting started Mon Apr  9 14:00:13 2018 UTC and is due to finish in 60 minutes.  The chair is edleafe. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:15 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:18 <openstack> The meeting name has been set to 'nova_scheduler'
14:00:28 <edleafe> Good UGT morning, everyone! Who's here today?
14:00:34 <tssurya> o/
14:00:41 <takashin> o/
14:00:44 <efried> ö/
14:01:06 * edleafe wonders how efried got those bumps on his head
14:01:13 <efried> it's my new haircut
14:01:19 <mriedem> o/
14:01:22 <tetsuro> o/
14:01:32 <edleafe> a double mohawk?
14:01:35 <cdent> horns
14:01:45 <jaypipes> o/
14:02:32 <edleafe> Well, let's get started
14:02:35 <edleafe> #topic Specs
14:02:46 <edleafe> Still a huge number of outstanding specs
14:02:54 <edleafe> Here's my current list:
14:03:18 <edleafe> #link VMware: place instances on resource pool (using update_provider_tree) https://review.openstack.org/#/c/549067/
14:03:21 <edleafe> #link mirror nova host aggregates to placement API https://review.openstack.org/#/c/545057/
14:03:24 <edleafe> #link Proposes NUMA topology with RPs https://review.openstack.org/#/c/552924/
14:03:27 <edleafe> #link Account for host agg allocation ratio in placement https://review.openstack.org/#/c/544683/
14:03:30 <edleafe> #link Spec for isolating configuration of placement database https://review.openstack.org/#/c/552927/
14:03:33 <edleafe> #link Support default allocation ratios https://review.openstack.org/#/c/552105/
14:03:36 <edleafe> #link Spec on preemptible servers https://review.openstack.org/#/c/438640/
14:03:39 <edleafe> #link Handle nested providers for allocation candidates https://review.openstack.org/#/c/556873/
14:03:42 <edleafe> #link Add Generation to Consumers https://review.openstack.org/#/c/556971/
14:03:45 <edleafe> #link Proposes Multiple GPU types https://review.openstack.org/#/c/557065/
14:03:48 <edleafe> #link Standardize CPU resource tracking https://review.openstack.org/#/c/555081/
14:03:51 <edleafe> #link Network bandwidth resource provider https://review.openstack.org/#/c/502306/
14:03:54 <edleafe> #link Propose counting quota usage from placement https://review.openstack.org/#/c/509042/
14:03:57 <edleafe> If I missed yours, please let me know
14:04:14 <mriedem> i've got one to discuss
14:04:14 <jaypipes> there's a new one from tetsuro
14:04:15 <efried> Did tetsuro's end up in there?  looking...
14:04:20 <jaypipes> jinx :)
14:04:25 <edleafe> Probably not
14:04:32 <mriedem> had something on jaypipes' mirror aggregates spec https://review.openstack.org/#/c/545057/
14:04:35 <edleafe> I just removed the merged ones from last week's list
14:04:45 <efried> Here's tetsuro's: https://review.openstack.org/#/c/559466/
14:04:53 <tetsuro> oh thanks
14:05:28 <edleafe> #link Return all resources in provider summaries https://review.openstack.org/#/c/559466/
14:05:43 <edleafe> mriedem: go for it
14:05:53 <mriedem> now i lost my spot
14:05:57 <mriedem> https://review.openstack.org/#/c/545057/8/specs/rocky/approved/placement-mirror-host-aggregates.rst@130
14:06:07 <mriedem> thing about the upgrade impact and nova-api requiring placement,
14:06:20 <mriedem> since we can't check for that with nova-status, i was wondering if we should make that graceful in rocky, and hard fail in stein
14:06:37 <mriedem> like we did in newton with the computes reporting to placement
14:07:42 <mriedem> tbc, at some point after ocata, nova-conductor needed to start requiring placement too for a bug fix with forced live migrate and evacuate
14:08:02 <edleafe> jaypipes: your feeling on this?
14:08:09 <jaypipes> edleafe: fine by me.
14:08:31 <mriedem> it wouldn't be the end of the world if it were a hard fail off the bat,
14:08:35 <mriedem> but seems we can be nice here
14:09:02 <jaypipes> like I said, fine by me
14:09:19 <mriedem> alright, add that and i'm +2
14:09:29 * edleafe notes that jaypipes wants to be nice
14:09:30 <jaypipes> k
14:09:41 <jaypipes> edleafe: don't take that the wrong way.
14:10:04 <edleafe> Any other spec questions?
14:10:46 <cdent> I'd really like to get the isolated/optional database stuff happening
14:11:00 <cdent> is there anything blocking that other than lack of review bandwidth?
14:11:29 <mriedem> not that i'm aware of
14:11:32 <edleafe> It's already got a +2 and a ton of +1s
14:12:09 * bauzas waves late
14:12:26 * edleafe waves back
14:13:55 <edleafe> OK, nova-specs cores: please take a look at that spec
14:14:17 <bauzas> sure, I'll try
14:14:21 <edleafe> thx
14:14:30 <edleafe> Next up...
14:14:34 <edleafe> #topic Reviews
14:14:49 <edleafe> Once again, here's the current link dump:
14:14:53 <edleafe> #link Update Provider Tree https://review.openstack.org/#/q/topic:bp/update-provider-tree
14:14:56 <edleafe> #link Nested resource providers https://review.openstack.org/#/q/topic:bp/nested-resource-providers
14:14:59 <edleafe> #link Nested providers in allocation candidates https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates
14:15:02 <edleafe> #link Request Filters https://review.openstack.org/#/q/topic:bp/placement-req-filter
14:15:05 <edleafe> #link Mirror nova host aggregates to placement https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates
14:15:08 <edleafe> #link Forbidden Traits https://review.openstack.org/#/q/topic:bp/placement-forbidden-traits
14:15:11 <edleafe> #link Consumer Generations https://review.openstack.org/#/q/topic:bp/add-consumer-generation
14:15:14 <edleafe> #link Extraction https://review.openstack.org/#/q/topic:bp/placement-extract
14:15:17 <edleafe> #link Purge comp_node and res_prvdr records during deletion of cells/hosts https://review.openstack.org/#/c/546660/
14:15:21 <edleafe> #link A huge pile of improvements to osc-placement https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky
14:15:24 <edleafe> #link Add compute capabilities traits (to os-traits) https://review.openstack.org/#/c/546713/
14:15:27 <edleafe> #link General policy sample file for placement https://review.openstack.org/#/c/524425/
14:15:30 <edleafe> #link Provide framework for setting placement error codes https://review.openstack.org/#/c/546177/
14:15:33 <edleafe> #link Get resource provider by uuid or name (osc-placement) https://review.openstack.org/#/c/527791/
14:15:36 <edleafe> #link placement: Make API history doc more consistent https://review.openstack.org/#/c/477478/
14:15:39 <edleafe> #link Handle agg generation conflict in report client https://review.openstack.org/#/c/556669/
14:15:42 <edleafe> #link Slugification utilities for placement names https://review.openstack.org/#/c/556628/
14:15:45 <edleafe> #link Remove usage of [placement]os_region_name https://review.openstack.org/#/c/557086/
14:15:48 <edleafe> #link Get rid of 406 paths in report client https://review.openstack.org/#/c/556633/
14:15:53 <edleafe> This is the list of reviews from last week, with the merged ones removed
14:16:02 <mriedem> https://review.openstack.org/#/q/topic:bp/placement-req-filter is just WIPs at this point
14:16:04 <bauzas> -ETOOMANYLINES
14:16:07 <mriedem> not sure if that should stay in there
14:16:16 <bauzas> -----buffer overflow-----
14:16:29 <bauzas> yeah, agreed with mriedem
14:16:47 <bauzas> also, I'd like to understand what's missing for nested-resource-providers
14:16:55 <edleafe> mriedem: so no further work will likely be done on the two outstanding patches?
14:16:58 <bauzas> when I reviewed it last time, it was only having one change to merge
14:17:21 <mriedem> edleafe: mine will at some point,
14:17:23 <cdent> bauzas: read my latest placement update, that theme is on two topics
14:17:25 <mriedem> but it's lower priority
14:17:33 <bauzas> cdent: I did
14:17:36 <mriedem> i can't speak for dan's wip
14:17:42 <bauzas> cdent: but gerrit wasn't saying the same
14:18:07 <bauzas> at least on last week
14:18:08 <cdent> hmmm. maybe things have changed yet again. topics never seem as stable as I hope
14:18:31 <bauzas> cdent: anyway, will check the changes by your email
14:18:37 <bauzas> thanks for that, btw.
14:18:47 <edleafe> Are there any reviews anyone wants to discuss here?
14:19:50 * edleafe listens to the crickets chirping
14:20:05 <edleafe> Then let's move on to
14:20:11 <edleafe> #topic Open Discussion
14:20:16 <edleafe> There is one topic:
14:20:18 <edleafe> Priorities etherpad
14:20:19 <edleafe> https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
14:20:23 <edleafe> oops
14:20:33 <edleafe> #link Priorities etherpad https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
14:20:35 * cdent sings the jeffersons theme
14:20:42 <edleafe> copy/paste fail
14:21:20 <edleafe> We said we would add the reviews we were working on to that etherpad, and then prioritize them today at this meeting
14:21:32 <edleafe> No one (myself included) did that
14:21:42 <edleafe> So there really isn't anything to prioritize
14:22:07 <edleafe> Is this still important? Or do we just want to keep working on what we are working on?
14:22:13 <cdent> heh
14:22:35 <cdent> I guess since I wasn't here last week I wasn't really aware of that plan
14:22:58 <cdent> only that something was going to be done with the etherpad
14:22:59 <edleafe> What? You don't read the minutes of every meeting???
14:23:37 <cdent> I know it can seem like I read everything...
14:23:56 <jaypipes> it does indeed.
14:24:13 <edleafe> Let me ask again:
14:24:15 <edleafe> Is this still important? Or do we just want to keep working on what we are working on?
14:24:43 <jaypipes> edleafe: is this a runway thing?
14:25:43 <mriedem> sounds like subteam runway
14:25:44 <edleafe> jaypipes: no, just a way to deal with the huge number of patches out there
14:26:18 <edleafe> the consensus was that there is no way we are going to get all of that merged in Rocky
14:26:30 <jaypipes> edleafe: If I'm being honest with folks, I'm maybe going to be able to review (properly) maybe one or two patch series per day.
14:26:39 <edleafe> so what do we want to really focus on?
14:26:55 <jaypipes> edleafe: if I'm going to actually have time to work on my own assigned code around mirroring and other things
14:27:21 <jaypipes> edleafe: for me, tetsuro's patches around alloc candidates are top of my list right now, review-wise.
14:27:37 <jaypipes> edleafe: and I'd like to get the remainder of your consumer generation patches reviewed.
14:27:44 <edleafe> jaypipes: Don't forget you have to also finish your cloning machine
14:27:47 <jaypipes> edleafe: that will probably be it for me today.
14:28:02 <jaypipes> I might be able to do some cdent patches while waiting on tests
14:28:31 <cdent> The mental process I tend to use is that I order things based on how they are ordered in the placement update email: the main themes are at the top, and the stuff in "other" is ordered such that newer stuff is added to the end
14:28:49 <edleafe> That's an alternative
14:28:49 <cdent> so if you are stuck for priority, start at the top of the email and work down
14:29:13 <edleafe> Focus on the placement email ordering, and we can discuss changing that if/when things change
14:29:17 <cdent> that has a flaw, though, if it means no one ever gets in the other stack
14:29:46 <edleafe> cdent: maybe that's where this meeting could be helpful
14:31:41 <cdent> perhaps. Are we addressing the right problem? Is there also the problem of "we start too much work"
14:32:09 <cdent> except I don't want to say that because much of the work I start is outside the priority themes
14:33:09 <bauzas> I just feel we implemented a lot of things in between Newton and now
14:33:14 <edleafe> We're all going to focus on the work we are doing, and the patches that are related to that. Where I see this as helpful is "I've got a few spare cycles. What should I look at?"
14:33:22 <bauzas> now, the Placement API is really important for Nova
14:33:47 <bauzas> but maybe it also means that we discover a lot of new concerns now
14:33:58 <bauzas> because we now *use* the Placement API
14:34:00 <edleafe> Or also, "I'm focused on my stuff. What are the important areas I need to keep up with?"
14:34:18 <bauzas> so, IMHO, it's not really a problem
14:34:35 <bauzas> I remember when nova-volume stopped and then we used cinder
14:34:46 <bauzas> it was the same point
14:34:53 <bauzas> quantum, well...
14:35:13 <bauzas> anyway
14:35:49 <edleafe> bauzas: that's a good point. We've dealt with this sort of thing before. We just need to keep getting better at it
14:36:19 <bauzas> it just means we need some time
14:36:26 <bauzas> that's it
14:36:31 <edleafe> Or clones
14:36:36 <bauzas> but maybe I'm just optimistic
14:36:38 <jaypipes> moving on...
14:36:51 <bauzas> edleafe: I already have two clones
14:36:58 <edleafe> Anything else to discuss? Or should we get back to work?
14:37:22 <edleafe> bauzas: I saw pictures of them. They aren't clones; they're much better looking :)
14:37:28 <jaypipes> edleafe: we need to settle the unrestricted vs. separate by default thing.
14:37:35 <bauzas> edleafe: but not like in Star Wars, it needs time for my clones to be IT people :p
14:37:39 <edleafe> jaypipes: Ah, good point
14:37:56 <edleafe> want to start?
14:38:05 <bauzas> jaypipes: I think the default behaviour should be the existing
14:38:07 <jaypipes> edleafe: so, after thinking all weekend on this, I do see efried's point on this.
14:38:25 <bauzas> from what I did read from cdent's email
14:38:39 <bauzas> ie. unrestricted
14:39:34 <efried> Are you waiting for me to say something?
14:40:12 <jaypipes> although I am loath to say I agree with efried on anything of substance, I submit that in this case, leaving the "unrestricted by default" wording in the spec and coming up with another way of communicating "hey, these request groups MUST land on separate providers" is probably the best bet.
14:40:32 <efried> It makes the implementation easier, I can tell you that from having worked on it some over the past few days.
14:41:04 <jaypipes> like I said, I am loath to agree with you, efried, but yes, I think you're right.
14:41:11 * efried rejoices?
14:41:27 <efried> If it helps, jaypipes, you originally agreed with it when we were writing the spec.
14:41:49 <jaypipes> now, I will need to bring dansmith on board with this change in my mindset, though. :)
14:41:59 <edleafe> maybe add another query param to indicate separate RPs for all granular requests?
14:42:03 <efried> With bauzas, it's now three on one.
14:42:07 <jaypipes> efried: well, I may have originally agreed with it, but not intentionally ;)
14:42:28 <bauzas> edleafe: for unrestricted ? I don't think so
14:42:29 <efried> edleafe: Yes, we will have to do that, or something like it, at some point.
14:42:47 <efried> But I think not now.
14:42:51 <efried> i.e. it's not immediately needed.
14:42:59 <edleafe> Unrestricted seems to fit most use cases that have been brought up
14:43:05 <edleafe> efried: zactly
14:43:09 <jaypipes> efried: that said, I still think the spec should have some wording added to clear things up.
14:43:32 <efried> jaypipes: I will happily review any edits you'd like to propose.
14:43:34 <jaypipes> efried: specifically, to state that we have not yet decided how to communicate the requirement that separate providers be used.
14:44:24 <efried> jaypipes: There was a rev in there where I spelled that out.  But dansmith made me remove that text.
14:44:46 <jaypipes> efried: ok, I can go back and look at the older rev.
14:45:12 <jaypipes> efried: for the record...
14:46:31 <jaypipes> efried: the reason I eventually came to this decision was because I concluded you were correct in saying that *not* doing it this way would mean a backwards-incompatible behaviour change for deployments
14:46:32 <bauzas> efried: ping me when you're done with a new revision for your spec
14:46:54 <efried> bauzas: Which spec?
14:46:56 <bauzas> jaypipes: +1 with me, exactly why I agree
14:47:13 <bauzas> efried: (16:43:09) jaypipes: efried: that said, I still think the spec should have some wording added to clear things up.
14:47:18 <bauzas> which I agree topo
14:47:20 <bauzas> too
14:47:28 <jaypipes> efried: w.r.t. how a request for 4 VCPU would (using separate by default) not land on a node with 2 VCPU available on each of 2 NUMA node providers, whereas before it would.
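[Editor's note: the "unrestricted vs. separate by default" debate above concerns how numbered request groups in a granular GET /allocation_candidates query map onto resource providers. A minimal sketch of building such a query string follows; the `group_policy` parameter name reflects the syntax that later landed in placement API microversion 1.25 ("none" = unrestricted, "isolate" = separate providers), but treat the helper itself as illustrative, not actual placement code.]

```python
from urllib.parse import urlencode

def granular_query(groups, group_policy="none"):
    """Build a granular allocation-candidates query string.

    Each dict in `groups` becomes a numbered request group
    (resources1=..., resources2=..., ...).  group_policy says whether
    groups may share a provider ("none", i.e. unrestricted) or must
    land on separate providers ("isolate").
    """
    params = {}
    for i, resources in enumerate(groups, start=1):
        params["resources%d" % i] = ",".join(
            "%s:%d" % (rc, amount) for rc, amount in resources.items())
    params["group_policy"] = group_policy
    return "GET /allocation_candidates?" + urlencode(params)

# Two groups of 2 VCPU each: with "isolate" they must be satisfied by
# different providers (e.g. two NUMA nodes); with "none" a single
# provider exposing 4 VCPU could satisfy both -- the jaypipes/efried
# example above.
print(granular_query([{"VCPU": 2}, {"VCPU": 2}], group_policy="isolate"))
```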
14:49:45 <cdent> are we done?
14:49:52 <edleafe> There is one other thing I'd like to discuss: should we move to a separate #openstack-placement channel
14:49:53 <efried> jaypipes: Bottom of the "Alternatives" section mentions the "niche" of unsatisfied use cases, which were explained further in PS3 of the original spec: https://review.openstack.org/#/c/510244/3/specs/queens/approved/granular-resource-requests.rst@361
14:50:50 <cdent> edleafe: yes
14:50:58 <bauzas> edleafe: if we're keeping that channel to specific Placement questions, sure
14:51:16 <edleafe> bauzas: and general placement discussions
14:51:19 <efried> edleafe: I'm +1 on the idea.  Saw a couple other +1s in the ML.  Nobody has come out against it yet.
14:51:25 <bauzas> edleafe: but like I said in my email, for example NUMA topologies using nested RPs should still be discussed in #nova
14:51:27 <edleafe> unclogging the -nova channel a bit
14:51:51 <bauzas> edleafe: because some nova experts like sean mooney could have opinions
14:51:55 <jaypipes> edleafe: I have been in #openstack-placement for an hour or so :)
14:52:04 <bauzas> shit, I need to join then
14:52:14 <bauzas> AFAIR, we also need to make it "official"
14:52:18 <edleafe> ah, didn't realize it had become a reality yet!
14:52:23 <bauzas> in eavesdrop I mean
14:52:33 <bauzas> ie. adding loggers and so on
14:52:35 <efried> There will be a certain amount of growing pains and overlap and "let's move this to the other channel" stuff for a while.  That's a normal part of doing business in IRC.
14:52:44 <efried> Certainly not a reason to avoid doing it.
14:52:55 <jaypipes> edleafe: well, it's easy enough to /join ...
14:52:56 <jaypipes> edleafe: someone else can do the needful w.r.t. eavesdrop and all that jazz.
14:53:02 <edleafe> jaypipes: just did
14:53:14 <bauzas> efried: yeah, tbc, the main thing to remember is that if the convo needs some nova experts, then use #nova
14:53:38 <bauzas> if that's all about placement bits, then #placement
14:53:44 <efried> jaypipes: Yeah, unless you spell it 4/join :P
14:53:52 <jaypipes> efried: :) indeed.
14:54:26 <edleafe> So is anyone taking on the eavesdrop stuff?
14:54:35 * jaypipes has no idea...
14:54:46 <efried> I nominate cdent
14:55:03 <cdent> I already volunteered, last week
14:55:03 <bauzas> edleafe: lemme find the doc
14:55:12 <cdent> and have the doc somewhere nearby
14:55:16 <bauzas> I did that a while for another stackforge project
14:55:17 <cdent> so will take care of stuff
14:55:19 <edleafe> oh, it sounds like bauzas is volunteering!!
14:55:38 <bauzas> edleafe: I volunteered for finding pointers :p
14:55:42 <efried> It would be neat if we could get patchbot to post for patches based on, like, subject containing "placement", or containing files within the placement hierarchy, etc.
14:55:52 <bauzas> baby steps first :)
14:56:00 <bauzas> at least the status and the logger bots
14:56:04 <efried> Though that'll be (relatively) temporary until it gets its own project.
14:56:17 <edleafe> efried: let's hope!
14:56:40 <bauzas> found https://docs.openstack.org/infra/system-config/irc.html
14:56:44 <bauzas> there it is
14:56:50 * bauzas rolls his sleeves
14:57:00 <cdent> bauzas: I'll do it
14:57:06 <bauzas> cdent: cool
14:57:16 <bauzas> just follow the doc
14:57:21 <cdent> as I had already had it on my to do list this week
14:57:36 <bauzas> we don't need the meetbot tho
14:57:37 <edleafe> #action cdent to set up #openstack-placement with eavesdrop, bots, etc., to make it "official"
14:58:02 <edleafe> Anything else?
14:58:14 <bauzas> oh shit, we need meetbot for logging
14:58:34 <edleafe> OK, thanks everyone!
14:58:37 <edleafe> #ndmeeting
14:58:40 <edleafe> ugh
14:58:44 <edleafe> #endmeeting