14:00:17 <mriedem> #startmeeting nova
14:00:18 <openstack> Meeting started Thu Mar 23 14:00:17 2017 UTC and is due to finish in 60 minutes.  The chair is mriedem. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:19 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:21 <openstack> The meeting name has been set to 'nova'
14:00:28 <mriedem> well hello
14:00:29 <takashin> o/
14:00:33 <sfinucan> o/
14:00:35 <dansmith> o/
14:00:40 <edleafe> \o
14:00:59 <johnthetubaguy> o/
14:01:00 * bauzas ohai
14:01:10 <jroll> \o
14:01:15 <alex_xu> o/
14:01:27 <gibi> o/
14:01:36 <gcb> o/
14:01:37 <mriedem> #link agenda https://wiki.openstack.org/wiki/Meetings/Nova
14:01:47 <mriedem> #topic release news
14:01:53 <mriedem> #link Pike release schedule: https://wiki.openstack.org/wiki/Nova/Pike_Release_Schedule
14:02:06 <mriedem> #info next upcoming milestone: Apr 13: p-1 milestone, Nova Spec Freeze
14:02:15 <mriedem> so 3 weeks away
14:02:24 <mriedem> #info Blueprints: 59 targeted, 35 approved
14:02:34 <mriedem> and 1 completed
14:02:41 <mriedem> #info Open spec reviews: 111 (down 6 from last week)
14:02:52 <mriedem> so we have a lot of open specs
14:03:08 <mriedem> personally i haven't been doing a great job of reviewing new specs,
14:03:16 <mriedem> i've been feeling a bit overwhelmed by what we already have going
14:03:22 <mriedem> but that's just me
14:03:33 <mriedem> anything for the release?
14:03:46 <mriedem> #topic bugs
14:03:56 <mriedem> no critical bugs
14:04:05 <mriedem> gate status
14:04:05 <mriedem> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:04:10 <mriedem> things have been ok
14:04:27 <mriedem> jbernard was asking about the ceph job yesterday so it sounds like that is starting to move again
14:04:33 <mriedem> he's working on the whitelist
14:04:45 <mriedem> 3rd party CI
14:04:46 <mriedem> #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
14:05:03 <mriedem> vmware nsx ci continues to have some issues, i noticed it voting on long merged patches again yesterday
14:05:08 <mriedem> but i'm told they are working on it
14:05:18 <mriedem> any bugs anyone wants to bring up?
14:05:26 <johnthetubaguy> mriedem: it's not just you, I was feeling the same when looking through the specs
14:05:46 <mriedem> cool, misery loves company :)
14:05:51 * johnthetubaguy nods
14:05:53 <johnthetubaguy> :)
14:05:55 <bauzas> spec review day maybe, then?
14:06:03 <mriedem> bauzas: yeah probably should
14:06:26 <mriedem> looking at dates we could do next week, or the first week of april
14:06:31 <johnthetubaguy> a big push might help some
14:06:35 <mriedem> which then gives people about another week to address comments
14:06:44 <bauzas> yup
14:06:45 <dansmith> first day of april would be good
14:06:50 <mriedem> wrong!
14:06:54 <bauzas> heh, was about to say foolish :p
14:06:57 <dansmith> ah, dang that's a saturday
14:06:59 <mriedem> yeah
14:07:00 <mriedem> ha
14:07:06 <dansmith> I could really go hog wild on that
14:07:25 <mriedem> next week isn't good for me, traveling
14:07:38 <mriedem> so how about 4/4?
14:07:48 <johnthetubaguy> yeah
14:07:56 <bauzas> wfm
14:08:05 <mriedem> #agreed spec review day on April 4th
14:08:22 <mriedem> #topic reminders
14:08:26 <mriedem> #link Pike Review Priorities etherpad: https://etherpad.openstack.org/p/pike-nova-priorities-tracking
14:08:35 <mriedem> #link Forum planning: https://wiki.openstack.org/wiki/Forum/Boston2017
14:08:42 <mriedem> #link https://etherpad.openstack.org/p/BOS-Nova-brainstorming Forum discussion planning for nova (add your name if you are going)
14:08:50 <mriedem> those are all a bit old
14:08:54 <mriedem> the important one now is:
14:08:55 <mriedem> #info EOD April 2: deadline for Forum topic submission: http://forumtopics.openstack.org/
14:09:04 <bauzas> I added a section for ad-hoc possible discussions
14:09:09 <bauzas> around tables
14:09:18 <bauzas> in case we need that tho :)
14:09:19 <mriedem> there are topics up there for cells v2 and hierarchical quotas, and other things that involve nova
14:09:32 <mriedem> i'm meaning to submit a forum session about placement,
14:10:06 <mriedem> i'm not entirely sure what the content will be, since jaypipes is already doing 2 talks on placement, but i guess this can be the cross-project session, with users and operators, to ask low-level questions or talk about roadmap stuff and current progress
14:10:14 <bauzas> well
14:10:22 <bauzas> jay's talks are more presentations
14:10:34 <bauzas> I was expecting some kind of interaction with our lovely operators at the Forum
14:10:45 <mriedem> i also expect ops and users to be at jay's talks
14:10:50 <bauzas> sure
14:10:51 <mriedem> and one of them is high level and one is low level
14:10:58 <johnthetubaguy> so I was trying to get a group of things in a cross project-ey way
14:11:04 <mriedem> anyway, we probably need to talk about claims in the scheduler there anyway
14:11:05 <johnthetubaguy> let me get the etherpad of that session plan
14:11:11 <bauzas> but Q&A for 5-10 mins is maybe too short for that big prezo :)
14:11:18 <mriedem> we didn't talk about claims in the scheduler at the PTG,
14:11:29 <mriedem> and i feel like that's a big new thing that sort of exploded on the schedule in short order
14:11:31 <bauzas> mriedem: I wrote a start of a draft around that
14:11:38 <johnthetubaguy> #link https://etherpad.openstack.org/p/BOS-TC-vm-baremetal-platform
14:11:41 <bauzas> (and a spec)
14:12:03 <cdent> if we can virtualize any forum discussion on claims, that would be fantastic
14:12:07 <bauzas> I need to cycle through the current concerns and provide a new PS for that claims spec
14:12:24 <johnthetubaguy> mriedem: bauzas: would it be worth some virtual meet up for a few hours to deal with claims?
14:12:40 <mriedem> johnthetubaguy: yeah probably
14:12:41 <bauzas> well
14:12:43 <mriedem> for me at least
14:12:46 <bauzas> the main problem is space
14:12:55 <mriedem> johnthetubaguy: are you suggesting a hangout?
14:12:55 <bauzas> I would be happy with that, but where?
14:12:59 <mriedem> or at the forum?
14:13:02 <johnthetubaguy> I was thinking a google hangout, yeah
14:13:10 <johnthetubaguy> I mean virtual rather than physical
14:13:14 <mriedem> pre-summit
14:13:16 <johnthetubaguy> yeah
14:13:17 <bauzas> ah
14:13:20 <mriedem> yes i'm good with that
14:13:25 <bauzas> I'm good too
14:13:37 <johnthetubaguy> seems a worthy experiment
14:13:38 <bauzas> I just wanted to get thoughts flowing on that pre-Queens
14:13:43 <mriedem> #agreed have a hangout pre-forum about claims in the scheduler
14:13:58 <bauzas> hence the Super-WIP (c) me spec
14:13:58 <johnthetubaguy> we might still *need* a forum chat, but we can try to fix that sooner
14:14:15 <mriedem> #action mriedem to submit a placement/scheduler/claims forum session
14:14:34 <mriedem> my concern is that the agreement on the ML was that claims in the scheduler are now a top priority,
14:14:45 <mriedem> but i don't have an understanding of it at all,
14:14:55 <mriedem> so i'd like to talk about it before i have to review it :)
14:14:55 <bauzas> wait, what?
14:15:04 <bauzas> anyway, off-meeting
14:15:07 <johnthetubaguy> we need it pre-split
14:15:08 <cdent> mriedem: don't feel bad, nobody does
14:15:17 <dansmith> oh come on now
14:15:31 <mriedem> there are 2-4 people that have an idea
14:15:33 <mriedem> i'll grant that
14:15:50 <johnthetubaguy> I have a design in my head, I bet it's wrong and not like anyone else's
14:15:50 <bauzas> FWIW the spec is https://review.openstack.org/#/c/437424/
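For context on what "claims in the scheduler" could mean, here is a minimal sketch, assuming the early placement REST API of this era: the scheduler records an allocation against the chosen host before returning it, instead of the compute node's resource tracker claiming after the fact. The endpoint URL, token handling, and function name are illustrative assumptions, not the agreed design.

    import requests

    PLACEMENT = "http://placement.example/placement"  # assumed endpoint
    TOKEN = "..."  # keystone token, assumed

    def claim_on_host(consumer_uuid, provider_uuid, vcpus, ram_mb):
        """Record an allocation for the instance against the chosen host."""
        body = {
            "allocations": [{
                "resource_provider": {"uuid": provider_uuid},
                "resources": {"VCPU": vcpus, "MEMORY_MB": ram_mb},
            }],
        }
        resp = requests.put(
            "%s/allocations/%s" % (PLACEMENT, consumer_uuid),
            json=body,
            headers={"X-Auth-Token": TOKEN},
        )
        # 204 means the claim stuck; anything else means the host is full
        # or another scheduler raced us, so pick a different host.
        return resp.status_code == 204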
14:16:01 <dansmith> anyway, we should move on
14:16:03 <cdent> johnthetubaguy: right, exactly that
14:16:10 <mriedem> yeah let's move on
14:16:13 <johnthetubaguy> yeah, moving on time
14:16:14 <johnthetubaguy> cdent: +1
14:16:24 <mriedem> #topic Stable branch status: https://etherpad.openstack.org/p/stable-tracker
14:16:32 <mriedem> #info Ocata 15.0.2 is released.
14:16:37 <mriedem> #info Newton 14.0.5 is released.
14:16:42 <mriedem> #info Mitaka 13.1.4 is released.
14:16:49 <mriedem> all of those were monday night
14:16:52 <mriedem> for a cve
14:17:12 <mriedem> otherwise stable is looking ok
14:17:19 <mriedem> #topic subteam highlights
14:17:23 <mriedem> dansmith: cells v2
14:17:34 <dansmith> basically a no-op meeting this week.
14:17:48 <dansmith> we have my patch set up for review, melwitt is working on fixing quotas after I broke them
14:17:55 <dansmith> that's about it.. just chugging along
14:18:04 <mriedem> ok
14:18:09 <mriedem> edleafe: scheduler
14:18:12 <edleafe> Traits code is getting very close - pushing to get it in ASAP
14:18:13 <edleafe> Discussed correctness of requiring Content-Type when there is no content, as in a PUT without a body. Decided that was not correct. (See the sketch after this report.)
14:18:15 <edleafe> Wondered about a way to wire jaypipes to the internet so we can Google his brain.
14:18:19 <edleafe> Discussed whether the Resource Tracker failing silently when an Allocation fails was a bug or not. Jay assured us it is by design, so as not to break the RT when placement fails, but agreed that adding a warning to the logs would be ok.
14:18:22 <edleafe> We expressed concern that the current claims spec was "too how, and not enough what".
14:18:25 <edleafe> EOM
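On the Content-Type point above, a minimal sketch of the case in question, using an illustrative placement-style endpoint (the exact path is an assumption): a PUT that carries no body at all, where insisting on a Content-Type header would mean describing content that does not exist.

    import requests

    # A body-less PUT, e.g. idempotently creating a named resource class.
    resp = requests.put(
        "http://placement.example/placement/resource_classes/CUSTOM_GOLD",
        headers={"X-Auth-Token": "..."},  # note: no Content-Type, no body
    )
    # The subteam's conclusion: rejecting this request for lacking a
    # Content-Type header would be incorrect.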
14:18:50 <mriedem> ok
14:18:59 <mriedem> tdurakov: live migration
14:19:18 <mriedem> moving on
14:19:25 <mriedem> alex_xu: api subteam meeting highlights?
14:19:29 <alex_xu> We discussed the specs for policy-remove-scope-check https://review.openstack.org/433037 and additional-default-policy-roles https://review.openstack.org/427872. Both specs are looking for wider feedback
14:19:36 <alex_xu> We also talked about the spec for using uuids in the services and os-hypervisors APIs https://review.openstack.org/447149, and about using 'PUT' instead of the strange '/services/{action}' actions in the services API.
14:19:45 <alex_xu> We also talked about deprecating the os-hosts API; there is a mail about that from mriedem: http://lists.openstack.org/pipermail/openstack-dev/2017-March/114487.html
14:19:50 <alex_xu> that is all
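For readers outside the API subteam, a rough sketch of the territory those policy specs touch; the rule name matches nova's policy naming of the time, but the defaults and roles shown here are illustrative assumptions, not the specs' actual proposals.

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Today most nova rules collapse to a single admin check; the specs
    # discuss registering richer role defaults in code.
    enforcer.register_default(
        policy.RuleDefault("os_compute_api:os-services", "role:admin"))

    creds = {"roles": ["admin"], "project_id": "p1", "user_id": "u1"}
    allowed = enforcer.enforce("os_compute_api:os-services", {}, creds)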
14:20:01 <mriedem> i bombarded https://review.openstack.org/433037 for johnthetubaguy yesterday
14:20:06 <mriedem> showing my policy ignorance in there
14:20:15 <johnthetubaguy> so you did, coolness
14:20:18 <mriedem> thanks alex_xu
14:20:22 <alex_xu> mriedem: np
14:20:28 <mriedem> moshele isn't around
14:20:36 <mriedem> sfinucan: is the sriov/pci meeting still happening? do you attend that?
14:20:59 <mriedem> i can check later
14:21:00 <sfinucan> mriedem: No, I've been there but nothing has happened since January
14:21:03 <mriedem> ok
14:21:15 <mriedem> gibi: notification meeting highlights?
14:21:23 <gibi> waiting for searchlight to get a list of important notifications to transform
14:21:32 <gibi> I tried to ping the guys, no luck so far
14:21:46 <gibi> transformation work progressing steadily
14:22:08 <gibi> the BDM in instance notification work has a WIP patch up
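For reference, the transformation target is nova's versioned notification format, which wraps the payload in a versioned object; the event type and field values below are illustrative:

    sample = {
        "event_type": "instance.create.end",
        "publisher_id": "nova-compute:host1",
        "priority": "INFO",
        "payload": {
            "nova_object.name": "InstanceCreatePayload",
            "nova_object.namespace": "nova",
            "nova_object.version": "1.0",
            "nova_object.data": {"uuid": "...", "state": "active"},
        },
    }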
14:22:14 <johnthetubaguy> gibi: I wondered if we had that list yet when I was reading mriedem's spec
14:22:22 <mriedem> Kevin_Zheng: can you help figure out the priority list of notification transformations that searchlight needs in order to adopt nova versioned notifications?
14:22:50 <mriedem> i need to address comments in my spec too
14:22:56 <mriedem> i'll also ask about the priority list in the ML
14:23:02 <mriedem> thanks gibi
14:23:06 <gibi> thanks
14:23:13 <mriedem> powervm, efried left notes
14:23:19 <mriedem> We have six changes ready for broader review.  They're listed in order on the pike focus etherpad
14:23:25 <mriedem> (https://etherpad.openstack.org/p/pike-nova-priorities-tracking).  First one has been updated per mriedem comments and is hopefully close to approvable.
14:23:40 <mriedem> i need to go back and review that first (bottom) change in the series
14:23:50 <mriedem> but once i do, watch out other cores
14:24:00 <mriedem> cinder
14:24:11 <mriedem> so i'll represent the nova/cinder updates
14:24:13 <mriedem> still working on the same things as the last two weeks (bdm.attachment_id, support for cinder v3, and johnthetubaguy's spec for the new cinder APIs)
14:24:19 <Kevin_Zheng> mriedem: sure
14:24:23 <mriedem> Kevin_Zheng: thanks
14:24:28 <mriedem> we have the nova/cinder weekly meeting later today
14:24:54 <mriedem> i'm happy with lyarwood's bdm.attachment_id changes, but mdbooth was -1 until something was using them, which jgriffith has a patch for but we need to restore it and rebase
14:25:07 <mriedem> and get johnthetubaguy's spec merged, which again, i need to review.
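A hedged sketch of the new-style attach flow those changes build toward, per the cinder attachments API being designed at the time; the endpoint, microversion, and payload details were still in flight, so treat all of them as assumptions.

    import requests

    CINDER = "http://cinder.example/v3/my-project"  # illustrative endpoint
    HEADERS = {"X-Auth-Token": "...",
               "OpenStack-API-Version": "volume 3.27"}  # assumed microversion

    # 1. Nova creates an attachment record up front; the returned id is
    #    what bdm.attachment_id would store.
    resp = requests.post(
        CINDER + "/attachments",
        json={"attachment": {"volume_uuid": "vol-uuid",
                             "instance_uuid": "inst-uuid",
                             "connector": None}},  # reserve now, connect later
        headers=HEADERS,
    )
    attachment_id = resp.json()["attachment"]["id"]

    # 2. On detach, nova deletes the attachment rather than driving the
    #    old initialize/terminate_connection pair itself.
    requests.delete(CINDER + "/attachments/" + attachment_id, headers=HEADERS)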
14:25:20 <jgriffith> https://review.openstack.org/#/c/443932/1
14:25:43 <jgriffith> and a -1 from johnthetubaguy here https://review.openstack.org/#/c/439520/
14:25:56 <mriedem> jgriffith: https://review.openstack.org/#/c/443932/ isn't what i'm thinking of,
14:26:04 <mriedem> jgriffith: it was your new detach flow patch
14:26:10 <mriedem> that checked the bdm.attachment_id
14:26:14 <mriedem> but we can talk about that after the meeting
14:26:26 <jgriffith> mriedem: which duplicates the base lyarwood's is on. Sure, sorry
14:26:36 <mriedem> jgriffith: while you're here, nova spec for cinder image backend? :)
14:26:51 <jgriffith> mriedem we should talk about that too :)
14:26:55 <mriedem> heh ok
14:26:57 <mriedem> moving on
14:27:00 <mriedem> #topic stuck reviews
14:27:05 <mriedem> there was nothing on the agenda
14:27:11 <mriedem> does anyone have something they want to bring up?
14:27:14 <mdbooth> jgriffith: I'm very interested in cinder imagebackend, btw
14:27:39 <mriedem> no stuck reviews
14:27:42 <mriedem> #topic open discussion
14:27:46 <jgriffith> mdbooth cool
14:27:51 <mriedem> there was nothing on the agenda
14:27:57 <mriedem> anyone want to mention something?
14:28:03 <gibi> one thing from my side
14:28:13 <gibi> there is the scheduler hint api spec
14:28:25 <gibi> https://review.openstack.org/#/c/440580/
14:28:31 <gibi> it seems a bit stuck
14:28:44 <mriedem> i need to look at the latest back and forth in there
14:28:46 <mriedem> on the use case
14:28:59 <mriedem> can you summarize?
14:29:02 <gibi> I think it boils down to that
14:29:16 <gibi> we don't want to make the scheduler_hints part of nova's API contract
14:29:24 <gibi> but adding it to the response would do that by default
14:30:00 <mriedem> hmm, contract how? we just return the hints that were used to create the instance right?
14:30:08 <mriedem> just like with flavors, those hints might no longer be around
14:30:11 <mriedem> or re-usable on all clouds
14:30:35 <mriedem> they aren't exactly like flavors though since flavors have the same keys
14:30:35 <gibi> quoting sdague "The thing is, up until this point scheduler hints have been completely free form, which has meant they are largely skirting the API. When we start returning these and documenting the return values, we're going to start really forming these as strong contracts going forward."
14:30:41 <mriedem> hints are a big mess of custom
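To make the contract question concrete, here is roughly what the request side looks like today; the custom hint name below is a made-up example of the free-form keys operators can add, not a real hint.

    # Boot request body: standard in-tree hints sit next to operator-defined
    # ones under the same free-form dict.
    server_create = {
        "server": {"name": "vm1", "imageRef": "...", "flavorRef": "..."},
        "os:scheduler_hints": {
            "same_host": ["8c19174f-..."],   # in-tree SameHostFilter
            "rack_affinity": "rack42",       # hypothetical custom hint
        },
    }
    # The spec would echo this dict back on GET /servers/{id}, which is
    # what would turn every custom key into part of the API contract.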
14:31:03 <bauzas> yeah
14:31:14 <mriedem> we can't really document them, except the ones we have in tree
14:31:18 <gibi> my view is that if we were able to avoid documenting the accepted hints in the request, then we could do the same with the response
14:31:21 <mriedem> just like the in-tree scheduler filters
14:31:31 <bauzas> tbh, I don't want to nitpick, but returning the hints doesn't mean that you'd get the same placement
14:31:47 <bauzas> because operators could disable the related filter and then meh.
14:31:54 <johnthetubaguy> we might want to do the standard vs custom thing
14:31:55 <gibi> bauzas: true
14:32:01 <mriedem> well, same for flavors,
14:32:02 <johnthetubaguy> (and everything starts as custom)
14:32:05 <bauzas> so
14:32:10 <gibi> johnthetubaguy: I'm OK with only returning the standard hints
14:32:18 <mriedem> you might not be able to place a 2nd instance with the same flavor as the original if the extra specs make it hard to place
14:32:47 <bauzas> I'd like to be honest with our users and say "you can see those hints, that's nice, but that just means you *could* get the same behaviour"
14:32:52 <johnthetubaguy> gibi: that might be quite a good compromise
14:33:13 <sdague> so, it really feels like encoding the current hints is kind of a mess
14:33:16 <bauzas> either way, placing an instance really depends on the point in time
14:33:33 <sdague> especially when the real issue is the affinity persistence
14:33:50 <johnthetubaguy> so step 1 is the flavor stuff, that gives people help to build something similar
14:33:52 <sdague> that should maybe become a more top level concern than just random hints
14:33:54 <bauzas> I think it's just a matter of being explicit
14:34:07 <bauzas> seeing hints doesn't mean that it's a hard value
14:34:25 <bauzas> you *could* possibly have the behaviour you want
14:34:27 <gibi> bauzas: you mean being honest in the API doc?
14:34:40 <bauzas> but it's not something we guarantee
14:34:46 <bauzas> gibi: maybe I dunno
14:35:00 <bauzas> the real problem is that if we begin to return them, people will trust them
14:35:00 <johnthetubaguy> this is a total deep hole, but... image overrides, hints, extra-specs, per-instance-image-overrides, etc. I wish we had a way to wrangle that mess into something that could be interoperable (we know it's possible)
14:35:21 <bauzas> and people will expect to have a specific behaviour
14:36:06 <johnthetubaguy> at the PTG I was on the "you passed it to us, we should hand it back" side of the argument, but I think that was misguided
14:36:07 <gibi> my users are already expecting that the placement hints are kept during migration
14:36:22 <gibi> so even if I don't show them the hints
14:36:32 <johnthetubaguy> gibi: so that happens today, I thought?
14:36:33 <gibi> they can complain about us not following them
14:36:34 <sdague> gibi: right, and that is different than pulling them back out over the API
14:36:35 * johnthetubaguy looks at bauzas
14:36:43 <bauzas> about what? :)
14:36:48 <johnthetubaguy> does it work?
14:36:52 <gibi> johnthetubaguy: yes that works
14:36:59 <mriedem> yes we said at the ptg, or in the spec, that scheduler hints are, or should be, honored on move operations
14:37:03 <gibi> that is a behavior they are relying on
14:37:05 <bauzas> hints being persisted ? yes it does
14:37:12 <johnthetubaguy> gibi: ah, sorry, I see your point now
14:37:27 <mriedem> what does that have to do with exposing the hints out of the API to the user?
14:37:30 <mriedem> as long as the move works
14:37:42 <sdague> the issue is this edge case
14:37:57 <sdague> compute A, compute B (same-host=A)
14:38:03 <bauzas> mriedem: I'm fine with that, but users could expect a specific placement behaviour if reusing that hint
14:38:03 <sdague> migrate A; migrate B works
14:38:10 <sdague> migrate B; migrate A fails
14:38:10 <johnthetubaguy> sdague: yeah, that one sucks
14:38:12 <bauzas> sdague: my point exactly
14:38:35 <gibi> sdague: yes exactly
14:38:36 <johnthetubaguy> it's the one reason I saw to migrate all VMs on the host, somehow
14:38:42 <sdague> but... exposing all of this for that edge case seems like a really big hammer
14:38:46 <bauzas> if SameHostFilter or AffinityFilter is disabled between the persisted hint and the move operation, then it will place the instance somewhere not respecting this hint
14:39:00 <bauzas> I know it's a corner case tho
14:39:06 <bauzas> so I don't want to nitpick on that
14:39:08 <mriedem> so i'm hearing bug
14:39:15 <johnthetubaguy> I would rather have a migration hint ("trust me, that's moving")
14:39:16 <sdague> can we specify > 1 host on migrate?
14:39:23 <mriedem> no
14:39:24 <bauzas> I'm just explaining that if users trust those hints, then they'll expect some placement
14:39:26 <johnthetubaguy> sdague: no, there was a spec for that
14:39:35 <sdague> mriedem: so I'd be happier if we did that instead
14:39:45 <johnthetubaguy> sdague: for the workload rebalancing folks I think
14:39:49 <mriedem> we'd have to dig up the spec, i thought it was edleafe's
14:39:53 <bauzas> yeah
14:39:55 <mriedem> yes, watcher wanted to send a list of hosts
14:40:14 <sdague> because then the answer would be, if you want to move hosts with affinity references, you have to issue a single migrate command
14:40:19 <bauzas> but providing a target means you're an operator
14:40:28 <mriedem> or a service
14:40:29 <mriedem> like watcher
14:40:35 <johnthetubaguy> sdague: you mean multiple servers?
14:40:35 <bauzas> it's very different from exposing an hint to the end-user
14:40:41 <johnthetubaguy> sdague: I mean multiple VMs?
14:40:48 <mriedem> adjust the policy so the watcher service role can perform migrations
14:40:49 <sdague> johnthetubaguy: yeah, sorry, I don't mean nodes
14:40:52 <sdague> I mean instances
14:40:52 <mriedem> i'm sure they already do that
14:41:01 <sdague> it's multiple instances that's the issue
14:41:08 <johnthetubaguy> sdague: ah, cool, so I have been thinking the same thing for that edge case
14:41:16 <sdague> so instead of: migrate A; migrate B
14:41:21 <sdague> it's migrate A B
14:41:26 <johnthetubaguy> sdague: or --move-my-sticky-friends-too
14:41:27 <sdague> it's migrate B A,
14:41:38 <bauzas> edleafe's spec wasn't about that
14:41:40 <sdague> johnthetubaguy: no, I think you specify all the instances
14:41:48 <bauzas> it was about migrate instance1 hostA,hostB
14:41:49 <sdague> nothing gets moved that you don't specify
14:41:59 <mriedem> migrate all instances with tag=foo :)
14:42:01 <edleafe> bauzas: right. It was about providing a list of potential migration targets
14:42:03 <sdague> and if something is going to fail to move, it tells you that you also have to specify X
14:42:06 <mriedem> migrate entire server groups!
14:42:06 <johnthetubaguy> sdague: yeah, that's cleaner, just throwing it out there
14:42:13 <johnthetubaguy> mriedem: tempting
14:42:20 <bauzas> oh man
14:42:24 <sdague> mriedem: it's honestly really the thing that's being asked for
14:42:29 <mriedem> these are server groups right?
14:42:34 <mriedem> and you move them as a group
14:42:37 <johnthetubaguy> they should be, yeah
14:42:42 <sdague> I'd be fine if we said they had to be
14:42:43 <mriedem> the alternative is tag them
14:42:44 <johnthetubaguy> find all these folks a new place, I like that
14:42:57 <mriedem> and move everything with the same tag, but that could be messy
14:43:05 <johnthetubaguy> mriedem: I think that's what jaypipes would like us to move to, instead of server groups
14:43:05 <mriedem> it seems like migrating server groups is more what this is for
14:43:11 <mriedem> johnthetubaguy: yeah i know
14:43:16 <bauzas> so we have a host evacuate command that doesn't perform very well
14:43:23 <johnthetubaguy> mriedem: yeah, just being explicit
14:43:34 <bauzas> are we talking about migrating a couple of instances and orchestrating their placement at once?
14:43:44 <sdague> anyway, I would much rather go down this path than push the hints back to the user and have them write workaround code to build this
14:43:45 <mriedem> we aren't talking about evacuate
14:43:52 <bauzas> I know
14:43:54 <johnthetubaguy> bauzas: I think you would want to claim a new spot for them all, then move them
14:44:11 <mriedem> but yes this is orchestrating a move of a group of servers
14:44:15 <mriedem> how they are grouped, tbd
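A purely hypothetical sketch of the "migrate them together" idea being floated here; this is not an existing or proposed nova API, just the shape of the request under discussion:

    # One request names every affinity-linked instance so the scheduler can
    # claim destinations for the whole set atomically (all-or-none).
    migrate_group = {
        "migrate": {
            "instances": ["uuid-A", "uuid-B"],  # e.g. a whole server group
            # no destination host: let the scheduler place the set
        },
    }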
14:44:30 <johnthetubaguy> so crazy idea...
14:44:31 <mriedem> and would depend on claims in the scheduler first, yes?
14:44:37 <bauzas> johnthetubaguy: well, it's an all-or-none placement logic, but I see your point
14:45:02 <johnthetubaguy> how about you acquire claims from several places, so that the system then allows them all to move individually?
14:45:29 <bauzas> well
14:45:44 <bauzas> some papers say it's suboptimal to do group placement, but we could try
14:45:47 <johnthetubaguy> I know that's more complicated, but I think it allows people to dry-run their plans
14:45:50 <gibi> for me it is simple: option a) we let the user know the hints and they orchestrate the move accordingly, or b) provide a call in nova that does the orchestration
14:46:16 <johnthetubaguy> so today, you just force the host, this is about making that better
14:46:21 <mriedem> gibi: (a) leaves the burden for the move to the user correct?
14:46:25 <mriedem> s/user/robot/
14:46:26 <gibi> mriedem: yes
14:46:32 <gibi> and yes
14:46:33 <sdague> mriedem: and it bakes that contract in
14:46:35 <gibi> it is cheap on nova
14:46:51 <gibi> (b) is nicer but that is expensive on nova
14:47:08 <johnthetubaguy> I don't like slamming our users under a bus on that one
14:47:17 <mriedem> well,
14:47:25 <sdague> my concern is hints structure being part of the API in nova forever, because I feel like it's honestly one of those places we've specifically not gone about standardizing
14:47:40 <mriedem> i don't like to either, but if it means we say we're going to do (b) but never actually do it, we aren't helping users either
14:47:44 <sdague> and I get concerned when we say "oh, we'll just start saying whatever is in tree is THE ONE TRUE WAY"
14:47:46 <bauzas> sdague: which I think is a valid concern
14:48:24 <bauzas> couldn't we signal that those hinsts are best-effort contract ?
14:48:27 <mriedem> gibi: can someone spec out the (b) idea?
14:48:31 <johnthetubaguy> so the middle ground is we have very clear standard and custom scheduler hints, so we are explicit?
14:48:41 <johnthetubaguy> yeah, I think next step is write this up in a spec
14:48:52 <sdague> mriedem: while I get that ... I also don't want to say "b might take us some time, so let's do the cheap-to-implement way"
14:49:05 <mriedem> i know, can't win either way
14:49:10 <gibi> mriedem, johnthetubaguy: I can't promise this will be done soon, as it is quite a complex matter. jaypipes had a similar spec
14:49:29 <gibi> https://review.openstack.org/#/c/183837/4/specs/liberty/approved/generic-scheduling-policies.rst
14:49:35 <sdague> gibi: well, the question is the external interface, and how robust it is
14:49:40 <johnthetubaguy> gibi: some rough notes that start to collect the options in a backlog spec would be great
14:49:46 <johnthetubaguy> +1 for focus on the API
14:49:47 <mriedem> is it just the affinity/anti-affinity hints you need?
14:50:03 <gibi> mriedem: same_host, different_host
14:50:06 <mriedem> would it be terrible to just scope it to those to start?
14:50:15 <mriedem> no custom hints
14:50:15 <johnthetubaguy> well, lets write down this specific use case, and a few options, thats a great start
14:50:22 <bauzas> gibi: same_host is now part of the affinity filter, right?
14:50:45 <gibi> bauzas: not sure without looking at the code
14:50:45 <mriedem> johnthetubaguy: yeah a backlog spec would be good to get the use case and alternatives at a high level, and pros/cons of each approach
14:50:48 <mriedem> gibi: ^
14:50:55 <johnthetubaguy> yeah
14:51:08 <gibi> mriedem, johnthetubaguy: OK, let's try that
14:51:32 <johnthetubaguy> folks want to build instances that don't all die together, or that are kinda close, so I think it's important to get this right
14:51:53 <johnthetubaguy> I mean, I see the problem becoming more important over time
14:52:16 <mriedem> we're also weighed down by other missions to mars right now,
14:52:20 <mriedem> so again, i am overwhelmed
14:52:24 <mriedem> but let's end the meeting :)
14:52:31 <gibi> thanks guys
14:52:34 <mriedem> #endmeeting