14:00:07 <mriedem> #startmeeting nova
14:00:10 <openstack> Meeting started Thu Apr  6 14:00:07 2017 UTC and is due to finish in 60 minutes.  The chair is mriedem. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:14 <openstack> The meeting name has been set to 'nova'
14:00:23 <mriedem> will give it a few minutes
14:00:31 <dansmith> o/
14:00:32 <takashin> o/
14:00:36 <efried> o/
14:00:42 <macsz> O/
14:00:58 <edleafe> \o
14:01:03 <jimbaker> o/
14:01:43 <raj_singh> o/
14:02:04 <mriedem> ok let's do it
14:02:06 <mriedem> #link agenda https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
14:02:19 <mriedem> #topic release news
14:02:24 <mriedem> #link Pike release schedule: https://wiki.openstack.org/wiki/Nova/Pike_Release_Schedule
14:02:38 <mriedem> #info p1 milestone is in 1 week (April 13)
14:02:45 <mriedem> #info Blueprints: 64 targeted, 49 approved, 1 completed
14:02:53 <bauzas> \o
14:02:55 <mriedem> #info Open spec reviews: 61 (down 48 from last week)
14:03:17 <mriedem> the spec review sprint cleaned up a bunch of old cruft from the open review backlog it looks like
14:03:18 <mriedem> which is nice
14:03:31 <mriedem> any release questions?
14:03:40 <mriedem> #topic bugs
14:03:47 <mriedem> #help Our backlog of new untriaged bugs is high. Need help with triage of new bugs.
14:03:57 <mriedem> we've got >80 untriaged bugs
14:04:03 <bauzas> wow
14:04:09 <mriedem> and it's all bauzas' fault
14:04:15 <bauzas> blame me
14:04:22 <mriedem> nothing listed as critical
14:04:29 <mriedem> gate status is ok
14:04:33 <mriedem> 3rd party ci status is ok
14:04:43 <mriedem> anything for bugs?
14:04:54 <mriedem> #topic reminders
14:05:01 <mriedem> #info Spec freeze is April 13 (1 week)
14:05:12 <mriedem> #link Pike Review Priorities etherpad: https://etherpad.openstack.org/p/pike-nova-priorities-tracking
14:05:27 <mriedem> please keep ^ up to date
14:05:36 <mriedem> #link Forum planning: https://wiki.openstack.org/wiki/Forum/Boston2017
14:05:42 <mriedem> #link https://etherpad.openstack.org/p/BOS-Nova-brainstorming Forum discussion planning for nova (add your name if you are going)
14:05:45 * edleafe needs to update for scheduler
14:05:47 <mriedem> ^ is a bit stale at this point
14:05:52 <mriedem> #info Currently proposed forum topics: http://forumtopics.openstack.org/
14:06:04 <mriedem> i see that the forum sessions are already being reviewed and some are being rejected
14:06:15 <mriedem> i expect a lot of that should shake out next week
14:06:24 <mriedem> #info More information on planning: http://lists.openstack.org/pipermail/openstack-dev/2017-March/114703.html
14:06:36 <mriedem> any questions about the forum?
14:06:56 <mriedem> #topic stable branch status
14:07:05 <mriedem> i'm working on getting some ocata regressions backported
14:07:20 <mriedem> we've got one more that needs to be merged on master and then backported, and then release
14:07:43 <mriedem> that is https://review.openstack.org/#/c/453859/ and down
14:07:49 <mriedem> ^ is an ocata regression
14:07:54 <mriedem> once merged, i'll backport and release ocata
14:08:06 <mriedem> bottom 2 have +2s
14:08:10 <mriedem> top patch is trivial
14:08:16 * bauzas on it
14:08:20 <mriedem> no news for stable/newton really
14:08:26 <mriedem> #info Planned EOL for Mitaka is April 10
14:08:39 <mriedem> i haven't heard any communication from tonyb yet about mitaka eol
14:08:42 <mriedem> but it's right around the corner
14:09:01 <mriedem> questions about stable?
14:09:14 <mriedem> #topic subteam highlights
14:09:19 <mriedem> dansmith: cells v2
14:09:34 <dansmith> we had a meeting this week,
14:09:38 <dansmith> discussed the regressions mriedem was mentioning above
14:10:03 <dansmith> I gave a little status on the cellsv2 test patch, and guilted mriedem into picking up the pace on reviewing my series, which worked like a charm
14:10:13 <dansmith> melwitt says she's close to resubmitting the quotas patch
14:10:28 <dansmith> and mriedem/dtp are getting rolling on the api-facing service id->uuid stuff
14:10:37 <dansmith> so lots of good progress and momentum all around
14:10:38 <dansmith> le fin
14:10:44 <mriedem> yay
14:10:46 <mriedem> edleafe: scheduler
14:10:53 <edleafe> Reviewed os-traits status. Agreed to clean up the code by adding the autoimport code.
14:10:56 <edleafe> Had a good long discussion about claims being made in placement, and just where they should be made, and how the workflow should proceed. Agreed not to use any special claim database or token or anything, and just have failed builds delete any allocations for the instance UUID
14:11:01 <edleafe> One concern was that if the scheduler claims resources, what happens when the resource tracker does its periodic update of its view of resources. Will it wipe out the claimed resources? Made a note to investigate this.
14:11:05 <edleafe> Had a strong back-and-forth about returning scheduler hints in the API. No agreement on this, except to continue the discussion on the spec. https://review.openstack.org/#/c/440580/
14:11:09 <edleafe> Discussed the use-local-scheduler spec (https://review.openstack.org/#/c/438936/). No agreement yet on that one
14:11:12 <edleafe> jaypipes got Madonna songs going in everyone's head.
14:11:14 <edleafe> EOM
14:11:53 <mriedem> ok i also left comments in https://review.openstack.org/#/c/440580/
14:12:00 <mriedem> it sounds like sdague would hate it less if it was a subresource
14:12:10 <mriedem> rather than in the full server repr
14:12:29 <sdague> yes, and I thought we'd agreed on that previously
14:12:48 <mriedem> i don't remember an agreement on it, but i don't disagree with it
14:12:52 <mriedem> in fact, i think i agree with it :)
14:12:56 <macsz> http://45.55.105.55:8082/bugs-dashboard.html
14:13:12 <macsz> markus_z made this dashboard, might be helpful in bug triage
14:13:12 <mriedem> macsz: premature linking?
14:13:27 <mriedem> macsz: let's get to that in open discussion
14:14:10 <mriedem> so in general i'm good with a subresource on the hints, and restricting the hints, and also making it more like how we're going to do flavors (comments on that in the spec),
14:14:25 <mriedem> 2 weeks ago we talked about a backlog spec for a high-level idea on moving entire groups of servers
14:14:40 <mriedem> rather than exposing hints so an external system has to sort all of this garbage out, which is terrible ux
14:14:55 <mriedem> but i know if we block the small incremental thing for the big thing, neither will get done
14:14:59 <mriedem> and everyone will be sad
14:15:16 <mriedem> anywho, moving on
14:15:25 <mriedem> tdurakov: are you around for live migration?
14:15:34 <bauzas> mriedem: I think I basically agree with your point about subresource and restricting hints
14:15:35 <raj_singh> no LM meeting this week
14:15:44 <bauzas> (FWIW)
14:15:50 <mriedem> there were two LM specs to review
14:15:57 <mriedem> one was the per-instance timeout, that spec is approved
14:15:59 <mriedem> the other is:
14:16:00 <mriedem> Live migration completion timeout (config part): https://review.openstack.org/#/c/438467/
14:16:13 <mriedem> that's got a +2, just needs another reviewer
14:16:19 <mriedem> or 5
14:16:32 <mriedem> alex_xu: are you around for the api meeting?
14:16:36 <mriedem> *api subteam
14:17:00 <mriedem> guess not, we mostly talked about policy specs
14:17:10 <mriedem> specifically johnthetubaguy remove scope spec
14:17:22 <mriedem> https://review.openstack.org/#/c/433037/
14:17:46 <johnthetubaguy> yeah
14:17:47 <mriedem> we talked some about my spec to remove bdm.device_name from server create and volume attach POST requests, which i've since abandoned
14:18:09 <mriedem> and finally we talked some about a path forward to deprecate os-virtual-interface and replace with os-interface, and eventually get bdm/vif tags out of the rest api
14:18:20 <mriedem> which i need to write a spec for, but i'm all spec'ed out
14:18:32 <macsz> mriedem: my bouncer died for 5 minutes, couldn't see anything so i may be out of sync
14:18:45 <sfinucan> ^ me too :/
14:18:46 <mriedem> macsz: we'll talk about the bugs dashboard in open discussion
14:19:02 <mriedem> gibi: notification highlights?
14:19:22 <mriedem> i think we're missing the gibster
14:19:42 <mriedem> we talked a bit about bdms in payload notifications for searchlight
14:19:57 <mriedem> and agreed to make that config driven for now, since it will eventually be config driven for searchlight anyway
14:20:11 <mriedem> disabled by default so we don't make unnecessary db calls from computes
14:20:31 <mriedem> powervm - efried?
14:20:39 <efried> PowerVM driver blueprint impl progress has stagnated.  Broader team, please review https://review.openstack.org/438119
14:20:51 <efried> In other news, *something* changed underneath us a couple of weeks ago which made the PowerVM code hang while pulling a glance image to an instance's boot disk.  Something to do with eventlets?  A thread blocks open()ing a FIFO - and hangs the entire compute process.  We've worked around it, but if anyone knows more about the root cause of this, please hit me and thorst up after the meeting.
14:21:25 <mriedem> ok
14:21:29 <efried> In conclusion: Broader team, please review https://review.openstack.org/438119
14:21:43 <mriedem> cinder/nova updates
14:22:01 <mriedem> i wasn't in last week's meeting, but lyarwood's vol detach refactor patch is merged
14:22:05 <mriedem> Review focus is on the new detach flow: https://review.openstack.org/#/c/438750/
14:22:18 <mriedem> ^ is all noop right now, by design
14:22:35 <mriedem> #topic stuck reviews
14:22:43 <mriedem> there was nothing in the agenda
14:22:56 <mriedem> #topic open discussion
14:23:05 <mriedem> macsz: did you want to mention something about bugs?
14:23:31 <sfinucan> I have something  - https://review.openstack.org/#/c/450211/
14:23:33 <macsz> wanted to point out that there is the dashboard that markus_z created : http://45.55.105.55:8082/bugs-dashboard.html
14:23:45 <macsz> helps with bug triage
14:23:47 <mriedem> macsz: we also talked about doing a bug spring clean
14:23:49 <macsz> and the queue of those
14:23:50 <mriedem> after p1
14:23:56 <macsz> is getting looong
14:23:57 <macsz> and yes
14:24:04 <mriedem> by accident i attended the bug team meeting this week
14:24:06 <mriedem> the team of 1
14:24:16 <macsz> you made a crowd, sir
14:24:22 <mriedem> and was reminded that in newton, around this time last year, we did a spring clean of launchpad
14:24:30 <mriedem> not fixing bugs,
14:24:35 <mriedem> but just going through cruft in launchpad
14:24:47 <mriedem> i think i'll propose that again, to be done in a couple of weeks
14:24:57 <mriedem> more info in the ML, when it happens
14:25:12 <mriedem> sfinucan: ok what's up
14:25:28 <sfinucan> So I was hoping Jens would be here
14:25:46 * sfinucan doesn't know his nick
14:25:51 <mriedem> fickler?
14:25:59 <mriedem> frickler:
14:26:00 <mriedem> ^
14:26:27 <sfinucan> but long story short, seems I exposed some flaws in how the metadata service exposes network info
14:26:33 <mriedem> this looks much like what dtp has a bp for
14:26:33 <sfinucan> ...with a recent change
14:26:41 <johnthetubaguy> it reminded me about the lack of docs we have around the metadata and config drive "API"
14:26:51 <sfinucan> ah, so yeah - I was hoping for some context
14:26:54 <mriedem> johnthetubaguy: yes, very much
14:27:09 <mriedem> i had the same concern in https://review.openstack.org/#/c/400883/
14:27:26 <sfinucan> he's talking about "making a new metadata version" in that review. Curious to know (a) what that means and (b) if it's necessary
14:27:48 <sfinucan> ...and therefore if it should block that fix
14:27:52 <johnthetubaguy> the URLs get versioned a bit, one per release
14:28:06 <johnthetubaguy> but it's not really very clear
14:28:11 <mriedem> we don't bump the version in the meta service if there are no changes
14:28:14 <artom> Presumably https://github.com/openstack/nova/blob/master/nova/api/metadata/base.py#L50-L59
14:28:19 <artom> For the versions
14:28:21 <johnthetubaguy> mriedem: +1 true
14:28:29 <mriedem> artom: no those are the aws versions
14:28:31 <mriedem> ec2
14:28:36 <mriedem> artom: https://github.com/openstack/nova/blob/master/nova/api/metadata/base.py#L76
14:28:49 <artom> mriedem, doh, right
14:29:16 <mriedem> sfinucan: so i'm sure we do a crap job of verifying or testing this,
14:29:38 <mriedem> but if for example we add new things to the metadata payload, or change the response in one of the existing keys, then yeah it would be a version bump
14:29:48 <johnthetubaguy> I was wondering how hard it would be to do something like we have with the samples tests for the API
14:30:01 <mriedem> johnthetubaguy: we really probably should have something like that
14:30:07 <mriedem> with a fixture or something
14:30:13 <sfinucan> mriedem: Right, and do we need to provide the ability to get the older version a la microversions?
14:30:14 <johnthetubaguy> I just feel I am flying blind in there, which is a worry
14:30:19 <mriedem> because if you change the format of network_data.json it would be hard to tell
14:30:26 <mriedem> sfinucan: i think we do
14:30:34 <johnthetubaguy> sfinucan: we hand out all versions today, its in the URL or directory I think
14:30:41 <artom> johnthetubaguy, it's been bouncing around in the back of my head to overhaul metadata api, with proper versioning like the API microversions
14:30:52 <artom> johnthetubaguy, yeah, I think that's correct
14:30:58 <artom> "version" is in the URL as a directory
14:31:00 <johnthetubaguy> artom: it's not that simple, because of config drive, but better testing would help a ton
14:31:01 <mriedem> johnthetubaguy: we hand out all versions for config drive i think
14:31:08 <johnthetubaguy> mriedem: yeah, exactly that
14:31:10 <mriedem> but you can request a specific version via the metadata api
14:31:29 <johnthetubaguy> yeah, in the URL or directory path, I believe
14:32:19 <mriedem> ok, anyway, sfinucan does that answer your question?
14:32:37 <sfinucan> Sort of. We can revisit on openstack-nova later
14:33:08 <johnthetubaguy> interesting project for someone there, getting the testing sorted
14:33:30 <mriedem> i believe we now have a fixture for running actual requests against the metadata api
14:33:37 <artom> johnthetubaguy, I probably have the bandwidth for that this cycle
14:33:38 <mriedem> sdague and mikal wrote that for the vendordata v2 stuff
14:33:49 <artom> Unless you have someone else in mind
14:34:00 <sfinucan> mriedem: Link?
14:34:03 <mriedem> so in functional tests we could write some tests to hit each version and assert what we get back
14:34:07 <mriedem> sfinucan: i can dig it up later
14:34:13 <sfinucan> artom: I can help with that too
14:34:16 <sfinucan> mriedem: cool cool
14:34:21 <mriedem> we'd have to stub out the backend data/content of course
14:34:29 <mriedem> but it could test the actual route path logic
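[editor's note: the functional-test idea discussed above — stub the backend data, then exercise the real route logic for each advertised metadata version — could be sketched roughly like this. This is a hypothetical, self-contained illustration only; the version strings, payloads, and `get_metadata` helper are stand-ins, not Nova's actual fixture or data.]

```python
# Hypothetical sketch of version-aware metadata routing tests, in the
# spirit of the discussion above: stubbed backend data, real path logic.
# Versions and payload keys are illustrative stand-ins, not Nova's own.

# Stand-in list of supported metadata versions (newest dated version last
# before the "latest" alias).
SUPPORTED_VERSIONS = ["2012-08-10", "2013-04-04", "2016-06-30", "latest"]

# Stubbed per-version payloads; adding or changing a key in the response
# is exactly the kind of change that would require a new version entry.
PAYLOADS = {
    "2012-08-10": {"uuid": "fake-uuid"},
    "2013-04-04": {"uuid": "fake-uuid", "hostname": "fake-host"},
    "2016-06-30": {"uuid": "fake-uuid", "hostname": "fake-host",
                   "devices": []},
}

def get_metadata(path):
    """Resolve /openstack/<version>/meta_data.json against the stub data."""
    parts = path.strip("/").split("/")
    if len(parts) != 3 or parts[0] != "openstack" or parts[2] != "meta_data.json":
        return 404, None
    version = parts[1]
    if version == "latest":
        # "latest" is an alias for the newest dated version.
        version = SUPPORTED_VERSIONS[-2]
    if version not in PAYLOADS:
        return 404, None
    return 200, PAYLOADS[version]

# "Functional test": hit every advertised version and assert on the
# response shape, so a format change to the payload fails loudly.
for v in SUPPORTED_VERSIONS:
    status, data = get_metadata("/openstack/%s/meta_data.json" % v)
    assert status == 200, v
    assert "uuid" in data, v
```

A real version of this would drive an in-process metadata API (e.g. via a test fixture) rather than a local function, but the assertion pattern — one request per version, checking both routing and payload shape — is the part the discussion was after.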
14:34:45 <mriedem> ok anything else?
14:35:02 <mriedem> sounds like no, ok, thanks everyone
14:35:03 <mriedem> #endmeeting