14:00:34 <johnthetubaguy> #startmeeting nova
14:00:35 <openstack> Meeting started Thu Jun 12 14:00:34 2014 UTC and is due to finish in 60 minutes.  The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:38 <openstack> The meeting name has been set to 'nova'
14:00:50 <johnthetubaguy> hello everyone
14:00:53 <funzo> good morning
14:00:57 <n0ano> nope, nobody
14:00:58 <mriedem> hi
14:01:02 <cyeoh> hi
14:01:05 <gibi> hello
14:01:10 <alaski> hi
14:01:17 * johnthetubaguy attempts australian accent
14:01:19 <danpb> afternoon
14:01:31 <johnthetubaguy> #topic Juno mid-cycle meet up
14:01:33 <mzoeller> hi folks
14:01:33 <PhilD> G'day Blue
14:01:42 <johnthetubaguy> so this is mostly informational
14:01:44 <n0ano> johnthetubaguy, just say g'day a lot
14:01:55 <johnthetubaguy> #link https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint
14:02:03 <johnthetubaguy> n0ano: sure :)
14:02:14 <johnthetubaguy> anyone got anything about the meet up
14:02:20 * danpb will sadly miss the mid-cycle meetup due to holiday plans
14:02:21 <johnthetubaguy> not seen an etherpad started for topics yet
14:02:47 <mriedem> johnthetubaguy: there isn't an etherpad but the wiki has some things under 'nova specifics'
14:02:48 <johnthetubaguy> #link https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803
14:02:52 <johnthetubaguy> is the registration
14:03:19 <johnthetubaguy> #link https://etherpad.openstack.org/p/juno-mid-cycle-meetup
14:03:26 <johnthetubaguy> well I created one, lets put topics on there I guess
14:03:53 <johnthetubaguy> any more for any more?
14:04:11 <johnthetubaguy> #topic Juno-1
14:04:24 <johnthetubaguy> well again, this is just to say its released, effectively
14:04:31 <johnthetubaguy> #link https://github.com/openstack/nova/releases/tag/2014.2.b1
14:04:36 <n0ano> sorry, back to meetup, there's a pad at https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint
14:04:47 <n0ano> or a link to one anyway
14:05:07 <mriedem> i'll put the etherpad link in the wiki
14:05:10 <johnthetubaguy> n0ano: just added that, but yes, feel free to add to it :)
14:05:22 <johnthetubaguy> ah, oops
14:05:26 <johnthetubaguy> two links
14:05:28 <johnthetubaguy> never mind
14:05:35 <johnthetubaguy> anyways, any more on juno-1?
14:05:38 <johnthetubaguy> hopefully not
14:05:40 <mriedem> nope
14:05:45 <johnthetubaguy> #topic Juno-2
14:06:00 <johnthetubaguy> OK, so we are unfreezing blueprint approvals now
14:06:20 <baoli> johnthetubaguy: Hi
14:06:26 <johnthetubaguy> #info it's time to start reviewing blueprints again
14:06:33 <baoli> I have a pending review https://review.openstack.org/#/c/86606/
14:06:50 <johnthetubaguy> baoli: going to push those to the Open discussion if thats OK
14:06:57 <baoli> sure
14:06:58 <johnthetubaguy> just thinking process wise for now, any issues?
14:07:10 <johnthetubaguy> we really didn't get much into juno-1
14:07:19 <johnthetubaguy> so we kinda need to fix that in juno-2
14:07:21 <mriedem> johnthetubaguy: i felt like we had 2 weeks last week for juno-1 bp reviews
14:07:35 <mriedem> like june 20-something was the deadline
14:07:38 <mriedem> but might be confused
14:08:11 <johnthetubaguy> hmm, OK, some miscommunication there, sorry about that
14:08:18 <mriedem> anyway, gate has been bad for the last week
14:08:19 <johnthetubaguy> two weeks ago at this time we had two weeks ish
14:08:22 <mriedem> ha
14:08:30 <mriedem> personally i'm fine with fewer approved blueprints
14:08:30 <johnthetubaguy> but the gate has hampered the push a bit, for sure
14:08:38 <johnthetubaguy> we should get more people to help with that, if possible
14:08:42 <mriedem> we have ~1500 bugs or something crazy
14:08:45 <cyeoh> be nice to get a priority on spec reviews, otherwise we end up with a worse merge crunch for juno-2 which will then flow on to juno-3
14:08:56 <johnthetubaguy> mriedem: yeah more bugs was good
14:08:59 <johnthetubaguy> cyeoh: agreed
14:09:07 <mriedem> johnthetubaguy: i mean a backlog
14:09:08 <johnthetubaguy> I think we should have a blueprint review push this next week
14:09:11 <mriedem> so != good :)
14:09:29 <johnthetubaguy> mriedem: oh yeah, I misread, lol
14:09:37 <mriedem> so we have a shitload of bugs
14:09:45 <mriedem> we have reverts for gate issues on a seemingly daily basis
14:09:53 <johnthetubaguy> I meant we fixed more bugs than blueprints, but not sure we moved the needle too far
14:09:54 <mriedem> poor attendance at the bug day last week
14:10:16 <johnthetubaguy> yeah, I had a bad clash with other stuff myself, which was screwy
14:10:18 <mriedem> i feel like our priorities aren't in the right place...so adding spec reviews and features on top of all that is counter productive
14:10:28 <cyeoh> mriedem: do we have lots of reverts because we're simply merging stuff we shouldn't have in the first place? (sorry I haven't been able to keep up with them)
14:10:40 <mriedem> cyeoh: some yes - we'll get to that in the bug discussion
14:10:53 <johnthetubaguy> yeah… lets sit on that one for now
14:11:12 <mriedem> anyway we can move on
14:11:14 * dansmith slinks in late
14:11:14 <johnthetubaguy> mriedem: so my thinking is we try and get through the blueprint backlog
14:11:33 <johnthetubaguy> then we draw a soft line in the sand, to say no more, we have all we can do for juno
14:11:38 <johnthetubaguy> lets do more bugs
14:11:56 <mriedem> what is 'getting through the blueprint backlog' though?
14:12:02 <mriedem> the specs that are up for juno?
14:12:20 <johnthetubaguy> mriedem: I think it's more about deciding which blueprints currently up for review we really need
14:12:29 <johnthetubaguy> to get the list of high and medium priorities sorted
14:12:40 <johnthetubaguy> does that make any sense?
14:12:53 <johnthetubaguy> then we kinda shut the floodgates, so we can get some bugs fixed
14:12:56 <mriedem> sort of yeah, i know what you're saying
14:13:41 <johnthetubaguy> I am thinking of a fixed size funnel, and wanting to leave room for bug fixes, and structural work
14:14:19 <johnthetubaguy> one last thing, any blueprints people don't want to leave till juno-3 but we need in juno
14:14:24 <johnthetubaguy> its worth a think
14:14:42 <johnthetubaguy> stuff that will be too risky for juno-3 but we want it
14:14:53 <johnthetubaguy> some of the scheduler rework feels like that
14:14:54 <PhilD> Perhaps we need to have a cut-off date for new BPs - and then prioritise the ones that are in nova-specs at that point (both for spec review and implementation)
14:15:13 <mriedem> PhilD: agree with that
14:15:21 <johnthetubaguy> PhilD: yeah, I think that is what I am saying, just not very well
14:15:25 <dansmith> like -2 the ones currently proposed with "we're not going to spend time on this, sorry" ?
14:15:26 <mriedem> basically if you don't have a spec filed by mid juno-2 you're out
14:15:34 <johnthetubaguy> dansmith: +1
14:15:36 <dansmith> the ones that we don't want to prioritize I mean
14:15:41 <mriedem> then prioritize the rest that are available for review
14:15:45 <johnthetubaguy> mriedem: +1
14:15:52 <mriedem> otherwise they'll just keep coming
14:15:56 <johnthetubaguy> so..
14:15:58 <dansmith> I think it's important that we look at what we have, prioritize, and not just assume anything that is proposed gets the same amount of attention
14:16:05 <johnthetubaguy> 3rd July sounds good right?
14:16:17 <danpb> well we'd want some flexibility in there  IMHO
14:16:26 <johnthetubaguy> yeah, its that prioritise, we need to do that better
14:16:27 <danpb> because sometimes blueprints are really very trivial
14:16:28 <mriedem> danpb: sure there can be exceptions
14:16:37 <johnthetubaguy> danpb: yeah, always exceptions possible
14:16:48 <mriedem> trivial low-risk bp's or late in the release changes, like nova/neutron events for example
14:16:54 <devananda> johnthetubaguy: hi! I'd like to point out the ironic driver spec which has been up for ~3 weeks
14:17:16 <PhilD> We could say that after a date you need to start putting entries into the "K" directory for specs - maybe moving things there rather than -2 would be better too
14:17:17 <johnthetubaguy> devananda: we stopped reviewing for two weeks, to get juno-1 sorted, apologies
14:17:27 <devananda> johnthetubaguy: let's not leave it until the last minute :)
14:17:31 <johnthetubaguy> devananda: yours is in that high priority list in your head!
14:17:47 <johnthetubaguy> PhilD: thats fair
14:17:54 <johnthetubaguy> so… the proposal
14:18:09 <johnthetubaguy> July 3rd, no new proposals for juno specs
14:18:23 <johnthetubaguy> allow proposals for K, but low priority for reviews
14:18:31 <johnthetubaguy> exceptions to be raised in nova-meeting for review
14:18:35 <devananda> johnthetubaguy: "in my head"?
14:18:35 <johnthetubaguy> does that work?
14:19:02 <PhilD> Works for me - and also identify specs currently in juno that should be moved to K?
14:19:08 <johnthetubaguy> devananda: yeah, there is an etherpad of priorities from the summit, but its a bit neglected right now
14:19:24 <johnthetubaguy> PhilD: thats a point...
14:19:38 <cyeoh> that sounds fine to me, but definitely want an email to openstack-dev so people know about the deadline well in advance
14:19:59 <johnthetubaguy> By July 10, nova-drivers to agree high and medium priority items, and stuff that must not be in Juno as its too late
14:20:19 <PhilD> Feels that it would be better to move a spec to K rather than -2 it if we mean "not yet".
14:20:19 <johnthetubaguy> cyeoh: +1
14:20:40 <johnthetubaguy> PhilD: well, -2 till you re-propose this for K
14:20:41 <danpb> yeah, "-2" is a very negative thing to say to a contributor and should be avoided unless we don't want it ever
14:20:56 <yjiang51> PhilD: I think -2 means no forever, while moving to K means next cycle.
14:21:06 <yjiang51> danpb: +1
14:21:08 <PhilD> Is that "nova-drivers propose a list that we confirm in a nova meeting"?
14:21:20 <johnthetubaguy> PhilD: yeah, thats fair
14:21:28 <mriedem> or just FFE
14:21:33 <mriedem> seems like the same process
14:21:34 <PhilD> philthefairguy
14:21:39 <mriedem> nova-drivers say they want to defer
14:21:47 <mriedem> if you have a strong case for keeping it in juno, FFE
14:21:49 <johnthetubaguy> #action johnthetubaguy to send process for Juno blueprints to dev list for review
14:22:00 <mriedem> then PTL decides i guess - with core team backing for review
14:22:25 <mriedem> we're really just moving the FFE thing way to the left
14:22:41 <johnthetubaguy> well, either way, shout up if you feel like we have the wrong end of the stick
14:23:00 <mriedem> let them shout in the ML, let's move on :)
14:23:13 <johnthetubaguy> yeah
14:23:20 <johnthetubaguy> #topic bugs
14:23:20 <PhilD> Well FF for specs being earlier than FF for implementation makes sense to me
14:24:04 <johnthetubaguy> OK, not seen a hot bug list, but we have a few on the meeting agenda from mikal
14:24:04 <mriedem> tjones doesn't appear to be around
14:24:23 <johnthetubaguy> http://lists.openstack.org/pipermail/openstack-dev/2014-June/037304.html
14:24:31 <johnthetubaguy> lp1328694
14:24:35 <mriedem> that's mine
14:24:38 <mriedem> oh goody
14:24:45 <mriedem> so when we talk about things that merged which shouldn't have, ^
14:24:53 <mriedem> that was a feature that got in via bug report
14:25:00 <johnthetubaguy> crap
14:25:12 <mriedem> api and db api changes, cli changes, doc impacts, potential performance impacts, etc
14:25:32 <mriedem> the question now is do we fix the bug where ceilometer is spamming the n-api logs,
14:25:43 <mriedem> or do we revert the nova change and make this go through nova-specs
14:25:45 <devananda> johnthetubaguy: I don't see a link in scrollback to the priorities etherpad you mention, or in the list of Juno Summit etherpads. would you mind sharing that?
14:26:15 <johnthetubaguy> devananda: I will have to dig that up, was a few nova meetings ago
14:26:23 <cyeoh> mriedem: how long ago did the api change merge?
14:26:29 <johnthetubaguy> mriedem: I am tempted to say revert and thinking it though
14:26:29 <mriedem> cyeoh: couple weeks
14:26:38 <devananda> johnthetubaguy: ack, thanks for the pointer. i'll look in the archives
14:26:41 <mriedem> cyeoh: https://review.openstack.org/#/c/81429/
14:27:00 <mriedem> merged on 5/22
14:27:04 <mriedem> cli change was after that
14:27:10 <mriedem> and ceilometer change after that which exposed the bug we have now
14:27:17 <mriedem> introduced by the api change above
14:27:20 <leifz> revert if it's after juno start?
14:27:33 <mriedem> my bigger concern is the polling
14:27:33 <johnthetubaguy> leifz: people deploy off trunk though, its not always that simple
14:27:47 <johnthetubaguy> PhilD: what do you reckon about this specific instance?
14:28:10 <johnthetubaguy> it seems like we should revert this and do this properly, but we did just release it in Juno-1
14:28:13 <mriedem> ceilometer is hitting the nova db and api server every time it polls on all servers and all floating ips
14:28:21 <johnthetubaguy> so maybe we revert, and re-spin Juno-1
14:28:28 <mriedem> idk about juno-1
14:28:42 <dansmith> johnthetubaguy: what does that mean?
14:28:46 <mriedem> the only consumer is ceilometer
14:28:47 <dansmith> johnthetubaguy: "respin j1"
14:29:03 <johnthetubaguy> dansmith: I was just thinking about that too, it doesn't really mean anything :(
14:29:07 <mriedem> :)
14:29:08 <johnthetubaguy> remove the tag?
14:29:11 <mriedem> no
14:29:14 <mriedem> not worth that
14:29:15 <PhilD> Sorry - got distracted.  What was the question ?
14:29:18 <dansmith> johnthetubaguy: okay, I was wondering if you had a delorean I didn't know about :)
14:29:35 <mriedem> delorean won't fit the tuba
14:29:35 <johnthetubaguy> dansmith: I have a lot of junk in my loft, but sadly no :(
14:29:40 <dansmith> johnthetubaguy: heh
14:29:43 <johnthetubaguy> mriedem: true, true
14:29:47 <johnthetubaguy> anyway...
14:29:53 <johnthetubaguy> to revert or not to revert
14:29:55 <mriedem> the question is if we revert this https://review.openstack.org/#/c/81429/
14:29:56 <johnthetubaguy> yuck
14:30:01 <mriedem> along with the related novaclient change
14:30:08 <mriedem> and then make this go through bp review process
14:30:08 <dansmith> If it's a small window,
14:30:12 <dansmith> and we know it's bad,
14:30:15 <dansmith> I'd rather revert ASAP
14:30:21 <johnthetubaguy> yeah, +1
14:30:26 <dansmith> especially if it was a special interface to be consumed by a machine and not a user
14:30:27 <cyeoh> +1 to revert
14:30:38 <johnthetubaguy> OK, anyone against a revert?
14:30:39 <leifz> 20 day window is small in my book +1
14:30:46 <mriedem> would be good if some of the nova core team will read the related ML and respond with opinions there
14:30:46 <johnthetubaguy> mriedem: are you cool to propose the revert?
14:30:48 <mriedem> since that has the details
14:31:16 <PhilD> Revert sounds OK to me
14:31:16 <johnthetubaguy> #help please respond to http://lists.openstack.org/pipermail/openstack-dev/2014-June/037304.html
14:31:18 <mriedem> johnthetubaguy: i'm more than cool, but want informed consensus first, i.e. responses in the ML so i know people read it
14:31:31 <johnthetubaguy> mriedem: yeah, totally
14:31:41 <johnthetubaguy> so, lp1323658
14:31:41 <mriedem> i am more or less worried about allowing precedent for ceilometer to change nova apis for its polling needs
14:31:53 <johnthetubaguy> http://lists.openstack.org/pipermail/openstack-dev/2014-June/037221.html
14:32:00 <cyeoh> and just a reminder to reviewers that any api change even if its backwards compatible *has* to go through nova-specs
14:32:02 <johnthetubaguy> ssh timeout bug, a request for nova help
14:32:19 <johnthetubaguy> cyeoh: agreed, maybe I should send an email about that
14:32:42 <johnthetubaguy> #action johnthetubaguy to ensure we restate that all api changes need a nova-spec
14:32:53 <johnthetubaguy> so about the ssh timeout help
14:33:06 <johnthetubaguy> was that neutron and nova-network?
14:33:44 <johnthetubaguy> #help need someone to help with lp 1323658 as it would help the gate a lot
14:33:46 <mriedem> the email from kyle was for neutron
14:33:46 <uvirtbot> Launchpad bug 1323658 in nova "Nova resize/restart results in guest ending up in inconsistent state" [Undecided,New] https://launchpad.net/bugs/1323658
14:34:00 <mriedem> it's a set of neutron scenario tests
14:34:16 <mriedem> sounds like a problem when the instance goes through resize/restart
14:34:20 <johnthetubaguy> mriedem: yeah, just with it being nova side, I was curious, but I guess we need to dig
14:34:22 <johnthetubaguy> yeah
14:34:30 <mriedem> there was a separate gate blocker on ssh timeouts with nova-network
14:34:43 <mriedem> we reverted a tempest change on monday morning to get past that, but the bug is still open
14:34:45 <johnthetubaguy> ah, thats probably why I am confused
14:34:53 <mriedem> we're thinking there is a leak in nova-network somewhere
14:35:06 <mriedem> i'm suspicious of the ec2 3rd party tempest tests and/or the ec2 api
14:35:16 <mriedem> given those don't get much love and run concurrently with the scenario tests that were failing
14:35:24 <PhilD> Could the bug here be that stop/resize do a power off that could lead to data corruption?
14:35:44 <johnthetubaguy> PhilD: good sell for your blueprint there, but yeah, you have a point
14:35:45 <PhilD> (I have that fix pending to do a controlled shutdown instead)
14:36:08 <mriedem> PhilD: not sure, could be that networking isn't associated correctly when the instance comes back up
14:36:10 <johnthetubaguy> fails to come back up would mean no ssh
14:36:21 <johnthetubaguy> they suggest no console output
14:36:25 <johnthetubaguy> like the VM failed to boot
14:36:25 <PhilD> If it's reproducible it would be easy for someone to pull in my proposed fix and see if it helps
14:36:54 <johnthetubaguy> PhilD: well it's kinda flaky rather than always failing, at least that was my impression, but it's worth a whirl
14:37:03 <johnthetubaguy> oh wait
14:37:10 <dansmith> mriedem: what does leak mean here, leaking floating ips or something?
14:37:17 <johnthetubaguy> were we going to skip the graceful shutdown in the gate, because it gets too slow?
14:37:31 <mriedem> dansmith: yeah i think so
14:37:44 * johnthetubaguy is worried about the time, hoped to start sub teams at half past
14:37:55 <mriedem> anyway, the nova-network ssh timeout bug is discussed here http://lists.openstack.org/pipermail/openstack-dev/2014-June/037002.html
14:37:59 <johnthetubaguy> yeah
14:38:02 <PhilD> With the new fix it only adds ~5 minutes to the gate
14:38:03 <mriedem> mikal has a patch to add more trace logging to nova-network
14:38:21 <johnthetubaguy> mriedem: did you say there was something else you wanted to cover here?
14:38:26 <mriedem> but we seemed to be hitting the problem around 250 instances
14:38:30 <mriedem> johnthetubaguy: no
14:38:46 <johnthetubaguy> mriedem: cool, we covered it now
14:38:47 <mriedem> johnthetubaguy: oh wait
14:38:53 <johnthetubaguy> sure
14:38:55 <mriedem> from agenda
14:38:56 <mriedem> "spotting bug "themes", like force_config_drive and resize/migrate (mostly due to those not being tested with multi-node hosts in the gate)"
14:38:57 <johnthetubaguy> any more on bugs
14:39:02 <mriedem> tjones had that, i added resize/migrate
14:39:13 <mriedem> i've been tagging resize/migrate bugs even though it's not an official tag
14:39:14 <johnthetubaguy> yeah
14:39:18 <mriedem> but in the hopes that we avoid duplicates
14:39:29 <mriedem> since we don't have multi-node testing in the gate to test migrations
14:39:40 <johnthetubaguy> mriedem: that was my reason to add that idea there, yeah
14:39:51 <mriedem> i'm going to attend the qa meeting today, see what needs to be done for that
14:40:00 <johnthetubaguy> so mikal wanted me to raise ideas about how we improve our bug triage
14:40:07 <johnthetubaguy> basically a call to put thinking caps on
14:40:12 <johnthetubaguy> and send proposals to the ML
14:40:15 <mspreitz> I wouldn't mind some answers on https://bugs.launchpad.net/nova/+bug/1327406
14:40:17 <uvirtbot> Launchpad bug 1327406 in nova "The One And Only network is variously visible" [Undecided,In progress]
14:41:00 <johnthetubaguy> mriedem: as you said, I think picking a bug theme, then chasing it down, looking for duplicates, etc, might be a good way to go
14:41:22 <mriedem> attendance at the bug meeting yesterday wasn't great from what i could tell, it was my first time though :)
14:41:25 <johnthetubaguy> so next time you pick up a bug, or triage a bug, maybe check for duplicates, and maybe we start tagging some common themes
14:41:27 <danpb> johnthetubaguy: we should probably put bug discussion as the last agenda item next time as it expands to consume all available time :-)
14:41:41 <mriedem> yeah let's move on
14:41:49 <johnthetubaguy> mriedem: yeah, I need to make myself free for that again, it's a slightly awkward time for me
14:41:57 <johnthetubaguy> yeah
14:42:06 <mriedem> it's basically tagging, not really triage
14:42:08 <johnthetubaguy> #topic Gate status
14:42:18 <johnthetubaguy> I think we actually covered this in bugs
14:42:20 <dansmith> "sucky"
14:42:22 <dansmith> lets move on
14:42:24 <dansmith> :P
14:42:25 <johnthetubaguy> quite
14:42:26 <mriedem> yeah
14:42:28 <mriedem> it's better
14:42:31 <mriedem> from friday
14:42:39 <johnthetubaguy> help please help, ssh bug is one
14:42:50 <mriedem> http://status.openstack.org/elastic-recheck/
14:42:50 <johnthetubaguy> #topic Sub team reports
14:43:00 * n0ano gantt
14:43:13 <danpb> Libvirt: nothing especially notable to report this week
14:43:42 <johnthetubaguy> xenapi: same, nothing to report, general CI progress
14:43:48 <johnthetubaguy> n0ano: fire away
14:43:51 <cyeoh> nova-api: just looking for nova-spec reviews so we can start merging stuff
14:44:10 <devananda> ironic is finally unblocked after the revert to the HostState.__init__ landed.
14:44:16 <johnthetubaguy> cyeoh: ack, they are in the top priority list in my head too
14:44:19 <baoli> johnthetubaguy: is this right time to talk about sriov?
14:44:26 <n0ano> biggest thing is we've decided to abandon the no-db BP (for now), given recent improvement it's a premature optimization for the moment
14:44:28 <cyeoh> johnthetubaguy: thx!
14:44:43 <devananda> we were able to get in a lot of bug fixes this week. otherwise, just looking for our spec to be reviewed so we can start planning to merge the driver around the time of the mid-cycle
14:44:49 <johnthetubaguy> baoli: yep, I have added you to the queue,
14:44:56 <baoli> thx!
14:45:03 <johnthetubaguy> n0ano: any more on scheduler
14:45:07 <baoli> So the nova-spec for sriov is pending
14:45:11 <johnthetubaguy> I see some progress on the split out, just proving hard
14:45:29 <n0ano> forklift is still WIP (work in progress), hope for some concrete results by juno-2, we'll see
14:45:41 <n0ano> that's about all
14:45:54 <baoli> we also need core reviewers for this bug: https://review.openstack.org/#/c/81954/
14:46:55 <johnthetubaguy> baoli: keep bugging me about your spec, we should get to that in the blueprint push that's coming up
14:47:03 <johnthetubaguy> OK, any more sub team reports?
14:47:26 <baoli> johnthetubaguy: sure.
14:47:36 <johnthetubaguy> #topic Open Discussion
14:47:39 <funzo> I'm working with several folks on getting the nova-docker hypervisor plugin squared away: reaching feature parity and fixing tempest test failures. I'd like to have the plugin be considered for merging back into the nova tree (probably K?). does anyone have thoughts about this?
14:48:15 <danpb> i have a question about our policy wrt changes which help Python3 portability
14:48:18 <danpb> eg https://review.openstack.org/#/c/98573/
14:48:42 <danpb> Joe rejected that saying we shouldn't do python3 port work
14:49:05 <devananda> so i have a question about nova.virt.baremetal. how much do y'all want to deprecate it?
14:49:06 <danpb> IMHO if people wish to contribute patches to help Nova code portability to Python3 we should welcome it
14:49:25 <dansmith> danpb: yeah, that response confuses me
14:49:26 <danpb> so that when our external deps do finally support python3, nova code will mostly be ready
14:49:35 <johnthetubaguy> danpb: yeah, I didn't think we were going to block that, just not put it as high priority
14:50:11 <johnthetubaguy> #action johnthetubaguy to reach out to jogo about https://review.openstack.org/#/c/98573/
14:50:19 <johnthetubaguy> Ok, so we have some agenda items here
14:50:24 <johnthetubaguy> alaski: you added some?
14:50:28 <danpb> anyone know if there's a wiki page mentioning nova's python3 status ?
14:50:36 <danpb> if so i could edit it to make this policy clearer
14:50:46 <johnthetubaguy> danpb: I don't remember one, worth proposing I guess
14:50:53 <alaski> johnthetubaguy: I didn't... maybe something was carried over from last week?
14:50:58 <leifz> danpb: maybe on the code review page as well?
14:51:09 <johnthetubaguy> alaski: oops, probably my bad
14:51:35 <mriedem> danpb: https://wiki.openstack.org/wiki/Python3
14:51:38 <johnthetubaguy> cyeoh: I put down your v3 API specs, but I guess we should discuss those in the review
14:51:41 <alaski> johnthetubaguy: well I can ask about tasks here
14:51:42 <devananda> or is everyone content leaving nova.virt.baremetal in its current semi-frozen state in the tree for ever?
14:52:02 <danpb> mriedem: ah, thanks
14:52:04 <mriedem> devananda: i'd like the nova bugs tagged with baremetal to be triaged/moved if that's the case
14:52:20 <mriedem> devananda: if they aren't critical for nova bm driver, let's move them from nova to ironic
14:52:34 <danpb> devananda: if we leave it in forever, then someone still has to deal with any security issues that may arise with it
14:52:46 <danpb> and its a burden for people when they want to refactor internal code
14:52:54 <devananda> mriedem: so, it /should/ be deprecated and replaced by ironic, but that won't happen if no one reviews it
14:52:55 <johnthetubaguy> yeah, I feel bad about leaving it in tree for ever
14:53:00 <cyeoh> johnthetubaguy: yep I'm happy to discuss that in review - v2.1 on v3 is the priority, but also the API policy check one (by Alex Xu, which is half merged anyway) and tasks (alaski)
14:53:08 <devananda> danpb: exactly
14:53:18 <danpb> so IMHO, we should be aiming to replace it with ironic, not let it rot forever
14:53:19 <mriedem> devananda: that shouldn't prevent the ironic team from triaging those bugs though right?
14:53:21 <devananda> danpb: i don't want to see that. it's unmaintained code. it shouldn't be there
14:53:28 <devananda> mriedem: totally different code.
14:53:36 <devananda> mriedem: the baremetal driver is untested and unmaintained
14:53:52 <mriedem> i hope we have a warning in the driver to that effect...
14:53:56 <devananda> mriedem: with a slight exception -- tripleo was, until recently, using it and, well, filing tons of bugs
14:54:02 <devananda> mriedem: we dont
14:54:05 <cyeoh> johnthetubaguy: also although we've slated microversions for the mid cycle I do think we need to move it along a bit beforehand because tasks is probably a bit dependent on it in practice (if we want to merge it fully in Juno)
14:54:25 <danpb> mriedem: its tagged Tier-3 isn't it
14:54:25 <johnthetubaguy> cyeoh: yeah, that makes sense
14:54:47 <johnthetubaguy> danpb: it never really got deprecated officially yet though
14:54:49 <mriedem> danpb: yeah https://wiki.openstack.org/wiki/HypervisorSupportMatrix#Group_C
14:54:56 <johnthetubaguy> its in limbo
14:55:09 <johnthetubaguy> devananda: so just to be clear, what you want is more reviews?
14:55:09 <devananda> right ^ but there's not an actual Deprecation warning in the logs, is what I mean
14:55:10 <alaski> cyeoh: yes.  at this point I'm still aiming for v3 as is, but am happy to use tasks to make progress on our new direction
14:55:12 <funzo> any thoughts about the nova-docker driver moving back into the tree provided the tempest tests are passing?
14:55:33 <devananda> johnthetubaguy: one thing I want, which we talked *at length* about at the summit, is landing the nova.virt.ironic driver
14:55:42 <devananda> johnthetubaguy: which requires getting reviews on the spec as a first step
14:55:55 <devananda> johnthetubaguy: and, once that's agreeable and approved, then getting reviews on the driver code
14:55:56 <mriedem> devananda: i'm adding one now
14:56:12 <cyeoh> alaski: yep I'm ok with that - either will let us expose it as experimental and get some real world testing on it.
14:56:14 <devananda> johnthetubaguy: which will be a forklift from the ironic tree of several thousand lines.
14:56:17 <devananda> mriedem: ++
14:56:26 <johnthetubaguy> devananda: agreed, just checking, are you proposing that baremetal lives for ever?
14:56:28 <danpb> once Ironic lands, we should definitely add a deprecation warning in Baremetal
14:56:33 <devananda> johnthetubaguy: absolutely not :)
14:56:46 <johnthetubaguy> devananda: sorry, totally misread what you put then, good good
14:56:53 <devananda> johnthetubaguy: was just playing devil's advocate, since that is what would happen if we don't merge nova.virt.ironic (or you guys don't simply kick baremetal out)
14:57:04 <dansmith> three minute warning
14:57:04 <johnthetubaguy> devananda: I was hoping thats true :)
14:57:22 <johnthetubaguy> dansmith: ack
14:57:30 <johnthetubaguy> dansmith: and thank you :)
14:57:36 <devananda> there's a fairly large team from lots of companies committed to maintaining ironic (and the nova driver for it) at this point
14:57:39 * dansmith has another meeting to get to
14:57:45 <devananda> and almost no one looking at baremetal
14:58:12 <johnthetubaguy> devananda: I couldn't agree more to removing it, but would be good to have the transition plan sorted first
14:58:17 <johnthetubaguy> anyways we should review the spec
14:58:21 <devananda> johnthetubaguy: well, turns out we do have it sorted
14:58:35 <johnthetubaguy> devananda: awesomeness
14:58:39 <johnthetubaguy> any more for any more?
14:58:49 <funzo> thoughts about nova-docker?
14:58:50 <johnthetubaguy> we are out of agenda items I think
14:58:54 <dansmith> and time
14:58:57 <funzo> haha
14:59:06 <n0ano> funzo, just do it :-)
14:59:13 <johnthetubaguy> funzo: well I think it follows the usual pattern, prove stability, then propose the spec with the details
14:59:31 <funzo> johnthetubaguy: ok
14:59:33 <funzo> thank you
15:00:08 <johnthetubaguy> funzo: the bigger discussion is feature compatibility
15:00:12 <johnthetubaguy> cool, so we are done
15:00:18 <johnthetubaguy> thanks all for attending
15:00:22 <johnthetubaguy> #endmeeting