21:00:56 <mriedem> #startmeeting nova
21:00:57 <openstack> Meeting started Thu Nov 19 21:00:56 2015 UTC and is due to finish in 60 minutes.  The chair is mriedem. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:00 <openstack> The meeting name has been set to 'nova'
21:01:15 <mriedem> #link meeting agenda is here: https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
21:01:25 <mriedem> who is around?
21:01:25 <bauzas> \o
21:01:32 <auggy> o/
21:01:32 <melwitt> o/
21:01:32 <_gryf> o/
21:01:33 <edleafe> \o
21:01:36 <rlrossit> o/
21:01:38 <takashin> o/
21:01:41 <tpatil> Hi
21:01:50 <dansmith> o/
21:01:57 <tonyb> \o
21:02:07 <mriedem> alright, let's get started
21:02:12 <mriedem> #topic release status
21:02:23 <mriedem> today is spec review day
21:02:33 <mriedem> i've reviewed 1 spec today, so i hope others are doing better
21:02:33 <mikal> .
21:02:45 <raildo> \o
21:02:49 <mriedem> Virtual doc sprint December 8th and 9th
21:03:07 <mriedem> i haven't been following the doc sprint thing, is that just reviewing nova's docs (globally?)
21:03:24 <bauzas> I feel it's api related
21:03:25 <mriedem> manuals, api docs, config docs, devref?
21:03:27 <mriedem> ok
21:03:31 <mriedem> focused would be nice
21:03:41 <mriedem> anyone have that etherpad handy with the api doc gaps?
21:04:00 <ccarmack> https://etherpad.openstack.org/p/nova-v2.1-api-doc ?
21:04:01 <bauzas> https://etherpad.openstack.org/p/nova-v2.1-api-doc
21:04:06 <bauzas> snaaaaap again
21:04:22 <mriedem> #link https://etherpad.openstack.org/p/nova-v2.1-api-doc
21:04:40 <mriedem> probably some good things for new people in there
21:04:53 <mriedem> lots of api extensions for v2.1 are usually missing from what i remember
21:04:59 <mriedem> anything else on this?
21:05:05 <auggy> should we add a link to that doc to the mitaka api etherpad? https://etherpad.openstack.org/p/mitaka-nova-api
21:05:30 <mriedem> auggy: sure
21:05:34 <mriedem> there is a documentation section in there
21:05:41 * auggy is adding it now
21:05:43 <mriedem> keeping track of etherpads is a losing game really
21:05:58 <mriedem> Mitaka-1 is December 1st-3rd
21:06:14 <mriedem> dhellmann had a note in the ML about that
21:06:27 <mriedem> basically getting ducks in a row, which i think we have, at least with reno
21:06:58 <mriedem> i think one TODO there is having reno changes for any upgrade impact or completed bp's with docimpact before mitaka-1
21:07:04 <mriedem> bauzas was starting that
21:07:12 <bauzas> we just need to make sure that all UpgradeImpact merged patches have a reno file
21:07:16 <bauzas> yeah that
21:07:36 <mriedem> yup, and if anyone has completed blueprints already and there were doc changes, let's get those release note changes up
21:07:41 <mriedem> like deprecating nova-manage service stuff
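For context, a reno release note is a small YAML file under releasenotes/notes/ in the nova tree, usually generated with "reno new <slug>". A minimal sketch of what a note for the deprecation mentioned above might look like; the filename hash and wording are illustrative, not the actual note that merged:

    # releasenotes/notes/deprecate-nova-manage-service-xxxxxxxx.yaml
    deprecations:
      - |
        The ``nova-manage service`` subcommands are deprecated;
        use the os-services REST API (e.g. the ``nova service-*``
        CLI commands) instead.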
21:07:42 <mriedem> moving on
21:07:49 <mriedem> December 3rd is also non-priority spec and blueprints freeze
21:07:53 <bauzas> yup, there is an open discussion point about that, moving on
21:08:08 <mriedem> so exactly 2 weeks,
21:08:17 <mriedem> we stop accepting non-priority specs
21:08:34 <mriedem> dansmith: ^ is that stop accepting *new* specs or stop approving all non-priority specs?
21:08:39 <mriedem> i always get that tripped up
21:08:49 <tonyb> mriedem: the latter IIUC
21:08:55 <dansmith> mriedem: I don't remember
21:08:58 <mriedem> yeah, that's what it looks like here https://wiki.openstack.org/wiki/Nova/Mitaka_Release_Schedule
21:09:09 <dansmith> mriedem: last time we had a spec proposal freeze and we don't have that this time I think?
21:09:09 <mriedem> let's assume it's a freeze on all non-priority specs
21:09:09 * tonyb should get his spec written!
21:09:15 <mriedem> dansmith: that'd be good
21:09:18 <mriedem> it was confusing before
21:09:28 <mriedem> yeah, so if you have specs to write, get them up like a month ago
21:09:44 <mriedem> as usual, the specs review etherpad is here https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking
21:09:46 <mriedem> for categories
21:09:48 <tonyb> mriedem: a month ago I didn't know I had a spec to write ;P
21:10:03 <mriedem> time machine my friend
21:10:09 <sdague> wibbly wobbly
21:10:10 <mriedem> specless blueprints, do we have any?
21:10:39 <mriedem> there are some in https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking
21:11:01 <mriedem> so check those out ^
21:11:05 <sdague> do we have a blueprint registered yet for privsep?
21:11:26 <mriedem> sdague: i'm not seeing one in the etherpad
21:11:30 <mriedem> under that name at least, or rootwrap
21:11:36 <mriedem> that was a cross project spec right?
21:11:41 <sdague> I'm not seeing it in the list, and osbrick and osvif are kind of borked if we don't get that in
21:11:46 <sdague> there is privsep existing
21:11:53 <sdague> then there is nova implementing it
21:12:12 <mriedem> do we have a link to a cross project spec?
21:12:15 <mriedem> was it already approved?
21:12:42 <mriedem> http://specs.openstack.org/openstack/oslo-specs/specs/liberty/privsep.html
21:12:43 <mikal> We said privsep sat under os-vif at the summit
21:12:50 <mikal> i.e. is a priority
21:12:59 <mikal> I don't think we made it clear to gus that he needs a spec for the nova work
21:13:06 <mriedem> so i guess privsep was approved as a cross-project spec
21:13:10 <sdague> I don't think we need a spec
21:13:16 <mriedem> we need a bp for tracking purposes
21:13:18 <mriedem> but not a spec
21:13:19 <mikal> Yep, there's already an impl landed in oslo, but it needs to get released
21:13:19 <sdague> I think we need a blueprint for tracking getting it in
21:13:26 <mikal> Ahhh, yep. Sounds fair.
21:13:30 <mikal> I will walk gus through that today
21:13:31 <mriedem> who wants to ping gus about this?
21:13:33 <mriedem> ok
21:13:41 <mriedem> #action mikal to get gus to get a bp up for nova re: http://specs.openstack.org/openstack/oslo-specs/specs/liberty/privsep.html
21:13:46 <sdague> because osbrick has coupled cinder / osbrick / nova into lockstep until we have that
21:13:53 <mikal> Yep, understood
21:13:55 <mriedem> yup
21:14:02 <mikal> The code took a while to land in oslo, but is done now
21:14:02 <mriedem> i can add it to the etherpad we have later too
21:14:06 <mikal> The next step is tricking dims into releasing it
21:14:17 <mriedem> i don't think you need to trick dims into releasing oslo things
21:14:23 <sdague> mikal: typey typey tricksy tricksy
21:14:23 <tonyb> mikal: gus was hoping that would be handled by dims' oslo process
21:14:26 <mikal> Heh
21:14:33 <mriedem> yeah, every monday is oslo release monday
21:14:41 <mikal> Yeah, I think he and I are hoping dims will do the releasy bits
21:14:43 <mriedem> and tuesday is gate breakage tuesday :)
21:14:45 <mikal> I will bribe him until it's done
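For background, the privsep model in the linked oslo spec replaces rootwrap-style command filtering with a privileged daemon that runs decorated Python functions on behalf of the unprivileged service. A minimal sketch of the pattern, assuming the oslo.privsep API from that spec; the 'mypkg' names and the helper itself are illustrative:

    from oslo_concurrency import processutils
    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # the service declares a context with only the Linux
    # capabilities it actually needs
    net_admin_pctxt = priv_context.PrivContext(
        'mypkg',
        cfg_section='mypkg_privileged',
        pkg_root='mypkg',
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @net_admin_pctxt.entrypoint
    def set_link_mtu(dev, mtu):
        # runs inside the privileged daemon, not the caller's process
        processutils.execute('ip', 'link', 'set', dev, 'mtu', str(mtu))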
21:14:52 <mriedem> moving on
21:14:58 <mriedem> i have a call out for novaclient 3.0 changes http://lists.openstack.org/pipermail/openstack-dev/2015-November/079915.html
21:15:01 <tonyb> mriedem: it wasn't in the last batch so I'll get dims and gus in a cage match^W^Wroom
21:15:18 <mriedem> basically, if you know of any backward incompatible changes to novaclient we want to make, we want to batch them up for 3.0
21:15:21 <mriedem> to release in mitaka-2
21:15:45 <mriedem> anything else on release status?
21:15:53 <mriedem> moving on
21:15:58 <mriedem> #topic regular reminders
21:16:03 <mriedem> #link https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
21:16:07 <mriedem> review priorities as usual ^
21:16:23 <mriedem> anyone have an update on the trivial bug tracking going on in there?
21:16:28 <mriedem> i haven't looked in there in awhile
21:16:44 <mriedem> looks like lxsli has been adding things
21:16:57 <mriedem> moving on
21:17:00 <mriedem> #topic bugs
21:17:10 <mriedem> gate status: http://status.openstack.org/elastic-recheck/index.html
21:17:12 <mikal> I think we've also previously said it's a bigger deal later in the cycle
21:17:25 <mriedem> so the gate,
21:17:32 <mriedem> o.vo thing got fixed earlier in the week
21:17:33 <bauzas> the gate is good, nope?
21:17:40 <mriedem> there is the ebtables thing sdague is working around
21:17:44 <sdague> also, the incredible ebtables hack
21:17:44 <mriedem> sdague: anything on that?
21:17:54 <bauzas> oh
21:17:57 <sdague> yeh, we landed a hack in devstack to stop the bleeding
21:17:58 <tonyb> sdague: link?
21:18:20 <mriedem> https://review.openstack.org/#/c/247250/
21:18:27 <mriedem> plus master
21:18:32 <sdague> https://review.openstack.org/#/c/246501/
21:18:50 <sdague> it's off by default in devstack, devstack-gate turns it on in the gate
21:19:02 <sdague> the real fix landed in nova, but needs an env with libvirt 1.2.11
21:19:04 <mriedem> is the d-g change in?
21:19:07 <sdague> yes
21:19:15 <mriedem> have we seen a drop in that failure?
21:19:17 <sdague> yes
21:19:23 <dansmith> awesome
21:19:24 <sdague> only on liberty today
21:19:31 <sdague> then I pushed the backport
21:19:41 <mriedem> http://status.openstack.org/elastic-recheck/gate.html#1501558
21:19:47 <mriedem> http://status.openstack.org/elastic-recheck/gate.html#1501366
21:19:54 <sdague> logstash had an outage so the data is a little hard to be sure, but it looks promising
21:19:59 <mriedem> cool
21:20:05 <mriedem> i don't have anything else on gate status
21:20:15 <mriedem> 3rd party ci http://ci-watch.tintri.com/project?project=nova&time=7+days
21:20:22 <mriedem> intel has been off in the weeds
21:20:30 <mriedem> ebay was spamming us
21:20:33 <mriedem> *was*
21:20:39 <dansmith> that tool doesn't give me any results anymore
21:20:41 <dansmith> is it broken?
21:20:45 <mriedem> heh, idk
21:21:06 <tonyb> dansmith: Yeah it looks down to me :(
21:21:06 <sdague> yeh, nothing there works
21:21:08 <mriedem> there is an intern at cloudbase that rechecks changes manually for hyper-v
21:21:11 <mriedem> for whatever reason..
21:21:18 <mriedem> internbot
21:21:24 <dansmith> yeah
21:21:34 <dansmith> but regardless, intel ci has been off the rails for a while i think
21:21:35 <mriedem> moving on to critical bugs
21:21:36 <dansmith> also- xen?
21:21:36 <mriedem> do we have any?
21:21:43 <mriedem> xen has some racey fails
21:21:49 <mriedem> i've been opening bugs on the xenproject ci fails
21:22:20 <sdague> nothing listed as critical in the tracker
21:22:24 <mriedem> i've been doing that so i can point them out to bob when he yells at me for merging a thing that breaks xenproject ci
21:22:31 <tonyb> 2 in LP both with fixes
21:22:35 <sdague> the ebtables ones were our worst long running gate fail
21:22:45 <mikal> So, I promised to chase Intel CI and have been doing that thing
21:22:50 <mikal> They have active work on reliability
21:23:00 <mikal> And we talked about ways to make that more obvious to the community
21:23:12 <mriedem> an intel person did show up in irc the other day asking about something
21:23:18 <mriedem> so they at least seem engaged
21:23:26 <mriedem> let's move on?
21:23:29 <mikal> Yeah, we can talk more details later if you want
21:23:35 <mriedem> i don't really :)
21:23:37 <mriedem> but thanks
21:23:40 <dansmith> heh
21:23:42 <mikal> Heh
21:23:46 <mriedem> stable branch status
21:23:47 <dansmith> I just want it to work
21:23:48 <mriedem> kilo: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/kilo,n,z
21:24:01 <mriedem> lots of red there
21:24:03 <mriedem> so idk
21:24:17 <mriedem> juno is getting tagged today, if not already done
21:24:21 <mriedem> alan was doing that this morning
21:24:33 <mriedem> and we're not going to bump versions so things are just done, and then EOL soon
21:24:39 <mriedem> RIP juno
21:24:42 <tonyb> mriedem: some of the red in kilo is blocked on juno EOLing so that's a thing :)
21:24:42 <mriedem> (upstream)
21:24:49 <mriedem> tonyb: oh, grenade
21:24:56 <mriedem> yeah, there will be job cleanup going on
21:25:08 <sdague> spot checking it looks like ceph is the fail on a lot of those
21:25:23 <tonyb> mriedem: Umm oslo.utils needs a release but it conflicts with Juno .....
21:25:36 <mriedem> tonyb: ok, we can tackle that in -stable
21:25:58 <mriedem> there has been a ceph thing in the gate that jbernard was working on
21:26:01 <mriedem> so i'm not surprised
21:26:03 <mriedem> moving on
21:26:12 <mriedem> #topic stuck reviews
21:26:17 <mriedem> there is one listed: https://review.openstack.org/#/c/135387/
21:26:25 <tpatil> The main point of objection so far is that there is no need to take an instance snapshot, since the instance disk files are retained when the shelved_offload_disk config parameter is set to False
21:26:26 <mriedem> Improve Performance of UnShelve api
21:26:45 <tpatil> Please refer to PS 18 for comments from John
21:26:58 <tpatil> If a snapshot is not taken during shelving, then it would be an API change from the user's point of view, so John suggested making this discoverable in the API somehow.
21:27:28 <tpatil> We can modify the shelve API to let the user pass a "snapshot=True/False" parameter to decide whether to take a snapshot during shelving, but that complicates the case
21:27:39 <mriedem> why should the user have to know
21:27:40 <mriedem> ?
21:27:51 <mriedem> user shouldn't know the stack env right?
21:27:52 <sdague> yeh, it seems like the user should not really need to know this
21:28:08 <mriedem> shelve my thing in the most efficient way possible please
21:28:34 <dansmith> I thought we said we weren't going to do this at the summit?
21:28:36 <tpatil> We got a comment from john that snapshot is not required
21:28:41 <mriedem> tpatil: so you're wondering about a behavior change of not taking a snapshot and then people wondering where it is?
21:28:49 <dansmith> maybe I'm just remembering what I wanted to hear?
21:28:55 <mriedem> dansmith: it was a fantasy
21:29:03 <tpatil> mriedem: correct
21:29:06 <mriedem> hmm
21:29:18 <dansmith> so, for real though, I feel like this has been "stuck" for about 18 months, which I think means it shouldn't be on the stuck list anymore
21:29:44 <mriedem> could have a microversion that changes the response and tells you if you get a snapshot or not
21:29:46 <tpatil> dansmith: We have been chasing this spec for the last 2 releases; hope it gets approved this cycle.
21:30:03 <mriedem> i'd vote for just putting something in the response that says if you did a new thing or not, idk
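Concretely, that suggestion amounts to something like the exchange below. The shelve action today returns 202 with no body, so the response field here is purely hypothetical:

    POST /servers/{server_id}/action   (at some new microversion)
    {"shelve": null}

    HTTP/1.1 202 Accepted
    {"snapshot_created": false}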
21:30:09 <tpatil> Since Dec 3rd is the freeze date for approving non-priority specs, I'm a little worried now
21:30:36 <mriedem> are there other operators that are interested in solving this?
21:30:45 <tpatil> We have already replied to John comments and waiting for him to respond
21:31:07 <mriedem> tpatil: what i've been doing lately is engaging the ops community on stuff like this, via the #openstack-operators channel and the openstack-operators ML
21:31:10 <tpatil> no other party is interested it seems like
21:31:22 <mriedem> tpatil: i'd try to reach out to other operators on this and see if they have opinions
21:31:27 <mriedem> crowdsource the spec
21:31:30 <tpatil> But it's an important thing for us (NTT)
21:31:42 <tpatil> mriedem: Thanks
21:31:44 <mriedem> sure, but if we're talking about an api change, we should have buy in
21:31:46 <mriedem> from others
21:31:53 <mriedem> let's move on
21:32:14 <mriedem> #topic open discussion
21:32:17 <mriedem> a few items
21:32:18 <mriedem> 1. should we create some explicit guidelines about the creation of "reno" release-note files?
21:32:24 <tpatil> mriedem: We don't want to change the API, but if that's what the community wants we can make that happen
21:32:43 <mriedem> i need a link for the reno ML thread
21:32:44 <bauzas> mriedem: https://review.openstack.org/#/c/247775/
21:33:05 <mriedem> #link https://review.openstack.org/#/c/247775/ devref on when to add release notes
21:33:24 <bauzas> http://lists.openstack.org/pipermail/openstack-dev/2015-November/079907.html is markus_z's thread
21:33:25 <mriedem> also
21:33:27 <mriedem> yeah
21:33:32 <bauzas> comments welcome of course...
21:33:34 <mriedem> #link http://lists.openstack.org/pipermail/openstack-dev/2015-November/079907.html mailing list thread on when to reno
21:34:02 <mriedem> i don't think this is rocket science so we can probably just restrict it to the ML and review right?
21:34:14 <sdague> honestly, I feel like it's just going to be a lot more consistent now than randomly figuring out important things later. We did one for the ebtables fix, and for the cells db being required.
21:34:47 <mriedem> yeah i don't think this is too difficult
21:34:51 <mriedem> common sense
21:34:54 <bauzas> yup
21:35:03 <mriedem> we don't need a release note that you fixed a spelling error
21:35:15 <sdague> heh
21:35:26 <Vek> heh
21:35:27 <mriedem> moving on
21:35:31 <mriedem> this is my baby: How do we want to handle instance_actions wrt purge/archive of instances?
21:35:36 <mriedem> http://lists.openstack.org/pipermail/openstack-dev/2015-November/079778.html
21:36:05 <mriedem> so i have a change that starts to fix the nova-manage db archive_deleted_rows command: https://review.openstack.org/#/c/246635/
21:36:20 <mriedem> it's blocked on instance_actions not being soft deleted when instances are soft deleted
21:36:29 <mriedem> soft deleted being instances.deleted != 0, not the API
21:36:45 <mriedem> so the question is, do we start soft deleting instance_actions? or when archiving, do we hard delete those mofos?
21:36:58 <sdague> right, so instance_actions on deleted instances is potentially pretty useful
21:37:08 <mriedem> former is harder but probably better for the ops people, latter is easier to write and maintain
21:37:10 <sdague> that's your time machine to figure out what happened
21:37:16 <mriedem> right
21:37:27 <nic> +1
21:37:29 <mriedem> i was hoping some ops people would speak up about this
21:37:29 <bauzas> I haven't yet seen a reply from ops, right?
21:37:37 <mriedem> since it was ops people that wanted us to fix the damn command
21:37:40 <mriedem> bauzas: right
21:37:46 <sdague> today it grows without bounds?
21:37:59 <mriedem> sdague: instance_actions?
21:38:00 <mriedem> yeah
21:38:01 <sdague> yeh
21:38:14 <mriedem> and the instance actions API doesn't read deleted instances
21:38:22 <mriedem> so once you delete the instance, the instance actions API is useless
21:38:33 <bauzas> MHO is that hard deleting actions when purging sounds good, but I'll leave it to ops to rant about that
21:38:37 <sdague> oh, that kind of sucks
21:38:39 <melwitt> that's unfortunate, IMO
21:39:09 <mriedem> yeah, another alternative is hard delete and assume ops are getting the notifications and storing them for long enough to be useful
21:39:23 <sdague> so, honestly, in my ideal world instance actions would work with deleted instances, and there would be a separate pruning mechanism there
21:39:25 <bauzas> from the API pov, a deleted instance is forgotten, so getting a list of actions is not possible
21:39:26 <mriedem> which imo is not a great assumption
21:39:29 <melwitt> and I'm not sure it was intentional. generally all the apis do an instance lookup as a universal first step
21:39:44 <nic> I'm pretty sure it wasn't
21:39:47 <mriedem> melwitt: sdague: so fixing the os-instance-actions API is easy peasy
21:39:52 <bauzas> getting the list of actions for a deleted instance means an API change then
21:39:57 <nic> The deletion action is recorded, just not accessible
21:39:58 <mriedem> yeah, but it's a simple one
21:40:01 <sdague> bauzas: GET /servers/detail?changes-since... returns deleted instances
21:40:01 <mriedem> yup
21:40:21 <bauzas> sdague: sorry I meant os-instance-actions
21:40:24 <nic> sdague: that is highly useful information
21:40:31 <sdague> bauzas: right, I know
21:40:37 <mriedem> ctrath: is working on the purge change,
21:40:43 <mriedem> but i think that only purges soft deleted instances
21:40:45 <sdague> I was just saying we sometimes do return deleted in the API
21:40:46 <mriedem> yeah it has to be
21:40:50 <ctrath> right
21:41:01 <mriedem> but...we still have to start soft deleting instance actions then
21:41:18 <mriedem> for purge to do its magic, or we have special logic when purging instances to also hard-delete instance_actions
21:41:26 <mriedem> archive has the same issue
21:41:48 <mriedem> so...
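The two options being weighed look roughly like this in SQLAlchemy terms. This is a sketch only, assuming the usual SoftDeleteMixin columns on instance_actions and hand-waving the session/variable plumbing:

    from nova.db.sqlalchemy import models

    # option 1: cascade the soft delete when the instance is soft
    # deleted, so the normal archive sweep (rows with deleted != 0
    # move to shadow tables) picks instance_actions up automatically
    session.query(models.InstanceAction).\
        filter_by(instance_uuid=instance.uuid).\
        soft_delete(synchronize_session=False)

    # option 2: leave the rows alone and special-case a hard delete
    # of actions belonging to instances being archived/purged
    session.query(models.InstanceAction).\
        filter(models.InstanceAction.instance_uuid.in_(archived_uuids)).\
        delete(synchronize_session=False)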
21:42:16 <sdague> honestly, probably just take this back to the ML, everyone coffee up and see what can be figured out there.
21:42:19 <mriedem> or we punt and remove nova-manage db archive_deleted_rows and leave that up to the osops script that already exists :)
21:42:38 <bauzas> :)
21:42:40 <mriedem> sdague: yeah, it's in the ML
21:42:42 <mriedem> no takers
21:42:48 <mriedem> i'll add it to the ops meeting agenda
21:42:58 <sdague> mriedem: I'll throw in, I was chasing other fails today
21:43:00 <mriedem> #action mriedem to follow up in ops meeting about instance_actions problem
21:43:00 <Vek> try pushing the convo again after Thanksgiving?
21:43:14 <mriedem> we can move on
21:43:22 <mriedem> PowerVM Driver Docs
21:43:25 <mriedem> thorst: ^
21:43:43 <mriedem> from the agenda
21:43:44 <mriedem> "Nova core team has asked the PowerVM Drivers team to show users of the driver other than the large PowerVC use. We have some good queries and interest in it, and of course do significant testing with pure OpenStack. However, to encourage external use, we would like to have some official documentation on docs.openstack.org to help with operator comfort in using the driver."
21:43:47 <thorst> Yeah, we're building out our driver operator docs.  We're hoping to get those hosted on docs.openstack.org
21:44:14 <thorst> I'll paste the other paragraph
21:44:27 <thorst> The Telemetry and Networking teams have recognized these projects as official sub-projects of their programs, allowing us to use the OpenStack docs and specs infrastructure. This leaves nova-powervm in a bit of a chicken/egg scenario: providing official docs is a big part of getting users, but becoming official requires users.
21:44:28 <sdague> right, you can't do that, you aren't openstack yet.
21:44:41 <sdague> I saw the merged patch for rtd earlier though
21:44:42 <thorst> right...so its just a bit of a chicken and egg scenario...
21:44:46 <thorst> and we're looking for guidance.
21:44:50 <sdague> so that's a thing
21:45:08 <sdague> rtd seems fine for now
21:45:09 <thorst> yeah.  It's kinda a hack.
21:45:43 <mriedem> http://nova-powervm.readthedocs.org/en/latest/
21:45:46 <thorst> personally, if I were an operator I'd be looking on docs.openstack.org.
21:45:58 <tonyb> I'm not certain I buy the argument that being on d.o.o will help but that's just me
21:46:06 <sdague> tonyb: ++
21:46:21 <sdague> this is out there, googleable
21:46:28 <mriedem> http://lmgtfy.com/?q=nova-powervm+docs
21:46:34 <mriedem> it's the first result
21:46:47 <sdague> yeh
21:46:54 <tonyb> add a .. note:: "We're working in/with the community ..." to the top of the docs, it's easy to show
21:47:00 <mriedem> so i don't think docs.o.o is a major blocker
21:47:04 <sdague> agreed
21:47:33 <sdague> and it's done in such a way that moving to d.o.o later is pretty easy, so not much work wasted.
21:47:33 <thorst> alright.  It was my hope that given our approach we could try to get the more official docs.  But we'll continue to build out the docs here, and merge them into nova-powervm's git
21:47:50 <thorst> yeah, not wasted work at all.  Fully understand that.
21:47:57 <thorst> just want to find the best home for the work  :-)
21:48:03 <sdague> so, one other open discussion item - live migration testing
21:48:17 <mriedem> go for it
21:48:18 <dansmith> I'm about to be needing some livemigration testing
21:48:20 <dansmith> because I'm effing up the rpc all over
21:48:28 <tonyb> thorst: get it in the openstack.ibm.com domain :D
21:48:44 <sdague> tdurakov is building a multinode job dedicated to live migration testing the various configs
21:49:02 <sdague> so it will actually start the services - run the live migration tests
21:49:10 <sdague> stop them, reconfigure a new backend, do it again
21:49:27 <dansmith> oh my
21:49:34 <sdague> so we can do no shared storage, block backed, ceph, and nfs in the same job
21:49:50 <tonyb> sdague: how long will that take?
21:50:04 <mriedem> and how easy will it be to tell which config failed?
21:50:05 <sdague> this is *only* going to run live migration tests
21:50:17 <sdague> mriedem: pretty easy, it will do them serial with output
21:50:22 <sdague> so it shouldn't be too bad
21:50:25 <sdague> anyway, it's an idea
21:50:30 <sdague> and it's progressing
21:50:58 <sdague> but it was worth a heads up
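In outline, the job sdague describes does something like the following on a single multinode environment; the helper names are invented stand-ins for the devstack/devstack-gate plumbing:

    def stop_services(): ...               # placeholders for the real
    def reconfigure_storage(backend): ...  # devstack orchestration
    def start_services(): ...
    def run_live_migration_tests(): ...

    # each storage config exercised in serial, results per backend
    for backend in ('no-shared-storage', 'block-backed', 'ceph', 'nfs'):
        stop_services()
        reconfigure_storage(backend)
        start_services()
        run_live_migration_tests()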
21:51:00 <mriedem> ok, there aren't that many live migration tests,
21:51:03 <mriedem> and several skip based on config
21:51:29 <sdague> that's what's going on with the nova/tests/live_migration dir
21:51:29 <mriedem> experimental queue right now too right
21:51:32 <sdague> yep
21:51:38 <dansmith> oh,
21:51:39 <mriedem> are cinder people helping with this?
21:51:51 <sdague> mriedem: not as of yet
21:51:55 <dansmith> so wait, don't we have one live migration test running in the regular multinode job?
21:52:03 <sdague> tdurakov is driving
21:52:05 <sdague> dansmith: we do
21:52:09 <dansmith> okay cool, thought so
21:52:14 <sdague> and that's fine
21:52:20 <dansmith> there are more in experimental? or no?
21:52:32 <dansmith> sdague: I'm asking out of selfish need for my current set, is all
21:52:36 <dansmith> thought there was at least one I could run
21:52:38 <sdague> but having a dedicated job where we could expand and do some grey box poking made sense
21:52:41 <dansmith> yes yes
21:52:45 <sdague> dansmith: this is in experimental
21:52:49 <sdague> it's still pretty raw
21:52:50 <mriedem> is the current multi-node job stable?
21:52:54 <sdague> mriedem: no
21:52:55 <dansmith> sdague: oh, already?
21:53:10 <sdague> dansmith: yes, but I don't know if it's working really yet
21:53:16 <mriedem> dansmith: yeah, remember me harassing tdurakov yesterday
21:53:16 <sdague> he's making good progress though
21:53:18 <dansmith> okay
21:53:45 <mriedem> if the existing job is non-voting and failing, it's going to be hard to tell if dansmith's changes break it
21:53:50 <sdague> mriedem: having grenade multinode voting is going to keep some fundamental multinode stuff from backsliding
21:54:02 <dansmith> mriedem: well, the fails for my patch set will be pretty specific
21:54:04 <sdague> mriedem: well, you can read the test results
21:54:12 <dansmith> mriedem: just fail to ping at the end is not related to my stuff most likely
21:54:13 <mriedem> booooring!
21:54:26 <mriedem> yeah i suppose
21:54:34 <mriedem> ok, anything else on this?
21:54:35 <sdague> the current multinode needs some love, shelve is one of the troublemakers, I don't know the others
21:54:37 <sdague> nope
21:54:53 <mriedem> shelve huh
21:54:55 <dansmith> shelve
21:54:56 <mriedem> seems we were just talking about that
21:55:10 <mriedem> ok, let's end 5 min EARLY
21:55:15 <mriedem> ta
21:55:18 <mriedem> #endmeeting