21:03:19 <russellb> #startmeeting nova
21:03:20 <openstack> Meeting started Thu Dec  5 21:03:19 2013 UTC and is due to finish in 60 minutes.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:20 <lbragstad> hey
21:03:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:22 <mriedem> hi
21:03:24 <hartsocks> \o
21:03:25 <openstack> The meeting name has been set to 'nova'
21:03:25 <n0ano> o/
21:03:25 <alaski> hi
21:03:26 <mrodden> o/
21:03:26 <shane-wang> :D
21:03:26 <russellb> hey everyone!  sorry for starting a couple minutes late
21:03:28 <dansmith> .
21:03:32 <dripton> hi
21:03:34 <bpokorny> Hi
21:03:38 <MikeSpreitzer> o/
21:03:38 <beagles> hi
21:03:43 <melwitt1> hi
21:03:49 <jog0> o/
21:03:56 <russellb> awesome, lots of folks
21:04:00 <russellb> #topic general announcements
21:04:00 <cyeoh> hi
21:04:05 <russellb> icehouse-1 is out!
21:04:12 <russellb> #link https://launchpad.net/nova/+milestone/icehouse-1
21:04:16 <russellb> 13 blueprints, > 200 bugs fixed
21:04:29 <russellb> release is going by fast already
21:04:31 <russellb> scary.
21:04:47 <russellb> we'll talk more about icehouse-2 planning in a bit
21:04:52 <russellb> other thing ... mid-cycle meetup
21:04:59 <russellb> #link https://wiki.openstack.org/wiki/Nova/IcehouseCycleMeetup
21:05:03 <russellb> 6 people signed up so far, heh
21:05:05 <shane-wang> # of blueprints is not what I expected :)
21:05:16 <shane-wang> I remember last time it was 65?
21:05:25 <yjiang5> russellb: 6 people?
21:05:28 <russellb> well ... that's because *everything* was targeted at icehouse-1
21:05:35 <russellb> and now most of it is on icehouse-2, some icehouse-3 ...
21:05:38 <shane-wang> 6?
21:05:43 * jog0 signs up
21:05:49 <russellb> i suspect some folks are still working on getting budget approval
21:05:56 <russellb> or just haven't filled out the registration thingy
21:05:58 <russellb> so just a reminder
21:05:59 * n0ano tentative signup
21:06:00 <mriedem> yeah, should we hold off on signing up before that?
21:06:10 <russellb> mriedem: yeah probably
21:06:14 <mriedem> k
21:06:27 <russellb> but just making sure everyone saw all the info there
21:06:28 <russellb> hotel info added
21:06:55 <russellb> look at that, 2 people just signed up
21:07:00 <hartsocks> meh, I have Marriott points.
21:07:18 <hartsocks> :-)
21:07:22 <russellb> heh, that's what's recommended anyway
21:07:29 <russellb> just let me know if you have any questions about that
21:07:32 <russellb> #topic sub-teams
21:07:36 <russellb> let's do sub-teams first!
21:07:40 <hartsocks> whee!
21:07:40 <russellb> hartsocks: you!
21:07:51 <russellb> (others raise your virtual hand so i know you're in line)
21:07:54 <hartsocks> We're getting our act together for i2
21:08:00 * n0ano hand up
21:08:16 <melwitt> o/
21:08:21 <russellb> hartsocks: cool, anything urgent that needs attention?
21:08:21 <hartsocks> We have 2 BP now in flight for i2 and I think we'll be proposing more stuff for i3
21:08:34 <russellb> hartsocks: just approved one BP for i2 for you
21:08:41 <hartsocks> danke
21:08:55 <hartsocks> I'm trying to figure out how to move more of our stuff into Oslo.
21:08:58 <hartsocks> BTW
21:09:09 <russellb> cool.
21:09:12 <hartsocks> or at least seeing what's appropriate to share there.
21:09:14 <russellb> sure
21:09:31 <hartsocks> I would like to do less *special* work in our driver and do smarter work over all.
21:09:47 <hartsocks> We'll be getting our act together the next few weeks on that.
21:09:47 <russellb> i don't think i'll argue with that :)
21:09:52 <hartsocks> 'sall from me.
21:09:55 <russellb> thanks!
21:10:08 <russellb> n0ano: what's up in scheduler land
21:10:47 <n0ano> much discussion about boris' no_db scheduler work, everyone likes the idea, issues about maintaining compatibility while transitioning to it.
21:11:20 <russellb> yeah, good point, i haven't thought about the migration path too much on that
21:11:22 <n0ano> also a lot of talk about the forklift of the scheduler code, didn't have much time on that, will probably discuss on the email list & next week
21:11:31 <russellb> ok, was curious if you guys got to that ...
21:11:36 <russellb> i think we're about ready to start
21:11:48 <russellb> need someone to do the initial heavy lifting of the code export
21:11:51 <n0ano> do we have a BP for the work involved in the forklift?
21:12:02 <russellb> lifeless: talking about scheduler forklift if you're around
21:12:04 <russellb> we do have a blueprint
21:12:16 <russellb> https://blueprints.launchpad.net/nova/+spec/forklift-scheduler-breakout
21:12:47 <lifeless> russellb: i
21:12:49 <n0ano> I'll go look at that, looks like there's lots of people signed up to do the work so it should go
21:12:53 <lifeless> russellb: sorry, I am totally around
21:12:58 <russellb> lifeless: all good
21:13:00 <lifeless> I think we're ready to start indeed
21:13:01 <lifeless> we need:
21:13:08 <lifeless> - infra definitions for two projects
21:13:28 <lifeless> I was going to use openstack/python-commonschedulerclient and openstack/commonscheduler
21:13:43 <lifeless> - some filter-tree work to get commonscheduler existing
21:13:44 <russellb> heh, that's fine, assuming we can rename if we want to be clever later
21:13:48 <mriedem> holy long name
21:13:59 <lifeless> and I think just cookiecutter to get the python-commonschedulerclient one in place
21:14:02 * jog0 votes for oslo-scheduler
21:14:07 <n0ano> what's in a name, as long as the code is there and works I don't care that much
21:14:08 * russellb was thinking gantt
21:14:11 <shane-wang> mriedem: +1
21:14:18 <lifeless> n0ano: +1 ;)
21:14:28 <mriedem> what's in a name can be annoying for packagers....
21:14:32 <mriedem> think quantum/neutron
21:14:36 <lifeless> jog0: so,no - not oslo, unless we want to trigger all the integration/incubation questions up now
21:14:44 <russellb> sure, real name to be decided before it's released/packaged
21:14:47 <lifeless> mriedem: we can rename anytime in the next three months.
21:14:48 * n0ano I'm a developer, what's packaging :-)
21:14:56 <geekinutah> yes please, name change so bad
21:14:57 <lifeless> n0ano: it's one way that users get your code :)
21:14:57 <russellb> gantt!
21:15:01 <jog0> lifeless: point taken, sorry for side tracking
21:15:13 <lifeless> I'm fine with gantt
21:15:25 <lifeless> I'll put the infra patch up today
21:15:41 <lifeless> do we have volunteers to do the git filtering for the api server tree?
21:15:53 <lifeless> I'll do the client tree, I've got it mostly canned here already anyhow
21:16:11 <russellb> client tree is basically 1 file i think
21:16:15 <russellb> nova/scheduler/rpcapi.py
21:16:23 <lifeless> russellb: yes, 95% will be cookiecutter stuff
21:16:29 <n0ano> lifeless, if the work involved is well defined we should be able to get people to do it
21:16:32 <lifeless> making it installable, tests, etc.
21:16:42 <russellb> and there's probably a nova/tests/scheduler/test_rpcapi.py
21:16:48 <lifeless> n0ano: it's defined in the etherpad; I don't know if it's well enough defined
21:16:52 <lifeless> russellb: right :)
21:17:16 <n0ano> lifeless, indeed, I guess we'll have to just start at some point in time
21:17:36 <russellb> n0ano: yep ... and we'll have to periodically sync stuff ... it'll be a pain for a while
21:17:44 <russellb> that's why we need people looking after it regularly until it's done
21:17:55 <russellb> kinda like nova-volume -> cinder
21:17:58 <lifeless> ok, so n0ano are you volunteering to do the git filter?
21:18:16 <n0ano> lifeless, sure, either me or I can always delegate someone
21:18:30 <lifeless> heh :) - whatever works
21:18:31 * n0ano what did I just sign up for!!
21:18:35 <lifeless> n0ano: thanks!
21:18:55 <russellb> #note n0ano (or a delegate) to start working on the new scheduler git tree
21:18:56 <russellb> :)
21:18:59 <n0ano> lifeless, send me emails with any details you need if necessary
21:19:20 <russellb> it's in the meeting minutes, you have to now
21:19:21 <shane-wang> n0ano: +1
21:19:31 <n0ano> russellb, NP
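(For whoever picks up the filtering work: a rough sketch of what's involved, assuming the usual git filter-branch approach; the repo name and kept paths are illustrative, the real list would come from the etherpad:)

    # rewrite nova's history so only the scheduler code (and its tests)
    # survives; --prune-empty drops commits that no longer touch anything
    git clone https://git.openstack.org/openstack/nova gantt
    cd gantt
    git filter-branch --prune-empty --index-filter '
        git ls-files |
        grep -v -e "^nova/scheduler/" -e "^nova/tests/scheduler/" |
        tr "\n" "\0" | xargs -0 -r git rm -q --cached
    ' -- --all

    # the client repo would instead start from the project skeleton
    # lifeless mentions (assumed: the openstack-dev cookiecutter template)
    pip install cookiecutter
    cookiecutter https://git.openstack.org/openstack-dev/cookiecutter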
21:19:35 <russellb> alright, let's move on
21:19:41 <russellb> melwitt: hey!  python-novaclient
21:19:46 <russellb> anything on fire?
21:19:55 <melwitt> haha no, fortunately
21:19:59 <russellb> good.
21:20:06 <melwitt> here is the weekly report:
21:20:06 <melwitt> open bugs, 117 !fix released, 81 !fix committed
21:20:06 <melwitt> 24 new bugs
21:20:06 <melwitt> 0 high bugs
21:20:06 <melwitt> 22 patches up, 7 are WIP, https://review.openstack.org/#/c/51136/ could use more reviewers
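(Numbers like these can be pulled straight off Launchpad; a minimal launchpadlib sketch, where the status buckets are an assumption rather than melwitt's exact method:)

    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('novaclient-report', 'production')
    proj = lp.projects['python-novaclient']

    # "open" here assumed to mean anything not yet Fix Released
    open_statuses = ['New', 'Incomplete', 'Confirmed', 'Triaged',
                     'In Progress', 'Fix Committed']
    print('open bugs: %d' % len(list(proj.searchTasks(status=open_statuses))))
    print('new bugs: %d' % len(list(proj.searchTasks(status='New'))))
    print('high bugs: %d' % len(list(proj.searchTasks(importance='High'))))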
21:20:56 <russellb> it says the API is merged into nova, so it should be fine
21:20:59 <russellb> assuming code is sane
21:21:29 <russellb> looks sane at a glance, i'll take another look afterwards
21:21:42 <russellb> the count of new bugs is lower than i remember, you must have been working on cleaning that up :)
21:21:55 <melwitt> yes I have :)
21:22:01 <russellb> excellent!
21:22:42 <russellb> as usual, if anyone wants to help with novaclient tasks, please talk to melwitt !
21:22:56 <russellb> some bugs to be fixed i'm sure
21:23:04 <russellb> melwitt: anything else you wanted to bring up today?
21:23:14 <melwitt> no, I think that's it
21:23:17 <russellb> great thanks
21:23:20 <russellb> #topic bugs
21:23:32 <russellb> lifeless: any comments on nova bugs for today?
21:23:35 <lifeless> ruh roh
21:23:35 <lifeless> so
21:23:39 <russellb> ha
21:23:43 <russellb> 200 New bugs
21:23:46 <lifeless> I have some stats but it's not quite right yet
21:23:50 <mriedem> 72 untagged
21:24:05 * russellb has falled behind on tagging and could use some help
21:24:12 <dansmith> nice grammar
21:24:16 <dansmith> I falled behind too
21:24:20 <russellb> damn you
21:24:29 <jog0> plenty of critical bugs too (I am to blame for most I think)
21:24:36 <russellb> #note dansmith to catch us up on bug triage this week
21:24:42 <dansmith> oof
21:24:46 <russellb> #undo
21:24:47 <lifeless> https://review.openstack.org/#/c/58903/ <- is the thing I have put together
21:24:49 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x26be350>
21:24:53 <dansmith> whew
21:25:01 <lifeless> http://pastebin.com/raw.php?i=vj4FuErC
21:25:02 <russellb> it removed the topic, not the note ... wat
21:25:10 <lifeless> russellb: LOL
21:25:19 <mrodden> lolbug
21:25:25 <lifeless> so my intent is to get this fixed this week, and be able to actually frame workloads sensibly.
21:25:27 <russellb> oh well, yes, stats!
21:25:39 <lifeless> Also, next week I'll have drafted a proposed update to the triage workflow
21:25:51 <lifeless> so - I know I'm not crushing this, but I am working on it :)
21:26:05 <mriedem> lifeless: thanks for working it
21:26:07 <russellb> cool, appreciate the stats work, that's been a hole
21:26:17 <russellb> and for anyone who has some bandwidth, our current process is:
21:26:17 <lifeless> but if the stats are even vaguely right, we get from 30 to 80 bugs a week
21:26:17 <mriedem> not crushing > nothing
21:26:20 <russellb> #link https://wiki.openstack.org/wiki/Nova/BugTriage
21:26:50 <lifeless> which is ~ 10 a day to do 'is this urgent or backburner' assessment on - which I think is a pretty light workload really
21:26:52 <russellb> i wonder how many of those are devs filing right before they post a patch
21:26:59 <lifeless> this is across both nova and novaclient
21:27:04 <mriedem> russellb: i think that happens a lot
21:27:04 <lifeless> should I report them separately?
21:27:19 <russellb> well, the ones that go to In Progress immediately don't really need to be triaged in the same sense
21:27:21 <russellb> i think
21:27:37 <lifeless> russellb: Do we ask them to do that? Is it valuable? [agreed that in progress immediately isn't a triage burden]
21:27:42 <mriedem> agreed, but priority isn't set either sometimes, or backport potential
21:28:00 <russellb> mriedem: good point, so there's still some triage work to do
21:28:05 <dansmith> I usually create it in Confirmed state if I know I've got a bug, but haven't yet got a patch
21:28:07 <russellb> i retract my comment then
21:28:10 <cyeoh> yea and sometimes they eventually end up abandoned by the original developer so need to go back into triage
21:28:27 <russellb> cyeoh: true, that's one of the tasks listed on https://wiki.openstack.org/wiki/BugTriage
21:28:33 <mriedem> yeah, would be nice if launchpad had an expiration feature or something
21:28:36 <russellb> though i don't know how often we make it past looking at the New ones ...
21:28:46 <mriedem> or knew when the patch was abandoned so it could set the bug to confirmed or something
21:28:48 <mriedem> or new
21:28:57 <mriedem> we should automate that, right?
21:29:06 <mrodden> probably
21:29:10 <lifeless> mriedem: it does
21:29:19 <mriedem> lifeless: after how long?
21:29:23 <lifeless> mriedem: if a bug is marked incomplete with no follow-on comments, and is only on one project, it will expire
21:29:28 <mriedem> i've seen abandoned patches with in progress bugs long after they were abandoned
21:29:29 <jog0> while on bugs can we talk about some of the gate bugs?
21:29:36 <mriedem> lifeless: ah, that's different
21:29:44 <mriedem> bug is in progress but patch is abandoned
21:29:44 <russellb> jog0: yep
21:29:46 <mriedem> the bug doesn't change
21:29:48 <lifeless> mriedem: but if we need a different policy it's an api script away.
21:29:55 <lifeless> mriedem: oh, you want to unassign idle bugs?
21:29:55 <jog0> in short http://lists.openstack.org/pipermail/openstack-dev/2013-December/021127.html
21:30:00 <mriedem> lifeless: yeah
21:30:02 <mriedem> something like that
21:30:09 <cyeoh> lifeless: be nice if it had an auto-warn first
21:30:11 <jog0> we have a lot of bugs on that list and most don't have anyone working on them
21:30:11 <lifeless> mriedem: sounds like an infra feature request.
21:30:15 <cyeoh> (to the person it is assigned to)
21:30:23 <jog0> including some neutron blocking bugs
21:30:37 <lifeless> cyeoh: they can always toggle it back, it's non destructive
21:31:13 <jog0> russellb: hopeing for some volunteers to fix some gate bugs
21:31:22 <cyeoh> lifeless: yea was just thinking of avoiding races where someone else picks it up and ends up duplicating work already done
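(Roughly what such an api script could look like with launchpadlib; the policy, the staleness check, and cyeoh's warning step are assumptions left as TODOs:)

    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with('nova-bug-janitor', 'production')
    nova = lp.projects['nova']

    for task in nova.searchTasks(status='In Progress'):
        # TODO: check gerrit for an abandoned/idle patch, and warn the
        # assignee before doing anything (per cyeoh)
        if task.assignee is not None:
            task.assignee = None
            task.status = 'Confirmed'  # non-destructive: easy to toggle back
            task.lp_save()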
21:31:33 <russellb> alright guys, let's not leave jog0 hanging.  who can help with some gate issues over the next week?
21:31:48 <russellb> (and really, it's all of openstack-dev, not just jog0)  :-)
21:31:59 * jog0 can't, he will be at two conferences
21:32:15 <russellb> looks like most of it is nova+neutron
21:32:25 <jog0> yeah, and a few nova + tempest
21:32:26 <mriedem> crap, there was another one i opened last night that has been happening but wasn't reported, scheduler fail
21:32:43 <jog0> mriedem: that one isn't on the list yet but yeah
21:32:45 <mriedem> phil day had a patch that should help at least make it more obvious in the logs when it fails
21:32:47 <jog0> I commented on it
21:32:59 <mriedem> jog0: k, haven't read the bug emails yet
21:33:03 <russellb> i wonder how to give more incentive to work on these ...
21:33:07 <jog0> mriedem: https://bugs.launchpad.net/nova/+bug/1257644
21:33:09 <uvirtbot> Launchpad bug 1257644 in nova "gate-tempest-dsvm-postgres-full fails - unable to schedule instance" [Critical,Triaged]
21:33:13 <lifeless> no other patches land until these fixed?
21:33:18 <russellb> heh that's one way
21:33:19 <jog0> and we have libvirt still stacktracing all the time
21:33:20 <lifeless> stop-the-line in LEAN terms
21:33:28 <russellb> and i'm willing to use that hammer in desperate times
21:33:42 <mriedem> jog0: is that the 'instance not found' libvirt stacktrace?
21:33:43 <lifeless> russellb: Just a thought, but if the expectation is that that hammer will come out
21:33:45 <jog0> russellb lifeless: we aren't there yet (I think)
21:33:52 <lifeless> russellb: perhaps folk will respond more quickly in non desperate times.
21:34:20 <russellb> true
21:34:37 <jog0> mriedem: that's a different thing
21:34:39 <jog0> let me dig
21:35:02 <mrodden> might be worth a test run
21:35:08 <jog0> https://bugs.launchpad.net/nova/+bug/1255624
21:35:10 <uvirtbot> Launchpad bug 1255624 in nova "libvirtError: Unable to read from monitor: Connection reset by peer" [Critical,Triaged]
21:35:11 <mrodden> re: stop the line thing
21:35:13 <jog0> https://bugs.launchpad.net/nova/+bug/1254872
21:35:15 <uvirtbot> Launchpad bug 1254872 in nova "libvirtError: Timed out during operation: cannot acquire state change lock" [Critical,Triaged]
21:35:16 <russellb> i think if failure rates pass some really bad threshold we should do that
21:35:31 <jog0> it was 25% the other day
21:35:31 <russellb> (the hammer)
21:35:39 <jog0> in gate not check
21:35:48 <lifeless> Can we link MC Hammer in the email where you say it's happened?
21:35:51 <russellb> thoughts on where the "stop everything" line should be?
21:35:53 <russellb> sure
21:35:56 <russellb> hammertime
21:35:59 * lifeless is satisfied
21:36:08 <lifeless> mrodden: go on?
21:36:28 <russellb> http://goo.gl/25j6nx
21:36:49 <mrodden> i think the idea is just no +A until we get criticals in gate fixed, threshold down etc.
21:36:50 <jog0> gate-tempest-dsvm-postgres-full = 20.00% failure rate as of now
21:37:07 <mriedem> is there an idea of how fast these start to build up, i.e. once you hit a certain % it's exponential fails after that?
21:37:09 <russellb> yeah, that's high
21:37:09 <jog0> so fixing gate issues is a several step process
21:37:25 <jog0> 1) fingerprint the bug, and add to e-r
21:37:26 <russellb> probably not (no more +A) high yet
21:37:31 <jog0> 2) identify root cause
21:37:34 <jog0> 3) fix it
21:37:46 <jog0> step 1 is what I have been focusing on
21:37:53 <jog0> steps 2 and 3 are a lot more work
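(Concretely, step 1 means landing a small fingerprint in the elastic-recheck repo; for a bug like 1254872 an entry looks roughly like this, with the exact strings being illustrative:)

    # an elastic-recheck fingerprint entry (illustrative)
    bug: 1254872
    query: >
      message:"Timed out during operation: cannot acquire state change lock"
      AND filename:"logs/screen-n-cpu.txt"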
21:37:57 <mriedem> is 'reverify no bug' dead yet?
21:38:13 <russellb> jog0: hugely helpful, then people know which individual things to dive deep on
21:38:24 <russellb> jog0: the libvirt things, danpb had some feedback on collecting more logs, any progress on that?
21:38:34 <jog0> russellb: been backed up so no
21:38:45 <russellb> ok, not expecting you to do it, just asking in general
21:39:25 <russellb> alright, well, work needed here, but let's move on for the meeting
21:39:27 <russellb> #topic blueprints
21:39:36 <russellb> #link https://launchpad.net/nova/+milestone/icehouse-2
21:39:37 <jog0> so gate pass rate = 80% right now
21:39:46 <russellb> #link https://blueprints.launchpad.net/nova/icehouse
21:39:55 <russellb> 113 total icehouse blueprints right now
21:40:07 <dims> jog0, which jobs are failing consistently?
21:40:11 <russellb> which already seems over our realistic velocity
21:40:19 <russellb> let's continue the gate chat in #openstack-nova
21:40:24 <dims> k
21:40:36 <russellb> now that icehouse-1 is done, we need to get our icehouse-2 and icehouse-3 lists cleaned up
21:40:40 <russellb> 87 blueprints on icehouse-2
21:40:43 <russellb> which is *far* from realistic
21:40:44 <shane-wang> if we get a lot of stuff in rc2 or rc3, does that mean we will get a lot of bugs in the end?
21:41:00 <russellb> shane-wang: i'm not sure what you mean?
21:41:34 <shane-wang> russellb: I mean do we need some time in rc to clean the bugs?
21:41:49 <russellb> shane-wang: icehouse will be feature frozen after icehouse-3
21:41:56 <russellb> and there will be a number of weeks where only bug fixes can be merged
21:42:01 <shane-wang> ok
21:42:06 <russellb> #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
21:42:11 <shane-wang> just a little worried.
21:42:18 <russellb> from March 6 to April 17
21:42:20 <russellb> bug fixes only
21:42:26 <shane-wang> good, thanks.
21:42:29 <russellb> sure np
21:42:39 <russellb> so, 40 blueprints still need some review attention
21:42:44 <russellb> you can see it pretty well on the full icehouse list
21:42:49 <russellb> anything not prioritized is still in the review process
21:43:01 <jog0> russellb: how do our plans change if we decide to unfreeze nova-network?
21:43:16 <russellb> Pending Approval (waiting on review), Review (waiting on submitter, but we need to follow up to check if updates have been posted)
21:43:27 <russellb> Drafting (not ready for review), Discussion (review pending some discussion in progress)
21:43:44 <russellb> so any help on those is appreciated, and generally expected of nova-drivers :-)
21:44:00 <russellb> jog0: so, after icehouse-2 i'd like to have the nova-network discussion
21:44:11 <russellb> our plans don't change *too* much at this point
21:44:20 <russellb> other than we can accept more patches
21:44:29 <cyeoh> jog0: we will have a bunch of v3 API work if we decide to support nova-network again, but I think it'll be manageable
21:44:37 <russellb> cyeoh: great point.
21:44:52 <jog0> well lets hope it doesn't come to that
21:44:56 <russellb> agreed
21:45:03 <russellb> it seems they are making much better progress than pre-summit
21:45:11 <russellb> so i remain optimistic that it won't come to that
21:45:18 <lifeless> n0ano: whats your email address?
21:45:31 <n0ano> lifeless, donald.d.dugger@intel.com
21:45:46 <dansmith> russellb: I am not optimistic
21:45:54 <russellb> if it does come to that, unfreezing nova-network will spawn a much more significant discussion about neutron that is beyond just nova
21:46:00 <lifeless> n0ano: ack, thanks
21:46:03 <russellb> so ... not for just us to work out the details
21:46:25 <russellb> any specific blueprints anyone wants to cover?
21:47:26 <russellb> well, if you have stuff targeted at i2, please consider moving it to i3 if you don't think you'll have code in the next few weeks
21:47:36 <russellb> #topic open discussion
21:47:44 <russellb> the agenda on the wiki had something for open discussion
21:47:48 <russellb> mriedem: that was you right?
21:47:48 <dansmith> upgrades are broken!
21:47:52 <dansmith> oh
21:47:54 <mriedem> russellb: yeah
21:47:55 <russellb> dansmith: but you're fixing them
21:48:05 <russellb> dansmith: we can talk about that in a sec though
21:48:08 <mriedem> russellb: ndipanov probably cares but don't see him here
21:48:20 <mriedem> basically the ML thread on mox/mock, sounds like that's mainly sorted out
21:48:36 <mriedem> use mock for new tests, move mox to mox3 over time, and there are exceptions to not using mock
21:48:48 <mriedem> exceptional cases sound like backports or big ugly hacks
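(A minimal self-contained example of the preferred mock style for review pointers; the classes are made up for illustration, not real nova code:)

    import unittest

    import mock


    class Downloader(object):
        """Toy stand-in for the code under test."""
        def __init__(self, transport):
            self.transport = transport

        def fetch(self, url):
            return self.transport.get(url)


    class DownloaderTestCase(unittest.TestCase):
        def test_fetch_delegates_to_transport(self):
            # mock style: build the double, run the code, then assert;
            # no mox-style ReplayAll()/VerifyAll() dance
            transport = mock.Mock()
            transport.get.return_value = 'payload'

            self.assertEqual('payload',
                             Downloader(transport).fetch('http://x'))
            transport.get.assert_called_once_with('http://x')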
21:49:09 <mriedem> the other thing on the agenda i was pointing out was our plethora of hacking guides
21:49:09 <russellb> yeah
21:49:18 <mriedem> jog0 was cleaning up the keystone hacking guide to point at the global one
21:49:22 <mriedem> i think we should do the same for nova
21:49:29 <russellb> i thought we already point to the global one?
21:49:29 <mriedem> i.e. nova/tests/README is super stale
21:49:37 <russellb> and then have addition of our own nova specific rules
21:49:49 <mriedem> yeah, but i think the nova-specific stuff might need to be reviewed again
21:49:54 <russellb> OK
21:49:58 <mriedem> nova/tests/README is its own animal i think
21:50:10 <russellb> and I think you're talking about something more broad than what people generally think of when you say HACKING around here
21:50:13 <russellb> you mean ... dev docs
21:50:14 <mriedem> and last point being the horizon guys have a great testing guide
21:50:18 <mriedem> yeah
21:50:20 <russellb> there's also docs/source/devref/
21:50:27 <mriedem> well, there are wikis, devref, readmes, etc
21:50:30 <russellb> yeah.
21:50:31 <mriedem> it's everywhere
21:50:43 <russellb> docs/source/devref/ should probably be where we focus right?
21:50:43 <jog0> ++ to streamlined dev docs
21:50:46 <mriedem> i think we should try to get as much of that as possible into the global hacking docs (devref)
21:50:51 <mriedem> yes
21:50:55 <russellb> ok cool, that works for me
21:50:59 <russellb> I support this effort, heh
21:51:03 <mriedem> and i think we want to integrate horizon's testing guide into the global hacking devref
21:51:09 <mriedem> horizon has a great testing guide
21:51:19 <mriedem> and then once that's done, we add our stance on mox/mock
21:51:30 * mriedem needs to breathe
21:51:31 <russellb> you planning to work on this?
21:51:38 <mriedem> christ, so....
21:51:48 <mriedem> i think i can get the ball rolling
21:51:52 <jog0> mriedem: I can get behind that idea, a little doc re-org is needed but very doable.
21:51:55 <mriedem> ndipanov seems to be passionate about it
21:52:02 <russellb> so do you :)
21:52:06 <mriedem> yeah, this isn't rocket science, just takes time to do it
21:52:25 <mriedem> i care about it simply because i get tired of having to explain it in reviews
21:52:32 <russellb> shall i write a letter to your boss?  we have work for you to do here
21:52:33 <mriedem> i want to just point to a single location
21:53:00 <mriedem> plus it will be easier to tell people in a review 'sorry, use mock, link here'
21:53:07 <russellb> works for me
21:53:09 <mriedem> then it's not a question of my personal preference, it's dogma
21:53:14 <russellb> don't think anyone would argue with this
21:53:17 <russellb> just need to do it :)
21:53:39 <mriedem> yeah, so i'll start to get some work items in place and then i (and maybe others) can help knock them off
21:54:17 <russellb> alright sounds good
21:54:26 <russellb> dansmith: you're up
21:54:48 <dansmith> well, there's not much to say, really, but geekinutah broke live upgrades
21:54:53 <dansmith> he broke them all to hell and back
21:54:57 <russellb> haha
21:55:00 <russellb> all his fault?
21:55:03 <geekinutah> now you're backtracking
21:55:05 <dansmith> of course, it wasn't his fault at all
21:55:08 <geekinutah> lol
21:55:12 <dansmith> but he DID break them
21:55:28 <dansmith> anyway, basically just infrastructure that we don't have all worked out yet
21:55:43 <dansmith> I've got a setup that fixes it, but it'll require a teensy backport to havana in order to make it work
21:56:03 <dansmith> however, with that, I'm back to being able to run tempest smoke against a deployment on master with havana compute nodes
21:56:05 <russellb> did you have to hack rpc too?
21:56:09 <russellb> to make your test work?
21:56:19 <dansmith> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:object_compat,n,z
21:56:23 <dansmith> if anyone is interested
21:56:30 <russellb> related:  https://review.openstack.org/60361  and https://review.openstack.org/60362
21:56:34 <russellb> posted 3 seconds ago
21:56:36 <dansmith> russellb: no, I see some like cert or console fails, but they're not blockers
21:56:59 <dansmith> russellb: so that set, with the conductor fix backported to havana makes it all work
21:57:14 <russellb> dansmith: oh, hm, i guess it's just a mixed havana-icehouse compute node env that broke?
21:57:20 <russellb> i guess that's right
21:57:31 <dansmith> russellb: right, that's the big breakage, which doesn't affect me
21:57:36 <russellb> you just can't have both at the same time right now ^^^ without those fixes
21:57:41 <dansmith> right
21:57:44 <russellb> k
21:57:57 <russellb> and then your object magic
21:58:04 <russellb> and the gate job
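(The shape of the compat problem, as an illustrative toy rather than the actual nova.objects code: the newer side downgrades what it sends so a havana node can still parse it:)

    # field -> object version that introduced it (made-up example data)
    FIELD_ADDED_IN = {'shiny_new_field': (1, 9)}

    def make_compatible(primitive, target_version):
        """Strip fields the target version predates before sending."""
        for field, version in FIELD_ADDED_IN.items():
            if version > target_version:
                primitive['data'].pop(field, None)
        primitive['version'] = '%d.%d' % target_version
        return primitive

    # an icehouse node talking to a havana compute would call, e.g.:
    # make_compatible(obj.serialize(), (1, 8))   # hypothetical call site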
21:58:17 <russellb> hey guys, live upgrades are hard
21:58:21 <dansmith> yeah, we really need the gate job,
21:58:36 <dansmith> but at least I have a one-button "are we broke or not" thing I can check until then
21:58:52 <dansmith> that is all
21:59:05 <geekinutah> so next time I break things I'll have to fix them :-)?
21:59:22 <dansmith> geekinutah: next time you break them, I quit :)
21:59:26 <russellb> alright, we're about out of time
21:59:30 <russellb> thank you all for your time!
21:59:33 <russellb> #endmeeting