19:03:33 <lifeless> #startmeeting tripleo
19:03:34 <openstack> Meeting started Tue Sep 23 19:03:33 2014 UTC and is due to finish in 60 minutes.  The chair is lifeless. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:38 <openstack> The meeting name has been set to 'tripleo'
19:03:42 <lifeless> #topic agenda
19:03:42 <lifeless> * bugs
19:03:42 <lifeless> * reviews
19:03:42 <lifeless> * Projects needing releases
19:03:42 <lifeless> * CD Cloud status
19:03:44 <lifeless> * CI
19:03:45 <derekh> hi all
19:03:47 <lifeless> * Tuskar
19:03:49 <lifeless> * Specs
19:03:52 <lifeless> * open discussion
19:03:54 <lifeless> good morning everyone
19:04:22 <derekh> good morning
19:05:47 <lifeless> ok, lets roll
19:05:49 <lifeless> #topic bugs
19:05:51 <lifeless> #link https://bugs.launchpad.net/tripleo/
19:05:51 <lifeless> #link https://bugs.launchpad.net/diskimage-builder/
19:05:51 <lifeless> #link https://bugs.launchpad.net/os-refresh-config
19:05:53 <lifeless> #link https://bugs.launchpad.net/os-apply-config
19:05:56 <lifeless> #link https://bugs.launchpad.net/os-collect-config
19:05:58 <lifeless> #link https://bugs.launchpad.net/os-cloud-config
19:06:01 <lifeless> #link https://bugs.launchpad.net/tuskar
19:06:03 <lifeless> #link https://bugs.launchpad.net/python-tuskarclient
19:06:26 <lifeless> criticals
19:06:35 <lifeless> bug 1263294 - mkerrin
19:06:37 <uvirtbot> Launchpad bug 1263294 in tripleo "ephemeral0 of /dev/sda1 triggers 'did not find entry for sda1 in /sys/block'" [Critical,In progress] https://launchpad.net/bugs/1263294
19:06:46 <lifeless> bug 1361235 - slagle
19:06:47 <uvirtbot> Launchpad bug 1361235 in tripleo "visit horizon failure because of import module failure" [Critical,In progress] https://launchpad.net/bugs/1361235
19:07:19 <tchaypo> I spoke to mkerrin about 1263294 last week
19:07:38 <tchaypo> he's no longer able to spend any time working on that.
19:07:54 <lifeless> so we should unassign him
19:08:11 <lifeless> not working on it == not assignee
19:08:17 <lifeless> tchaypo: can you do that?
19:08:17 <tchaypo> I've been meaning to update the issue but I haven't been able to figure out a new assignee
19:08:23 <tchaypo> I can unassign, yes
19:08:26 <lifeless> I'm really pleased to see only two criticals
19:08:37 <slagle> lifeless: going to drop that one to high since the workaround has been committed
19:08:50 <lifeless> slagle: cool
19:10:28 <lifeless> #topic reviews
19:10:43 <lifeless> #info There's a new dashboard linked from https://wiki.openstack.org/wiki/TripleO#Review_team - look for "TripleO Inbox Dashboard"
19:10:46 <lifeless> #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:10:49 <lifeless> #link http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
19:10:52 <lifeless> #link http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
19:11:07 <lifeless> 
19:11:07 <lifeless> Stats since the last revision without -1 or -2 :
19:11:07 <lifeless> Average wait time: 10 days, 1 hours, 14 minutes
19:11:07 <lifeless> 1st quartile wait time: 4 days, 8 hours, 42 minutes
19:11:07 <lifeless> Median wait time: 6 days, 12 hours, 1 minutes
19:11:09 <lifeless> 3rd quartile wait time: 12 days, 17 hours, 27 minutes
19:11:24 <lifeless> we're still getting worse :(
19:11:44 <tchaypo> huh. I could have sworn we'd dropped down when I checked a few days ago
19:11:50 <tchaypo> I'd love to get some charting on that
19:12:05 <lifeless> ahha
19:12:15 <lifeless> here is a case of it doing the wrong thing
19:12:16 <tchaypo> It's not much worse - last week was 3rd quartile wait time: 12 days, 10 hours, 16 minutes
19:12:16 <lifeless> https://review.openstack.org/#/c/109163/
19:12:21 <tchaypo> and the week before that was 15 days
19:12:24 <lifeless> that was in workflow -1 until today
19:12:29 <lifeless> it should be counting it as one day old
19:12:31 <lifeless> not 60
19:12:58 <derekh> this looks to be getting better
19:13:00 <derekh> Queue growth in the last 30 days: 10 (0.3/day)
19:13:48 <lifeless> I'll file a bug for the reviewstats mis-count there
19:14:33 <lifeless> any specific reviews folk want to ask for help on?
19:15:14 <tchaypo> slightly related topic - now only 9 people making the 3-reviews-per-workday cutoff over the last 30 days, and 1 of them (me) is not core
19:15:24 <tchaypo> there was a review mentioned in #tripleo earlier
19:15:42 <tchaypo> 02:39:17 derekh Would sombody take a look at https://review.openstack.org/#/c/123425/ , I'd love to see the ubuntu overcloud job passing again
19:15:45 <derekh> tchaypo: if it was my one, it's merged now
19:15:50 <tchaypo> very good :)
19:17:01 <lifeless> #link https://bugs.launchpad.net/openstack-ci/+bug/1373084
19:17:02 <uvirtbot> Launchpad bug 1373084 in openstack-ci "reviewstats not considering workflow -1 status when aging reviews" [Undecided,New]
19:17:32 <lifeless> if you want a better whoa-these-are-old, consider patching that bug :)
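A rough sketch of the fix lifeless is suggesting for bug 1373084: when computing how long a review has been waiting, subtract any time it spent under a Workflow -1 (work-in-progress) vote. The function and data layout below are illustrative assumptions, not reviewstats' actual code or schema.

```python
from datetime import datetime, timedelta

def effective_age(created, wip_periods, now=None):
    """Age of a review, excluding time spent under Workflow -1 (WIP).

    wip_periods: list of (start, end) datetimes during which the change
    carried a Workflow -1 vote; these names are hypothetical, not the
    real reviewstats data model.
    """
    now = now or datetime.utcnow()
    age = now - created
    for start, end in wip_periods:
        # Time the author had the change flagged work-in-progress should
        # not count against reviewer wait-time statistics.
        age -= (min(end, now) - max(start, created))
    return age

# Example matching https://review.openstack.org/#/c/109163/ above: a change
# created 60 days ago that sat in Workflow -1 until yesterday should count
# as roughly 1 day old, not 60.
now = datetime.utcnow()
created = now - timedelta(days=60)
wip = [(created, now - timedelta(days=1))]
print(effective_age(created, wip, now).days)  # -> 1
```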
19:17:44 <lifeless> #topic projects needing releases
19:17:50 <lifeless> do we have a volunteer?
19:21:09 <lifeless> I'll put my hand up then
19:21:19 <lifeless> #info lifeless to release the world
19:21:53 <lifeless> #topic CD cloud status
19:22:12 <lifeless> AFAIK rh1 is fine
19:22:27 <lifeless> derekh: any update on the qualification of hp1?
19:22:31 <derekh> lifeless: yup, no rh1 issues I'm aware of
19:22:35 <lifeless> tchaypo: any update on the deploy of hp2 ?
19:22:46 <lifeless> #info RH1 nominal
19:23:10 <tchaypo> Just the usual. It turns out that deploying openstack as a novice is hard.
19:23:41 <tchaypo> I made good progress yesterday, but that ended with realising that some of the error messages I'd been chasing seem to be normal and not related to the failures I'm seeing
19:23:44 <derekh> hp1 is pretty much ready, I was waiting on some patches to merge: one for nova and one for neutron
19:23:50 <derekh> but I'm now pulling them into CI
19:24:11 <lifeless> #info HP2 ready just job changes pending
19:24:13 <lifeless> erm
19:24:14 <tchaypo> so I'm stepping back from those errors today to try to figure out why we only get exactly 5 nodes successfully built in the overcloud
19:24:16 <lifeless> #undo
19:24:16 <openstack> Removing item from minutes: <ircmeeting.items.Info object at 0x1fc6250>
19:24:23 <lifeless> #info HP1 ready just job changes pending
19:24:26 <derekh> so we can move ahead, I'll propose the patch to add it back tomorrow, once I check nothing else is outstanding
19:24:28 <tchaypo> I suspect this is the nova race condition
19:24:35 <lifeless> #info HP2 deployment proceeding
19:24:53 <lifeless> tchaypo: I think you might like to buddy with someone (e.g. me)
19:25:53 <tchaypo> I would, if you have the time. Talk to you after this meeting?
19:26:01 <lifeless> sure
19:26:10 <lifeless> #topic CI
19:26:30 <derekh> So I got a few items here :-)
19:26:38 <derekh> A couple of outages since the last meeting
19:26:46 <derekh> Ironic changed their ssh string (to one that wasn't whitelisted for a test env), ironic jobs failed
19:26:46 <derekh> patch reverted, CI working OK again, then new testenvs deployed, patch committed again
19:27:13 <lifeless> #info CI outages: Ironic changed ssh strings again, fixed
19:27:17 <derekh> Failures to get oslo.messaging >= 1.4.0, all CI tests failed
19:27:17 <derekh> Our squid was caching the pypi index for up to 3 days, manually changed it to 1 day, and submitted a patch to change the squid element to 1 day
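For context, the change derekh describes amounts to capping how long squid may serve the PyPI simple index from cache, roughly from squid's stock three-day maximum down to one day. A hedged illustration of such a squid.conf tweak follows; the URL pattern and exact values are assumptions, not the actual contents of the squid element mentioned above.

```
# Illustrative only: refresh_pattern times are in minutes (1440 = 1 day).
# Squid's stock default of "refresh_pattern . 0 20% 4320" lets objects with
# no explicit freshness info be served for up to 3 days, which is how a
# stale /simple/ index could hide oslo.messaging 1.4.0.
refresh_pattern -i ^http://pypi\.python\.org/simple/ 0 20% 1440
refresh_pattern . 0 20% 1440
```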
19:27:36 <derekh> Ubuntu overcloud jobs have been failing for WEEKS... I mentioned it last week hoping somebody would figure out why...
19:27:40 <lifeless> #info CI outages: Squid over-caching the pypi index
19:27:46 <derekh> I spent a day or two digging into this and the long and the short of it is, 2G VMs weren't enough,
19:27:54 <derekh> I've redeployed testenvs with 3G VMs and propose we do the same for devtest https://review.openstack.org/#/c/123453/
19:27:57 <lifeless> #info Ubuntu overcloud jobs failing due to 2G VMs being too small
19:28:08 <derekh> I'd like to see if everybody is ok with that? And at the same time get a feel for people's thoughts on moving to x86_64 vs. x86 testing...
19:28:10 <lifeless> #link https://review.openstack.org/#/c/123453/
19:28:27 <lifeless> derekh: I entirely support CI running adm64
19:28:30 <lifeless> erm
19:28:31 <lifeless> amd64
19:28:41 <slagle> 3G likely won't be enough for 64bit
19:28:46 <lifeless> yeah
19:28:46 <slagle> according to my local testing
19:28:52 <slagle> 3.5G at least
19:29:08 <lifeless> I don't think local dev using amd64 by default would fit with the consensus from the midcycle on dev vs ci
19:29:24 <derekh> yeah, so that's the trade-off: if we do 64-bit we lose some capacity, although now that we are using 3G VMs
19:29:39 <derekh> 3 vs. 4 isn't as big a hop as 2G -> 4G
19:29:45 <lifeless> derekh: if we can get HP2 live, we have a lot more capacity ;)
19:29:52 <tchaypo> derekh: if I'm reading you correctly, you're saying we need 3G for 32-bit and 4G for 64-bit?
19:30:44 <derekh> tchaypo: well to be more accurate we need > 2G (not sure of the exact requirement) for 32-bit
19:31:02 <derekh> I've set it to 3 to avoid fractions
19:31:16 <SpamapS> o/
19:31:25 * SpamapS keeps forgetting which week this meeting is at noon
19:31:51 <tchaypo> From memory, a full-HA undercloud + full-HA overcloud control + 1 overcloud compute node + seed == 8 VMs, so 24GB?
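A quick back-of-envelope check of those numbers, assuming the 8-VM devtest layout tchaypo describes; the 4G per-VM figure for 64-bit is just slagle's "3.5G at least" rounded up, not a measured requirement.

```python
# Rough host-RAM totals for a full devtest run (seed + HA undercloud +
# HA overcloud control + 1 compute). VM count and per-VM sizes are
# assumptions taken from the discussion above, not measured values.
vm_count = 8
for label, gb_per_vm in (("32-bit @ 3G", 3), ("64-bit @ 4G", 4)):
    print("%s: %d GB total" % (label, vm_count * gb_per_vm))
# 32-bit @ 3G: 24 GB total
# 64-bit @ 4G: 32 GB total
```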
19:32:02 <derekh> ok, so I'm kind of on the fence, so if there are no votes for sticking with 32-bit let's make the move
19:32:10 <lifeless> SpamapS: I have put two repeating meetings in my work calendar
19:32:16 <lifeless> SpamapS: one at each time
19:32:18 <tchaypo> I have no problem with doing the needful in CI, but I'd like us to think about how to get a useful setup for devs
19:32:24 <tchaypo> ... more on this later
19:32:26 <SpamapS> Yeah I should do that and mark it every 2 weeks
19:32:38 <lifeless> derekh: lets get HP1 live first
19:32:52 <tchaypo> Make sure you put it into GMT timezone so it doesn't drift over daylight savings changes :)
19:32:56 <lifeless> derekh: so that we're not changing what we assessed mid-stream
19:33:07 <derekh> tchaypo: ya it all adds up
19:33:08 <derekh> lifeless: ok
19:33:17 <derekh> I've also proposed that we bring the ubuntu job back running on tripleo so it doesn't get into an unreliable state again
19:33:17 <derekh> https://review.openstack.org/#/c/122099/ (as CI has been failing most of the time for pretty much all non-tripleo projects for the last few weeks)
19:33:48 <derekh> Also Note we are no longer setting tripleo control variables in infra/config
19:33:48 <derekh> https://review.openstack.org/#/c/122122/
19:33:48 <derekh> https://review.openstack.org/#/c/122504/
19:34:09 <lifeless> we're not?
19:34:32 <lifeless> ah ok
19:34:38 <lifeless> derekh: so I think we need to assess a couple things
19:34:46 <lifeless> derekh: firstly, nova is dropping nova-bm
19:34:58 <lifeless> do we start testing last-stable-release-of-nova-bm
19:35:04 <lifeless> or do we stop testing it
19:36:20 <derekh> I suppose, do we expect anybody to be using it ?
19:36:34 <lifeless> I sure hope not :)
19:36:38 <derekh> we currently have 3 nova-bm jobs; we can start by dropping that to 1
19:37:01 <lifeless> they give some other coverage too I thought, so maybe switch some to ironic
19:37:28 <derekh> lifeless: yes, well that's what I meant, I wasn't going to just throw them away :-)
19:37:30 <lifeless> I don't think we can commit to this here - but can I get a volunteer to propose this to the list?
19:37:48 <derekh> lifeless: I can do it
19:38:14 <lifeless> #info derekh to raise nova-bm removal from nova implications to tripleo testing on list
19:38:20 <lifeless> derekh: thanks
19:38:23 <derekh> And the last thing I have to copy and paste in here is
19:38:25 <derekh> All the syslog changes are finished to make the fedora/ubuntu logs consistent with each other, so we'll finally have usable os-collect-config logs in logstash once this merges
19:38:25 <derekh> https://review.openstack.org/#/c/122466/
19:38:34 <lifeless> \o/ fantastic
19:39:20 <lifeless> derekh: secondly, that squid thing
19:39:34 <lifeless> derekh: I think we'd get much better results using a pypi mirror :)
19:40:11 <lifeless> derekh: but I realise that's something for later, I just worry that we're going to spend time chasing fine-tuning of the cache
19:40:50 <derekh> lifeless: possibly, now that I have hopefully finished with hp1, I intend to look at the CI spec and start working through its other items
19:40:56 <lifeless> cool
19:41:01 <lifeless> ok, that's all I can remember
19:41:06 <lifeless> were there other CI things?
19:41:12 <derekh> I'm done
19:41:20 <lifeless> #topic tuskar
19:42:32 <lifeless> no tuskar folk here today?
19:42:42 <jdob> here, sorry
19:42:42 <lifeless> #topic specs
19:42:45 <tchaypo> I'm curious to see how devpi handles the load
19:42:51 <lifeless> #undo
19:42:51 <jdob> nothing spectcaular to report though
19:42:51 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x1edc610>
19:42:56 <lifeless> jdob: kk
19:43:00 <jdob> or a word that looks like that
19:43:05 <lifeless> tchaypo: I was proposing bandersnatch, not devpi :)
19:43:26 <lifeless> #topic specs
19:43:28 <tchaypo> I know :) Bandersnatch doesn't mirror everything though
19:43:33 <lifeless> any specs for discussion ?
19:44:04 <tchaypo> my understanding is that it will only cache packages hosted on pypi, it won't follow links to external packages. But that's a smaller case we can probably solve with a bit of squid
19:45:29 <lifeless> #topic open discussion
19:45:34 <tchaypo> So all the specs we've approved so far are in the juno folder
19:45:45 <tchaypo> I'm wondering if that's helpful, given that we don't follow the usual release cycle
19:46:13 <tchaypo> we have at least one open spec which I think is ready to land - but I can't give it a +1 because it's sitting in the folder above /juno, so it needs at least one more revision just to move it
19:46:14 <lifeless> is it important enough to bikeshed on vs the other project spec repos?
19:47:01 <tchaypo> Perhaps. My understanding is that for other projects, having a spec in 'juno' means they've committed to landing it before that release
19:47:33 <tchaypo> have we made that commitment? If yes, then it's not worth bikeshedding on; if we just mean "this spec was finalised during juno" then maybe
19:48:03 <tchaypo> I'm happy to leave it for later, it's not something that needs to be solved right now.
19:48:13 <SpamapS> Hi... just wanted to bring up that the tripleo-ansible repository has landed in stackforge.
19:48:21 <tchaypo> moving on to other business..
19:48:54 <lifeless> #info tripleo-ansible repository now live in stackforge
19:49:05 <lifeless> So
19:49:10 <SpamapS> It is not entirely suitable for upstream usage yet, but we are at least in a position to start working directly in the open and pulling things in, instead of the other way around.
19:49:12 <lifeless> for folk that missed it
19:49:17 <lifeless> I'm not standing for PTL
19:49:23 <lifeless> step up all ye hearties
19:49:45 <tchaypo> I believe there are roughly 48 hours left for people to announce their candidacy
19:49:49 * derekh takes 2 steps backwards
19:50:34 * SpamapS feels bad now about putting that large wooden box 2 steps behind derekh
19:51:06 <derekh> "£$"£$% £$%£FeF""^£
19:51:08 <jdob> ha
19:51:14 <tchaypo> http://www.timeanddate.com/countdown/generic?iso=20140926T0559&p0=1440&msg=PTL+Candidacies+Close
19:51:15 <lifeless> if you're considering running
19:51:36 <lifeless> and want to talk to me about what it's like, time commitment etc, just reach out privately :)
19:52:02 <jdob> to which an automated bot will reply "It's miserable."
19:52:20 <lifeless> hahaha
19:52:35 <tchaypo> Related - is https://etherpad.openstack.org/p/kilo-tripleo-summit-topics what we're using for kilo summit topics?
19:52:54 <tchaypo> It's strangely bare
19:53:00 <tchaypo> I'm not sure if that's a good sign or not
19:53:08 <derekh> So I put that there in response to the email about design summit sessions
19:53:15 <lifeless> thanks derekh !
19:53:24 <tchaypo> thanks derekh
19:53:34 <lifeless> so yes we need to plan the summit
19:53:41 <lifeless> we also need to plan the next midcycle
19:53:42 <derekh> although as you can see, that's all I did, and put a link on the wiki
19:53:54 <tchaypo> which is how I found it, so that's a good start
19:54:35 <lifeless> I'll send a mail
19:54:46 <derekh> lifeless: do you know things like the number of sessions we will have?
19:54:53 <derekh> lifeless: ok
19:55:12 <lifeless> derekh: nope
19:55:35 <lifeless> at least I don't think so :)
19:55:41 <derekh> lifeless: ok
19:56:12 <lifeless> we have a question from thierry though
19:56:56 <lifeless> do we want more scheduled slots + a half-day community meetup, or fewer slots + a full-day community meetup
19:57:48 <lifeless> slagle: bnemec: dprince: ^
19:58:09 <tchaypo> What's a community meetup?
19:58:26 <lifeless> http://ttx.re/kilo-design-summit.html
19:58:29 <lifeless> contributor meetup
19:58:39 <lifeless> 'All programs will have a space for a half-day or a full-day of informal meetup with an open agenda. The idea is to use that time to get alignment between core contributors on the cycle objectives, to solve project-internal issues, or have further discussions on a specific effort. The format is very much like the mid-cycle meetups.'
19:59:23 <slagle> lifeless: i guess it depends on what the number of slots is
19:59:40 <lifeless> I don't know a number
19:59:41 <slagle> but, tbh, i think a full-day community meetup is pretty valuable, so i'm leaning that direction
19:59:44 <tchaypo> It sounds to me as though if we don't have enough slots, we could "informally" organise some more during our meetup time
19:59:44 <derekh> so we have 1.5 days in a room and how much do we want to be scheduled?
19:59:52 <lifeless> previous times the exact numbers have had a lot of flux
20:00:15 <ttx> lifeless: currently you have 4 scheduled slots and a half-day of meetup
20:00:27 <ttx> we could do 2 + full day
20:00:30 <lifeless> ttx: ahha! thanks. and if we went full-day meetup, how ma...
20:00:31 <lifeless> thanks
20:00:32 <tchaypo> I don't think it's scheduled vs unscheduled; more like formally scheduled as part of design summit vs informally scheduled amongst ourselves
20:00:39 <derekh> how about we rush to get sessions proposed and see how many there are
20:00:40 <ttx> or just the full day if that makes more sense
20:01:10 <lifeless> ttx: format-wise, what's the difference between slots and meetup
20:01:19 <ttx> all depends if you can single out a few key topics for the scheduled session
20:01:24 <lifeless> ttx: like - different rooms? no presentation console?
20:01:25 <ttx> slots are timeboxed
20:01:32 <ttx> and appear in the schedule
20:01:38 <ttx> sorry meeting to start
20:01:45 <ttx> meetup is just loose agenda
20:01:55 <vishy> o/
20:02:11 <derekh> so sounds like we get more time if we do a full day meetup and have an informal schedule somewhere
20:02:17 <SpamapS> slots are good for gathering people you don't know about to disseminate and/or gather opinions
20:02:30 <SpamapS> meetups are good for working through hard design problems with the people you do know about and need to collaborate with
20:03:19 <derekh> we could always do the meetup in the bar
20:03:36 <lifeless> .oO 1Tbps ethernet in the pipeline
20:03:55 <lifeless> I'm leaning to 2 slots + full day too
20:04:07 <tchaypo> me too
20:04:17 <slagle> +1
20:04:27 <lifeless> #info lifeless to raise summit choice on list for ratification
20:04:33 <lifeless> #endmeeting