15:01:01 <johnthetubaguy> #startmeeting XenAPI
15:01:02 <openstack> Meeting started Wed Mar 12 15:01:01 2014 UTC and is due to finish in 60 minutes.  The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:05 <openstack> The meeting name has been set to 'xenapi'
15:01:15 <BobBall> o/
15:01:17 <johnthetubaguy> hi, who is around for today's meeting?
15:01:39 <BobBall> Me.  I don't think Mate can make it today though.
15:01:46 <johnthetubaguy> OK
15:01:55 <johnthetubaguy> you got anything you want to raise today?
15:02:06 <BobBall> We can talk about the CI
15:02:09 <BobBall> it's moved to RAX
15:02:12 <johnthetubaguy> cool
15:02:16 <johnthetubaguy> #topic Bugs
15:02:17 <BobBall> everything seems to be running again
15:01:21 <johnthetubaguy> it's bug-fixing time...
15:02:22 <BobBall> we had a short downtime though
15:02:31 <johnthetubaguy> we had an RC1 bug, but it's OK now
15:02:38 <BobBall> I've updated https://wiki.openstack.org/wiki/XenServer/XenServer_CI
15:02:44 <johnthetubaguy> #topic CI
15:02:52 <johnthetubaguy> sorry, was in bugs, let's do CI
15:02:56 <BobBall> http://eeed722a22cb5387f3e9-8fd069087bab3f263c7f9ddd524fce42.r22.cf5.rackcdn.com/ci_status/current_queue.txt shows that we don't have any tests in the queue!
15:03:11 <BobBall> http://eeed722a22cb5387f3e9-8fd069087bab3f263c7f9ddd524fce42.r22.cf5.rackcdn.com/ci_status/recent_finished.txt shows we have a very strong pass rate ATM
15:03:28 <johnthetubaguy> cool
15:03:34 <BobBall> but we did have a period of downtime
15:03:45 <BobBall> there was a bug in the new gerrit listener needed for the RAX deployment
15:03:55 <johnthetubaguy> ah, oops
15:04:02 <BobBall> which meant it built up a massive backlog of events
15:04:04 <BobBall> which were never seen
15:04:05 <BobBall> whoops
15:04:18 <johnthetubaguy> is it worth rekicking those, or finding a way to do that in future?
15:04:35 <BobBall> Not sure
15:04:47 <BobBall> I say wait for http://www.rcbops.com/gerrit/reports/nova-cireport.html to be updated
15:04:51 <BobBall> then re-add all of the missing jobs ;)
15:05:01 <johnthetubaguy> yeah, that might do the trick
15:05:14 <BobBall> mikal: Could you add a timestamp to your report? :)
15:05:18 <johnthetubaguy> or nick that code to make a fix-up script?
15:05:38 <BobBall> We could... but hopefully it'll be very rare
15:05:56 <BobBall> It does feel like we're subverting the stats though
15:06:04 <BobBall> by re-adding the jobs it claims we missed ;)
15:06:20 <BobBall> gaming rather than subverting
15:06:41 <johnthetubaguy> well, we are also ensuring we test everything, so I don't feel too bad about that
15:07:11 <BobBall> hehe
15:07:35 <BobBall> Anyway
15:07:39 <BobBall> the thing that's missing now
15:07:47 <BobBall> which - if you have something to help - would be useful, johnthetubaguy
15:07:54 <BobBall> is monitoring of the CI
15:08:02 <BobBall> clearly when jenkins goes down everybody jumps up and down
15:08:13 <BobBall> but if our 3rd party CI goes down it might take ages before we notice
15:08:16 <BobBall> which would be a pain
15:08:57 <johnthetubaguy> yeah, that would be good, though I'm not quite sure of the best way to do that
15:09:04 <BobBall> Hmmm - maybe I could work with mikal on that - if he's producing the stats, maybe he can send an email to drivers that fall below a particular pass rate in the last day or something
15:09:05 <johnthetubaguy> needs to be in the right people's eyes
15:09:30 <BobBall> if XS CI misses more than 10% of jobs in 24 hours, or passes fewer than 80% of jobs then something is broken
15:09:31 <johnthetubaguy> sounds like mikal's stuff is a good pointer
15:09:41 <BobBall> I mean piggy-back on his cronjob
15:09:43 <johnthetubaguy> more than 1% would do it for me
15:09:48 <johnthetubaguy> yeah, makes sense
15:09:56 <BobBall> it's a generic thing
15:10:00 <BobBall> so the other drivers should also sign up
15:10:19 <johnthetubaguy> sure, might be good getting it upstream
15:10:26 <johnthetubaguy> i mean in infra
15:10:32 <BobBall> maybe yeah
15:10:35 <johnthetubaguy> like in the status links
15:10:37 <johnthetubaguy> or something
15:11:05 <johnthetubaguy> cool, so all good
15:11:39 <BobBall> Think so
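The health check discussed above could, for example, be a small cron-driven script that polls the recent_finished.txt file linked earlier and alerts when the pass rate drops. This is a minimal sketch only: the file's layout (one job per line, result in the last whitespace-separated field), the status strings, and the 80% threshold are assumptions, not a description of the real files or tooling.

    #!/usr/bin/env python
    # Minimal sketch (Python 2, matching the tooling of the time) of the CI
    # health check discussed above.  Assumes recent_finished.txt lists one
    # job per line with the result as the last whitespace-separated field;
    # adjust the parsing to whatever the real file format is.
    import urllib2

    RESULTS_URL = ("http://eeed722a22cb5387f3e9-8fd069087bab3f263c7f9ddd524fce42"
                   ".r22.cf5.rackcdn.com/ci_status/recent_finished.txt")
    MIN_PASS_RATE = 0.80  # the "fewer than 80% of jobs" threshold from above


    def check_ci_health():
        lines = [l for l in urllib2.urlopen(RESULTS_URL).read().splitlines()
                 if l.strip()]
        if not lines:
            return "ALERT: no recent results - the CI may be down"
        passed = sum(1 for l in lines
                     if l.split()[-1].lower() in ("passed", "success"))
        rate = float(passed) / len(lines)
        status = "ALERT" if rate < MIN_PASS_RATE else "OK"
        return "%s: pass rate %.0f%% over last %d jobs" % (
            status, rate * 100, len(lines))


    if __name__ == "__main__":
        # Run from cron; the output could be mailed to the driver contacts,
        # as suggested in the discussion.  Detecting *missed* jobs would
        # additionally need a comparison against the gerrit event stream
        # (e.g. the rcbops report).
        print check_ci_health()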
15:12:02 <johnthetubaguy> #topic Open Discussion
15:12:11 <BobBall> you were going to talk about bugs?
15:12:25 <BobBall> I saw you went through and reclassified some
15:12:26 <BobBall> which is great
15:13:48 <johnthetubaguy> yeah, still burning through the list trying to sort some of them out
15:14:02 <johnthetubaguy> so pushing up reviews or kicking them out as I go through :)
15:14:28 <BobBall> perfect
15:14:43 <BobBall> I'm not sure we'll have time to work on specific bugs ATM unless there is something you want to highlight
15:14:49 <BobBall> I want to bring down our tempest exclusion list
15:15:02 <johnthetubaguy> OK, that sounds good
15:15:12 <johnthetubaguy> nothing major I don't think
15:15:18 <BobBall> I'm not aware of anything either
15:15:22 <johnthetubaguy> cool
15:15:32 <johnthetubaguy> so, how is the load on your compute VM during the test run?
15:15:39 <BobBall> could do more
15:15:41 <johnthetubaguy> I noticed it uses quite a lot of memory these days
15:15:47 <BobBall> only passing 3 CPUs
15:15:55 <BobBall> devstack pulls in a load
15:16:01 <BobBall> can't run tempest with anything less than 8GB :(
15:16:13 <johnthetubaguy> yeah, but the memory in the domU compute node, does that have 8 GB?
15:16:17 <BobBall> things just run at a crawl
15:16:20 <BobBall> for the tests, yes
15:16:30 <BobBall> if you're just running compute rather than devstack you don't need anything like it
15:16:34 <BobBall> as you know
15:19:56 <johnthetubaguy> yeah, just wondering about how to speed that up
15:20:19 <johnthetubaguy> I found a way to reduce the api workers a bit, which helped with memory usage; we could try some bits of that out
15:20:31 <johnthetubaguy> anyways, just curious, what memory does compute have?
15:21:20 <BobBall> again, it's devstack
15:21:23 <BobBall> so it's more than just compute
15:22:03 <johnthetubaguy> agreed, I mean the DomU vm
15:22:04 <BobBall> 4G total
15:22:14 <johnthetubaguy> ah, so DomU gets 4 GB?
15:22:21 <johnthetubaguy> running devstack bits
15:22:36 <johnthetubaguy> just wondering if it's swapping during the tests
15:22:37 <BobBall> yes
15:22:43 <BobBall> not much, no
15:22:45 <BobBall> but a bit
15:23:00 <BobBall> We upped the ram a couple of times to make it not swap
15:23:07 <johnthetubaguy> OK
15:23:21 <johnthetubaguy> just wondering about quick speed ups
15:23:39 <johnthetubaguy> seems to be running many more workers these days
15:23:47 <johnthetubaguy> anyways, I guess we are all done for today?
15:24:21 <BobBall> think so yes
15:24:30 <BobBall> many more workers though?
15:25:10 <johnthetubaguy> yeah
15:25:13 <johnthetubaguy> conductor and compute, etc
15:25:16 <johnthetubaguy> and api
15:25:28 <BobBall> oh I see
15:25:39 <johnthetubaguy> running many python processes for each service; not strictly needed, but it will up the memory usage
15:25:54 <johnthetubaguy> in theory it should make some things faster, but we are running with less RAM
15:25:55 <BobBall> indeed
15:26:02 <BobBall> and LOTS of threads for each too
15:26:13 <johnthetubaguy> well, if only python did that sort of thing
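For the worker-count idea mentioned here, a nova.conf fragment along the following lines could cap the number of forked service processes and so reduce memory pressure on the 4 GB domU. This is a sketch only: the option names are the ones believed available in the Nova of that era, and the values are illustrative rather than anything tested against the CI setup.

    # Hypothetical nova.conf fragment for a memory-constrained devstack domU.
    # Worker counts otherwise default to the host CPU count, so capping them
    # forks fewer Python processes per service.
    [DEFAULT]
    osapi_compute_workers = 2
    metadata_workers = 1

    [conductor]
    workers = 2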
15:26:19 <BobBall> ok - I've got to run
15:26:22 <BobBall> talk next week.
15:26:23 <johnthetubaguy> me too
15:26:25 <johnthetubaguy> thanks
15:26:29 <johnthetubaguy> #endmeeting