15:06:46 <johnthetubaguy> #startmeeting XenAPI
15:06:47 <openstack> Meeting started Wed Jul 16 15:06:46 2014 UTC and is due to finish in 60 minutes.  The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:06:48 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:06:50 <openstack> The meeting name has been set to 'xenapi'
15:06:57 <johnthetubaguy> howdy all
15:07:07 <johnthetubaguy> #topic Mid cycle meet up
15:07:09 <BobBall> howdy
15:07:20 <johnthetubaguy> just thinking, who is going to the meet up?
15:07:26 <johnthetubaguy> I am heading over for that
15:07:31 * BobBall keeps his hands in his pockets
15:07:37 <johnthetubaguy> the nova mid cycle I mean
15:07:44 <johnthetubaguy> Okay, just checking
15:07:45 <BobBall> which makes it more impressive that I can continue to type
15:07:52 * johnthetubaguy giggles
15:07:56 <johnthetubaguy> #topic CI
15:08:01 <johnthetubaguy> OK, so how is the CI this week
15:08:05 <BobBall> Fun
15:08:20 <BobBall> garyk very helpfully pointed out a break that meant I had to rebase devstack-gate
15:08:37 <BobBall> I'm thinking of a cronjob to rebase d-g which would be exciting
15:08:51 <BobBall> or perhaps what would be better is a merge... Hmmm... anyway
15:09:01 <BobBall> it meant we were broken for around 2 hours
15:09:07 <BobBall> which is a shocking length of time
15:09:22 <BobBall> Apart from that, we're hitting the bugs that are also seen by the gate
15:09:22 <johnthetubaguy> ah
15:09:25 <BobBall> so all's good
15:09:43 <johnthetubaguy> ah, interesting, I thought we might avoid some of the gate bugs
15:09:46 <johnthetubaguy> which ones are you seeing?
15:10:39 <BobBall> https://bugs.launchpad.net/openstack-ci/+bug/1286818 for example
15:10:41 <uvirtbot> Launchpad bug 1286818 in openstack-ci "Ubuntu package archive periodically inconsistent causing gate build failures" [Low,Triaged]
15:10:45 <BobBall> and https://bugs.launchpad.net/devstack/+bug/1340660
15:10:48 <uvirtbot> Launchpad bug 1340660 in devstack "Apache failed to start in the gate" [Undecided,New]
15:10:58 <BobBall> (the first one is a known Rackspace problem! :) )
15:11:49 <johnthetubaguy> yeah, I never really understand that
15:12:05 <johnthetubaguy> people don't seem to agree that's an issue, etc.
15:12:24 <BobBall> really?
15:12:27 <BobBall> I get it a lot
15:12:37 <johnthetubaguy> well, you are hitting the Ubuntu mirrors, not the Rackspace mirrors
15:12:41 <BobBall> you're talking about the Rackspace Ubuntu mirror sometimes being out of date, right?
15:13:06 <BobBall> no - mirror.rackspace.com/ubuntu
15:13:12 <johnthetubaguy> I think you need to explicitly point to the rackspace mirror, when I checked up on that
15:13:15 <BobBall> that's the hurtful one
15:13:19 <johnthetubaguy> hmm, that sounds like our mirror, lol
15:13:20 <BobBall> the default is the rackspace mirror
15:14:00 <johnthetubaguy> but that exception trace in the bug doesn't list the Rackspace mirror, I guess that's what was confusing
15:14:06 <BobBall> remembering, we actually point _away_ from the Rackspace mirror so we don't hit that problem (that one is gate only) - the second one is the one I was thinking of, but I just hit those two in check jobs, which is why I was thinking about it
15:14:44 <BobBall> https://bugs.launchpad.net/openstack-ci/+bug/1251117
15:14:45 <uvirtbot> Launchpad bug 1251117 in openstack-ci "Rackspace package mirror periodically inconsistent causing gate build failures" [Low,Triaged]
15:14:50 <BobBall> That's the job I should have pasted, sorry
15:15:00 <BobBall> -job + bug
15:15:37 <johnthetubaguy> ah, OK, so I will try and raise that internally; last time someone dug into that, I got told people were not actually hitting the Rackspace mirror
15:15:47 <johnthetubaguy> it could be a network issue on the way there I guess
15:16:44 <johnthetubaguy> anyways, thanks for the extra detail
15:17:04 <johnthetubaguy> I was just checking we didn't see loads of the ssh issues
15:17:10 <BobBall> nah
15:17:11 <johnthetubaguy> I was hoping we'd sidestep those ones
15:17:12 <BobBall> not this week
15:17:56 <johnthetubaguy> cool
15:18:01 <johnthetubaguy> any more on CI?
15:18:09 <johnthetubaguy> any news on getting those extra tests enabled yet?
15:18:30 <BobBall> waiting on a devstack review
15:19:07 <johnthetubaguy> OK, you got a link?
15:19:19 <johnthetubaguy> I can see if my +1 helps
15:19:37 <BobBall> https://review.openstack.org/#/c/107345/
15:20:11 <johnthetubaguy> does that image work OK?
15:20:26 <johnthetubaguy> doesn't look like a VHD one, I guess it's a raw one?
15:20:37 <BobBall> it's the three part image
15:20:41 <BobBall> it's what we use in our internal CI
15:20:54 <johnthetubaguy> so what about removing the github link altogether?
15:20:59 <BobBall> https://github.com/citrix-openstack/qa/blob/master/install-devstack-xen.sh#L428
15:21:14 <BobBall> We want to test VHD, so I'd argue against removing it
15:21:51 <BobBall> Both formats have been broken by someone else's changes in the past - so we might as well have both in there because tempest will (by chance) test both for us
15:21:55 <johnthetubaguy> ah, so you want to have both, that makes sense
15:22:09 <BobBall> the default is the VHD image
15:22:23 <BobBall> but there is a specific class of tempest tests (the ones that upload images to glance) that breaks that assertion
15:22:26 <johnthetubaguy> so the test that needs the latest image will pick that one?
15:22:47 <BobBall> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/manager.py#n490
15:23:05 <BobBall> Specifically, see the except IOError: clause
15:23:11 <BobBall> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/manager.py#n504
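For context, the tempest code path linked above tries the single-file image first and only falls back to the three-part (kernel/ramdisk/disk) upload when that file is missing. A rough Python sketch of the pattern, assuming a hypothetical upload_image() helper and illustrative file names rather than the real tempest API:

    import os

    def create_scenario_image(upload_image, img_dir, img_file):
        """Hedged sketch of tempest's scenario image creation fallback."""
        try:
            # First choice: the single-file image (qcow2 in the gate, VHD for XenAPI).
            return upload_image('scenario-img', os.path.join(img_dir, img_file))
        except IOError:
            # Single-file image not on disk: fall back to the three-part (UEC)
            # image - kernel (aki), ramdisk (ari) and machine image (ami).
            kernel_id = upload_image('scenario-aki', os.path.join(img_dir, 'vmlinuz'))
            ramdisk_id = upload_image('scenario-ari', os.path.join(img_dir, 'initrd'))
            return upload_image('scenario-ami', os.path.join(img_dir, 'blank.img'),
                                kernel_id=kernel_id, ramdisk_id=ramdisk_id)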
15:24:31 <BobBall> but all of that is too much info for the commit message :P
15:25:29 <johnthetubaguy> yeah
15:25:34 <johnthetubaguy> maybe
15:25:42 <johnthetubaguy> I just can't see why yet
15:25:44 <BobBall> Oh - I know it's not CI stuff, but can you comment on https://bugs.launchpad.net/bugs/1204165 ?
15:25:46 <uvirtbot> Launchpad bug 1204165 in nova "xenapi: vm_utils.ensure_free_mem does not take into account overheads" [Low,Triaged]
15:25:52 <BobBall> can't see why what?
15:26:42 <johnthetubaguy> well, just not sure why it needs the three part image for the test
15:26:47 <johnthetubaguy> vs a qcow2 image
15:26:57 <johnthetubaguy> feels very hypervisor specific
15:27:08 <BobBall> because it can't find qcow2, so it assumes everyone must have 3-part? or libvirt supports both, so it tries qcow and falls back to 3-part?
15:27:11 <BobBall> yes
15:27:14 <BobBall> it is exceptionally hypervisor specific
15:27:19 <BobBall> and perhaps the answer is to fix tempest
15:27:27 <johnthetubaguy> yeah, I think so
15:27:41 <BobBall> but the easy answer for us - and a very useful one for testing - is to have the 3 part image tested by that code path
15:28:00 <BobBall> as I said before, people sometimes break 3-part so we should test it somewhere; and we want to default to VHD
15:28:06 <BobBall> so this change is the right one for us
15:29:00 <johnthetubaguy> OK, but why is this stopping you from adding some tests - because that bit of tempest can't create the correct image?
15:29:25 <BobBall> without the 3-part image tempest fails because it's not there and we don't support/provide qcow2
15:29:51 <BobBall> http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/34/96734/2/17227/testr_results.html.gz
15:30:00 <BobBall> IOError: [Errno 2] No such file or directory: '/opt/stack/new/devstack/files/images/cirros-0.3.2-x86_64-uec/cirros-0.3.2-x86_64-vmlinuz'
15:30:08 <johnthetubaguy> I think I kinda get that now
15:30:13 <johnthetubaguy> anyways +1 on that
15:30:27 <BobBall> I'll ask in -infra
15:30:36 <johnthetubaguy> better to fix tempest, though no idea which is quicker to review
15:30:38 <johnthetubaguy> cool
15:30:45 <johnthetubaguy> OK, any more?
15:30:46 <BobBall> no, better _not_ to fix tempest
15:30:55 <BobBall> as I said, we want UEC images to be tested
15:30:59 <BobBall> and they are not tested anywhere else
15:31:20 <BobBall> so if we can test them here because tempest is hypervisor specific then that's just fine by me
15:31:21 <johnthetubaguy> I kinda think we should rip that feature out of the system myself, but Ok
15:31:43 <BobBall> perhaps
15:32:04 <BobBall> Yes - any more: https://bugs.launchpad.net/bugs/1204165
15:32:05 <uvirtbot> Launchpad bug 1204165 in nova "xenapi: vm_utils.ensure_free_mem does not take into account overheads" [Low,Triaged]
15:32:08 <BobBall> Please comment on the bug :)
15:32:15 <BobBall> it's one you raised and the question is whether it's still relevant
15:32:28 <johnthetubaguy> I just added a comment
15:33:01 <BobBall> ah then no :(
15:33:29 <johnthetubaguy> yeah, we still need that fixing, I should work through some of those soon
15:33:49 <johnthetubaguy> I forgot about that one
15:34:03 <BobBall> fair enough
15:34:45 <johnthetubaguy> I think the real answer is to remove that check, or at least move it, but that's a different conversation
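For reference, the gist of bug 1204165 is that the free-memory check compares only the flavour's memory_mb against the host's free memory, ignoring the per-VM overhead (shadow memory and the like). A minimal sketch of an overhead-aware check, using hypothetical helper and exception names rather than the real nova/xenapi ones:

    class InsufficientHostMemory(Exception):
        """Hypothetical stand-in for nova's real error type."""

    def ensure_free_mem(host_free_mb, instance_memory_mb, overhead_mb):
        # Including overhead_mb in the requirement is the change the bug asks
        # for; today the check effectively uses instance_memory_mb alone.
        required_mb = instance_memory_mb + overhead_mb
        if host_free_mb < required_mb:
            raise InsufficientHostMemory(
                "need %d MB (incl. %d MB overhead) but only %d MB free"
                % (required_mb, overhead_mb, host_free_mb))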
15:34:49 <johnthetubaguy> #topic Bugs
15:34:54 <johnthetubaguy> I guess we did that already
15:35:02 <johnthetubaguy> #topic Open Discussion
15:35:03 <BobBall> well the one I was talking about yeah...
15:35:05 <johnthetubaguy> any more for any more
15:35:17 <BobBall> have you/anyone looked at the updated ocaml-vhd rpm?
15:35:20 <BobBall> for the snapshot bug?
15:35:45 <johnthetubaguy> I don't think there is an easy way for us to test that; the bug is not very reproducible
15:36:12 <BobBall> Shame...
15:36:17 <johnthetubaguy> or at least, we don't have the information on how to reproduce it right now
15:36:29 <BobBall> well I thought you suggested it should be 100% reproducible :)
15:36:32 <johnthetubaguy> we've done loads and they worked fine, all seemingly doing the same thing
15:37:09 <johnthetubaguy> on one particular VM it was reproducible, but we worked around things for that VM I think
15:37:22 <BobBall> shame you didn't copy the VHDs
15:38:07 <johnthetubaguy> yeah, dunno what they are doing, it's not my team working on that right now
15:38:22 <johnthetubaguy> they may have them, but that's customer data, so I suspect we couldn't do that
15:38:43 <johnthetubaguy> dunno the details of that case right now
15:38:49 <BobBall> ok
15:38:55 <johnthetubaguy> anyways, any more for any more?
15:39:55 <johnthetubaguy> I guess we are done
15:40:02 <johnthetubaguy> BobBall: thanks, catch you next week
15:40:05 <johnthetubaguy> #endmeeting