15:01:55 <johnthetubaguy> #startmeeting XenAPI
15:01:56 <openstack> Meeting started Wed Jan 22 15:01:55 2014 UTC and is due to finish in 60 minutes.  The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:57 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:59 <openstack> The meeting name has been set to 'xenapi'
15:02:02 <johnthetubaguy> hi everyone
15:02:05 <johnthetubaguy> who is around today?
15:02:10 <leifz> hello
15:02:15 <thouveng> hello
15:02:40 <johnthetubaguy> leifz: this is your first time right? want to give yourself a quick intro
15:02:44 <BobBall> sorry for being late
15:02:56 <johnthetubaguy> BobBall: we were just saying hello to each other
15:03:02 <BobBall> Hello!
15:03:04 <johnthetubaguy> we have leifz joining us
15:03:09 <BobBall> we've just had a fire alarm
15:03:13 <BobBall> Mate will be joining us shortly I'm sure
15:03:15 <johnthetubaguy> lol, perfect timing!
15:03:25 <leifz> Actually it's leif as in Christopher Lefelhocz from Rackspace.  I was on last week :-)
15:03:26 <BobBall> but it's messed up our schedule so I'm sure he's not had his reminder!
15:03:32 <leifz> And most of you already know me :-).
15:03:39 <johnthetubaguy> its quite risky in your building, not sure those stairs will take everyone all at once
15:03:49 <BobBall> Last week you did have a very obvious nickname though leifz ... Guestxxyxyz or something ;)
15:03:53 <leifz> Been at the Rack for 6 months working with John on whatever he decides is relevant :-)
15:04:13 <johnthetubaguy> lol, OK, so I didn't make it last week, but was probably in the same room as you, or maybe I was in a car, don't remember
15:04:16 <johnthetubaguy> anyways
15:04:20 <johnthetubaguy> lets get on with it
15:04:26 <johnthetubaguy> #topic blueprints
15:04:30 <leifz> in a car getting coffee.
15:05:14 <johnthetubaguy> yeah, getting coffee, that sounds like what I would have been doing
15:05:15 <johnthetubaguy> anyways
15:05:21 <johnthetubaguy> how are the blueprints going?
15:05:27 <thouveng> I pushed a change to add a mechanism for attaching a PCI device to a VM
15:05:30 <BobBall> The only BP going is thouveng's
15:05:34 <thouveng> #link https://review.openstack.org/#/c/67125
15:05:35 <johnthetubaguy> PCI passthrough folks seemed a bit worried we were not talking to them
15:05:51 <johnthetubaguy> I had the resize up of ephemeral disks blueprint merge this week
15:05:59 <BobBall> I had a chat with ijw about it all
15:06:07 <BobBall> he seemed quite happy
15:06:14 <BobBall> will tell us as/when changes that affect drivers land
15:06:23 <BobBall> but the vast majority of the changes don't affect driver-world
15:06:40 <johnthetubaguy> so, I would love that not to be a new step, because it changes the percentage progress, but it looks like a good start
15:06:43 <thouveng> On my side I don't have a strong opinion about what should be done :(
15:06:59 <johnthetubaguy> yeah, we don't need to worry about what they are up to
15:07:06 <johnthetubaguy> its all driver side, like Bob said
15:07:13 <thouveng> Ok I can remove the @step decorator.
15:07:26 <johnthetubaguy> yeah, its just we have 10 steps at the moment
15:07:29 <BobBall> why do we care about the percentage progress john?
15:07:34 <BobBall> oh... so you get nice numbers haha
15:07:40 <johnthetubaguy> yeah
15:07:44 <BobBall> you could always have the RAX rounding to the closest 10% ;)
15:07:54 <johnthetubaguy> its really not accurate, so I kinda like the rounding
15:08:08 <johnthetubaguy> yeah, we should probably get the API to do that, or change the code
15:08:15 <johnthetubaguy> but I kinda like it
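For context: the @step decorator being discussed counts the spawn steps and reports instance progress as a percentage, so adding or removing a step changes the reported granularity - 10 steps gives the nice 10% increments mentioned above. A minimal sketch of the pattern, with illustrative names rather than the actual nova code:

    import functools

    def make_step_decorator(report_progress, total_steps):
        state = {'current': 0}

        def step(func):
            # each decorated spawn step bumps the counter on return and
            # reports current/total as a percentage, so total_steps sets
            # the granularity of the progress numbers
            @functools.wraps(func)
            def inner(*args, **kwargs):
                rv = func(*args, **kwargs)
                state['current'] += 1
                report_progress(100 * state['current'] // total_steps)
                return rv
            return inner
        return step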
15:08:18 <johnthetubaguy> so...
15:08:19 <matel> Hello
15:08:29 <matel> Sorry for being late, I had a meeting.
15:08:33 <BobBall> We could indeed change the API... I think the more steps the better
15:09:03 <BobBall> finer grained reporting is better than chunky :)
15:09:14 <johnthetubaguy> true, but it all lies right now
15:09:25 <BobBall> Anyway - without the API rounding, I have no view on the number of steps
15:09:30 <BobBall> which step would you like it merged with, john?
15:09:50 <johnthetubaguy> yeah, I guess attach_disks could become attach_devices
15:10:07 <johnthetubaguy> setup_network attaches vifs though
15:10:12 <johnthetubaguy> but its close
15:10:25 <BobBall> ok - so rename attach_disks and put the PCI stuff in there
15:10:44 <thouveng> it is ok for me
15:11:58 <johnthetubaguy> cool
15:12:02 <johnthetubaguy> one more quick thing
15:12:16 <johnthetubaguy> I would put all that code into a vm_utils method, rather than have it sitting in vm_ops
15:12:22 <johnthetubaguy> it will be way easier to test that way
15:12:43 <johnthetubaguy> anyways, its minor stuff
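What John is suggesting here: keep the PCI attach logic as a standalone function in vm_utils, so it can be unit-tested with a stubbed session, and have the renamed attach_devices step in vm_ops call it. A hypothetical sketch - the helper name and the other-config key are assumptions, not the actual patch:

    # vm_utils.py - module-level helper, testable with a faked session
    def attach_pci_devices(session, vm_ref, pci_addresses):
        # hypothetical: record the PCI addresses in the VM's other-config,
        # which XenServer reads at boot to pass the devices through
        value = ','.join('0/%s' % addr for addr in pci_addresses)
        session.call_xenapi('VM.remove_from_other_config', vm_ref, 'pci')
        session.call_xenapi('VM.add_to_other_config', vm_ref, 'pci', value)

The step in vm_ops then shrinks to a one-line call, and the tests only need a fake session object rather than a full VMOps.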
15:13:13 <johnthetubaguy> any more on blueprints?
15:13:24 <thouveng> just a question
15:13:28 <BobBall> last week we agreed this would be re-targeted to I-3
15:13:28 <johnthetubaguy> sure
15:13:40 <thouveng> should I open them for review as bob said?
15:13:50 <johnthetubaguy> sure, its easier than draft
15:14:00 <johnthetubaguy> there is a "Work in progress" button if its not ready
15:14:10 <thouveng> ok I will do that thanks
15:14:12 <BobBall> I think the base patch is ready
15:14:16 <johnthetubaguy> once there are unit tests up for your code, I would go out of draft
15:14:18 <BobBall> the attach patch doesn't have a test
15:14:41 <thouveng> Yes the attach patch is a work in progress
15:14:59 <johnthetubaguy> yeah, just have it non-draft but marked as work-in-progress
15:15:02 <johnthetubaguy> its easier that way
15:15:35 <thouveng> ok, that's all for me
15:15:41 <johnthetubaguy> I have a few minor blueprints up for review, but nothing worth pointing out
15:15:51 <johnthetubaguy> so, lets move on...
15:15:54 <johnthetubaguy> #topic QA
15:16:08 <johnthetubaguy> how's the zuul work coming on?
15:16:15 <johnthetubaguy> we are very very close to the wire here
15:16:30 <ijw> My ears are burning.
15:17:25 <johnthetubaguy> ijw: don't panic, just saying XenAPI PCI passthrough stuff should not affect the stuff you guys are doing
15:17:34 <johnthetubaguy> BobBall: do we have an update on Zuul stuff
15:17:40 <BobBall> :D
15:17:41 <ijw> Yeah, no trouble, if you want me for AOB I'm around, finish your stuff
15:17:43 <BobBall> matel first
15:18:11 <matel> Oh, Ok.
15:18:18 <matel> nodepool: Need to test changes + have to make sure we give some time for the VM to shut down before we snapshot it.
15:18:32 <matel> localrc: I am working on it atm, this is the branch: https://github.com/matelakat/devstack-gate/tree/xenserver-integration
15:19:05 <johnthetubaguy> OK, do we have tempest running on some virtualized XenServer now, like statically?
15:19:06 <matel> I am hoping to reach a successful test run this week.
15:19:20 <johnthetubaguy> matel: that would be totally awesome
15:19:31 <matel> explain statically
15:19:53 <johnthetubaguy> erm, create a XenServer VM in rax cloud, then manually run tempest on that setup
15:20:07 <matel> So here is what it looks like:
15:20:35 <matel> I have some scripts here: https://github.com/citrix-openstack/xenapi-os-testing
15:20:59 <matel> These are "emulating" nodepool - to the best of my knowledge
15:21:17 <johnthetubaguy> but do you have tempest inside a XenServer in the rackspace cloud running now?
15:21:26 <johnthetubaguy> oh, emulating nodepool is good too
15:21:29 <matel> Yes of course.
15:21:36 <johnthetubaguy> how long did that take?
15:21:38 <matel> It is running tempest.
15:21:56 <johnthetubaguy> full tempest that is?
15:22:14 <matel> I don't have measurements yet, because the tests were failing - apparently there were some iptables rules on the node that prevented dom0 from talking back to the domU
15:22:29 <matel> Wait a minute, I might have something.
15:22:40 <matel> https://github.com/citrix-openstack/xenapi-os-testing/issues/5
15:22:58 <matel> But as I said, this test run involves loads of timeouts.
15:23:04 <matel> Ran 279 tests in 1997.157s
15:23:04 <matel> FAILED (failures=6, skipped=58)
15:23:15 <matel> This is a smoke run.
15:23:27 <johnthetubaguy> oh boy, thats smoke
15:23:40 <johnthetubaguy> well lets hope some long timeouts bloated the running time
15:23:41 <matel> So the other issue is that the ephemeral storage can't be used from the HVM machine.
15:23:55 <johnthetubaguy> ah right, because its device id is too high
15:23:57 <matel> So that could be an issue, but let's hope it won't be.
15:24:23 <matel> I just did another smoke run:
15:24:30 <matel> Ran 284 tests in 2214.197s
15:24:30 <matel> FAILED (failures=6, skipped=59)
15:24:53 <johnthetubaguy> OK, so getting close
15:24:58 <johnthetubaguy> thats awesome stuff
15:25:14 <matel> So, let me summarize the todos:
15:25:21 <johnthetubaguy> cool
15:25:50 <BobBall> Also note that the gate has been very flaky lately
15:26:00 <BobBall> and at least one of those failures is the one that gate kept hitting
15:26:10 <BobBall> that was fixed yesterday(?) - the BFV patterns test
15:26:13 <matel> 1) Update config patches, 2) test nodepool, 3) make sure nodepool snapshots a stopped VM, 4) localrc (devstack-gate)
15:26:21 <johnthetubaguy> BobBall: yes, true, kinda nice to see the same failures though
15:26:26 <BobBall> ofc
15:26:48 <johnthetubaguy> matel: good summary, thanks
15:27:42 <johnthetubaguy> for marketing reasons, can you do a blog post on this after you get the first full run in your simulated environment, then post that to the dev mailing lists with the [Nova] tag?
15:28:05 <johnthetubaguy> particularly if that's towards the end of the week
15:28:35 <johnthetubaguy> anyways, lets move on
15:28:43 <johnthetubaguy> #topic Open Discussion
15:28:45 <BobBall> I'm starting to look at nodepool btw
15:28:58 <johnthetubaguy> any other things people want to raise?
15:29:03 <BobBall> Mate has a partial setup - but I'm crossing the I's and dotting the T's
15:29:15 <BobBall> It's a fallback because the -infra team are being very slow with their reviews
15:29:26 <johnthetubaguy> that sounds like a good plan, test things to see how it hangs together
15:29:43 <BobBall> so we might have to run a nodepool and watch the gerrit stream ourselves, running in our RS account rather than the -infra one
15:29:55 <ijw> On the PCI side, as a heads-up: the driver end of things will change pci_whitelist and the stats reporting. If you implement a new VIF plugging type, you get PCI for Neutron. The rest should be much as before.
15:29:57 <matel> Realistically, I think we are talking about weeks.
15:30:21 <BobBall> thanks ijw
15:30:25 <ijw> johnthetubaguy and I are not in agreement on the implementation but the bits in dispute don't change the effects on the driver end.
15:31:49 <BobBall> johnthetubaguy: worth calling out your API patch?
15:31:59 <johnthetubaguy> ijw: agreed
15:32:08 <johnthetubaguy> which patch was that?
15:32:14 <johnthetubaguy> I lose track
15:32:36 <BobBall> the one we were discussing
15:33:23 <BobBall> can't find it now
15:33:29 <BobBall> https://review.openstack.org/#/c/66493/
15:33:30 <johnthetubaguy> yeah, I don't remember what it was about now
15:33:40 <johnthetubaguy> oh damn, yes
15:33:43 <johnthetubaguy> thanks
15:34:14 <johnthetubaguy> It's an idea Bob had for making the session code a bit easier to write; I then mangled the idea in my head and came up with the above patch
15:34:38 <johnthetubaguy> ideas welcome on that, before I try and convert over all the calls to XenAPI into the new format
15:34:58 <johnthetubaguy> it will be lots of patches, not one massive patch, just FYI
15:35:07 <BobBall> Two main aims were to make it easier to test and to have auto-complete in vi pick up the methods ;)
15:36:05 <johnthetubaguy> yeah, and I kinda want it to be easier to discover what xenapi calls other people already make, and to make it easy to find workarounds, like the ones for VBD.unplug, etc
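The shape of the idea in https://review.openstack.org/#/c/66493/ is roughly this: wrap each XenAPI class in a small session object so calls read as session.VBD.unplug(ref) instead of session.call_xenapi('VBD.unplug', ref), which gives autocomplete something to find and one obvious home for workarounds. A minimal sketch of the pattern, not the patch itself:

    class XenAPISessionObject(object):
        # proxy one XenAPI class (VM, VBD, VDI, ...) on a session
        def __init__(self, session, name):
            self.session = session
            self.name = name

        def __getattr__(self, method):
            # session.VBD.plug(ref) becomes call_xenapi('VBD.plug', ref)
            def call(*args):
                return self.session.call_xenapi(
                    '%s.%s' % (self.name, method), *args)
            return call


    class VBD(XenAPISessionObject):
        def __init__(self, session):
            super(VBD, self).__init__(session, 'VBD')

        def unplug(self, vbd_ref):
            # an explicit method overrides __getattr__, so known
            # workarounds (e.g. retrying a busy unplug) live here
            return self.session.call_xenapi('VBD.unplug', vbd_ref)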
15:36:32 <thouveng> cool :) I will have a closer look.
15:36:39 <johnthetubaguy> #help please give feedback on https://review.openstack.org/#/c/66493/ before all the follow up patches get uploaded
15:36:46 <johnthetubaguy> cool, any more for any more?
15:37:06 <BobBall> matel: will you look too? :)
15:37:06 <BobBall> yes
15:37:14 <BobBall> could you poke some of your core buddies johnthetubaguy ?
15:37:16 <johnthetubaguy> I want to test if not encrypting images makes it faster to download: https://review.openstack.org/#/c/68363/
15:37:23 <johnthetubaguy> BobBall: yeah, I should do
15:37:24 <matel> BobBall: Ok.
15:37:31 <BobBall> the performance patches that you said you wanted to get in are dangling again
15:38:02 <johnthetubaguy> BobBall: whats up, need +A and +2, or just got rebased?
15:38:31 <BobBall> https://review.openstack.org/#/c/58754/
15:38:42 <BobBall> might just need re-+a
15:38:49 <BobBall> oh no
15:38:55 <johnthetubaguy> I will take a peek
15:38:58 <BobBall> one +a from matt dietz - so you could push it through
15:39:07 <BobBall> if you're happy of course ;)
15:39:39 <BobBall> the others in the stream all have a +a but have been rebased I think (tick means approved, right?)
15:39:41 <johnthetubaguy> I +2'd it in the past, so I suspect it will just take a quick look
15:40:02 <johnthetubaguy> yeah, a tick needs to be in the approve column though
15:40:19 <BobBall> it is for 3 of the patches
15:40:20 <johnthetubaguy> if you can rebase them all right now, I can take a look in 5 mins and see how they look?
15:40:30 <BobBall> I'll rebase, sure
15:40:34 <johnthetubaguy> thanks
15:40:44 <johnthetubaguy> just make sure they are not going to fail in the gate for silly reasons
15:40:53 <johnthetubaguy> since the gate is so damn fragile
15:41:34 <BobBall> it's joyous isn't it
15:41:38 <ijw> More so than usual at this point in time - they've been talking about gate problems all day
15:41:44 <BobBall> wow
15:41:47 <BobBall> 104 in the gate
15:41:51 <BobBall> not worth +A'ing
15:41:55 <johnthetubaguy> yeah, true
15:42:04 <BobBall> maybe we'll check again tomorrow :) but better to keep gate churn down
15:42:07 <johnthetubaguy> the icehouse-2 cut is soon, right?
15:42:16 <BobBall> I'm surprised they haven't purged the +A queue!
15:42:36 <johnthetubaguy> sure, I will remember to get back to your patches when the queue is below 50 or something!
15:42:44 <BobBall> I'll rebase now anyway
15:43:01 <ijw> i2 is Friday, I think?
15:43:05 <ijw> But yes, imminent
15:43:28 <BobBall> did you want those patches in I-2? guess it doesn't matter with you guys doing continuous deployment stuff
15:44:05 <ijw> Mad, mad fools
15:44:11 <ijw> ;)
15:44:14 <johnthetubaguy> well, I kinda wanted it soon, after I-2 is fine
15:44:25 <BobBall> could do with +A too *grin* gotta love a broken gate.  https://review.openstack.org/#/c/63778/
15:45:33 <johnthetubaguy> ijw: well, if you make frequent changes, it's easier to find what you broke
15:46:06 <johnthetubaguy> cool, so any more for any more?
15:46:38 <matel> Nope
15:46:40 <BobBall> not this week
15:46:43 <johnthetubaguy> sweet
15:46:47 <thouveng> no
15:46:56 <johnthetubaguy> talk to you all next week
15:46:58 <BobBall> next week I hope to have news on both nodepool and xs-c on debian!
15:47:04 <johnthetubaguy> :)
15:47:09 <johnthetubaguy> #endmeeting