15:01:55 #startmeeting XenAPI
15:01:56 Meeting started Wed Jan 22 15:01:55 2014 UTC and is due to finish in 60 minutes. The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:57 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:59 The meeting name has been set to 'xenapi'
15:02:02 hi everyone
15:02:05 who is around today?
15:02:10 hello
15:02:15 hello
15:02:40 leifz: this is your first time, right? want to give yourself a quick intro?
15:02:44 sorry for being late
15:02:56 BobBall: we were just saying hello to each other
15:03:02 Hello!
15:03:04 we have leifz joining us
15:03:09 we've just had a fire alarm
15:03:13 Mate will be joining us shortly I'm sure
15:03:15 lol, perfect timing!
15:03:25 Actually it's leif as in Christopher Lefelhocz from Rackspace. I was on last week :-)
15:03:26 but it's messed up our schedule so I'm sure he's not had his reminder!
15:03:32 And most of you already know me :-).
15:03:39 it's quite risky in your building, not sure those stairs will take everyone all at once
15:03:49 Last week you did have a very obvious nickname though leifz ... Guestxxyxyz or something ;)
15:03:53 Been at the Rack for 6 months working with John on whatever he decides is relevant :-)
15:04:13 lol, OK, so I didn't make it last week, but was probably in the same room as you, or maybe I was in a car, don't remember
15:04:16 anyways
15:04:20 let's get on with it
15:04:26 #topic blueprints
15:04:30 in a car getting coffee.
15:05:14 yeah, getting coffee, that sounds like what I would have been doing
15:05:15 anyways
15:05:21 how are the blueprints going?
15:05:27 I pushed a change to add a mechanism to attach a PCI device to a VM
15:05:30 The only BP going is thouveng's
15:05:34 #link https://review.openstack.org/#/c/67125
15:05:35 PCI passthrough folks seemed a bit worried we were not talking to them
15:05:51 I had the resize-up of ephemeral disks blueprint merge this week
15:05:59 I had a chat with ijw about it all
15:06:07 he seemed quite happy
15:06:14 will tell us as/when changes that affect drivers land
15:06:23 but the vast majority of the changes don't affect driver-world
15:06:40 so, I would love that to not be a new step, because it changes the percentage progress, but it looks like a good start
15:06:43 On my side I don't have a strong opinion about what should be done :(
15:06:59 yeah, we don't need to worry about what they are up to
15:07:06 it's all driver side, like Bob said
15:07:13 Ok I can remove the @step decorator.
15:07:26 yeah, it's just that we have 10 steps at the moment
15:07:29 why do we care about the percentage progress john?
15:07:34 oh... so you get nice numbers haha
15:07:40 yeah
15:07:44 you could always have the RAX rounding to the closest 10% ;)
15:07:54 it's really not accurate, so I kinda like the rounding
15:08:08 yeah, we should probably get the API to do that, or change the code
15:08:15 but I kinda like it
15:08:18 so...
15:08:19 Hello
15:08:29 Sorry for being late, I had a meeting.
15:08:33 We could indeed change the API... I think the more steps the better
15:09:03 finer-grained reporting is better than chunky :)
15:09:14 true, but it all lies right now
15:09:25 Anyway - without the API rounding, I have no view on the number of steps
15:09:30 which step would you like it merged with, john?
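
The step/percentage exchange above refers to the spawn-step pattern in the XenAPI driver, where each completed step bumps the instance progress by an equal share of 100%, so adding a step changes every reported percentage. A minimal sketch of that pattern, with illustrative names rather than the actual driver code (the question of which step the PCI work should merge into is answered just below):

    def step(fn):
        # each completed step bumps progress by an equal share of 100%
        def decorated(self, *args, **kwargs):
            result = fn(self, *args, **kwargs)
            self._completed_steps += 1
            progress = int(self._completed_steps * 100 / self._total_steps)
            print("instance progress: %d%%" % progress)  # stand-in for an API update
            return result
        return decorated

    class SpawnSteps(object):
        _total_steps = 3  # adding another @step changes every reported percentage

        def __init__(self):
            self._completed_steps = 0

        @step
        def create_disks(self):
            pass

        @step
        def attach_devices(self):  # hypothetical merged disks + PCI step
            pass

        @step
        def boot_instance(self):
            pass

With three steps the calls report 33%, 66% and 100%; adding an eleventh step to the real driver's ten is what would shift all the existing percentages.
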
15:09:50 yeah, I guess attach_disks could become attach_devices
15:10:07 setup_network attaches vifs though
15:10:12 but it's close
15:10:25 ok - so rename attach_disks and put the PCI stuff in there
15:10:44 it is OK for me
15:11:58 cool
15:12:02 one more quick thing
15:12:16 I would put all that code into a vm_utils method, rather than have it sitting in vm_ops
15:12:22 it will be way easier to test that way
15:12:43 anyways, it's minor stuff
15:13:13 any more on blueprints?
15:13:24 just a question
15:13:28 last week we agreed this would be re-targeted to I-3
15:13:28 sure
15:13:40 should I open them for review as Bob said?
15:13:50 sure, it's easier than draft
15:14:00 there is a "Work in progress" button if it's not ready
15:14:10 ok I will do that, thanks
15:14:12 I think the base patch is ready
15:14:16 once there are unit tests up for your code, I would go out of draft
15:14:18 the attach patch doesn't have a test
15:14:41 Yes, the attach patch is a work in progress
15:14:59 yeah, just have it non-draft but marked as work-in-progress
15:15:02 it's easier that way
15:15:35 ok, that's all for me
15:15:41 I have a few minor blueprints up for review, but nothing worth pointing out
15:15:51 so, let's move on...
15:15:54 #topic QA
15:16:08 how's the Zuul work coming on?
15:16:15 we are very very close to the wire here
15:16:30 My ears are burning.
15:17:25 ijw: don't panic, just saying XenAPI PCI passthrough stuff should not affect the stuff you guys are doing
15:17:34 BobBall: do we have an update on the Zuul stuff?
15:17:40 :D
15:17:41 Yeah, no trouble, if you want me for AOB I'm around, finish your stuff
15:17:43 matel first
15:18:11 Oh, Ok.
15:18:18 nodepool: need to test changes + have to make sure we give the VM some time to shut down before snapshotting it.
15:18:32 localrc: I am working on it atm, this is the branch: https://github.com/matelakat/devstack-gate/tree/xenserver-integration
15:19:05 OK, do we have tempest running on some virtualized XenServer now, like statically?
15:19:06 I am hoping to reach a successful test run this week.
15:19:20 matel: that would be totally awesome
15:19:31 explain statically
15:19:53 erm, create a XenServer VM in the rax cloud, then manually run tempest on that setup
15:20:07 So here is what it looks like:
15:20:35 I have some scripts here: https://github.com/citrix-openstack/xenapi-os-testing
15:20:59 These are "emulating" nodepool - to the best of my knowledge
15:21:17 but do you have tempest inside a XenServer in the Rackspace cloud running now?
15:21:26 oh, emulating nodepool is good too
15:21:29 Yes, of course.
15:21:36 how long did that take?
15:21:38 It is running tempest.
15:21:56 full tempest, that is?
15:22:14 I don't have measurements yet, because the tests were failing - apparently there were some iptables rules in the node that prevented dom0 from talking back to the domU
15:22:29 Wait a minute, I might have something.
15:22:40 https://github.com/citrix-openstack/xenapi-os-testing/issues/5
15:22:58 But as I said, this test run involves loads of timeouts.
15:23:04 Ran 279 tests in 1997.157s
15:23:04 FAILED (failures=6, skipped=58)
15:23:15 This is a smoke run.
15:23:27 oh boy, that's smoke
15:23:40 well, let's hope some long timeouts bloated the running time
15:23:41 So the other issue is that the ephemeral storage can't be used from the HVM machine.
15:23:55 ah right, because its device ID is too high
15:23:57 So that could be an issue, but let's hope it won't be.
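
Going back to the vm_utils suggestion from the blueprints discussion: the point is to keep the XenAPI plumbing for the PCI attach in a helper that can be unit-tested against a fake session, leaving vm_ops as a thin caller. A rough sketch of that shape, assuming a session object with call_xenapi; the helper name and the other_config key are illustrative, not the patch under review:

    def attach_pci_device(session, vm_ref, pci_address):
        # hypothetical vm_utils-style helper: all the XenAPI plumbing in one place
        session.call_xenapi("VM.add_to_other_config", vm_ref, "pci", pci_address)

    class FakeSession(object):
        # just enough of a session to unit-test the helper in isolation
        def __init__(self):
            self.calls = []

        def call_xenapi(self, method, *args):
            self.calls.append((method, args))

    session = FakeSession()
    attach_pci_device(session, "OpaqueRef:vm", "0000:04:00.0")
    assert session.calls == [("VM.add_to_other_config",
                              ("OpaqueRef:vm", "pci", "0000:04:00.0"))]
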
15:24:23 I just did another smoke run:
15:24:30 Ran 284 tests in 2214.197s
15:24:30 FAILED (failures=6, skipped=59)
15:24:53 OK, so getting close
15:24:58 that's awesome stuff
15:25:14 So, let me summarize the todos:
15:25:21 cool
15:25:50 Also note that the gate has been very flaky lately
15:26:00 and at least one of those failures is the one that the gate kept hitting
15:26:10 that was fixed yesterday(?) - the BFV patterns test
15:26:13 1) update config patches, 2) test nodepool, 3) make sure nodepool snapshots a stopped VM, 4) localrc (devstack-gate)
15:26:21 BobBall: yes, true, kinda nice to see the same failures though
15:26:26 ofc
15:26:48 matel: good summary, thanks
15:27:42 for marketing reasons, can you do a blog post on this after you get the first full run in your simulated environment, then post that to the dev mailing list with the [Nova] tag?
15:28:05 particularly if that's towards the end of the week
15:28:35 anyways, let's move on
15:28:43 #topic Open Discussion
15:28:45 I'm starting to look at nodepool btw
15:28:58 any other things people want to raise?
15:29:03 Mate has a partial setup - but I'm crossing the I's and dotting the T's
15:29:15 It's a fallback because the -infra team are being very slow with their reviews
15:29:26 that sounds like a good plan, test things to see how it hangs together
15:29:43 so we might have to run a nodepool and watch the gerrit stream ourselves, running in our RS account rather than the -infra one
15:29:55 On the PCI front, as a heads-up: pci_whitelist and the stats reporting will change on the driver end of things. If you implement a new VIF plugging type, you get PCI for Neutron. The rest should be much as before.
15:29:57 Realistically, I think we are talking about weeks.
15:30:21 thanks ijw
15:30:25 johnthetubaguy and I are not in agreement on the implementation but the bits in dispute don't change the effects on the driver end.
15:31:49 johnthetubaguy: worth calling out your API patch?
15:31:59 ijw: agreed
15:32:08 which patch was that?
15:32:14 I lose track
15:32:36 the one we were discussing
15:33:23 can't find it now
15:33:29 https://review.openstack.org/#/c/66493/
15:33:30 yeah, I don't remember what it was about now
15:33:40 oh damn, yes
15:33:43 thanks
15:34:14 It's an idea Bob had for making the session code a bit easier to write; then I mangled the idea in my head and came up with the above patch
15:34:38 ideas welcome on that, before I try and convert all the XenAPI calls over to the new format
15:34:58 it will be lots of patches, not one massive patch, just FYI
15:35:07 The two main aims were to make it easier to test and to have autocomplete in vi pick up the methods ;)
15:36:05 yeah, and I kinda want it easier to discover what XenAPI calls other people already make, and to make it easy to find workarounds, like the ones for VBD.unplug, etc
15:36:32 cool :) I will have a closer look.
15:36:39 #help please give feedback on https://review.openstack.org/#/c/66493/ before all the follow-up patches get uploaded
15:36:46 cool, any more for any more?
15:37:06 matel: will you look too? :)
15:37:06 yes
15:37:14 could you poke some of your core buddies, johnthetubaguy?
15:37:16 I want to test if not encrypting images makes it faster to download: https://review.openstack.org/#/c/68363/
15:37:23 BobBall: yeah, I should do
15:37:24 BobBall: Ok.
15:37:31 the performance patches that you said you wanted to get in are dangling again
15:38:02 BobBall: what's up - do they need +A and +2, or did they just get rebased?
15:38:31 https://review.openstack.org/#/c/58754/
15:38:42 might just need a re-+A
15:38:49 oh no
15:38:55 I will take a peek
15:38:58 one +A from Matt Dietz - so you could push it through
15:39:07 if you're happy of course ;)
15:39:39 the others in the stream all have a +A but have been rebased, I think (tick means approved, right?)
15:39:41 I +2'd it in the past, so I suspect it will just take a quick look
15:40:02 yeah, a tick needs to be in the approve column though
15:40:19 it is for 3 of the patches
15:40:20 if you can rebase them all right now, I can take a look in 5 mins and see how they look?
15:40:30 I'll rebase, sure
15:40:34 thanks
15:40:44 just make sure they are not going to fail in the gate for silly reasons
15:40:53 since the gate is so damn fragile
15:41:34 it's joyous, isn't it
15:41:38 At this point in time, more so than usual - they've been talking about gate problems all day
15:41:44 wow
15:41:47 104 in the gate
15:41:51 not worth +A'ing
15:41:55 yeah, true
15:42:04 maybe we'll check again tomorrow :) but better to keep gate churn down
15:42:07 it's like the icehouse-2 cut is soon, right?
15:42:16 I'm surprised they haven't purged the +A queue!
15:42:36 sure, I will remember to get back to your patches when the queue is below 50 or something!
15:42:44 I'll rebase now anyway
15:43:01 i2 is Friday, I think?
15:43:05 But yes, imminent
15:43:28 did you want those patches in I-2? I guess it doesn't matter with you guys doing continuous deployment stuff
15:44:05 Mad, mad fools
15:44:11 ;)
15:44:14 well, I kinda wanted it soon, after I-2 is fine
15:44:25 could do with +A too *grin* gotta love a broken gate. https://review.openstack.org/#/c/63778/
15:45:33 ijw: well, if you make frequent changes, it's easier to find what you broke
15:46:06 cool, so any more for any more?
15:46:38 Nope
15:46:40 not this week
15:46:43 sweet
15:46:47 no
15:46:56 talk to you all next week
15:46:58 next week I hope to have news on both nodepool and xs-c on Debian!
15:47:04 :)
15:47:09 #endmeeting
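
For reference, the session patch raised under open discussion (https://review.openstack.org/#/c/66493/) is about grouping the raw call_xenapi strings behind named methods, so they autocomplete, are easy to stub in tests, and give one obvious home for workarounds such as retrying VBD.unplug. A minimal sketch of that idea, assuming a session object with call_xenapi; the class shape and retry details are illustrative rather than the actual change:

    import time

    class VBD(object):
        # illustrative wrapper: named methods instead of raw call_xenapi strings
        def __init__(self, session):
            self._session = session

        def get_device(self, vbd_ref):
            return self._session.call_xenapi("VBD.get_device", vbd_ref)

        def unplug(self, vbd_ref, retries=3, delay=1):
            # one obvious home for the unplug workaround; real code would only
            # retry on the specific XenAPI failure rather than bare Exception
            for attempt in range(retries):
                try:
                    return self._session.call_xenapi("VBD.unplug", vbd_ref)
                except Exception:
                    if attempt == retries - 1:
                        raise
                    time.sleep(delay)

Attaching something like session.VBD = VBD(session) would then let callers write session.VBD.unplug(vbd_ref) instead of scattering the raw strings around the driver.
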