15:00:11 #startmeeting XenAPI
15:00:12 Meeting started Wed Dec 18 15:00:11 2013 UTC and is due to finish in 60 minutes. The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:14 it's 15:04 according to Rackspace's public cloud ;)
15:00:15 The meeting name has been set to 'xenapi'
15:00:26 ohhhh
15:00:38 okies
15:00:39 hi :)
15:00:51 well according to your VM I guess, time for NTP
15:00:57 #topic date of next meeting
15:01:04 OK so it's a bit backwards
15:01:07 yeah
15:01:08 true
15:01:19 but it's Christmas soon, so when is the next date people are around?
15:01:36 8th Jan?
15:01:40 I'm off for like ages
15:01:41 yes
15:01:43 should be back then
15:01:45 with any luck
15:01:47 cool
15:02:04 #info next meeting is Wednesday 8th January 2014
15:02:24 #topic blueprints
15:02:36 We've got a number of apologies
15:02:38 any blueprint updates this week?
15:02:43 Mate can't be here because he's off ill
15:02:50 ah, OK, apologies are good
15:02:52 Guillaume can't be here because he's on holiday
15:03:08 both reasonable excuses, and expected this time of year I guess
15:03:17 Euan can't be here either (although he's not been focusing on OpenStack stuff lately either)
15:03:22 so it's just you and me John.
15:03:50 I'll say what I can - but I suspect we'll have a short meeting :)
15:04:10 Guillaume has updated the etherpad at https://etherpad.openstack.org/p/pci-passthrough-xenapi
15:04:13 hmm, thinking we should have just met up for mulled wine somewhere, but anyways
15:04:32 I can't have mulled wine and champagne on the same day...
15:04:58 lol, OK, it's not even mixing your drinks
15:05:12 Oh, I see, more info on the plugin
15:05:14 cool
15:05:17 seems like progress
15:05:23 did you hear about any blockers?
15:06:01 blockers for what?
15:06:12 XenAPI being broken? or PCI or what?
15:06:16 no
15:06:20 in any case lol
15:06:47 I just meant anything they found that is blocking them making progress
15:06:57 no
15:06:58 anyways, sounds all good
15:07:07 he's got a patch nearly ready to upload
15:07:13 I have a few blueprints up for review, but nothing major
15:07:18 looking forward to reviewing it
15:07:22 he asked me today (despite being on holiday) about how to get it up
15:07:34 so I said to upload to gerrit and mark it WIP if he wants early feedback
15:07:43 cool
15:07:44 thanks
15:08:16 I have VIF hotplug, vcpu_pin_set and resize ephemeral disks up for review
15:08:32 brill
15:08:37 I've got a bunch of stuff up for review
15:08:40 but none of it has a BP
15:08:42 tempest improvements
15:08:47 right now I am concentrating on making live-migrate suck less; it's mostly about cells, reporting errors, and edge cases - when it works it seems solid now
15:08:47 minor bug fixes
15:09:10 yeah, they look like quite important fixes mostly, nice work there
15:09:33 OK… so let's talk bugs I guess
15:09:38 #topic Bugs and QA
15:09:39 Should I make a random BP for them?
15:09:52 I know you're looking, but I'm not sure how else to encourage reviewer love ;)
15:09:56 BobBall: nah, probably not worth it, but it's up to you really
15:10:04 well it would only get low priority anyways
15:10:08 well I'll give it till after Christmas
15:10:10 maybe target the bugs for Icehouse-2
15:10:52 so I spotted a nasty live-migrate issue, where the VM can get deleted
15:10:59 deleted?
15:11:01 ouchy
15:11:02 how
15:11:03 https://review.openstack.org/#/c/62855/
15:11:32 it could be a theoretical error, but I had a coding error make a VM get destroyed, so I felt the need to fix it
15:11:59 it's because libvirt, I think, creates the domain, so it needs to destroy it, but XenServer looks after all that "for free"
15:12:20 ah ok
15:12:28 I need a follow-up patch to do a proper rollback, but I am good with a bit broken over utter disaster
15:13:15 yeah
15:13:16 true
15:13:22 cool
15:13:45 so any updates on the Zuul + tempest + XenServer stuff? I have lost touch with Mate the last few days
15:14:05 he's been off
15:14:13 latest status is the patches you know about
15:14:42 I was hoping that he'd make some more progress this week
15:14:47 but he won't be back till tomorrow
15:14:50 ah, OK, that would explain that, hope he gets better soon (and not just because I want that tempest testing running!)
15:14:51 so not much will happen I'm afraid
15:15:08 yeah, that's the way it goes, but we have some progress at least
15:15:35 I have a config drive fix to get IP addresses, but it didn't get out in the pre-Christmas deploy that is happening at the moment
15:15:44 so it will be post-Christmas now
15:15:47 that's fine
15:15:51 I'm not fussed about that
15:15:55 that's niceties
15:16:05 but not needed for the initial prove-it-works
15:16:06 yeah, it's not a requirement, but it could make things a lot easier
15:16:24 yeah, I thought that, but the workarounds are getting more complex by the second
15:16:34 anyways, we are getting closer, slowly
15:16:34 hehe
15:16:38 I think we're there though
15:16:44 I don't think we need any more for the config drive issue
15:17:15 I promised to look at configuring the localrc, but I am guessing Mate has the environment for that already, so I am not sure me wading in is worth it at this point
15:17:28 but let's check back in early Jan, and see where we are all at
15:17:42 I assume Mate is off for most of the Christmas period?
15:17:50 Not most of it, no
15:17:52 he's working most of it
15:19:00 sorry, I have to brb John
15:19:10 OK
15:19:23 we might be done...
15:19:29 #topic Open Discussion
15:19:42 BobBall: ping me if you have anything to raise
15:19:45 I think that is all from me
15:21:38 sorry about that - I'm back now
15:21:42 johnthetubaguy: ping
15:21:49 hey
15:21:52 just wanted to talk about backporting aggregate fixes
15:21:57 ah OK
15:22:02 for the pool support?
15:22:02 https://review.openstack.org/#/c/61712/
15:22:07 indeed
15:22:25 The main issue is whether we can push for a simple backport
15:22:32 -backport
15:22:39 or if we have to backport the object support
15:22:56 the "simple fix" is to rename "metadetails" to "metadata" and cope with the hybrid model that we've got
15:23:00 but that's not the fix that went into trunk
15:23:02 ah, I would go for the simple backport
15:23:11 yeah, that's fine
15:23:18 good
15:23:21 that's what I thought
15:23:24 as long as a unit test catches it, I think that's cool
15:23:31 but wanted to make sure you were on board
15:23:37 no problem
15:23:50 the problem is the unit tests were wrong
15:23:58 it blocks people from using pools on Havana
15:24:05 and that seems like something worth fixing
15:24:13 if we don't plan on removing that support!
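
As an aside, a minimal sketch of what "coping with the hybrid model" could look like on the XenAPI side: read the aggregate's metadata under whichever key the calling layer produced. This is illustrative only, not the actual fix in review 61712, and the helper name is made up:

    def _get_metadata(aggregate):
        # New-style Aggregate objects (and the fixed interface) expose
        # 'metadata'; older primitive dicts that crossed the RPC boundary
        # may still carry the legacy 'metadetails' key.
        if hasattr(aggregate, 'metadata'):
            return aggregate.metadata
        return aggregate.get('metadata') or aggregate.get('metadetails', {})
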
15:24:16 they were verifying the wrong behaviour in Havana - because the object support changed the interface without updating the users of the interface ;)
15:24:24 yeah, I remember
15:24:42 as long as there are fixed unit tests that fail before your fix, then pass after your fix, we are good, I feel
15:24:55 indeed John - there are
15:25:23 I figured they had to be, lol, but just giving you words to convince other people :)
15:25:52 The other part of it is I want - somehow - to test that the input to add_to_aggregate is what is expected at run time
15:25:55 not sure how to test that though
15:26:10 tempest doesn't do any mocking, which would be needed to have two computes in a real environment
15:26:20 and it's an incredibly high-level test for a unit test
15:26:29 hmm, that's a good call, a unit test at the higher level should do that I guess? maybe compute rpcapi level?
15:27:01 do we have such tests that don't just mock everything?
15:27:17 my issue is that unit tests should be testing at a very low level
15:27:27 but this is a high-level test with partial mocking that I'd need
15:27:31 and it's a bloomin' pain
15:27:33 :)
15:27:35 well, you just need the call to the RPC layer, right? that should be there already in a unit test
15:27:44 yeah
15:27:47 in theory at least
15:27:50 ok
15:28:01 I have a feeling I touched or created one of those at some point...
15:28:02 so you'd suggest a nova unit test doing whatever-the-hell-I-need-to-do
15:28:12 including DB access etc
15:28:20 cuz ideally I don't want to mock that
15:28:22 oh, I don't think you need any of that
15:28:41 Well the test will need a DB somewhere to store the aggregate I create
15:28:46 just create the object, fake out a load of stuff, and see what the compute rpcapi gets
15:29:08 hmmm - I see
15:29:11 it will get an object_to_primitive conversion, and you should see the raw dict
15:29:29 I guess I can test that
15:29:38 hmmm
15:29:43 it's worth a try; if it feels fake, go bigger I guess
15:29:46 that won't test everything I want tested though
15:30:06 yeah, it doesn't do end-to-end through RPC, but that's life at the moment
15:30:16 the thing that I want testing in a unit or tempest test is that the actual code in the driver to add an aggregate works with the input that it's passed... because that was where the object disconnect was
15:30:41 hmm, but tempest mocks nothing, just looks at public API endpoints
15:30:55 i.e. the input and output APIs were tested and "verified" by unit tests - but the input and output did not match
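
A rough sketch of the test shape suggested above: patch the compute rpcapi call, drive the API layer, then check that the aggregate crossing the boundary serializes to a dict with the key the driver actually reads. This is not working Nova test code - it would still need the usual test fixtures, and the call signatures are approximate and should be checked against the tree:

    import mock

    from nova.compute import api as compute_api
    from nova.objects import base as objects_base

    def test_add_host_sends_metadata_key(self):
        api = compute_api.AggregateAPI()
        with mock.patch.object(api.compute_rpcapi,
                               'add_aggregate_host') as mock_add:
            api.add_host_to_aggregate(self.context,
                                      self.aggregate.id, 'host2')
        # Keyword name 'aggregate' is assumed; check how the API layer
        # actually passes it.
        sent = mock_add.call_args[1]['aggregate']
        # Mimic the object_to_primitive step the RPC layer performs,
        # then inspect the raw dict as the compute/driver side sees it.
        primitive = objects_base.obj_to_primitive(sent)
        self.assertIn('metadata', primitive)
        self.assertNotIn('metadetails', primitive)
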
15:31:23 yeah, agreed, but that's why we need the XenAPI tempest ASAP (obviously)
15:31:33 That's still not going to test aggregates
15:31:38 and I don't see how it can without mocking
15:31:45 which tempest doesn't have ATM :)
15:32:02 yeah, we need multi-host tests ASAP, which would test this
15:32:09 although I was musing on having a separable nova
15:32:20 prefix all hypervisor resources with a runtime-defined string
15:32:28 then you can have multiple novas for a single XenServer in theory
15:32:28 :)
15:32:39 we have the fake virt driver
15:32:47 you could expand that just to do the xenapi pool stuff
15:32:49 then test that
15:32:59 maybe
15:33:02 but it's not worth it
15:33:04 I feel
15:33:12 just set up one system and log out the content
15:33:27 or better, just set up a system and test it once manually
15:33:48 you hit the errors really early on at least
15:34:16 manual testing isn't better
15:35:15 long term, sure; short term, it's the best trade-off I think
15:35:33 we need to get the multi-XenServer tempest test, and set up a pool at some point
15:35:43 but let's not run before we can walk, right
15:36:16 That's a massive step - multi-host stuff just isn't designed to work in Zuul
15:36:22 __everything__ is single host only
15:36:31 agreed, we really need that soon
15:36:40 it's killing us on a whole host of features
15:36:46 I think our only option in any <6 month (probably longer!) timescale is faking a host
15:37:00 or two novas on one VM
15:37:12 right, and we have the fake virt driver that's used for scheduler testing, you could expand that
15:37:21 yup
15:37:33 but it's messy, and not very real
15:37:52 I guess our fake xenapi for the unit tests, wired up with the xenapi driver, might get you some distance?
15:37:53 indeed
15:38:03 I started playing with that
15:38:07 but it gets ugly quickly
15:38:21 I know it doesn't do networking, for starters
15:38:40 I think it's real or total fake as the two options
15:38:52 might be better making a local compute rpcapi
15:39:00 so we can test across the boundary
15:39:13 similar to some conductor stuff we do already I guess
15:39:26 would that work?
15:39:42 maybe, yeah
15:40:16 maybe try a fake rpc driver, that just makes real calls?
15:40:24 I think part of my issue is struggling to decide on the concept of the test, because there are so many ways it _could_ be done and all of them are horrid
15:40:35 I'll have another think
15:40:41 let's not worry too much this week
15:40:46 hopefully we can talk about it next meeting
15:40:52 although I'm on holiday for most of the Xmas period
15:40:52 OK
15:40:59 likewise really
15:41:31 I think testing across the compute boundary is worth a look, by someone wiring the two bits together
15:41:55 anyways
15:41:58 let's call it a day
15:42:02 any other bits?
15:42:08 nope
15:42:18 cool, have a good Christmas!
15:42:28 #endmeeting
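
Postscript: a minimal sketch of the "local compute rpcapi / fake rpc driver that just makes real calls" idea floated above - an in-process stand-in that still round-trips arguments through primitives before invoking the manager-side code, so interface mismatches like the metadetails one would surface in a single-process test. All names and signatures here are illustrative, not real Nova code, and this is not exactly what oslo messaging does on the wire:

    from nova.objects import base as objects_base

    class LocalComputeRPCAPI(object):
        """Call the compute manager directly instead of going over RPC,
        while still serializing arguments the way the RPC layer would,
        so a test exercises both sides of the interface boundary.
        """
        def __init__(self, manager):
            self.manager = manager

        def add_aggregate_host(self, ctxt, aggregate, host_param, host,
                               slave_info=None):
            # Convert the object to its primitive form, then hand that
            # to the real manager-side code, as a deployed compute
            # without object support would have received it.
            primitive = objects_base.obj_to_primitive(aggregate)
            return self.manager.add_aggregate_host(
                ctxt, aggregate=primitive, host=host_param,
                slave_info=slave_info)
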