15:02:38 #startmeeting XenAPI
15:02:39 Meeting started Wed Jul 3 15:02:38 2013 UTC. The chair is johnthetubaguy1. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:42 The meeting name has been set to 'xenapi'
15:02:49 Hi all
15:02:55 who is here for the meeting today?
15:03:16 Euan here
15:04:20 Bob here
15:04:44 Mate coming
15:04:44 So I was wanting to check progress towards H-2
15:04:57 so let's go to the blueprints
15:05:02 #topic blueprints
15:05:14 so has anyone got anything to report?
15:05:21 H2 and H3
15:05:25 nope - didn't think we had blueprints in H2
15:06:08 in terms of H3 we're not convinced the event reporting is high enough priority atm
15:06:13 hmm, I do I guess
15:06:13 What was the etherpad address used during the summit?
15:06:25 can't remember right now
15:06:33 #link https://etherpad.openstack.org/HavanaXenAPIRoadmap
15:06:59 lol, just found that too
15:07:06 So I will look at this #link https://blueprints.launchpad.net/nova/+spec/xenapi-volume-drivers
15:07:30 I need to look at what the cinder guys are doing around brick.
15:07:43 ah, yes, good point
15:07:51 that is not targeted for H right now
15:07:55 I was tracking the pci pass through blueprint - it's just landing ATM which is great, and it's 95% hypervisor agnostic with only a few changes in the driver needed
15:08:20 that sounds cool
15:08:27 anything people are actively working on for H2?
15:08:33 I guess the answer was no
15:08:39 Not in terms of the published blueprints, no
15:09:16 When we get off the Blueprints topic I'm sure we can say what we have been doing :D
15:10:00 john, do you have any links for the brick work?
15:10:17 afraid not, worth looking at the cinder minutes
15:10:26 so I have some blueprints
15:10:28 https://blueprints.launchpad.net/nova/+spec/xenapi-large-ephemeral-disk-support
15:10:38 I have that pending review, I removed the config
15:10:56 There was also this one:
15:10:58 https://blueprints.launchpad.net/nova/+spec/xenapi-guest-agent-cloud-init-interop
15:11:14 but I pushed that out to H3, it took a little while
15:11:59 so reviews welcome on the first one
15:12:00 Well I can have a look - I think that we like the 2TB disk one
15:12:05 yeah
15:12:12 I have some -1s in my bag.
15:12:14 so, any more for blueprints?
15:12:23 matel: hehe
15:12:25 it's hurting the nova review stats though!
15:12:29 10 days!
15:12:43 it's really getting slow, the queue is huge
15:13:28 Does the size of the queues have something to do with the number of reviewers?
15:13:41 a little bit
15:13:49 but mostly just the number of patches added
15:14:01 it's the time when many people start pushing their code
15:14:08 Okay, we are diverging.
15:14:12 lots of v3 API stuff too
15:14:16 indeed
15:14:39 has anyone else got anything?
15:14:46 jump to open discussion?
15:14:51 I updated the Quantum install wiki.
15:14:57 trying to keep it up to date.
15:15:11 We are looking at full tempest runs
15:15:13 Euan has fixed a bug.
15:15:18 #topic Open Discussion
15:15:33 BFV tests are passing too Mate - don't forget that! good change right there
15:15:43 last gating test that wasn't passing
15:15:52 awesome
15:16:02 some good work on tempest it seems
15:16:18 any news on the gating work from NYC?
15:16:20 We found + fixed a stability problem with smokestack and XenServer
15:16:31 We are not touching tempest, we are just looking at what the failures are.
15:16:52 sure, just wondering what the planned path / timeline is
15:16:54 yeah - so the current plan is that someone (possibly Jim) will implement dependencies in zuul
15:17:13 so you can have a depends-on patch bringing in another patch for testing and merging
15:17:34 that's a prerequisite for any packaging really, as if a nova change needs a packaging change they need to be synchronised
15:18:04 The packaging isn't something we are going to gate on - but if we can get the dependency management in then we can look at gating on smokestack test failures that are unrelated to packaging
15:18:07 erm, I was thinking more XenAPI related
15:18:34 Ah, I have a question.
15:18:48 it comes round to XenAPI with smokestack being more resilient, because it currently breaks when packaging changes are needed / merged
15:19:21 and giving us the option of only posting -ve reviews when we know it's a test failure and not a packaging issue
15:19:31 which is a big part of the issue with getting smokestack gating
15:19:32 yes Mate
15:19:36 Could you guys look at it? #link https://bugs.launchpad.net/nova/+bug/1196570
15:19:37 Launchpad bug 1196570 in nova "xenapi: pygrub running in domU" [Undecided,New]
15:20:00 so, I thought we were looking at getting something other than smokestack gating?
15:20:05 So it's about having a disk image, and we would like to ask pygrub to decide if it is a PV or HVM guest
15:20:54 sorry guys, just finish off the discussion around testing first, I did not want to be rude.
15:20:59 matel: there is a bit of code that uses pygrub, thinking about it
15:21:19 We're also looking at the option of having a XenServer-core VM with nova in dom0 running the tempest tests - but if the dependency thing is implemented, smokestack gating is an easy step forward
15:21:31 johnthetubaguy1: this is the code; my issue is that it assumes you have pygrub in domU
15:21:56 That means that you might end up with different pygrub versions in dom0 and domU - dodgy
15:22:14 only due to the rootwrap? Why does it use pygrub in domU rather than dom0?
15:22:14 matel: https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vm_utils.py#L1921
15:22:17 I would really want to run pygrub in dom0, or have a xapi extension....
15:22:47 matel: it's not code most people use, if they have good glance metadata, but yes, I get your point
15:22:50 johnthetubaguy1: yes, I am referring to that code.
15:23:15 That bit will be trivial to move to dom0 if we want to - it only attaches the VDI to the domU to run pygrub, so we could easily do that in dom0
15:23:25 Personally, I think we should just default to HVM, and stop worrying about trying to detect it
15:23:45 but I guess we can see what other people think
15:23:59 So in order to get some outcome from this discussion, who prefers which option? A) remove it B) delegate to dom0
15:24:09 HVM isn't supported for most guests :)
15:24:16 only Windows is supported in HVM
15:24:31 B - delegate to dom0, or C - leave it if we have to...
15:24:41 to fix the bug, definitely delegate
15:24:51 yeah, that works for me
15:25:00 just wondering if we really need it
15:25:03 tbh, I like the removal.
15:25:16 although I originally did not think about it.
15:25:26 it's worth a mail to the list, see what people think
15:25:27 yes, we must boot Linux guests as PV if we want them to be supportable
15:25:27 That's my favourite code modification - delete.
15:25:37 we are trying to get rid of autodetect
15:25:39 The best thing that could happen to code - get removed
15:25:48 therefore we must keep it or let something else specify it
15:25:50 yeah, you can specify os type in glance, so it's not like you can't choose
15:26:00 ok - if you can specify it in glance then that's OK
15:26:09 I came across this code while I was booting from volume
15:26:18 So if an image in glance is PV then it'll boot PV, I'm happy
15:26:32 The issue is that if you are booting from volume, the metadata might not be there.
15:26:36 yeah, that's the fun one, but you can launch an image that specifies the block device mapping and the correct os type
15:26:47 we just can't boot _all_ guests as HVM and trust they will negotiate up (which is what I thought you were suggesting)
15:28:23 johnthetubaguy1: have you ever tried to do that?
15:28:37 matel: no, actually, it's quite a new feature
15:28:56 Okay, so Bob suggests delegating this to dom0
15:29:03 BobBall: I wasn't thinking they would negotiate up, I was more thinking we need a better solution, guessing seems bad
15:29:16 Yeah, we could try for that
15:29:36 Simplest change is removal.
15:29:52 hang on
15:29:54 wait wait wait
15:30:16 if we can typically rely on the metadata to determine if it should be PV or HVM then I'm happy with deleting the autodetect code
15:30:32 yeah, just default to HVM for giggles
15:30:40 I know BFV currently doesn't have that metadata - but if that's a bug then we can still rely on metadata etc
15:30:55 well, you can do BFV from a glance image
15:30:59 then you get metadata
15:31:08 same thing you need for external ramdisks and kernels
15:31:12 yes, so basically these are separate issues.
15:31:16 if we _can't_ reliably rely on the metadata then we need autodetect
15:31:30 (in dom0)
15:31:35 hmm, well maybe
15:31:45 Question: do we want to autodetect if a given block device contains HVM vs PV stuff?
15:32:03 maybe
15:32:30 I say no - if we need to detect the mode then we should have some form of metadata associated with the block device that says what it contains
15:32:58 Okay, so Bob votes for removal.
15:33:00 if the metadata route is typically the canonical source of information then that's what we should always use
15:33:02 we have that in glance, to some extent
15:33:05 sorry for changing my vote.
15:33:26 but now I understand the problem better - and given we will still boot Linux guests as PV, I'm happy
15:33:27 I vote for removal, because I want to make the code happier.
15:33:53 +1
15:34:10 Let's do it this way: I will submit a patch, and you can vote on the change.
15:34:11 and bring it back if we need to, in a better way
15:34:21 in dom0
15:34:23 yeah, the removal is a simple patch
15:34:28 yes, let's YAGNI
15:34:40 +
15:34:41 7
15:34:45 +7 even
15:34:47 seven?
15:34:49 I don't think +1 is enough
15:35:16 Okay, expect a patch soon.
15:36:30 I am adding new items to the sprint backlog - my boss will love it.
15:36:39 :)
15:36:44 so, have we got anything else?
15:37:02 I fixed some minor bugs last week, but nothing worth mentioning.
15:37:22 I submitted a fix for snapshot reordering
15:37:29 Bad.
15:37:29 coalescing even
15:37:33 Ah.
15:37:39 Okay, misunderstood.
15:37:51 Could you link the change, sir?
15:37:53 but it doesn't have a test yet and I think people want a test, but I've been super busy on not being able to code :/
15:38:10 #link https://review.openstack.org/#/c/34528/ <-- Lonely changeset seeking review.
15:38:14 Untested code is broken by design.
15:38:19 it's a trivial change
15:38:22 yeah yeah :)
15:38:41 I'm happy to try and add a test (although I'm not quite sure how to test this one!)
15:39:05 it's all about the order in which things get called, so I'll have to think about it
15:39:26 that's a huge function, good luck.
15:39:28 and my head's been elsewhere
15:39:32 indeed
15:39:44 no +2 without a test :)
15:39:45 perhaps I should delegate the writing of the test...
15:40:06 unless it's "already covered"
15:40:09 1000 story points
15:40:14 you're a mean man!
15:40:16 yes
15:40:21 it's "already covered"...
15:40:23 I bet the reviewers are running coverage.
15:40:24 definitely
15:40:36 … yeah...
15:40:39 the code is being exercised, so coverage wouldn't find it
15:40:55 Okay, let's stop it.
15:41:10 the problem is that both sets of code are fully exercised, but in a different order :D
15:41:27 The problem is that the code is not really structured well
15:41:29 can I apply for a "too difficult to test" exception?
15:41:41 So it's not Bob's fault.
15:41:51 yeah!
15:41:53 I can take the job of testing it.
15:42:11 I was kidding, Mate - I'm not the type to make others test my work
15:42:11 reverse TDD
15:42:15 if the ordering is a problem, let's keep it right
15:42:19 I may ask for your advice though
15:42:45 we might want to extract something sensible.
15:42:48 we'll see.
15:42:53 take it offline
15:42:56 I have an idea on how to test it
15:43:13 just no time this last week
15:43:22 Okay, anything else?
15:43:33 not from me
15:43:41 I'm done as well.
15:43:48 nothing from me
15:43:48 Keen to get back to my terminal.
15:43:51 we all good?
15:43:55 go for it Mate
15:43:55 sure
15:44:00 there's a test waiting for you to write.
15:44:09 yes
15:44:18 And I can remove some lines in exchange
15:44:49 Good plan.
15:44:52 we all done?
15:45:12 ["sure"] * 1000
15:46:06 #endmeeting
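
Editor's note on the pygrub discussion (bug 1196570, the vm_utils.py code linked above): the meeting settled on removing the domU pygrub autodetect and relying on glance metadata (os_type and related properties), with "delegate to dom0" kept as the fallback if detection ever has to come back. For context, below is a minimal sketch of what that dom0 delegation could look like. It is illustrative only: the "pygrub_checker" plugin and its "is_vdi_pv" function are hypothetical and do not exist in nova or XenServer, and the session object is assumed to expose call_xenapi()/get_xenapi_host() in the style of nova's XenAPISession of the time.

    # Hypothetical sketch: ask dom0 (rather than the nova domU) whether a VDI
    # holds a PV-bootable image, via a dom0 xapi plugin. The plugin name
    # "pygrub_checker" and its "is_vdi_pv" function are assumptions for
    # illustration; the actual meeting outcome was to delete the autodetect
    # code and rely on glance image metadata instead.

    def is_vdi_pv_in_dom0(session, vdi_ref):
        """Return True if dom0's pygrub recognises the VDI as a PV guest image.

        The dom0 plugin would attach the VDI locally, run pygrub against it,
        and answer "true" or "false" as a string.
        """
        vdi_uuid = session.call_xenapi("VDI.get_uuid", vdi_ref)
        host_ref = session.get_xenapi_host()
        result = session.call_xenapi(
            "host.call_plugin",
            host_ref,
            "pygrub_checker",        # hypothetical dom0 plugin
            "is_vdi_pv",             # hypothetical plugin function
            {"vdi_uuid": vdi_uuid},  # string->string args map passed to the plugin
        )
        return result == "true"

Compared with the current code, this avoids plugging the VDI into the nova domU just to run pygrub (the behaviour the bug objects to) and keeps a single pygrub version, dom0's, as the authority.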