15:00:39 #startmeeting XenAPI
15:00:40 Meeting started Wed Sep 25 15:00:39 2013 UTC and is due to finish in 60 minutes. The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:43 The meeting name has been set to 'xenapi'
15:00:44 hello all
15:00:50 hands up for the XenAPI meeting?
15:00:55 o/
15:01:51 hi
15:02:12 euanh is also here
15:02:26 #topic blueprints
15:02:47 hey, so any updates on drafting stuff for Icehouse, filling out the XenAPI session, or other summit sessions?
15:03:13 nope - we've got a brainstorm tomorrow I hope (need to plan it) to see if we want to submit other summit sessions
15:03:19 or whether the xenapi stuff can fit in the xenapi session
15:03:31 I did say before that we needed to have a chat about the xenapi session
15:03:41 but I can't "fill it out" until there is an etherpad or something that I can contribute to
15:04:03 sure, I thought we said we'd discuss it this week
15:04:04 and as for drafting stuff for icehouse, still focused on what needs to be done pre-summit ATM
15:04:09 fine by me :)
15:04:25 Let's start to throw in ideas.
15:04:31 although I'm not in the right mental state to remember the things I wanted to raise today!
15:04:35 https://etherpad.openstack.org/IcehouseXenAPIRoadmap
15:05:01 Let's revisit the previous summit's roadmap first
15:05:44 compute driver events
15:06:06 https://blueprints.launchpad.net/nova/+spec/compute-driver-events
15:06:08 well, sounds like Bob wants to do this next week
15:06:14 that's still something that we want to do
15:06:25 so hopefully we can get it done in icehouse
15:06:34 Okay, let's put it there.
15:06:36 the problem is I can imagine it's lower priority than some of the other things we might be doing
15:06:49 Also I don't quite get its status from the blueprint.
15:07:51 that blueprint was the KVM version
15:07:56 we don't have a XenAPI equivalent
15:08:12 we did have
15:08:16 but it got dropped
15:08:26 oh right
15:08:31 what's the link to that?
15:08:34 Okay, so that's one
15:08:37 so we can use that instead of the KVM blueprint as the ref?
15:08:43 https://blueprints.launchpad.net/nova/+spec/compute-driver-events
15:08:52 https://blueprints.launchpad.net/nova/+spec/xenapi-compute-driver-events
15:08:56 look at the dependency tree
15:09:08 duh - didn't scroll down, sorry.
15:09:11 Okay, we have one.
15:09:13 yup, so that should be on the roadmap
15:10:05 Who is editing the etherpad now?
15:10:09 me
15:10:31 I'm done though
15:10:33 for now
15:10:34 volume drivers
15:10:46 more details?
15:10:49 Should be an easy one.
15:10:57 A refactor mainly
15:11:10 So that you can plug your volume driver into nova.
15:11:54 which volume driver?
15:12:00 maybe I'm being silly but I'm confused :)
15:12:21 The code that connects a volume to your hypervisor.
15:12:24 it's the cinder stuff
15:12:28 yes
15:12:36 refactoring to match the libvirt code
15:12:45 so it's easier to plug in additional stuff
15:12:50 a volume being provided by Cinder, using iSCSI? or are you talking about using a different XS SR?
15:13:15 perhaps it's the "additional stuff" that I'm not understanding :)
15:13:17 For example, say XS supports ceph
15:13:21 so whatever, you could lay a file SR over the top of some additional thing you attach, etc
15:13:23 ok
15:13:29 We would need a driver for that
15:13:41 In a nice separated class.
15:13:46 yup, ceph is the perfect example
15:13:51 you just drop it in, and off you go.
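
[To make the volume-driver refactor discussed above concrete: a minimal sketch of what a pluggable XenAPI volume driver could look like, assuming a base class plus a ceph driver as in the example raised in the meeting. All class and method names here are hypothetical, modeled on the libvirt volume-driver pattern mentioned above, not taken from nova.]

    # Minimal sketch, assuming the libvirt-style plug-in pattern discussed
    # above. XenAPIVolumeDriver and CephVolumeDriver are hypothetical names.

    class XenAPIVolumeDriver(object):
        """Base class: connect a Cinder-provided volume to the hypervisor."""

        def connect_volume(self, connection_info, instance_name):
            raise NotImplementedError()

        def disconnect_volume(self, connection_info, instance_name):
            raise NotImplementedError()

    class CephVolumeDriver(XenAPIVolumeDriver):
        """Drop-in driver for a hypothetical ceph-backed XenServer SR."""

        def connect_volume(self, connection_info, instance_name):
            # Find or create an SR over the ceph pool, then plug a VBD
            # into the instance; details depend on XS ceph support.
            pass
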
15:13:52 okies, understood
15:14:14 I guess that's medium priority for us
15:14:18 Okay, so that's something we want to do (this example shows one use case)
15:14:30 Okay, mark it medium for now.
15:14:32 next item.
15:14:43 gpu passthrough
15:14:55 https://blueprints.launchpad.net/nova/+spec/xenapi-gpu-passthrough
15:15:14 I think the priorities mean something different now
15:15:17 I'd say that's medium
15:15:21 russell will change those
15:15:25 sure
15:15:33 I'm talking about my/our priorities
15:15:36 I'd say, let's collect some random stuff for now.
15:15:37 we just need to communicate how important or not it is
15:15:38 as in Citrix
15:15:43 OK
15:15:46 as an input into the prioritisation of what we want to work on
15:15:54 OFC I expect RS to have their own priorities to feed into this
15:15:57 well, did you want to do that, and come back with your list of ideas?
15:15:58 it's just a string.
15:16:11 which may influence what citrix implement etc
15:16:33 yep, I have a meeting on monday to chat about RS summit priorities
15:16:34 *confused*
15:16:37 great
15:16:58 I would very much love that to include input into the XenAPI roadmap
15:17:02 So, should we carry on with the gpu stuff?
15:17:02 Just thinking, for better use of time: you can discuss your priority list at Citrix, then we can review it
15:17:07 yes
15:17:22 vGPU mate? or pci passthrough?
15:17:35 https://blueprints.launchpad.net/nova/+spec/xenapi-gpu-passthrough
15:18:01 just to clarify, I see the XenAPI roadmap as what we agree at the summit, it's not something we present as such
15:18:07 I think it's still very useful (although it should be generic PCI passthrough support on Xen) and only a little work
15:18:22 understood and agreed John - but I wanted to get lots down to discuss
15:18:31 perhaps the Citrix priorities should be left out for now, yes
15:18:50 well, feel free to add some of those?
15:18:59 I'll add them to the etherpad, yes
15:20:05 cool, so any more for blueprint / summit discussions?
15:20:22 Okay, so as John said, we should discuss these things at the summit, so let's collect ideas on the etherpad.
15:20:23 do we want to go through the rest of the etherpad or just add actions to update it for next meeting?
15:20:38 Just an action.
15:20:56 ok
15:21:09 It doesn't make sense to discuss things now and just present them at the summit, as John said.
15:21:46 I guess we just need to make sure we collect stuff on the etherpad, and everyone understands what those things mean.
15:21:49 perhaps - but I'd rather not have surprises on either side too :)
15:22:57 Yeah
15:23:05 we can go through the list before the summit
15:23:16 but best to go through something that we have all contributed to in the meantime
15:23:19 if you see what I mean
15:23:51 #topic docs
15:23:57 so any updates on this stuff?
15:24:19 nope
15:24:25 we've hit a snag ... updates to docs are stalled ATM
15:24:25 OK
15:24:36 why's that?
15:25:19 technical problems that have sprung up with getting a reusable XVA
15:25:39 XVA of what? the nova-compute VM?
15:25:42 which is an important Citrix goal for the summit
15:25:43 yup
15:26:03 what's up with it (can I remember how we may have fixed bits in olympus)
15:26:17 Mate's on the ball
15:26:21 it's an XS bug with networking
15:26:30 ah, fair enough
15:26:32 and the fact that we are passing the flat_network_bridge through as a kernel parameter
15:26:42 if you import an XVA it can screw up the networks
15:26:47 And I am working on using hvc0 for stack.sh
15:26:59 So users can interact with the VM if needed.
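
[As an aside on the kernel-parameter mechanism mentioned above: a DomU-side script could recover the flat_network_bridge value roughly like this. A minimal sketch of the logic only; get_kernel_param is a hypothetical helper, not actual devstack code (devstack itself is shell).]

    # Minimal sketch, assuming the XVA installer appended something like
    # "flat_network_bridge=xapi1" to the DomU kernel command line.

    def get_kernel_param(name, cmdline_path='/proc/cmdline'):
        """Return the value of a key=value kernel parameter, or None."""
        with open(cmdline_path) as f:
            for token in f.read().split():
                if token.startswith(name + '='):
                    return token.split('=', 1)[1]
        return None

    bridge = get_kernel_param('flat_network_bridge')  # e.g. 'xapi1'
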
15:27:02 OK, we used to pass the label rather than the names, which helped
15:27:16 but anyways, sounds good
15:27:26 Oh, does nova recognise labels as well?
15:27:27 anything to make getting started easier
15:27:33 matel: yup
15:27:52 Ah, I didn't know that.
15:27:54 matel: we used that so you can create the xapi6 network, if you want, but just give it a standard name
15:28:20 xapi6 is a bridge name
15:28:27 not a network name I guess.
15:28:36 yup, well, it was xapi network names I was talking about
15:28:37 I will check the nova code.
15:28:46 sure, it's in the vif drivers
15:29:10 looks up the network by bridge name and falls back to name-label, or something like that
15:29:13 So you say it could deal with something like "OpenStack VM Network"
15:29:28 I think so, but I don't remember adding spaces
15:29:41 Anyhow, it's a good tip.
15:29:52 #topic Bugs and QA
15:29:58 so, how is gating going?
15:30:41 eurgh... Giving up on xenserver-in-the-cloud for now. Can't get an automatic way to install it in either the RS or HP clouds, which means it's not suitable for -infra
15:31:06 so the fallback is getting it working with xenserver-core
15:31:12 still with devstack in domU
15:31:32 OK
15:31:46 sounds like a good plan, in terms of ssh-ing into a box
15:32:00 very frustrating though
15:32:01 any major bugs people are hitting?
15:32:12 RS cloud has some missing things I need, HP cloud has other missing things
15:32:20 *shakes his head frustratedly*
15:32:28 yeah, no one really does hypervisors in the cloud yet
15:32:44 I would talk to us about adding an image for xenserver-core
15:32:49 yup
15:32:50 it should be possible
15:32:51 that'd be great
15:32:57 well - just use centos
15:33:00 then we can install xs-c :)
15:33:04 exactly
15:33:05 I've got a funky script to do that
15:33:07 so that's fine for me
15:33:46 yeah, let's get an image sorted for that testing
15:34:03 also, let me know what region you need
15:34:32 any bugs people have?
15:34:48 I don't think so
15:35:04 having real issues running block-based live-migration, will raise some launchpad bugs on that one once we pin down the errors
15:35:08 johnthetubaguy: don't worry about the image - I'm doing initial testing based on CentOS 6.4 only, so that's what I'm spinning up in the RS cloud
15:35:20 full tempest is failing, but that's only because of the good old iscsi issue.
15:35:34 joy
15:35:40 exactly
15:35:59 BobBall: OK, if you insist
15:36:43 I want us to skip the iscsi tests in full tempest when running XS
15:36:52 so we can prove the rest is working
15:36:57 and doesn't regress
15:37:11 we know why iscsi doesn't work - and it's not an OS bug
15:37:54 hmm, is that the kernel thing
15:38:16 johnthetubaguy: the refactors are still waiting for you: https://review.openstack.org/#/c/46056/ https://review.openstack.org/#/c/46057/
15:38:20 yes
15:38:31 well - actually not the kernel
15:38:46 the problem is that we're using netback/front and blkback/front for the same packets on the same machine
15:38:49 it's ugly
15:39:03 but it's not really an OS problem, or one you'd see in deployment
15:39:17 it only happens with devstack serving the iscsi volume from the same VM that is consuming it
15:39:24 (even for a short while)
15:39:27 -VM+host
15:40:03 oh I see
15:40:06 that makes sense
15:40:20 need the multi-machine tests to fix that
15:40:25 cool
15:40:28 bobball: https://github.com/openstack/nova/blob/master/nova/virt/xenapi/network_utils.py#L43
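
[For reference, the lookup matel just linked behaves roughly as below. This is a simplified sketch of the logic in nova/virt/xenapi/network_utils.py, paraphrased rather than copied verbatim; the real code raises nova exception types, condensed here. The configured value matches either the XenAPI network's bridge field (e.g. xapi1) or its name-label, which is why either works in the config.]

    # Simplified sketch of find_network_with_bridge: match the configured
    # value against either the 'bridge' or the 'name_label' field of the
    # XenAPI network records. Note XenAPI expression syntax spells the
    # label field "name__label" (double underscore).

    def find_network_with_bridge(session, bridge):
        """Return the one network whose bridge or name-label matches."""
        expr = ('field "name__label" = "%s" or field "bridge" = "%s"'
                % (bridge, bridge))
        networks = session.call_xenapi('network.get_all_records_where', expr)
        if len(networks) == 1:
            return list(networks.keys())[0]
        if len(networks) > 1:
            raise RuntimeError('Found non-unique network for bridge %s'
                               % bridge)
        raise RuntimeError('Network not found for bridge %s' % bridge)
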
15:40:36 well, to be more realistic... or we need a workaround in Xen
15:40:39 which we're also working on
15:40:50 yay matel!
15:40:53 network name it is!
15:41:01 yup, it's much easier that way!
15:41:29 cool, so
15:41:31 I will modify devstack, so we no longer depend on bridge names.
15:41:32 matel: but devstack should also fail earlier if we don't specify it... saves people from having a junk flat_network_bridge when it's not specified
15:41:34 #topic OpenDiscussion
15:41:49 devstack used to work with name_labels, I guess we lost that at some point
15:42:09 devstack will also work with name_labels I think
15:42:18 It works with labels.
15:42:36 cool stuff.
15:42:52 "OpenStack VM Network" -> osvmnet
15:43:07 "OpenStack Public Network" -> ospubnet
15:43:15 or "OpenStack_VM_Network"
15:43:31 is brctl happy with such long names?
15:43:39 it doesn't go there
15:43:43 why does brctl get involved?
15:43:49 nova-network.
15:44:04 *confused*
15:44:05 yeah, not sure it gets used there either
15:44:17 if we find the bridge then we're happy
15:44:18 ohhhhh
15:44:19 hmmmm
15:44:27 we might rely on the name matching
15:44:34 I think that's mate's point?
15:44:41 not really
15:44:49 nova-network will create the bridges on the fly
15:45:12 I can't remember quite where it gets those from now, it might be translated to xapi by that point
15:45:20 in domU you have a xapi1 bridge
15:45:33 yeah
15:45:46 but is that converted to a bridge name and saved in the db
15:45:54 or does the db get the name_label
15:45:57 so it's replicating the same as it gets from /proc/cmdline
15:46:01 flat_network_bridge=xapi1
15:46:01 I guess it's the name label
15:46:30 the code path isn't that simple I don't think, it's the network entry in the db
15:46:32 brct show
15:46:42 which might be the same thing
15:46:45 brctl show
15:46:45 bridge name     bridge id               STP enabled     interfaces
15:46:45 xapi1           8000.8e3528b79d15       no              eth1
15:47:21 sure, I'm just not sure if "name_label_foo" will go to xapiN
15:48:19 Seems like the max length is 15
15:48:21 And I don't understand what you are saying.
15:48:24 perhaps it's easiest just to test it and see where it breaks :)
15:48:28 yeah
15:48:37 the config defines the lookup in xapi
15:48:44 that goes into the DB in the network table
15:48:53 that is then read by nova-network when creating the bridge
15:49:01 can't remember if it gets changed en route
15:49:04 probably not I guess
15:49:27 anyways, will leave that with you
15:49:32 anything more?
15:49:48 mysql> select bridge from networks;
15:49:48 +--------+
15:49:48 | bridge |
15:49:48 +--------+
15:49:48 | xapi1  |
15:49:49 +--------+
15:49:50 1 row in set (0.00 sec)
15:49:58 yeah, that's the one
15:50:10 I would expect xapi1 to be there
15:50:22 the question is, if you have its name_label in the config
15:50:28 do you still get xapi1 in the DB
15:50:28 yes
15:50:36 probably not, but it's worth checking
15:50:53 there is always the description field
15:51:02 Okay so, the question is: db_entry = lookup_bridge_name(something)
15:51:11 yeah
15:51:17 Okay, will check it.
15:51:22 I guess not
15:51:25 I would just test it
15:51:37 Probably not.
15:51:53 Sure, just kicking off a build after the meeting.
15:51:59 sweet
15:52:10 we all done now?
15:53:03 … tumbleweed goes past
15:53:13 #endmeeting
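
[A closing footnote on the 15-character limit observed above: Linux caps network interface names at IFNAMSIZ - 1 = 15 characters (IFNAMSIZ is 16 in <linux/if.h>, including the terminating NUL), which is why a long name-label such as "OpenStack VM Network" can only ever serve as a lookup key and never as the bridge name itself; xapi allocates a short xapiN bridge behind it. An illustrative check, not nova code:]

    IFNAMSIZ = 16  # from <linux/if.h>; usable length is IFNAMSIZ - 1

    def is_valid_bridge_name(name):
        """True if 'name' could be a Linux bridge/interface name."""
        return 0 < len(name) <= IFNAMSIZ - 1 and ' ' not in name

    assert is_valid_bridge_name('xapi1')
    assert not is_valid_bridge_name('OpenStack VM Network')
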