15:00:07 #startmeeting XenAPI
15:00:08 Meeting started Wed May 15 15:00:07 2013 UTC. The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:11 The meeting name has been set to 'xenapi'
15:00:20 hi, who is around for the meeting today?
15:00:22 you were watching the clock, weren't you john :D
15:00:29 :-D
15:00:44 hi
15:00:47 any more topics from people, other than what is in the agenda page?
15:01:22 nah
15:01:23 OK, looks like just the three of us
15:01:24 not from me
15:01:30 let's get going
15:01:40 I thought we might want to talk about quantum stuff and get other people in to help with that, but we'll go through what Mate's got
15:01:46 at the appropriate point of the Agenda of course!
15:02:06 matelakat, you got anything for the agenda?
15:02:11 other than quantum?
15:02:25 I am 50% done regarding last meeting's action
15:02:33 The bugstat page has been updated.
15:02:45 Quantum is up and running.
15:02:52 In the lab.
15:02:53 one sec...
15:03:03 #topic Actions from last meeting
15:03:19 Okay, so
15:03:25 bugstat page updated.
15:03:25 so bob, did you poke dan, I think I saw a promising email?
15:03:32 cool
15:03:32 I did indeed
15:03:41 I'm sure if you asked him, Dan would consider himself well and truly poked
15:03:42 I updated the wiki
15:04:01 #action matelakat to document bug finder in XenAPI team wiki
15:04:05 The status is that the machines have been re-built and the domU image put back on one of them
15:04:16 we're now ready to move on to the puppet stuff
15:04:18 Whatever that means.
15:04:43 #info smokestack making progress, hosts ready, now to work on fixing up puppet scripts
15:04:50 I think I will try to have a chat with Dan this week, or beginning of next week.
15:05:18 So one change...
15:05:23 cool, it's to merge his old puppet scripts into the new upstream ones I guess, to add XenServer support into the puppet modules
15:05:28 we were going to have half the machines running 6.1 and half running 5.6 FP2
15:05:45 OK, then flip over the others if all goes well?
15:05:48 however I realised that since smokestack is running the jobs on a random machine, we don't want differences between the versions meaning some tests pass and some tests fail
15:05:58 indeed
15:06:00 so we have switched all machines to 6.1 which is what we should be focusing on now
15:06:11 ah, gotcha
15:06:15 sounds good
15:06:22 leave the virt option open too
15:06:41 yes - although I haven't tested running 5.6 FP2 in a VM under 6.1
15:06:46 that would be an interesting experiment for sure
15:06:48 :)
15:07:07 well let's run before we can walk, obviously
15:07:11 next topic
15:07:12 On to Mate's action?
15:07:20 which one?
15:07:24 I was just browsing the bug report...
15:07:35 saw https://bugs.launchpad.net/nova/+bug/1162973 which looked interesting - aggregate live migration not working
15:07:37 Launchpad bug 1162973 in nova "XCP resource pool - unable to migrate instances" [Medium,Triaged]
15:07:47 do we have any tests for this?
15:07:48 ok, we have a bug section for later, but we can do that now
15:07:55 oh, sorry
15:07:57 mate added them into tempest I think
15:08:09 smokestack progress is also later on the agenda but we covered that(!)
15:08:17 ah, true
15:08:23 Excuse me.
15:08:33 Are we talking about Bob's bug, or not?
15:08:41 we can
15:08:57 Not my bug - just an interesting one :) Did you add aggregate live migrate tests to Tempest mate?
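For reference, switching the Tempest live-migration tests mentioned here on or off came down to a couple of configuration flags. A minimal sketch, assuming the Grizzly/Havana-era tempest.conf option names (they were later renamed and moved, so check the sample config of the release in use; this is not taken from the meeting or the Citrix CI setup):

    [compute]
    # advertise that the deployment can live-migrate, so the tests are not skipped
    live_migration_available = true
    # the Citrix CI exercised block migration outside of pools; a pooled
    # (shared-storage) configuration would set this to false instead
    use_block_migration_for_live_migration = true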
15:08:58 I am currently re-writing the live-migrate code paths, cough
15:09:02 Are we at that page regarding the agenda?
15:09:20 nope, I only have to delete it in a few hours
15:09:29 I never tested any pool config.
15:09:38 never tested means no jenkins jobs
15:09:43 sure, but your tests can test such a configuration
15:09:56 I just assume no one tests the pool support
15:10:00 I tried it once manually, but please don't consider it proof of anything
15:10:11 That was long ago.
15:10:14 indeed
15:10:16 with J.
15:10:24 with fish and chips.
15:10:30 but tempest has tests that work, it just requires some setup
15:10:37 yes.
15:10:47 The same tests as the block migration.
15:10:49 I guess the other point is there are currently no automated tests checking that config?
15:11:11 That was more my question - does the Citrix CI test this config for live migration
15:11:13 No automated tests with pools.
15:11:19 I know we test block migration outside of pools
15:11:21 okay
15:11:40 Can we move on?
15:11:46 sure
15:11:54 Where are we regarding the agenda?
15:12:05 actions
15:12:08 but we've jumped over a bit
15:12:22 next is your action Mate on documenting the bug finder in the XenAPI team wiki
15:12:34 mate said he didn't do it
15:12:39 I raised a new one for next week
15:12:42 I will give it a low priority.
15:12:48 we are done here I guess
15:12:56 #topic blueprints
15:12:56 I'll be focusing on smoke.
15:13:08 +1 to smokestack
15:13:11 oh - what about your action John?
15:13:19 tss tss
15:13:20 it's done
15:13:27 I missed that :)
15:13:28 okay :
15:13:30 :)
15:13:30 we did that really early on I thought
15:13:32 tss tss
15:13:36 lol
15:13:42 so, xenapi-server-log
15:13:47 a quick update
15:14:02 chatting with people, it's a feature that is used loads
15:14:18 I was hoping to spend a day hacking to see how bad it would look for H
15:14:47 so I am probably going to take that and target for H-2 ish
15:14:51 but will see how it goes
15:14:59 Can I have a Q?
15:15:00 any more from blueprints?
15:15:03 sure
15:15:10 A patch is mentioned in the bp.
15:15:14 yup
15:15:30 it's not considered the best way forward
15:15:31 that patch is in all versions of XenServer and XCP though
15:15:37 It does not mean that you need to do anything with that patch, it is just there for documentation purposes
15:15:40 OK
15:15:44 Bob reads my mind.
15:15:45 I assumed Mate meant the xen patch?
15:15:55 I might use bits of that patch
15:16:00 it was a xenapi patch I think
15:16:17 https://github.com/jamesbulpin/xcp-xen-4.1.pq/blob/master/log-guest-consoles.patch ?
15:16:21 that's a xen patch - not xapi
15:16:28 oh, that one
15:16:35 Yes, I meant that one.
15:16:36 gotcha
15:16:37 in fact XAPI knows nothing about it at all
15:16:48 it's not that important
15:17:06 just for reference
15:17:19 cool, any more for blueprints, anything for docs, else straight to bugs
15:17:23 I've confirmed that it's in Tampa
15:17:28 I can confirm it's in Boston if you give me a second
15:17:47 yes, it's in Boston too.
15:17:51 Bob, can we use numbers instead of codenames?
15:17:54 But it is a XenServer patch to Xen - which is not upstream.
15:18:09 Sorry; it's in XenServer 6.0 and 6.1 (probably earlier too) - and therefore it'll be in XCP 1.6 too
15:18:30 So, back to that blueprint.
15:18:33 but it is not present in Xen upstream, therefore a change is needed to upstream xen if you want to use this on xapi-xcp on Ubuntu or CentOS etc
15:18:35 It has two parts.
15:18:46 One is an OS, and the other is a dom0 mod.
15:18:51 ??
15:18:57 dom0 mod - logrotate, etc.
15:18:57 oh, maybe
15:19:11 and other bits I think, but yes
15:19:22 hopefully will just add a script to help set that up
15:19:42 Could do the log rotate through a XAPI plugin triggered by a domU cronjob... a little ugly, but keeps a clean separation for dom0 changes
15:20:02 I guess it is a deployer choice
15:20:07 I would trigger it from dom0
15:20:11 in case domU is dead
15:20:14 +1 that was my plan
15:20:40 So dom0 is responsible for baking the rotated log tails
15:20:50 OS is just reading those.
15:20:55 +1
15:21:00 well I was assuming using a loopback filesystem so domU being dead would only mean consoles stopped getting logged at some point when the disk ran out of space but fair enough :)
15:21:26 planning to do both, using loopback inside dom0
15:21:32 and dom0 log rotate
15:21:39 will try it, and see how it goes
15:21:43 ok
15:21:46 and if the VNC terminals stay up
15:21:57 And what happens if the disk goes full
15:22:04 I meant the loopback goes full.
15:22:18 the logs stop growing for other users on that host
15:22:24 but it doesn't kill the host
15:22:33 And is the guest affected?
15:22:41 might do one per VM, but that is probably overkill
15:22:52 shouldn't be, all part of the testing
15:23:32 Let's move on, leave some space for J
15:23:41 indeed
15:23:58 any more blueprints, or any docs stuff?
15:24:01 Are we at the point where I can share my quantum experiences?
15:24:09 almost
15:24:13 well do you want to cover the devstack bp mate?
15:24:26 Okay.
15:24:29 Any questions, or reviews that you need us to do or something?
15:24:46 It is showing progress, I had some really exciting feedback this week, and thanks for that again.
15:24:53 I think we are getting there.
15:25:11 #topic OpenDiscussion
15:25:17 As soon as the networking patch is accepted, we can kill off eth0 and the rest of the stuff.
15:25:19 I think we are there already, fire away
15:25:27 Okay. Quantum
15:25:31 Are you prepared?
15:25:39 almost
15:25:40 or the networking stack formerly known as quantum
15:25:50 +!
15:25:54 +1
15:25:58 We don't know what its name is.
15:26:05 #link https://github.com/citrix-openstack/qa/blob/master/xenserver-quantum-devstack.sh
15:26:09 if I had a choice, I'd vote for Dave.
15:26:40 So that is a script which will set up a devstack with quantum.
15:26:55 the firewall driver is a worry, but probably correct
15:26:56 Using Maru's patches, and some spice on top of that.
15:27:04 which firewall driver?
15:27:04 what's the spice?
15:27:15 an existing devstack installation?
15:27:15 XEN_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
15:27:41 #link https://github.com/citrix-openstack/quantum/commit/b267632284ebb5f3f66137e85881d976f5d145c7
15:27:48 John, I linked in the "spice"
15:27:56 So, first things first.
15:28:23 Maru's DHCP patch will need to be modified slightly, and I am doing that this week.
15:28:31 Regarding the firewall driver.
15:29:18 As quantum is implementing the security groups (or not, at the moment), you need to turn off the nova one in order to avoid conflicts.
15:29:25 not sure why you had to change some of that stuff, but I haven't looked at quantum in a while
15:29:52 OK, you used to be able to do both I thought, never mind
15:29:58 So, that ugly patch.
15:30:08 It is about getting things working.
15:30:27 OK, looks like security groups need some TLC for XCP
15:30:53 So, regarding security groups.
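To make the firewall-driver point above concrete, a minimal sketch of the settings involved in handing security groups to quantum, assuming Grizzly-era option names (the localrc variable is the one quoted from the linked script; the nova.conf equivalents are an assumption for illustration, not taken from that script):

    # devstack localrc: stop nova installing its own iptables rules on the XenAPI side
    XEN_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

    # resulting nova.conf (illustrative): no-op nova firewall, security groups via quantum
    [DEFAULT]
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    security_group_api = quantum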
15:30:54 but it's a good step forward, regardless
15:31:28 I turned them off, because otherwise a race condition appears with the agent and the L3 plugin fighting for iptables.
15:31:53 And I spoke with maru.
15:31:55 hmm, fun
15:31:57 OK
15:32:50 any insights?
15:33:03 So he had a question that we might need to think about.
15:33:17 Does Citrix have any intention of providing a namespace-supporting dom0 kernel for XS/XCP? In Folsom, no L3 filtering was being performed in dom0, so namespace support was only required in domU. However, the security group implementation introduced in Grizzly will require that L3 filtering be performed in dom0, and the lack of namespace support will prevent a configuration that supports overlapping ips.
15:33:43 oh, yeah, that one
15:33:43 So that's something we need to do.
15:34:01 but you could just have a separate machine running dhcp
15:34:02 Honestly, I am not really keen on doing any L3 filtering on dom0
15:34:38 I also think it'd be hard to make that change to support namespaces
15:34:48 why is that?
15:35:06 XS 6.1 is out, 6.2 is nearly out, and my understanding is it needs a kernel change which might be tricky to convince people to take in a hotfix
15:35:44 I think we need to do some more investigation to understand the requirements.
15:35:56 If we can do the L3 filtering in the domU then that would be an easier fit than changing the dom0 kernel on released versions
15:36:56 but we can just use a separate server right?
15:37:09 one dedicated for dhcp
15:37:18 with a kernel that supports that stuff
15:37:38 DHCP is working, and indeed running in domU
15:37:51 I don't mean domU
15:38:05 I mean a separate full server, not a VM, just running DHCP
15:38:14 domU - other machine, isn't it the same?
15:38:35 It's not dom0
15:39:17 not quite
15:39:24 so does that mean Maru still thinks it needs namespaces in dom0?
15:39:36 well, there is no need for maru's patch I thought, if you run it on a separate machine
15:39:48 if you run dhcp on every server
15:39:53 something they might do this release
15:39:58 then you would want that on dom0
15:40:06 mostly because the L3 routing is going on there
15:40:27 I am not sure I understand you, John.
15:40:41 erm, it might help to draw a picture
15:40:51 good idea.
15:40:55 Let's take it offline.
15:41:11 and carry on with the meeting.
15:41:15 well, basically DHCP could be run on a server not running XenServer
15:41:24 and just do it as with any other hypervisor
15:41:38 it's only needed on XenServer to support single-box deployments
15:41:51 and multi-host, which is not yet upstream
15:41:56 yes, let's move on
15:42:09 I had one item, Xen Hackathon
15:42:16 it was more an advert that I am going
15:42:41 but that was all really
15:42:50 anything more?
15:43:02 I'm afraid that the hackathon has run out of spaces...
15:43:02 or we can quickly touch on quantum?
15:43:09 I was hoping to go, but won't be there
15:43:24 indeed, it was going to be to chat with those who were going
15:43:26 I'm assuming that the focus will be on Xapi in CentOS and not OpenStack?
15:43:35 well, it will be now
15:43:43 I might look at the console-log
15:43:46 if people are interested
15:43:55 that'd be great to get something upstream
15:44:09 oh, I wasn't thinking that side
15:44:22 someone is already upstreaming that patch
15:44:24 oh :D
15:44:29 good to hear!
15:44:45 so any more for any more?
15:45:03 I am done.
15:45:08 I will probably leave for the airport then...
15:45:14 I'm done too
15:45:17 Have a safe trip
15:45:20 y
15:45:20 thank you
15:45:23 see you later
15:45:24 Try not to drink too much guinnes
15:45:25 +s
15:45:26 #endmeeting