17:16:38 #startmeeting xenapi
17:16:39 Meeting started Wed Dec 19 17:16:38 2012 UTC. The chair is johngarbutt. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:16:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:16:42 The meeting name has been set to 'xenapi'
17:16:47 johngarbutt: sorry about that
17:17:08 jgriffith: it happens, no worries, have a good Christmas!
17:17:15 #topic blueprints
17:17:20 hi all
17:17:27 hi
17:17:34 Greetings
17:17:36 any blueprints to discuss? I think config drive
17:17:48 Yes, I'd love to talk about config drive
17:17:49 mikal: how's it going? I think matelakat took a look at the code you pushed
17:18:01 y, I have 2 notes
17:18:11 I assume it's mostly wrong...
17:18:22 I think it is fine
17:18:44 So, question 1: is it important that the filesystem's label is config-2?
17:18:50 (I guess it is)
17:18:55 Yes. cloud-init checks for that.
17:18:57 johngarbutt: hey you :)
17:19:04 it looked like what we were planning, from a quick look over the shoulder
17:19:08 smoser would be more definitive, but I'm pretty sure it's important
17:19:21 that sounds like what I heard too
17:19:29 Okay, so I think _generate_disk's name-label won't be the filesystem's label.
17:19:47 name-label is a sort of name for the vdi.
17:20:16 config-2 is important, yes.
17:20:16 Oh, I can see that
17:20:27 so we need to modify _generate_disk, so that when it calls mkfs, it passes a label value (which should be a new argument I guess)
17:20:30 It just needs to be passed to the mkfs call in _generate_disk. I can add that.
17:20:38 sounds good
17:20:51 And the other stuff is around the vm_utils modification.
17:21:27 so _generate_disk creates a vdi, and then creates a link to the instance by creating a vbd.
17:21:40 so after that call, your fresh vdi has a vbd.
17:22:02 And in the next call, you will have another vbd that connects the very same vdi to your compute node.
17:22:03 what matters is that from the guest there is a block device present that has an iso9660 filesystem on it with label config-2.
17:22:28 okay, so we definitely need to make that labelling happen.
17:22:29 ah, so vfat no good?
17:22:31 smoser: this version will be vfat
17:22:45 (which is unfortunate, but should probably work)
17:22:49 vfat is supported by the code already; I understood there was complexity with ISO9660 for xen?
17:23:09 yes, I guess attaching isos is not as easy as the disks.
17:23:20 just for the record, you should attach disks!
17:23:22 there isn't a super easy way of adding an ISO without that being the only cd drive, from memory
17:23:23 *not* "isos"
17:23:33 the disk should have content that happens to be an ISO9660 filesystem.
17:23:45 okay, that could work, I guess.
17:23:49 just like if you'd done: mkisofs /dev/vdb
17:24:12 that doesn't turn my block device into a cdrom :)
17:24:13 I guess that should work, it's just a block device, I think, but would have to check
17:24:17 right
17:24:20 good point
17:24:30 So... It's actually harder to do that anyway.
17:24:45 there was a security issue around this before right?
17:24:53 As best as I can see I'd have to create the iso9660 filesystem to one side and then dd it onto the vbd
17:24:56 it would be good not to add that back
17:25:00 mikal, and what do you think about using the existing config-drive code segments for the filesystem generation?
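As an aside on the first question above: the change being discussed is simply threading a volume label through to the mkfs invocation so that the guest sees a filesystem labelled config-2. A minimal sketch, assuming a hypothetical helper rather than the real vm_utils._generate_disk signature:

    # Sketch only: shows a label argument being passed through to mkfs, as
    # discussed above. The helper name and signature are hypothetical; the
    # real nova vm_utils._generate_disk takes different arguments.
    import subprocess

    def make_fs(dev_path, fs_type='vfat', label=None):
        """Create a filesystem on dev_path, optionally setting its label."""
        if fs_type == 'vfat':
            cmd = ['mkfs.vfat']
            if label:
                cmd += ['-n', label]   # vfat volume labels are limited to 11 chars
        else:
            cmd = ['mkfs', '-t', fs_type]
            if label:
                cmd += ['-L', label]   # ext2/3/4-style label option
        cmd.append(dev_path)
        subprocess.check_call(cmd)

    # cloud-init looks for the label 'config-2' on the config drive, e.g.:
    # make_fs('/dev/xvdc', fs_type='vfat', label='config-2')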
17:25:12 Whereas with vfat I can just mount the new vbd and do the thing
17:25:36 Ok, so I think we now have three questions in flight and my brain is full
17:25:45 mikal: dd was what we were talking about doing at one point, not very graceful I know
17:25:46 Let's stick with the fs format for a sec...
17:25:55 ok.
17:26:14 smoser: I thought configdrive supported vfat? It's certainly an option in the code. Will cloud-init get angry with a vfat config drive?
17:26:26 smoser: if it won't work, we should remove it from the code
17:26:36 the code probably supports writing vfat, but i'd really like to not do that if possible.
17:26:41 cloud-init will probably find it.
17:26:55 but potentially using vfat just complicates a guest
17:26:58 smoser: yeah, the code _definitely_ is willing to write vfat
17:27:09 smoser: I don't know if anyone actually does it though
17:27:13 btw, do we have any tests that would show how to use configdrive?
17:27:16 smoser: I think it was for backwards compatibility
17:27:30 i really don't believe that xen can possibly be silly enough to inspect the content of a thing it is about to attach and say "oh, that has an ISO9660 filesystem on it, I will attach it as a cdrom"
17:27:51 smoser: agreed
17:27:59 if it was, and i booted a system with 2 block devices, and then, from the guest, did 'mkisofs /dev/vdb', would a reboot magically make it read-only?
17:28:24 to xen, this is just data on a disk.
17:28:42 agreed, I was thinking about getting an ISO file read by XenServer
17:28:59 if we make a disk contain an ISO, as you say, that should be fine
17:29:02 smoser: hmmm. The code as released in Folsom lets users use a flag (config_drive_format) to request vfat
17:29:15 So I think we'd have to have a more public discussion if we wanted to drop that
17:29:25 mikal, that's fine. i'm not saying rip it out. i'm saying don't proliferate it, or make it the default on a hypervisor.
17:29:36 smoser: ok
17:29:41 make the working expectation be that it is iso9660 always.
17:30:02 So, back to the code?
17:30:03 Alright. I will rearrange the code to do an iso9660, which may or may not require some horrible dd hackery
17:30:12 Yep, so next I think was the vbd thing.
17:30:21 I just saw your review comments. I haven't read them yet.
17:30:27 I assume that's just a case of some refactoring?
17:30:46 yes.
17:30:55 Ok, I'm not too worried about that one then
17:31:00 What was the third thing again?
17:31:01 ?
17:31:18 I had two, the label, and the vbd stuff.
17:31:20 Oh, code reuse for generation
17:31:34 I think it's a really good idea to keep as much of the logic in virt/configdrive.py as possible
17:31:38 That way you get updates for free
17:31:44 +1
17:31:47 +1
17:31:58 cool, that is looking good
17:32:05 A few other quick things -- your file injection didn't support admin passwords. Config drive does. Should config drive in xen set admin passwords?
17:32:09 So basically, that would mean that we won't ask _generate_disk to create the fs.
17:32:16 #link https://blueprints.launchpad.net/nova/+spec/xenapi-config-drive
17:32:20 matelakat: correct
17:32:38 matelakat: well, it will create an FS in a temp file, and then copy it across to the block device
17:32:50 mikal: y
17:33:08 mikal: you mean dd, right?
17:33:14 matelakat: yep
17:33:38 So, let's pick up the admin passwords question.
17:33:41 one sec, what is the question about password injection?
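The plan that emerges above (keep the filesystem generation in virt/configdrive.py, build the ISO9660 image with the config-2 label and the admin password in a temporary file, then dd it onto the block device exposed by the vbd) might look roughly like the sketch below. The ConfigDriveBuilder and InstanceMetadata names follow the Folsom-era libvirt usage and are an assumption here; the exact signatures and the device path are illustrative.

    # Sketch of the "build the ISO to one side, then dd it onto the vbd" plan.
    # Assumes nova.virt.configdrive.ConfigDriveBuilder and
    # nova.api.metadata.base.InstanceMetadata behave as in the Folsom-era
    # libvirt driver; names, signatures and the device path may differ.
    import os
    import shutil
    import subprocess
    import tempfile

    from nova.api.metadata import base as instance_metadata
    from nova.virt import configdrive

    def write_config_drive(instance, dev_path, admin_password=None, files=None):
        extra_md = {}
        if admin_password:
            # With the agent disabled, the admin password has to travel on
            # the config drive itself.
            extra_md['admin_pass'] = admin_password

        inst_md = instance_metadata.InstanceMetadata(instance,
                                                     content=files,
                                                     extra_md=extra_md)
        cdb = configdrive.ConfigDriveBuilder(instance_md=inst_md)
        tmp_dir = tempfile.mkdtemp()
        try:
            iso_path = os.path.join(tmp_dir, 'configdrive')
            # make_drive() produces the ISO9660 image with the config-2 label
            cdb.make_drive(iso_path)
            # Copy the finished image onto the block device the vbd exposes on
            # the compute node (would normally run as root via utils.execute).
            subprocess.check_call(['dd', 'if=%s' % iso_path,
                                   'of=%s' % dev_path, 'bs=1M'])
        finally:
            cdb.cleanup()
            shutil.rmtree(tmp_dir, ignore_errors=True)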
17:33:48 xen does that using the agent at the moment
17:33:49 :-)
17:33:57 configdrive wants to inject passwords onto the config disk
17:34:05 I think that is fine
17:34:06 Well, I don't understand the agents very well
17:34:15 Is there an agent if you're using config drive?
17:34:15 we added some flags, so the agent is optional
17:34:27 let me look for the changeset.
17:34:29 I think we turn off the agent for config drive
17:34:35 at this stage anyway
17:34:47 So therefore we _have_ to have the admin password in the config drive, yeah?
17:34:48 we can look at whether there are things it wants to do later
17:34:59 mikal: I guess
17:35:11 Cool
17:35:36 #link https://review.openstack.org/15212
17:35:40 we could look at doing later: agent does later password changes
17:35:53 Yeah, I had a question about that for smoser
17:35:53 hang on, now in English...
17:36:08 smoser: does cloud-init only run at boot? How are password changes done later?
17:36:51 the agent can currently use xenstore to do two-way communication, so it can reset the password later; there was talk of adding something like a place to post an encrypted password and a place to poll and see if a password reset is required
17:37:04 mikal, tcp, puppet, any other daemon.
17:37:44 smoser: ok, so cloud-init is boot only and then you have to be an adult? That's cool because there's no attempt to update the configdrive with new data later, which would be ... complicated
17:37:47 it's more for windows, for users just doing things the old way; they need some other way, so maybe it is a bit of an edge case
17:37:57 mikal: the xenapi_disable_agent config option could be used to turn off the agent.
17:38:22 OK, so any other configdrive things?
17:38:24 matelakat: cool. I haven't got as far as actually running this code yet. I need to build a test environment first.
17:38:34 No, I think that's it from me. Sorry for taking so much time.
17:38:52 mikal: devstack works well for that, not sure what you guys use internally
17:38:52 johngarbutt: done
17:39:21 not tried it, but you should be able to run XenServer inside VirtualBox, and run devstack on the VirtualBox VM
17:39:41 no problem, it was good to chat about that
17:39:42 johngarbutt: that's my plan, but I only downloaded XenServer yesterday
17:39:53 cool
17:40:04 any other blueprint?
17:40:40 anyone got news on the idempotent action stuff?
17:41:06 johngarbutt: speaking of blueprints: https://blueprints.launchpad.net/cinder/+spec/fibre-channel-block-storage
17:41:39 pvo: were you guys going to look at OVS support?
17:41:41 interesting
17:41:55 zykes: i see the plans are KVM only at the mo
17:42:32 zykes: I think there is a new SR being added to help with HBA support to attach to random LUNs, so that might allow XenServer to work with these things
17:43:15 johngarbutt: eta?
17:43:44 zykes: no idea right now, let me find out, there may be something on the public XCP repos somewhere
17:44:00 any more blueprint stuff, before we move to docs?
17:44:10 ovs what btw?
17:44:18 Open vSwitch
17:44:37 #topic docs
17:44:49 anyone with any specific docs issues today?
17:45:11 The only issue, I guess, is that I need to document the XenAPINFS stuff.
17:45:14 johngarbutt: yeh, but for what :)
17:45:26 docs relating to XenServer and XenAPI support
17:45:49 sorry for bothering, but ovs + ?
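For reference on the two options named in the discussion above, the Folsom-era flags live in the flat [DEFAULT] section of nova.conf, so disabling the agent and pinning the drive format would look something like this sketch (iso9660 was already the default config_drive_format at the time):

    [DEFAULT]
    # Turn off the XenAPI guest agent so the config drive is the metadata path
    xenapi_disable_agent = True
    # Keep the config drive as ISO9660 rather than vfat
    config_drive_format = iso9660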
17:45:54 #action matelakat to document XenAPI NFS
17:46:10 zykes: OVS + XenServer + Quantum
17:46:21 #topic bugs
17:46:36 any killer bugs people want to discuss, preferably XenServer-related ones
17:46:51 We had some floating-ip issues this week, see the fix here:
17:47:08 #link https://review.openstack.org/18337
17:47:15 right, with nova-network HA flatdhcp
17:47:23 multihost
17:47:37 sorry yes, that is what I meant by HA
17:47:39 y
17:47:48 And the resize stuff
17:48:12 we ran tempest tests, and the flavor was smaller than the image, and the shrink operation failed.
17:48:24 #action matelakat to raise a resize bug
17:48:24 But I haven't raised a bug.
17:48:29 y.
17:48:51 OK, so any more?
17:49:03 #topic QA
17:49:28 not heard from the Rackspace QA team yet
17:49:37 some random failures while running a 12.04 guest on volume operations.
17:49:44 mostly timeouts
17:49:48 there was hope to start co-ordinating efforts
17:49:55 as mentioned in the Folsom release notes, right?
17:50:18 OK, moving on if nothing else...
17:50:26 #topic AOB
17:50:34 Any more for any more?
17:50:42 pass
17:51:01 uhm, johngarbutt: doesn't it have OVS support already?
17:51:32 XenServer has OVS support, Quantum has OVS support, but the two don't play well together
17:51:44 there are two patches pending to fix that
17:51:50 from Internap
17:52:02 #topic date of next meeting
17:52:09 next week is Christmas!
17:52:23 I vote we skip next week, and chat again the following week?
17:52:37 What's the date exactly?
17:52:56 Jan 2nd
17:53:18 sounds like that is everything
17:53:20 thanks all!
17:53:21 hmm, I don't expect too much activity, but let's go for it.
17:53:28 #endmeeting