17:01:25 <hartsocks> #startmeeting VMwareAPI
17:01:26 <openstack> Meeting started Wed Jun 19 17:01:25 2013 UTC.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:27 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:29 <openstack> The meeting name has been set to 'vmwareapi'
17:01:44 <hartsocks> Who all is around to talk VMwareAPI stuff and OpenStack?
17:01:46 <ivoks> hi
17:01:50 <kirankv> hi
17:02:33 <hartsocks> Anyone else?
17:02:40 <Eustace> hi
17:03:00 <yaguang> hi
17:03:47 <hartsocks> Let's get started then.
17:03:56 <hartsocks> #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda
17:04:22 <hartsocks> Let's start this week with bug follow ups.
17:04:28 <hartsocks> #topic bugs
17:04:39 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1180044
17:04:40 <uvirtbot> Launchpad bug 1180044 in nova "nova boot fails when vCenter has multiple managed hosts and no clear default host" [High,In progress]
17:04:57 <hartsocks> I'll start with mine. This was downgraded from Critical to High
17:05:02 <cbananth> Hi
17:05:34 <hartsocks> I've posted a patch, but it is "work in progress". I can't seem to click that button for some reason.
17:05:59 <hartsocks> Our other open Critical bug is:
17:06:02 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1183192
17:06:04 <uvirtbot> Launchpad bug 1183192 in nova "VMware VC Driver does not honor hw_vif_model from glance" [Critical,Fix committed]
17:06:29 <hartsocks> @yaguang that's you I think
17:06:40 <yaguang> this has been fixed
17:06:48 <hartsocks> Okay! Good job!
17:06:49 <yaguang> maybe it needs a backport
17:07:03 <yaguang> should we backport it to Grizzly?
17:07:42 <hartsocks> I don't see why not myself. Any other opinions?
17:08:35 <kirankv> yes, since it's marked critical
17:08:51 <Sabari_> Hi, Sabari here
17:09:27 <hartsocks> I've tagged the bug for back port consideration.
17:09:41 <hartsocks> I see two other bugs that are open + critical
17:09:58 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1184807
17:10:00 <uvirtbot> Launchpad bug 1184807 in nova "Snapshot failure with VMware driver" [Critical,Confirmed]
17:10:38 <tjones> that one could be related to another one I've been working on - CopyVirtualDisk_Task: the server responds with "not implemented"
17:11:04 <tjones> if no one else has taken it I can take a look
17:11:14 <hartsocks> @tjones thanks.
17:11:39 <hartsocks> Next up...
17:11:43 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1192192
17:11:44 <uvirtbot> Launchpad bug 1192192 in nova "Live Migration regression for vmware VCDriver" [Critical,New]
17:12:00 <hartsocks> I've not marked this as "confirmed" yet.
17:12:39 <hartsocks> Can anyone else look at this?
17:12:40 <kirankv> I'm not sure if we want to support this, since the cluster is a compute node
17:12:55 <kirankv> and we should not expose the hosts within it to OpenStack
17:13:27 <Sabari_> That's true. But would it make sense in any other scenario?
17:13:38 <Sabari_> Like with standalone hosts within VC?
17:14:43 <hartsocks> The feature actually supports live migration between compute nodes… which may be clusters or standalone hosts.
17:15:06 <hartsocks> If we want to disable it in VC + Cluster mode… we should say something to that effect I suppose.
17:16:12 <hartsocks> For example: override the live_migration API call in the driver to raise an exception when using clusters.
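(A minimal sketch of the override suggested above, assuming a simplified driver class; the `cluster_name` attribute and the method signature are illustrative, not the actual nova code.)

```python
# Hypothetical sketch: refuse live migration when the VC driver is
# managing a cluster. Class shape and attribute names are assumptions.
class VMwareVCDriver(object):
    def __init__(self, cluster_name=None):
        self.cluster_name = cluster_name

    def live_migration(self, context, instance, dest,
                       post_method, recover_method,
                       block_migration=False, migrate_data=None):
        if self.cluster_name:
            # Hosts inside a cluster are hidden from the scheduler, so a
            # host-to-host migration target makes no sense in this mode.
            raise NotImplementedError(
                "live_migration is not supported when the compute node "
                "maps to a vCenter cluster")
        # ...otherwise proceed with the vMotion-based implementation...
```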
17:16:44 <kirankv> is it possible to do a migration between a standalone host (ESXDriver) and a cluster that uses the VCDriver?
17:17:18 <hartsocks> No. It is not.
17:17:30 <hartsocks> At least I don't think it is.
17:17:56 <hartsocks> … hmm ...
17:18:05 * hartsocks imagining a scenario how that could work...
17:18:14 <kirankv> @Sabari, for what configuration of standalone hosts are you indicating that live migration is possible?
17:18:42 <hartsocks> Live migration should work in Clusters without DRS and local storage.
17:19:14 <kirankv> i.e with storage vMotion enabled?
17:19:52 * hartsocks googles some references
17:20:44 <kirankv> why set up a cluster and not have DRS enabled?!
17:21:01 <Sabari_> I am talking about ESXi hosts managed by a VC. Shouldn't it be possible to use vMotion? At least in vSphere 5.1
17:21:16 <Sabari_> I need to confirm the scenarios myself, too
17:21:19 <hartsocks> I've not found a tidy reference for this.
17:22:39 <hartsocks> vMotion allows "live migration", and this can be made to work between hosts… so we *can* support this.
17:22:50 <kirankv> let's work on a fix only if we find some compelling scenarios
17:23:14 <hartsocks> I think vMotion has some compelling scenarios.
17:23:31 <kirankv> ok
17:23:45 <hartsocks> People seem to like it anyhow.
17:23:46 <Sabari_> Maybe we can update the bug with a scenario that is compelling enough.
17:24:08 <Sabari_> We can take it as an Action item
17:24:15 <hartsocks> I've not done any more on this than report it on behalf of another user.
17:24:46 <Sabari_> Same here.
17:24:52 <hartsocks> If anyone wants to take this over I'm not actively working on it.
17:25:08 <hartsocks> #action follow up on "live migration" bug use cases
17:25:23 <Sabari_> I can take it to the point of saying whether we need to support this
17:25:38 <hartsocks> @Sabari_ please do.
17:25:45 <Sabari_> Sure
17:25:57 <hartsocks> Okay. Now comes the point in the meeting when I ask...
17:26:06 <hartsocks> #topic blocker bugs
17:26:17 <hartsocks> Does anyone have any blocker issues?
17:26:31 * hartsocks waits on responses
17:27:08 <Sabari_> I guess most of the critical bugs are already being addressed.
17:27:15 <hartsocks> Okay...
17:27:24 <hartsocks> Before I move to blueprints...
17:27:34 <hartsocks> #topic code reviews
17:27:56 <hartsocks> Reviews Ready for +2's:
17:27:57 <hartsocks> https://review.openstack.org/#/c/30036/ <- VNC rebased (same old patch)
17:27:57 <hartsocks> https://review.openstack.org/#/c/29396/
17:27:57 <hartsocks> https://review.openstack.org/#/c/30822/
17:27:57 <hartsocks> https://review.openstack.org/#/c/33482/
17:28:20 <hartsocks> I had a patch that needed to be rebased, now it gets to go through the whole approval process again.
17:28:36 <hartsocks> These are all ready for +2's and I'll be posting about them on Friday to the main list.
17:28:49 <hartsocks> Needs VMwareAPI expert's attention:
17:28:49 <hartsocks> https://review.openstack.org/#/c/27885/
17:28:49 <hartsocks> https://review.openstack.org/#/c/29453/
17:28:49 <hartsocks> https://review.openstack.org/#/c/30282/
17:28:49 <hartsocks> https://review.openstack.org/#/c/30289/
17:28:50 <hartsocks> https://review.openstack.org/#/c/32695/
17:28:50 <hartsocks> https://review.openstack.org/#/c/33100/
17:29:23 <hartsocks> These need attention from people who know the VMwareAPI well. Please pick one if you know your API calls well and do a review.
17:29:51 <hartsocks> #topic blueprints
17:30:02 <hartsocks> #link https://blueprints.launchpad.net/openstack?searchtext=vmware
17:30:15 <hartsocks> I'd like to have folks look at...
17:30:24 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:30:42 <hartsocks> @yaguang I think that's you again.
17:30:49 <yaguang> yes
17:31:07 <yaguang> there are more details to discuss on it
17:31:20 <hartsocks> #link https://review.openstack.org/#/c/33238/ <- review on the topic
17:31:40 <yaguang> this is an initial patch
17:32:33 <yaguang> it just resizes the root disk of the VM to the flavor-specified size
17:32:34 <hartsocks> In particular...
17:32:38 <hartsocks> #link https://review.openstack.org/#/c/33238/3/nova/virt/vmwareapi/disk_util.py
17:33:01 <hartsocks> So… this BP is about getting a VMDK to grow or shrink based on the flavor?
17:33:11 <yaguang> yes
17:33:15 <yaguang> exactly
17:33:46 <hartsocks> I see. So this first pass is about growing the disk.
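(For reference, growing a VMDK is a single VirtualDiskManager call. A minimal sketch using pyVmomi for brevity; the actual patch goes through nova's suds-based session, and `extend_root_disk` is an illustrative name.)

```python
from pyVim.task import WaitForTask


def extend_root_disk(si, datacenter, vmdk_path, root_gb):
    """Grow an existing VMDK to the flavor's root_gb size. Note that
    ExtendVirtualDisk_Task can only grow a disk, never shrink it."""
    disk_manager = si.content.virtualDiskManager
    task = disk_manager.ExtendVirtualDisk_Task(
        name=vmdk_path,                      # e.g. "[datastore1] vm/vm.vmdk"
        datacenter=datacenter,
        newCapacityKb=root_gb * 1024 * 1024,
        eagerZero=False)
    WaitForTask(task)
```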
17:34:13 <hartsocks> Is there more to do (besides figuring out how to test this)?
17:34:19 <yaguang> and the second will add an ephemeral disk
17:34:54 <hartsocks> When I look up "ephemeral disk" it sounds like this is a disk that lives only as long as the VM that owns it.
17:34:58 <hartsocks> Is that right?
17:35:05 <yaguang> yes
17:35:25 <yaguang> it's usually a vdb device inside the VM
17:35:39 <yaguang> if you don't use a remote volume
17:36:15 <yaguang> so to implement this, we just create a disk and attach it to the VM before the VM starts
17:36:47 <ssurana> but how would the end user request this in OpenStack?
17:37:02 <hartsocks> If our driver works correctly (sometimes it doesn't) the VMDK should be destroyed when the VM is destroyed. The disk is created and attached before the VM boots too... Isn't that the same as an "ephemeral disk"?
17:37:28 <yaguang> here I want to confirm: is it supported by the VMware API to reconfigure the VM configuration to add a second disk?
17:37:47 <ssurana> yes, the VMware API can be used to add disks to a VM
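(To confirm the mechanics: adding a disk is a ReconfigVM_Task with a device change whose fileOperation is "create". A minimal pyVmomi sketch; the controller key and unit number are illustrative assumptions.)

```python
from pyVmomi import vim
from pyVim.task import WaitForTask


def add_ephemeral_disk(vm, size_gb):
    """Attach a new thin-provisioned disk to `vm`; it lives in the VM's
    own datastore directory and is destroyed along with the VM."""
    disk = vim.vm.device.VirtualDisk()
    disk.key = -101                       # negative key marks a new device
    disk.controllerKey = 1000             # assumed: first SCSI controller
    disk.unitNumber = 1                   # assumed: slot after the root disk
    disk.capacityInKB = size_gb * 1024 * 1024
    disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        diskMode='persistent', thinProvisioned=True, fileName='')

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    WaitForTask(vm.ReconfigVM_Task(
        spec=vim.vm.ConfigSpec(deviceChange=[change])))
```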
17:37:55 <yaguang> the user just creates an instance with the flavor he wants
17:38:14 <yaguang> flavors can be added/removed/changed by the cloud admin
17:38:54 <hartsocks> So the difference here would be...
17:39:04 <hartsocks> The user adds a new disk to the VM.
17:39:08 <yaguang> yes, that's what an ephemeral disk is
17:39:12 <kirankv> can the root disk size be changed even when the new instances are linked clones?
17:39:20 <hartsocks> The new disk is only alive as long as the VM?
17:39:55 <hartsocks> @kirankv good question. I don't think so.
17:40:34 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
17:40:55 <hartsocks> I posted this so we could have some discussion about changing the "clone strategy" in the driver.
17:41:23 <hartsocks> I wanted the user to be able to pick which was best for them.
17:42:19 <hartsocks> I think this feature may complement the resize feature.
17:42:42 <kirankv> #2 in the blueprint offers more dynamic flexibility
17:44:01 <hartsocks> @kirankv yes, that is the "blue sky" part that I don't know if we can actually provide easily. It would be nice to have parts of the driver switch strategies in general. This would be a good first test of that idea.
17:44:28 <hartsocks> Note: "Full clones do not require an ongoing connection to the parent virtual machine. Overall performance of a full clone is the same as a never-cloned virtual machine, while a linked clone trades potential performance degradation for a guaranteed conservation of disk space. If you are focused on performance, you should prefer a full clone over a linked clone."
17:45:40 <hartsocks> So back to "ephemeral disk" … these are related ideas it seems. We can't actually provide @yaguang's feature without also addressing this problem of how cloning is done.
17:46:48 <hartsocks> Should we fold these into the same large blueprint? Are they too close to each other?
17:47:03 <hartsocks> Should we express some kind of dependency between them?
17:47:32 <kirankv> continuing on ephemeral disks: these will be independent disks for each instance, and therefore we can still add the right disk size based on the flavor
17:47:55 <yaguang> they're very close; all are about how we handle disk usage for instances
17:48:40 <hartsocks> @yaguang do you need the "clone strategy" feature first or will you be able to work without it?
17:49:27 <yaguang> I am concerned about how to handle a root disk shared by instances of the same flavor
17:50:36 <hartsocks> @yaguang yes that's the problem I'm also concerned about… I'm not sure how to solve that without switching to the "full clone" strategy universally.
17:51:33 <yaguang> there is an option to allow the user to choose whether to use linked clones or not
17:52:35 <hartsocks> If we make that a new feature...
17:52:47 <hartsocks> where you can check something
17:52:51 <hartsocks> like...
17:52:59 <hartsocks> "is_full_clone(disk)"
17:53:19 <hartsocks> … then you could either allow for a resize or print a warning.
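(A sketch of that check, using the `is_full_clone` name from the chat as a hypothetical helper: a linked clone's disk backing chains to a parent backing, while a full clone's does not.)

```python
import logging

LOG = logging.getLogger(__name__)


def is_full_clone(disk):
    """True if `disk` (a vim.vm.device.VirtualDisk) is a full clone,
    i.e. its flat backing has no parent in the chain."""
    return getattr(disk.backing, 'parent', None) is None


def maybe_resize(disk, new_capacity_kb):
    # Illustrative branch: grow full clones, warn on linked clones whose
    # shared parent disk must not be touched.
    if is_full_clone(disk):
        ...  # safe to grow: issue ExtendVirtualDisk_Task as sketched above
    else:
        LOG.warning("%s is a linked clone; skipping resize",
                    disk.backing.fileName)
```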
17:54:28 <hartsocks> I will record some of this conversation in the BluePrint for the other "clone strategy" feature.
17:54:29 <yaguang> yes
17:55:12 <hartsocks> Okay. Last 5 minutes.
17:55:17 <hartsocks> #topic open discussion
17:55:27 <hartsocks> Anything else people need to talk about?
17:55:56 <ivoks> thanks to yaguang for staying up so late :)
17:56:36 <hartsocks> Yes! Thank you @yaguang!  That has helped so much!
17:56:50 <kirankv> @Shawn, https://review.openstack.org/#/c/30628/7/nova/virt/vmwareapi/vm_util.py
17:57:15 <yaguang> glad to  work with you
17:57:36 <kirankv> did you get any additional input from your team?
17:58:00 <hartsocks> #action hartsocks follow up on https://review.openstack.org/#/c/30628/7/nova/virt/vmwareapi/vm_util.py traversal specs
17:58:29 <kirankv> one last thing, FC support in the VCdriver
17:58:40 <hartsocks> @kirankv I have not. I did, however, finally get a working traversal spec to run under OpenStack! So now we know how to do them.
17:59:14 <hartsocks> FC? I'm drawing a blank. What is FC?
17:59:23 <ivoks> fiber channel?
17:59:26 <hartsocks> Feature Complete?
17:59:28 <kirankv> Fibre Channel LUNs
17:59:31 <ssurana> I will look into the review https://review.openstack.org/#/c/30628/7/nova/virt/vmwareapi/vm_util.py
17:59:33 <hartsocks> Ah.
17:59:33 <tjones> LOL
17:59:35 <kirankv> we have iSCSI support
18:00:05 <hartsocks> @ssurana thank you o' honorable traversal spec guru!
18:00:37 <kirankv> for instances in a cluster, when a LUN is attached, should it get presented to all hosts in the cluster or only the one on which the instance is hosted?
18:00:45 <hartsocks> @kirankv do you have a link to code?
18:00:57 <kirankv> If it's only presented to one host, then the VM can't be live migrated in a DRS cluster
18:01:17 <kirankv> no, I don't have the patch in yet
18:02:05 <hartsocks> @kirankv It sounds like a good BP. I think I heard someone else talk about iSCSI support too though.
18:02:13 <kirankv> ok, we can discuss it in the next meeting
18:02:20 <hartsocks> I will put that on the top of our next meeting agenda
18:02:41 <hartsocks> #action next meeting: Fibre Channel, iSCSI support for VCDriver
18:02:48 <hartsocks> That's all the time we have.
18:02:56 <kirankv> #link https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver
18:03:04 <hartsocks> #openstack-vmware is open all the time
18:03:10 <kirankv> Thanks!
18:03:19 <hartsocks> head over there for further discussion
18:03:23 <hartsocks> #endmeeting