17:00:15 <hartsocks> #startmeeting VMwareAPI
17:00:16 <openstack> Meeting started Wed Jun 26 17:00:15 2013 UTC.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:19 <openstack> The meeting name has been set to 'vmwareapi'
17:00:30 <hartsocks> #topic greetings
17:00:37 <hartsocks> Who's here?
17:00:45 <kirankv> hi!
17:01:11 <hartsocks> @kirankv hey!
17:01:21 <Sabari> Hi, this is sabari here
17:01:36 <cbananth> hi
17:01:59 <danwent> hello
17:02:50 <hartsocks> anyone else?
17:04:24 <hartsocks> Let's get rolling then.
17:04:28 <hartsocks> #topic agenda
17:04:32 <hartsocks> #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda
17:04:47 <hartsocks> We kicked off with bugs last time so this time let's start with blueprints.
17:05:29 <hartsocks> Is anyone around to discuss Fibre Channel and iSCSI support for the VCDriver?
17:05:36 <kirankv> yes
17:05:56 <kirankv> the bp is specifically for the fibre channel support
17:06:15 <hartsocks> okay, let's move into that topic then...
17:06:20 <hartsocks> #topic blueprints
17:06:31 <hartsocks> @kirankv do you have a link to the BluePrint?
17:07:07 <kirankv> https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver
17:07:15 <kirankv> #link https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver
17:07:49 <kirankv> the implementation would be similar to iSCSI
17:08:01 <hartsocks> How is this coming?
17:08:27 <kirankv> the only open question is whether the LUN should be presented to all the hosts in the cluster or only to the host on which the instance runs
17:09:17 <kirankv> we are modifying the iSCSI code to enable FC attach as well; otherwise there will be a lot of code repetition
17:09:33 <kirankv> I'm working on this right now
17:09:56 <kirankv> a work-in-progress patch should be available by Monday
17:11:22 <kirankv> does anyone object to, or like, the idea of presenting the LUN to all the hosts in the cluster rather than only to the host the instance runs on?
17:11:59 <hartsocks> Do we have to decide?
17:12:07 <hartsocks> Can it be a configuration?
17:12:46 <kirankv> well, I'm going with presenting to all hosts, since cinder is working on facilitating multi-attach of volumes
17:13:27 <kirankv> it can be a config option; not presenting to all hosts would mean live move of the VM is not possible
17:13:37 <hartsocks> That would seem to allow for the LUN as shared storage between the hosts.
17:14:09 <hartsocks> That second option… would be useful for a LUN being used by a single VM.
17:14:30 <hartsocks> I'm not certain that this is an important use case… but this is all so young I don't want to exclude it.
17:15:11 <hartsocks> If we can only have one...
17:15:23 <hartsocks> I would prefer the LUN be available to all hosts… but...
17:15:35 <hartsocks> I like to keep options open for later.
17:15:44 <hartsocks> Does anyone else have an opinion?
17:16:53 <hartsocks> I'll take that as a resounding *shrug*
17:17:34 <hartsocks> @kirankv any other points on this blueprint?
17:18:03 <kirankv> we will be testing this using the 3PAR driver, but it should work for others as well
17:18:44 <kirankv> no other points to add for this bp
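[A configurable LUN presentation scope, as floated above, might look roughly like the following in Nova's oslo.config style. This is a minimal sketch; the option name is hypothetical and not part of the actual patch.]

    # Minimal sketch: a hypothetical nova.conf option for LUN presentation
    # scope, using oslo.config as Nova does (Havana-era import path).
    from oslo.config import cfg

    vmware_opts = [
        cfg.BoolOpt('vmware_present_lun_to_cluster',
                    default=True,
                    help='If True, present FC/iSCSI LUNs to every host in '
                         'the cluster (needed for live move of the VM); if '
                         'False, present only to the host running the '
                         'instance.'),
    ]

    cfg.CONF.register_opts(vmware_opts)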
17:19:10 <hartsocks> Okay.
17:19:17 <hartsocks> Moving on...
17:19:47 <hartsocks> #link https://blueprints.launchpad.net/glance/+spec/hypervisor-templates-as-glance-images
17:19:58 <hartsocks> Is anyone working on this?
17:20:13 <hartsocks> I think this is you again @kirankv
17:20:28 <kirankv> I was working on it, but have now moved to the FC driver
17:20:45 <hartsocks> You have H-2 listed as your target.
17:21:04 <hartsocks> If you get the FC driver through … you won't hit H-2 on that blueprint.
17:21:35 <hartsocks> Can you move the Milestone target to H-3? Do you think you could hit that?
17:21:41 <kirankv> it might be difficult; I will change the target date by Monday
17:22:07 <hartsocks> On the plus side...
17:22:10 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:22:17 <hartsocks> Feels like it is almost there.
17:22:53 <hartsocks> … or at least moving… I see a new patch up here...
17:23:00 <hartsocks> #link https://review.openstack.org/#/c/30282/
17:23:00 <kirankv> hopefully should make it
17:23:26 <hartsocks> I'll solicit folks who are in this IRC meeting to take a look at the latest patch.
17:23:52 <hartsocks> … so … folks … (you know who you are)
17:24:00 <hartsocks> Okay...
17:24:08 <hartsocks> Any Canonical folks in the meeting?
17:24:46 <hartsocks> We have two related BPs I'll just touch on...
17:24:50 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
17:25:05 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:25:52 <hartsocks> I'll talk to Yaguang Tang off IRC and see if he can move his BP out to H-3 or if we can hand off work for the blocking BP.
17:26:19 <hartsocks> #action followup with Yaguang Tang on BPs
17:26:34 <hartsocks> Are there other BPs that people need to discuss?
17:28:09 * hartsocks whistles a merry tune for a moment in case someone is typing
17:28:34 <hartsocks> okay
17:28:39 <hartsocks> #topic bugs
17:28:49 <Sabari> :)
17:28:57 <hartsocks> Heh
17:29:09 <hartsocks> @Sabari you have a pet bug you want to parade out?
17:29:43 <Sabari> I wanted to talk about the live migration bug
17:29:45 <Sabari> https://bugs.launchpad.net/nova/+bug/1192192
17:30:20 <hartsocks> oh dear. I almost forgot about this one.
17:31:07 <Sabari> We can do vMotion between clusters (without shared storage) in VC >= 5.1
17:31:32 <Sabari> and with shared storage in VC < 5.1
17:31:41 <Sabari> does this mean we should support this feature ?
17:31:55 <hartsocks> I think this interacts with that BP...
17:32:18 * hartsocks looking up BP
17:32:19 <Sabari> but the clusters have to be within the same VC, along with a bunch of other prerequisites
17:32:31 <hartsocks> https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:32:49 <hartsocks> @kirankv does this bug affect your work?
17:33:11 <Sabari> But that just makes all the clusters in VC visible under the same hypervisor
17:33:20 <hartsocks> @Sabari my preference is when we see things like that we should have "sanity check" code that tells an operator why things will fail.
17:33:24 <kirankv> I'm not sure where the changes are, but they should not affect it; if they do, I will rebase :)
17:33:29 <Sabari> I don't think it'll have any effect on this bug
17:33:54 <hartsocks> Okay… so...
17:34:06 <hartsocks> To get vMotion to make sense when using clusters...
17:34:09 <Sabari> Apart from the fact that it cannot participate in live-migration (correct me if I'm wrong?)
17:34:36 <hartsocks> you have to have...
17:34:41 <hartsocks> more than one cluster.
17:34:54 <hartsocks> shared datastores between all these clusters...
17:35:10 <hartsocks> and you have to have one vCenter calling all the shots...
17:35:33 <Sabari> This feature can only work in the following scenario: you have multiple nova services configured with different clusters in the same VC. Then we can support live migration between them
17:35:34 <hartsocks> *and* OpenStack *must* talk to the one vCenter for both clusters?
17:35:43 <hartsocks> Okay.
17:35:53 <hartsocks> So you can have multiple nova-compute nodes.
17:35:55 <Sabari> Shared datastore is a requirement only prior to 5.1
17:36:19 <Sabari> yes, you can have multiple nova-compute nodes, one per cluster, in the same VC
17:36:47 <hartsocks> I think we have to be prepared for people using this on versions before 5.1 … for the next few years.
17:36:56 <Sabari> Post 5.1 you need to have a vMotion Network that is stretched between the clusters.
17:37:05 <Sabari> to support the non-shared scenario
17:37:23 <hartsocks> At minimum we need to give a warning as to why things won't work when they don't.
17:37:41 <hartsocks> This seems like a big job.
17:37:48 <hartsocks> Is this really a bug?
17:38:20 <Sabari> Yes. A bunch of hardware-related compatibility checks are run by VC, and we can re-throw the results of the checks back to the user.
17:38:39 <Sabari> No, not a bug; more like a feature
17:38:56 <hartsocks> But… it was *supposed* to be there in Grizzly…
17:39:28 <hartsocks> (I wrote this bug based on a conversation I read on the mailing list)
17:39:53 <hartsocks> We then spotted that the live migrate code was orphaned.
17:40:09 <Sabari> Hmm, if we look at this from the ESXDriver perspective, maybe something is already working today
17:40:20 <Sabari> *need to check
17:40:34 <hartsocks> Okay. Let's follow up on this later.
17:40:50 <hartsocks> I'm glad this won't hurt @kirankv's work.
17:40:56 <kirankv> :)
17:40:57 <Sabari> yes
17:41:47 <hartsocks> #action Sabari will investigate bug 1192192 from ESXDriver side and report back
17:42:01 <Sabari> sure
17:42:12 <hartsocks> This could be a feature for VC or a bug under ESX.
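[The "sanity check" idea raised above could take a shape like the following. This is a rough sketch under stated assumptions: all names are hypothetical and this is not the driver's actual API.]

    # Rough sketch (hypothetical names): fail fast with an operator-readable
    # reason instead of letting vMotion die deep inside the driver.
    def live_migration_blocker(src_cluster, dst_cluster, vc_version,
                               has_shared_datastore):
        """Return None if vMotion can proceed, else a reason string."""
        if src_cluster.vcenter != dst_cluster.vcenter:
            return ("source and destination clusters are managed by "
                    "different vCenter servers; vMotion needs one vCenter")
        if vc_version < (5, 1) and not has_shared_datastore:
            return ("vCenter releases before 5.1 require a datastore "
                    "shared by both clusters")
        return None  # prerequisites look sane; VC runs its own checks too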
17:42:43 <hartsocks> next up ...
17:42:47 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1184807
17:43:30 <hartsocks> This is the "snapshot bug"
17:43:53 <hartsocks> anyone around to talk about this?
17:44:42 <hartsocks> @danwent any comment here?
17:45:11 <hartsocks> I'll go follow up on this off line.
17:45:15 <danwent> I just tested it and saw that it didn't work and filed a bug.  I haven't done any investigation around this.  I think Tracy was looking at it.
17:45:28 <hartsocks> @danwent okay.
17:46:07 <hartsocks> @danwent she seems to have asked you to clarify something in the bug. Could you look at that when you get a chance?
17:46:16 <danwent> k
17:46:31 <hartsocks> I'm working 2 bugs at the same time...
17:46:40 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1178369
17:46:42 <kirankv> looks like the last comment in the bug describes how the snapshot feature works per the documentation
17:46:49 <hartsocks> #undo
17:46:50 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0x3198ed0>
17:47:25 <hartsocks> … the code base shifted under the bug ...
17:47:45 <kirankv> I'm not sure of the history, but the doc says
17:47:49 <kirankv> Snapshots of running instances may be taken which create a new image based on the current disk state of a particular instance.
17:48:08 <kirankv> #link http://docs.openstack.org/trunk/openstack-compute/admin/content/images-and-instances.html
17:48:43 <hartsocks> thanks for posting the link.
17:49:59 <kirankv> I'm not sure if the OpenStack snapshot feature is the same as a vCenter snapshot; it needs further reading on my part
17:50:18 <hartsocks> Let's follow up on that one outside IRC then.
17:50:31 <kirankv> ok
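[On the distinction kirankv raises: an OpenStack snapshot goes through the compute API and produces a new Glance image from the instance's current disk state, whereas a vSphere snapshot is a delta disk kept on the VM itself. A minimal sketch, assuming a python-novaclient contemporary with this meeting; the credentials and instance name are placeholders.]

    from novaclient.v1_1 import client

    # Placeholder credentials and names, for illustration only.
    nova = client.Client('user', 'password', 'tenant',
                         'http://keystone:5000/v2.0/')
    server = nova.servers.find(name='my-instance')
    # This is the "snapshot" the admin guide describes: it uploads a new
    # image to Glance based on the instance's current disk state.
    image_id = nova.servers.create_image(server, 'my-instance-snap')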
17:50:52 <hartsocks> So I'm tracking 4 total bugs that are open + high/critical … the last two are on my plate.
17:51:14 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1180044
17:51:38 <hartsocks> … the patch for this I personally would reject if someone else were presenting it to me… but it *kinda* works.
17:52:04 <hartsocks> #link https://review.openstack.org/#/c/33576/
17:52:22 <hartsocks> I'm putting it up for review here to solicit comments from other VMwareAPI people.
17:52:50 <hartsocks> The change involves using a traversal spec to sort out inventory details. I will cycle back to writing tests for this shortly.
17:53:03 <hartsocks> It does rely on this patch...
17:53:13 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1178369
17:53:41 <hartsocks> #link https://review.openstack.org/#/c/30036/
17:53:58 <hartsocks> Which you will note has a *regal* history… approaching that of nobility...
17:54:17 <hartsocks> I have been working on this patch since May 21st.
17:54:26 <hartsocks> It is now on revision 17.
17:54:31 * hartsocks bows
17:54:50 <hartsocks> (so if you think you have problems… )
17:54:53 <hartsocks> :-)
17:55:16 <hartsocks> Okay...
17:55:27 <hartsocks> Are there any other bugs I should be actively tracking?
17:56:54 <Sabari> https://review.openstack.org/#/c/30822/ needs the +1's it received earlier; I had to rebase as the code base shifted
17:57:30 <Sabari> I have resolved the merge conflicts and uploaded a new patch
17:58:16 <hartsocks> I called out the try-catch block in your patch because it looks like the test might be non-deterministic
17:58:17 <kirankv> @Sabari, will check the patch
17:58:30 <hartsocks> I see you left comments on why this is okay.
17:58:32 <Sabari> @hartsocks, I replied to it in-line. Please take a look
17:58:35 <Sabari> yes
17:59:05 <hartsocks> If I tripped on this, other people might too. Please put some of that in a "#" comment near the try-catch block...
17:59:17 <Sabari> okay
17:59:29 <hartsocks> I will use that comment to raise an issue about language support.
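[The kind of inline comment being asked for might look like this; the code shape, helper name, and exception type are placeholders, with the real rationale living in the patch under review.]

    try:
        value = fetch_localized_fault_message(session)  # illustrative helper
    except AttributeError:
        # NOTE(sabari): record here why catching this is safe and why the
        # test that exercises it stays deterministic; a reviewer already
        # tripped over this once, so keep the rationale next to the code.
        value = None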
17:59:52 <hartsocks> Okay.
17:59:55 <hartsocks> We're out of time.
18:00:16 <hartsocks> See the #openstack-vmware channel if folks want to have impromptu meetings!
18:00:27 <hartsocks> Otherwise, see you next week.
18:00:32 <hartsocks> #endmeeting