16:59:39 <hartsocks> #startmeeting VMwareAPI
16:59:41 <openstack> Meeting started Wed Aug  7 16:59:39 2013 UTC and is due to finish in 60 minutes.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:59:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:59:44 <openstack> The meeting name has been set to 'vmwareapi'
16:59:49 <hartsocks> Hello stackers!
16:59:57 <hartsocks> Who's around to talk vmwareapi stuff and things?
16:59:58 <tjones> Hi Shawn
17:00:22 <danwent> hi
17:01:34 <hartsocks> anyone else around?
17:01:40 <hartsocks> is Kiran around?
17:01:42 <danwent> garyk was just in the other channel, right?
17:02:16 <tjones> I saw garyk join
17:02:19 <garyk> i'm here - just had to make sure that i was not locked in the office :)
17:02:35 <hartsocks> okay :-)
17:02:38 <tjones> Lol
17:03:28 <hartsocks> well… VMware people in the house!
17:03:43 <danwent> from earlier, i think kiran is occupied, though eustace may be available.
17:03:55 <Eustace> I'm here
17:04:01 <hartsocks> hey!
17:04:11 <Eustace> hi All...
17:04:19 <tjones> Hi
17:04:53 <hartsocks> Let's cover bugs quickly and focus most of the time on blueprints this week.
17:05:04 <hartsocks> #topic bugs
17:05:24 <hartsocks> Are there any blocker do-or-die bugs we aren't tracking right now?
17:05:54 <hartsocks> I've got 3 new bugs waiting on triage...
17:05:56 <danwent> hartsocks: do we have someone doing a weekly triage of all bugs tagged with 'vmware'?
17:05:58 <danwent> :)
17:06:01 <danwent> good timing
17:06:16 <tjones> Should we bump the prio of do or die bugs for Havana?
17:06:18 <hartsocks> danwent: I'm on the Nova Bug Team and it's my official duty.
17:06:23 <garyk> i'll also try and take a look when i have a few cycles
17:06:36 <hartsocks> Any help is appreciated.
17:06:49 <danwent> hartsocks: k, great
17:06:55 <hartsocks> Let's move the target away from Havana-3 if there's a bug that's lower priority.
17:07:12 <hartsocks> The bug priority is set by a Nova policy.
17:07:23 <hartsocks> … let me see ...
17:07:32 <danwent> i haven't seen any action on this 'high' bug https://bugs.launchpad.net/nova/+bug/1187853
17:07:34 <uvirtbot> Launchpad bug 1187853 in nova "VMWAREAPI: Problem with starting Windows instances on ESXi 5.1" [High,Confirmed]
17:07:44 <hartsocks> The official guide FYI: https://wiki.openstack.org/wiki/BugTriage
17:08:09 <danwent> does that bug actually require a code change, or just different metadata with the image?
17:08:22 <danwent> (i.e., is it a code change or a doc change?)
17:08:28 <tjones> I think both. I can take it if u want
17:08:46 <hartsocks> It has no milestone on it.
17:09:11 <danwent> booting windows vms seems valuable :)
17:09:15 <hartsocks> If you want something "to be done" the "next" milestone communicates that.
17:09:21 <garyk> danwent: the bug will require a code change as it is hard coded
17:09:26 <hartsocks> danwent: that's why I put it as high.
17:09:34 <danwent> cool
17:09:53 <garyk> i think that it is a good bug for low hanging fruit - that is if someone wants to dive in and learn the flow
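[A minimal sketch of the kind of fix garyk describes, assuming the driver currently hard-codes the guest OS identifier. The image property name 'vmware_ostype' and the 'otherGuest' fallback are illustrative assumptions, not confirmed details of the eventual patch:

    # Read the guest OS type from Glance image metadata instead of
    # hard-coding it; fall back to a generic guest id when unset.
    # Property name and default are assumptions for illustration.
    def _get_guest_os_type(image_meta):
        props = image_meta.get('properties', {})
        # e.g. 'windows7Server64Guest' for a Windows Server image
        return props.get('vmware_ostype', 'otherGuest')
]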
17:09:55 <hartsocks> danwent: anyone who has bandwidth should see it and pick it up.
17:10:14 <tjones> Lets use the target to communicate to the team its needed or not for Havana.  I'll take the bug
17:10:29 <hartsocks> thanks.
17:11:17 <hartsocks> Anyone else see something that needs addressing?
17:12:21 <tjones> Dan (or someone) can u make sure the must haves are marked Havana-3 and the others are not?
17:13:02 <danwent> tjones: i will take a pass, thanks
17:13:13 <tjones> Thx
17:13:23 <danwent> tjones: you're seeing milestone, right?
17:13:28 <tjones> Yes
17:13:36 <danwent> and someone is already switching the windows bug
17:14:05 <hartsocks> Here's a quick link to filter for open bugs: http://goo.gl/xW4XMu -> just passed through a url shortener 'cuz it's a monster query.
17:14:34 <hartsocks> Using that link you can see there are only 14 open bugs to keep track of and prioritize.
17:14:42 <danwent> hartsocks: https://review.openstack.org/#/c/35374/
17:15:02 <danwent> are we still planning on trying to merge this patch, or just relying on the larger blueprint from HP to handle this
17:15:40 <hartsocks> AFAIK we are still trying to merge this…
17:15:59 <hartsocks> No activity in a while.
17:16:26 <hartsocks> Sabari is working flat out on another project, so I don't think we can count on getting this soon.
17:16:31 <danwent> ok, i wasn't sure if it was "compatible" with the approach being taken in the HP patch, as they also allow you to add resource pools as hosts, right?
17:16:36 <garyk> i have a few minor nits with that patch. otherwise it looks good
17:17:06 <hartsocks> If it comes to it I can run a merge on the patches to see what comes out.
17:17:37 <hartsocks> I try to put these things through their paces with an integration environment before giving a +1
17:17:43 <danwent> https://bugs.launchpad.net/nova/+bug/1184807 needs h-3 milestone
17:17:45 <uvirtbot> Launchpad bug 1184807 in nova "Snapshot failure with VMware driver" [High,In progress]
17:18:53 <danwent> what are our plans for https://bugs.launchpad.net/nova/+bug/1192192 ?
17:18:54 <uvirtbot> Launchpad bug 1192192 in nova "Nova initiated Live Migration regression for vmware VCDriver" [Medium,Confirmed]
17:19:05 <danwent> it seems like we should at least give the user a clean error message
17:19:17 <danwent> as we know this should not work with the VC-driver, correct?
17:19:37 <danwent> (or does that change once we have resource pools as hosts, since they could be in the same cluster?)
17:19:40 <hartsocks> I'm bumping that one out past Havana.
17:20:09 <hartsocks> The problem is subtle.
17:20:26 <hartsocks> It's only a problem when you use the clusters. If you use one host name per ESXi host then it works.
17:21:02 <hartsocks> The issue is that nova can't send a command that could work, since you need a source/destination pair of hosts that are vMotion compatible, and that only happens within a cluster.
17:21:27 <danwent> yeah, i get why it doesn't work.
17:21:35 <danwent> i'm just wondering if we can have a cleaner failure
17:21:43 <danwent> for example, seems like there is a method check_can_live_migrate_destination
17:21:45 <hartsocks> So it's more interesting in that it potentially highlights an architectural impedance mismatch.
17:22:05 <danwent> could we implement that method, and just always return false, so the user gets a reasonable error message, rather than a traceback?
17:22:19 <hartsocks> That's reasonable.
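[A minimal sketch of danwent's suggestion, assuming nova's Havana-era ComputeDriver pre-check hook; the choice of exception class is an assumption:

    from nova import exception

    # Would live on the VMwareVCDriver class: fail the pre-check with a
    # readable message instead of letting the migration attempt traceback.
    def check_can_live_migrate_destination(self, ctxt, instance_ref,
                                           src_compute_info, dst_compute_info,
                                           block_migration=False,
                                           disk_over_commit=False):
        raise exception.MigrationPreCheckError(
            reason="Nova-initiated live migration is not supported by the "
                   "VC driver; migrate through vCenter instead.")
]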
17:22:30 <danwent> We can document that live migrations should be via vCenter for now
17:22:46 <hartsocks> Ah. More documentation to write.
17:22:52 <danwent> you know me :)
17:23:13 <hartsocks> IIRC other supported hypervisors will try a migration any which way you ask for and then fail.
17:23:20 <tjones> I'm an expert now. My doc change got merged.
17:23:26 <danwent> hartsocks: maybe create a public wiki page of things we need to document, so others can pitch in as well
17:23:29 <hartsocks> tjones: awesome!
17:23:58 <hartsocks> good idea.
17:24:16 <hartsocks> #action document the needed documents in a document to speed documentation
17:24:32 <danwent> doc^4
17:25:06 <hartsocks> But, we do need to start collecting these. There's starting to be too many for me to keep in my head.
17:25:23 <hartsocks> Okay...
17:25:48 <hartsocks> so I usually do triage after this meeting based on feedback I get here.
17:26:06 <hartsocks> So do we have anything else we need to highlight or triage ASAP?
17:27:05 <danwent> hartsocks: mulit-datacenter
17:27:10 <danwent> where did we leave that?
17:27:20 <hartsocks> Let's move to blueprints then.
17:27:24 <hartsocks> #topic blueprints
17:27:30 <danwent> did we say that we were going to try and get the "simple" patch fixed and backported to grizzly?
17:27:37 <hartsocks> So we're in the last push for Havana
17:27:53 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:28:22 <hartsocks> I think there's general consensus that this is *the* most important patch to get into Havana!
17:28:36 <hartsocks> This is Kiran's patch. And he's been on vacation a while now.
17:28:43 <hartsocks> It looks nearly ready to go.
17:29:22 <hartsocks> But there are a few nits that I would want to have cleaned up before a core-reviewer saw it and gave it a −2 … which would waste their time and ours.
17:29:51 <hartsocks> garyk: you're our back port guru… how bad do you think this will be to back port?
17:30:20 <garyk> the aforementioned bp?
17:30:25 <hartsocks> yeah
17:30:34 <hartsocks> multi-something-something.
17:30:41 <hartsocks> multi-clusters.
17:30:46 <danwent> can't backport features to public stable branches
17:30:50 <danwent> only fixes
17:30:58 <danwent> or am i misunderstanding something?
17:31:02 <garyk> yup, danwent is correct.
17:31:09 <hartsocks> I'm sorry.
17:31:17 <danwent> we'll probably want to backport it on our "own" stable branch, for customers to use, and post it publicly though
17:31:18 <hartsocks> I just scrolled back and re-read what I wrote.
17:31:34 <hartsocks> #undo
17:31:35 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0x2553090>
17:31:46 <hartsocks> #undo
17:31:47 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x2553490>
17:32:00 <hartsocks> I'm obviously itching to get to blueprints.
17:32:26 <tjones1> LOL
17:32:53 <danwent> hartsocks: its fine, we can talk about the datacenters thing later if you prefer
17:33:11 <hartsocks> I'm trying to find the patch.
17:33:20 <hartsocks> I have a patch that's in abandoned state.
17:33:47 <hartsocks> I found out my account was screwed up and it looks like the patch doesn't show as under my id even though it has my name on it. Weird.
17:34:11 <hartsocks> Short answer: I think I can fix multi-datacenter … and make it easy to backport.
17:35:17 <hartsocks> #topic blueprints
17:35:29 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:35:32 <hartsocks> blah blah blah
17:35:45 <hartsocks> last update 7 days old.
17:36:21 <hartsocks> Next most important...
17:36:27 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
17:36:43 <hartsocks> that one is you garyk
17:36:59 <garyk> yeah - the first draft of the code is ready for review
17:37:08 <hartsocks> awesome
17:37:10 <garyk> review can be found at https://review.openstack.org/#/c/40245/
17:37:24 <garyk> There is one piece of development still lacking - boot from volume.
17:37:30 <garyk> This will be addressed tomorrow.
17:37:42 <hartsocks> nice
17:37:54 <tjones1> great work gary!
17:38:09 <garyk> gracias. can you guys please take a look and provide comments
17:38:52 <hartsocks> okay...
17:38:54 <garyk> the patch is based on https://review.openstack.org/#/c/40105/6 - there are exceptions when volume attachment/detachment takes place
17:39:41 <hartsocks> If it's a requirement then I'll bump it to H3 and call it high-priority
17:40:35 <hartsocks> Looks like you already cleaned up my nit-picks from earlier.
17:40:38 <hartsocks> outstanding!
17:40:44 <garyk> yeah
17:41:06 <hartsocks> Okay… so next on my list is...
17:41:09 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:41:19 <hartsocks> Which surprised us as needed by a customer.
17:41:31 <hartsocks> that sentence was awkward.
17:41:50 <hartsocks> We were surprised when a customer asked for this behavior specifically.
17:41:58 <hartsocks> How's that?
17:42:11 <garyk> the customer is always right
17:42:26 <hartsocks> So this actually has a really solid use-case I didn't know about.
17:42:45 <hartsocks> It is used in conjunction with the config-drive feature.
17:43:28 <hartsocks> I'm trying to classify the failure of config-drive with vSphere as a bug… since the feature is there, it's just dead. But this could be forced to be labeled as a blueprint.
17:43:44 <hartsocks> In which case we'll be tracking one more blueprint.
17:44:12 <hartsocks> #action query Tang on progress
17:44:30 <hartsocks> Last (and least) up is: https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
17:44:31 <danwent> hartsocks: small, well-understood features as bugs is likely reasonable, but i would expect that someone might shoot us down if that didn't merge by the havana-3 deadline.
17:45:55 <hartsocks> This is my little feature to expose use_linked_clone to the CLI
17:46:29 <hartsocks> It's there to support the aforementioned use case where the ephemeral disk, linked_clone, and config-drive work in concert to quickly spin up a VM
17:47:08 <tjones1> i love how you guys are using "aforementioned" in a sentence so much today ;-)
17:47:20 <hartsocks> *lol*
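[A minimal sketch of how a per-image override of the global use_linked_clone setting might look; only the use_linked_clone option itself comes from the blueprint discussion, and the image property name 'vmware_linked_clone' is a hypothetical placeholder:

    from oslo.config import cfg

    CONF = cfg.CONF  # use_linked_clone is registered by the vmwareapi driver

    def _use_linked_clone(image_meta):
        # A per-image property wins; otherwise fall back to the global flag.
        override = image_meta.get('properties', {}).get('vmware_linked_clone')
        if override is None:
            return CONF.use_linked_clone
        return str(override).lower() == 'true'
]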
17:48:16 <hartsocks> So the multi-cluster patch (see multi-cluster, multi-datastore, multi-datacenter … so much multi it confuses me)
17:48:32 <hartsocks> … supports a basic deploy.
17:48:46 <hartsocks> The 2 cinder blueprints support volume operations!
17:49:46 <hartsocks> The ephemeral blueprint + linked_clone blueprint + config-drive support this common do-or-die use case:
17:49:46 <tjones1> Here's where i have the BPs for Havana listed (fyi) https://wiki.eng.vmware.com/OpenStack/2013#Critical_BP_for_Havana_3
17:50:32 <hartsocks> 1. spin up instance with OS drive + ephemeral drive
17:50:38 <hartsocks> 2. first boot with config drive
17:51:03 <hartsocks> 3. pull in dependencies using apt-get, yum, puppet whatever
17:51:31 <hartsocks> 4. config network (er… i guess 3 and 4 swap places)
17:51:45 <hartsocks> 5. pull in custom application/data
17:51:57 <hartsocks> … on the ephemeral drive
17:52:21 <hartsocks> So it turns out that all flavors except the smallest usually include 2 drives for the instance.
17:52:37 <hartsocks> The first for the OS… the second for application specific stuff.
17:52:42 <hartsocks> And now we all know!
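[As a usage example of the workflow above, a minimal python-novaclient sketch using the Havana-era v1_1 API; credentials, image, flavor, and script names are placeholders:

    from novaclient.v1_1 import client

    nova = client.Client('user', 'password', 'tenant',
                         'http://keystone:5000/v2.0')
    server = nova.servers.create(
        name='app-vm',
        image=nova.images.find(name='ubuntu-12.04'),   # OS drive
        flavor=nova.flavors.find(name='m1.medium'),    # includes ephemeral disk
        config_drive=True,                             # first-boot config source
        userdata=open('setup.sh'),                     # apt/yum/puppet bootstrap
    )
]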
17:53:12 <hartsocks> And that's why these are our most important fixes/bp to get in.
17:53:19 <hartsocks> #topic open discussion
17:53:24 <hartsocks> Any thoughts?
17:53:26 <garyk> guys, i am sorry but i need to leave now. i'll be online a little later
17:53:44 <hartsocks> garyk: thanks for hanging around :-)
17:54:48 <hartsocks> Okay.
17:54:54 <hartsocks> If that's it we're done early!
17:55:09 <tjones1> gr8
17:55:24 <hartsocks> I'll be pinging all the key players in email so we all stay in step as we drive toward Havana-3 deadline.
17:55:30 <hartsocks> Happy stacking!
17:55:32 <hartsocks> #end
17:55:34 <danwent> hartsocks: thanks, that is great
17:55:48 <hartsocks> #endmeeting