17:01:43 <hartsocks> #startmeeting VMwareAPI
17:01:44 <openstack> Meeting started Wed Jun 12 17:01:43 2013 UTC.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:45 <danwent> or all hartsocks :)
17:01:46 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:48 <openstack> The meeting name has been set to 'vmwareapi'
17:01:53 <hartsocks> #topic salutations
17:02:08 <hartsocks> Greetings programmers! Who's up for talking VMwareAPI stuff and nova?
17:02:16 <danwent> i am :)
17:02:34 <hartsocks> anyone else around?
17:02:38 <hartsocks> HP in the house?
17:02:41 <hartsocks> Canonical?
17:02:58 <kirankv> Hi !
17:03:05 <tjones> I'm here
17:03:05 <danwent> man, it's like we are trying to get reviews for our nova code :)
17:03:12 <Divakar> Hi
17:03:16 <hartsocks> *lol*
17:03:47 <hartsocks> ivoks, are you around?
17:03:52 <danwent> looks like Sabari_ is here now
17:04:08 <Sabari_> Hi, this is Sabari here.
17:04:36 <hartsocks> Okay...
17:04:44 <hartsocks> #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda
17:04:51 <hartsocks> Here's our agenda today.
17:04:55 <hartsocks> Kind of thick.
17:05:11 <hartsocks> Since we started last week with bugs, I'll open this week with blueprints.
17:05:23 <hartsocks> #topic blueprint discussions
17:05:46 <hartsocks> I have a special note from the main nova team...
17:05:51 <hartsocks> They would like us to look at:
17:06:03 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/live-snapshots
17:06:17 <hartsocks> This is a new blueprint to add live-snapshots to nova
17:06:20 <med_> hartsocks, Canonical is lurking
17:06:34 <hartsocks> @med_ hey!
17:06:53 * hartsocks gives everyone a moment to scan the blueprint
17:07:25 <hartsocks> Do we have folks working on or near this blueprint? Can anyone speak to how feasible it is to get this done?
17:08:49 * hartsocks listens to the crickets a moment
17:09:31 <hartsocks> No comments on the "live-snapshots" blueprint?
17:10:03 <hartsocks> note: I need to talk to the main nova team about this tomorrow and say "yes we can" or "no we can't"
17:10:06 <Divakar> from a technical feasibility standpoint, it is :)
17:11:05 <hartsocks> What about person-power? Do we have someone who can take this on?
17:12:46 <hartsocks> #action hartsocks to follow up on "live-snapshots" to find an owner for the vmwareapi implementation
17:12:52 <hartsocks> Okay, moving on...
17:12:59 <danwent> hartsocks: would be good to check with our storage pm + eng folks on this.
17:13:08 <danwent> do you know alex?  if not, I can introduce you to him.
17:13:29 <hartsocks> @danwent we should definitely follow up then…
17:13:52 <hartsocks> Next blueprint:
17:14:00 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:14:09 <hartsocks> How is this coming along?
17:15:00 <kirankv> uploaded a new patch set; I dropped many of the other minor improvements to keep the patch set small for review
17:15:21 <hartsocks> Looks like we need to look at your patch-set 7 collectively.
17:15:23 <danwent> kirankv: great
17:15:35 <danwent> kirankv: are unit tests added yet?
17:15:51 <danwent> last i checked we were in WIP status waiting for those, but that was a while ago
17:16:09 <kirankv> yes, I will run the coverage report, check it, and add tests if we have missed any for the newly added code
17:16:53 <danwent> great
17:17:26 <hartsocks> I am looking to see coverage on any newly added code… in general, if you add a new method I want to see some testing for it.
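(To make that expectation concrete, below is a minimal sketch, not taken from any actual patch, of the kind of direct unit test being asked for when a new driver method lands. `pick_accessible_datastore` is a hypothetical helper standing in for the new code, not an existing nova function.)

```python
import unittest
from unittest import mock  # `mock` was a separate package in the Python 2 era; stdlib since 3.3


def pick_accessible_datastore(host):
    """Hypothetical helper standing in for a newly added driver method."""
    return next(ds for ds in host.datastore if ds.summary.accessible)


class PickAccessibleDatastoreTestCase(unittest.TestCase):
    """Illustrative only: every new method should get at least one direct test like this."""

    def test_skips_inaccessible_datastores(self):
        good = mock.Mock(summary=mock.Mock(accessible=True))
        bad = mock.Mock(summary=mock.Mock(accessible=False))
        host = mock.Mock(datastore=[bad, good])
        self.assertIs(pick_accessible_datastore(host), good)


if __name__ == "__main__":
    unittest.main()
```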
17:17:27 <hartsocks> I'll mention here that if you want me to track, follow up on, or review your changes … add me as a reviewer: http://i.imgur.com/XLINkt3.png
17:17:44 <hartsocks> This is chiefly how I will build weekly status reports.
17:17:57 <hartsocks> Next up:
17:18:12 <hartsocks> #link https://blueprints.launchpad.net/glance/+spec/hypervisor-templates-as-glance-images
17:18:43 <hartsocks> This is the VMware Templates as Glance images blueprint.
17:18:50 <kirankv> I'm waiting for the multi-cluster one to go through before submitting this one
17:18:57 <hartsocks> It is currently slated for H-2
17:19:01 <hartsocks> Hm...
17:19:17 <hartsocks> You can submit a patch and say Patch A depends on Patch B
17:19:22 <hartsocks> There is a button..
17:19:58 <hartsocks> I'll see about writing a tutorial on that for us.
17:20:01 <kirankv> I'm only concerned about rebasing two patch sets
17:20:23 <hartsocks> You can cherry-pick Patch A for the branch you use for Patch B
17:20:43 <hartsocks> It's a bit more than I want to go into in a meeting, but … there's a "cherry pick" button in Gerrit
17:20:48 <kirankv> even now I haven't rebased the current patch set, and on the openstack-dev mailing list I noticed that the preferred way is to rebase every patch set before submitting
17:21:07 <hartsocks> Sure.
17:21:17 <hartsocks> These are not mutually exclusive activities.
17:21:20 <hartsocks> Both are possible.
17:21:28 <hartsocks> Both are preferably done together.
17:21:34 <hartsocks> Our reviews are taking a long time.
17:21:42 <hartsocks> Let's try and do reviews regularly.
17:21:58 <hartsocks> I will start sending emails Monday and Friday to highlight patches that need review.
17:22:07 <kirankv> ok, let me see if I can submit a patch this week
17:22:15 <Divakar> reviews are going through from a +1 perspective
17:22:24 <danwent> hartsocks: yes, and we also need to get more nova core developers reviewing our patches
17:22:35 <Divakar> we need to get core reviewers attention
17:22:47 <hartsocks> Let's make sure that we can say:
17:23:01 <hartsocks> "If *only* more core developers gave their approval we would be ready"
17:23:08 <hartsocks> Right now, this is not always the case.
17:23:43 <danwent> hartsocks: agree.  we need to make life as easy as possible for the core reviewers by making sure the obvious comments have already been addressed before they spend cycles.
17:24:01 <danwent> med_: who does Canonical have as a nova core dev?
17:24:04 <Divakar> Arent all the nova reviews being monitored by nova core reviewers?
17:24:18 <hartsocks> @Divakar they are but...
17:24:35 <hartsocks> @Divakar it's like twitter… so much is happening it's easy to lose the thread
17:24:39 <danwent> Divakar: in theory, there are just a LOT of them, so sometimes areas of the codebase that fewer people are familiar with get less love
17:24:58 <danwent> also is dansmith around and listening?
17:25:18 <danwent> i think he is a nova core who has attended the vmwareapi discussions before
17:25:37 <danwent> hartsocks: am i thinking of the right person?
17:25:46 <hartsocks> @russellb are you there?
17:25:47 <Divakar> We need to see if we can talk to russelb on how to get core reviewers to look into vmware related bps and bug fixes
17:25:56 <hartsocks> @danwent I've talked with Russell the most.
17:26:06 <danwent> hartsocks: yes, but PTLs are very busy :)
17:26:30 <danwent> so definitely let's encourage him, but we also need to make sure others are paying attention to vmwareapi reviews as well.
17:26:42 <hartsocks> @danwent yeah, we should probably only bring him in sparingly.
17:27:13 <hartsocks> I think if we have 8 +1 votes and we are waiting for two +2 votes that looks pretty clear.
17:27:32 <danwent> yes, but there's a reason we have core devs :)
17:27:34 <hartsocks> I think it will also look like we are a concerted and coordinated effort.
17:27:56 <danwent> anyway, i think we all agree on the need for more core reviews.. i am continuing to encourage people, and I'd appreciate help from anyone else who can do the same
17:28:01 <Divakar> I was not asking for russellb's time.. as PTL he can direct his core reviewers' attention to these
17:28:55 <hartsocks> Okay… let's agree that our followups should be to...
17:29:00 <hartsocks> 1. get more core reviewers
17:29:17 <hartsocks> 2. be more vigilant on reviews/feedback cycles ourselves
17:29:18 <Divakar> sending a mail with a link to the review asking for a +2 might be another option when things are not working
17:29:41 <hartsocks> @Divakar that has not worked favorably for me…
17:30:37 <hartsocks> Let's table this topic since we can't do more.
17:30:55 <hartsocks> #action solicit participation in reviews by core-developers
17:31:14 <hartsocks> #action get regular +1 reviews to happen more frequently
17:31:19 <hartsocks> These are for me.
17:31:25 <hartsocks> Last blueprint...
17:31:34 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:31:46 <hartsocks> Has anyone followed up with the developer working on this?
17:32:47 <hartsocks> Anyone from Canonical follow up with Yaguang Tang?
17:33:08 <Daviey> hartsocks: hey
17:33:42 <hartsocks> @Daviey hey, how are we doing? Will we meet H-2 deadline for this?
17:34:15 <hartsocks> Remember: it can take *weeks* for the review process to work out.
17:34:36 <hartsocks> That makes our July 18th H-2 deadline kind of "tight" at that pace.
17:34:41 <Daviey> Ugh
17:35:07 <danwent> question on the blueprint: by "ephemeral disks", does Tang mean thin-provisioned, or something else?
17:35:07 <Daviey> I will follow up with him
17:35:25 <hartsocks> I solicited some help on "ephemeral disks"
17:35:38 <hartsocks> I have two different understandings...
17:35:45 <danwent> hartsocks: help a newbie out :)
17:35:53 <hartsocks> 1. it's a disk that "goes away" when the VM is deleted
17:36:04 <hartsocks> 2. It's a "RAM" disk
17:36:14 <danwent> ah, got it
17:36:27 <hartsocks> Someone was going to follow up with the BP author on that...
17:36:49 <hartsocks> Okay...
17:36:56 <hartsocks> #topic Bugs!
17:37:04 <hartsocks> Or "ants in your pants"
17:37:11 <hartsocks> Tell me, first up....
17:37:25 <hartsocks> Are there any newly identified blockers we have not previously discussed?
17:38:05 <Daviey> Things look good?
17:38:14 <danwent> https://bugs.launchpad.net/nova/+bugs?field.tag=vmware
17:38:35 <hartsocks> No *new* news is good news I suppose...
17:38:36 <danwent> hartsocks: we're still having issues when more than one datacenter exists, right?
17:38:51 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1180044
17:38:52 <uvirtbot> Launchpad bug 1180044 in nova "nova boot fails when vCenter has multiple managed hosts and no clear default host" [High,In progress]
17:38:53 <danwent> and I haven't seen anyone looking at https://bugs.launchpad.net/nova/+bug/1184807
17:38:54 <hartsocks> So I'll go...
17:38:55 <uvirtbot> Launchpad bug 1184807 in nova "Snapshot failure with VMware driver" [Low,New]
17:39:06 <hartsocks> This is my status update on that.
17:39:23 <hartsocks> Chiefly, the bug root cause is...
17:39:30 <hartsocks> once the driver picks a host...
17:39:43 <hartsocks> it ignores the inventory-tree semantics of vCenter.
17:39:52 <hartsocks> This is the root cause for *a lot* of other bugs.
17:40:10 <hartsocks> For example: pick HostA but accidentally pick a Datastore on HostB
17:40:18 <hartsocks> Or … in the case I first observed...
17:40:20 <Sabari_> Yes, I agree with hartsocks
17:40:44 <hartsocks> Pick HostA then you end up picking a Datastore on HostB which is in a totally different datacenter
17:41:04 <hartsocks> This also indirectly applies to Clustered hosts not getting used...
17:41:23 <hartsocks> and is related to "local storage" problems in clusters...
17:41:39 <hartsocks> (but only because it's the same basic problem of inventory trees being ignored.)
17:41:57 <hartsocks> I'm currently writing a series of Traversal Specs to solve these kinds of problems.
17:42:09 <Sabari_> I am working on the bug related to resource pools, and I figured out the root cause: the placement of a VM within a vCenter is still unclear to the driver
17:42:12 <hartsocks> I hope to post shortly.
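(The sketch below illustrates the invariant those Traversal Specs are meant to enforce; it is not the actual patch. It assumes pyVmomi and an already-resolved `vim.HostSystem` object named `host`: the driver should only consider datastores the chosen host can actually see.)

```python
def pick_datastore_for_host(host):
    """Return the accessible datastore mounted on *this* host with the most free space.

    Sketch only: assumes a pyVmomi vim.HostSystem; the fix being written uses
    PropertyCollector Traversal Specs rather than walking managed objects directly.
    """
    candidates = [ds for ds in host.datastore  # datastores visible to this host only
                  if ds.summary.accessible]
    if not candidates:
        raise RuntimeError("host %s has no accessible datastores" % host.name)
    return max(candidates, key=lambda ds: ds.summary.freeSpace)
```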
17:42:45 <hartsocks> @Sabari_ post your link
17:43:11 <Sabari_> https://bugs.launchpad.net/nova/+bug/1105032
17:43:12 <uvirtbot> Launchpad bug 1105032 in nova "VMwareAPI driver can only use the 0th resource pool" [Low,Confirmed]
17:43:40 <danwent> ok, let's make sure this gets listed as "critical"
17:43:53 <danwent> whichever bug we decide to use to track it.
17:44:02 <hartsocks> #action list 1105032 as critical
17:44:20 <hartsocks> #action list 1180044 as critical
17:44:23 <hartsocks> Okay.
17:44:41 <hartsocks> BTW….
17:44:46 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1183192
17:44:47 <uvirtbot> Launchpad bug 1183192 in nova "VMware VC Driver does not honor hw_vif_model from glance" [Critical,In progress]
17:44:47 <kirankv> @Sabari: how are we deciding which resource pool to pick?
17:44:53 <Sabari_> We can obviously allow the VM to be placed in a resource pool specified by the user, but we still need to figure out a way to make a default decision.
17:45:21 <Sabari_> Currently, we don't. VC places the VM in the root resource pool of the cluster
17:46:05 <hartsocks> This is one of those behaviors which might work out fine in production if you just know that this is how it works.
17:46:25 <kirankv> arent we moving scheduling logic into the driver by having to make such decisions?
17:46:28 <hartsocks> Of course, it completely destroys the concept of Resource Pools.
17:47:27 <hartsocks> @kirankv yes… we have several blueprints in flight right now that are essentially doing that.
17:47:45 <Sabari_> Yes, it depends on the admin and the way the VC is configured. If one chooses not to use resource pools, the existing setup works fine.
17:48:30 <kirankv> well, the blueprints leave the decision to the scheduler; the driver only exposes resource pools as compute nodes as well
17:48:31 <Divakar> in a way, managing a resource pool as a compute node resolves this
17:48:43 <hartsocks> We have two time-lines to think about.
17:48:54 <hartsocks> 1. near-term fixes
17:49:03 <hartsocks> 2. long-term design
17:49:32 <med_> danwent, sorry. That would be yaguang as a nova core
17:49:41 <Divakar> I don't think we need to worry about the default resource pool in a cluster
17:50:12 <Divakar> let the cluster decide where it wants to place the vm
17:50:14 <danwent> med_: ah, thanks, didn't realize he was a core.  great to hear, now we just need more review cycles from him :)
17:50:24 <med_> :)
17:50:51 <Divakar> in case the option of placing it in a resource pool is required, let's address that by representing the resource pool as a compute node
17:50:55 <Sabari_> I will take a look at the blueprint and the patch sets
17:51:40 <kirankv> @Sabari: I would like to see your patch set too since it addresses the bug
17:51:40 <hartsocks> Is this about ResourcePools or ResourcePools in clusters?
17:52:03 <Sabari_> @kirankv Sure, I am working on it.
17:52:06 <Divakar> if we start putting the scheduler logic inside the driver we will break other logical constructs
17:52:53 <Sabari_> @hartsocks I was talking about resource pools within the cluster.
17:53:32 <hartsocks> @Sabari_ then I have to agree with the assessment about not bothering with a fix. However, stand-alone hosts can have resource pools.
17:53:48 <hartsocks> Is this a valid use case:
17:53:57 <hartsocks> An administrator takes a stand-alone host...
17:54:08 <hartsocks> … creates a Resource Pool "OpenStack"
17:54:23 <hartsocks> and configures the Nova driver to only use the "OpenStack" pool
17:54:25 <hartsocks> ?
17:54:32 <kirankv> @hartsocks: agree that fix is required for stand-alone hosts
17:54:38 <hartsocks> Should we allow that?
17:54:43 <Sabari_> Yes, that's valid too, but that cannot be done at this moment
17:55:09 <hartsocks> @Sabari_ so that's a *slightly* different problem. Is that worth your time?
17:55:39 <kirankv> but I'm not sure how ESX is mostly used: 1. stand-alone, or 2. via vCenter? I'm thinking it's #2, via vCenter
17:55:43 <hartsocks> I think I wholly agree that we don't need to change the Cluster logic though...
17:55:45 <Divakar> a solution could be similar to the regex approach we followed for datastore selection
17:56:23 <hartsocks> @kirankv good point.
17:57:20 <hartsocks> You could have an ESXi driver change and a slightly different change in the VCDriver too
17:57:29 <hartsocks> I'll leave that up for the implementer.
17:57:31 <Sabari_> we still support the ESXDriver so that a nova-compute service can talk to a standalone host, right? In that case, shouldn't we support resource pools?
17:58:06 <hartsocks> @Sabari_ I think you have a valid point.
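(A hedged sketch of that stand-alone-host use case, assuming pyVmomi; `pool_name` stands in for a hypothetical config option, not an existing nova flag: resolve a named resource pool under the host's compute resource, falling back to the root pool when nothing is configured, which matches today's default behavior.)

```python
def find_resource_pool(host, pool_name=None):
    """Return the resource pool named `pool_name` under a stand-alone host,
    or the root pool when no name is configured (today's default behavior).

    Sketch only: assumes a pyVmomi vim.HostSystem; `pool_name` is a
    hypothetical setting, not an existing nova configuration option.
    """
    root = host.parent.resourcePool  # root pool of the host's ComputeResource
    if not pool_name:
        return root
    stack = list(root.resourcePool)  # direct child pools
    while stack:
        pool = stack.pop()
        if pool.name == pool_name:
            return pool
        stack.extend(pool.resourcePool)  # descend into nested pools
    raise ValueError("resource pool %r not found under host %s"
                     % (pool_name, host.name))
```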
17:58:52 <hartsocks> Anything else on this topic before I post some links needing code reviews (by core devs)
17:58:54 <hartsocks> ?
17:59:21 <Divakar> In cloud semantics, do we really want to subdivide a host further into resource pools? I agree we will need this in a cluster though
17:59:50 <Sabari_> I think I need to look at the blueprint and the related patch on how it addresses the issue in a cluster. In the meanwhile, I don't have anything more
18:00:12 <hartsocks> @Divakar I'm allowing for a specific use case where we have an admin "playing" with a small OpenStack install. I think we will see that more and more.
18:00:30 <hartsocks> We're out of time...
18:00:40 <hartsocks> I'll post some reviews...
18:00:50 <hartsocks> #topic in need of reviews
18:00:52 <hartsocks> •	https://review.openstack.org/#/c/29396/
18:00:52 <hartsocks> •	https://review.openstack.org/#/c/29552/
18:00:52 <hartsocks> •	https://review.openstack.org/#/c/30036/
18:00:52 <hartsocks> •	https://review.openstack.org/#/c/30822/
18:01:08 <Daviey> hartsocks: just those 4?
18:01:12 <Sabari_> Thanks Shawn
18:01:15 <hartsocks> These are some patches that looked like they were ready to get some +2
18:01:18 <Daviey> Is there a better way we can track in-flight reviews?
18:01:25 <hartsocks> Also...
18:01:40 <hartsocks> #link http://imgur.com/XLINkt3
18:01:53 <hartsocks> If you add me to your review it will end up in this list.
18:02:25 <hartsocks> If I look (just before the meeting) and see a "bunch" of +1 votes I'll consider it ready to get some "+2" love.
18:02:56 <hartsocks> Talk to you next week!
18:03:03 <hartsocks> #endmeeting