17:00:51 <hartsocks> #startmeeting VMwareAPI
17:00:52 <openstack> Meeting started Wed Sep  4 17:00:51 2013 UTC and is due to finish in 60 minutes.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:53 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:55 <openstack> The meeting name has been set to 'vmwareapi'
17:01:03 <hartsocks> greetings stackers!
17:01:06 <hartsocks> Who's online?
17:01:50 <hartsocks> \o
17:01:54 <tjones> yo
17:02:39 <hartsocks> Today's the last day to work before feature freeze. I'm sure folks are busy.
17:02:52 <vuil> hi
17:03:07 <danwent> vuil: perfect, was just planning on pinging you to ask if you were joining :)
17:03:10 <danwent> vui
17:03:48 <hartsocks> #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI
17:04:00 <hartsocks> This is normally the part of the meeting where I ask about bugs.
17:04:09 <hartsocks> But I think everyone is pretty focused on reviews.
17:04:14 <hartsocks> Why don't we start there?
17:04:21 <danwent> seems smart
17:04:28 <hartsocks> #topic reviews
17:04:52 <hartsocks> I've been working off of the priority order list we built a while ago...
17:04:59 <hartsocks> #link https://review.openstack.org/#/c/30282/
17:05:06 <hartsocks> This is the multiple-cluster review...
17:05:19 <hartsocks> It was rev-ed less than an hour ago.
17:05:45 <hartsocks> I'm waiting on the test results from CI before I re-review.
17:06:37 <danwent> yeah, garyk added some additional tests
17:06:44 <danwent> based on russell's request
17:06:54 <danwent> the good news is that we have the attention of two core reviewers
17:07:01 <hartsocks> *go team*
17:07:24 <smurugesan> Hey all, Sabari here
17:07:25 <tjones> yeah!  and russell said it looks good other than tests
17:07:49 <hartsocks> smurugesan: hey glad you joined.
17:08:12 <hartsocks> Hopefully the last round of CI testing comes back clean and we can just have this merge.
17:08:17 <smurugesan> I couldn't make it for the last few weeks. Good to be here :D
17:08:18 <DarkSinclair> Greetings Dan, Shawn.  When appropriate I'd like to make a request if possible.
17:08:48 <hartsocks> DarkSinclair: we'll have plenty of open-discussion time at the end this week.
17:08:55 <hartsocks> Next on my list
17:08:59 <hartsocks> #link https://review.openstack.org/#/c/40105/
17:09:02 <hartsocks> Merged!
17:09:15 <hartsocks> #link https://review.openstack.org/#/c/40245/
17:09:35 <dims> Hi all
17:09:36 <hartsocks> waiting to merge it looks like… has 2 +2's and an approve...
17:09:43 <hartsocks> dims: hey.
17:10:13 <hartsocks> #link https://review.openstack.org/#/c/41387/
17:10:28 <hartsocks> Looks like it's waiting on an approve so it can merge.
17:10:49 <hartsocks> #link https://review.openstack.org/#/c/41600/ Merged!
17:11:02 <hartsocks> #link https://review.openstack.org/#/c/34903/
17:11:09 <hartsocks> Deploy vCenter templates
17:11:24 <danwent> vuil: does it make sense for you to re-rev the template patch yourself?
17:11:27 <hartsocks> vui: looks like you have a −1 here?
17:12:37 <hartsocks> vuil: can you address this issue yourself? I think Kiran is travelling.
17:12:59 <vuil> sorry offscreen
17:13:14 <vuil> I am going to rerev the patch.
17:13:32 <hartsocks> vuil: thanks.
17:13:49 <hartsocks> vuil: if you need help figuring out the process, I'll be around late tonight.
17:13:59 <hartsocks> #link https://review.openstack.org/#/c/37659/
17:14:12 <vuil> need to tease out what else needs to be done to the VM cloned from the template
17:14:28 <vuil> sure thanks Shawn.
17:14:39 <hartsocks> "Enhance VMware instance disk usage" is recently rev-ed.
17:14:51 <hartsocks> It's on my list to re-review next.
17:15:13 <hartsocks> #link https://review.openstack.org/#/c/37819/
17:15:34 <hartsocks> sitting idle (that's the image clone stuff); not as important as the other features.
17:15:51 <hartsocks> The three bugs I was tracking are...
17:16:07 <hartsocks> merged except for: https://review.openstack.org/#/c/33100/
17:16:17 <hartsocks> This is Sabari's ....
17:16:26 <hartsocks> "Fixes host stats for VMWareVCDriver"
17:16:29 <danwent> hartsocks: we should also talk about the second half of the cinder stuff
17:16:31 <smurugesan> yes, I think it's important to show the correct stats to the users
17:16:31 <hartsocks> This is a bug so it can go later.
17:16:50 <danwent> hartsocks: unless i missed that discussion already
17:16:58 <hartsocks> I mentioned them…
17:17:03 * hartsocks pulls up links
17:17:11 <danwent> including: https://review.openstack.org/#/c/43465/
17:17:12 <hartsocks> nova-side cinder support
17:17:26 <danwent> not the nova side, there is an additional patch that is in cinder that has not really been on our radar
17:17:46 <danwent> it is owned by subbu
17:18:11 <danwent> but he is based in India, I believe, so if we get feedback from the core cinder team, we could use someone on our team to re-rev that
17:18:17 <hartsocks> ah.
17:18:40 <hartsocks> Adding it to my list.
17:19:16 <hartsocks> #link https://blueprints.launchpad.net/cinder/+spec/vmware-vmdk-cinder-driver
17:19:32 <hartsocks> Looks like ...
17:19:57 <hartsocks> first review merged, this second one didn't merge yet...
17:20:03 <danwent> hartsocks: exactly
17:20:19 <danwent> I just pinged jgriffith and he said it was one of the items the team discussed at their meeting right before ours
17:21:01 <hartsocks> danwent: normally, I sit in on jgriffith's meeting … I'll spend some time with this review myself then.
17:21:15 <hartsocks> Any other reviews we need to pay attention to?
17:21:38 <danwent> hartsocks: that is the only one I noticed that wasn't on our list already
17:22:08 <hartsocks> #action vmwareapi team members review https://review.openstack.org/#/c/43465/ to support cinder team
17:23:24 <hartsocks> Hopefully, we don't have to apply for an exception on any blueprint patches. I will send a note to the mailing list after the 5th if there's a significant miss.
17:23:41 <hartsocks> anything else on reviews folks want to bring up?
17:24:34 <hartsocks> Only a few more hours before the 5th. Thanks for staying on top of things!
17:24:41 <hartsocks> #topic bugs
17:24:44 <smurugesan> I would like some +1s on the host stats patch that you mentioned last
17:24:53 <hartsocks> #undo
17:24:54 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x321a750>
17:25:14 <hartsocks> #action reviews for https://review.openstack.org/#/c/33100/
17:25:25 <smurugesan> Also https://review.openstack.org/#/c/30628/ is a good change that needs to be in
17:26:10 <hartsocks> okay...
17:26:41 <hartsocks> Fortunately, bugs aren't under as much pressure as blueprints (bug fixes can be backported).
17:27:07 <smurugesan> Good to know that Shawn!
17:27:26 <tjones> and we all love to backport ;-D
17:27:32 <hartsocks> #action give timely follow up on https://review.openstack.org/#/c/33100/
17:27:39 <hartsocks> #undo
17:27:40 <openstack> Removing item from minutes: <ircmeeting.items.Action object at 0x332f510>
17:27:52 <hartsocks> #action give timely follow up on https://review.openstack.org/#/c/30628/
17:28:02 <hartsocks> Okay...
17:28:17 <hartsocks> anything else we need to be aware of in front of the feature freeze?
17:28:47 <hartsocks> #link https://wiki.openstack.org/wiki/FeatureFreeze
17:29:17 <hartsocks> There is an exception procedure for feature freeze. I hope to not have to use it.
17:29:45 <hartsocks> #topic bugs
17:30:12 <hartsocks> Any bugs we need to be aware of? (that we aren't already?)
17:30:39 <hartsocks> going once...
17:30:55 <hartsocks> going twice...
17:30:57 <hartsocks> Okay...
17:31:26 <hartsocks> I think we're all properly focused on getting those critical blueprints through.
17:31:36 <hartsocks> #topic open discussion
17:31:47 <hartsocks> The floor is open… what's going on?
17:32:13 <hartsocks> DarkSinclair: ping
17:32:44 <DarkSinclair> I've had an internal request after an Audit to explore the opportunity of encrypting the VMware vCenter user's password in the nova-compute.conf file.  Is this within the realm of possibility?
17:32:57 <hartsocks> heh.
17:33:02 <hartsocks> Possible? yes.
17:33:13 <hartsocks> At the moment you'll have to either…
17:33:26 <hartsocks> use a symmetric cipher
17:33:50 <hartsocks> … that is, encrypt the password in the file, then decrypt it in memory right before Python SUDS uses it.
17:33:56 <hartsocks> or...
17:34:05 <hartsocks> we'll have to add SSO support to the driver.
17:34:13 <hartsocks> That's in a blueprint for Icehouse...
17:34:22 * hartsocks digs around for blueprint link.
17:34:37 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-sso-support
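(For context, a minimal sketch of the symmetric-cipher option hartsocks describes above, assuming the Fernet recipe from the Python cryptography library and a key file kept outside nova-compute.conf. The helper name, key path, and config handling are illustrative assumptions, not part of the actual driver; the idea is simply that only the encrypted token sits in the config file and the plaintext exists only in memory right before the SUDS client needs it.)

    # Illustrative sketch only: decrypt an encrypted vCenter password just
    # before it is handed to the vSphere session code. The key file path
    # and helper name are assumptions, not real nova options.
    from cryptography.fernet import Fernet

    def load_vcenter_password(encrypted_token, key_path="/etc/nova/vcenter.key"):
        # The Fernet token is what would be stored in nova-compute.conf;
        # the symmetric key lives in a separate root-only file.
        with open(key_path, "rb") as key_file:
            key = key_file.read().strip()
        return Fernet(key).decrypt(encrypted_token.encode()).decode()

    # Usage (hypothetical): password = load_vcenter_password(encrypted_value_from_conf)
    # where the config value holds the Fernet token rather than the plaintext password.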
17:34:45 <DarkSinclair> I was hoping for the latter ;)  The next Audit request is around accountability and ownership; SSO support would alleviate both concerns.
17:34:52 <hartsocks> The vCenter SSO service starting at vSphere 5.1 ...
17:34:59 <hartsocks> allows for holder-of-key tokens.
17:35:11 <hartsocks> Yep.
17:35:46 <danwent> DarkSinclair: yeah, this is definitely something we want to work on
17:35:46 <hartsocks> I decided to not try and get SSO into Havana because we were running tight as it was.
17:35:50 <DarkSinclair> Is it possible to proxy authentication of the logged-in OpenStack user for vCenter authentication?
17:36:17 <hartsocks> hm...
17:36:18 <DarkSinclair> it's understood there's no chance for Havana.
17:36:27 <hartsocks> There's a few moving parts there...
17:36:29 <danwent> DarkSinclair: we'll probably provide a way to backport the change
17:36:41 <DarkSinclair> danwent: even better.
17:37:00 <danwent> but it's unlikely that it will be backported to the official branch, unless we can spin it as a security issue
17:37:49 <hartsocks> We should probably have a conversation with some vSphere SSO experts on what the best thing to do is.
17:37:53 <DarkSinclair> My timeline is 6 months to a year, so it aligns with I-release
17:38:19 <hartsocks> There might be something clever we can do with a Keystone plugin either in keystone or in vCenter.
17:38:41 <hartsocks> When you say "proxy" you mean...
17:38:48 <DarkSinclair> Does Keystone in Havana better support Microsoft AD integration?  (I haven't checked yet. If so, that would be excellent.)
17:39:30 <hartsocks> to be honest I've not watched keystone closely this cycle. I'd have to dig in code.
17:39:55 <danwent> DarkSinclair: not sure, but connecting our SSO and Keystone is on our list to explore
17:39:56 <hartsocks> Would AD integration in keystone plus AD integration in vCenter (already there in 5.1) be enough to satisfy your req?
17:40:32 <hartsocks> That would mean there really wouldn't be much integration between keystone and vCenter SSO.
17:40:49 * russellb waves
17:40:51 <hartsocks> But… you would have to re-auth on both sides.
17:41:03 <hartsocks> russellb: hei hei!
17:41:31 <russellb> hartsocks: need anything while i'm around
17:41:32 <russellb> ?
17:41:58 <hartsocks> russellb: we're all anxiously watching those reviews… listed in the channel already.
17:42:02 <danwent> russellb: we really appreciate the work you've been doing on our reviews the past few days
17:42:42 <DarkSinclair> hartsocks: ideally all authentication is against individual users in AD. SSO is already set up and pulling from AD; if we could reuse that from OpenStack and Keystone, that'd be perfect
17:42:58 <russellb> danwent: np
17:43:05 <russellb> hartsocks: cool, reviewing what i can
17:43:16 <hartsocks> russellb: thanks.
17:43:32 <danwent> looks like jenkins is struggling a bit :) https://review.openstack.org/#/c/30282/
17:44:11 <hartsocks> russellb: and I have been working on your IPC problem in the test harness a bit… :-)
17:44:23 <russellb> danwent: ah, that neutron thing again :-p
17:44:45 <russellb> though ideally the recheck wouldn't have said "no bug"
17:44:55 <russellb> need to reference the bugs so it's clear how much these problems are affecting the gate
17:45:08 <danwent> russellb: agreed
17:45:39 <hartsocks> DarkSinclair: I'll record some of this conversation in the blueprint for discussion in the next design summit.
17:46:42 <danwent> russellb: we should be available on irc if anything pops up during the day
17:46:50 <hartsocks> russellb: if something/someone is stuck you can ping me
17:46:55 <DarkSinclair> hartsocks: thanks.
17:47:49 <russellb> hartsocks: danwent ok will do, will ping if i need updates on anything
17:48:01 <russellb> looking over the multiple cluster one again now
17:48:16 <hartsocks> russellb: thanks.
17:50:05 <hartsocks> Anything else folks need to discuss while we have people's attention?
17:51:27 <DarkSinclair> Will everything work w/ vSphere 5.5 ?
17:51:46 <DarkSinclair> aka: any known gotchas currently?
17:52:00 <hartsocks> DarkSinclair: we have been testing with 5.5 and so far nothing has come back to us.
17:52:38 <DarkSinclair> Perfect.
17:54:06 <hartsocks> DarkSinclair: we do have an open bug on using more than one datacenter … but that's not a 5.5-specific thing… and we have a set of proposed fixes for the problem. They aren't blueprints (feature requests), so they've been lower priority for the last week or two.
17:54:42 <hartsocks> DarkSinclair: just a BTW thing. If you use one datacenter, things work well.
17:55:52 <hartsocks> With that, I'll give people back a few minutes.
17:55:58 <hartsocks> Thanks for all your hard work.
17:56:02 <tjones> adios
17:56:15 <vuil> bye
17:56:19 <hartsocks> VMworld and this feature freeze at the same time was a lot to do!
17:56:33 <hartsocks> See you all next week.
17:56:44 <hartsocks> And we're in #openstack-vmware if you need us.
17:56:48 <hartsocks> #endmeeting