17:00:24 <hartsocks> #startmeeting vmwareapi
17:00:25 <openstack> Meeting started Wed Mar  5 17:00:24 2014 UTC and is due to finish in 60 minutes.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:29 <openstack> The meeting name has been set to 'vmwareapi'
17:00:47 <hartsocks> #topic icehouse feature freeze
17:01:10 <chaochin> whoami
17:01:21 <hartsocks> #undo
17:01:21 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x3affb90>
17:01:27 <hartsocks> #topic greetings
17:01:38 <hartsocks> Folks want to go around and introduce themselves?
17:01:49 <browne> Eric here from vmware
17:01:52 <hartsocks> I'm Shawn Hartsock… I run the IRC meeting and keep track of blueprints etc.
17:02:07 <vincent_hou> Vincent Hou from IBM.
17:02:14 <arnaud> Arnaud  VMware
17:02:54 <chaochin> zhaoqin from ibm
17:02:54 <vincent_hou> Good to meet everyone.
17:03:10 <chaochin> hi vincent
17:03:28 <vincent_hou> Hey.
17:03:29 * hartsocks smiles and nods at everyone
17:03:47 <hartsocks> Okay. So for those who might be new...
17:03:58 <hartsocks> #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
17:04:25 <hartsocks> We're still in the IceHouse release cycle. But we've passed Nova's Feature Freeze (known as FF on IRC)
17:05:01 <hartsocks> If you have a blueprint that is not targeted at IceHouse-3 (or i-3) as of today … you have to apply for an Exception to the rule...
17:05:05 <hartsocks> known as a FFE
17:05:23 <hartsocks> #topic icehouse Feature Freeze
17:05:32 <hartsocks> #link https://etherpad.openstack.org/p/vmware-subteam-icehouse
17:05:49 <hartsocks> I've compiled notes on what stayed targeted at IceHouse and what didn't.
17:06:05 <hartsocks> If you're on the Nova project… bad news all around.
17:06:22 <hartsocks> Not a single one of the blueprints (aka features) stayed inside IceHouse.
17:06:40 <hartsocks> That means if you want to try and make this release you need to apply for a FFE.
17:07:39 <arnaud> I am going to apply for a FFE for the image handling code
17:07:41 <hartsocks> I've identified two blueprints I'll be filing for an FFE. If you would be so kind as to give a +1 by the BP you think I should ask for a FFE on … or a −1 if you disagree and think we should revise the BP in Juno.
17:07:46 <tjones> hartsocks: i have asked for one for image cache
17:07:47 <arnaud> the base framework has been merged
17:08:07 <hartsocks> could I get you folks to note that on the etherpad for coordination's sake?
17:08:09 <tjones> garyk has asked for one for iso
17:08:32 <tjones> arnaud: i recommend you get core review sponsorship ahead of time if possible
17:08:34 <hartsocks> okay, cool.
17:09:11 <arnaud> yep tjones
17:09:15 <mdbooth> arnaud: danpb expressed an interest
17:09:42 <mdbooth> (assuming we're talking about the image cache: sorry I'm late)
17:10:27 <tjones> mdbooth: no this is utilizing the image handling changes that arnaud made in glance
17:10:34 <mdbooth> Ah, sorry
17:10:38 <arnaud> mdbooth: actually no I am talking of the image handling (from glance to nova)
17:11:34 <garyk> i will be back in a few minutes. sorry
17:11:55 <hartsocks> arnaud: are we tracking that? do we need to?
17:12:33 <tjones> hartsocks: yes lets please track it
17:12:40 <hartsocks> link?
17:12:58 <arnaud> I will update the page as soon as I am in the office
17:13:08 <arnaud> the bp has been marked as implemented
17:13:15 <hartsocks> arnaud: okay.
17:13:18 <arnaud> so I need to discuss with the core first
17:13:41 <hartsocks> arnaud: It might already be on my list… https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend
17:14:00 <arnaud> this one is the glance code
17:14:03 <arnaud> this has been merged
17:14:21 <hartsocks> okay, I already marked that one down in the "made it" column.
17:14:27 <arnaud> there is this one https://review.openstack.org/#/c/63975/ and https://review.openstack.org/#/c/67606/
17:15:07 <hartsocks> arnaud: okay the BP links there are broken (on the reviews)
17:15:27 <hartsocks> arnaud: but I was not tracking that one.
17:15:28 <arnaud> yes (I think it is because the bp became "implemented"
17:15:29 <arnaud> )
17:16:26 <hartsocks> okay. I've made note of the patch set. We'll clean up the notes later.
17:16:48 <tjones> sabari: do we want to ask for FFE on SPBM?
17:17:19 <tjones> i guess it did not make it into cinder???
17:17:28 <arnaud> I don't think so
17:17:33 <sabari> tjones: let me check if it made in cinder
17:17:51 <sabari> tjones: nope
17:17:59 <arnaud> I didn't even send for review the patch for glance
17:18:20 <tjones> none of the cinder patches merged
17:18:26 <tjones> bummer
17:19:21 <hartsocks> On the bright-side we're fairly well lined up for Juno-1 right?
17:19:28 <arnaud> yes
17:19:29 <tjones> lol
17:19:33 <tjones> yeah....
17:19:43 <arnaud> I think cinder-nova-glance should have spbm in juno1
17:19:47 <sabari> it will be good to have spbm so we can do away with the regex to choose datastores, which is quite painful for an admin.
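For context: the regex sabari mentions is the VMware driver's datastore_regex option in nova.conf. A minimal sketch, assuming Icehouse-era VMwareVCDriver settings; all values here are illustrative:

    [vmware]
    host_ip = 10.0.0.5
    cluster_name = cluster1
    # only datastores whose names match this regex are used for instance disks;
    # SPBM storage policies would replace this name-matching scheme
    datastore_regex = openstack-.*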
17:19:53 <tjones> vsan?
17:20:00 <arnaud> vsan juno
17:20:33 <tjones> ok - well it is what it is
17:21:55 <hartsocks> Well. I don't think there's much we can do in this meeting other than self-organizing around who is going to help ask for FFE for which BP.
17:22:15 <hartsocks> So long as we can help each other out on that front, I think we're doing all we can.
17:22:48 <sabari> we already asked for image-cache right ?
17:22:52 <tjones> yes
17:23:01 <tjones> we can't ask for spbm since cinder is not in
17:23:04 <hartsocks> sabari: https://etherpad.openstack.org/p/vmware-subteam-icehouse
17:23:08 <tjones> we have asked for iso
17:23:22 <hartsocks> mark up the etherpad as appropriate so we're coordinated.
17:24:02 <sabari> ok
17:24:08 <tjones> lets just be sure we ask for what we *really* need to support our customers and not what we wish for
17:24:16 <hartsocks> yeah, just to avoid spamming/crossing over each other.
17:24:33 <tjones> IMO it is image cache, ISO, and glance image support.
17:24:50 <tjones> but you can certainly disagree - lets make sure we are on the same page
17:25:18 <hartsocks> BTW: https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage I know of one large customer seeking that feature but they wanted it in spring this year at the earliest.
17:25:24 <tjones> im thinking about the things that really hurt our customers
17:25:47 <hartsocks> The ISO feature seems like a no-brainer. It's a fairly basic admin tool.
17:26:14 <sabari> hartsocks: that bp is partially implemented. We do support root disks of different flavors.
17:26:20 <sabari> just that we miss on ephemeral disks.
17:26:26 <tjones> image cache - without it they fill up their datastores with unused images
17:26:49 <hartsocks> I've got no objection to the image cache system changes.
17:27:08 <hartsocks> sabari: ephemeral disks are a big use-case in OpenStack style deployments in some clouds.
17:27:24 <hartsocks> sabari: without ephemeral disks there are a number of use-cases we just don't support.
17:27:43 <hartsocks> For example, it is common to boot an image with a config drive…
17:27:48 <tjones> hartsocks: do you think we should ask for FFE on that one?
17:28:02 <hartsocks> yes. but I'm admittedly biased. I've worked on it a while.
17:28:14 <tjones> it's been in play since before havana
17:28:16 <sabari> hartsocks: we do support booting with config drives.
17:28:29 <hartsocks> … and then populate the root disk with an OS
17:28:37 <hartsocks> … and the ephemeral disk with the specific application.
17:28:53 <hartsocks> The ephemeral disk typically acts as the application drive for an application server.
17:29:03 <tjones> you said there was 1 customer blocked on this but they don't need it until spring?
17:29:06 <hartsocks> This is a typical deploy strategy for shops using JVM based application servers.
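For context: a minimal sketch of the root-plus-ephemeral-disk pattern hartsocks describes, assuming the standard nova CLI of the time; the flavor, image, and instance names below are hypothetical:

    # flavor with 8 GB RAM, 4 vCPUs, a 20 GB root disk, and a 40 GB blank ephemeral disk
    nova flavor-create app.large auto 8192 20 4 --ephemeral 40
    # boot with a config drive; the guest sees the ephemeral disk as a second
    # blank disk it can format and use as the application volume
    nova boot --flavor app.large --image ubuntu-base --config-drive true app-server-01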
17:29:32 <hartsocks> tjones: yes it was in their future plans. When I saw Tang was going to finish this in Havana I thought we would be well set.
17:30:04 <tjones> ive not been paying attention to that one
17:30:28 <hartsocks> tjones: I thought it had been in play so long and *so well* reviewed that it would be a sure fire merge by now.
17:30:41 <tjones> ugh
17:30:56 <hartsocks> tjones: this is the *second* attempt at ephemeral disks. The first was in Havana-2
17:31:06 <hartsocks> (that was July 2013)
17:31:35 <hartsocks> so I've been shepherding that for 9 months now. It's kind of become personal at this point. :-/
17:31:50 <tjones> jenkins barfed on both dependent patches
17:31:54 <hartsocks> nice.
17:32:08 <sabari> yup, we should consider it next. I base that on the fact that there hasn't been any talk on the vmware community forum about ephemeral disk support.
17:32:45 <hartsocks> fair enough.
17:33:00 <tjones> BTW - v3 diags did make it
17:33:04 <tjones> so we had 1
17:33:09 <tjones> in nova
17:33:10 <hartsocks> :-)
17:33:14 <hartsocks> yay!
17:33:30 <sabari> but v3 itself may not make it, right - i didn't follow that thread completely.
17:33:30 <browne> sabari: curious, what vmware community forum do you monitor?
17:33:35 <sabari> so apologies if i missed something.
17:33:39 <tjones> true
17:33:43 <tjones> i lost track of it too
17:34:06 <sabari> browne: #link http://communities.vmware.com/community/vmtn/openstack
17:34:28 <browne> sabari: nice, thx
17:35:41 <hartsocks> I want to talk about the driver-specific-snapshot BP at some point today. If we run out of time, remind me.
17:35:55 <hartsocks> It's definitely a Juno BP though… priorities.
17:36:02 <vincent_hou> OK
17:36:22 <hartsocks> BTW: I've started ...
17:36:30 <hartsocks> #link https://etherpad.openstack.org/p/vmware-subteam-juno
17:36:38 <hartsocks> for what it's worth...
17:36:50 <vincent_hou> https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot
17:37:40 <hartsocks> vincent_hou: you guys brought up something that really made me think about how the current driver works and I think the current snapshot system is fundamentally wrong. But, more on that later.
17:37:50 <chaochin> :vincent_hou I did not undstande your last comment in this BP
17:38:12 <chaochin> sorry, typo
17:38:50 <mdbooth> I don't know if it's relevant, yet, but for Juno I think we'd be well served to serialise development somewhat
17:39:02 <chaochin> :vincent_hou can you explain it a little more?
17:39:04 <hartsocks> Interested parties in driver-specific-snapshot … let's talk over on #openstack-vmware after we clear the icehouse related discussion. I just didn't want to forget you :-)
17:39:24 <hartsocks> mdbooth: okay, you've piqued my interest.
17:39:47 <mdbooth> hartsocks: Just thinking of ways to reduce the 'in-flight'
17:39:53 <vincent_hou> chaochin: which comment or sentence exactly?
17:40:12 <mdbooth> If we pick a set of features which can be developed either entirely independently
17:40:14 <chaochin> your last comment today..
17:40:15 <tjones> mdbooth: any suggestions on making this better are HIGHLY appreciated
17:40:15 <mdbooth> Or serially
17:40:35 <mdbooth> Then absolutely hammer on a much smaller set until they get merged
17:40:38 <mdbooth> Before moving on
17:41:27 <mdbooth> Right now we've got a massive in-flight
17:41:44 <mdbooth> And so many inter-dependencies it makes my head hurt
17:42:23 <hartsocks> mdbooth: I'm not sure how we ended up with these long-chain dependencies between patches but that is definitely a problem.
17:42:28 <vincent_hou> chaochin: we will discuss it later.
17:42:33 <vuil> I wholeheartedly agree Matt. The independent stuff we can call out and have them be worked on, well, independently.
17:42:45 <chaochin> vincent_hou: thx
17:43:22 <vuil> We can be more focused in the 'hammering', but the 'move on' part depends a lot on the review process, and till things merge, they will just always be interdependent on one another's changes.
17:44:00 <tjones> and the longer it takes the worse it gets
17:44:23 <mdbooth> Develop, then swap reviews until you get attention? ;)
17:44:35 <hartsocks> We could just deal with merge conflicts as they happen.
17:44:42 <tjones> take refactoring vmops (or even spawn).  it has to happen, but it has to happen quickly so we can take advantage of it
17:45:42 <hartsocks> mdbooth: do you still have those lovely graphs you posted on the ML?
17:45:57 <hartsocks> mdbooth: care posting them here with a #link tag?
17:46:01 <mdbooth> Let me dig
17:46:31 <mdbooth> #link http://lists.openstack.org/pipermail/openstack-dev/2014-February/028077.html
17:46:46 <mdbooth> #link http://lists.openstack.org/pipermail/openstack-dev/attachments/20140225/d0de8dd7/attachment.svg
17:46:54 <mdbooth> #link http://lists.openstack.org/pipermail/openstack-dev/attachments/20140225/d0de8dd7/attachment-0001.svg
17:47:53 <sabari> I think there should be more utility patches that get shared between features in review. Otherwise, the common code will be reimplemented in different places.
17:47:57 <mdbooth> I think we need to ask for help, but we also need to focus on what we can do
17:48:29 <hartsocks> What's interesting is the Nova bubble on the number of days in flight graph is 10 days and vmwareapi is 12 days.
17:48:35 <hartsocks> Did I read that right?
17:48:44 <mdbooth> Yeah
17:48:57 <mdbooth> I'm actually not convinced that number is correct
17:49:11 <hartsocks> mdbooth: does that mean our code is not getting looked at for 12 days or that it's not merging for 12 days?
17:49:12 <mdbooth> However, I haven't spent the time digging
17:49:24 <hartsocks> If that number is accurate...
17:49:43 <mdbooth> It's supposed to be a measure of the time it takes the final patch version to be approved
17:49:56 <hartsocks> I *might* draw the conclusion that our code *is* getting looked at but it's being rejected at a rate much higher than other people's.
17:50:38 <hartsocks> I would accept that as supporting the conclusion that for whatever reason we're getting rebuffed on code quality (whatever that might actually mean; don't take it personally, folks).
17:50:51 <mdbooth> I can do some more work on this if people think it's interesting
17:50:56 <hartsocks> I do.
17:51:06 <mdbooth> However, it didn't seem to get much traction from the people who matter
17:51:09 <mdbooth> So I'm not convinced
17:51:31 <hartsocks> Well, sometimes these things are sales-pitches not substance.
17:51:48 <hartsocks> But I think the real value in your graph is the two numbers together.
17:51:48 <garyk> it really would be nice if there was a core person working on the driver. that would help considerably.
17:52:11 <garyk> if it is any consolation the xen drivers also take time to get reviews in and they have i think 4 cores
17:52:37 <hartsocks> garyk: I'm sure that would help, primarily I think it's a matter of meeting their expectations, which we simply don't know yet as a group.
17:53:05 <garyk> i think that each core has different expectations (which is a healthy thing).
17:53:29 <hartsocks> and we can't match those if we get random cores?
17:53:34 <hartsocks> core-reviewers that is.
17:53:50 <garyk> that is the $ million question
17:54:09 <hartsocks> Which I think is what mdbooth's number *might* answer.
17:54:41 <hartsocks> I would love to be able to drill down to the patch level with these stats.
17:55:12 <tjones> we know we are suffering from a quality perspective in our driver.  we've been hesitant to make large changes due to the lengthy review process.  but now we are at a point where we *have* to do it.  so we need to make a plan for doing that in juno.  that's what we need to figure out - how to do it most effectively and get it in in juno-1 so we can take advantage of it
17:55:24 <sgordon_> mdbooth, do you have the script(s) you used to generate these up somewhere in case someone wants to have a play
17:55:43 <mdbooth> sgordon_: Check out the original mail post. There's a link to the code.
17:55:47 <sgordon_> ta
17:56:19 <hartsocks> tjones: yes, unfortunately we are in a chicken-and-the-egg type of problem I think.
17:57:22 <hartsocks> and for the record, garyk is right… we would be helped tremendously by having a core-reviewer working with us. Then we would at least know we were measuring the right metrics for one of our +2 votes :-)
17:57:43 <mdbooth> We can help ourselves with in-flight though by having less stuff in flight
17:58:09 <hartsocks> mdbooth: so your counsel is literally ....
17:58:10 <mdbooth> I think we've hit a point where it's counter-productive both to ourselves and external reviewers
17:58:19 <hartsocks> *do less*
17:58:22 <hartsocks> :-)
17:58:29 <tjones> ok lets take that.  to have less stuff in flight we need smaller patches that have good tests and we need to get them in quickly
17:58:30 * mdbooth heads for the beach
17:58:45 <vincent_hou> guys, it seems that we are overrunning.
17:58:56 <hartsocks> we have 2 minutes ;-)
17:59:26 <chaochin> hartsocks: I still have one question
17:59:29 <hartsocks> tjones: so for that maybe as a group we should look at just addressing the top blueprints, say 3 at a time, and working down a list, holding reviews.
17:59:30 <mdbooth> tjones: Also perhaps more review swaps to get more attention
17:59:38 <mdbooth> Although garyk is a machine
17:59:59 <hartsocks> chaochin: are you around for the next 30 minutes? I'll be on #openstack-vmware
18:00:01 <vuil> I am pretty sure he literally is one
18:00:05 <hartsocks> Well that's time.
18:00:06 <chaochin> hartsocks: Do we hope to let glance recognize VM templates in the future?
18:00:07 <tjones> mdbooth: by review swap - you mean review other people's patches that are *not* vmware patches
18:00:16 <mdbooth> tjones: Yes
18:00:30 <tjones> mdbooth: yes we def need to do that
18:00:31 * hartsocks calls time
18:00:37 <hartsocks> #endmeeting