17:00:27 <hartsocks> #startmeeting VMwareAPI
17:00:28 <openstack> Meeting started Wed Jan 15 17:00:27 2014 UTC and is due to finish in 60 minutes.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:29 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:32 <hartsocks> hi!
17:00:32 <openstack> The meeting name has been set to 'vmwareapi'
17:00:36 <hartsocks> Who's around?
17:00:40 <browne> hi
17:00:47 <garyk> i am here
17:00:50 <hartsocks> brief intros this week?
17:00:51 * mdbooth is here
17:02:01 <AndyDugas_> AndyDugas from VMware is lurking
17:02:10 <hartsocks> I'm Shawn Hartsock, your friendly neighborhood community secretary aka community chair
17:02:18 <AndyDugas_> Hey Shawn
17:02:38 <browne> browne - Eric Brown from VMware
17:02:47 <hartsocks> :-) hey Andy
17:03:08 <hartsocks> browne: welcome to the new guy!
17:03:22 <mdbooth> Matt Booth: Red Hat and OpenStack n00b
17:03:43 <hartsocks> mdbooth: welcome new guy :-) all hands are welcome
17:04:00 * mdbooth got his ATC wings today when a 2 line patch in the libvirt driver was accepted :)
17:04:10 <hartsocks> :-)
17:04:14 <garyk> congratulations
17:04:56 * hartsocks listens for a moment 'cuz he knows it's just 9am in California
17:05:40 <hartsocks> we're doing short intros for those just joining...
17:06:40 <hartsocks> okay...
17:06:51 <hartsocks> just do this if you join late btw
17:06:55 <hartsocks> \o
17:06:59 <tjones> \o
17:07:01 <hartsocks> heh
17:07:01 <tjones> :-D
17:07:08 <tjones> do what?  i just joined
17:07:13 <hartsocks> #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI
17:07:20 <hartsocks> tjones: just getting rolling
17:07:37 <hartsocks> #undo
17:07:38 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0x3b8fb50>
17:07:53 <hartsocks> #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda
17:08:12 <hartsocks> Okay, meeting agenda, blueprints, bugs, open discussion
17:08:21 <hartsocks> #topic blueprints
17:08:37 <hartsocks> We're just a few days away from icehouse-2 feature freeze
17:08:56 <hartsocks> We've been tracking our BP for icehouse-2 here:
17:09:04 <hartsocks> #link https://etherpad.openstack.org/p/vmware-subteam-icehouse-2
17:09:12 <hartsocks> I've just updated the page.
17:09:57 <hartsocks> It turns out that there was a review deadline *yesterday*: if the patch for your feature (aka blueprint) wasn't set to the 'ready for review' status, you got *bumped* to icehouse-3.
17:10:29 <hartsocks> Unfortunately, 3 of the top 7 blueprints we identified were bumped.
17:10:51 <hartsocks> Actually, the top 3 priority BP were bumped :-(
17:11:09 <garyk> not sure that i understand the point about the review deadline
17:11:18 <tjones> hartsocks: we also need to have CI listening to all patches etc by i-2.  CI is blocked on the image cache AND the session management BP
17:11:23 <garyk> all of the BPs mentioned have received reviews.
17:11:46 <hartsocks> well, don't shoot the messenger
17:11:56 <hartsocks> I'm just doing y'all's bookkeeping here.
17:12:17 <tjones> i know but….  is there anything that can be done??  image cache has been reviewed up, down, and sideways
17:12:24 <hartsocks> The short of it is: if you didn't manually set the status to "Needs Review", the core-reviewers assumed you weren't paying attention.
17:13:07 <hartsocks> So what we need to do is get garyk & vuil to set "needs review" … and we'll need to call attention to those blueprints.
17:13:13 <tjones> i checked image cache yesterday and it was needs review
17:13:35 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management
17:13:42 <hartsocks> currently set to icehouse-3
17:14:00 <tjones> right and it's in  "needs code review"
17:14:05 <hartsocks> ah
17:14:12 <hartsocks> I think I see why it was bumped...
17:14:12 <tjones> just like yesterday
17:14:18 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/multiple-image-cache-handlers
17:14:51 <hartsocks> tjones: yeah, that all went down last night.
17:15:16 <tjones> we will not be able to make the CI commitments without image cache
17:15:16 <hartsocks> the image-cache-handlers BP is a dependency of the vmware-image-cache-management BP
17:15:54 <hartsocks> #action call out https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management to russellb
17:16:08 <mdbooth> CI?
17:16:31 <tjones> continuous integration tests
17:16:35 <mdbooth> Thanks
17:16:40 <garyk> mdbooth: there is the upstream gating.
17:16:55 <garyk> each driver is meant to have its own CI. the vmware one is called minesweeper
17:17:21 <garyk> at the moment we are trying to speed up the time taken to run all of the tests
17:17:37 <garyk> with the image cache temp patch the times are improved as we can run tests in parallel
17:17:41 <tjones> because we have a requirement to run within 4 hours
17:17:59 <mdbooth> Got it
17:19:24 <hartsocks> okay.
17:19:37 <hartsocks> I've put a note on the BP to russellb and I'll see about calling this out later.
17:19:45 <hartsocks> after this meeting.
17:20:24 <hartsocks> The other BPs that slipped are still important, but I think we can tolerate those slipping a bit better because they don't endanger Minesweeper.
17:20:45 <hartsocks> BTW: anyone in here want to give an update on Minesweeper while we're talking about it?
17:21:03 <garyk> hartsocks: i can try
17:21:29 <garyk> the last few days there have been infrastructure issues which have been resolved (or are in the process of being resolved)
17:21:42 <garyk> this has caused a lag in the scores from the minesweeper
17:21:53 <garyk> the queue of reviews is also terribly long
17:22:04 <garyk> thats about all at the moment
17:22:19 <hartsocks> garyk: do we have a public link for that so we can show the core-reviewers?
17:22:24 <mdbooth> Does it have a web interface, btw?
17:22:32 <hartsocks> garyk: I know we expose logs now.
17:22:38 <tjones> hence the importance of getting parallel runs
17:22:49 * hartsocks digs around for a link to Minesweeper logs
17:22:59 <garyk> mdbooth: it is jenkins like the upstream one. the logs from the runs are posted but access is currently not possible
17:23:01 <tjones> the logs are posted with the patches
17:23:23 <mdbooth> Ok
17:23:24 <tjones> web access is not possible you mean.  the logs are accessible
17:23:25 <garyk> https://review.openstack.org/#/dashboard/9008
17:23:40 <hartsocks> #link http://162.209.83.206/logs/
17:23:47 <garyk> tjones: correct.
17:23:49 <hartsocks> The directory numbers are review numbers.
17:24:23 <hartsocks> looks like we have no publicly viewable UI to show health of the Minesweeper itself.
17:24:38 <garyk> please note that the minesweeper is currently working with 2 projects - nova and neutron
17:25:38 <hartsocks> anything else to cover here before I start whining to core-reviewers?
17:25:43 <hartsocks> :-)
17:26:04 <garyk> i have spoken with vipin and can give an update regarding the oslo support.
17:26:06 <tjones> sorry for jumping all over you hartsocks - but this is critical
17:26:08 <garyk> just let me know when
17:26:26 <russellb> the review queue is long in general though, not just vmware stuff
17:26:35 <hartsocks> russellb: hey
17:26:38 <russellb> hi
17:27:23 <hartsocks> russellb: brief summary is this BP https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management improves our Minesweeper performance significantly ...
17:27:35 <hartsocks> but it was bumped to icehouse-3
17:27:44 <russellb> wasn't set to "needs code review"
17:27:52 <russellb> we bumped everything not ready for review
17:27:57 <russellb> since we have a big enough queue of stuff ready
17:28:03 <hartsocks> Well, without it we're in danger of not being able to meet the 4-hour test turn-around requirement.
17:28:30 <hartsocks> If we can get some slack either on the time for testing to turn around or on the BP we'll be in much better shape.
17:28:46 <russellb> the deadlines for all of this are icehouse
17:28:50 <russellb> not icehouse-2
17:29:51 <hartsocks> I think we had Jan 22nd in mind for some reason. Might have been an internal deadline now that I think about it.
17:29:59 <russellb> ok
17:30:15 <hartsocks> So as a driver we're cool?
17:30:18 <tjones> actually i thought it was THE deadline (not internal)
17:30:29 <russellb> we did say icehouse-2 was a soft deadline actually ... and we'd start putting warning messages after icehouse-2
17:30:32 <russellb> let me pull up the wiki page ...
17:30:37 <dansmith> right,
17:30:40 <tjones> if not - all i can say is phew!
17:30:49 <dansmith> but for things like this where we're steaming towards success, I think we're fine
17:31:01 <dansmith> we don't need to put in a warning because minesweeper is taking five hours, IMHO
17:31:06 <russellb> https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan
17:31:14 <russellb> dansmith: +1
17:31:18 <garyk> dansmith: i agree with you. great progress has been made
17:31:35 <russellb> based on the existing progress, totally agreed no warning for this driver
17:31:50 * hartsocks bows graciously
17:31:50 <dansmith> I think the warnings go in for drivers that have no (current) hope of meeting the deadline, like powervm would have been, and docker still is (I think)
17:32:09 <russellb> yeah
17:33:06 <russellb> i think we should just start a -dev thread at icehouse-2 asking for status from everyone
17:33:31 <dansmith> or just do it on the date and watch the fireworks
17:33:35 <dansmith> either way is fine with me :P
17:33:40 <russellb> you guys are in good shape
17:33:48 <hartsocks> good to hear
17:34:44 <hartsocks> anything else on these Blueprints that were moved from icehouse-2 to icehouse-3?
17:36:14 <hartsocks> Okay,
17:36:33 <hartsocks> Blueprints for vmwareapi not in Nova…
17:36:51 <hartsocks> #link https://blueprints.launchpad.net/cinder/+spec/vmdk-storage-policy-volume-type I had noted for icehouse-2 now set for icehouse-3
17:36:58 <hartsocks> That's in Cinder.
17:37:10 <hartsocks> I'll send a note to Subbu on this one.
17:37:41 <hartsocks> garyk: you said you spoke to the team doing our Oslo work.
17:37:47 <garyk> hartsocks: yes
17:37:53 <hartsocks> garyk: now would be a good time to talk about that :-)
17:38:11 <garyk> at the moment vipin is addressing all of the comments posted on the patch set. he will be posting another patch hopefully tomorrow.
17:38:29 <garyk> please note that this is an initial forklift and later patches will deal with improvements
17:38:32 <hartsocks> I feel bad for beating him up a bit in review.
17:38:53 <garyk> the purpose is to get common code into oslo so it can be used by nova/cinder/glance and, if you guys have been following, ceilometer
17:39:24 <garyk> he has addressed most. :)
17:39:37 <hartsocks> I've had a hard time getting real time chat with Vipin, we're too many timezones apart.
17:39:50 <hartsocks> #link https://etherpad.openstack.org/p/vmware-oslo-session-management-notes
17:40:05 <hartsocks> I've made this as an attempt to catch notes we might want to share on the topic.
17:41:07 <hartsocks> In general, I know we don't want to shift the API while doing the "forklift" process, but I still want to try to address the error logging and handling issues that make debugging a nightmare for us in some corner cases.
17:42:00 <hartsocks> okay.
17:42:14 <hartsocks> other blueprints?
17:43:31 * hartsocks waits for network lag and slow typists
17:44:29 <hartsocks> #topic bugs
17:44:50 <hartsocks> I'll send out my mailing update after this meeting…
17:44:59 <hartsocks> #link https://etherpad.openstack.org/p/vmware-subteam-icehouse-2
17:45:10 <hartsocks> has my preliminary report
17:45:29 <hartsocks> #link https://review.openstack.org/#/c/62587/
17:45:52 <garyk> that is stable/havana
17:45:58 <hartsocks> heh.
17:46:03 <hartsocks> bug in my report system.
17:46:32 <garyk> most serious issue at the moment is the https://review.openstack.org/#/c/64598/
17:46:49 <garyk> minesweeper passes on this but for some reason it is failing to do the +1
17:47:04 <garyk> that is causing a ton of our patches to fail the CI
17:47:42 <hartsocks> #action shill for one more +2 on https://review.openstack.org/#/c/64598/
17:48:19 <hartsocks> :-)
17:48:36 <hartsocks> okay, open discussion time?
17:49:07 <hartsocks> Next week we'll have to focus on bugs to make up for shorting them this week.
17:49:22 <garyk> will there be a meeting next week?
17:49:31 <hartsocks> good question...
17:50:00 <garyk> would it be possible that at the nova meetup whoever is going could try to sit with one or more cores to review a blueprint?
17:50:00 <tjones> i'll be in costa rica ;-)
17:50:04 <hartsocks> Jan 22nd is not a holiday as far as I see...
17:50:10 <hartsocks> tjones: unacceptable!
17:50:14 <tjones> hee hee
17:50:35 <hartsocks> Monday Jan 20th is a holiday so no meeting Monday m'kay?
17:50:47 <hartsocks> #topic open discussion
17:50:58 <garyk> my thinking is that if a ton of nova people are in the same room then why not divide into groups and have a ton of eyes on certain bp's. maybe russellb can prioritize X BP's and then a group of people spend a session on those (cross my fingers that we may have 1 or 2 in the mix)
17:52:54 <hartsocks> garyk: that's a good idea and I had hoped we could get some good skills sharing with core-reviewers
17:53:31 <hartsocks> garyk: one of my focuses has been on use of Python 3's mock libs… I hope I can contribute more broadly there...
17:53:54 <garyk> nice
17:54:17 <hartsocks> I'm currently toying with the idea that we should be able to Mock *any* observed failure mode in production/test and code for it in a unit test…
17:54:32 <hartsocks> … not sure how realistic that is, however.
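For readers of the minutes, a minimal sketch of that idea (replaying an observed failure mode via mock's side_effect); FakeSession and fetch_vm_ref are hypothetical names for illustration, not actual vmwareapi driver code:

    import unittest
    import mock  # unittest.mock on Python 3.3+


    class FakeSession(object):
        """Stand-in for a vCenter session wrapper (hypothetical)."""
        def invoke_api(self, method):
            return "ok"


    def fetch_vm_ref(session):
        # Code under test: tolerate a transport error and report it as None.
        try:
            return session.invoke_api("get_vm_ref")
        except IOError:
            return None


    class FailureModeTest(unittest.TestCase):
        def test_transport_error_is_handled(self):
            session = FakeSession()
            # Replay the failure seen in production as a side_effect.
            with mock.patch.object(session, "invoke_api",
                                   side_effect=IOError("socket reset")):
                self.assertIsNone(fetch_vm_ref(session))


    if __name__ == "__main__":
        unittest.main()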
17:54:57 <browne> when does openstack plan to switch over to python 3?
17:55:28 <hartsocks> I have no idea… but the code is slowly being walked into python 3 compatibility.
17:55:43 <browne> ah
17:55:52 <hartsocks> Right now you have to write things so they work from 2.6 through 3.3 if I recall correctly.
17:55:59 <garyk> the nova client has it as part of the gating
17:56:33 <hartsocks> I've put some notes in some places that say things like "once python 2.6 is dropped rewrite"
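As a hedged illustration of the kind of accommodation those notes refer to (not a quote from the code), str.format() on 2.6 still needs explicit field indexes and print() needs the __future__ import:

    # Illustrative only: two small accommodations needed while Python 2.6
    # is still supported alongside 2.7 through 3.3.
    from __future__ import print_function  # print() as a function on 2.6

    # 2.6 requires explicit field indexes; once 2.6 is dropped this could
    # be rewritten as "{}-{}".format(...).
    name = "{0}-{1}".format("vmware", "driver")
    print(name)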
17:56:39 <mdbooth> Another Red Hatter mentioned to me today that mock is preferred over mox in new code. Is that true in the vmwareapi driver?
17:56:50 <hartsocks> I would like it to be.
17:57:05 <browne> yeah, i believe mox doesn't work in python 3.x
17:57:24 <hartsocks> I have some broken code which is WIP where I'm playing with mock over mox: https://review.openstack.org/#/q/topic:bp/vmware-soap-session-management,n,z
17:57:35 <hartsocks> WIP: work in progress
17:57:45 <browne> https://wiki.openstack.org/wiki/Python3
17:58:21 <hartsocks> BTW: there is a mox lib that works in Python 3 but IIRC it's not officially supported or something like that.
17:58:45 <hartsocks> browne: ah, it's on that page, mox3
17:58:56 <browne> yeah, just saw that too
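To make the mox-versus-mock point concrete, a small hypothetical sketch (ImageCache and expire_old_images are made-up names) of the assert-after-the-fact style mock encourages, as opposed to mox's record/replay/verify cycle:

    import unittest
    import mock  # unittest.mock on Python 3.3+


    class ImageCache(object):
        """Hypothetical collaborator; the real one would talk to the datastore."""
        def purge(self, age_seconds):
            raise NotImplementedError


    def expire_old_images(cache):
        # Code under test: ask the cache to drop anything older than an hour.
        cache.purge(3600)


    class ImageCacheTest(unittest.TestCase):
        def test_expire_calls_purge(self):
            # autospec keeps the stub's signature honest.
            cache = mock.create_autospec(ImageCache, instance=True)
            expire_old_images(cache)
            # No ReplayAll()/VerifyAll(); just assert on the recorded calls.
            cache.purge.assert_called_once_with(3600)


    if __name__ == "__main__":
        unittest.main()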
17:59:12 <hartsocks> We're out of time.
17:59:36 <hartsocks> We're over on #openstack-vmware if anyone needs to chat. Sometimes I'm quite colourful over there.
17:59:49 <mdbooth> Good night
17:59:51 <hartsocks> Otherwise, same time next week in here. We'll focus on bugs.
18:00:05 <hartsocks> #endmeeting