14:02:06 <esberglu> #startmeeting powervm_driver_meeting
14:02:06 <openstack> Meeting started Tue Feb 28 14:02:06 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:11 <openstack> The meeting name has been set to 'powervm_driver_meeting'
14:02:51 <efried> #topic In-tree change sets
14:03:57 <esberglu> Looks like all the spawn/destroy changes (1-4) passed CI
14:03:59 <esberglu> Awesome
14:04:07 <efried> https://review.openstack.org/438119 is the bottom-most spawn/delete change set.  It is +1/+1, and ready for core reviews.  I mentioned it in #openstack-nova yesterday, and will bring it up in the meeting on Thursday.
14:05:04 <efried> https://review.openstack.org/438598 (#2) is jenkins/CI +1, ready for in-house review.  This makes spawn/delete actually create/destroy the LPAR.  No TaskFlow, no extra_specs.
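For context, the change above boils down to a bare-bones virt driver whose spawn/destroy simply create and remove the LPAR. A minimal sketch of that shape follows, assuming pypowervm is wrapped by a small vm helper module; the helper name, its functions, and their signatures are illustrative assumptions, not the code in the change set under review.

```python
# Minimal sketch only: the `vm` helper module and its create_lpar/
# power_on/power_off/delete_lpar functions are assumed for illustration.
from nova.virt import driver
from nova.virt.powervm import vm  # hypothetical helper wrapping pypowervm


class PowerVMDriver(driver.ComputeDriver):
    """Bare-bones spawn/destroy: no TaskFlow, no extra_specs yet."""

    def init_host(self, host):
        # Session setup elided; assume the pypowervm adapter and the
        # managed-system wrapper get populated here.
        self.adapter = None
        self.host_wrapper = None

    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, network_info=None, block_device_info=None):
        # Define the LPAR on the managed system, then boot it.
        vm.create_lpar(self.adapter, self.host_wrapper, instance)
        vm.power_on(self.adapter, instance)

    def destroy(self, context, instance, network_info,
                block_device_info=None, destroy_disks=True):
        # Force the partition off, then remove its definition.
        vm.power_off(self.adapter, instance, force_immediate=True)
        vm.delete_lpar(self.adapter, instance)
```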
14:05:42 <efried> https://review.openstack.org/#/c/427380/ is power on/off & reboot, now third in the chain, because why not.  It's jenkins/CI +1, ready for in-house review.
14:06:02 <efried> https://review.openstack.org/#/c/438729/ adds TaskFlow.  Ready for in-house review.
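For anyone not steeped in TaskFlow, a tiny self-contained sketch of the pattern that change presumably follows: each step becomes a task with a revert hook, and the steps are chained into a linear flow so a failure rolls back earlier work. Task names and bodies here are illustrative only.

```python
import taskflow.engines
from taskflow import task
from taskflow.patterns import linear_flow


class CreateLPAR(task.Task):
    def execute(self, instance_name):
        print('creating LPAR for %s' % instance_name)

    def revert(self, instance_name, **kwargs):
        # Runs automatically if a later task in the flow fails.
        print('rolling back LPAR for %s' % instance_name)


class PowerOn(task.Task):
    def execute(self, instance_name):
        print('powering on %s' % instance_name)


flow = linear_flow.Flow('spawn')
flow.add(CreateLPAR(), PowerOn())
taskflow.engines.run(flow, store={'instance_name': 'demo-vm'})
```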
14:06:34 <efried> https://review.openstack.org/#/c/391288/ adds flavor extra specs, and brings us up to where that change set was before, with one small exception: no conf.
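The extra specs piece in one hedged snippet: a nova flavor carries a free-form extra_specs dict of strings, and the driver reads 'powervm:'-prefixed keys from it to shape the LPAR. The specific keys and defaults below are illustrative assumptions, not the exact set that change supports.

```python
# Illustrative only: the keys and defaults are assumptions.
def vm_attrs_from_flavor(flavor):
    """Map powervm: extra specs onto LPAR build attributes."""
    specs = flavor.extra_specs or {}
    return {
        'mem_mb': flavor.memory_mb,
        'vcpus': flavor.vcpus,
        # Fractional processing units, defaulting to 0.5 per vCPU.
        'proc_units': float(specs.get('powervm:proc_units',
                                      flavor.vcpus * 0.5)),
        'uncapped': specs.get('powervm:uncapped', 'true') == 'true',
    }
```

A property set with `openstack flavor set --property powervm:proc_units=2.0 <flavor>` would then flow through a helper along these lines.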
14:07:32 <esberglu> #action all: Review in-tree changesets
14:07:38 <efried> Nod.
14:07:59 <efried> I'll continue to work on piling the rest on top.
14:08:21 <efried> Now, to smooth the way for core reviews, we got some advice that we should be reviewing their stuff too.
14:08:28 <efried> I tried to do some of that yesterday.
14:08:55 <efried> TBH, most of it is beyond me.  I can check for spelling and formatting errors, or procedural stuff, but that's usually it.
14:09:20 <thorst> sorry for joining late
14:09:24 <efried> But I guess we learn as we go.  Thanks tonyb for the tips on reviewing stable/* - will keep that in mind.
14:09:27 <thorst> efried: Honestly, even that is OK.
14:09:36 <thorst> with a comment that you just looked at code structure
14:10:14 <efried> And I'll pick up on some of the functionality the more I do it, which is overall a good thing.
14:10:23 <thorst> +1
14:10:36 <efried> #action all: Do some nova reviews.
14:11:06 <esberglu> Anything else in-tree?
14:13:37 <efried> Not from me.  thorst adreznec ?
14:13:53 <thorst> nada (at the end I'd like to propose a new topic be added to the meeting though)
14:13:57 <adreznec> Nope
14:14:24 <efried> We got anything OOT?
14:14:48 <esberglu> I don't
14:14:52 <thorst> if we have new blueprints, it'd be a good time to file them.
14:15:14 <thorst> there was discussion at the PTG about new blueprints.  Some will impact us.  We need to do a review and see where we have new work to do there
14:15:24 <adreznec> Yeah, and at some point here we're probably going to want to look over the Pike blueprints in general
14:15:34 <thorst> adreznec: were you setting something up for that?
14:15:42 <adreznec> thorst: Sure, I can set something up
14:15:45 <thorst> I get to play the card of, fairly soon I'll be out of here
14:15:50 <thorst> (baby and all)
14:16:25 <efried> Ugh, I have a moldy one on my radar for SR-IOV max bandwidth.  And another for SR-IOV stats for ceilometer.
14:17:28 <efried> I fear that may exceed my max bandwidth.
14:17:31 <thorst> I'd be more interested in the SR-IOV stats personally
14:17:46 <thorst> yeah, let's do a review of the Pike blueprints, because I suspect there are more important things in there.
14:18:10 <thorst> Looking into ways for us to get more bw as well...
14:18:30 <esberglu> #action all: Review pike blueprints
14:19:18 <efried> Don't we normally do that by having someone (adreznec) set up a wiki page with a big table of all the blueprints, and then have a series of calls to discuss them?
14:19:35 <adreznec> efried: Yeah, I'll scrape that data together again
14:19:42 <efried> Thanks adreznec
14:21:48 <esberglu> Alright next topic
14:21:56 <esberglu> #topic CI
14:22:08 <esberglu> CI is looking good again after that bug this weekend
14:22:45 <esberglu> One of the SSP groups is down, I'm going to get that back up today
14:23:26 <efried> esberglu Gonna tell them what happened there?  ;-)
14:23:27 <adreznec> esberglu: Any idea what caused it to go down? Network? SAN? VIOS just angry?
14:23:40 <efried> Oh, please, can I?
14:23:54 <esberglu> Haha sure
14:24:23 * thorst curious
14:24:36 <efried> esberglu reinstalled one of the nodes, and accidentally specified the SSP repo & one of the data disks as the nvl & VIOS boot disks.
14:24:57 <thorst> hah
14:25:10 <thorst> totally understandable that this could happen
14:25:14 <efried> Totally.
14:25:17 <thorst> it should be brought up with Hsien in NL scrum
14:25:18 <efried> I blame the installer.
14:25:23 <thorst> because I could see a user doing that.
14:25:24 <thorst> tote
14:25:27 <thorst> totes
14:25:27 <adreznec> Mhm
14:25:28 <esberglu> efried: I went through the installer again yesterday afternoon, didn't see anywhere to specify disks?
14:25:43 <efried> Problem is, how would you know if the disks are in use?
14:25:46 <thorst> esberglu: That's the problem it seems.  The installer just picks the 'biggest disk'
14:25:55 <thorst> and that's a super problem when it comes to SSPs.
14:25:56 <efried> Well, no, the repo disk was the smallest by far.
14:26:03 <efried> So something else happened there.
14:26:04 <thorst> maybe its smallest disk
14:26:09 <thorst> there is some decision logic in there.
14:26:15 <thorst> we need Hsien/Minh to weigh in
14:26:24 <efried> I thought you got the option to select disks
14:26:31 <thorst> SDE mode you do
14:26:36 <efried> ...without actually going into that text file at the end.
14:26:36 <thorst> not standard.
14:26:38 <adreznec> efried: I don't think we expose that for standard nvl
14:26:46 <efried> Wow, that seems like a mistake.
14:26:48 <adreznec> (against my wishes)
14:26:51 <adreznec> Yeah
14:26:55 <adreznec> We should definitely revisit that
14:27:09 <efried> Is there a decent way to identify local vs. SAN disks?
14:27:25 <efried> Cause that would be a reasonable criterion too, for defaulting.  Use local disks first.  That would have avoided this problem.
14:27:51 <efried> But regardless, we should definitely be prompting the user, I would think.
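A rough illustration of the "prefer local disks" idea, assuming a Linux-based NovaLink partition where lsblk is available; lsblk's TRAN column reports each disk's transport, so FC/iSCSI-attached LUNs can be pushed to the back of the candidate list. The selection policy here is just an example, not what the installer actually does today.

```python
import subprocess

SAN_TRANSPORTS = {'fc', 'fcoe', 'iscsi'}


def candidate_boot_disks():
    """List disks, preferring local transports over SAN-attached ones."""
    out = subprocess.check_output(
        ['lsblk', '-dno', 'NAME,TRAN,SIZE'], universal_newlines=True)
    local, san = [], []
    for line in out.splitlines():
        fields = line.split()
        name = fields[0]
        # TRAN may be blank (e.g. virtual disks); treat blank as local.
        tran = fields[1] if len(fields) > 2 else ''
        (san if tran in SAN_TRANSPORTS else local).append(name)
    # Present local disks first; never silently grab SAN LUNs.
    return local + san
```

Even with a sane default ordering, the point above stands: the installer should still prompt before claiming any disk.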
14:29:06 <efried> Okay, so
14:29:23 <efried> #action esberglu to re-reinstall that node and rebuild the SSP
14:29:42 <efried> #action adreznec thorst to corner @changh & @minhn about disk selection in the installer.
14:30:05 <esberglu> The other thing I wanted to do was track down the neo systems that were slated for OSA CI
14:30:21 <esberglu> I think wangqwsh has neo4 and we were going to use that
14:30:38 <esberglu> and the other was neo50 and I think thorst: has that?
14:30:51 <thorst> esberglu: I thought that we were just adding them to our overall pool
14:30:59 <thorst> and that our overall pool actually had enough capacity as is
14:31:07 <thorst> so if we can keep those systems for other work...I'd like that for now
14:31:16 <thorst> if we're at capacity and need more, that's another issue
14:31:28 <adreznec> I'd be interested to know what our utilization looks like
14:31:40 <adreznec> especially since we're looking at adding the OSA CI now
14:31:45 <adreznec> On top of the OOT and in-tree runs
14:31:48 <thorst> right.
14:32:26 <thorst> so maybe lets first figure out utilization, and then see if we need another pool?
14:32:30 <esberglu> Oh yeah I wanted to mention that. We haven't been running ceilometer / neutron since the in-tree stuff
14:32:53 <adreznec> At all?
14:33:09 <esberglu> We should have enough capacity for all of that once I get this SSP back up
14:33:44 <thorst> yikes.
14:33:45 <esberglu> Yeah
14:36:29 <efried> What else?
14:36:38 <esberglu> Just thinking about the OSA CI development
14:36:51 <thorst> the OVS bits there...need to dig up those notes
14:36:57 <esberglu> It will really limit us to have to share staging between OSA CI dev
14:36:58 <thorst> I think we were going to do some deploy with vxlan networks
14:37:10 <esberglu> And testing changes for the current CI
14:37:21 <thorst> esberglu: I think we can maybe ask qingwu for neo4 to be part of that
14:37:29 <thorst> I think its essentially free atm
14:37:50 <esberglu> Okay
14:38:19 <esberglu> #action esberglu: See if we can use neo4 for CI
14:38:33 <thorst> congrats though on the stability of the CI...I'm pretty impressed with how you've been keeping it going so well
14:38:45 <efried> +1
14:39:30 <esberglu> Thanks
14:39:51 <thorst> time for new topic discussion or other things to move on to?
14:40:03 <esberglu> New topic
14:40:20 <thorst> so we have nbante and Jay helping us with testing.  Which has been great.
14:40:54 <thorst> but I think that we should add a topic around 'what new functions were added this week that could benefit from a test run from them'
14:41:01 <thorst> second set of eyes type thing
14:41:47 <thorst> thoughts?
14:42:13 <efried> For sure.  At least those two should be involved in this meeting and we should have a topic around what they're up to and what's on their horizon.
14:42:26 <esberglu> How are they testing? Just curious
14:43:11 <efried> bjayasan is using devstack, pulling down in-tree change sets, and manually running nova CLI spawn/delete, etc.
14:43:40 <efried> He's eventually supposed to be trying to set up and run tempest.
14:44:01 <efried> And looking toward potentially developing some PowerVM-specific tempest tests.
14:44:03 <thorst> yeah...intention is bjayasan uses the in-tree tempest config.  nbante deploys via OSA and uses the OOT tempest CI config (to give an SDE-style env a go)
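To make the tempest direction a bit more concrete, a hedged sketch of what a PowerVM-flavored compute smoke test could look like; it leans only on the generic tempest compute base class and waiters, since nothing PowerVM-specific exists yet, and the class name, UUID, and test body are purely illustrative.

```python
from tempest.api.compute import base
from tempest.common import waiters
from tempest.lib import decorators


class PowerVMServerSmokeTest(base.BaseV2ComputeTest):
    """Spawn and delete a server; the PowerVM driver does the rest."""

    @decorators.attr(type='smoke')
    @decorators.idempotent_id('9f1c2b3a-0d4e-4c5f-8a6b-7c8d9e0f1a2b')
    def test_create_delete_server(self):
        server = self.create_test_server(wait_until='ACTIVE')
        self.servers_client.delete_server(server['id'])
        waiters.wait_for_server_termination(
            self.servers_client, server['id'])
```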
14:44:17 <nbante> I successfully set up OSA last Friday, where I was able to do some basic deploys using the UI
14:44:45 <nbante> I am exploring those options now. Next step is to configure tempest
14:45:09 <thorst> nbante: we've got some experience there, so I think we'll be able to help out  :-)
14:45:25 <nbante> that's good
14:45:26 <efried> nbante Can you coordinate with esberglu and bjayasan to get that going for both of you?
14:45:35 <nbante> sure
14:45:42 <efried> Thanks.
14:45:51 <efried> I can help out too if esberglu is overloaded.
14:46:04 <thorst> FYI - I need to drop here.  :-(
14:46:13 <thorst> but sounds like everything is in pretty good order.  :-D
14:46:17 <nbante> sure, will check. Thanks
14:46:20 <efried> Are we about done?
14:46:24 <esberglu> I think we are wrapping up anyways
14:46:32 <esberglu> Any final topics?
14:49:14 <esberglu> #endmeeting