13:01:15 <esberglu> #startmeeting powervm_driver_meeting
13:01:16 <openstack> Meeting started Tue Jun 27 13:01:15 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:19 <openstack> The meeting name has been set to 'powervm_driver_meeting'
13:01:28 <efried> \o
13:01:33 <mdrabe> o/
13:02:04 <edmondsw> o/
13:02:12 <efried> AndyWojo Well, I wouldn't say what we've got in tree is usable at this point.  Were there other third-party (out-of-tree) drivers in the survey?
13:02:33 <esberglu> #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
13:02:38 <esberglu> #topic In Tree Driver
13:02:50 <esberglu> #link https://etherpad.openstack.org/p/powervm-in-tree-todos
13:03:37 <esberglu> Anything to talk about here? Or still just knocking through the todos?
13:03:53 <edmondsw> AndyWojo which survey is that? I thought the Ocata survey was closed a while back. We are working to get on the survey.
13:04:40 <jay1_> efried: what operations are ready for test in IT ?
13:05:06 <efried> jay1_ That hasn't changed in a while.  SSP disk was the last thing we merged.
13:05:16 <jay1_> ok..
13:05:22 <edmondsw> jay1_ no network yet IT
13:05:51 <jay1_> that would be the next one to do ?
13:06:03 <edmondsw> jay1_ but you can deploy with SSP boot disk and do some things like stop, restart... see the support matrix
13:06:08 <edmondsw> efried you have a quick link for that?
13:06:18 <efried> Config drive will probably be next, cause it's easy.
13:06:42 <edmondsw> jay1_ network will be one of the priorities for queens, along with config drive
13:06:55 <efried> #link https://docs.openstack.org/developer/nova/support-matrix.html
13:06:58 <jay1_> okay
13:08:34 <jay1_> efried, in that matrix PowerVM refers to OOT only ?
13:08:41 <efried> jay1_ IT only
13:08:52 <jay1_> ah ok.
13:08:59 <efried> OOT we have a lot more green check marks
13:09:01 <esberglu> http://nova-powervm.readthedocs.io/en/latest/support-matrix.html
13:09:05 <esberglu> That's OOT
13:09:48 <edmondsw> esberglu we need to change the OOT version's checkmarks to green... would be much easier to read
13:09:55 <edmondsw> I'll throw that on the TODO
13:10:17 <esberglu> +2
13:11:59 <esberglu> Alright sounds like that's it for IT
13:12:13 <esberglu> #topic Out Of Tree Driver
13:13:21 <jay1_> when is the next ISCSI integration point ?
13:13:28 <jay1_> is that integration done ?
13:14:05 <efried> Have you heard anything from chhavi about the latest pypowervm + https://review.openstack.org/#/c/467599/ ?
13:14:28 <jay1_> no
13:14:29 <efried> She was going to sniff test that to make sure we didn't need any further pypowervm fixes so we can cut a new release.  I want to get that done pretty quickly here.
13:14:53 <edmondsw> efried agreed
13:15:24 <edmondsw> jay1_ please talk to chhavi about this. I'll send a note as well to try to push this along
13:16:00 <jay1_> edmondsw: sure
13:18:47 <edmondsw> note sent
13:18:52 <esberglu> If it turns out you do need pypowervm fixes let me know and I can push a run through CI with it when ready
13:19:32 <esberglu> Nvm just clicked on the review
13:20:06 <esberglu> I don't think it would hit any changes going through our CI?
13:20:22 <efried> It what?
13:21:02 <efried> The pypowervm that's merged right now is copacetic.  Last thing merged was the power_off_progressive change, and you already tested that.
13:21:13 <efried> The question is whether we're going to need anything else in 1.1.6
13:21:14 <esberglu> Well any pypowervm changes would be related to ISCSI right? Which isn't part of the CI
13:21:38 <efried> esberglu Well, right, but a regression test wouldn't be a bad thing.
13:21:41 <esberglu> So I don't know that the changed paths would get hit
13:21:50 <esberglu> Yeah I can push one anyways just to be safe
13:23:15 <edmondsw> mdrabe efried should we talk about https://review.openstack.org/#/c/471926/ now?
13:23:35 <efried> okay
13:23:36 <edmondsw> we've had emails flying back and forth... hash it out?
13:24:01 <edmondsw> mdrabe you still here?
13:24:07 <mdrabe> Yea I'm gonna whip that up this afternoon I think
13:24:28 <edmondsw> what exactly does that whipping entail? ;)
13:24:36 <mdrabe> With the caching, and evacuating on instance deletion events
13:25:05 <edmondsw> how do you plan to demonstrate perf improvement to satisfy efried?
13:25:10 <mdrabe> Respond to efried's comments and introduce the caching to event.py
13:25:35 <mdrabe> Stop calling that instance object retrieval
13:25:38 <efried> I think we're out of runway to get arnoldje to test this.
13:25:52 <edmondsw> right, I was afraid of that
13:26:06 <efried> Who's his replacement, and does said replacement have the wherewithal and time to do it?
13:26:31 <edmondsw> I haven't heard of a replacement... I can ask
13:26:54 <mdrabe> If anything I can test it myself, though I don't have any fancy performance tools
13:27:07 <AndyWojo> edmondsw: The OpenStack User Survey. Only PowerKVM was on the list, I selected other and filled in PowerVM, since I'm in the middle of implementing it
13:28:11 <efried> mdrabe Yeah, I'm obviously concerned that it *works*, but that's not sufficient for me to want to merge it.  We have to have a demonstrable nontrivial performance improvement to justify the risk.
13:28:31 <mdrabe> For the caching I'm still concerned about the pvc case around management of LPARs
13:28:52 <efried> When arnoldje validated the PartitionState change, he was able to produce hard numbers.
13:29:13 <efried> My fear is that this change is bigger & more pervasive, but will yield a smaller return.
13:29:49 <mdrabe> I've no hard numbers, but he said something of a 10-12% deploy time improvement
13:30:00 <edmondsw> AndyWojo I think the last user survey is closed. But I'm hoping to have PowerVM on the October one.
13:30:01 <mdrabe> But there're fewer NVRAM events than PartitionState events during deploy
13:30:43 <AndyWojo> edmondsw: they just sent an e-mail out saying the user survey is now open, and it's for June - Dec
13:30:57 <edmondsw> mdrabe efried yeah, arnoldje had estimated something like 5% improvement for this
13:30:57 <efried> 7.2% improvement was what he said for the PartitionState change.
13:31:01 <AndyWojo> Openstack Operators List
13:31:35 <edmondsw> AndyWojo ok, hadn't seen that yet... guess we missed the boat. Will shoot for the next one then
13:35:20 <edmondsw> annasort gave me a couple names to do perf testing now, I'll ping them to you efried mdrabe
13:35:33 <mdrabe> edmondsw Yea I got em
13:35:45 <efried> edmondsw Ping anyway, maybe your names are different than mine.
13:36:49 <edmondsw> pinged you both on slack
13:37:40 <mdrabe> K so I'll work on that. good?
13:37:50 <efried> Cool man.
13:38:46 <esberglu> Alright let's move on to CI then
13:38:58 <esberglu> #topic PowerVM CI
13:39:34 <esberglu> The network issues caused quite a bit of inconsistency so I redeployed last night
13:40:01 <esberglu> Then the control node's /boot/ dir filled up which also caused a bunch of inconsistencies
13:40:14 <esberglu> Is the proper way to clean that out
13:40:21 <efried> Just can't get a break, can ya
13:40:26 <esberglu> apt-get autoremove?
13:40:53 <edmondsw> esberglu what filled that partition? Ideas on how to prevent that in future?
13:41:31 <esberglu> edmondsw: I'm pretty sure you can just run apt-get autoremove and it cleans it out, however I'm no expert on apt
13:41:41 <esberglu> But since it was at 100% that command was also failing
13:41:56 <esberglu> So I had to manually go in and clean out the old ones
13:42:04 <efried> I wouldn't expect /boot to be affected by autoremove.
13:42:10 <efried> Do you have old kernels lying around?
13:42:16 <efried> I had that happen.
13:42:23 <esberglu> efried: Yeah
13:43:11 <efried> dpkg -l | grep linux-image
13:43:29 <efried> If you see more than one rev, you can *probably* apt-get remove all but the newest.
13:44:11 <esberglu> efried: That sounds scary, what happens if the newest errantly gets deleted?
13:44:23 <efried> You don't boot.
13:44:25 <efried> But don't do that.
13:44:33 <edmondsw> :)
13:44:40 <esberglu> efried: Yeah we just have to make sure that the logic is really good
13:44:51 <efried> This is not something I would automate, dude.
13:44:57 <efried> Do it once to free up space.
13:45:10 <efried> Manually type in the full package names of the old ones.
13:45:14 <esberglu> efried: Yeah but I want to add a step that would clean this every time
13:45:18 <edmondsw> you could automate detection of the problem... cron job that emails you if it sees things are getting filled up?
13:45:26 <edmondsw> but right, don't automate cleanup
13:45:27 <esberglu> And I read something last night saying apt-get autoremove would do that
13:45:28 <efried> "every time" isn't a thing that should happen for old kernel images.
13:45:36 <efried> autoremove won't hurt.
13:45:43 <efried> But I don't think it's likely to help /boot most of the time.
13:47:12 <esberglu> efried: Okay. I'll try to find that article I was reading, but stick with manual cleanup for now
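
A minimal sketch of the one-time manual cleanup efried describes above, assuming an Ubuntu control node; the kernel package version in the remove command is a placeholder, so substitute whatever dpkg actually lists:

    # Show which kernel is currently running; never remove this one.
    uname -r

    # List all installed kernel image packages; multiple old revisions
    # here are what fill up a small /boot partition.
    dpkg -l | grep linux-image

    # Remove one old kernel by its full package name. The version shown
    # is hypothetical; type the exact names dpkg printed above.
    sudo apt-get remove linux-image-4.4.0-21-generic

    # With space freed, autoremove can clean up remaining orphaned
    # dependencies (old headers and modules) left behind.
    sudo apt-get autoremove
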
13:47:20 <efried> You could definitely work up a cron job to keep you informed of filling file systems.
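
A sketch of that monitoring idea, assuming a POSIX shell and a working local mail command; the script name, threshold, and notification address are all hypothetical:

    #!/bin/sh
    # check_disk.sh (hypothetical): warn when any filesystem nears full.
    # Install via cron, e.g.: 0 * * * * root /usr/local/bin/check_disk.sh
    THRESHOLD=90
    # df -P prints capacity as e.g. "92%" in field 5; 0+$5 coerces it to
    # a number. Field 6 is the mount point. NR > 1 skips the header row.
    df -P | awk -v t="$THRESHOLD" 'NR > 1 && 0+$5 >= t {print $6, $5}' |
    while read -r mount pct; do
        echo "$mount is at $pct on $(hostname)" |
            mail -s "disk space warning: $mount" ci-admin@example.com
    done
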
13:47:49 <esberglu> That's all I had for CI
13:47:59 <esberglu> #topic Driver Testing
13:48:06 <esberglu> We kinda covered this above
13:48:21 <esberglu> Any other thoughts about it?
13:52:50 <jay1_> any tentative cut date as such, to close the pike changes ?
13:53:56 <edmondsw> jay1_ the stuff we're still working on for pike is mostly doc changes
13:54:58 <edmondsw> I've got a change in progress for disabling the compute service if there's no VIOS or we can't talk to the NovaLink REST API
13:55:06 <edmondsw> that's about it, I think
13:55:25 <jay1_> edmondsw: how about ISCSI merging, do we have any planned date ?
13:56:22 <edmondsw> jay1_ oh, I thought you were talking about IT... we're not doing iSCSI IT for pike, but yeah, we will be doing that OOT
13:57:02 <edmondsw> efried, I think there are still some IT changes that we need to push to OOT for pike, right? anything else you can think of?
13:57:30 <edmondsw> jay1_ you can look over the TODO etherpad: https://etherpad.openstack.org/p/powervm-in-tree-todos
13:57:48 <jay1_> edmondsw: sure
13:57:48 <efried> edmondsw Should all be in the etherpad, I hope.
13:57:56 <edmondsw> yep
13:59:58 <esberglu> #topic Other Discussion
14:00:10 <esberglu> Any last words?
14:01:09 <edmondsw> supercalifragilisticexpialidocious
14:02:07 <esberglu> lol
14:02:10 <esberglu> #endmeeting