Tuesday, 2017-06-27

00:01 *** thorst has joined #openstack-powervm
00:03 *** thorst has quit IRC
01:04 *** thorst has joined #openstack-powervm
01:10 *** thorst has quit IRC
02:34 *** mdrabe has quit IRC
03:35 *** thorst has joined #openstack-powervm
03:41 *** thorst has quit IRC
04:01 *** esberglu has quit IRC
04:38 *** thorst has joined #openstack-powervm
04:43 *** thorst has quit IRC
06:47 *** thorst has joined #openstack-powervm
06:52 *** thorst has quit IRC
07:29 *** k0da has joined #openstack-powervm
08:49 *** thorst has joined #openstack-powervm
08:53 *** thorst has quit IRC
09:30 *** tonyb_ has joined #openstack-powervm
09:35 *** tonyb has quit IRC
10:26 *** thorst has joined #openstack-powervm
10:29 *** thorst has quit IRC
11:26 *** smatzek has joined #openstack-powervm
12:08 *** jpasqualetto has joined #openstack-powervm
12:08 <openstackgerrit> OpenStack Proposal Bot proposed openstack/ceilometer-powervm master: Updated from global requirements  https://review.openstack.org/477917
12:14 <openstackgerrit> OpenStack Proposal Bot proposed openstack/networking-powervm master: Updated from global requirements  https://review.openstack.org/477976
12:17 <openstackgerrit> OpenStack Proposal Bot proposed openstack/nova-powervm master: Updated from global requirements  https://review.openstack.org/477984
12:19 *** edmondsw has joined #openstack-powervm
12:29 *** kylek3h has joined #openstack-powervm
12:40 <AndyWojo> I am filling out the OpenStack Survey that just got opened up, and PowerVM is not on the hypervisor list for me to choose.
12:49 *** mdrabe has joined #openstack-powervm
12:57 *** jay1_ has joined #openstack-powervm
13:00 *** esberglu has joined #openstack-powervm
13:01 <esberglu> #startmeeting powervm_driver_meeting
13:01 <openstack> Meeting started Tue Jun 27 13:01:15 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01 <openstack> The meeting name has been set to 'powervm_driver_meeting'
13:02 <efried> AndyWojo Well, I wouldn't say what we've got in tree is usable at this point.  Were there other third-party (out-of-tree) drivers in the survey?
13:02 <esberglu> #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
13:02 <esberglu> #topic In Tree Driver
13:02 <esberglu> #link https://etherpad.openstack.org/p/powervm-in-tree-todos
13:03 <esberglu> Anything to talk about here? Or still just knocking through the todos?
13:03 <edmondsw> AndyWojo which survey is that? I thought the Ocata survey was closed a while back. We are working to get on the survey.
13:04 <jay1_> efried: what operations are ready for test in IT?
13:05 <efried> jay1_ That hasn't changed in a while.  SSP disk was the last thing we merged.
13:05 <edmondsw> jay1_ no network yet IT
13:05 <jay1_> that would be the next one to do?
13:06 <edmondsw> jay1_ but you can deploy with SSP boot disk and do some things like stop, restart... see the support matrix
13:06 <edmondsw> efried you have a quick link for that?
13:06 <efried> Config drive will probably be next, cause it's easy.
13:06 <edmondsw> jay1_ network will be one of the priorities for queens, along with config drive
13:06 <efried> #link https://docs.openstack.org/developer/nova/support-matrix.html
13:08 <jay1_> efried, in that matrix PowerVM refers to OOT only?
13:08 <efried> jay1_ IT only
13:08 <jay1_> ah ok.
13:08 <efried> OOT we have a lot more green check marks
13:09 <esberglu> That's OOT
13:09 <edmondsw> esberglu we need to change the OOT version's checkmarks to green... would be much easier to read
13:09 <edmondsw> I'll throw that on the TODO
13:11 <esberglu> Alright sounds like that's it IT
13:12 <esberglu> #topic Out Of Tree Driver
13:13 <jay1_> when is the next ISCSI integration point?
13:13 <jay1_> is that integration done?
13:14 <efried> Have you heard anything from chhavi about the latest pypowervm + https://review.openstack.org/#/c/467599/ ?
13:14 <efried> She was going to sniff test that to make sure we didn't need any further pypowervm fixes so we can cut a new release.  I want to get that done pretty quickly here.
13:14 <edmondsw> efried agreed
13:15 <edmondsw> jay1_ please talk to chhavi about this. I'll send a note as well to try to push this along
13:16 <jay1_> edmondsw: sure
13:18 <edmondsw> note sent
13:18 <esberglu> If it turns out you do need pypowervm fixes let me know and I can push a run through CI with it when ready
13:19 <esberglu> Nvm just clicked on the review
13:20 <esberglu> I don't think it would hit any changes going through our CI?
13:20 <efried> It what?
13:21 <efried> The pypowervm that's merged right now is copacetic.  Last thing merged was the power_off_progressive change, and you already tested that.
13:21 <efried> The question is whether we're going to need anything else in 1.1.6
13:21 <esberglu> Well any pypowervm changes would be related to ISCSI right? Which isn't part of the CI
13:21 <efried> esberglu Well, right, but a regression test wouldn't be a bad thing.
13:21 <esberglu> So I don't know that the changed paths would get hit
13:21 <esberglu> Yeah I can push one anyways just to be safe
13:23 <edmondsw> mdrabe efried should we talk about https://review.openstack.org/#/c/471926/ now?
13:23 <edmondsw> we've had emails flying back and forth... hash it out?
13:24 <edmondsw> mdrabe you still here?
13:24 <mdrabe> Yea I'm gonna whip that up this afternoon I think
13:24 <edmondsw> what exactly does that whipping entail? ;)
13:24 <mdrabe> With the caching, and evacuating on instance deletion events
13:25 <edmondsw> how do you plan to demonstrate perf improvement to satisfy efried?
13:25 <mdrabe> Respond to efried's comments and introduce the caching to event.py
13:25 <mdrabe> Stop calling that instance object retrieval
13:25 <efried> I think we're out of runway to get arnoldje to test this.
13:25 <edmondsw> right, I was afraid of that
13:26 <efried> Who's his replacement, and does said replacement have the wherewithal and time to do it?
13:26 <edmondsw> I haven't heard of a replacement... I can ask
13:26 <mdrabe> If anything I can test it myself, though I don't have any fancy performance tools
13:27 <AndyWojo> edmondsw: The OpenStack User Survey. Only PowerKVM was on the list, I selected other and filled in PowerVM, since I'm in the middle of implementing it
13:28 <efried> mdrabe Yeah, I'm obviously concerned that it *works*, but that's not sufficient for me to want to merge it.  We have to have a demonstrable nontrivial performance improvement to justify the risk.
13:28 <mdrabe> For the caching I'm still concerned in the pvc case around management of LPARs
13:28 <efried> When arnoldje validated the PartitionState change, he was able to produce hard numbers.
13:29 <efried> My fear is that this change is bigger & more pervasive, but will yield a smaller return.
13:29 <mdrabe> I've no hard numbers, but he said something of a 10-12% deploy time improvement
13:30 <edmondsw> AndyWojo I think the last user survey is closed. But I'm hoping to have PowerVM on the October one.
13:30 <mdrabe> But there're fewer NVRAM events than PartitionState events
13:30 <mdrabe> during deploy
13:30 <AndyWojo> edmondsw: they just sent an e-mail out saying the user survey is now open, and it's for June - Dec
13:30 <edmondsw> mdrabe efried yeah, arnoldje had estimated something like 5% improvement for this
13:30 <efried> 7.2% improvement was what he said for the PartitionState change.
13:31 <AndyWojo> OpenStack Operators List
13:31 <edmondsw> AndyWojo ok, hadn't seen that yet... guess we missed the boat. Will shoot for the next one then
13:35 <edmondsw> annasort gave me a couple names to do perf testing now, I'll ping them to you efried mdrabe
13:35 <mdrabe> edmondsw Yea I got em
13:35 <efried> edmondsw Ping anyway, maybe your names are different than mine.
13:36 <edmondsw> pinged you both on slack
13:37 <mdrabe> K so I'll work on that. good?
13:37 <efried> Cool man.
13:38 <esberglu> Alright lets move on to CI then
13:38 <esberglu> #topic PowerVM CI
13:39 <esberglu> The network issues caused quite a bit of inconsistency so I redeployed last night
13:40 <esberglu> Then the control node's /boot/ dir filled up which also caused a bunch of inconsistencies
13:40 <esberglu> Is the proper way to clean that out
13:40 <efried> Just can't get a break, can ya
13:40 <esberglu> apt-get autoremove?
13:40 *** thorst has joined #openstack-powervm
13:40 <edmondsw> esberglu what filled that partition? Ideas on how to prevent that in future?
13:41 <esberglu> edmondsw: I'm pretty sure you can just run apt-get autoremove and it cleans it out, however I'm no expert on apt
13:41 <esberglu> But since it was at 100% that command was also failing
13:41 <esberglu> So I had to manually go in and clean out the old ones
13:42 <efried> I wouldn't expect /boot to be affected by autoremove.
13:42 <efried> Do you have old kernels lying around?
13:42 <efried> I had that happen.
13:42 <esberglu> efried: Yeah
13:43 <efried> dpkg -l | grep linux-image
13:43 <efried> If you see more than one rev, you can *probably* apt-get remove all but the newest.
13:44 <esberglu> efried: That sounds scary, what happens if the newest errantly gets deleted?
13:44 <efried> You don't boot.
13:44 <efried> But don't do that.
13:44 <esberglu> efried: Yeah we just have to make sure that the logic is really good
13:44 *** thorst has quit IRC
13:44 <efried> This is not something I would automate, dude.
13:44 <efried> Do it once to free up space.
13:45 <efried> Manually type in the full package names of the old ones.
13:45 <esberglu> efried: Yeah but I want to add a step that would clean this every time
13:45 <edmondsw> you could automate detection of the problem... cron job that emails you if it sees things are getting filled up?
13:45 <edmondsw> but right, don't automate cleanup
13:45 <esberglu> And I read something last night saying apt-get autoremove would do that
13:45 <efried> "every time" isn't a thing that should happen for old kernel images.
13:45 <efried> autoremove won't hurt.
13:45 <efried> But I don't think it's likely to help /boot most of the time.
13:47 <esberglu> efried: Okay. I'll try to find that article I was reading, but stick with manual cleanup for now
13:47 <efried> You could definitely work up a cron job to keep you informed of filling file systems.
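[Editor's note: a minimal sketch of the filesystem-watch idea suggested above, written in Python rather than shell. The 90% threshold and the list of mount points are illustrative choices, not anything stated in the channel.]

```python
import os
import shutil

def fs_usage_fraction(path):
    """Fraction of the filesystem containing `path` that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def nearly_full(path, threshold=0.9):
    """True when the filesystem holding `path` is over `threshold` full."""
    return fs_usage_fraction(path) > threshold

if __name__ == "__main__":
    # Paths and threshold are illustrative; run from cron and mail any output.
    for mount in ("/", "/boot"):
        if os.path.exists(mount) and nearly_full(mount):
            print("%s is over 90%% full" % mount)
```

Cron only e-mails a job's output, so printing nothing when everything is healthy gives the "email me when it fills up" behavior for free.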
13:47 <esberglu> That's all I had for CI
13:47 <esberglu> #topic Driver Testing
13:48 <esberglu> We kinda covered this above
13:48 <esberglu> Any other thoughts about it?
13:52 <jay1_> any tentative dcut as such, to close the pike changes?
13:52 *** smatzek has quit IRC
13:53 <edmondsw> jay1_ the stuff we're still working on for pike is mostly doc changes
13:54 *** thorst has joined #openstack-powervm
13:54 <edmondsw> I've got a change in progress for disabling the compute service if there's no VIOS or we can't talk to NovaLink REST API
13:55 <edmondsw> that's about it, I think
13:55 <jay1_> edmondsw: how about ISCSI merging, do we have any planned date?
13:56 *** thorst has quit IRC
13:56 <edmondsw> jay1_ oh, I thought you were talking about IT... we're not doing iSCSI IT for pike, but yeah, we will be doing that OOT
13:57 <edmondsw> efried, I think there are still some IT changes that we need to push to OOT for pike, right? anything else you can think of?
13:57 <edmondsw> jay1_ you can look over the TODO etherpad: https://etherpad.openstack.org/p/powervm-in-tree-todos
13:57 <jay1_> edmondsw: sure
13:57 <efried> edmondsw Should all be in the etherpad, I hope.
13:59 <esberglu> #topic Other Discussion
14:00 <esberglu> Any last words?
14:02 <openstack> Meeting ended Tue Jun 27 14:02:10 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:02 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-06-27-13.01.html
14:02 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-06-27-13.01.txt
14:02 <openstack> Log:            http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-06-27-13.01.log.html
14:19 <esberglu> efried: Turns out there is a command for removing old kernels
14:21 *** smatzek has joined #openstack-powervm
14:27 <edmondsw> esberglu cool
14:28 <edmondsw> efried do we need to s/e/str(e)/ for something like this?
14:28 <edmondsw> LOG.debug('Instance with PowerVM UUID %s not found: %s', pvm_uuid, e)
14:28 <edmondsw> I can never remember...
14:33 <edmondsw> efried nm, seems like it should work as-is
14:44 <efried> edmondsw I can never remember that one either.
14:45 <efried> It's also a different answer for py2/py3
14:45 <efried> But I think the oslo_log library fixed it up so you don't need to do anything to it anymore.
14:46 <efried> And according to the guidelines, you're actually not _supposed_ to do anything to it, because that allows the string processing to be deferred into the logger, so it only happens if the log message is going to be emitted.
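[Editor's note: the deferred-interpolation point can be demonstrated with the stdlib logging module, which oslo_log builds on. The record only renders its %s arguments, including str(e), when the message is actually emitted; the class and message text below are illustrative.]

```python
import logging

class Exploding:
    """Object whose str() raises, proving suppressed messages never render."""
    def __str__(self):
        raise RuntimeError("str() was called")

logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger(__name__)

# DEBUG is below the INFO threshold, so the logger never calls
# str(Exploding()) and no exception is raised here.
LOG.debug('Instance with PowerVM UUID %s not found: %s', 'abc', Exploding())

# At an enabled level, %s on an exception object renders via str(e),
# so no manual str(e) conversion is needed.
try:
    raise LookupError('no such instance')
except LookupError as e:
    LOG.info('Instance with PowerVM UUID %s not found: %s', 'abc', e)
```

This is why passing `e` straight through, as in the line edmondsw quoted, works as-is.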
14:52 *** mdrabe has quit IRC
14:56 <openstackgerrit> Eric Fried proposed openstack/nova-powervm master: Clean up log messages  https://review.openstack.org/476695
15:10 *** mdrabe has joined #openstack-powervm
15:11 *** k0da has quit IRC
15:13 *** thorst has joined #openstack-powervm
15:20 <esberglu> efried: edmondsw: thorst: Any powervm patches that need a CI recheck? It's back online
15:21 <efried> I don't see anything in my queue
15:22 <esberglu> efried: I rechecked the log one for you
15:23 <esberglu> Anyone know what the status of that is? It's been sitting around for a while
15:23 <efried> thorst ^^ ??
15:23 <thorst> I think it's still a TODO with manas, but must be very low priority
15:23 <thorst> mdrabe: can you verify?
15:24 <mdrabe> Manas hasn't mentioned that at all to me
15:24 <efried> I can fix up the typos.
15:24 <efried> Haven't we already done the work for this?
15:25 <thorst> not sure
15:29 *** thorst has quit IRC
15:31 <openstackgerrit> Eric Fried proposed openstack/nova-powervm master: Allow dynamic enable/disable of SRR capability  https://review.openstack.org/399579
15:32 <efried> mdrabe IIRC, this capability was implemented at the core level - we didn't need changes in pypowervm or the community code.
15:32 <mdrabe> I thought it was a different attribute though
15:33 <efried> I'm 95% sure there was no enforcement/checking at or above pypowervm, so it should just work.
15:33 <mdrabe> There probably should be enforcement
15:33 <esberglu> efried: edmondsw: We aren't using zuul-merger correctly
15:33 <mdrabe> Not enforcement, but validation
15:33 <esberglu> We should be using the refs that it creates, not pulling from gerrit
15:34 <efried> mdrabe There's a capability bit in managed_system
15:35 <mdrabe> There actually might be logic that says "Hey you're trying to flip RR while the partition is active - FAIL"
15:36 <mdrabe> I'd need to look at the code though, I haven't done that
15:38 <efried> mdrabe https://github.com/powervm/pypowervm/commit/97f15c2e646e4d5273f1dd8ddb684430819a09b6
15:40 <efried> mdrabe There's no checks like that in pypowervm or nova_powervm.  And if there are in pvc, that's not the business of this blueprint.
15:40 <esberglu> efried: edmondsw: Actually I take that back. Since we have the patching logic we need to use gerrit. And I don't really want to be pulling some stuff from gerrit and some from the zuul-mergers
15:40 <mdrabe> efried: https://github.com/powervm/pypowervm/blob/develop/pypowervm/utils/validation.py#L500-L510
15:40 <mdrabe> I don't think nova-powervm uses that, but pvc does
15:40 <efried> hmph, I stand corrected.
15:42 <efried> mdrabe So then it looks like we would need a pypowervm change.
15:43 <efried> Still not really sure a nova-powervm blueprint is pertinent.
15:43 <mdrabe> Yea, it'd probably be pretty lightweight
15:43 <mdrabe> Yea I'm wondering why the bp was put up
15:48 <efried> mdrabe Might as well get that change into this 1.1.6, I guess.
15:52 <mdrabe> efried I'll send a note to Manas to see if he wants to drive that, I got an unrelated question for ya...
15:52 <mdrabe> Do you remember why the high bit on LPAR UUIDs can't be set?
15:52 <efried> mdrabe I'm already working on the pypowervm change FYI.
15:52 <mdrabe> Oh nice
15:53 <efried> mdrabe The Power platform uses a different UUID format version than OpenStack.
15:53 <efried> I don't know the specific version numbers and whatnot, could find it out if necessary.
15:53 <efried> But the one Power uses always has the high bit zero, and the one OpenStack uses doesn't.
15:54 <efried> mdrabe Yeah, that double lookup thing sucks ass.
15:56 <mdrabe> efried: Hypothetical: What if we just knocked off the high bit of every instance UUID on deploy?
15:57 <efried> mdrabe At first blush, I don't hate that idea at all.
15:57 <efried> We're already compromised collision-wise.
15:57 <efried> It's just a chance we're taking.
15:57 <efried> So we might as well commit to it.
15:57 <mdrabe> Compromised how?
15:58 <efried> The hypothetical situation where there's two instances whose UUIDs only differ by that one bit.
15:58 <efried> That's why.
15:58 <efried> We *do* knock that bit off.
15:58 <efried> We can't tell Nova to change the UUID.
15:58 <efried> So in nova it is what it is.
15:59 <efried> So yeah, in the scenario where two instances are created with UUIDs that only differ by that one bit, I believe creation of the second one will just fail.
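[Editor's note: the bit-knocking convention described above can be sketched in a few lines. This is an illustration of the idea, not the actual pypowervm helper; the function name is made up.]

```python
import uuid

def to_pvm_uuid(instance_uuid):
    """Clear the high-order bit of an OpenStack UUID to produce a
    PowerVM-acceptable LPAR UUID (sketch of the convention discussed)."""
    u = uuid.UUID(instance_uuid)
    return str(uuid.UUID(int=u.int & ~(1 << 127)))

# Two UUIDs differing only in that bit collide after conversion --
# the "compromised collision-wise" scenario efried describes.
a = to_pvm_uuid('fedcba98-1234-5678-9abc-def012345678')
b = to_pvm_uuid('7edcba98-1234-5678-9abc-def012345678')
assert a == b == '7edcba98-1234-5678-9abc-def012345678'
```

The mapping is cheap in the forward direction but lossy, which is exactly why the reverse lookup (LPAR UUID back to instance UUID) is the hard part discussed below.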
15:59 <mdrabe> By the time it gets to the driver it's already set in motion?
16:00 <efried> Oh yeah.  By the time we get to the driver, the Instance object has already been created and populated.
16:00 <mdrabe> Yea and probably other objects that reference the UUID
16:01 <efried> I suppose we could... fail to deploy any instance that came at us with the high bit set.
16:01 <efried> So, half of them :)
16:01 <efried> Not a viable solution.
16:01 <efried> The only real problem is the reverse mapping.
16:02 <efried> And of course, any solutions to that problem are pretty heavyweight and/or brittle.
16:02 <efried> Like trying to maintain a mapping cache in the driver.
16:02 <mdrabe> Right, I don't wanna have to look up the instance object at all given the LPAR UUID, if all I'm trying to find is the instance UUID
16:03 <efried> mdrabe Right, here's where I thought it would be nice to have a (nova) API for instance_exists(uuid)
16:03 <efried> A very quick db check to see if that UUID is in there at all.  It would still have to go across the wire, but it would be sending basically a bool instead of a whole Instance object.
16:04 <efried> Making a case for the usefulness of that API in the community might be tough.
16:05 <mdrabe> I remember discussing that with arnoldje and there was something weird about that but I can't remember what
16:11 *** thorst has joined #openstack-powervm
16:25 <efried> mdrabe 5511
16:26 <efried> thorst ^^
16:26 <efried> And I say we merge that spec.
16:29 <mdrabe> efried +1
16:29 <efried> mdrabe +1 to merging the spec too?
16:29 *** burgerk has joined #openstack-powervm
16:30 <mdrabe> Yea, we'll get some test around it
16:35 *** smatzek_ has joined #openstack-powervm
16:36 <efried> thorst If you get a chance, clarify that process?
16:36 <efried> Talking about merging https://review.openstack.org/#/c/399579/ -- not sure what order that's supposed to happen, or what it commits us to, or whatever.
16:36 <thorst> it just means that we will get it done
16:36 <thorst> by Pike
16:37 <thorst> if already done, then just merge away  :-)
16:37 <efried> The nova-powervm side will just be updating the pypowervm rev, which the bot will do for us ultimately.
16:37 <thorst> then yeah, I'm good with the merge
16:37 <thorst> I'll do a quick review
16:37 *** smatzek has quit IRC
16:37 <efried> thorst 5511 as well please
16:38 <thorst> there are no changes to the resize flow for this?
16:38 <efried> thorst AFAICT, the resize flow doesn't ever touch this in nova-powervm.
16:38 <thorst> the spec says that the 'resize' will update that attribute
16:39 <thorst> so if you deployed it the old way
16:39 <efried> I suppose you could mebbe do it via an extra_spec in your flavor when you resize.
16:39 <thorst> you could update it to the new way
16:39 <efried> Right, original flavor has the bit off, new flavor has the bit on.
16:40 <efried> But nothing in the community code looks at that field specifically.
16:40 <thorst> ok.  Yeah, if that's all transparent and this is just a pypowervm change...then +2
16:40 <thorst> which it sounds like it is
16:40 <efried> thorst I'm *pretty* sure that's the case.  mdrabe said he would get some testing done around it.
16:41 <thorst> I -1'd 5511...I don't think that attribute was always there
16:42 <thorst> seems like a get(attr, False) would be safer
16:44 <efried> thorst The attr is definitely there - responded with link.
16:46 *** burgerk_ has joined #openstack-powervm
16:47 <thorst> since beginning of time?
16:50 <efried> thorst It's pypowervm-to-pypowervm.
16:50 <efried> If you're running the code I changed, you're running against the code that always has that field in the dict.
16:51 *** burgerk has quit IRC
16:51 <efried> ...which defaults to False if the REST API doesn't return that field.
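[Editor's note: the safer-default pattern thorst asks for is just `dict.get` with a fallback. A hypothetical sketch; the attribute name and function are illustrative, not the real pypowervm field.]

```python
def srr_capable(ms_attrs):
    """Read a capability flag from a managed-system attribute dict,
    treating an absent field (e.g. an older REST API) as False.
    'srr_capable' is an illustrative key, not the exact pypowervm name."""
    return ms_attrs.get('srr_capable', False)

assert srr_capable({'srr_capable': True}) is True
assert srr_capable({}) is False  # field missing -> safe default
```

With `get(attr, False)`, code written against a newer schema degrades gracefully when pointed at a payload that predates the attribute, which is the "was it always there?" concern above.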
16:55 <efried> esberglu -- new test we need to disable?
17:03 <efried> esberglu https://review.openstack.org/#/c/413606/
17:03 <efried> esberglu And we don't implement that driver method.
17:05 *** jay1_ has quit IRC
17:07 *** smatzek has joined #openstack-powervm
17:08 *** smatzek_ has quit IRC
17:09 *** burgerk_ has quit IRC
17:38 <esberglu> efried: 5512
17:53 *** k0da has joined #openstack-powervm
17:58 <efried> mdrabe 5511 is merged.
17:58 *** k0da has quit IRC
17:58 <efried> esberglu +2
17:58 <esberglu> efried: thx
17:58 <efried> Instance recheckability?
17:59 <efried> esberglu ^ ?
17:59 <esberglu> efried: Yep
18:05 *** jay1_ has joined #openstack-powervm
19:01 *** dwayne has quit IRC
19:17 *** dwayne has joined #openstack-powervm
19:35 *** jay1_ has quit IRC
19:44 <efried> esberglu Did you rebuild VIOSes recently in any of the CI nodes?
<efried> Wherever this guy ran doesn't have a media repo
19:45 <efried> And I guess we don't have the logic to create one if it doesn't exist?? I really thought we did.
19:47 <edmondsw> efried added a late comment on 5511
19:48 <efried> edmondsw Okay.  You feel like proposing the change?
19:50 <esberglu> efried: No not recently. Probably a month ago?
19:51 <efried> esberglu Do you have a way to figure out which VIOS that guy ran on, and verify whether there's a media repo there?
19:51 <efried> If there ain't, I suspect we may have a timing bug :(
19:51 <efried> I mean, if there is.
19:53 <openstackgerrit> Merged openstack/nova-powervm master: Updated from global requirements  https://review.openstack.org/477984
19:53 <esberglu> efried: Command for checking?
19:54 <efried> esberglu Uhhh.  Stand by.
19:54 <efried> esberglu pvmctl repo list
20:12 <esberglu> efried: Yeah it does
20:21 *** thorst has quit IRC
20:31 *** smatzek has quit IRC
20:33 *** k0da has joined #openstack-powervm
20:39 <efried> esberglu I rechecked that guy.  I expect we'll never see this problem again.  Probably not worth fixing tbh.
20:40 <esberglu> efried: Sounds good. How did you know that's what it was? Just so I can keep an eye out
20:40 <efried> esberglu HTTP error 400 for method PUT on path /rest/api/web/File/contents/91878c76-0088-4bfb-97bc-2589b3cca13c: Bad Request -- The target VIOS does not have a MediaRepository.  One must be created before attempting to upload an ISO file into the repository.
20:40 <efried> That should never happen.
20:41 <efried> Cause we check and create the repo before we use it.
20:41 <efried> And the creation shouldn't be necessary here, cause it already exists.
20:42 <efried> This seems like it would have to be a REST server problem.
20:43 <efried> esberglu We don't get the REST logs yet, do we?
20:44 <esberglu> efried: Nope...
20:44 <esberglu> Now that I don't have to redeploy every 2 minutes I'm starting to burn through the backlog a lot quicker
20:44 <esberglu> So it should be coming soon
20:45 <efried> esberglu Okay, so yeah, that's gotta be a problem on the REST side, and we won't get any debug from changh et al without the logs.
20:46 <esberglu> efried: Still a one-off though you're thinking?
20:46 <efried> esberglu Unless that pvm-rest and/or VIOS is broke.
20:46 <efried> Cause nothing in that code path has changed in forever.
20:47 <esberglu> Eh I don't think so. I wasn't seeing any issues on that system
<efried> esberglu Any idea what's going on here?
20:48 <efried> I'm posting another recheck there.
20:48 <esberglu> efried: That's the error we used to get when git.o.o was failing clones. Except I believe it was error 9 instead of 24
20:52 <esberglu> So probably just a networking glitch? We never found the root cause
20:54 *** k0da has quit IRC
20:59 *** jpasqualetto has quit IRC
21:36 *** thorst has joined #openstack-powervm
21:42 *** thorst has quit IRC
21:42 *** k0da has joined #openstack-powervm
21:43 *** esberglu has quit IRC
21:58 *** esberglu has joined #openstack-powervm
22:04 *** esberglu has quit IRC
22:06 *** dwayne has quit IRC
22:14 *** mdrabe has quit IRC
22:16 *** thorst has joined #openstack-powervm
22:17 *** thorst has quit IRC
22:23 *** kylek3h has quit IRC
22:51 <edmondsw> esberglu why would we want to update pypowervm past for ocata CI runs?
23:03 *** dwayne has joined #openstack-powervm
23:22 *** k0da has quit IRC
23:32 *** k0da has joined #openstack-powervm
23:51 *** thorst has joined #openstack-powervm
23:52 *** thorst has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!