Tuesday, 2018-03-20

01:01 *** k0da has quit IRC
01:59 *** nullsign has joined #openstack-powervm
03:14 *** zerick has quit IRC
03:18 *** zerick has joined #openstack-powervm
03:20 *** zerick has quit IRC
04:49 *** chhagarw has joined #openstack-powervm
08:04 *** AlexeyAbashkin has joined #openstack-powervm
08:21 *** k0da has joined #openstack-powervm
10:45 *** k0da has quit IRC
11:01 *** chhagarw has quit IRC
12:07 *** chhagarw has joined #openstack-powervm
12:17 *** edmondsw has joined #openstack-powervm
12:48 *** AlexeyAbashkin has quit IRC
13:10 *** AlexeyAbashkin has joined #openstack-powervm
13:16 <edmondsw> esberglu any idea why this still hasn't merged? https://review.openstack.org/#/c/553168/
13:16 <edmondsw> seems like the gate must have lost it... how do we get it to go again, remove and readd +@?
13:17 <efried> Or I can try tacking on a +W
13:18 <edmondsw> well we both did :)
13:29 *** openstackgerrit has joined #openstack-powervm
13:29 <openstackgerrit> Merged openstack/ceilometer-powervm master: Updated from global requirements  https://review.openstack.org/553168
13:36 *** tjakobs has joined #openstack-powervm
13:39 *** esberglu has quit IRC
13:42 *** esberglu has joined #openstack-powervm
14:00 <edmondsw> #startmeeting PowerVM Driver Meeting
14:00 <openstack> Meeting started Tue Mar 20 14:00:08 2018 UTC and is due to finish in 60 minutes.  The chair is edmondsw. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00 *** openstack changes topic to " (Meeting topic: PowerVM Driver Meeting)"
14:00 <openstack> The meeting name has been set to 'powervm_driver_meeting'
14:00 <edmondsw> agenda: https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
14:00 <edmondsw> #topic In-Tree Driver
14:00 *** openstack changes topic to "In-Tree Driver (Meeting topic: PowerVM Driver Meeting)"
14:01 <edmondsw> esberglu I just gave you a few easy comments on the vscsi commit
14:01 <edmondsw> then that should be good for efried to look at
14:02 <efried> I'm still ploughing my way through my review backlog, and the in-tree patches are on it.
14:02 <edmondsw> we're stacking things up for the nova cores to look at, with none of them actually looking for a while
14:02 <esberglu> edmondsw: Saw that. Yep once I fix those it's ready for review
14:02 <efried> We ought to have runways kicked off "soon" - like maybe this week
14:02 <edmondsw> efried I would look at that one next of the IT patches
14:02 <esberglu> I live tested attach/detach again which looks good and also got extend_volume working
14:02 <edmondsw> next on my list is snapshot, then disk adapter
14:02 <esberglu> edmondsw: Cool
14:03 <edmondsw> esberglu anything else you want to say for IT?
14:03 <efried> in which case we can queue up the couple we have ready
14:03 <esberglu> I just had a question about microversions
14:03 <edmondsw> efried sorry I didn't follow that
14:03 <edmondsw> ask away
14:04 <esberglu> When I was issuing commands using the cinder CLI it was defaulting to 3.0
14:04 <esberglu> In cinder/api/openstack/api_version_request.py
14:05 <esberglu> I changed the _MIN_API_VERSION = "3.42"
14:05 <esberglu> The level required for extending attached volumes
14:05 <esberglu> Is that the proper way to go about changing the min?
14:05 <efried> For the CLI?  Yes.  It isn't?
14:05 <esberglu> I couldn't find a clear answer in my googling
14:05 <edmondsw> wait... maybe I'm not following what you're trying to do
14:06 <edmondsw> are you working on a cinder change?
14:06 <efried> edmondsw: I got this.
14:06 <esberglu> edmondsw: No I was testing extend_volume in the vSCSI change
14:06 <efried> esberglu: Different CLIs are different.
14:06 <edmondsw> then you just need to set an env var, not alter code
14:07 * edmondsw letting efried explain
14:07 <efried> Some default to minimum version.  Some default to a specific version.  Some negotiate latest version with the server per call.  Clearly cinder is the first one.
14:07 <efried> And yes, there's an env var and/or flag you can specify to the CLI to get a different version.
14:07 <efried> AND the CLI has to know how to handle whatever operation you're trying to execute.
14:08 <efried> So sometimes it may not be sufficient just to increase the version.
14:08 <edmondsw> I think in this case I checked and it should be, but that was last week so... :)
14:08 <efried> But it sounds like in this case you determined that the CLI *does* support it, and that you need to use 3.42 to get that support switched on.
14:08 <efried> So you're doing the right thing.
14:09 <edmondsw> efried by altering code? why would you say that over using an env var?
14:09 <efried> eh?  I never said altering code.
14:09 <edmondsw> esberglu said he changed _MIN_API_VERSION in cinder/api/openstack/api_version_request.py
14:09 <efried> Oh, I didn't follow that esberglu was actually changing code.  My bad.
14:09 <edmondsw> or at least that's how I read it
14:09 <esberglu> Yeah I should have used an env var but the end result is the same
14:09 <efried> esberglu: There'll be an env var and/or CLI option to set the microversion.
14:09 <efried> Do that instead.
14:10 <efried> But if you're just testing locally, meh.
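The gate esberglu hit can be sketched in a few lines. This mirrors what cinder's microversion comparison does conceptually; the function names here are illustrative, not cinder's actual API.

```python
# Sketch of the microversion check behind the behavior discussed above.
# Microversions compare as (major, minor) tuples; the names here are
# illustrative stand-ins, not cinder's real APIVersionRequest interface.
def parse(version):
    major, minor = version.split('.')
    return int(major), int(minor)

def supports_extend_attached(requested):
    """Extending an attached volume needs microversion >= 3.42."""
    return parse(requested) >= parse('3.42')

# The CLI was defaulting to the minimum, 3.0, so the operation was refused;
# requesting 3.42 (via env var or CLI flag, not a code edit) enables it.
print(supports_extend_attached('3.0'))   # False
print(supports_extend_attached('3.42'))  # True
```

Tuple comparison is why "3.9" correctly sorts below "3.42": 9 < 42 as integers, even though "3.9" > "3.42" as strings.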
14:10 <efried> edmondsw: Runways.  In Nova.  Basically a mechanism to promote more equitable distribution of review time.  See https://etherpad.openstack.org/p/nova-runways-rocky
14:10 <esberglu> Anyways, point is that extend_volume works for attached vSCSI volumes if using the right microversion
14:11 <esberglu> That's all I had
14:11 <esberglu> Just need reviews on snapshot and vscsi
14:12 <edmondsw> efried we need to get the spec approved...
14:12 <esberglu> efried: Saw that you asked about fast approve there, hopefully someone will push it through
14:12 <edmondsw> I pinged melwitt the other day and she said just to put it on https://etherpad.openstack.org/p/rocky-nova-priorities-tracking which it was, but no activity yet
14:13 *** k0da has joined #openstack-powervm
14:13 <efried> Yeah.  Re spec approval, when we talked about it the other day, I mentioned that we shouldn't be too strict about the "whole approved blueprint" aspect of this.
14:13 <efried> So like, I fully intend to put the spec down as part of the runway if it's not approved by the time we get this going.
14:14 <efried> esberglu: btw, are you planning to go on vacation any time soon?
14:14 <efried> or be otherwise unavailable to address review comments quickly?
14:14 <esberglu> efried: Nothing more than a day or 2 until July
14:14 <efried> cause that would be the main thing stopping us from queuing up a runway.
14:15 <edmondsw> we won't want to wait for "The code for the blueprint must be 100% ready for review"
14:16 <edmondsw> I read the "If a blueprint is too large" following sentence as saying it still has to be 100% ready, but it doesn't have to all be reviewed in the same runway... but I hope I'm wrong there
14:17 <efried> jgwentworth was supposed to leave the discussion in a separate etherpad.
14:17 <efried> It's at the bottom.
14:18 <edmondsw> cool, I'll read over that later
14:18 <edmondsw> and add comments as appropriate
14:18 <edmondsw> tx for the link
14:18 <edmondsw> anything else for IT?
14:18 <edmondsw> #topic Out-of-Tree Driver
14:18 *** openstack changes topic to "Out-of-Tree Driver (Meeting topic: PowerVM Driver Meeting)"
14:19 <edmondsw> I did some more reviewing on the refactor... need to finish that and post the comments
14:19 <efried> I did some reviewing there.  Did I post comments?
14:19 <efried> I can't remember whether I saved 'em.
14:20 <edmondsw> yep, I see 'em
14:20 <efried> I'm still -0.5 until we get some live testing.  Which is gonna suck.
14:20 <edmondsw> yeah, I think we're all agreed this will need some good testing
14:21 <edmondsw> just wanted to get comments addressed first, so we're not doing that multiple times
14:21 <edmondsw> and I uncovered something interesting while digging through this... we don't have iSCSI live migration implemented
14:21 <edmondsw> so I added that to the TODO list. PowerVC wants it
14:22 <edmondsw> the other big thing OOT is https://review.openstack.org/552172
14:23 <edmondsw> hope to see a pypowervm release today or tomorrow so we can propose a requirements bump and unblock that
14:23 <edmondsw> anything else to discuss OOT?
14:23 <efried> have we released pypowervm yet?  Or tagged it or whatever?
14:24 <efried> Cause I had one comment in the refactor about something that we should be doing in pypowervm instead of nova-powervm.
14:24 <edmondsw> I asked about that yesterday, and you and hsien both said sure, but it hasn't been done
14:24 <efried> So if we could get that into the release, that would help.
14:24 <edmondsw> let's drive that today
14:25 <esberglu> efried: edmondsw: IIRC that was a vscsi comment that will apply IT as well
14:25 <edmondsw> ah, yeah, that would apply to the IT commit as well
14:26 <edmondsw> anything else?
14:26 <efried> Well, first we need someone to confirm that always caching it is The Right Thing.
14:26 <efried> I don't know from WWPNs, so that someone ain't me.
14:27 <edmondsw> certainly... you want to discuss that here or after the mtg?
14:27 <edmondsw> I'd rather do after so I can stop and think about it
14:27 <edmondsw> #topic Device Passthrough
14:27 *** openstack changes topic to "Device Passthrough (Meeting topic: PowerVM Driver Meeting)"
14:28 <edmondsw> efried and I went through use cases yesterday and took some notes
14:28 <edmondsw> I need to work that up and get a mtg on the calendar with the NovaLink guys
14:29 <edmondsw> efried you have the floor
14:29 <efried> Nothing to add
14:29 <edmondsw> #topic PowerVM CI
14:29 *** openstack changes topic to "PowerVM CI (Meeting topic: PowerVM Driver Meeting)"
14:30 <edmondsw> esberglu ?
14:30 <esberglu> Since the start of the weekend CI has not been looking great, a bunch of new failures and long runs
14:31 <esberglu> I've been taking inventory of failures here https://etherpad.openstack.org/p/powervm_tempest_failures
14:31 <esberglu> That's what I'll be working on today
14:32 <edmondsw> esberglu sure... focus on that and hold off on the vscsi IT commit respin while we work out the question of whether to move that into pypowervm
14:32 <esberglu> edmondsw: Yep I'm gonna just sit on IT until we start getting some movement so I don't have to rebase the world
14:32 <esberglu> Other than that the CI management upgrade is ongoing, facing some roadblocks upgrading nodepool
14:32 <esberglu> We're currently on nodepool 0.3.0
14:33 <esberglu> Starting in 0.4.0 they don't allow the flow of taking an image, doing some stuff, taking a snapshot of it and spawning from the snapshot
14:33 <esberglu> And instead you have to use diskimage-builder
14:33 <esberglu> Which from what I've read so far only supports 14.04
14:33 <esberglu> And I really don't want to go back to 14.04 from 16.04
14:34 <edmondsw> I hope that's not right... someone you can catch on IRC to talk about that?
14:34 <esberglu> But there may be a way around that, I need to do some more recon
14:34 <edmondsw> not sure who... maybe tonyb?
14:35 <esberglu> edmondsw: I really haven't looked too much into it, just saw a blurb about it. I'm sure there are people using new nodepool with 16.04
14:35 <esberglu> So there's got to be a solution
14:36 <esberglu> The other thing I need to do is update the CI firmware
14:36 <edmondsw> getting the CI stable again is obviously the priority, but you might want to shoot off a couple feelers so that you have suggestions ready to try when you can get back to this
14:36 <esberglu> So I will need to find a good time to take the CI down for a day or so
14:37 <edmondsw> that related to the undercloud moving to queens, or just normal need to apply security updates and such?
14:37 <esberglu> Security updates
14:37 <edmondsw> ok good
14:37 <edmondsw> maybe do that on a Friday?
14:38 <esberglu> edmondsw: Yeah that was my plan, probably not until next week though
14:38 <esberglu> That's it for me
14:38 <edmondsw> #topic Open Discussion
14:38 *** openstack changes topic to "Open Discussion (Meeting topic: PowerVM Driver Meeting)"
14:38 <esberglu> I'll try to time it so we don't have much in the pipeline getting blocked
14:38 <edmondsw> I had one thing to bring up here
14:39 <edmondsw> we talked a little about the logo the other day: http://eavesdrop.openstack.org/irclogs/%23openstack-powervm/%23openstack-powervm.2018-03-14.log.html#t2018-03-14T19:03:19
14:39 <edmondsw> but I don't think I saw esberglu chime in
14:39 <edmondsw> any thoughts?
14:39 <edmondsw> or if anyone else is lurking and wants to throw something out there...
14:39 <esberglu> edmondsw: +1 on gorilla, thought that was a great idea
14:40 <edmondsw> then barring something unexpected, I'll ask for gorilla
14:40 <edmondsw> that's it from me
14:40 <esberglu> nothing else from me
14:40 <edmondsw> sure we can talk about that... let me go back to your comment
14:41 <efried> def get_physical_wwpns(adapter):
14:41 <efried>     """Returns the active WWPNs of the FC ports across all VIOSes on system.
14:41 <efried>     :param adapter: pypowervm.adapter.Adapter for REST API communication.
14:41 <efried>     """
14:41 <efried>     vios_feed = vios.VIOS.get(adapter, xag=[c.XAG.VIO_STOR])
14:41 <efried>     wwpn_list = []
14:41 <efried>     for vwrap in vios_feed:
14:41 <efried>         wwpn_list.extend(vwrap.get_active_pfc_wwpns())
14:41 <efried>     return wwpn_list
14:41 <efried>     def get_active_pfc_wwpns(self):
14:41 <efried>         """Returns a set of Physical FC Adapter WWPNs of 'active' ports."""
14:41 <efried>         # The logic to check for active ports is poor.  Right now it only
14:41 <efried>         # checks if the port has NPIV connections available.  If there is a
14:41 <efried>         # FC, non-NPIV card...then this logic fails.
14:41 <efried>         #
14:41 <efried>         # This will suffice until the backing API adds more granular logic.
14:41 <efried>         return [pfc.wwpn for pfc in self.pfc_ports if pfc.npiv_total_ports > 0]
14:42 <efried> So the question is: does the list of pfc_ports, or their number of npiv_total_ports, never* change?   (*without, like rebooting)
14:42 * efried has GOT to figure out how to turn off emojis in this IRC client
14:42 <edmondsw> physical WWPNs shouldn't change unless you hotplug an adapter... but I think you can do that?
14:43 <efried> no idea
14:43 <edmondsw> so this may have been an oversight when the code was first written and we've just never hit an issue because folks don't hotplug FC adapters on a regular basis
14:43 <efried> If you can, then the nova-powervm code is wrong.
14:43 <efried> yeah, that.
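The caching hazard being debated can be shown with a toy sketch. Everything here is hypothetical scaffolding (the fetch callable stands in for the pypowervm VIOS feed lookup); it is not nova-powervm code, just an illustration of why caching WWPNs is only safe if the port inventory never changes.

```python
# Toy illustration: if physical WWPNs are fetched once and cached, an FC
# adapter hotplugged afterwards is invisible until the cache is refreshed.
class WwpnSource:
    def __init__(self, fetch):
        self._fetch = fetch      # hypothetical stand-in for the REST lookup
        self._cache = None

    def get_physical_wwpns(self, use_cache=True):
        if self._cache is None or not use_cache:
            self._cache = self._fetch()
        return self._cache

ports = ['21000024ff649105']
src = WwpnSource(lambda: list(ports))

first = src.get_physical_wwpns()                  # populates the cache
ports.append('21000024ff649106')                  # simulate a hotplug
stale = src.get_physical_wwpns()                  # cached: misses new port
fresh = src.get_physical_wwpns(use_cache=False)   # re-fetch: sees it

print(len(stale), len(fresh))  # 1 2
```

If hotplug is possible, a design that always re-fetches (or invalidates the cache on adapter events) is the safer choice, which is the question being punted to seroyer here.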
14:43 <edmondsw> we should have this conversation with seroyer
14:44 <edmondsw> let's move that to slack so we can pull him in
14:44 <edmondsw> anything else before we close here?
14:44 <efried> nothing from me.
14:44 *** openstack changes topic to "This channel is for PowerVM-related development and discussion. For general OpenStack support, please use #openstack."
14:44 <openstack> Meeting ended Tue Mar 20 14:44:50 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:44 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2018/powervm_driver_meeting.2018-03-20-14.00.html
14:44 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2018/powervm_driver_meeting.2018-03-20-14.00.txt
14:44 <openstack> Log:            http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2018/powervm_driver_meeting.2018-03-20-14.00.log.html
15:06 <efried> test: :)
15:07 <efried> Man, that took some hacking.
15:35 <edmondsw> what irc client?
16:48 <efried> Same as I use for email (that isn't us.ibm.com)
16:53 *** AlexeyAbashkin has quit IRC
18:14 *** AlexeyAbashkin has joined #openstack-powervm
18:14 *** AlexeyAbashkin has quit IRC
18:15 *** AlexeyAbashkin has joined #openstack-powervm
18:44 *** AlexeyAbashkin has quit IRC
19:02 *** chhagarw has quit IRC
19:03 *** AlexeyAbashkin has joined #openstack-powervm
19:30 <esberglu> edmondsw: efried: I'm seeing an error in CI when trying to create lpars where there aren't enough available processor units to complete the request
19:30 <esberglu> At first glance I thought we might just need to allocate more to our AIO tempest vms
19:31 <esberglu> But this is only ever happening for in-tree CI runs, so I'm worried this is a side effect of something else
19:31 <efried> This sounds vaguely familiar...
19:32 <edmondsw> doesn't sound familiar to me, so I'll defer to efried...
19:33 <esberglu> Example run, I haven't looked into it much, thought I would float it out and see if you guys had any ideas first
19:39 <esberglu> Looks like there are some differences in the VMBuilder code between IT and OOT
19:42 <esberglu> IT doesn't have a few of the processor conf options that OOT does
19:43 <esberglu> Could this be related to the missing proc_units_factor opt?
19:44 <esberglu> I'm not 100% clear on what that does from the description
19:44 <esberglu> "Factor used to calculate the processor units per vcpu. Valid values are: 0.05 - 1.0"
19:45 <efried> Basically a multiplier for CPUs.  Could definitely do it.
19:46 <efried> For every one physical CPU we have, multiply by 1/that value to get the number of virtual CPUs we can supply.
19:49 <esberglu> OOT is setting that to 0.1, where IT is using the pypowervm default of 0.5
19:49 <efried> 0.5 or 0.05?
19:49 <efried> So we'll have 5x less CPU in tree.
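The arithmetic behind efried's "5x less CPU" can be checked with a quick sketch. The semantics follow the option's help text quoted above (processor units = vCPUs × factor); the helper name is made up for illustration, not pypowervm's API.

```python
# Quick check of the proc_units_factor arithmetic discussed above.
# Each vCPU consumes proc_units_factor processor units from the shared pool,
# so a larger factor exhausts the pool's units with fewer vCPUs.
def proc_units(vcpus, proc_units_factor):
    return vcpus * proc_units_factor

oot = proc_units(4, 0.1)   # OOT CI setting: a 4-vCPU LPAR costs 0.4 units
it = proc_units(4, 0.5)    # in-tree, via the pypowervm default: 2.0 units

# In-tree LPARs consume 5x the processor units, so the same host runs out
# of units 5x sooner -- consistent with the failures seen only in IT runs.
print(oot, it, it / oot)
```

This is why hardcoding the in-tree value to 0.1 in an experimental patch is a plausible way to confirm the diagnosis.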
19:50 <efried> But has something changed recently that is pushing us over the line?
19:51 <efried> How frequently are we running up against this, and since when?
19:51 *** AlexeyAbashkin has quit IRC
19:51 <efried> Does it go away if we bump down our concurrency level?
19:53 <esberglu> efried: Last time we were seeing this error the pok network was on the fritz and CI was in a bad state in general
19:53 <esberglu> The frequency has gone way up in the last week or so
19:54 <efried> And we use the pypowervm default in tree because we decided not to expose that conf option?
19:54 <efried> Or we expose it and just aren't setting it?
19:54 <esberglu> efried: We didn't expose the conf option
19:54 <esberglu> Haven't tried changing the concurrency level
19:55 <efried> Concurrency level is not the preferred solution because it'll slow down our runs.
19:55 <esberglu> We're gonna have to carry an experimental patch forward in IT CI runs to even confirm this is the issue
19:55 <efried> Let's just try to meld in a patch that hardcodes that to 0.1
19:55 <efried> yeah, that.
19:55 <efried> You got that covered or want me to propose it?
19:56 <esberglu> efried: I can do it
20:24 <tonyb> esberglu, edmondsw: diskimage-builder supports 16.04 ATM, and we're working on 18.04 support for "from image" builds, the ubuntu-minimal element should work for 18.04 today
20:25 <esberglu> tonyb: Thanks for the info!
20:25 <tonyb> esberglu: np
20:25 <esberglu> That should make the CI mgmt upgrade much simpler
20:26 <tonyb> esberglu: If you get stuck on diskimage-builder $stuff ping me
20:27 <esberglu> tonyb: Will do tx
20:27 <edmondsw> tonyb tx much!
20:28 *** k0da has quit IRC
20:30 <esberglu> edmondsw: efried: 6407 for experimental proc_units_factor patching
20:31 *** esberglu has quit IRC
20:33 *** openstackgerrit has quit IRC
20:34 *** esberglu has joined #openstack-powervm
20:41 *** k0da has joined #openstack-powervm
20:51 *** edmondsw has quit IRC
21:19 *** AlexeyAbashkin has joined #openstack-powervm
21:23 *** AlexeyAbashkin has quit IRC
21:35 *** esberglu has quit IRC
21:40 *** esberglu_ has joined #openstack-powervm
21:45 *** esberglu_ has quit IRC
21:49 *** tjakobs has quit IRC
22:19 *** AlexeyAbashkin has joined #openstack-powervm
22:23 *** AlexeyAbashkin has quit IRC
22:30 *** k0da has quit IRC
23:19 *** AlexeyAbashkin has joined #openstack-powervm
23:23 *** AlexeyAbashkin has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!