Tuesday, 2017-07-25

00:03 *** thorst_afk has quit IRC
00:06 *** chhavi has joined #openstack-powervm
00:10 *** chhavi has quit IRC
00:14 *** jwcroppe has joined #openstack-powervm
00:32 *** jwcroppe has quit IRC
00:56 *** svenkat has joined #openstack-powervm
02:07 *** jwcroppe has joined #openstack-powervm
02:12 *** jwcroppe has quit IRC
02:20 *** svenkat has quit IRC
02:35 *** esberglu has joined #openstack-powervm
02:40 *** esberglu has quit IRC
02:57 *** jwcroppe has joined #openstack-powervm
04:09 *** chhavi has joined #openstack-powervm
04:23 *** esberglu has joined #openstack-powervm
04:28 *** esberglu has quit IRC
04:38 *** thorst_afk has joined #openstack-powervm
04:43 *** thorst_afk has quit IRC
05:30 *** jwcroppe has quit IRC
06:11 *** esberglu has joined #openstack-powervm
06:16 *** esberglu has quit IRC
06:39 *** thorst_afk has joined #openstack-powervm
06:44 *** thorst_afk has quit IRC
08:40 *** thorst_afk has joined #openstack-powervm
08:45 *** thorst_afk has quit IRC
08:53 *** esberglu has joined #openstack-powervm
08:57 *** esberglu has quit IRC
10:17 *** openstackgerrit has quit IRC
10:24 *** thorst_afk has joined #openstack-powervm
11:27 *** smatzek has joined #openstack-powervm
11:29 *** smatzek has quit IRC
11:29 *** smatzek has joined #openstack-powervm
12:04 *** svenkat has joined #openstack-powervm
12:23 *** svenkat has quit IRC
12:23 *** svenkat has joined #openstack-powervm
12:25 *** jwcroppe has joined #openstack-powervm
12:30 *** jwcroppe has quit IRC
12:36 *** edmondsw has joined #openstack-powervm
12:50 *** apearson has joined #openstack-powervm
12:55 *** esberglu has joined #openstack-powervm
12:59 *** jwcroppe has joined #openstack-powervm
13:00 <esberglu> #startmeeting powervm_driver_meeting
13:00 <openstack> Meeting started Tue Jul 25 13:00:56 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01 <openstack> The meeting name has been set to 'powervm_driver_meeting'
13:01 <esberglu> #topic In Tree Driver
13:01 <esberglu> #link https://etherpad.openstack.org/p/powervm-in-tree-todos
13:02 <esberglu> Oh and here's the agenda
13:02 <esberglu> #link https://etherpad.openstack.org/p/powervm-in-tree-todos
13:02 <esberglu> #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
13:02 <esberglu> No topics for in tree.
13:03 <edmondsw> yeah, I don't have anything for IT
13:03 <esberglu> Reminder that this week (Thursday) is the pike 3 milestone (feature freeze)
13:04 <efried> Keeps failing CI.  I imagine it's for whatever reason everything is failing CI, and we'll get to that later?
13:04 <esberglu> efried: Yeah. I'll update on that status later
13:04 <edmondsw> efried should we open a bug with which to associate that change?
13:05 <efried> Wouldn't think so.  It's not a bug.
13:05 <efried> and we don't need to "backport" it.
13:05 <edmondsw> I thought it fixed something, but must be wrong
13:05 <efried> even if it doesn't make Pike, we don't need to backport it to Pike.
13:06 <esberglu> edmondsw: efried: I've been seeing errors about power_off_progressive in CI, assuming this is related somehow?
13:07 <efried> Yes, that would be related.
13:07 <efried> I haven't seen those.
13:07 <efried> Well, shit.
13:07 <efried> Guess we spend some time on that soon.
13:07 <efried> But maybe not too soon.
13:07 <efried> Because feature freeze.
13:08 <esberglu> efried: I don't have logs handy atm, I'll link them when I get to CI status
13:08 <efried> Thought we ran that bad boy through when it was first proposed.  But mebbe not.
13:09 <efried> Twill suck if power_off_progressive is STILL broken.
13:09 <efried> Hate that code.
13:09 <esberglu> efried: The related error is not hitting every run so we may have run it and just not hit the issue
13:10 <efried> We would have needed to run with a develop pypowervm and in-flight nova[-powervm] at the time, which I guess we might have done.  But anyway...
13:10 <efried> I don't have anything else in-tree.  Moving on?
13:11 <esberglu> Anyone have anything OOT or ready to move straight to PCI passthrough?
13:11 <mdrabe> NVRAM stuff
13:11 <mdrabe> efried: Discuss that later perhaps?
13:12 <efried> thorst_afk is home with sick kid, would like him involved in both of those discussions.
13:12 <efried> So the NVRAM improvement was what, again?  3.4%?
13:12 <edmondsw> #topic Out Of Tree Driver
13:13 <efried> ...not nothing, but pretty marginal.
13:13 <thorst_afk> I'm here.
13:13 *** thorst_afk is now known as thorst
13:13 <efried> mdrabe And this was run through a pvc suite that actually exercises the NVRAM/slotmgr code paths?
13:14 <efried> Has it been run through pvc fvt kind of thing that hammers those code paths for functionality?
13:14 <mdrabe> thorst: This is about https://review.openstack.org/#/c/471926/
13:14 <mdrabe> It can't run through fvt until it merges
13:14 <thorst> that is ... quite the change set.
13:14 <efried> Yeah, that's the problem.
13:14 <thorst> mdrabe: it'd have to be cherry picked.
13:15 *** jay1_ has joined #openstack-powervm
13:15 <mdrabe> thorst: Not necessarily within nova-powervm
13:15 <efried> It completely swizzles the way nvram and slot management is stored & retrieved in swift.
13:15 <thorst> for a 3.4% improvement in deploy path?
13:15 <efried> Backward compatible, right mdrabe?  Upgrade-able?
13:15 <thorst> overall deploy path
13:15 <mdrabe> Yes backward compatible
13:15 <mdrabe> It doesn't affect the slot manager path greatly
13:16 <efried> By which I mean: if you have pvc running the old code, so objects are stored in swift the old way, then you upgrade pvc, it still works on the old objects?
13:16 <thorst> and if you have some nova computes running at old level and new nova computes running at new level
13:16 <thorst> does it all just work...
13:17 <thorst> cause agents don't get upgraded all at once.
13:17 <mdrabe> efried: The objects aren't stored any differently
13:17 <thorst> I think I need to spend some time reviewing this (I'm just asking naive questions)
13:17 <mdrabe> It's still keyed off the instance UUID regardless
13:17 <efried> Oh, right, they were always keyed by UUID in swift - the difference is the way they were passed around nova-powervm.
13:18 <efried> So yeah, thorst, that's the debate.  This is a nontrivial change across code that we chucked together at the last minute during that one TCC to cover the RR emergency, so I'm kinda scared of touching it.
13:19 <mdrabe> Actually efried this isn't that code
13:19 <thorst> yeah, that nightmare is in pypowervm mostly
13:19 <mdrabe> Most of the NVRAM stuff was already there, we slapped together the slot manager stuff
13:19 <thorst> this nvram stuff was from kyleh long ago
13:20 <thorst> so the basic change here is - move from list to set, and allow passing in the uuid or the instance, instead of just the instance.
13:20 <thorst> thus reducing the number of instance lookups I assume
13:20 <efried> That's the gist
13:20 <mdrabe> thorst: Yes, the performance improvement comes from avoiding getting the instance object in the NVRAM event handler
13:20 <thorst> let me review in more detail, but I think its a solid change.  The net though is, yeah, it needs some good test pre-merge
13:21 <thorst> so we've got to figure out a plan to do that....patch onto some PVC's I assume
13:21 <thorst> since PVC always has this turned on
13:21 <mdrabe> It might be worth noting there's also performance improvement in idle compute time
13:21 <mdrabe> ? to me or thorst?
13:22 <thorst> I think he's asking for an explanation
13:22 <thorst> and if he isn't, I am
13:22 <mdrabe> Any NVRAM events that come in won't grab the instance object
13:22 <thorst> that's ... nice
13:22 <mdrabe> NVRAM events aren't exclusive to deploys
13:22 <thorst> right right...
13:23 <mdrabe> thorst: I'm not the biggest fan of the caching that's done in the change at the moment
13:23 <efried> Okay, so the plan is to move forward carefully, get some good pvc fvt (somehow)
13:24 <mdrabe> I'd be curious on your review thoughts for that bit
13:24 <thorst> I'll review today
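[Editor's note: the "uuid or instance" pattern thorst and mdrabe describe above can be sketched roughly as follows. This is a hypothetical illustration, not the actual nova-powervm code under review; fetch_instance, store_nvram, and the in-memory store are made-up stand-ins.]

```python
# Hypothetical sketch of the "uuid or instance" pattern discussed above;
# fetch_instance() stands in for the costly nova instance lookup that the
# real change avoids when handling NVRAM events.

LOOKUPS = []  # records each time the expensive lookup path runs


def fetch_instance(uuid):
    """Stand-in for an expensive conductor/DB instance fetch."""
    LOOKUPS.append(uuid)
    return {"uuid": uuid, "name": "instance-%s" % uuid[:8]}


def _uuid_of(instance_or_uuid):
    """Accept either a bare UUID string or an instance-like object."""
    if isinstance(instance_or_uuid, str):
        return instance_or_uuid
    return instance_or_uuid["uuid"]


def store_nvram(instance_or_uuid, data, store):
    # Objects are keyed by instance UUID, so a bare UUID is sufficient;
    # no instance lookup is needed on this path.
    store[_uuid_of(instance_or_uuid)] = data


def delete_nvram(instance_or_uuid, store):
    store.pop(_uuid_of(instance_or_uuid), None)
```

An event handler that only has the UUID from the event payload can call store_nvram(uuid, ...) directly, which is where the idle-time savings mdrabe mentions would come from: no instance object is ever fetched on that path.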
13:25 <esberglu> Move on to PCI?
13:25 <edmondsw> I think so
13:26 <esberglu> #topic PCI Passthrough
13:26 <efried> No progress on my end since last week.
13:26 <edmondsw> I don't have anything new here, but I wanted to make sure this is a standing item on the agenda going forward
13:26 <edmondsw> so that's all for today then, probably
13:26 *** kylek3h has joined #openstack-powervm
13:27 <esberglu> Okay on to CI
13:27 <esberglu> #link https://etherpad.openstack.org/p/powervm_ci_todos
13:27 <esberglu> #topic Devstack generated tempest.conf
13:27 <esberglu> I've got 3 changes out in devstack, need a couple more
13:27 <esberglu> https://review.openstack.org/#/c/486629/
13:27 <esberglu> https://review.openstack.org/#/c/486671/
13:28 <esberglu> https://review.openstack.org/#/c/486701/
13:28 <esberglu> Then just need to backport those as needed and add the new options to the local.conf files
13:29 <esberglu> At that point we should be ready to use the generated tempest conf
13:29 <esberglu> #topic CI Status
13:29 *** smatzek has quit IRC
13:30 <esberglu> The last couple weeks the networking issues have been affecting the CI, taking down systems and leading to poor performance
13:30 <esberglu> Now that all of that seems to have settled down (fingers crossed) I'm gonna take inventory of tempest failures
13:30 <efried> zat what's been causing the timeouts?
13:31 <esberglu> efried: Seems like they have been way up when the network has been poor. Anecdotal though
13:31 <efried> k, well, we'll keep an eye on it.
13:31 <efried> Progress on the mystery 500?
13:32 <esberglu> Anyhow I'll let you guys know what I find when looking through the failures.
13:32 <esberglu> efried: Haven't looked back into it
13:33 <efried> And the power_off_progressive thing - you'll pop me links when you see it again?
13:33 <esberglu> efried: Yep
13:34 <efried> That's prolly all we can do here for CI, then.
13:34 <esberglu> mdrabe: Any progress on the jenkins/nodepool upgrade stuff? Haven't touched base on that in a while
13:35 <mdrabe> esberglu Nah been entirely pvc of late
13:35 <mdrabe> but I wanted to ask
13:35 <mdrabe> How tough would it be to go to all zuul?
13:35 <mdrabe> It's zuulv3 right?
13:36 <esberglu> mdrabe: Last time I checked zuul v3 wasn't ready for 3rd party CI's. But that may have changed
13:36 <esberglu> Once zuul v3 is ready I want to move to it
13:37 <esberglu> It shouldn't be too hard. Mostly just moving our jenkins job definitions over to zuul (where they would be defined with ansible instead)
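[Editor's note: for context, the Zuul v3 migration esberglu describes would re-express each Jenkins job as a Zuul job definition plus an Ansible playbook, roughly like the sketch below. All names and paths are hypothetical, not an actual CI job.]

```yaml
# Hypothetical Zuul v3 job definition; names and paths are illustrative.
- job:
    name: powervm-tempest
    parent: base
    description: Stack a PowerVM node with devstack and run tempest.
    run: playbooks/powervm-tempest/run.yaml  # Ansible replaces the Jenkins job body
    nodeset:
      nodes:
        - name: powervm-node
          label: powervm-devstack
```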
13:38 <esberglu> Alright let's move on
13:38 <esberglu> #topic Driver Testing
13:38 <esberglu> Developments here?
13:40 <efried> jay1_ chhavi How's iSCSI going?
13:40 <efried> thorst wanted to make sure we got an update here.
13:40 <jay1_> Yeah, last week encountered the volume attach issue
13:41 <chhavi> efried: ravi and jay are looking into it, i am not currently working on it.
13:41 <jay1_> and by the time Chhavi/Ravi could debug it further, the system UI was not loading, might be because of the NW issue we got last week
13:42 <jay1_> so, I reconfigured the system and am again facing the volume creation issue
13:42 <jay1_> Ravi is looking into it
13:45 <edmondsw> from talking to ravi, he has very limited time to spend on this
13:45 <edmondsw> so we're going to have to figure that out
13:49 <edmondsw> next topic?
13:49 <esberglu> #topic Open Discussion
13:49 <esberglu> #subtopic Test subtopic
13:50 <esberglu> Just curious if that command is a thing or not
13:50 <esberglu> I don't have anything further
13:50 <esberglu> Thanks for joining
13:50 <openstack> Meeting ended Tue Jul 25 13:50:56 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
13:51 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-07-25-13.00.html
13:51 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-07-25-13.00.txt
13:51 <openstack> Log:            http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-07-25-13.00.log.html
13:51 <jay1_> edmondsw You have any plans on the RTC story creation? or any other thoughts?
13:52 <edmondsw> jay1_ I'm trying to get to it, but I honestly have no idea what I would create...
13:53 <jay1_> Okay.. but that would give more clarity on sprint by sprint target
13:53 <jay1_> also on the overall goal
14:04 *** smatzek has joined #openstack-powervm
14:05 *** smatzek has quit IRC
14:05 *** smatzek has joined #openstack-powervm
14:56 *** tjakobs has joined #openstack-powervm
15:20 <thorst> efried: should I W+1 485571?
15:21 <efried> thorst sho
15:22 <thorst> I'm also thinking of W+1 on 468515
15:24 <efried> thorst Would be nice to have the cherry-pick source in there.
15:24 <efried> though it's linked in gerrit, so no biggie.
15:24 <efried> But sure, that was just a cherry pick with a mod for log translation, right?
15:29 *** openstackgerrit has joined #openstack-powervm
15:29 <openstackgerrit> Merged openstack/nova-powervm master: Removed older version of python added 3.5  https://review.openstack.org/485571
15:29 *** smatzek has quit IRC
15:29 <thorst> efried: 468515?
15:30 *** smatzek has joined #openstack-powervm
15:30 <thorst> that was an OVS live migration one
15:30 <thorst> I know pvc is using a cherry pick of it now, so its tested.
15:30 <efried> thorst I'm fine with it.  I'll promote to +2
15:34 *** jwcroppe has quit IRC
15:58 *** smatzek has quit IRC
15:59 *** smatzek has joined #openstack-powervm
15:59 *** edmondsw has quit IRC
16:00 *** edmondsw has joined #openstack-powervm
16:03 *** edmondsw_ has joined #openstack-powervm
16:04 *** jwcroppe has joined #openstack-powervm
16:05 *** edmondsw has quit IRC
16:07 *** edmondsw_ has quit IRC
16:07 *** tjakobs has quit IRC
16:18 *** edmondsw has joined #openstack-powervm
16:22 *** edmondsw has quit IRC
16:24 *** edmondsw has joined #openstack-powervm
16:28 *** edmondsw has quit IRC
16:52 *** edmondsw has joined #openstack-powervm
16:56 *** edmondsw has quit IRC
17:06 *** smatzek has quit IRC
17:38 *** jwcroppe has quit IRC
17:48 *** jwcroppe has joined #openstack-powervm
18:03 <efried> esberglu Seen this one before?
18:03 <efried> 2017-07-25 17:48:37.755 | +++functions-common:oscwrap:2538             openstack project show admin -f value -c id
18:03 <efried> 2017-07-25 17:48:41.183 | Failed to discover available identity versions when contacting Attempting to parse version from URL.
18:03 <efried> 2017-07-25 17:48:41.183 | Could not determine a suitable URL for the plugin
18:03 *** apearson has quit IRC
18:03 <esberglu> efried: Is that from CI or from a system you are stacking?
18:03 <efried> the latter
18:04 *** tjakobs has joined #openstack-powervm
18:04 <esberglu> efried: Ping me creds and I can take a look
18:09 <esberglu> efried: Not much to go off in the stack log. And none of the services are logging at this point. Looking at your local.conf right now but it looks standard
18:10 <efried> esberglu Yuh, that's where I had gotten to.
18:10 <esberglu> efried: Have you been stacking and unstacking this system?
18:10 <efried> First time.
18:10 <efried> Fresh install
18:11 <esberglu> Welp there goes that theory. Thought maybe something got left around and a ./clean.sh would fix it
18:12 <efried> Trying unstack, pull devstack, restack.
18:14 <esberglu> efried: Stacks that fail this early are hard to debug. Sometimes I just wipe out all of the openstack directories and let them get recloned which occasionally works
18:15 <efried> the openstack dirs were absent before this stack.
18:15 <efried> So refreshing them seems unlikely to yield a different result.
18:15 <efried> But will try, if this doesn't work.
18:16 <esberglu> efried: Not for your system. I was just saying when they fail this early I end up having to resort to solutions like that
18:17 <efried> yeah, I get it.
18:18 <efried> same result
18:19 <esberglu> efried: Try the syslog. I'm gonna look through recent devstack commits quick
18:19 <efried> esberglu How do I look at the syslog?
18:19 <esberglu> efried: /var/log/syslog
18:20 <esberglu> With whatever you like looking at logs with
18:22 <efried> nothing interesting in there, I don't think.
18:22 <esberglu> efried: Nothing recent in devstack standing out either
18:22 <efried> We just dropped a big ol' keystoneauth1 change that tries to do things like getting version numbers from URLs.
18:23 <efried> But I can't imagine this wouldn't have shown up in some other dsvm gate if that was rotten.
18:23 <efried> including our CI, for that matter.
18:27 <esberglu> efried: Yeah I imagine it would have been caught. Just looked at some CI systems, that URL is correct
18:27 * efried has never, ever, ever stacked successfully the first time.
18:29 <esberglu> efried: It's not hitting the CI stacks (at least not yet)
18:30 <esberglu> Maybe some sort of dependency issue?
18:30 <esberglu> rabbitmq-server is running, sometimes that doesn't start up properly and causes weird failures
18:32 <efried> rebooting.  Cause whytf not.
18:45 *** apearson has joined #openstack-powervm
19:02 *** chhavi has quit IRC
19:04 <efried> esberglu Is there a reason we're using mod_wsgi instead of uwsgi?
19:05 <esberglu> efried: It didn't work for us right away and other stuff has been higher priority
19:05 <esberglu> Its on the TODO list
19:05 <efried> esberglu Well, maybe it's time I tried it.  How would I do that?
19:05 <esberglu> efried: Does this mean you got your base stack working?
19:06 <esberglu> In the local.conf there is a line WSGI_MODE=...
19:06 <esberglu> Just remove/comment that out
19:06 <esberglu> And it will then default to uwsgi
19:07 <esberglu> Can't remember what the issue with it was. IIRC switching to uwsgi didn't cause the stack itself to fail
19:07 <esberglu> But caused issues with the services after stacking
19:08 <efried> oh, goodie.
19:08 <efried> The comment in the local.conf says something about placement not starting.
19:08 <efried> Mebbe that's fixed.
19:08 <efried> Anyway, trying it...
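[Editor's note: the toggle being discussed lives in the devstack local.conf and looks something like the minimal sketch below; surrounding settings are omitted and will vary per system.]

```ini
[[local|localrc]]
# Pinned to Apache mod_wsgi in the CI local.conf today:
WSGI_MODE=mod_wsgi
# Removing or commenting out the line above lets devstack default to uwsgi.
```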
19:13 *** smatzek has joined #openstack-powervm
19:25 *** jay1_ has quit IRC
19:43 <esberglu> efried: We have been enabling cinder in CI but not running any of the cinder tests. I enabled the cinder tests just to see what would happen and it didn't look good
19:43 <esberglu> You okay with just disabling cinder until I have a chance to look into it?
19:43 <efried> esberglu Sure.
19:44 <efried> esberglu Once I converted to uwsgi, keystone logs showed up under journalctl.  The logs showed tracebacks with pastedeploy
19:44 <esberglu> efried: I'm disabling it with the other local.conf changes for the devstack generated tempest.conf. I'll leave a comment
19:44 <efried> Stupid, stupid pastedeploy.
19:46 <esberglu> efried: sudo pip install -U pastedeploy??
19:47 <esberglu> I think that's what I had to do last time I hit issues with pastedeploy
19:47 <efried> I generally apt-get remove python-paste* and pip remove Paste*
19:48 <efried> pip only showed Paste, which seems like less than usual.  And I've never actually nailed down what the root of the problem is; I just know that typically removing everything paste-related will fix it.
20:34 <esberglu> efried: Any luck?
20:35 <efried> esberglu Got past that part, now it's failing on image create, weirdly.  Cause when I run that image create command manually, it works fine.
20:36 <efried> Looks like g-api may be choking on uwsgi, so gonna revert that to mod_wsgi and try again.
20:40 *** edmondsw has joined #openstack-powervm
20:56 *** apearson has quit IRC
20:57 *** smatzek has quit IRC
21:31 *** esberglu has quit IRC
21:39 *** svenkat has quit IRC
21:45 *** esberglu has joined #openstack-powervm
22:12 *** kylek3h has quit IRC
22:38 *** svenkat has joined #openstack-powervm
23:06 *** thorst has quit IRC
23:08 *** tjakobs has quit IRC
23:10 *** edmondsw has quit IRC
23:11 *** edmondsw has joined #openstack-powervm
23:15 *** edmondsw has quit IRC
23:36 *** jwcroppe has quit IRC
23:37 *** jwcroppe has joined #openstack-powervm
23:37 *** jwcroppe has quit IRC
23:46 *** svenkat has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!