Friday, 2017-04-07

06:12 <openstackgerrit> OpenStack Proposal Bot proposed openstack/nova-powervm master: Updated from global requirements
13:28 <thorst> esberglu: do we have the capability to get certain runs through on the CI?
13:29 <thorst> I noticed the nodes still weren't cleaning up... but I think you had a manual way to do that?
13:29 <esberglu> I mean I can go through and manually wipe them out. But I took it back down to debug this morning
13:29 <thorst> I really want efried's fixes through today, and we need to fix the big ceilometer-powervm backlog we have at the moment.
13:29 <thorst> we've got kind of a mess in the ceilometer space atm...
13:30 <thorst> maybe we just enable for the -powervm projects during this interim while we debug overall?
13:30 <esberglu> Yeah. I wish I had kept the staging up so we could just look at the results there
13:30 <esberglu> Cuz now I can't get that back up either
13:31 <esberglu> Also some of the runs going through were failing like everything
13:31 <esberglu> So I don't really know that trying to get it back up is a good solution
13:32 <thorst> esberglu: well, everything will fail until we get efried's change delivered.
13:32 <thorst> they'll just hang indefinitely
13:32 <esberglu> Oh yeah duh
13:32 <thorst> so we need to push that through to get out of the suck
13:32 <esberglu> How about this
13:33 <esberglu> I try to get staging back up this morning in the background
13:33 <esberglu> While I continue looking into the production issue
13:33 <esberglu> If I don't make progress by midday
13:33 <esberglu> I deploy prod with just the *-powervm projects
13:34 <thorst> OK - my goals today... get efried's patch in and get the crap in ceilometer-powervm fixed up and delivered...
13:34 <esberglu> Depends on how fast we need efried's fix in today I guess
13:34 <thorst> so a midday go-round for that is good for me
13:34 <thorst> midday maybe being 11 AM CST?
13:34 <thorst> cause the CI will take 90 min or so to go through for his change
13:35 <thorst> and then the ceilometer-powervm ones need to run behind that
13:35 <esberglu> Maybe I should just go ahead and do it now
13:35 <esberglu> Because deploying management will take an hour
13:36 <esberglu> and then building the image template will take another 40
13:36 <thorst> and while that's going you can debug staging as well?
13:36 <thorst> I'm good with that
13:36 <thorst> not sure about efried... he may not like it
13:36 <thorst> but he's not here to argue atm
13:36 <esberglu> I'll let you know when it's ready
13:36 <thorst> thx dude
13:37 <esberglu> But this means CI probably won't be truly fixed until early next week
13:37 <thorst> we just need a way to get the -powervm backlog solved
13:37 <thorst> cause my queue kept me awake last night.
13:37 <esberglu> Yep. Sounds like a plan
14:41 <efried> Hey guys, sorry, had an awards assembly for a kid.  It's not a diaper, but it takes longer ;-)
14:41 <efried> esberglu You need anything from me atm?
14:47 <efried> esberglu Is the CI in a state where I can try a recheck on ?
14:47 <esberglu> efried: No we were just discussing what to do since CI was down and we need to get some changes through
14:47 <esberglu> Here's the plan
14:47 <esberglu> Nodepool still isn't deleting nodes
14:47 <esberglu> Meaning I have to manually clean them up until it is fixed
14:48 <esberglu> For the time being we have limited production CI to only the *-powervm projects
14:48 <efried> But that doesn't affect the ability to do runs, right?
14:48 <esberglu> To limit the amount of manual cleanup needed
14:48 <efried> Just means you have to babysit the pool?
14:48 <efried> Okay, so let's do that at least long enough to get through.
14:48 <esberglu> I redeployed the prod pool again, nodes are spawning as we speak
14:48 <esberglu> You can kick off a recheck now
14:48 <esberglu> zuul will pick it up
14:49 <esberglu> but it will still be maybe 20 min before a node is ready to run it on
14:49 <esberglu> In the meantime I'm working on staging CI to see if this nodepool stuff is hitting us there
14:49 <esberglu> To determine whether it's an environmental thing
14:49 <esberglu> or not
14:50 <esberglu> But long story short you can get runs through on anything *-powervm
14:50 <esberglu> But not nova for now
14:51 <thorst> esberglu: and the ready nodes have the latest pypowervm?
14:52 <esberglu> Yeah the out-of-tree runs pick up develop
14:58 <thorst> efried: my Ceilometer gameplan...
14:58 <thorst> 1) get your fix in for CI.  Make sure it works.
<thorst> 2) I think gordc's change here is ready to go in once it passes CI:
<thorst> 3) Then his update here:
14:59 <thorst> I'll rebase that one once 2 is in
<thorst> 4) Merge this in once CI is good:
15:00 <thorst> 5) Rebase Gautam's change with gordc's from items 2/3
15:00 <thorst> I suspect item 5 won't be much at that point, I'll connect up with Gautam offline to see if there are still changes to be made
15:04 <esberglu> efried: thorst: Kicked off a recheck for 1)
15:06 <efried> esberglu thorst Looking at the console log for the last failing run, funkiness is happening with pypowervm.
15:06 <efried> To the point where I'm not sure we're getting the version we want.
15:07 <efried> First of all, it's grabbing the develop branch into /opt/stack/pypowervm.  Which would be okay, since develop is past 1.1.1 - but I think it's still not what we want.
15:07 <thorst> efried: we did that because we had to patch things into pypowervm for the remotability aspects
15:09 <efried> But then later on, as networking-powervm is installing, it claims to be using pypowervm from /usr/local/lib/python2.7/dist-packages/pypowervm
15:09 <efried> And I have no idea what version that guy is.
15:09 <efried> Same thing happens when installing ceilometer-powervm
15:10 <efried> Point is, by the time stack is done, I'm not sure which instance we're actually using, or what version it's at.
15:11 <efried> Ima check the compute log now and see if I can figure that out.
15:11 <thorst> efried: yeah... super... odd
15:11 <efried> compute is using /usr/local/lib...
15:12 <efried> esberglu thorst Ah, I think we may have a stale marker LU.
15:13 <efried> Or did, at the time.
15:13 <thorst> I have to defer to esberglu here...
15:13 <esberglu> Should be good then this run. I ran the cleaner that wipes all the vms and lus
15:13 <efried> okay, good.
15:13 <esberglu> Before deploying mgmt
15:15 <esberglu> efried: Nodes just came online, run just started
15:15 <efried> The bad(ish) news is that we don't actually know whether that pypowervm is 1.1.1 (or later).  esberglu is there a node running that's already stacked that I can log into quick?
15:16 <efried> PM me an IP?
15:16 <efried> The good(ish) news is that networking- and ceilometer-powervm are still reqing pypowervm without a version, so it ain't gonna blow away whatever version is there.
15:16 <efried> But it's still a mystery why it ends up in /usr/local/lib
15:17 <efried> esberglu After this shitstorm has abated, I think we should fix it not to use develop - just to use the requirements version.  We can still patch in local2remote.  If it's a matter of finding out where it's installed, we can do that too.
15:18 <esberglu> I'm already doing that for in-tree
15:18 <esberglu> So no problem
15:18 <esberglu> We are using the u-c for in-tree
15:19 <esberglu> Do we want that or g-r?
15:19 <esberglu> I think they are the same right now
15:19 <esberglu> But don't have to be
15:20 <efried> esberglu I don't think we should be doing anything - let pypowervm get pulled in from the nova req.
15:20 <esberglu> We can't
15:20 <efried> We need pypowervm - including remote - before we stack.
15:20 <esberglu> So we have to wipe the reqs before stacking
15:20 <efried> Do we need it before we clone nova-powervm?
15:21 <efried> and/or nova?
15:21 <esberglu> Yeah we need it to be in for the ready node script
15:21 <esberglu> Which is before that
15:21 <efried> Well, we could still get around that.
15:28 <efried> esberglu So if we wanted to make sure we get the nova-powervm (or whatever) requirements.txt version of pypowervm (and everything else), and we wanted to do it up front before cloning anything, we can do something like:
<efried> sudo pip install -r
15:29 <thorst> efried: I thought we actually already do that for each project
15:29 <efried> I just tried that and it works.
15:29 <thorst> as part of the nightly build or something
15:29 <efried> thorst During stacking.  This is before that.
15:29 <thorst> no, I thought as part of nightly
15:29 <thorst> so that during stacking it took less time
15:29 <efried> Now, we might want to be a tad smarter about it and try to figure out how to make a URL explicitly to the change set we're testing.
15:29 <efried> thorst All I can tell you is the console log for a given run shows us explicitly cloning and installing pypowervm.
15:30 <thorst> efried: right... but I think that's because we ripped pypowervm out of the requirements.txt
15:30 <esberglu> Let me explain what happens now
15:30 <thorst> I'm just saying, if we don't rip pypowervm out of the requirements it may already be there...
15:30 <esberglu> There seems to be confusion
15:31 <esberglu> So during the prepare_node_powervm script (nightly image build)
15:31 <esberglu> We install develop pypowervm
15:32 <esberglu> and apply local2remote
15:32 <esberglu> because we need it for the ready node script
15:32 <esberglu> Then when a run gets kicked off
15:33 <esberglu> For in-tree: We install pypowervm from requirements upper-constraints
15:33 <esberglu> For out of tree: we install develop
15:34 <esberglu> Both with the local2remote applied
15:34 <efried> In both cases?
15:34 <esberglu> Yeah right now it installs again for OOT (dumb, we don't need to do that, already installed)
15:35 <efried> Okay, that's cool.  So we're replacing the image build's develop+local2remote during the run anyway.
15:35 <esberglu> Actually not dumb
15:35 <esberglu> In case new changes are in develop that we want
15:35 <esberglu> After it gets installed we wipe pypowervm from
15:35 <efried> But we're removing the pypowervm line from requirements (just g-r, or also project requirements files?)
15:36 <esberglu> requirements: upper-constraints and global-requirements
15:36 <esberglu> As well as nova and nova-powervm requirements
15:36 <esberglu> When we stack, devstack thinks it's a version that isn't allowed
15:36 <esberglu> and overwrites
15:36 <efried> Okay.  Nothing during stacking itself actually needs pypowervm, though (right?)
15:37 <esberglu> I don't believe so
15:37 <efried> And your script gets control again after devstack so it can do more stuff.
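[Editor's note: the requirements-wiping step esberglu describes - removing the pypowervm line so devstack doesn't overwrite the pre-patched copy - can be sketched as follows. This is a hypothetical helper for illustration, not the actual CI script.]

```python
import re

def strip_requirement(requirements_text, package):
    """Drop any requirement line for `package` from a requirements file.

    Matches lines like "pypowervm", "pypowervm>=1.1.1", or
    "pypowervm;python_version<'3'".  Devstack reinstalls anything whose
    installed version conflicts with the requirements files, so removing
    the line keeps a pre-patched install from being overwritten during
    stacking.  (Hypothetical helper, not the real CI code.)
    """
    pat = re.compile(r"^%s\s*([<>=!;#].*)?$" % re.escape(package),
                     re.IGNORECASE)
    kept = [line for line in requirements_text.splitlines()
            if not pat.match(line.strip())]
    return "\n".join(kept) + "\n"
```

The same filter would be run over upper-constraints, global-requirements, and the nova/nova-powervm requirements files, per esberglu's description above.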
*** k0da has quit IRC15:37
15:37 <efried> So I kinda suspect if there's a requirements bump of pypowervm within one of our change sets, it won't get honored anyway, as we're currently set up.
15:38 <efried> That said, we shouldn't be getting requirements bumps in random change sets anymore anyway - just from the bot.
15:38 <efried> If we ever happen to need one of those to make CI succeed, we'll be scrood.
15:39 <efried> esberglu Does the per-run stuff need pypowervm+local2remote *before* stacking?
15:40 <efried> Let's table that for a sec - if it does, we'll just have an extra step.
15:41 <efried> So what I think we should do is, first thing after devstack, explicitly pip install -r <project that we're testing>/requirements.txt and then apply local2remote to whatever pypowervm exists on the other side of that.
15:41 <efried> (We should also put an explicit version of pypowervm into networking-powervm and ceilometer-powervm requirements.txt - I think that's going to come once the bot stuff is active.)
15:41 <thorst> efried: yeah... I agree that networking-powervm and ceilometer-powervm can now be pinned
15:42 <thorst> instead of develop
15:42 <efried> And if it turns out we don't need pypowervm+local2remote *before* devstack, we can just take that part out.
15:42  * efried resists making another diaper joke
15:44  * thorst remembers the 4 am projectile from 8 hours ago...
15:45 <efried> adreznec - Once this mess is sorted out, there's change sets in the pipe to add networking- and ceilometer-powervm to the requirements update bot, right?
15:46 <thorst> omg adreznec is here?
15:46 <efried> Okay, so how's that plan sound esberglu?
15:46 <thorst> quick!  pile on work!
<adreznec> and, then
15:46 <adreznec> One part of it already merged
15:47 <efried> esberglu ==> First thing after devstack, explicitly pip install -r <project that we're testing>/requirements.txt and then apply local2remote (and any other custom patches) to whatever pypowervm exists on the other side of that.
15:48 <esberglu> efried: Sounds reasonable. I'm looking at the scripts quick to make sure we aren't missing anything
15:49 <adreznec> efried: esberglu I know we've had issues in the past with needing updates to the patch to match latest develop...
15:49 <adreznec> How are we planning to handle multiple patch versions
15:50 <efried> adreznec We're at a place where we shouldn't be using anything from develop except local2remote anymore.
15:51 <esberglu> efried: Won't we need to apply the patch before pip installing (after stacking)?
15:51 <efried> Swhy we went through all the trouble of breaking the pypowervm release process free.
15:51 <adreznec> But what if 1.1.1 and 1.1.2 have changes in between that require different patch versions
15:51 <adreznec> and Pike reqs 1.1.1 and Queens reqs 1.1.2
15:53 <esberglu> Remind me why we can't just get the local2remote stuff included in pypowervm?
15:54 <thorst> esberglu: local2remote basically makes it so we look at the RMC connection and identify the IP address to remote to
15:54 <thorst> but then we mask it to look like it's local
15:54 <thorst> even though it's remote...
15:55 <efried> thorst It could possibly be done, though right now we've got some defaults in the patch that we should remove (like username/password for the REST server)
15:55 <esberglu> It would make this all a lot easier
15:55 <efried> True dat.
15:56 <efried> Course it wouldn't help us until we release 1.1.2 ;-)
15:56 <esberglu> It would knock out all the issues we are discussing right now
15:56 <thorst> removing it?  Sure.  But yeah, that
15:56 <thorst> and also, how to find user name/password
15:56 <efried> Let me look into it.
15:56 <thorst> and I don't want the nova-powervm code having options to pass that in
15:56 <thorst> because then it looks like we support running nova-powervm from a remote server
15:56 <thorst> and we really don't... just for CI
15:57 <adreznec> At one point we talked about possibly having pypowervm look for those creds in a config file
15:57 <thorst> adreznec: I don't remember that... but I don't hate that idea.
15:57 <adreznec> That could be dropped into a CI environment
15:58 <adreznec> Can't remember why we didn't go that route though
15:58 <esberglu> Yeah and I could just ansible vault that file
15:58 <thorst> probably because if someone knew about it they could still make a case for remotability
15:58 <thorst> whereas this was so hidden (because we were literally patching the code)
15:58 <thorst> but it's biting us as we mature.
15:59 <esberglu> Yeah. It will get especially sloppy once we are running multiple branches for in-tree and out-of-tree
16:01 <thorst> yeah, so I'm good with something for 1.1.2 that mimics this behavior
16:02 <thorst> will just be a pain supporting newton, pike, etc...
16:02 <efried> thorst esberglu We _do_ support having the creds in a config file, but we generate the file with defaults if we don't find it.  And I'm pretty sure that's the code path we're taking in the CI, cause we're not shipping that file, and the setup is only getting done once.
16:02 <thorst> efried: when did that support drop in?
16:02 <thorst> and what about an IP
16:03 <efried> I dunno, I remember working on it a few months ago, but I don't remember if I introduced it.
16:03 <efried> The IP is detected based on RMC something-or-other.
16:03 <thorst> we have to give it an address to use... though I suspect we could hard code that to or something
16:03 <thorst> whoa... then maybe problem solved?
16:03 <efried> Well, here's what I'm thinking.
16:04 <efried> I'd like to put the vast majority of this code into a separate .py that lives in the CI project.
16:05 <efried> And have the pypowervm setup do a conditional local import of that .py maybe based on some env var (_PYPOWERVM_LOCAL2REMOTE_DO_NOT_USE_THIS_UNLESS_YOU_ARE_US=1).
16:06 <efried> Better yet: _PYPOWERVM_LOCAL2REMOTE_DO_NOT_USE_THIS_UNLESS_YOU_ARE_US=/path/to/local2remote.py
16:06 <thorst> a dev flag?
16:06 <thorst> adreznec: how's that sit with you?
16:07 <thorst> I can be convinced of that personally
16:08 <thorst> (does this push us out to not having the nova-powervm change merged today?)
16:08 <efried> Nono, separate discussion.
16:08 <efried> The nova-powervm change should be CIing right now, and it has all the pieces in place from what esberglu and I can tell.
16:08 <adreznec> thorst: efried seems fair, though we'll have to support the existing patch mechanism for a while I guess due to Newton/Ocata
16:09 <thorst> chip away at it
16:10 <efried> Roger wilco.  Will need esberglu's help figuring out how to ship the files.  esberglu Where do you want them located?
16:10 <efried> I think I'll prolly want to ship a .py and a .sh
16:10 <esberglu> They should go in
16:10 <efried> Since all of the host discovery nonsense is shell stuff anyway.
16:11 <efried> esberglu rgr.  On it.
16:11 <esberglu> This is fantastic
16:17 <thorst> a rare fantastic.  :-)
17:23 <thorst> efried: no love on your change
17:23 <thorst> looks to me like a timeout again
17:23 <efried> Results not posted yet?
17:24 <thorst> o, duh
17:24 <thorst> I'm a moron
17:24 <efried> You looking at a different branch?
17:24 <thorst> I was looking at Apr 6th results
17:24 <thorst> I assumed they had to be done by now
17:24 <efried> Though I would have expected results by now.
17:24 <efried> which doesn't bode well.
17:24 <efried> Should take <2h, neh?
17:25 <thorst> 2 hours 10 min by now
17:25 <thorst> (I'm on the jenkins)
17:26 <thorst> just hopped on the node
17:26 <thorst> not timing out, but hitting some exceptions
17:26 <thorst> HTTP error 400 for method PUT on path /rest/api/web/File/contents/c4fc0760-6c99-412f-8743-6854534560fe: Bad Request -- REST002C Content-Length specified in header does not match that of the meta file: 104,857,600
17:27 <thorst> so uhh... looks to me like it found a legit bug?
17:36 <thorst> efried: want on the system?
17:36 <thorst> while it's still running?
17:37 <efried> thorst Was that in config drive?
17:37 <efried> What do we have that's 100MB?
17:37 <efried> We're not using a dummy image, are we?
17:38 <thorst> that's the ole dummy image
17:38 <thorst> which is 100 MB
17:38 <thorst> but I mean, images can be a couple megs
17:38 <efried> k, thought we were using a real image these days.
17:38 <thorst> we just increase the size to 1 GB later
17:39 <efried> So this is a PITA; I don't really want the REST server checking File sizes.
17:39 <efried> And I think we can get around it.
17:40 <efried> Damn.  Not without another pypowervm change.
17:40 <efried> least, not for SSP.
17:40 <efried> Damn, damn, damn.  Why wouldn't this show up in testing?
17:44 <thorst> you used whole round numbers
17:45 <efried> thorst as opposed to what?
17:47 <efried> It's a byte size at this point.  We don't have fractional bytes.
17:47 <efried> This is the byte size we get directly from glance.
17:47 <efried> It passes all the way through.
17:59 <thorst> well, that should be fine, right?
17:59 <thorst> why would the content length be different then?
18:00 <thorst> is it because we round up to a gig?
18:00 <thorst> that may be something we can fix on the REST side...
18:02 <efried> thorst Yah, but we wouldn't want to have to require a new pvm-rest back to newton.
18:02 <efried> would we?
18:02 <thorst> not sure... I mean, it's not like you can go back once a new one is out
18:02 <efried> There's no rounding, btw.
18:03 <thorst> well, maybe we just need Hsien to look asap
18:03 <efried> I looked at this with apearson already.
18:03 <thorst> and the fix is in rest or?
18:03 <efried> thorst Wait, is that node still running??
18:04 <efried> I need to look at the REST logs quick.
18:04 <thorst> no, finished finally
18:04 <thorst> kick off a recheck
18:04 <thorst> wait an hour
18:04 <thorst> then you have two hours with it
18:04 <thorst> although, REST logs are on the NL
18:04 <thorst> and that's still alive
18:04 <esberglu> The node is still alive though
18:04 <efried> ohh, riight.
18:04 <esberglu> Since they aren't deleting
18:04 <efried> Yeah, I need the neo.  PM me that?
18:04 <thorst> I have the neo IP
18:05 <thorst> sending your way
18:05 <efried> Okay yeah.
18:06 <esberglu> I found something suspicious that might be behind the nodepool stuff
18:06 <thorst> doooo tell
18:06 <esberglu> When I check the status of jenkins
18:06 <efried> So this is a problem we were seeing with the requests library defaulting Content-Length to zero when it detects a certain type of input stream.
18:06 <efried> I thought I had it sussed.
18:06 <efried> And I tested it, dammit.
18:07 <esberglu> The status is
18:07 <esberglu> Active: active (exited)
18:07 <esberglu> I'm looking at what that means
18:07 <esberglu> But I think that should be
18:07 <esberglu> active (running)
18:07 <thorst> actively exited  :-)
18:08 <efried> So assuming it really doesn't work the way I've got it in pypowervm now, the possible solutions are:
18:08 <efried> In pypowervm, quit populating the stupid f_size field on the File.  apearson asserts that, if that field is left empty, they don't do that check.
18:09 <efried> In REST, accept Content-Length of zero.
18:09 <efried> apearson proposed a change to do just that, a few months ago, but we decided we didn't need it, so we abandoned it.
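[Editor's note: for illustration, the REST002C failure and both proposed fixes can be modeled with a small sketch. This is hypothetical code, not the actual pvm-rest check: the server compares the PUT's Content-Length header against the size recorded on the File metadata, and - per efried above - the requests library sends 0 when it cannot size the input stream.]

```python
def check_upload_length(declared_size, content_length, tolerate_zero=False):
    """Sketch of the server-side size check behind REST002C.

    declared_size: the f_size recorded on the File metadata, or None if
    the client never populated it (efried's proposed pypowervm fix).
    content_length: the Content-Length header on the PUT.
    tolerate_zero: apearson's abandoned REST-side change - accept a
    zero Content-Length regardless of the recorded size.
    """
    if declared_size is None:
        return True   # no metadata size recorded, nothing to compare
    if tolerate_zero and content_length == 0:
        return True
    return content_length == declared_size

# The failing CI run: metadata said 104,857,600 bytes, the header said 0.
assert not check_upload_length(104857600, 0)
# Either proposed fix would let the upload through:
assert check_upload_length(None, 0)                        # stop filling f_size
assert check_upload_length(104857600, 0, tolerate_zero=True)  # REST tolerance
```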
18:09 <thorst> efried: maybe we could make a local2zero patch that we put in pypowervm as part of the CI?
18:09 <efried> I can't imagine this has anything to do with local vs remote.
18:09 <thorst> if we do the content length of the change in nova-powervm?
18:10 <efried> That's the thing - local or remote should now be using the exact same code path - IO_STREAM and API upload.
18:10 <thorst> (which I like more now...)
18:10 <thorst> my joke was lost on you... disregard the patch thing
18:11 <efried> No, but it's a valid thing to think through anyway.
18:11 <efried> We can't finagle it from nova-powervm.  Change has to be in pypowervm.
18:12 <thorst> I'd say we should stop filling in the field on the file
18:12 <thorst> and try that
18:12 <thorst> MAYBE do a patch to get CI running until 1.1.2 goes out?
18:12 <thorst> (which may need to be sooner rather than later...)
18:13 <efried> esberglu How long to get a new quick-and-dirty pypowervm patch pulled in?
18:13 <esberglu> Just have to reload the jobs
18:14 <efried> esberglu 5109 (thorst)
18:14 <efried> The real solution will be somewhat less sledgehammery.
18:16 <efried> Simpler might be to get apearson to put out a new pvm-rest with the Content-Length=0 tolerance and respin the node image with it.
18:17 <esberglu> efried: Jobs are updated. Shoot
18:17 <efried> Shoot what?  Recheck?
18:18 <efried> thorst The other thing we could do is reintroduce IterableToFileAdapter.  Then the solution would be in nova-powervm only.  We only ever saw this problem through the use-the-iterable-directly-from-glance path (even though I f**king FIXED it).
18:20 <thorst> well, maybe that's our solution for earlier releases
18:20 <thorst> and maybe even this release
18:20 <thorst> until we do 1.1.2 properly...
18:20 <efried> The earlier releases do have IterableToFileAdapter, I believe.
18:20 <efried> Can't remember exactly why.
18:21 <esberglu> efried: Yeah you can recheck, 5109 is in
18:21 <efried> Because we thought we could make them work with an older pypowervm - IO_STREAM + coordinated?
18:21 <efried> k, recheck going.
18:32 <esberglu> Nice we got a +2 on the 1st in-tree patch
18:33 <efried> I think he's taking it back.
18:33 <esberglu> well shit
18:33 <esberglu> Small stuff? I didn't look yet
18:33 <esberglu> Assuming it must be
18:35 <efried> esberglu Will in-tree CI pass if we put up a new patch there?
18:35 <thorst> they don't translate log messages anymore... interesting!
18:35 <esberglu> It won't even run
18:36 <efried> esberglu Can it be MADE to run?
18:36 <efried> Or maybe we just stay quiet about it and let it NOT run.
18:36 <efried> thorst See comment about differentiating between our driver and libvirt+phyp
18:37 <efried> I didn't even know that existed.
18:37 <efried> Or what it means.
18:37 <efried> Can you help me come up with some text quick to explain that in our docstring?
18:37 <thorst> yeah... the libvirt+phyp project is something that I believe is dead, calls back to an HMC, and I have no clue how it does virtual I/O
18:37 <thorst> adreznec knows a bit more.
18:37 <esberglu> efried: I could potentially turn on nova runs long enough to just let your run through then disable right after
18:38 <efried> esberglu Stand by with that.
18:38 <adreznec> libvirt+phyp is basically just it sshing into HMC/IVM and running CLI
18:39 <adreznec> hasn't been updated in ages afaik
18:39 <efried> Should I just add "...using NovaLink"?
18:39 <efried> Or "PowerVM NovaLink implementation of Compute Driver."?
18:40 <thorst> yeah, but I do wonder if those questions will persist.  But I don't feel like code is the right place to sort that out
18:41 <efried> thorst adreznec Do virt drivers include any kind of readme in the code tree?  Wouldn't have thought so.
18:42 <thorst> I don't think so
18:42 <adreznec> I mean you could definitely add NovaLink in there to clarify
18:42 <thorst> but we do have our powervm wiki
18:42 <adreznec> Not that I've seen
18:42 <efried> Kind of place where we would say, "Requires NovaLink with such-and-such packages blah blah"
18:42 <thorst> and maybe reference it there?
18:42 <thorst> we can put a blurb in that...
18:42 <thorst> and then in the code, maybe a link to the wiki?  That still feels a bit weird but...
18:42 <adreznec> There's this for hyper-v
18:43 <adreznec> Maybe we could do something similar for powervm?
18:43 <thorst> right, but in time?
18:43 <thorst> like... we'd need a while to get to the point to do that
18:43 <thorst> like more than skeleton code  :-)
18:43 <adreznec> well... maybe it's just a skeleton doc
18:43 <thorst> peace with that
18:44 <efried> Should I include that PowerVM link in the docstring?
18:44 <adreznec> I mean the lxc one is pretty short
18:45 <efried> Yabut those are not in the code tree.
18:45 <efried> I think mriedem just wants the docstring to make it clear that this isn't libvirt+phyp.
18:45 <efried> I don't think he's asking for a full reference doc here.
18:45 <efried> I'm looking short-term, just in this change set.
18:47 <adreznec> I vote we just add the NovaLink word like you said
18:47 <thorst> then yeah, call out that it requires PowerVM NovaLink
18:47 <adreznec> and link to the novalink page
18:47 <adreznec> and be done with it then
18:51 <esberglu> efried: Not gonna turn on CI quick for in-tree #1?
18:51 <efried> thorst What do you think?
18:52 <thorst> if it'll pass... OK
18:52 <efried> We've demonstrated that it passes.  Not sure mriedem is going to be looking for it again at this point.
18:52 <efried> Long as it doesn't show up failed.
18:52 <esberglu> I think we just leave it
18:52 <esberglu> If he asks, just say our CI is having issues
18:52 <esberglu> We know that it is fine at this point
18:52 <esberglu> It has passed like 20 times
18:52 <esberglu> Well not that many but still
18:53 <esberglu> And it's already +2 again
18:55 <esberglu> back to the CI fiasco!
18:57  * efried is rebasing the whole pile again, and getting rid of _L*
18:59 <thorst> so who is doing something for CI now?
18:59 <thorst> what's the two-sentence summary there?
19:08 <esberglu> Still have no real idea what's going on with the nodepool deletions
19:08 <esberglu> But it is also hitting staging now
19:08 <esberglu> Which helps narrow it down
19:09 <efried> thorst bailed on us.
19:09 <efried> I'm getting ready to test the local2remote stuff.
19:09 <efried> But first, lunch.
19:10 <efried> When thorst gets back, remind me to tell him about TaskFlow's PrintingDurationListener.  Think it means we can get rid of BaseTask.
19:11 <esberglu> Geez late lunch. Will do
19:44 <thorst> looks like POK net just died...
19:53 <efried> thorst TaskFlow has a way to do something like:
<efried> with PrintingDurationListener(engine,
19:54 <efried> And that'll log the duration of each task for us
19:55 <efried> So we could get rid of most of our base PowerVMTask.
19:55 <efried> The only thing it would theoretically still need would be the instance var we're requiring in the constructor, used for logging.  But we could just make that mandatory in the tasks themselves.
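[Editor's note: the pattern efried is describing - timing each task from a listener instead of in a PowerVMTask base class - looks roughly like this pure-Python stand-in. TaskFlow's actual PrintingDurationListener hooks engine notifications; the class below is only an illustration, and keeps the instance-uuid tag thorst wants for log filtering.]

```python
import time

class DurationListenerSketch:
    """Illustrative stand-in for a TaskFlow duration listener: record how
    long each task runs, tagged with the instance uuid so per-instance
    log filtering still works (thorst's concern below).  Hypothetical
    class, not TaskFlow or nova-powervm code."""

    def __init__(self, instance_uuid, log):
        self.instance_uuid = instance_uuid
        self.log = log          # any list-like sink; real code would use oslo.log
        self._starts = {}

    def on_task_start(self, task_name):
        self._starts[task_name] = time.monotonic()

    def on_task_end(self, task_name):
        elapsed = time.monotonic() - self._starts.pop(task_name)
        self.log.append("[instance: %s] task %s took %.3fs"
                        % (self.instance_uuid, task_name, elapsed))

# Usage sketch: the engine (or a wrapper) fires start/end callbacks.
log = []
listener = DurationListenerSketch("abc-123", log)
listener.on_task_start("plug_vifs")
listener.on_task_end("plug_vifs")
```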
19:56 <thorst> yeah... I do like having it in oslo.log though
19:56 <efried> Having what in oslo.log?
19:56 <thorst> it's so nice for a good log viewer
19:56 <thorst> just putting in the instance.uuid and getting all the messages related to that
19:57 <efried> Oh, yeah, we can still do that, just saying we would change the class sigs for our Tasks to accept that explicitly.
19:57 <thorst> if you lose it... makes debug harder.
19:57 <thorst> even for printing the amount of time each task took?
19:57 <efried> Oh, you want the instance ID in that one too...
19:57 <efried> Well, we could extend the listener.
19:58 <efried> Anyway, wishlist.
19:58 <efried> I'll open a lp bug so we don't forget.
19:58 <efried> Cause it ain't happenin now.
20:02 <efried> FYI
20:02 <openstack> Launchpad bug 1680947 in nova-powervm "Use PrintingDurationListener, get rid of PowerVMTask" [Wishlist,New]
20:17 <thorst> fair enough
20:50 <efried> thorst Apparently I can't have >=1.1.0 in nova requirements if g-r says >=
20:50 <efried> That's not what they done taught me in 3rd grade math, but whatever.
20:51 <thorst> well, makes sense to me
20:53 <efried> >=1.1.1 is >=1.1.0.  And I woulda thought g-r would go first.
20:53 <efried> what, as I say, ever.
20:53 <esberglu> It goes the other way I think
20:54 <esberglu> So you can have >=1.1.2 in nova reqs?
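[Editor's note: a sketch of one plausible reading of the rule efried ran into - the requirements check compares version floors, so a project minimum below the g-r minimum is rejected even though >=1.1.0 mathematically allows everything >=1.1.1 does. The helper names are hypothetical, real tooling parses specifiers with pip/packaging, and the chat itself leaves open exactly how the check treats a higher floor.]

```python
def parse_version(v):
    # Hypothetical helper: "1.1.1" -> (1, 1, 1).  Simple dotted release
    # versions compare correctly as integer tuples.
    return tuple(int(part) for part in v.split("."))

def floor_allowed(project_floor, gr_floor):
    # Sketch of the global-requirements rule: a project's minimum version
    # may not be LOWER than the g-r minimum, even though ">=1.1.0"
    # contains every version ">=1.1.1" allows.
    return parse_version(project_floor) >= parse_version(gr_floor)

assert not floor_allowed("1.1.0", "1.1.1")  # efried's rejected combination
assert floor_allowed("1.1.1", "1.1.1")      # matching floors pass
assert floor_allowed("1.1.2", "1.1.1")      # under this reading, a higher floor passes
```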
20:54 <esberglu> Not really sure how all that works though
20:55 <esberglu> I just thought of something that might be responsible for the nodepool stuff
20:55 <esberglu> Jenkins has this ssh slaves plugin that we use
20:55 <esberglu> And it published a warning about man-in-the-middle attacks and recommended upgrading
20:55 <esberglu> Jenkins has this thing where you can delay the upgrade until next restart
20:55 <esberglu> So I did that and kind of forgot
20:55 <esberglu> But the timeframe makes sense
20:56 <esberglu> So once the network is back up I'm gonna try downgrading that to the previous version
20:56 <esberglu> (it turns out staging is NOT broken, just production)
20:57 <esberglu> And staging has the old version
20:57 <esberglu> Downgrading isn't a good plan long term because of that vulnerability
20:57 <esberglu> But hopefully it will at least confirm my suspicion
21:01 <thorst> and maybe let us fix it in staging instead of trying to fix it in production
21:14 <esberglu> efried: You still around?
21:14 <esberglu> Looks like your run didn't go any better this time around
21:15 <esberglu> Same error it looks like....
21:16 <esberglu> Huh that patch didn't get into pypowervm...
21:18 <esberglu> I can't get onto anything to see why not right now though
21:19 <esberglu> efried: I'm heading out for the day. But if the network comes back up at some point I can recheck this weekend

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at!