13:31:38 <esberglu> #startmeeting powervm_ci_meeting
13:31:39 <openstack> Meeting started Thu Feb  9 13:31:38 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:31:40 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:31:43 <openstack> The meeting name has been set to 'powervm_ci_meeting'
13:31:46 <thorst_> o/
13:31:49 <xia> o/
13:31:56 <xia> #help
13:32:16 <efried> o/
13:32:58 <efried> #topic Power-off issues?
13:33:48 <esberglu> Sure
13:34:43 <efried> So the good news (esberglu please confirm) is that PS11 of the power.py refactor ran through the CI with no appreciable difference in failure rate.
13:34:54 <esberglu> Yep
13:35:13 <efried> Which means it's working to the point of backward compatibility, at least in the code paths that are hit by the CI.
13:35:25 <efried> With the caveat that that doesn't include some things, including IBMi.
13:35:44 <efried> The bad news (well, neutral news, really) is that we're still seeing failures.
13:35:55 <thorst_> still seeing power off failures or general failures?
13:35:57 <efried> Which I really don't want to address in the current patch set, which I'd like to reserve for pure refactor.
13:36:05 <efried> power off failures, at least.
13:36:43 <efried> I'm still working on the whole "enumerating all the job options" thing.  Should have a patch set up for a look by lunchtime, I believe.
13:37:05 <thorst_> OK - just to verify...with the patch, we still see power off failures?
13:37:10 <efried> yes
13:37:10 <thorst_> or just 'failures'
13:37:13 <thorst_> k.
13:37:21 <esberglu> Still power off failures
13:37:46 <efried> Before we merge that sucker, I would like to run some live tests on IBMi.  nvcastet has volunteered to help me out in some fashion there.  I think by sliding me a disk with an IBMi image on it.
13:38:23 <thorst_> so efried, was the refactor not supposed to help with power off failures?
13:38:26 <thorst_> just make it more manageable?
13:38:26 <efried> No.
13:38:39 <efried> Just make it easier to debug and fix said failures.
13:38:48 <thorst_> ok
13:38:52 <thorst_> confusion cleared.
13:39:15 <efried> So I think once I get the PowerOpts patch up, I'll first investigate those failures and try to put up a separate change set (on top of the refactor) that addresses them.
13:39:32 <efried> With esberglu's handy-dandy live patching gizmo, we ought to be able to run that through the CI fairly easily, yes?
13:39:39 <esberglu> Yep
13:39:46 <efried> #action efried to finish proposing PowerOpts
13:39:59 <efried> #action efried to investigate power-off failures and propose a fix on top.
13:40:20 <efried> #action efried to live test on IBMi (and standard, for that matter).
13:40:53 <efried> Anything else on the power-off issue for now?
13:41:05 <efried> esberglu Other topics?
13:41:21 <esberglu> #topic CI redeploy
13:41:45 <esberglu> Just wanted to say that the redeploy finished last night
13:42:04 <thorst_> jobs going through yet?
13:42:07 <esberglu> So now we are running 1.0.0.5 across the board
13:42:11 <esberglu> Yep.
13:42:13 <thorst_> neat
13:42:25 <esberglu> I haven't looked at any results yet though
13:42:34 <thorst_> that's good to know for the CI host server....CPU utilization on that sucker is like 10%
13:42:40 <thorst_> after we moved everything to the SAN
13:44:11 <esberglu> That's all I had for that, just wanted to update
13:44:27 <esberglu> #topic In Tree CI
13:45:11 <esberglu> I think we need to talk about how we want to handle moving the in-tree runs from silent to check when we are ready
13:45:44 <esberglu> Because if we start posting results, it will fail everything until PS1 is through
13:45:56 <esberglu> which is a lot of red coming from our CI
13:46:20 <efried> Can it be as simple as checking for the presence of, say, our driver.py?
13:46:40 <efried> Or do we not know that until too late in the process?
13:47:09 <efried> I guess we could inspect the commit tree and bail out if we don't see that first change set's commit hash / Change-Id.
13:47:49 <thorst_> efried: yep
13:47:52 <thorst_> that's what we should do
13:48:04 <thorst_> if the commit message (probably?) has the word powervm in it, we publish.
13:48:38 <thorst_> and for the oot driver, if the change set touches files from the nova project and the commit message contains the word powervm, we should just not run (because we'll fail)
13:48:44 <thorst_> (due to duplicate options)
13:48:55 <efried> Or if the file list in the change set contains '/powervm/'?
13:49:19 <efried> Wait, why do we need to do something special for OOT?
13:49:35 <thorst_> The OOT driver will always fail on an IT driver change set.
13:49:37 <efried> Oh, you mean we don't run the *in-tree* CI on *out-of-tree* patch sets.
13:49:44 <thorst_> because the OOT driver has duplicate options
13:49:57 <efried> Gotcha.  So it should be as simple as whether the change set is in the nova-powervm project, neh?
13:50:04 <thorst_> so if a patch set comes in that is in tree for PowerVM, we should avoid running the OOT driver change
13:50:12 <thorst_> otherwise we post a +1 and a -1 in the same patch
13:50:21 <efried> Sorry, yeah, I had it backwards.
13:50:26 <thorst_> k
13:50:40 <thorst_> once it merges, we can remove the opts from the oot.
13:50:44 <efried> Right.
13:50:45 <thorst_> and be happy again
13:50:59 <efried> So esberglu Do you know how to make all of that happen?
13:51:10 <efried> I can help out with the git commands if you need.
13:51:45 <efried> #action esberglu to set up mutually-exclusive running/publishing of CI results for in- and out-of-tree.
13:51:56 <efried> #action efried to assist as needed.
13:52:07 <efried> (that's not going to show up right in the minutes)
13:53:15 <esberglu> Cool. That's all I had for in-tree
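
A minimal sketch of the gating check discussed above, deciding when to run the out-of-tree job and when to publish in-tree results. The project name, helper names, and the '/powervm/' path test are illustrative assumptions, not the actual CI job code; it simply mirrors the commit-message and file-list checks thorst_ and efried described:

    # Illustrative only -- function names and checks are assumptions based on
    # the discussion above, not the real CI scripts.
    import subprocess


    def changed_files(repo_dir):
        """Files touched by the change set currently checked out at HEAD."""
        out = subprocess.check_output(
            ['git', 'diff', '--name-only', 'HEAD~1', 'HEAD'], cwd=repo_dir)
        return out.decode().splitlines()


    def commit_message(repo_dir):
        """Full commit message of the change set at HEAD."""
        out = subprocess.check_output(
            ['git', 'log', '-1', '--format=%B', 'HEAD'], cwd=repo_dir)
        return out.decode()


    def should_run_oot_job(project, repo_dir):
        # Skip the out-of-tree driver run on in-tree powervm changes; the OOT
        # driver's duplicate config options would make it fail anyway.
        if project == 'openstack/nova' and 'powervm' in commit_message(repo_dir).lower():
            return False
        return True


    def should_publish_in_tree(project, repo_dir):
        # Only post in-tree results for nova changes that actually carry the
        # in-tree driver (file list contains '/powervm/'); everything else
        # stays silent until PS1 merges.
        if project != 'openstack/nova':
            return False
        return any('/powervm/' in f for f in changed_files(repo_dir))
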
13:53:24 <esberglu> Any other topics?
13:53:33 <thorst_> I'm assuming that once we get in-tree going, we flip back to ansible CI?
13:53:42 <thorst_> I know that the openstack-ansible team is still waiting there.
13:53:52 <esberglu> Yep
13:53:52 <adreznec> Yeah
13:54:04 <adreznec> FYI I discussed that a bit with Jesse last week
13:54:13 <thorst_> ok - yeah, that was my next question
13:54:22 <thorst_> do they understand we still are targeting that?
13:54:27 <thorst_> (seems like they do)
13:54:29 <adreznec> Gave him a bit of status on where we were with CI (the whole in-tree driver, etc)
13:54:34 <adreznec> Yeah, they do
13:54:40 <thorst_> k.  Assume you'll connect up more at PTG?
13:54:44 <adreznec> Yep
13:54:46 <adreznec> That was the plan
13:55:03 <thorst_> rockin
13:55:08 <thorst_> that was the only other thing I had
13:55:27 <adreznec> Just curious - wangqwsh esberglu how much work do you think is left there?
13:56:05 <thorst_> I know the whole OVS thing needs to be solved...
13:56:40 <wangqwsh> openstack can be installed via osa, but we can't run tempest against it yet
13:58:18 <wangqwsh> so we need to write some tempest code for the powervm osa ci
13:59:08 <efried> Is that what Nilesh is supposed to be doing?
13:59:23 <thorst_> Nilesh is supposed to do some tempest tests with it, yeah
13:59:31 <thorst_> we know that other env's have gotten that running
14:00:27 <adreznec> Right, you can definitely run tempest against OSA with PowerVM. For the most part it really shouldn't be all that different than running it against a devstack AIO
14:00:34 <adreznec> Since it's just calling into the APIs
14:04:19 <esberglu> Cool. Sounds like we are starting to get that back on the radar, but we aren't too far away
14:04:37 <esberglu> Anything else?
14:05:28 <wangqwsh> yes, when can we continue work on the powervm osa ci? after in-tree is ready, right?
14:08:29 <esberglu> If anyone has free cycles they can go for it. I reserved systems for the infrastructure
14:08:31 <wangqwsh> a question related to converting an instance's uuid to a powervm uuid.
14:08:37 <esberglu> Otherwise yes, after in-tree
14:08:54 <efried> wangqwsh Is that a CI-related question, or should it wait til after the meeting?
14:09:27 <wangqwsh> not ci question
14:09:29 <wangqwsh> ok
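
On wangqwsh's UUID question, a minimal standalone sketch of the usual OpenStack-to-PowerVM UUID mapping (clear the high bit of the first nibble, then uppercase). This assumes that rule still holds; pypowervm ships a helper for it (convert_uuid_to_pvm in pypowervm.utils.uuid, if memory serves), which is what the driver should use rather than this hand-rolled version:

    # Illustrative only -- assumes the PowerVM UUID rule is "byte 0, bit 0 is
    # always 0", i.e. mask the first hex digit to the 0-7 range, then uppercase.
    def openstack_to_pvm_uuid(os_uuid):
        """Map an OpenStack instance UUID string to a PowerVM LPAR UUID."""
        pvm = '%x%s' % (int(os_uuid[0], 16) & 7, os_uuid[1:])
        return pvm.upper()

    # Example (hypothetical UUID):
    # openstack_to_pvm_uuid('f8f67a09-5b9b-4b8a-8e2f-3a2c9b0d1e4f')
    # -> '78F67A09-5B9B-4B8A-8E2F-3A2C9B0D1E4F'
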
14:09:46 <esberglu> #endmeeting