14:02:15 #startmeeting PowerVM Driver Meeting
14:02:16 Meeting started Tue Jun 26 14:02:15 2018 UTC and is due to finish in 60 minutes. The chair is edmondsw. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:20 The meeting name has been set to 'powervm_driver_meeting'
14:02:32 #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
14:02:45 ping gman-tx mdrabe mujahidali chhagarw
14:03:08 #topic In-Tree Driver
14:03:10 #link https://etherpad.openstack.org/p/powervm-in-tree-todos
14:04:05 edmondsw: It would be neat to include TechM in these meetings once they're set up.
14:04:10 efried agreed
14:04:26 I don't know that we have a lot of IT work going on right now
14:04:52 efried anything you want to talk about here?
14:05:04 Will defer to dev passthru topic.
14:05:10 k
14:05:21 #topic Out-of-Tree Driver
14:05:39 #link https://etherpad.openstack.org/p/powervm-oot-todos
14:06:10 mdrabe any update on the MSP work?
14:06:19 Haven't started it yet
14:06:53 k
14:07:10 I saw a new patch set from chhagarw but haven't looked yet
14:07:43 esberglu has proposed some things porting changes back from the IT driver
14:07:52 I think I still owe him a couple reviews there
14:08:11 Yep. They should be pretty easy reviews
14:08:12 I think I'm up to date on those.
14:08:21 yeah, I looked at a couple on Friday I think
14:08:42 maybe all of them? I don't recall now
14:08:42 Everything we needed to backport is either covered there or by a TODO on the etherpads
14:08:58 tx
14:09:02 anything else for OOT?
14:09:20 #topic Device Passthrough
14:09:24 efried ^
14:09:53 Okay, so Jay is back from his PTO (btw he left Verizon and is now working for yahoo) and I'm hoping he'll merge the last nrp-in-alloc-cands patch today.
14:10:00 or at least this week.
14:10:26 Which will clear the way for us to implement our update_provider_tree with nested GPUs
14:10:44 Which means edmondsw and I need to talk about what that's going to look like.
14:10:49 And probably write it down.
14:11:29 Though procedurally we want to avoid having to propose a blueprint to nova, because we're past spec freeze. We maybe should have done that at the beginning of the cycle.
14:12:00 do it OOT first?
14:12:12 and port IT for stein?
14:12:19 Yeah, that's a good point.
14:12:58 Anyway, the first todo is to have that design discussion. edmondsw, when are you available to do that?
14:13:17 good question :)
14:13:31 I can probably find some time this afternoon if that works
14:13:36 yuh.
14:13:50 Does mdrabe need to be involved too?
14:14:27 doesn't have to be, but he might want to join us
14:15:04 invite him
14:15:45 sounds like we're done there for now
14:15:47 Oh, I figured you would do the calendar thing since you've got all the restrictions.
14:15:54 efried ok :)
14:15:59 Anyway, beyond that I've still got some fup to do integrating new placement-isms into the scheduler - the newer generation handling stuff is exposing races (as it should) and I need to fix 'em.
14:16:12 and I owe cyborg some spec review.
14:16:34 The more I look at cyborg the more involved I feel we're going to need to be to do our accelerator stuff.
14:16:46 Like, we're going to need to write os-acc plugins as soon as that's a thing.
14:16:48 yes, I've wanted us involved from the start
14:17:06 https://review.openstack.org/#/c/577438/ <== review me
14:17:20 I tried to push that when I first heard of it but nobody had bandwidth to get really involved there
14:17:45 I'm glad you're getting more involved there
14:17:46 Well, we're only now getting to the point where the nova/cyborg interaction is being defined.
14:17:57 so it's not like we missed any boat or anything.
14:18:02 right, I think we're fine
14:18:16 yup, just need to stay on top of it.
14:18:25 ++
14:18:29 anything else?
14:18:43 Not sure if there are others on the pvc team who want/need to become familiar in this space.
14:19:08 nope, that's it from me.
14:19:19 at this point I think it would be me and mdrabe, maybe madhavi as well
14:19:44 #topic PowerVM CI
14:19:57 #link https://etherpad.openstack.org/p/powervm_ci_todos
14:20:30 as of yesterday, I think the CI was ill. Is it back? (I haven't looked yet today)
14:20:42 PowerVM CI is failing for everything
14:20:48 http://ci-watch.tintri.com/project?project=nova
14:20:53 I looked at ci-watch yesterday, looks like it has been failing since it got back online
14:21:05 Failing the test_show_update_rebuild_list_server test
14:21:11 mujahidali have you looked at this?
14:21:18 Which was updated here https://github.com/openstack/tempest/commit/1fa4464404dd4400e1c0669dda29d696d3e5badb
14:21:29 And here's the gerrit review https://review.openstack.org/#/c/526485/
14:21:54 I haven't looked into why that test is failing, but that should be enough to get mujahidali started
14:22:14 esberglu tx
14:22:30 We can add it to the skip list temporarily if it isn't easy to debug
14:22:49 I'll try to help take a look at that after the mtg
14:23:54 esberglu there was an email question to you about moving the zuul merger instances to mujahidali
14:24:21 please look for that and reply when you can, should be quick
14:24:35 edmondsw: I responded to the thread right before the meeting
14:24:37 yeah, I looked into the failure, but didn't get much from the logs, thanks esberglu for the help
14:24:42 oh, I see it now, sorry
14:25:24 mujahidali when the CI is having issues like this, please ping me and/or efried so we know
14:25:42 sure
14:25:43 that you've seen it
14:25:51 and are working it. And maybe we can help
14:26:50 the change that was blocking stable/pike vSCSI CI work merged (in neutron, if I remember correctly), so that should be unblocked when we can get back to it
14:27:49 Ocata?
14:28:02 https://review.openstack.org/#/c/574726/
14:28:05 esberglu oh, yeah... ocata
14:28:25 in devstack, for neutron
14:29:05 mujahidali in last week's meeting we agreed that you would split out the pike and queens stuff so that we could get that merged
14:29:09 and we would do ocata later
14:29:22 but if that has merged now, it may be easier to just do ocata here as well
14:29:27 unless you hit more issues
14:29:34 +1
14:29:46 doesn't look like anything has happened with 6596 since last week
14:30:19 mujahidali please take a look at that once we get the CI working again
14:30:42 sure
14:30:55 will try to close it asap
14:31:37 esberglu do we have to use x86 for the zuul merger nodes, or could they be ppc64le?
14:31:50 I have started looking into the multinode setup as well; currently trying out the steps mentioned by esberglu on the staging environment.
14:32:58 ++
14:33:03 edmondsw: They could be ppc64le
14:33:40 I don't know of any reason they couldn't be, I should say
14:33:55 Should be fine as long as it's still Ubuntu 16.04, I would think
14:34:03 so mujahidali you might want to try setting those up in os4pcloud, where I think we can keep them from expiring
14:35:00 not sure it's worth the risk... but something to consider
14:35:22 esberglu any other pros to having them in jupiter?
14:35:37 I will first deploy the ppc64 instances and will add them to the host list and see if it's working, then will revert to jupiter
14:35:53 e.g. other things run there, so we're not susceptible to networking issues with os4pcloud today
14:35:59 *if ppc not working
14:36:44 mujahidali: Test it out on staging first
14:36:52 ++
14:36:57 ++
14:37:10 anything else for CI?
14:37:16 are there any open issues with nova-powervm CI?
14:37:31 chhagarw yes, the CI is totally busted right now
14:37:38 we're working on it
14:38:56 #topic Open Discussion
14:39:14 we got back the draft image of our PowerVMStackers mascot
14:39:20 I emailed it around
14:39:41 everyone ok with it? The feedback I've heard so far has been positive
14:39:44 edmondsw: I like it
14:39:55 +1
14:40:35 efried and gman-tx also gave thumbs up
14:41:00 and svenkat
14:41:27 unless I hear something negative shortly, I'll reply to Kendall and tell her it's good.
14:41:34 that's all I had... anything else?
14:42:40 chhavi__ did you want to give an update on iSCSI work?
14:43:08 you missed the OOT discussion where that would normally go
14:44:06 guess not
14:44:11 alright, thanks folks
14:44:13 #endmeeting