14:01:04 #startmeeting PowerVM Driver Meeting
14:01:04 Meeting started Tue Aug 7 14:01:04 2018 UTC and is due to finish in 60 minutes. The chair is edmondsw. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:08 The meeting name has been set to 'powervm_driver_meeting'
14:01:10 edmondsw: I'm going to be late, gotta step away for 10.
14:01:18 efried ack
14:01:25 link: https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
14:02:21 mdrabe mujahidali you around?
14:02:34 yes
14:02:35 yep
14:02:46 alright, we'll go ahead and get started and efried can catch up
14:03:24 #topic In-Tree Driver
14:03:49 I don't have anything to discuss here... anyone else?
14:04:11 esberglu did all the blueprints you completed get marked as such?
14:04:33 edmondsw: I think so. I'll verify and let you know
14:04:38 tx
14:05:18 I was hoping that we would be able to do a little more work on vSCSI (multiattach and bfv) but I think we've run out of time there
14:05:34 so I think that can be marked complete and we'll have to pick up in a later release
14:06:02 same for localdisk (image caching)
14:06:26 #topic Out-of-Tree Driver
14:06:36 #link https://etherpad.openstack.org/p/powervm-oot-todos
14:07:28 the switch from fileio to loop merged: https://review.openstack.org/#/c/568374/
14:08:16 and we got networking-powervm fixed https://review.openstack.org/#/c/588096/ after a recent neutron change
14:08:34 and updated to use neutron-lib 1.18 https://review.openstack.org/#/c/585469/
14:09:06 o/ Sorry about that.
14:09:25 np
14:09:35 I think the top remaining item for OOT is getting the docs fixed
14:09:49 but I haven't had time to spend on that
14:09:54 so hopefully this week
14:09:58 any other comments for OOT?
14:10:25 oh mdrabe I'm still trying to get a dual-node devstack for you to test MSP
14:10:54 been trying to kill many birds with one stone there, and it's been painful. Lab outages haven't been helping either
14:11:14 Yup thanks for letting me know
14:11:17 the good news is that when I get it working, we should be able to devstack much easier in future
14:11:30 WHEN I get it working
14:11:48 #topic Device Passthrough
14:11:52 efried take it away
14:12:33 Worked a bit more code yesterday; now three patches in the series, bottom two ready (ish). I keep revising earlier patches as I work on later ones, so expect some continuing churn, but feel free to put eyes on the code.
14:13:50 The topic has been getting some attention from others - currently discussing with CERN in #openstack-nova, who want to help contribute to the effort by writing a similar spec for Nova for Stein.
14:14:10 interesting
14:14:45 For libvirt, obviously, but this would be great exposure, good to have others reviewing and contributing mind share, good to have CERN flushing out stuff that applies to huge deployments, etc.
14:15:49 This is clearly what we were going for eventually anyway - having a generic device passthrough framework in nova proper - so it's nice to have the prospect of not doing all the work ourselves :)
14:16:11 I definitely don't want it all powervm-specific
14:16:27 or any more than it has to be
14:17:01 right, for sure. E.g. the traits, I'm thinking some of those will be common, but e.g. drc name/index will be power-specific. We'll need to figure all that out.
14:17:09 yep
14:17:26 I suspect there will be some overlap in the traits, but we've "prepared" by namespacing ours with _POWERVM_ so if that happens, it'll be okay.
14:17:59 anyway, good to be getting attention from outside of powervm, is what I'm getting at.
14:18:24 #link bottom of device passthrough series https://review.openstack.org/#/c/579289/
14:18:33 That's it for me, unless questions/comments/concerns
14:19:44 sounds good, thanks
14:19:52 #topic PowerVM CI
14:20:04 #link https://etherpad.openstack.org/p/powervm_ci_todos
14:20:11 mujahidali ^
14:20:23 first, how is the CI looking after the outage?
14:20:49 I redeployed the complete CI and it's looking fine now.
14:20:58 \o/
14:20:59 We're in the green again! :)
14:21:27 helps when neutron doesn't kick us while we're down
14:22:25 What's next here? Finishing up the stable vSCSI?
14:22:40 I tried to stack the stage env for multinode setup, but when I posted a comment "powervm:recheck" I didn't see any job triggered on jenkins.
14:23:26 mujahidali: Did you check the zuul logs? And the zuul merger logs? That's where I usually start if runs aren't getting kicked off properly
14:23:51 Zuul is responsible for monitoring gerrit for the powervm:recheck comments and kicking off jobs
14:24:57 I saw the zuul merger logs and I think there is some repo cloning problem.
14:25:15 is this related to that one VM that you wanted to resize with more disk?
14:25:36 no, it's on the other vm.
14:26:11 edmondsw: If you're resizing that one, the staging openstack controller, I would also resize the production controller
14:26:27 esberglu mujahidali I haven't figured out how to resize it
14:27:00 Could also experiment with smaller images than 30G
14:27:14 not top of my priority list atm... not knowing what this is affecting
14:28:21 To change the size of the default image all you need to do is change the powervm.large flavor in neo-os-ci/ci-ansible/roles/devstack-control/tasks/main.yml
14:28:54 and then redeploy to clear out the old ones?
14:29:05 Yeah you would have to redeploy cloud and management
14:29:18 that should be something mujahidali can try
14:29:30 I will give it a try on stage env.
14:29:48 right, I meant redeploy staging
14:30:04 yes
14:30:17 mujahidali: We used to use 16G images which were too small. I don't know why we went to 30G. I'd try somewhere in the middle
14:30:29 maybe 25GB ?
14:31:00 Cutting to 25GB would save 20G of space on the system
14:31:27 right
14:32:05 I think I just need to change the disk option to 25 and redeploy the stage env esberglu??
14:32:36 Yep. Change the powervm.large flavor size in that file I posted above and redeploy
14:32:52 edmondsw: Something else I was thinking about. Right now the vSCSI CI run is on demand only
14:32:53 yeah, got it.
14:33:05 Which means it is essentially never being run
14:33:20 Might be nice to have some sort of scheduled runs just to verify it isn't breaking
14:33:33 Once a week or however often you think is reasonable
14:33:36 esberglu agreed
14:33:50 I'll add that to the todo list mujahidali
14:34:20 okay
14:34:36 Has any progress been made on stable vSCSI? I left some comments on that patch, but idk if the fixes were ever attempted with all of the outages
14:35:02 Let me know if you guys have any questions/issues going forward there
14:35:49 we were able to stack, but there were some tempest failures for stable/ocata that we weren't able to resolve due to the lab outages.
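
For reference on the two CI items discussed above (the "powervm:recheck" comment that should kick off Jenkins jobs, and the idea of a weekly scheduled vSCSI run), a Zuul v2-style layout for a third-party CI typically wires these up with a gerrit comment-added trigger and a timer trigger. The sketch below is only illustrative: the pipeline names, job names, regex, and schedule are assumptions, not taken from the actual neo-os-ci configuration.

```yaml
# Hypothetical Zuul v2 layout.yaml sketch -- names and regex are made up.
pipelines:
  - name: check
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        # Re-run jobs when someone comments "powervm:recheck" on a change.
        - event: comment-added
          comment: (?i)^(Patch Set [0-9]+:)?(\n\n)?\s*powervm:recheck\s*$

  - name: periodic-vscsi
    manager: IndependentPipelineManager
    trigger:
      timer:
        # Weekly run (Sunday 06:00 UTC) so the vSCSI job doesn't silently rot.
        - time: '0 6 * * 0'

projects:
  - name: openstack/nova
    check:
      - powervm-tempest-dsvm          # hypothetical job name
    periodic-vscsi:
      - powervm-tempest-dsvm-vscsi    # hypothetical job name
```

Similarly, the image-size change boils down to lowering the disk value on the powervm.large flavor defined in neo-os-ci/ci-ansible/roles/devstack-control/tasks/main.yml and redeploying. The real role's task layout isn't shown in the log, but with Ansible's OpenStack modules the relevant task would look roughly like this (all values other than the flavor name and the 30G-to-25G change are placeholders):

```yaml
# Hypothetical task -- only the flavor name and disk size come from the discussion.
- name: Define the powervm.large flavor used for CI instances
  os_nova_flavor:
    name: powervm.large
    disk: 25       # was 30; ~5G smaller per instance
    ram: 8192      # placeholder value
    vcpus: 4       # placeholder value
    state: present
```
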
14:37:52 We're carrying this patch in CI
14:37:54 https://review.openstack.org/#/c/565239/
14:38:04 I keep forgetting about it and they've requested a few more changes
14:38:12 I'll try to get to it sometime this week
14:38:32 That's all I had today
14:38:55 esberglu tx
14:39:00 mujahidali anything else?
14:39:16 nope
14:39:18 #topic Open Discussion
14:39:25 anyone have anything here?
14:40:45 I'll be asking you how to "open a ticket"
14:40:49 otherwise, nothing from me.
14:41:06 ack
14:41:09 #endmeeting