14:28:57 #startmeeting PowerVM Driver Meeting
14:28:57 Meeting started Tue May 8 14:28:57 2018 UTC and is due to finish in 60 minutes. The chair is edmondsw. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:28:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:29:01 The meeting name has been set to 'powervm_driver_meeting'
14:29:18 #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
14:29:27 #topic In-Tree Driver
14:29:35 #link https://etherpad.openstack.org/p/powervm-in-tree-todos
14:29:38 Blocked by CI
14:29:53 yep... anything else to say here?
14:30:12 edmondsw: Nothing that I won't hit during the CI topic
14:30:56 #topic Out-of-Tree Driver
14:31:14 #link https://etherpad.openstack.org/p/powervm-oot-todos
14:31:39 I just put up a patch to use is_volume.
14:31:51 https://review.openstack.org/#/c/566888/
14:32:14 I'll add a TODO to port that to IT
14:32:22 I already commented on the IT patch.
14:32:29 efried: Sweet tx
14:32:37 cause there's still a couple tweaks you need to do there anyway, once the CI is going, yah?
14:32:50 efried: Yeah I think so
14:34:45 thorst continues to ask me about MSP support
14:35:08 the NovaLink REST changes are in, but we need to use them in pypowervm and nova-powervm
14:35:28 I've been trying to get to that for a while, unsuccessfully
14:35:46 efried if you have any bandwidth, could use your help here
14:36:06 MSP, MSP...
14:36:12 mover service partition
14:36:13 so we can do live migrations without a VIOS?
14:36:26 basically, allows you to specify which IP to use for the LPM traffic
14:36:44 so if you have multiple interfaces, you can make it go over the one you want
14:36:46 oh, this was the argument about how to specify multiples, and stuff?
14:36:49 e.g. over a 10G instead of 1G
14:37:08 okay; you'll have to bring me up to speed
14:37:09 I think it can be a list
14:37:12 yep, I can do that
14:37:42 gman-tx you were going to check on the status of https://review.openstack.org/#/c/556495/ ?
14:37:51 haven't seen any movement there for a while
14:39:28 we also have someone trying to use ceph and hitting issues
14:39:59 efried are you going to ask him to open a defect for this new problem?
14:40:21 Yes, I suppose that's appropriate.
14:40:22 tjakobs I'm hoping you can take a look at this
14:40:33 I'll forward you the email while we're waiting on a defect
14:42:08 #topic Device Passthrough
14:42:12 efried ?
14:42:27 granular should hopefully hit the gate today or tomorrow.
14:42:46 nrp-in-alloc-cands is stalled a little bit, but shaping up; we hope to have it merged before the summit.
14:43:12 update_provider_tree is getting traction in xen and libvirt, and will probably hit ironic soon.
14:43:40 I'm still thinking it behooves us to wait for nrp-in-alloc-cands before trying to make headway on nova-powervm there.
14:43:48 I was just gonna ask :)
14:43:49 Though I figured out a cheat the other day
14:44:24 If nrp-in-alloc-cands doesn't make it at all, we can cheat by making the GPUs technically "sharing providers" -- that only share with their single host RP.
14:44:36 granular will still work for that.
14:44:52 Because take a look at the test setup for granular-without-nrp:
14:45:14 https://review.openstack.org/#/c/517757/37/nova/tests/functional/api/openstack/placement/fixtures.py@423
14:46:06 and, if you have time, the gabbi tests:
14:46:16 https://review.openstack.org/#/c/517757/37/nova/tests/functional/api/openstack/placement/gabbits/granular.yaml
14:46:41 point is, we've got a contingency (a better one than using custom resource classes) if nrp-in-alloc-cands stalls out.
14:46:49 doesn't make rocky for any reason.
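[editor's note: the "granular" feature discussed above refers to numbered request groups in flavor extra specs (resources1:, traits1:, resources2:, ...). A rough sketch of how such specs get bucketed into separate request groups is below; the resource class and trait names (e.g. CUSTOM_GPU_CLASS_A) are illustrative placeholders, not values from this meeting.]

```python
import re
from collections import defaultdict

# Hypothetical flavor extra specs using the granular request-group syntax.
# Keys with the same numeric suffix belong to the same request group.
extra_specs = {
    "resources1:VGPU": "1",
    "traits1:CUSTOM_GPU_CLASS_A": "required",   # placeholder trait name
    "resources2:VCPU": "4",
    "resources2:MEMORY_MB": "8192",
}

def request_groups(specs):
    """Bucket numbered resources*/traits* keys into request groups,
    roughly mirroring how placement interprets a granular request."""
    groups = defaultdict(lambda: {"resources": {}, "traits": set()})
    for key, val in specs.items():
        m = re.match(r"^(resources|traits)(\d+):(.+)$", key)
        if not m:
            continue
        kind, num, name = m.groups()
        if kind == "resources":
            groups[num]["resources"][name] = int(val)
        else:
            groups[num]["traits"].add(name)
    return dict(groups)

groups = request_groups(extra_specs)
```

[this grouping is why the "sharing providers" cheat works: the flavor keeps the same numbered groups whether the GPU inventory lives on a nested provider or a sharing one.]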
14:47:09 This contingency is way better because the flavors won't have to change at all when we cut over to the nrp modeling
14:47:22 that cutover would be transparent outside of the virt driver, really.
14:47:49 I think that's all I've got to say about that. Any questions, comments, concerns?
14:48:26 thanks
14:48:32 #topic PowerVM CI
14:48:46 #link https://etherpad.openstack.org/p/powervm_ci_todos
14:48:49 esberglu ^
14:49:22 Still making no headway on the connection issues. Hoping that edmondsw and/or thorst can block off some time in the next day or two to help me out
14:50:36 Scenario testing for master branches is ready. I've got +2 from efried, waiting for another look from edmondsw
14:50:58 esberglu next on my list
14:51:11 esberglu: Remind me, will that unblock some part of the IT series?
14:51:18 There's some follow-on work to be done there, but what is done will unblock snapshot and localdisk
14:51:19 yes, snapshot and localdisk
14:51:31 good deal.
14:51:40 The conn issues are holding up what?
14:51:42 esberglu what's the follow-on work, and is it in the TODO etherpad?
14:52:04 edmondsw: Not yet, it's on my list
14:52:16 1) Get scenario tests for stable branches working
14:52:53 2) Make a new base image with curl installed so that we can verify the metadata service (more details in changeset)
14:53:07 3) See why keypair auth_method isn't working
14:53:32 Those should be some good items for mujahidali to work on
14:53:55 sure
14:54:09 mujahidali: I will send you some more details after the meeting
14:54:42 I also put up a patchset that enables cinder on the tempest AIO vms. I was able to get that to stack
14:54:52 mujahidali: Were you able to give that a try?
14:55:10 yeah
14:55:26 mujahidali: Any questions/problems there, or were you able to stack?
14:55:48 I applied the patch and got the stack with cinder
14:56:05 Next step there is getting a cinder configuration that allows us to attach and detach volumes
14:56:33 Then we can start looking at what we're going to do with tempest there
14:56:54 That's pretty much all I have for CI
14:57:31 esberglu: yeah, looking into steps, but it requires neo server installation. need your help there.
14:57:32 efried to your question about conn issues... the whole CI is broken until we resolve that
14:57:39 oh
14:57:55 so that's the top priority, but I think we're pretty much at a loss at the moment
14:57:59 and have been for over a week
14:58:01 which means we're not actually unblocked until that's resolved.
14:58:05 yep
14:58:23 * efried gets on the roof and sends up the thorst signal
14:58:45 esberglu I don't really have any idea how I can help there, unfortunately
14:58:55 not talking about time... rather, I don't have any ideas
14:59:48 edmondsw: Yeah I'm going to step back to square one and just go through everything I know and that I've tried, because I'm also at a loss
14:59:58 Hopefully something will click
15:00:03 Or thorst can save the day
15:00:38 esberglu would it help to talk through that together?
15:01:02 edmondsw: It might, let me organize my thoughts first
15:01:03 maybe that would help trigger an idea for me
15:01:05 k
15:01:31 #topic Open Discussion
15:01:37 anything else?
15:01:59 I'm starting to split dev time with PIE next week
15:02:36 That's all for me
15:03:06 mujahidali you said you need help with neo installation?
15:03:16 yeah
15:03:28 has anyone sent you the wiki page on that?
15:03:45 esberglu
15:03:56 I can try to help you there so esberglu can focus on some of these other things
15:04:01 mujahidali: You shouldn't be installing anything
15:04:16 esberglu oh, then what does he need to be doing?
15:04:58 After running ./stack.sh for the cinder-enabled tempest AIO vm, we need to modify the cinder config file
15:05:06 so that we can actually attach/detach volumes
15:05:19 This is all within the AIO vms, no installation needed
15:05:25 esberglu: the link that you provided in the mail is pointing me to a wiki that is telling me to install neo.
15:05:55 mujahidali: Yes, but you should only be running the steps under "Enabling cinder vSCSI volume support"
15:06:06 Starting with editing the cinder.conf file
15:06:12 okay
15:07:16 mujahidali: I'll send you an email with more details about vSCSI and the next steps for scenario testing
15:07:33 alright, I think that's it for today's meeting. Thanks!
15:07:36 #endmeeting
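[editor's note: the cinder.conf edit discussed above (adding a volume backend so attach/detach works) can be scripted with Python's configparser. A minimal sketch follows; the backend section name and driver path are placeholders, not the actual values from the "Enabling cinder vSCSI volume support" wiki steps. `enabled_backends` and `volume_backend_name` are standard cinder options.]

```python
import configparser
import io

# Hedged sketch of scripting the cinder.conf edit for a volume backend.
# The section name "vscsi" and the driver path are placeholders; the real
# values come from the wiki steps referenced in the meeting.
conf = configparser.ConfigParser()
conf.read_string("[DEFAULT]\n")  # stand-in for the stacked cinder.conf

conf.set("DEFAULT", "enabled_backends", "vscsi")
conf.add_section("vscsi")
conf["vscsi"]["volume_backend_name"] = "vscsi"
conf["vscsi"]["volume_driver"] = "cinder.volume.drivers.example.ExampleDriver"  # placeholder

# Render the modified config back out (in practice, written over cinder.conf
# and followed by a restart of the cinder volume service).
buf = io.StringIO()
conf.write(buf)
rendered = buf.getvalue()
```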