14:28:57 <edmondsw> #startmeeting PowerVM Driver Meeting
14:28:57 <openstack> Meeting started Tue May  8 14:28:57 2018 UTC and is due to finish in 60 minutes.  The chair is edmondsw. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:28:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:29:01 <openstack> The meeting name has been set to 'powervm_driver_meeting'
14:29:18 <edmondsw> #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
14:29:27 <edmondsw> #topic In-Tree Driver
14:29:35 <edmondsw> #link https://etherpad.openstack.org/p/powervm-in-tree-todos
14:29:38 <esberglu> Blocked by CI
14:29:53 <edmondsw> yep... anything else to say here?
14:30:12 <esberglu> edmondsw: Nothing that I won't hit during the CI topic
14:30:56 <edmondsw> #topic Out-of-Tree Driver
14:31:14 <edmondsw> #link https://etherpad.openstack.org/p/powervm-oot-todos
14:31:39 <efried> I just put up a patch to use is_volume.
14:31:51 <edmondsw> https://review.openstack.org/#/c/566888/
14:32:14 <esberglu> I'll add a TODO to port that to IT
14:32:22 <efried> I already commented on the IT patch.
14:32:29 <esberglu> efried: Sweet tx
14:32:37 <efried> cause there's still a couple tweaks you need to do there anyway, once the CI is going, yah?
14:32:50 <esberglu> efried: Yeah I think so
14:34:45 <edmondsw> thorst continues to ask me about MSP support
14:35:08 <edmondsw> the NovaLink REST changes are in, but we need to use them in pypowervm and nova-powervm
14:35:28 <edmondsw> I've been trying to get to that for a while, unsuccessfully
14:35:46 <edmondsw> efried if you have any bandwidth, could use your help here
14:36:06 <efried> MSP, MSP...
14:36:12 <edmondsw> mover service partition
14:36:13 <efried> so we can do live migrations without a VIOS?
14:36:26 <edmondsw> basically, allows you to specify which IP to use for the LPM traffic
14:36:44 <edmondsw> so if you have multiple interfaces, you can make it go over the one you want
14:36:46 <efried> oh, this was the argument about how to specify multiples, and stuff?
14:36:49 <edmondsw> e.g. over a 10G instead of 1G
14:37:08 <efried> okay; you'll have to bring me up to speed
14:37:09 <edmondsw> I think it can be a list
14:37:12 <edmondsw> yep, I can do that
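[For context on the MSP work: the idea is to let operators list the mover service partition IPs that LPM traffic should use. A minimal sketch of what that might look like as a nova-powervm config option — the option name here is hypothetical; the real name and format weren't settled at this point:]

    [powervm]
    # Hypothetical option name: comma-separated list of mover service
    # partition IPs. Migration traffic goes over the matching interfaces,
    # e.g. the 10G links rather than the 1G ones.
    migration_msp_ips = 10.10.0.5,10.10.0.6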
14:37:42 <edmondsw> gman-tx you were going to check on the status of https://review.openstack.org/#/c/556495/ ?
14:37:51 <edmondsw> haven't seen any movement there for a while
14:39:28 <edmondsw> we also have someone trying to use ceph and hitting issues
14:39:59 <edmondsw> efried are you going to ask him to open a defect for this new problem?
14:40:21 <efried> Yes, I suppose that's appropriate.
14:40:22 <edmondsw> tjakobs I'm hoping you can take a look at this
14:40:33 <edmondsw> I'll forward you the email while we're waiting on a defect
14:42:08 <edmondsw> #topic Device Passthrough
14:42:12 <edmondsw> efried ?
14:42:27 <efried> granular should hopefully hit the gate today or tomorrow.
14:42:46 <efried> nrp-in-alloc-cands is stalled a little bit, but shaping up; we hope to have it merged before the summit.
14:43:12 <efried> update_provider_tree is getting traction in xen and libvirt, and will probably hit ironic soon.
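[update_provider_tree is the virt driver hook that reports inventory as a tree of resource providers. A minimal sketch of its shape, assuming a made-up GPU child provider and inventory — illustrative only, not the actual nova-powervm code:]

    def update_provider_tree(self, provider_tree, nodename, allocations=None):
        # Model a GPU as a child resource provider under the compute node RP.
        gpu_rp_name = '%s_gpu_0' % nodename  # hypothetical naming scheme
        if not provider_tree.exists(gpu_rp_name):
            provider_tree.new_child(gpu_rp_name, nodename)
        # Made-up inventory figures for illustration.
        provider_tree.update_inventory(gpu_rp_name, {
            'VGPU': {'total': 8, 'min_unit': 1, 'max_unit': 1},
        })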
14:43:40 <efried> I'm still thinking it behooves us to wait for nrp-in-alloc-cands before trying to make headway on nova-powervm there.
14:43:48 <edmondsw> I was just gonna ask :)
14:43:49 <efried> Though I figured out a cheat the other day
14:44:24 <efried> If nrp-in-alloc-cands doesn't make it at all, we can cheat by making the GPUs technically "sharing providers" -- that only share with their single host RP.
14:44:36 <efried> granular will still work for that.
14:44:52 <efried> Because take a look at the test setup for granular-without-nrp:
14:45:14 <efried> https://review.openstack.org/#/c/517757/37/nova/tests/functional/api/openstack/placement/fixtures.py@423
14:46:06 <efried> and, if you have time, the gabbi tests:
14:46:16 <efried> https://review.openstack.org/#/c/517757/37/nova/tests/functional/api/openstack/placement/gabbits/granular.yaml
14:46:41 <efried> point is, we've got a contingency (a better one than using custom resource classes) if nrp-in-alloc-cands stalls out.
14:46:49 <efried> ...or if it doesn't make Rocky for any reason.
14:47:09 <efried> This contingency is way better because the flavors won't have to change at all when we cut over to the nrp modeling
14:47:22 <efried> that cutover would be transparent outside of the virt driver, really.
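[To illustrate the flavor point: with granular request groups, the GPU ask lives in a numbered group in the flavor extra specs, and the same spec resolves against either modeling. A hypothetical flavor extra spec:]

    # One VGPU requested in its own granular request group; unchanged
    # whether placement models the GPU as a sharing provider or a nested RP.
    resources1:VGPU=1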
14:47:49 <efried> I think that's all I've got to say about that.  Any questions, comments, concerns?
14:48:26 <edmondsw> thanks
14:48:32 <edmondsw> #topic PowerVM CI
14:48:46 <edmondsw> #link https://etherpad.openstack.org/p/powervm_ci_todos
14:48:49 <edmondsw> esberglu ^
14:49:22 <esberglu> Still making no headway on the connection issues. Hoping that edmondsw and/or thorst can block off some time in the next day or two to help me out
14:50:36 <esberglu> Scenario testing for master branches is ready. I've got +2 from efried, waiting for another look from edmondsw
14:50:58 <edmondsw> esberglu next on my list
14:51:11 <efried> esberglu: Remind me, will that unblock some part of the IT series?
14:51:18 <esberglu> There's some follow on work to be done there, but what is done will unblock snapshot and localdisk
14:51:19 <edmondsw> yes, snapshot and localdisk
14:51:31 <efried> good deal.
14:51:40 <efried> The conn issues are holding up what?
14:51:42 <edmondsw> esberglu what's the followon work, and is it in the TODO etherpad?
14:52:04 <esberglu> edmondsw: Not yet, it's on my list
14:52:16 <esberglu> 1) Get scenario tests for stable branches working
14:52:53 <esberglu> 2) Make a new base image with curl installed so that we can verify the metadata service (more details in the changeset; see the curl example after this list)
14:53:07 <esberglu> 3) See why keypair auth_method isn't working
14:53:32 <esberglu> Those should be some good items for mujahidali to work on
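[On item 2, the metadata check from inside a booted guest is just a curl against the standard metadata endpoint, along these lines:]

    # run inside the guest; 169.254.169.254 is the fixed metadata address
    curl http://169.254.169.254/openstack/latest/meta_data.json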
14:53:55 <mujahidali> sure
14:54:09 <esberglu> mujahidali: I will send you some more details after the meeting
14:54:42 <esberglu> I also put up a patchset that enables cinder on the tempest AIO vms. I was able to get that to stack
14:54:52 <esberglu> mujahidali: Were you able to give that a try?
14:55:10 <mujahidali> yeah
14:55:26 <esberglu> mujahidali: Any questions/problems there, or were you able to stack?
14:55:48 <mujahidali> I applied the patch, and got the stack up with cinder
14:56:05 <esberglu> Next step there is getting a cinder configuration that allows us to attach and detach volumes
14:56:33 <esberglu> Then we can start looking at what we're going to do with tempest there
14:56:54 <esberglu> That's pretty much all I have for CI
14:57:31 <mujahidali> esberglu: yeah, looking into the steps, but it requires a neo server installation. need your help there.
14:57:32 <edmondsw> efried to your question about conn issues... the whole CI is broken until we resolve that
14:57:39 <efried> o
14:57:55 <edmondsw> so that's the top priority, but I think we're pretty much at a loss at the moment
14:57:59 <edmondsw> and have been for over a week
14:58:01 <efried> which means we're not actually unblocked until that's resolved.
14:58:05 <edmondsw> yep
14:58:23 * efried gets on the roof and sends up the thorst signal
14:58:45 <edmondsw> esberglu I don't really have any idea how I can help there, unfortunately
14:58:55 <edmondsw> not talking about time... rather, I don't have any ideas
14:59:48 <esberglu> edmondsw: Yeah I'm going to step back to square one and just go through everything I know and everything I've tried, because I'm also at a loss
14:59:58 <esberglu> Hopefully something will click
15:00:03 <esberglu> Or thorst can save the day
15:00:38 <edmondsw> esberglu would it help to talk through that together?
15:01:02 <esberglu> edmondsw: It might, let me organize my thoughts first
15:01:03 <edmondsw> maybe that would help trigger an idea for me
15:01:05 <edmondsw> k
15:01:31 <edmondsw> #topic Open Discussion
15:01:37 <edmondsw> anything else?
15:01:59 <esberglu> I'm starting to split dev time with PIE next week
15:02:36 <esberglu> That's all for me
15:03:06 <edmondsw> mujahidali you said you need help with neo installation?
15:03:16 <mujahidali> yeah
15:03:28 <edmondsw> has anyone sent you the wiki page on that?
15:03:45 <mujahidali> esberglu
15:03:56 <edmondsw> I can try to help you there so esberglu can focus on some of these other things
15:04:01 <esberglu> mujahidali: You shouldn't be installing anything
15:04:16 <edmondsw> esberglu oh, then what does he need to be doing?
15:04:58 <esberglu> After running ./stack.sh for the cinder-enabled tempest AIO VM, we need to modify the cinder config file
15:05:06 <esberglu> so that we can actually attach/detach volumes
15:05:19 <esberglu> This is all within the AIO vms, no installation needed
15:05:25 <mujahidali> esberglu: the link that you provided in the mail points me to a wiki that tells me to install neo.
15:05:55 <esberglu> mujahidali: Yes, but you should only be running the steps under "Enabling cinder vSCSI volume support"
15:06:06 <esberglu> Starting with editing the cinder.conf file
15:06:12 <mujahidali> okay
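[For reference, the cinder.conf edit follows the usual backend-section pattern; a minimal sketch with a placeholder driver — the actual volume_driver path and vSCSI options come from the wiki steps, not from here:]

    [DEFAULT]
    enabled_backends = powervm_vscsi

    [powervm_vscsi]
    # Placeholder: use the volume_driver and options from the wiki's
    # "Enabling cinder vSCSI volume support" section.
    volume_driver = <driver path from the wiki>
    volume_backend_name = powervm_vscsi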
15:07:16 <esberglu> mujahidali: I'll send you an email with more details about vSCSI and the next steps for scenario testing
15:07:33 <edmondsw> alright, I think that's it for today's meeting. Thanks!
15:07:36 <edmondsw> #endmeeting