13:00:36 <esberglu> #startmeeting powervm_driver_meeting
13:00:37 <openstack> Meeting started Tue Aug 22 13:00:36 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:40 <openstack> The meeting name has been set to 'powervm_driver_meeting'
13:01:07 <efried> \o
13:01:28 <mdrabe> o/
13:01:35 <efried> just getting through the morning rebases 8@
13:01:58 <esberglu> #topic In Tree Driver
13:02:25 <esberglu> I was planning on reviving the config drive WIP patch today
13:02:34 <efried> cool
13:02:54 <esberglu> #action esberglu: Update WIP config drive patch
13:03:11 <esberglu> Not much else going on there atm right?
13:03:28 <efried> nope
13:03:52 <edmondsw> o/
13:03:56 <esberglu> #topic Out of Tree Driver
13:04:33 <esberglu> Anything to report?
13:04:35 <mdrabe> I'll have reviews for PPT ratio support out sometime this week, after I live test it myself
13:04:42 <mdrabe> that's change 5750
13:05:28 <efried> What's PPT ratio?
13:05:45 <mdrabe> partition page table ratio
13:06:18 <efried> Yeah, you're gonna have to spell that out at least once in the commit message, if not explain wtf it is.
13:06:23 <mdrabe> Look at base_partition.py in that review for an explanation
13:07:18 <efried> Cool.  Put that shit in the commit message.
13:07:24 <edmondsw> we had another meeting about iSCSI
13:07:49 <mdrabe> K
13:08:07 <edmondsw> sounds like we may be able to get some pvm changes going for iSCSI
13:08:22 <edmondsw> have another meeting about that at the end of the week
13:10:21 <esberglu> Okay cool. Move on to PCI then?
13:10:23 <efried> I opened a bug
13:10:39 <efried> https://bugs.launchpad.net/nova-powervm/+bug/1711410
13:10:41 <openstack> Launchpad bug 1711410 in nova-powervm "get_available_resource/get_inventory should account for resources used by the NovaLink, VIOSes, and non-OS partitions" [Undecided,New]
13:11:00 <efried> That'll probably be mine, eventually, unless someone else wants to have a crack at it.
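(A minimal sketch for context on that bug, assuming Pike-era get_inventory semantics: the fix amounts to reporting platform-held resources as reserved. All helper names and numbers below are illustrative stubs, not the real nova-powervm code.)

    # Illustrative only: the real driver would pull these figures from
    # pypowervm wrappers; the stub values stand in for that.

    def _host_totals():
        # Total memory (MB) and VCPUs physically on the host.
        return 262144, 64

    def _platform_reserved():
        # Memory/CPU held by the NovaLink partition, the VIOSes, and any
        # non-OS partitions -- what bug 1711410 says must not be reported
        # as available to nova.
        return 16384, 4

    def get_inventory():
        total_mem, total_cpu = _host_totals()
        rsvd_mem, rsvd_cpu = _platform_reserved()
        return {
            'MEMORY_MB': {'total': total_mem, 'reserved': rsvd_mem},
            'VCPU': {'total': total_cpu, 'reserved': rsvd_cpu},
        }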
13:13:00 <efried> That's all I have for OOT.
13:13:23 <esberglu> #topic PCI Passthrough
13:13:32 <efried> Progress is being made.
13:14:00 <efried> I have been making chicken-scratch notes on what I've been doing.  Yesterday I put them into an etherpad: https://etherpad.openstack.org/p/powervm-pci-passthrough-notes
13:14:18 <efried> It's mostly notes about how the nova code works.  At the bottom is some detail about changes I'm making to make stuff work.
13:15:14 <efried> On my test system I've gotten to a point where 1) I can get a PCI claim to succeed and show up in the instance object passed to the spawn request; and 2) I can get it to recognize more than one type of PCI device at the same time.
13:15:50 <efried> Turns out the latter requires explicit unique PCI addresses, which have to be spoofed because PAL (Power Ain't Linux).
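(For reference, the nova plumbing being exercised here is the standard [pci] whitelist/alias configuration in nova.conf; a sketch with made-up vendor/product IDs, where on PowerVM the unique addresses would be the spoofed ones efried mentions:)

    [pci]
    # Each whitelisted device needs a unique (here spoofed) PCI address.
    passthrough_whitelist = {"address": "0000:01:00.0", "vendor_id": "10df", "product_id": "e200"}
    passthrough_whitelist = {"address": "0000:02:00.0", "vendor_id": "15b3", "product_id": "1007"}
    alias = {"vendor_id": "10df", "product_id": "e200", "name": "fc-adapter"}
    alias = {"vendor_id": "15b3", "product_id": "1007", "name": "roce-adapter"}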
13:16:35 <efried> changh has prototyped the REST code to return assigned partition UUID and type (and others) within the ManagedSystem IOSlot list.
13:17:31 <efried> Now that we've seen that PoC, I may rework 5749 to assume it's in place.
13:17:36 <openstackgerrit> Sridhar Venkat proposed openstack/nova-powervm master: WIP: Use OVS Attributes within pypowervm  https://review.openstack.org/486702
13:17:55 <efried> Though it's likely to be next week before the REST function actually drops.
13:18:08 <efried> So I'm going to wait on that and keep experimenting with sandbox code on my victim system.
13:18:21 <efried> btw, for reference, the PCI address spoofing is 5755
13:18:55 <efried> Also for reference, the draft spec is here: https://review.openstack.org/494733
13:19:16 <efried> And the blueprint meta-doc is here: https://blueprints.launchpad.net/nova-powervm/+spec/pci-passthrough
13:19:59 <esberglu> efried: Nice, sounds like good progress! I still owe you another read through the spec
13:20:43 <efried> Currently I'm working on a) Making sure I can claim multiple devices (homogeneous and heterogeneous); and b) Doing the actual attach in spawn.
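(Concretely, (a) is driven through the flavor's pci_passthrough:alias extra spec; a hypothetical heterogeneous request, using the alias names from the sketch above, would look like:)

    openstack flavor set pci-flavor \
        --property "pci_passthrough:alias"="fc-adapter:2,roce-adapter:1"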
13:21:56 <efried> On the table for discussion here...
13:22:26 <efried> Do we want to work this effort in nova proper instead of (or in parallel with) OOT?
13:22:49 <efried> Con: it'll slow us way tf down.
13:23:12 <efried> Pro: Exposure, and therefore increased likelihood of getting traction if we run up against something for which we require nova changes.
13:25:25 <edmondsw> efried I think we probably do want to work this in the community's view
13:26:18 <edmondsw> we can always do more/faster in parallel in nova-powervm, but if/when we need a community change it's going to be harder/slower to get that if they're out of the loop
13:26:38 <efried> Okay, so what I'll do is get a little further with the implementation, we can polish up the spec as a team, and then we can cut over and file a bp with nova.
13:26:45 <edmondsw> yep
13:27:14 <edmondsw> as for reworking your change to assume we have the stuff changh is adding, don't we need to also work with an older pvm level?
13:27:19 <efried> no
13:27:32 <edmondsw> ok then... that certainly helps
13:27:35 <efried> yeah
13:28:04 <efried> We will need to lockstep pypowervm releases, though, which is kind of a pita.
13:28:57 <efried> Okay, I think that's about all for PCI for now.  Still lots of work to do and a lot of unknowns.  But momentum is good.
13:29:23 <esberglu> #topic PowerVM CI
13:29:32 <esberglu> Things are looking a lot better with the timeout increase
13:29:34 <edmondsw> tx efried
13:30:11 <esberglu> Especially if you discount the runs that are failing due to that adapter delete/create serialization issue
13:30:35 <esberglu> The compute driver is still occasionally failing to start on some runs
13:30:46 <esberglu> But the n-cpu log is being cut short on those runs
13:31:12 <efried> almost like the process is being killed?
13:31:53 <esberglu> efried: Yeah either that or an issue with how we output the journald logs.
13:32:00 <esberglu> But I'm leaning towards the 1st
13:32:33 <esberglu> Because I've only seen the logs get cut off on the runs when the driver fails to come up properly
13:32:47 <efried> and we fixed the journald business by stopping the services
13:33:04 <efried> though that actually shouldn't be necessary at all.
13:33:11 <efried> and probably had no effect :)
13:33:40 <esberglu> You can see all of the other services get stopped but not n-cpu
13:34:25 <esberglu> I'll investigate that further this pm and see if I can get some more information
13:35:00 <edmondsw> is anything happening on the serialization issue?
13:35:28 <esberglu> There's a bug open for it. I can ask again today
13:36:26 <esberglu> That's all I had for CI
13:36:29 <efried> changh has been mentioning in scrums that he's been looking into it.
13:36:57 <edmondsw> cool
13:36:59 <efried> I believe we've all agreed there is a bug at or below REST at this point.
13:37:18 <edmondsw> good... I thought they were pushing back and wasn't sure how seriously they were taking it
13:37:39 <efried> We should keep pestering changh during scrums, make sure it stays in the forefront.
13:38:26 <esberglu> #topic Driver Testing
13:38:40 <esberglu> jay1_: Anything new on your end?
13:41:14 <jay1_> esberglu: I wanted to understand the plan for fixing the iSCSI issue?
13:42:34 <jay1_> edmondsw: Also, for S3 I can see there are two stories present
13:42:46 <edmondsw> jay1_ you're supposed to be testing vscsi and fc this sprint
13:43:03 <edmondsw> we're working on getting iSCSI fixed, but that will be a process involving pvm rest changes
13:43:44 <jay1_> ah okay..
13:44:05 <edmondsw> jay1_ fyi, I've added several things to the stacking issues etherpad
13:44:12 <edmondsw> make sure you don't stumble over any of those
13:45:14 <edmondsw> jay1_ https://etherpad.openstack.org/p/powervm_stacking_issues
13:45:52 <jay1_> edmondsw can you add more info to the stories, like acceptance criteria?
13:46:08 <edmondsw> jay1_ yes
13:46:42 <jay1_> OOT + vSCSI, you mean with SSP?
13:47:07 <jay1_> do we have to run Tempest scripts, or is manual effort also required?
13:47:09 <edmondsw> jay1_ no... vSCSI is an alternative to SSP
13:47:35 <edmondsw> jay1_ I'd prefer tempest
13:48:18 <edmondsw> so you'd setup an environment using vSCSI instead of SSP and then run tempest
13:49:09 <jay1_> okay.. how about Pike and Queens?
13:49:12 <thorst_afk> vSCSI is a connection type.  I think vSCSI for SSP for the nova boot volume.  vSCSI for FC for the cinder volume
13:49:24 <thorst_afk> or just boot from Cinder (though that's sometimes harder)
13:49:28 <thorst_afk> just my 2 cents
13:49:30 <thorst_afk> :-)
13:51:08 <esberglu> #topic Open Discussion
13:51:15 <esberglu> Anything else today?
13:51:29 <edmondsw> jay1_ the stories for this sprint are pike
13:52:09 <jay1_> edmondsw sure
13:53:38 <jay1_> thorst_afk last release we wanted to do NPIV with FC; is it the same now?
13:54:00 <edmondsw> jay1_ that's the 2nd story
13:54:09 <jay1_> hmm
13:54:09 <edmondsw> one for vSCSI, one for NPIV
13:57:08 <jay1_> sure
13:58:58 <esberglu> #endmeeting