Tuesday, 2017-06-06

*** svenkat has joined #openstack-powervm00:30
*** jwcroppe has joined #openstack-powervm00:32
*** thorst has joined #openstack-powervm00:40
*** thorst has quit IRC00:44
*** thorst has joined #openstack-powervm01:07
*** thorst has quit IRC01:07
*** jwcroppe has quit IRC01:08
*** svenkat has quit IRC01:15
*** thorst has joined #openstack-powervm01:41
*** thorst has quit IRC02:06
*** mdrabe has quit IRC02:15
*** thorst has joined #openstack-powervm02:26
*** thorst has quit IRC02:26
*** jwcroppe has joined #openstack-powervm02:29
*** thorst has joined #openstack-powervm02:42
*** thorst has quit IRC02:42
*** thorst has joined #openstack-powervm02:43
*** thorst has quit IRC02:47
*** thorst has joined #openstack-powervm03:14
*** chhavi has joined #openstack-powervm03:30
*** thorst has quit IRC03:32
*** edmondsw has joined #openstack-powervm04:41
*** edmondsw has quit IRC04:46
*** efried has quit IRC04:52
*** efried has joined #openstack-powervm05:04
*** thorst has joined #openstack-powervm05:29
*** thorst has quit IRC05:34
*** jwcroppe has quit IRC05:42
*** jwcroppe has joined #openstack-powervm06:10
*** thorst has joined #openstack-powervm06:30
*** thorst has quit IRC06:35
*** jwcroppe has quit IRC06:47
*** jwcroppe has joined #openstack-powervm07:29
*** thorst has joined #openstack-powervm07:31
*** thorst has quit IRC07:35
*** edmondsw has joined #openstack-powervm08:17
*** edmondsw has quit IRC08:22
*** thorst has joined #openstack-powervm08:32
*** thorst has quit IRC08:51
*** k0da has joined #openstack-powervm08:58
*** jwcroppe has quit IRC09:33
*** thorst has joined #openstack-powervm09:48
*** thorst has quit IRC09:52
*** svenkat has joined #openstack-powervm11:44
*** thorst has joined #openstack-powervm11:54
*** edmondsw has joined #openstack-powervm11:55
*** jwcroppe has joined #openstack-powervm11:59
*** kriskend has joined #openstack-powervm12:17
*** mdrabe has joined #openstack-powervm13:02
esberglu#startmeeting powervm_driver_meeting13:02
openstackMeeting started Tue Jun  6 13:02:57 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.13:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.13:03
openstackThe meeting name has been set to 'powervm_driver_meeting'13:03
esberglu#topic In Tree Driver13:03
esberglu#link https://etherpad.openstack.org/p/powervm-in-tree-todos13:03
efried"Fixing" the get_info business led all the way down the rabbit hole.13:04
efriedIn the end, mriedem said we should just remove all the unused fields from InstanceInfo, everywhere.13:05
efriedThis will impact the OOT driver if/when it merges.13:06
edmondswthat's for instances... were we also talking about host stats the other day? Are there unused fields to remove there as well?13:07
thorstefried: you should probably run that by mdrabe.13:07
efriedWe weren't talking about host stats.13:07
thorstI suspect pvc would be impacted (and I bet other OS products)13:08
efriedthorst Yeah, I was thinking it will probably be a good idea to blast the dev ML on this one.13:08
thorstyep.  But just ping mdrabe on the side too.  I'm not sure how much they view the ML13:08
thorstI know I can't (don't) keep up13:08
efriedesberglu Depending how mriedem's comment plays out, this might impact your support matrix change.13:09
efriedNot sure if he's gonna ask to remove that whole section.13:09
esbergluefried: ack13:09
efriedI think that's it for me in tree.13:10
edmondswI wanted to ask an IT question13:10
efriedfloor is yours13:10
edmondswso when we were looking at the support matrix it clicked for me that our SSP support that merged IT is only ephemeral13:10
edmondswwhen we've talked about 2H17 priorities we've talked about network, config_drive, and vSCSI13:11
edmondswis vSCSI there ephemeral or data or both?13:11
edmondswand is vSCSI the top priority for data disk attach/detach, not SSP?13:12
efriedI don't remember a vSCSI discussion.  iSCSI maybe?13:12
edmondswthorst had said vSCSI13:12
mdrabeDo we want vSCSI IT?13:12
thorstthorst said cinder (via vSCSI)13:12
mdrabe(o/ btw)13:12
thorstvSCSI is simply a way to connect storage to a VM13:12
edmondswmdrabe read up, there was something for you above13:12
efriedGotcha.  So the VSCSIVolumeAdapter.13:13
thorstwhen we talk about it in terms of Cinder, we typically mean FC volumes to a VM13:13
thorstin fact in PVC, we simplified vSCSI to just mean that13:13
thorstbut vSCSI is used for SSP, iSCSI, FC PV, etc...13:13
thorstso I probably used the wrong language there13:13
thorstI meant cinder support via vSCSI13:13
efriedReally, for FC?  I thought we had a fibre channel mapping that was different from a VSCSI mapping.13:14
thorstFC also has this fancy NPIV support13:14
efriedAnyway, separate discussion.13:14
thorstwhich is an SR-IOV-like thing for FC...though, yeah, separate discussion14:14
efriedPoint is, we're looking to support the VSCSIVolumeAdapter in tree.13:14
edmondswthorst, in terms of the support matrix, what should we be trying to flip to partial/complete among the storage.block items?13:15
thorst945 - partial, 972 - complete (though we can add NPIV later), 993 - missing (for now?  If we can tuck in awesome)13:16
edmondswI think you're saying L972 via cinder vSCSI13:16
thorstreality is that today, everyone is FC.  So that's the hole we should fill first for IT.13:16
edmondswwhat about cinder via SSP?13:16
thorstno cinder driver for SSP13:17
edmondswoh, really13:17
mdrabeeveryone is FC? you mean Power folks?13:17
thorstwe talked about making one...but it never came to fruition13:17
thorstPowerVM - everyone is FC13:17
efriedStill open ;-)13:17
thorstrest of world...not so much13:17
efriedLast action in January13:17
thorstefried: yeah...13:17
thorstwe were hoping that would then allow us to make a cinder driver13:17
thorstI think people got pulled in other directions13:17
thorstlike iSCSI...and my other crazy volume connectors13:18
edmondswthorst so when PowerVC uses SSP for data volumes... how is it doing that without a cinder driver?13:18
thorstedmondsw: they have a cinder driver, but it isn't upstreamed yet13:18
esbergluAnything else IT?13:19
mdrabeI've a question13:19
efriedReal quick, back to the get_info discussion, this also came out of it: https://review.openstack.org/#/c/471106/13:19
mdrabeWhere does os-brick come in to play with volume connectors?13:19
thorstmdrabe: good q...shyama was looking into that.  Its a way to replace (I think?) the connection_info object (bdm)13:20
thorstnot super sure13:20
edmondswI've been working on deactivating the compute service when we can't get a pvm session or there are no VIOS ready, but not ready to put up for review quite yet13:20
mdrabeOk yea can discuss later13:21
esbergluAlright moving on13:22
esberglu#topic Out Of Tree Driver13:22
efriedPerf improvement change (https://review.openstack.org/469982) - I owe another patch set.13:22
efriedBut the testing came back good on that, so once those fixups are in, I think we're good to go there.13:23
efriedThen I plan to look into the "don't need a whole instance for the NVRAM manager" thing.13:23
efriedwhich could also yield perf improvements... maybe.13:23
efriedGotta do that quick before arnoldja moves on to bluer pastures.13:24
thorstefried: I don't disagree with what you did13:24
thorstI just feel dirty about it13:24
thorst'lets just wait 15 seconds for everything'13:24
thorst'because this API vomits up events'13:24
efriedWell, anything PartitionState13:24
thorstso I don't disagree...I just think its bleh13:24
efriedIt's always that way with perf improvements.13:25
thorstyep yep yep13:25
efriedMost of the time they make the code uglier.13:25
thorstjust letting my voice be heard.  :-p13:25
esbergluThis week is the pike 2 milestone (thursday), so I will be tagging the repos accordingly.13:25
*** smatzek has joined #openstack-powervm13:25
mdrabeefried is there a LP bug for that?13:26
efriedfor the perf thing?13:26
mdrabeyea... I guess it's not technically a bug13:26
openstackLaunchpad bug 1694784 in nova-powervm "Reduce overhead for redundant PartitionState events" [Undecided,New]13:26
mdrabeAh neat thx13:27
efriedThis might have been said last week, but the get_inventory thing is on hold pending further baking of the infrastructure.13:29
efriedThey've hit a snag with the design of shared resource providers.13:29
efriedIt's going around the ML at the moment.  Not sure how that's gonna shake out.  An elegant solution is not yet forthcoming.13:29
efriedSubject line, in case you want to follow along at home: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources13:30
efriedI guess that's an IT/OOT thing.13:30
efriedOh, I wanted to bring up t9n13:31
efriedI saw an email a couple days ago that may affect our stance on how aggressive we become about removing translations from various places.13:31
thorstwhat does that stand for?13:32
edmondswwhat was the email?13:32
efriedRight now the policy we're following in nova-powervm is just not translating any new log messages, and removing from anything we happen to touch while doing mods.13:32
thorstso the 9 stands for len('ranslatio')13:33
efriednetworking-powervm is subject to a hacking rule that disallows *any* translation.13:33
efried(thorst, yeah, like i18n, etc.)13:33
efriedand k8s ;-)13:33
thorst(got it - finally understand i18n too)13:33
efried...and I'm not sure whether we've even talked about a policy for pypowervm.13:33
* thorst feels dumb13:33
thorstefried: well, pypowervm is consumed by more than just OpenStack...we've got two or three other direct users.13:34
thorstI think any change there would need to be run by them13:34
thorstwe'd probably want to ask clbush from a CLI perspective too.13:35
edmondswI assumed efried was going to talk about nova-powervm, not pypowervm13:35
edmondswoh, missed a line13:35
* edmondsw feels dumb, joining thorst13:35
efriedSo okay, agree that discussion outside this group is needed for pypowervm.13:36
efriedWhat about nova-powervm?13:36
thorstI dunno, I'm dragging my feet on that13:36
thorstand I'll admit, its really because I know pvc likes those messages translated.13:36
efriedIt's probably not worth going all out and removing everything.13:36
edmondswthorst that's not true... PowerVC doesn't want log messages translated13:37
efriedthorst That's the email I was referring to, yeah.13:37
thorsto, huh13:37
thorstwell, then yeah.  I'm fine with either being proactive or lazy about it then13:37
edmondswthorst PowerVC wants consistency, it just doesn't want to spend the resources to scrub the translations it already has in place13:37
thorstgot it.13:37
edmondswbut a note was sent just a couple days ago about starting to scrub things if/when you can13:37
thorstwell, then ... same goes for ceilometer-powervm too13:38
thorstthat one is probably easier to do (and probably could benefit from a patch set done against it)13:38
edmondswI'd probably prioritize ceilometer-powervm above nova-powervm13:38
efriedOkay, upshot for nova-powervm and ceilometer-powervm is: no need to hold back if you feel like scrubbing out all the log t9n from those guys.13:38
efriedBut it's not a high priority.13:38
*** k0da has quit IRC13:39
efriedI added it to the etherpad https://etherpad.openstack.org/p/powervm-in-tree-todos line 6913:39
edmondswthat it for OOT?13:40
efriednothing else from me.13:40
esberglu#topic PowerVM CI13:41
*** k0da has joined #openstack-powervm13:41
efriedesberglu Okay, so you moved the CI to-dos out to another etherpad.13:41
esbergluefried: Yeah I linked it in the other one13:42
esbergluI can move it back if that's what people prefer13:42
esbergluBut I wanted to track tempest failures there and it was becoming a lot of info13:42
efriedesberglu I'm fine with it as long as everything's cross-linked.  I added a backpointer from the CI one to the original.13:42
esbergluefried: Good call.13:43
efriedWhat's the difference between WORKING and CURRENT?13:43
esbergluStuff that I'm actually doing (in staging) vs stuff that's just on the list13:43
esbergluWe still need to figure out a way to get the VNC tests working13:44
esbergluAnd check what tests (if any) can be enabled with SSP merged13:44
edmondswesberglu change "CURRENT" to "NEXT"?13:44
edmondswefried that clearer?13:44
*** tjakobs has joined #openstack-powervm13:44
efriedYeah, that would be fine.  Not a big thang.13:45
esbergluCI has been looking really good since the last couple fixes last week13:46
esbergluWhich should open up some time to start knocking this list out13:46
esbergluI'm gonna go through and prioritize the list today13:46
esbergluBeen seeing way less of the timeout errors since I upped the time limit. Which to me points to slow rather than hanging.13:47
edmondswesberglu can you make looking at the tempest failures part of that prioritized list?13:47
*** thorst is now known as thorst_afk13:47
esbergluedmondsw: Yeah13:47
edmondswesberglu so you did merge that timeout bump?13:48
esbergluedmondsw: I thought we were just putting it in temporarily for investigation purposes. But I can13:48
efriedHope not.  We need to have a lively discussion first.13:49
edmondswno, I just asked because you're seeing "way less"13:49
esbergluedmondsw: I just live patched the jenkins jobs13:49
edmondswif it didn't merge, wouldn't it only be in effect on a one-by-one basis?13:49
efriedBasically, my stance on this is that our CI isn't just testing "does it work.... eventually?"  It's also there to alert us to what I'll call "performance problems" for lack of a better term.13:50
efriedSo if stuff is taking a long time, we need to figure out why it's taking a long time, not just increase the timeout.13:51
edmondswefried yep, I think we all agree there13:51
efriedI would even go so far as to say, if we had the space for it, we should be *decreasing* timeouts to highlight things that are taking longer than they ought.13:51
edmondswI'll even agree with that... once we get these current timeouts figured out / addressed13:52
esbergluShould I remove the timeout increase now? It will be easier to find failing runs to investigate that way13:53
efriedesberglu When you've got the space to really start digging into them, yes.13:53
efriedNot necessary if it's just going to result in more failures but no action.13:54
edmondsw+1 or when one of us pings you that we have that time13:54
esbergluefried: Ok. I want to do a couple other things first (like get the neo logged) which should help for debugging13:54
efriedfo sho.13:54
edmondswesberglu I'm not seeing getting the neo logged on your list13:55
efriedlet me know if you need help figuring out how to do that; I have a couple of ideas.13:55
esbergluedmondsw: Yeah that list is a WIP13:55
esbergluThat's all I had for CI13:55
esberglu#topic Driver Testing13:55
esbergluAny progress here?13:56
efriedWe don't have testers on.  But thorst_afk added https://etherpad.openstack.org/p/powervm-in-tree-todos starting line 9213:56
efried...pursuant to our call the other day.13:56
thorst_afkefried: we're lining up the test resources still.  I don't think any tangible change, just formulating plan13:56
esbergluAny discussion needed here? Otherwise I'll move on, running close to time13:57
thorst_afkdon't think so13:58
esberglu#topic Open Discussion13:58
esbergluAny final thoughts before I call it?13:58
efriedIt's really confusing in my HexChat interface that esberglu and edmondsw both start with 'e' and have the same number of letters.13:58
efriedMy old IRC client had different colors for each user.  Haven't figured out how to do that in HexChat.13:58
thorst_afkefried: bringing the real problems to light13:59
efriedYou can count on me.13:59
thorst_afkI'd make a quip...but yeah, we do count on you13:59
mdrabeI got a q actually13:59
thorst_afkalright, I need to bail.  Need to go spread the gospel of open vswitch14:00
mdrabeFor test, what's the desired deployment route, devstack or OSA?14:00
thorst_afkmdrabe: for now, devstack due to simplicity of setup14:00
thorst_afkwhich is not all that simple, until you compare to OSA.14:00
*** kriskend has quit IRC14:00
efriedHah, ironic considering OSA is supposed to be the thing that makes it simple.14:01
thorst_afkefried: OSA is the thing to make OpenStack production grade14:01
mdrabeWould this be an opportunity to iron out the OSA path then?14:01
efriedSorry, thorst_afk Yeah.  mdrabe No.14:01
thorst_afkmdrabe: kinda.  Lets chat more when I'm off the phone14:01
mdrabeI'd say why not, but sounds like it's complicated14:02
efriedesberglu Think we're done here.14:05
openstackMeeting ended Tue Jun  6 14:05:31 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)14:05
openstackMinutes:        http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-06-06-13.02.html14:05
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-06-06-13.02.txt14:05
openstackLog:            http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-06-06-13.02.log.html14:05
*** kriskend has joined #openstack-powervm14:09
efriedYeeeessssss.  /set text_color_nicks on14:11
*** mdrabe has quit IRC14:11
*** mdrabe has joined #openstack-powervm14:12
esbergluefried: What were your ideas for getting the hostname printed?14:18
esbergluIs there a way to do that with pvmctl?14:29
efriedesberglu Oh, maybe.  I was thinking you could use the RMC discovery business in local2remote and then do a DNS lookup.14:37
efriedor yeah, after local2remote is set up, you could ask pvmctl for the managed system's IP and then DNS that.14:38
efriedesberglu Looks like the managed system IP is the FSP (which makes sense).  But generally looking that up is still gonna be useful.14:40
efriedesberglu The nvl doesn't seem to have an RMC IP set, which is annoying, but also makes sense I guess.14:43
esbergluefried: I found an easier way14:43
efrieddo tell14:43
esberglusudo /usr/sbin/rsct/bin/getRTAS14:43
esbergluThat's in local2remote.py14:43
esbergluAnd it prints the hostname as part of it14:44
esbergluI can just run that at the start of the jenkins job14:44
*** mdrabe has quit IRC14:47
efriedesberglu I haven't succeeded in getting just the neo hostname out of that output yet.14:54
efriedWe don't really want the IPs in there.14:54
esbergluefried: Yeah I'm trying to get a bash command to do it right now14:55
efriedAnnoyingly, this prints the wrong hostname: host `python -c 'import local2remote; print local2remote._get_local_rmc_address()'`14:55
efriedDo we have a way to map from 'vhmccloudvm126' to 'neo40'?14:57
efriedAlternatively, this works, but prints some extra gorp: sudo /usr/sbin/rsct/bin/getRTAS | sed 's/.*HscHostName=\([^;]*\);.*/\1/'14:59
esbergluCould just toss a grep neo on the end14:59
efriedIf we can count on the neo's hostname starting with 'neo'15:00
efriedsudo /usr/sbin/rsct/bin/getRTAS | sed -n 's/.*HscHostName=\(neo[^;]*\);.*/\1/p'15:00
esbergluefried: Yeah they all do in CI and that shouldn't change15:00
efriedCool, then the above works.15:01
esbergluefried: Sweet thanks for the assist. I'll toss it on staging quick to confirm it works15:01
efriedand is way faster than loading up local2remote and running _get_local_rmc_addr (which does things like pinging the host)15:01
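[Editor's note: for reference, the sed filter settled on above can be mirrored in Python. The sample getRTAS line below is hypothetical; the real output comes from `sudo /usr/sbin/rsct/bin/getRTAS`.]

```python
import re

# Hypothetical sample of getRTAS output -- only the HscHostName field
# matters for the CI job; everything else here is made up for illustration.
sample = "LparID=3;HscIPAddr=9.0.0.1;HscHostName=neo40.example.com;Mode=1"

def neo_hostname(rtas_output):
    """Pull the HscHostName value, but only if it starts with 'neo'.

    Mirrors: sed -n 's/.*HscHostName=\\(neo[^;]*\\);.*/\\1/p'
    """
    match = re.search(r'HscHostName=(neo[^;]*);', rtas_output)
    return match.group(1) if match else None

print(neo_hostname(sample))  # neo40.example.com
```

As with the sed version, this relies on the CI convention that every managed host's name starts with 'neo'.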
efriededmondsw Talk to me about https://review.openstack.org/#/c/469982/10/nova_powervm/tests/virt/powervm/test_event.py@6415:02
edmondswefried give me 2 minutes15:03
edmondswefried, alright, I'm here15:05
efriedSo I wrote the test to walk through each code branch in order.15:06
efriedAre you saying something is missed?15:06
edmondswefried not exactly... I think with the current implementation this probably tests everything15:07
edmondswbut if someone changed things, it could pass when it shouldn't in a case I was trying to point out15:07
edmondsw```self.mock_driver.nvram_mgr = None``` shouldn't be set when you test [] and ['foo', 'bar', 'baz'], in case the existence of nvram_mgr somehow affects that15:08
edmondswtoo picky?15:09
edmondswprobably too picky15:10
efriedWell, I'll argue that the foo/bar/baz case is validated in the next chunk via the foo/PartitionState/bar case.  And the [] case is reductive but effectively the same.15:10
efriedI'll agree it's always possible to change code to be wrong but let existing tests pass ;-)15:11
edmondswI changed to +115:12
edmondswyou said you owed another patch... was it just for these comments, or something else?15:12
efriedJust for these.  I'm at least gonna fix the typo.  And look at the other thing.15:12
*** mdrabe has joined #openstack-powervm15:13
openstackgerritEric Fried proposed openstack/nova-powervm master: Performance improvements for Lifecycle events  https://review.openstack.org/46998215:16
efriededmondsw Done ^^15:16
efriedthorst_afk You happy with ^^ ?15:24
efriedesberglu https://review.openstack.org/#/c/470999/ still needs the commit message updated.15:29
esbergluefried: Oh missed that comment. One sec15:30
esbergluefried: Done15:30
esbergluefried: See 5398 for the neo hostname change15:31
efriedesberglu +215:32
efriedesberglu You addressed one comment on the commit message, but not the other.15:36
esbergluWow I'm blind apparently15:37
efriedesberglu Copied it over to PS4 for your convenience ;-)15:37
esbergluefried: Done. Unless there are other invisible comments15:42
*** k0da has quit IRC15:46
mdrabeefried for that get_info business is there any plan for this https://github.com/openstack/nova-powervm/blob/stable/liberty/nova_powervm/virt/powervm/vm.py#L100 or is it gonna stay as is?16:11
efriedmdrabe In stable/liberty specifically?  No plans to change anything earlier than pike afaik.16:12
mdrabe^ That's liberty sorry but more or less the same for master16:12
efriedmdrabe Yeah, that would be the idea.  To change it to look like https://review.openstack.org/#/c/471146/2/nova/virt/powervm/vm.py16:13
efriedDo y'all use those other fields for something?16:13
mdrabeLittle bit yea16:13
mdrabeI just need to watch out for it is all16:14
efriedThe way we've got it implemented with the subclass, we could leave it alone OOT and it would still work.  Which fields do you use?16:14
mdrabeWe use _get_property to do basically the same thing the state property does for that object16:15
efriedmdrabe You use that derived InstanceInfo directly?16:16
mdrabeSo we should be fine getting rid of everything else same as IT, just need to respond to the change when it comes16:16
efriedmdrabe As opposed to just using get_vm_qp directly?16:17
mdrabewell it's... lemme fin dit16:18
mdrabefind it*16:18
efriedmdrabe Seems like you could swap over to using get_vm_qp directly - you could do that any time, and arguably should, since _get_property is private ;-)16:18
mdrabeWe use that, that returns the extended object16:20
*** kriskend has quit IRC16:20
mdrabeBut we can just as easily use .state16:20
mdrabeFrom whatever object, I don't think it'd matter16:20
efriedmdrabe get_info is staying around, and its response is still an InstanceInfo (a nova.virt.hardware.InstanceInfo instead of our subclass), and it still has a .state.  So if that's all you're doing, you're safe.16:22
efriedmdrabe If you're using nova_powervm.virt.powervm.vm.InstanceInfo directly, and/or referencing any fields other than .state from the get_info result, you'll be affected.16:23
mdrabeRight that's what we're doing now16:23
mdrabeefried got a sec about https://bugs.launchpad.net/nova-powervm/+bug/1694784?16:50
openstackLaunchpad bug 1694784 in nova-powervm "Reduce overhead for redundant PartitionState events" [Undecided,New]16:50
efriedmdrabe Sho16:50
mdrabearnoldje noticed that the NVRAM manager, even though it's using the instance object throughout, only ever really needs the instance UUID16:51
efriedmdrabe Yuh, that's the next thing I was planning to look at.16:52
efriedSeparate change, tho16:52
mdrabeSo for 2) under that bug, the part about getting the instance object before making nvram_mgr calls, I've been looking into getting rid of that call entirely16:52
mdrabeok so you're aware of this16:52
efriedmdrabe Yes, I'm aware.16:53
efriedHowever, given the fix that's out there, I'm actually not positive we're going to see a benefit from making the nvram mgr change.16:53
efriedwait, thinking again... maybe we would.16:53
mdrabeIt's not the nvram mgr change that sees the performance improvement, it's avoiding the get_instance call16:53
efriedYeah, I know.16:54
efriedSo yeah, cause today we're doing the instance lookup right away with nvram mgr on; I was thinking that we already get the benefit because we cache & send that instance object down to the PartitionState handler, which would be doing that instance lookup anyway.16:54
efriedmany of those PartitionState events never happen, so their instance lookups would never happen.16:55
efriedSo if we can avoid the lookups at the outset in the nvram branch, we would actually end up skipping a whole bunch.16:55
efriedAND we can get rid of that goofy cache.16:55
efriedwhich will make the code cleaner.16:55
mdrabeWell we might still need a cache16:56
efriedSo hell, even if there winds up being no perf improvement, I would support the change just for the code cleanup aspect.16:56
efriedNo, we wouldn't.16:56
efriedWe would have no way to populate it.16:56
efriedAnyway, are you looking into making this change?16:56
efriedSo I shouldn't?16:56
mdrabeFor PVM UUID to OpenStack UUID16:56
mdrabeAlthough the NVRAM mgr could just use the PVM UUID16:57
mdrabeBut that would be a problem for pvc16:57
mdrabeI'll do it16:57
efriedmdrabe The OpenStack UUID lookup goes to the PVM REST server.  The get_instance goes to the nova db.  Not sure which one is more expensive, but avoiding the latter will certainly help us.16:58
efriedmdrabe Cool, looking forward to seeing that.16:58
mdrabeI believe we already have the LPAR wrapper16:59
efriedmdrabe Nope.16:59
efriedThe event just has the URI16:59
efriedOh, sorry, hold on, I was really confused above.16:59
efriedIn order to figure out the OpenStack UUID, we do indeed have to do get_instance - sometimes twice.  It doesn't go to the PVM REST server.17:00
mdrabeefried here right https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/event.py#L98 ?17:02
mdrabeThat's arnoldje's area of concern at least17:03
efriedmdrabe You may as well start by looking at the code I'm just about to merge.17:03
efriedSorry, thought I had added you to ^^17:03
efriedjust did that.17:03
efriedYou're gonna have to relearn it ;-)17:04
efriedpretty well refactored.17:04
efriedmdrabe So yeah, here: https://review.openstack.org/#/c/469982/11/nova_powervm/virt/powervm/event.py@9817:04
mdrabeyea still grabbing the instance17:05
mdrabeAnd at a higher frequency if I understand this change correctly I think?17:06
efriedmdrabe No, purely taking NVRAM events into account, with this change, we're grabbing the instance with roughly the same frequency.17:07
efriedmdrabe Digging into NVRAMManager, I don't see anything there to preclude storing by PVM UUID.  What's the reason that doesn't work for pvc?17:07
mdrabeVMs not managed by PVC17:08
efriedBecause you have to account for upgrade paths from previous versions that used the instance?17:08
mdrabeWe don't wanna hold NVRAM info for those VMs17:08
efriedI don't see how that's a problem ;-)17:08
efriedIf no pvm UUID, you don't store anything, nah?17:09
mdrabeExcept we're processing events for all LPARs right?17:09
mdrabeSome might not be managed17:10
mdrabeSo we get an NVRAM for an unmanaged LPAR and now we're storing NVRAM info for it17:10
mdrabeThe way to know would be to check if there's an instance object behind it17:10
efriedwait, sorry, I gotcha.  LPARs not managed by pvc.  Not instances that aren't LPARs.17:11
mdrabebut we're trying to avoid looking up the instance object17:11
efriedmdrabe Yeah, and there's no way to see if there's an instance behind it without doing get_instance.17:11
mdrabemhm, but it's better with a cache17:11
efriedOh, yeah, I see where you're going with that - but the nature of the cache would have to change entirely.17:11
efriedIt would have to be *just* a UUID mapping cache.17:12
efried...which you'd have to figure out a way to age and expire entries.17:12
efriedCause you can't count on getting notified on deletion.17:12
efriedI deliberately didn't implement that part of #3 in the bug report for that reason.17:13
mdrabenotified in what way though?17:13
mdrabevia event?17:13
mdrabeWhy can't we count on that?17:14
efriedThe instance can be deleted without telling pvm about it.  Not sure what the notifications look like when you migrate.  All bets are off if you get CACHE_CLEARED.  Those are off the top of my head.17:15
*** k0da has joined #openstack-powervm17:15
efriedYou could account for most of those, but I can guarantee you'd still have leaks somehow.17:15
efriedAnd the more you try to close the holes, the bigger the code gets - and at some point you have to weigh that against the benefit you're really deriving from it.17:16
efriedWhich, again, is why I didn't mess with it for PartitionState.17:16
efriedEven with aging/expiring, which would be easier in the long run.17:16
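[Editor's note: a minimal sketch of the aging UUID-mapping cache under discussion -- PVM partition UUID to OpenStack instance UUID, with entries expiring so stale mappings for deleted or migrated LPARs eventually fall out. Class name, TTL value, and the injectable clock are assumptions for illustration, not the real nova-powervm code.]

```python
import time

class AgingUUIDCache(object):
    """TTL cache mapping PVM UUIDs to OpenStack instance UUIDs."""

    def __init__(self, ttl=300, clock=time.monotonic):
        self._ttl = ttl
        self._clock = clock
        self._entries = {}  # pvm_uuid -> (os_uuid, inserted_at)

    def put(self, pvm_uuid, os_uuid):
        self._entries[pvm_uuid] = (os_uuid, self._clock())

    def get(self, pvm_uuid):
        """Return the mapped OpenStack UUID, or None if absent/expired."""
        entry = self._entries.get(pvm_uuid)
        if entry is None:
            return None
        os_uuid, inserted = entry
        if self._clock() - inserted > self._ttl:
            # Expired: drop it so the caller falls back to get_instance().
            del self._entries[pvm_uuid]
            return None
        return os_uuid

# Demo with a fake clock so expiry is deterministic.
now = [0.0]
cache = AgingUUIDCache(ttl=10, clock=lambda: now[0])
cache.put('pvm-uuid-1', 'os-uuid-1')
print(cache.get('pvm-uuid-1'))  # os-uuid-1
now[0] = 11.0
print(cache.get('pvm-uuid-1'))  # None
```

As the discussion notes, expiry only bounds staleness; it doesn't eliminate the need for a get_instance fallback on a miss.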
efriedI dug into the nova API, btw, because I would expect it would be way cheaper to do an 'exists' check in the db that just returns a bool, as opposed to pulling the whole Instance record and building an object and sending it across the wire.17:17
efriedBut no such API exists.17:17
mdrabeOo yea that'd be nice17:17
mdrabeAlthough nova doesn't really have this problem17:18
efriedYou should propose one.  There's probably a bunch of places in the nova codebase that could benefit from that.17:18
efriedAnd definitely from external consumers of the nova API.17:18
mdrabeI'm gonna ask arnoldje about that17:19
mdrabeI wonder how much faster an exists check would be vs returning the entire object17:20
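[Editor's note: the exists-check idea can be illustrated with plain SQL -- a boolean EXISTS query avoids materializing the whole row and building an object. The table and schema here are hypothetical, not nova's actual instances table.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE instances (uuid TEXT PRIMARY KEY, name TEXT, state TEXT)")
conn.execute("INSERT INTO instances VALUES ('os-1', 'vm1', 'active')")

def instance_exists(conn, uuid):
    # Returns a bare 0/1 from the database instead of pulling the full
    # record across the wire.
    cur = conn.execute(
        "SELECT EXISTS(SELECT 1 FROM instances WHERE uuid = ?)", (uuid,))
    return bool(cur.fetchone()[0])

print(instance_exists(conn, 'os-1'))  # True
print(instance_exists(conn, 'os-2'))  # False
```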
efriedmdrabe Does pvc not maintain its own databases related to instances?17:20
mdrabeIt's still a query17:20
mdrabeNah it's all nova17:20
efriedThe NVRAM and slot maps are the *only* things you store?17:20
mdrabeFor swift yea I think so17:21
efriedNot talking about swift.  Though I guess I am.17:21
efriedSo yeah, I see the issue.17:21
mdrabeYea it could be stored anyhow I suppose17:21
mdrabeThe Instance object table specifically is pure nova in pvc17:22
efriedI was gonna say it shouldn't be too much extra storage - but you really never know what the env is gonna look like.17:22
mdrabeWhat shouldn't be extra storage?17:22
efriedAlthough I really wouldn't expect people to be hosting a whole bunch of non-openstack LPARs on a compute node running openstack.17:22
efriedHypothetically storing NVRAM for all LPARs that send you notifications.17:23
mdrabeAh right right17:23
efriedIs the size of the 'unmanaged' set bounded/finite?17:23
efriedWell, it's both of those things, but is it reliably small?17:23
mdrabeIt's bounded to scale reqs17:24
efriedYa know, to that point, how are we accounting for dropping records for deleted LPARs in the NVRAM & slot mgrs?17:24
mdrabeBut it's tricky to say what folks will or won't do, I've seen some weird stuff done17:24
mdrabeMmm that's a good q17:24
efriedI guess there's prolly a periodic sweep that does a full LPAR feed GET and scrubs whatever ain't there.17:25
mdrabeIdk if we do any kind of scrubbing17:25
mdrabeI would hope so17:25
mdrabebut I'm not aware of that17:25
efriedBut where?17:25
efriedI would have thought it would be... here.17:25
mdrabeI think we just do it synchronously with delete17:25
efriedOh, yeah, that'd do it.17:26
mdrabeStill can get stales though17:26
mdrabeI'm thinking of negative cases primarily17:27
mdrabesomething fails in destroy, maybe the instance record is gone but the NVRAM / slot map objects remain stored wherever they are17:27
mdrabeActually idk if that can happen17:28
efriedThat would be a bug, one would hope.17:29
efriedBut hopefully rare enough to be a vanishingly small problem.17:30
mdrabeyea, anyway I'm getting hungry, gonna grab some grub. I'll dive into this afterwards17:30
*** k0da has quit IRC17:43
openstackgerritMerged openstack/nova-powervm master: Fix some reST field lists in docstrings  https://review.openstack.org/46745217:47
openstackgerritMerged openstack/nova-powervm master: Performance improvements for Lifecycle events  https://review.openstack.org/46998217:50
openstackgerritMerged openstack/networking-powervm master: Updated from global requirements  https://review.openstack.org/46557217:55
efriedmdrabe ^^ now merged, pull your master branch accordingly.17:55
*** chhavi has quit IRC17:55
esbergluCan we abandon that?17:58
esbergluthorst_afk: efried: adreznec: ^17:58
thorst_afkI defer to adreznec17:59
efriedesberglu Rereading.  I can't remember why we were doing this, but we had a really good reason at the time.18:00
efriedesberglu To work with this, we would just have to add `enable_service pvm-q-sea-agt` to our local.confs, right?18:04
efried...which oughtta be harmless even if it's already there (i.e. if we put that in place before this merges)?18:05
*** k0da has joined #openstack-powervm18:05
esbergluefried: IIRC we already put a change into neo-os-ci that has that in preparation for the above merging18:05
esbergluBut I thought we didn't want to merge the above anymore for some reason18:05
efriedSo we should be able to recheck and have it work?18:05
esbergluLet me look in the code quick and make sure18:06
esbergluefried: Yeah it's enabled in both local.conf files18:06
efriedSo I think it's a good idea for us to do this anyway.  We have OVS support OOT, so you theoretically wouldn't need to have either of these enabled (if you're a regular consumer).18:07
efriedLet's powervm:recheck this sucker and make sure it has legs, then merge it already.18:07
openstackgerritEric Berglund proposed openstack/networking-powervm master: Switch to manual service enablement for devstack plugins  https://review.openstack.org/41666718:07
thorst_afkefried: but you wouldn't install networking-powervm if you were using the OVS18:07
thorst_afkthat's my issue with it18:07
thorst_afkif you're installing networking-powervm, you're using either SR-IOV or SEA18:07
efriedthorst_afk But rarely both.18:07
thorst_afkwhy would you install it but not use anything from it.18:07
thorst_afksure, but at least one.18:07
efriedOkay, that's a fair point.  But there would be no way to disable one of them.18:08
efriedI'm also thinking this is a pretty common model for external services to need enable_service in their local.confs18:09
esbergluefried: You could disable_service the one you don't want if you only want to run one I think? Same thing essentially18:10
esbergluUnless the enable_service here https://review.openstack.org/#/c/416667/3/devstack/settings18:10
esbergluoverrides that18:10
esbergluI guess the enable_service way still seems better to me though18:11
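For the record, the local.conf usage being debated would look something like this. A sketch only: `pvm-q-sea-agt` is the service name quoted earlier in the channel; the SR-IOV agent service name below is an assumption for illustration.

```shell
# Sketch of a devstack local.conf fragment once manual enablement merges.
# pvm-q-sea-agt is the SEA agent service named in the discussion; the
# SR-IOV agent service name is an assumption, not confirmed here.
[[local|localrc]]
enable_plugin networking-powervm https://git.openstack.org/openstack/networking-powervm
enable_service pvm-q-sea-agt        # opt in to the SEA agent
# disable_service pvm-q-sriov-agt   # opt out of the agent you don't want
```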
efriedI agree, and I'd rather move forward with it than abandon it.18:12
efriedOr we can continue to let it languish.18:12
efriedBut esberglu is in cleanup mode, clearly ;-)18:12
*** k0da has quit IRC18:12
esbergluefried: Gotta take advantage when CI isn't on fire18:16
efriedesberglu While you're not nailing down the timeouts, anyway.18:17
esbergluefried: Yeah I'm gonna turn the timeout back down this afternoon once I finish up the last couple things18:18
adreznecesberglu: efried Sorry, just finished reading backscroll. Are we going to try and get that merged in then?18:24
esbergluadreznec: Yep that's the plan. Just waiting for CI results18:24
adreznecI see esberglu rebased it18:24
efriedadreznec I'd like to, but I think thorst_afk is still on the nay side.18:24
adreznecWell he's afk, so clearly we just need to merge it before he gets back to object18:25
esbergluefried: Test timeouts are back to the default for CI18:31
thorst_afkI'm still of the mindset that if you're installing it, you likely want SEA enabled18:32
thorst_afkI'll also admit that the only ones using devstack are right here18:33
thorst_afkif this group wants it shut off in devstack / networking-powervm...I'll just ignore the review  :-)18:33
*** jpasqualetto has joined #openstack-powervm18:43
*** thorst_afk has quit IRC18:54
*** thorst_afk has joined #openstack-powervm18:56
*** thorst_afk has quit IRC19:00
*** thorst_afk has joined #openstack-powervm19:14
*** thorst_afk has quit IRC19:15
*** thorst_afk has joined #openstack-powervm19:16
*** k0da has joined #openstack-powervm19:23
edmondswthorst_afk sure, if you're installing it you likely want either SEA or SR-IOV agents enabled, but not necessarily both, right?19:27
efriedI would contend *usually* not both.19:28
*** mdrabe has quit IRC19:46
openstackgerritMerged openstack/networking-powervm master: Switch to manual service enablement for devstack plugins  https://review.openstack.org/41666719:56
*** mdrabe has joined #openstack-powervm19:56
thorst_afkefried edmondsw: sure.  And neutron has both the Linux Bridge and OVS agents.  It by default now turns on the OVS and if you want to use LB you turn off OVS and enable LB...20:42
edmondswthorst_afk you suggesting that SEA be on by default but not SR-IOV?20:43
thorst_afkedmondsw: yes.20:43
edmondswthat makes sense20:45
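Following thorst_afk's neutron analogy (OVS on by default, Linux Bridge opt-in), the plugin's devstack settings could default SEA on and leave SR-IOV opt-in. A hypothetical sketch; only `pvm-q-sea-agt` is a name taken from the discussion.

```shell
# Hypothetical devstack/settings fragment mirroring neutron's
# OVS-by-default model: SEA enabled by default, SR-IOV opt-in.
enable_service pvm-q-sea-agt

# A user wanting SR-IOV instead would put this in local.conf:
#   disable_service pvm-q-sea-agt
#   enable_service pvm-q-sriov-agt
```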
*** smatzek has quit IRC20:46
*** svenkat has quit IRC21:16
*** thorst_afk has quit IRC21:19
*** thorst_afk has joined #openstack-powervm21:20
esbergluefried: Of course when I'm finally hoping to see a timeout there aren't any coming through...21:23
efriedOf course.21:24
*** thorst_afk has quit IRC21:24
esbergluefried: edmondsw: I put up 2 changes, 1 to neo-os-ci and 1 to powervm-ci21:35
edmondswesberglu looking now21:35
esbergluThis is moving prep_devstack.sh for the same reason we moved the local.conf files21:35
esbergluWe can merge the powervm-ci one with no impact to production21:35
edmondswesberglu tempest really doesn't have a new version for stable/ocata, still uses master there?21:35
esbergluedmondsw: Technically you are supposed to be using tempest master for the last 3 releases21:36
esbergluBut it doesn't work for newton21:36
edmondswesberglu I was also going to ask about the newton block21:36
esbergluAt least that was my understanding based on21:36
edmondswwhy just 13.0.0... should we be figuring out the latest 13.y.z?21:37
edmondswand if master is supposed to work with newton, why doesn't it?21:38
esbergluSo at some point we were checking out different tempest versions for every branch. I think due to misunderstanding how far back master supported21:39
esbergluMoving ocata to master worked but newton had issues (something with our security config in tempest IIRC)21:40
esbergluAnd it just fell into the backlog of stuff and I haven't had a chance to look again21:40
esbergluIt might be missing from the TODO list though let me check21:40
esbergluedmondsw: And I don't think there is a 13.y.z release other than 13.0.021:42
edmondswesberglu no, there doesn't seem to be... I was just thinking they could decide to cut one to fix something21:42
edmondswesberglu fyi: https://github.com/openstack/tempest/blob/16.0.0/README.rst#release-versioning21:42
edmondswwith 12.y.z there was a 12.0.0, 12.1.0, and 12.2.0, but they don't seem to have done anything like that since21:44
edmondswmaybe we just adjust if/when they do that before we get newton using master, if that's what it should really be using21:44
*** mdrabe has quit IRC21:44
*** tjakobs has quit IRC21:46
esbergluedmondsw: I added it to the todo list. I don't think there is really a huge rush to get newton on master unless you feel otherwise21:47
esbergluNot sure exactly what you mean by that last line21:48
edmondswesberglu no, I don't think there's any rush21:48
edmondswesberglu I meant if there is a 13.0.1 to fix some bug, we could update this script at that point to use 13.0.1, and not necessarily worry about it now21:49
esbergluThey won't ever release a 13.0.121:49
esbergluThey only have a master branch so they can't add a 13.y.z tag anymore21:50
esbergluOnly 16.0.1 and up21:50
edmondswah, true21:50
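The versioning scheme being discussed (tempest master is supposed to cover the last three releases; older branches pin a tag, since pre-16.0.0 tags can never get point releases) could be captured in the CI prep script roughly like this. The function name is made up for illustration; the 13.0.0 pin for newton comes from the discussion above.

```shell
#!/bin/sh
# Map an OpenStack branch to the tempest ref the CI should check out.
# Per the discussion: tempest master is supposed to support the last
# three releases, but newton still needs the 13.0.0 tag. The function
# name is illustrative, not from the real prep_devstack.sh.
tempest_ref_for_branch() {
    case "$1" in
        stable/newton) echo "13.0.0" ;;  # master breaks on newton (security config)
        *)             echo "master" ;;  # ocata and later use tempest master
    esac
}

tempest_ref_for_branch stable/newton
tempest_ref_for_branch stable/ocata
```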
edmondswesberglu did you have to make any changes in prep_devstack.sh, or is it just a straight copy?21:52
esbergluI moved the ssh key and git global configuration steps to the image template build21:53
esbergluIt also sets the pypowervm_repo using $(git remote get-url origin)21:54
esbergluin line 18521:54
esbergluBecause that was previously the internal morpheus21:54
esbergluNow we clone from morpheus instead of github21:54
esbergluIn the image-template build21:54
esbergluSo that command will set it to the internal repo without us putting the internal url in the public repo21:55
esberglu(at least that's the idea)21:55
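The `git remote get-url origin` trick esberglu describes — deriving the pypowervm repo URL from wherever the checkout was cloned from, so the internal URL never has to appear in the public repo — works like this. A sketch with a throwaway demo repo; the URL and paths are made up.

```shell
#!/bin/sh
# Demonstrate deriving a repo URL from an existing clone's origin remote,
# so no hard-coded internal URL needs to live in the public script.
# The demo repo and URL below are made up for illustration.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git remote add origin https://example.com/internal/pypowervm.git

# This is the pattern the prep script would use (git remote get-url
# requires git >= 2.7):
PYPOWERVM_REPO=$(git remote get-url origin)
echo "$PYPOWERVM_REPO"
```

If the image template clones from the internal mirror, the same script line resolves to the internal URL there and to the public one everywhere else.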
esbergluedmondsw: I'm heading out. Will address any concerns tomorrow21:57
edmondswesberglu good night21:57
esberglusame to you21:57
*** k0da has quit IRC22:02
*** jpasqualetto has quit IRC22:18
*** thorst_afk has joined #openstack-powervm22:50
*** jwcroppe has quit IRC23:07
*** thorst_afk has quit IRC23:09
openstackgerritOpenStack Proposal Bot proposed openstack/nova-powervm master: Updated from global requirements  https://review.openstack.org/47012123:22
*** svenkat has joined #openstack-powervm23:31

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!