Thursday, 2018-03-22

*** edmondsw has joined #openstack-powervm  02:42
*** edmondsw has quit IRC  02:49
*** AlexeyAbashkin has joined #openstack-powervm  03:17
*** AlexeyAbashkin has quit IRC  03:21
*** edmondsw has joined #openstack-powervm  04:05
*** edmondsw has quit IRC  04:10
*** chhagarw has joined #openstack-powervm  04:13
*** edmondsw has joined #openstack-powervm  05:54
*** edmondsw has quit IRC  05:59
*** AlexeyAbashkin has joined #openstack-powervm  06:16
*** AlexeyAbashkin has quit IRC  06:21
*** adi_____ has quit IRC  07:25
*** adi_____ has joined #openstack-powervm  07:26
*** edmondsw has joined #openstack-powervm  07:42
*** edmondsw has quit IRC  07:46
*** AlexeyAbashkin has joined #openstack-powervm  07:56
*** AlexeyAbashkin has quit IRC  08:17
*** AlexeyAbashkin has joined #openstack-powervm  08:18
*** arunman has joined #openstack-powervm  09:02
*** edmondsw has joined #openstack-powervm  09:30
*** edmondsw has quit IRC  09:35
*** k0da has joined #openstack-powervm  09:35
*** k0da has quit IRC  10:26
*** AlexeyAbashkin has quit IRC  11:00
*** AlexeyAbashkin has joined #openstack-powervm  11:00
*** edmondsw has joined #openstack-powervm  12:15
*** esberglu has joined #openstack-powervm  13:48
<esberglu> efried: edmondsw:
<esberglu> ^ That's passing for me locally  13:49
<edmondsw> did you tox --recreate ?  13:50
<esberglu> edmondsw: yeah  13:50
<edmondsw> it doesn't really make sense to me why it would be failing for queens...  13:50
<esberglu> edmondsw: Me neither  13:56
<openstackgerrit> Arun Mani proposed openstack/nova-powervm master: Add affinity score check attribute to flavor
<edmondsw> esberglu nova mtg starting  14:01
*** k0da has joined #openstack-powervm  14:03
*** arunman has quit IRC  14:04
*** tjakobs has joined #openstack-powervm  14:05
<edmondsw> esberglu if you can address that one thing on the network hotplug patch, I can move that back to ready for wider review  14:07
*** tjakobs has quit IRC  14:07
<esberglu> edmondsw: done  14:11
*** tjakobs has joined #openstack-powervm  14:14
<esberglu> edmondsw: I think the networking-powervm tox is failing because it is still using master neutron for some reason  14:59
<esberglu> Why that's only happening for zuul and not local idk  14:59
<edmondsw> I saw stable/queens specifically called out in one place there  14:59
<edmondsw> doesn't mean it's not *also* pulling master... :(  15:00
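[Editor's sketch] The "tox needs to pull from stable/queens" fixes that land later in this log amount to pinning the neutron dependency in tox.ini to the stable branch instead of master. A hedged illustration of what such a pin can look like (the exact deps lines in the real networking-powervm patch may differ):

```ini
; tox.ini (fragment) -- illustrative only, not the actual patch
[testenv]
deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
    ; Pull neutron from stable/queens rather than master:
    -egit+https://git.openstack.org/openstack/neutron@stable/queens#egg=neutron
```

Without a branch pin like `@stable/queens`, a stable-branch tox run can silently test against master neutron, which matches the failure mode esberglu describes above.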
*** arunman has joined #openstack-powervm  15:08
*** arunman has quit IRC  15:24
<edmondsw> esberglu is the CI ok? still no votes on the commits I put up yesterday.  15:29
<edmondsw> just powervm:recheck, right?  15:30
<openstackgerrit> Merged openstack/ceilometer-powervm stable/queens: tox needs to pull from stable/queens
<openstackgerrit> Doug Hellmann proposed openstack/ceilometer-powervm master: add lower-constraints job
*** tjakobs has quit IRC  16:07
<efried> uh oh, here they come.  16:07
*** tjakobs has joined #openstack-powervm  16:08
<esberglu> edmondsw: keeps failing zuul but seems to be a zuul issue  16:17
<esberglu> I'm just gonna keep rechecking  16:17
<edmondsw> yay zuul...  16:18
<efried> There's something blocking zuul right now. Hold on, let me find it.  16:19
<efried> edmondsw, esberglu: This one needs to merge to fix the gate:
<esberglu> efried: ah tx for the info  16:21
*** AlexeyAbashkin has quit IRC  16:34
<openstackgerrit> Matthew Edmonds proposed openstack/networking-powervm stable/queens: tox needs to pull from stable/queens
<tjakobs> efried: do you know if there is a reason we can't just use the connection_info (BDM) during migration? It already has the udid saved from the connect_volume flow  16:59
*** k0da has quit IRC  17:00
<efried> tjakobs: I recently found out that the BDM is not the connection_info.  17:07
<efried> the latter is a member of the former.  17:07
<efried> That may not be remotely relevant to your question :)  17:08
<efried> tjakobs: I'm gonna go out on a limb and say: since you're already planning to get set up to test all this live, try it :)  17:09
<tjakobs> exactly what I wanted to hear...  17:09
<efried> you're welcome.  17:10
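[Editor's sketch] The distinction efried draws above — the BDM (block device mapping) is the outer record, and connection_info is one field nested inside it — can be illustrated like this. Field names follow nova's BlockDeviceMapping object, but the sample values (device name, IDs, udid key) are made up for illustration:

```python
import json

# A BDM record is the outer object; connection_info is a member of it
# (stored as a JSON string in nova's DB). Sample values are illustrative.
bdm = {
    "device_name": "/dev/vda",
    "source_type": "volume",
    "volume_id": "vol-1234",
    "connection_info": json.dumps({
        "driver_volume_type": "iscsi",
        "data": {"volume_udid": "udid-5678"},
    }),
}

def get_udid(bdm):
    # Deserialize the nested connection_info, then pull out the udid
    # that the connect_volume flow saved there.
    conn_info = json.loads(bdm["connection_info"])
    return conn_info["data"].get("volume_udid")
```

So "use the connection_info (BDM)" really means: load the BDM, then dig the connection_info member out of it before reading the udid.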
*** arunman has joined #openstack-powervm  17:11
*** esberglu_ has joined #openstack-powervm  17:39
*** esberglu has quit IRC  17:41
<openstackgerrit> Chhavi Agarwal proposed openstack/nova-powervm master: Use vios_uuids property to process required vioses for iSCSI
<chhagarw> edmondsw: updated the changes with the UT.  17:45
*** arunman has quit IRC  17:56
<chhagarw> tjakobs, edmondsw: Why do we need a global lock while getting the iscsi initiator @lockutils.synchronized("PowerVM_iSCSI_Initiator_Lookup")  17:58
<chhagarw> Currently only the novalink mgmt partition acts as an initiator, but if a vios also hosts iscsi, then they will have a separate initiator iqn  18:01
<edmondsw> chhagarw looks like that needs to be cleaned up a bit... should only lock when it's actually doing discovery, not when it's just going to return something that's already been discovered  18:04
<edmondsw> and will also need changes to support VIOS as initiator  18:04
<edmondsw> traditional VIOS  18:05
<chhagarw> edmondsw: This can be moved to the volume/ to return the initiator  18:06
<edmondsw> chhagarw volume/ does seem like a better place for something that is iscsi-specific  18:07
<chhagarw> lock based on the uuid  18:07
<edmondsw> vios uuid, right? yes  18:07
<chhagarw> I will have that change, can u have a look on the active vios_uuid one.  18:08
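[Editor's sketch] The pattern chhagarw and edmondsw converge on above — cache discovered initiators, lock per VIOS uuid rather than globally, and only take the lock when discovery actually has to run — could look roughly like this. All names are hypothetical; the real nova-powervm code uses oslo's lockutils (as quoted above) rather than raw threading locks:

```python
import threading

_initiators = {}          # vios_uuid -> discovered initiator iqn
_locks = {}               # vios_uuid -> per-VIOS lock
_locks_guard = threading.Lock()

def _lock_for(vios_uuid):
    # Lazily create one lock per VIOS uuid.
    with _locks_guard:
        return _locks.setdefault(vios_uuid, threading.Lock())

def get_initiator(vios_uuid, discover):
    # Fast path: already discovered -- return it without locking,
    # which is edmondsw's "only lock when actually doing discovery".
    iqn = _initiators.get(vios_uuid)
    if iqn is not None:
        return iqn
    # Slow path: serialize discovery per VIOS, not across all of them.
    with _lock_for(vios_uuid):
        if vios_uuid not in _initiators:
            _initiators[vios_uuid] = discover(vios_uuid)
        return _initiators[vios_uuid]
```

Per-uuid locking also leaves room for the traditional-VIOS case chhagarw raises, where each VIOS hosting iscsi has its own initiator iqn.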
*** AlexeyAbashkin has joined #openstack-powervm  18:15
*** AlexeyAbashkin has quit IRC  18:20
<edmondsw> chhagarw comments posted... some new, some repeated from my previous comments  18:38
*** k0da has joined #openstack-powervm  19:03
<efried> edmondsw: We currently have the ability via extra_specs to specify shared vs dedicated procs, right?  19:03
<edmondsw> there's also dedicated_sharing_mode and shared_proc_pool_name  19:06
<edmondsw> shared_weight, etc.  19:06
<esberglu_> efried: I don't think there is a good way to just apply the snapshot CI patch to only the snapshot change  19:11
<esberglu_> I mean we could just check every change number and match against the snapshot change  19:12
<efried> esberglu_: So what do you suggest? Wait until everything under it has merged, then enable snapshot?  19:12
<esberglu_> But applying the patch would require the internal gerrit link  19:12
<efried> cause mriedem threatened to cock-block you if he doesn't see snapshot tests in the CI runs for the snapshot patch.  19:12
<efried> And I'm not sure he was kidding.  19:12
<esberglu_> No he was serious  19:12
<esberglu_> But we don't know which patch we are testing until later in the CI flow (prep_devstack) so we can't have internal links  19:13
<esberglu_> I guess we could put a script that applies the snapshot CI patch into the snapshot we spawn from, then call that script from prep_devstack?  19:14
<esberglu_> Gross but could work  19:14
<esberglu_> Let me think about it a little bit. Cause mriedem also didn't like us applying outstanding patches to all nova CI runs  19:16
<efried> esberglu_: It would be acceptable if you could somehow put up another patch that pulls in the snapshot patch and includes those tests. mriedem does that kind of shit all the time.  19:18
<edmondsw> esberglu_ is there a way to run the tests without failures causing the CI to -1 ?  19:18
<efried> Don't know that that helps us at all.  19:18
<esberglu_> edmondsw: Not sure what you mean?  19:19
<edmondsw> something between skip and running them normally  19:19
<edmondsw> run, but just report the results, don't consider them when deciding whether the CI reports success|failure  19:20
<edmondsw> seems like "experimental" tests would be a nice feature, but may not exist  19:21
<esberglu_> edmondsw: I think we would have to define a whole new CI pipeline for that  19:21
<edmondsw> moving on then...  19:22
<edmondsw> esberglu why can't prep_devstack watch for the gerrit review for snapshot, and update the local.conf it's gonna use?  19:24
<edmondsw> esberglu_ ^  19:24
<edmondsw> as a temp hack until we can merge 6363  19:25
<edmondsw> (would update 6363 to remove that hack at the same time it does things the right way)  19:26
<esberglu_> edmondsw: Yeah that would work for snapshot since it only has the one local.conf change  19:26
<esberglu_> I would like to come up with something to actually pull in patches though  19:26
<esberglu_> No guarantee that when we come across this in the future it will be 1 line changed in 1 file  19:27
<edmondsw> sure, if you can come up with something better, I'm all for it  19:27
<esberglu_> edmondsw: I'm cool with that for snapshot though  19:27
<esberglu_> I can put something up  19:28
<esberglu_> edmondsw: We have some code for patching pypowervm with internal patches, I can maybe rework that for powervm-ci  19:29
<esberglu_> I think that's all or nothing too though  19:29
<edmondsw> esberglu_ what about network hotplug... shouldn't we have the same chicken-and-egg issue there?  19:30
<esberglu_> Either applying to pypowervm for all runs or none  19:30
<esberglu_> edmondsw: Nope  19:30
<esberglu_> Network attach is defined in the driver capabilities  19:31
<esberglu_> Which somehow propagates through to tempest  19:31
<esberglu_> edmondsw: However I'm looking into the CI for that right now  19:31
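[Editor's sketch] The mechanism esberglu_ describes — tempest behavior following what the driver advertises — could be sketched like this. The capability key mirrors the style of nova's ComputeDriver capabilities dict; the mapping helper and flag names are hypothetical, not the actual powervm-ci code:

```python
# The driver advertises what it supports; CI tooling can key tempest's
# compute feature toggles off that instead of hardcoding them.
DRIVER_CAPABILITIES = {
    "supports_attach_interface": True,
}

def tempest_feature_flags(caps):
    # Map driver capabilities onto tempest-style feature toggles
    # (hypothetical helper for illustration).
    return {
        "interface_attach": caps.get("supports_attach_interface", False),
    }
```

With this shape, flipping one capability in the driver is enough to turn the corresponding tempest tests on or off, which is why network attach avoids the chicken-and-egg problem the snapshot patch has.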
<esberglu_> Idk that our current CI ever does attach_interface  19:32
<edmondsw> if it doesn't, we gotta fix that ASAP  19:32
<edmondsw> snapshot isn't totally ready for nova core reviews, but the network hotplug is  19:32
<edmondsw> how are we gonna fix that? RMC takes so long to become active...  19:33
<esberglu_> Not a clue  19:36
<esberglu_> It takes like 5 minutes right?  19:36
<esberglu_> But I think it's accurate that RMC has to be active to attach  19:37
<edmondsw> more like 10  19:38
<esberglu_> edmondsw: I don't think we can justify adding 20 minutes to each CI run for 2 tests  19:38
<edmondsw> the tempest results on the network hotplug patch show one AttachInterfacesTestJSON test passing: test_add_remove_fixed_ip  19:39
<esberglu_> But it sounds like nova is gonna make us  19:39
<esberglu_> edmondsw: I don't think that actually hits the attach_interface  19:39
<edmondsw> they show another skipped... test_create_list_show_delete_interfaces_by_fixed_ip  19:39
<edmondsw> and then they don't show the 2 that you linked from the blacklist  19:40
<edmondsw> why is that one skipped that's not in the blacklist?  19:40
<edmondsw> means our tempest config is wrong, yes?  19:40
<esberglu_> We have shared networks  19:41
<edmondsw> ic... do we have to use shared networks?  19:42
<edmondsw> I assume if we didn't, we'd have to add that to the blacklist or it would have the same RMC problem  19:43
<edmondsw> but I'm wondering if nova will want us to stop skipping that test as well...  19:43
<esberglu_> edmondsw: Yep it would hit that  19:43
<edmondsw> esberglu_ fix your nick? :)  19:44
*** esberglu_ is now known as esberglu  19:44
<edmondsw> much better  19:44
<edmondsw> it's those little things... ;)  19:44
<esberglu> We don't have to use shared, but IIRC we used to hit tons of issues before using shared  19:45
<esberglu> But that was forever ago  19:45
<esberglu> I think I was still in school  19:45
<edmondsw> so let's put that on the back burner and focus on the RMC issue first  19:45
<edmondsw> how long does a CI run take, minus setup and teardown... i.e., actually running tests?  19:46
<esberglu> 25 - 30 minutes normally  19:47
<edmondsw> I wonder if we could write something that creates a server toward the beginning of that, and then tests attach/detach toward the end... or at least 10 min between them  19:51
<esberglu> No idea how we would tell tempest to use that server for those tests  19:52
<edmondsw> would probably have to be a totally different test  19:52
<edmondsw> does tempest run everything serially?  19:54
<edmondsw> I think it does some things in parallel, no?  19:54
<esberglu> Nope, it's customizable  19:54
<esberglu> We use a concurrency of 4  19:54
<edmondsw> so one really long test wouldn't necessarily kill performance  19:55
<edmondsw> long as it's not one of the last tests run  19:55
<esberglu> Idk that we would ever be able to get such a test into tempest  19:58
<edmondsw> maybe it wouldn't have to be a totally new test...  19:59
<edmondsw> if we could add something in setup for that class to create 2 server instances, and modify the existing tests to use those instead of creating the instances when the tests are run  20:00
<edmondsw> then put an "if powervm" in the setup to wait for RMC to become active  20:00
<edmondsw> no need to wait unless powervm  20:01
<edmondsw> I think they might accept that. If not, could we carry it as a patch?  20:01
<esberglu> I don't think that they would even accept that, but we can try. We could carry it as a patch, it just means another thing to maintain  20:04
<esberglu> edmondsw: brb I need coffee. I can start putting something together to see if your idea works  20:06
<esberglu> And we can worry about how to carry it forward later  20:06
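[Editor's sketch] edmondsw's idea above — create the servers once in class setup and only pay the RMC wait there, and only on powervm — hinges on a poll-until-active helper. A minimal sketch, assuming a caller-supplied `is_active()` probe (all names are hypothetical; this is not tempest code):

```python
import time

def wait_for_rmc_active(is_active, timeout=600, interval=5, sleep=time.sleep):
    """Poll is_active() until it returns True or timeout (seconds) expires.

    In edmondsw's scheme this would run once in the test class's setup,
    guarded by an "if powervm" check, so non-powervm runs never wait.
    """
    deadline = time.monotonic() + timeout
    while not is_active():
        if time.monotonic() >= deadline:
            raise TimeoutError("RMC did not become active in %ss" % timeout)
        sleep(interval)
```

Since the servers are created at setup time, the roughly 10-minute RMC wait overlaps with the rest of the tempest run (concurrency 4, per esberglu) instead of adding serial time to each attach test.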
*** k0da has quit IRC  20:43
*** AlexeyAbashkin has joined #openstack-powervm  20:45
<esberglu> efried: edmondsw: 6418 for snapshot CI  20:47
<esberglu> Wait it's not right, one sec  20:48
<esberglu> Okay it's ready now  20:49
<efried> esberglu: done  20:50
<esberglu> efried: tx  20:51
<esberglu> I'm gonna put a run through manually with that quick and make sure everything is cool before merging  20:51
*** AlexeyAbashkin has quit IRC  20:52
<edmondsw> esberglu I think you have the path wrong  20:55
<edmondsw> gotta run  20:55
<esberglu> edmondsw: Yep you're right, good catch  20:56
*** edmondsw has quit IRC  20:56
*** k0da has joined #openstack-powervm  20:56
*** edmondsw has joined #openstack-powervm  20:56
*** edmondsw has quit IRC  21:01
<openstackgerrit> Chhavi Agarwal proposed openstack/nova-powervm master: Use vios_uuids to process required vioses for iSCSI
*** chhagarw has quit IRC  21:12
*** AlexeyAbashkin has joined #openstack-powervm  21:17
*** AlexeyAbashkin has quit IRC  21:21
*** tjakobs has quit IRC  21:38
*** esberglu has quit IRC  21:44
*** esberglu has joined #openstack-powervm  21:44
*** esberglu has quit IRC  21:49
*** edmondsw has joined #openstack-powervm  22:46
*** AlexeyAbashkin has joined #openstack-powervm  23:16
*** AlexeyAbashkin has quit IRC  23:20
*** k0da has quit IRC  23:39
*** k0da has joined #openstack-powervm  23:54
*** esberglu has joined #openstack-powervm  23:54
*** esberglu has quit IRC  23:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas