Friday, 2018-04-13

00:08 *** boveir has joined #openstack-powervm
00:10 *** boveir has quit IRC
01:27 *** manous_ has joined #openstack-powervm
01:37 *** manous_ has quit IRC
06:37 *** AlexeyAbashkin has joined #openstack-powervm
06:58 *** AlexeyAbashkin has quit IRC
07:41 *** AlexeyAbashkin has joined #openstack-powervm
07:46 *** AlexeyAbashkin has quit IRC
10:54 *** k0da has joined #openstack-powervm
12:16 *** manous has joined #openstack-powervm
12:35 *** apearson has joined #openstack-powervm
<openstackgerrit> prashkre proposed openstack/nova-powervm master: Return iSCSI Initiators for all VIOSes
12:46 *** edmondsw_ has joined #openstack-powervm
12:47 *** edmonds__ has joined #openstack-powervm
12:50 *** edmondsw has quit IRC
12:51 *** edmondsw_ has quit IRC
12:54 *** edmondsw has joined #openstack-powervm
12:55 *** efried is now known as fried_rice
12:56 *** edmondsw_ has joined #openstack-powervm
12:56 *** edmonds__ has quit IRC
12:59 *** edmondsw has quit IRC
13:00 *** prashkre has joined #openstack-powervm
<openstackgerrit> prashkre proposed openstack/nova-powervm master: WIP: Return iSCSI Initiators for all VIOSes
13:13 <fried_rice> prashkre: Are you WIPing that because you're still testing?
13:14 <prashkre> fried_rice: Auto UTs are failing, looking at them
13:14 <fried_rice> prashkre: Are you running the tests locally?
13:15 <prashkre> fried_rice: yes, I am running them locally.
13:35 <edmondsw_> fried_rice I added my own comments on those iscsi patches
13:38 *** tjakobs has joined #openstack-powervm
13:39 *** edmondsw_ is now known as edmondsw
13:44 *** prashkre has quit IRC
13:53 *** prashkre has joined #openstack-powervm
14:00 *** esberglu has joined #openstack-powervm
14:05 *** prashkre has quit IRC
14:09 *** prashkre has joined #openstack-powervm
14:21 *** prashkre has quit IRC
14:22 *** prashkre has joined #openstack-powervm
14:45 *** AlexeyAbashkin has joined #openstack-powervm
14:49 *** AlexeyAbashkin has quit IRC
14:50 *** openstackgerrit has quit IRC
15:07 *** prashkre has quit IRC
15:18 *** k0da has quit IRC
15:35 *** prashkre has joined #openstack-powervm
15:52 <catmando> hey all
15:52 <catmando> anyone about?
16:08 <fried_rice> catmando: Sort of, for a little while, then out for a couple hours.  What's up?
16:17 <catmando> is there any channel where SVC people hang out?
16:17 <catmando> (I am assuming you work for IBM)
16:37 <fried_rice> catmando: If there were, it would be here or possibly #openstack-cinder.  Let me ask...
16:38 <fried_rice> catmando: (yes, IBM)
16:38 <fried_rice> btw, by "would be here" I meant "but nobody from SVC hangs around here, that I know of"
16:39 <fried_rice> catmando: Okay, Gerald confirms that, if they're anywhere, it's #openstack-cinder
16:39 <fried_rice> ...but they'll be in Chinese time zones for the most part.
16:40 *** gman-tx has joined #openstack-powervm
16:41 <fried_rice> catmando: gman-tx; gman-tx, likewise.
16:41 <fried_rice> He might be able to help you with whatever your question is.
16:41 <fried_rice> And with that, I'm out for a couple hours.
16:41 *** fried_rice is now known as fried_rolls
16:43 <catmando> Hey gman-tx
16:43 <gman-tx> Hey catmando
16:43 <catmando> Quick question: do you know much about SVCs?
16:44 <gman-tx> From a cinder standpoint, yes ... but cinder doesn't fully take advantage of everything the SVC can do
16:45 <catmando> We've had a total cluster failure, all nodes down with error 564. It may not be something you've ever seen before
16:45 <gman-tx> what's the question?
16:45 <catmando> Just a shot in the dark
16:45 <catmando> We did open a ticket, but I am curious if anyone has seen this before
16:45 <gman-tx> yuck, a hardware failure
16:47 <gman-tx> nope, haven't seen that one before
16:48 <gman-tx> Too many machine code crashes have occurred?
16:50 <gman-tx> what's your firmware level?
16:52 *** esberglu has quit IRC
17:03 <manous> hi all
17:03 <manous> is it possible to install openstack powervm on an IBM x86 server?
17:41 *** k0da has joined #openstack-powervm
17:52 *** apearson has quit IRC
17:56 *** esberglu has joined #openstack-powervm
17:57 *** manous has quit IRC
18:38 *** apearson has joined #openstack-powervm
<prashkre> fried_rice: Hi. Sorry, I didn't understand your comment at
19:03 <prashkre> "It should be legit to pass [] (which will effectively make this method just return whatever's already in the _ISCSI_INITIATORS cache without the expensive get_active_vioses lookup)."
19:07 <prashkre> fried_rice: you mean to say "return _ISCSI_INITIATORS if vios_ids is None"?
19:25 *** apearson has quit IRC
19:26 *** fried_rolls is now known as fried_rice
19:29 <fried_rice> prashkre: No, I mean if you pass [] for vios_ids, we should not go try to get active VIOSes again.
19:30 <fried_rice> prashkre: Though I still think this is a little weird, and maybe we should rethink it.
19:32 <fried_rice> prashkre: Because we need to make sure we find the same initiator every time from get_volume_connector.
19:37 *** apearson has joined #openstack-powervm
19:41 *** gman-tx has quit IRC
19:48 <fried_rice> prashkre: I think I have a solution.  I left comments in the relevant places.
19:53 <prashkre> fried_rice: sure.
19:57 *** gman-tx has joined #openstack-powervm
20:07 <prashkre> fried_rice: Sorry, I am new to storage, please don't laugh at me if I raise poor questions. I remember the initiator IQN on the VIOS is used to initialize the connection.
20:07 <fried_rice> prashkre: I promise you, I know less about storage and iSCSI than you do.
20:08 <prashkre> fried_rice: Since we are caching these initiators, what if VIOS1, whose IQN we are always picking, goes down?
20:08 <fried_rice> I'm leaning heavily on folks like gman-tx to indicate that this patch is doing the right thing.  I can just look at it from the perspective of code correctness and style, really.
20:09 <fried_rice> prashkre: I agree that's an issue, and I think that's actually just tough luck.  Because if we used VIOS1 to connect, and then VIOS1 goes down, I don't think it does us any good to try to disconnect via VIOS2.
<fried_rice> gman-tx: please confirm that that's what you meant by this comment:
20:11 <fried_rice> Perhaps gman-tx is suggesting we actually use the instance ID in some way to figure out which VIOS we used for attach, so that we can reliably find the same one for detach.
20:11 <fried_rice> If we set it up as I suggested, we can be sure we'll always use the same one for both.
20:11 <fried_rice> But it does leave a hole, as you suggest
20:12 <fried_rice> which is that, if we've been happily using VIOS1 for everything, and VIOS1 goes down, it would be nice if the next *attach* could use VIOS2 instead.
20:12 <gman-tx> you can't disconnect from vios2 if you connected to vios1 ...
20:12 <fried_rice> even if attempting a *detach* from same would fail.
20:13 <fried_rice> I'm not sure how we resolve this without a) refreshing the list of active VIOSes every time, b) using that state data to make sure we're retrieving a live initiator, and c) somehow saving which VIOS we used for which instance so that we can tell whether we *must* return the same one (even if it's down?) or if we can switch to a different one.
20:14 <gman-tx> somewhere we would have to remember which vios we connected to
20:14 <fried_rice> gman-tx: Is the "failure" any worse if we try to detach from VIOS2 than if we try to disconnect from VIOS1 when it's down?
20:14 <fried_rice> I guess that would be confusing in the logs, no matter how we tried to paper over it.
20:15 <fried_rice> I sure wouldn't love the idea of trying to persist instance IDs.  There be tygers.
20:16 <fried_rice> Using a hash only helps us if we can be assured we have the full list when we start.
20:16 <fried_rice> I gotta run, guys.  Should be back... later.
20:16 <gman-tx> hash is a bit better but doesn't always get us the original value
20:17 <prashkre> gman-tx: do you mean, remember that instance_1 is connected through vios1, instance_2 through vios2, like that?
20:17 <gman-tx> if you disconnect from vios1 while it is down you leave the hdisk mapped to the VM ... a stale mapping
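[Editor's note] The bookkeeping gman-tx describes, remembering per instance which VIOS handled the attach so the detach goes back to the same one, can be sketched as below. This is only an in-memory illustration with invented names; real code would have to persist the mapping (fried_rice's option (c)), since a process-local dict is lost on restart.

```python
_INSTANCE_VIOS = {}  # instance_id -> vios_id used at attach time

def pick_vios_for_attach(instance_id, active_vios_ids):
    """Reuse the recorded VIOS if it is still active; else fail over."""
    vios_id = _INSTANCE_VIOS.get(instance_id)
    if vios_id not in active_vios_ids:
        # First attach, or the recorded VIOS went down: pick a live one
        # and remember the choice for the eventual detach.
        vios_id = active_vios_ids[0]
        _INSTANCE_VIOS[instance_id] = vios_id
    return vios_id

def vios_for_detach(instance_id):
    # Detaching via a different VIOS than the one used for attach leaves
    # a stale hdisk mapping, so always return the recorded VIOS, even if
    # it is currently down.
    return _INSTANCE_VIOS.get(instance_id)
```

This gives the asymmetry the channel converged on: new attaches may fail over to a live VIOS, while detaches are pinned to whichever VIOS performed the attach.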
20:20 <prashkre> gman-tx: thanks for the clarification. Signing off for today; will catch you later.
20:21 <gman-tx> yeah, need to step away myself
21:15 *** esberglu has quit IRC
21:30 *** tjakobs has quit IRC
22:20 *** esberglu has joined #openstack-powervm
22:21 *** esberglu has quit IRC
22:26 *** apearson has quit IRC
22:32 *** prashkre has quit IRC
23:05 *** k0da has quit IRC

Generated by irclog2html 2.15.3 by Marius Gedminas