Tuesday, 2017-12-05

*** chhavi has joined #openstack-powervm  00:00
*** chhavi has quit IRC  00:05
*** thorst has joined #openstack-powervm  00:08
*** thorst has quit IRC  00:09
*** svenkat has joined #openstack-powervm  01:16
*** svenkat has quit IRC  03:22
*** edmondsw has quit IRC  03:26
*** chhavi has joined #openstack-powervm  03:33
*** thorst has joined #openstack-powervm  03:44
*** thorst has quit IRC  03:46
*** thorst has joined #openstack-powervm  03:53
*** thorst has quit IRC  03:54
*** edmondsw has joined #openstack-powervm  05:14
*** edmondsw has quit IRC  05:18
*** chhavi has quit IRC  05:53
*** chhavi has joined #openstack-powervm  08:55
*** catmando has joined #openstack-powervm  11:08
*** edmondsw has joined #openstack-powervm  11:58
*** edmondsw has quit IRC  12:05
<catmando> hey all  12:44
<catmando> can anyone help with a VIOS NPIV issue?  12:44
*** edmondsw has joined #openstack-powervm  13:06
*** chhavi__ has joined #openstack-powervm  13:27
*** svenkat has joined #openstack-powervm  13:38
*** esberglu has joined #openstack-powervm  14:03
<esberglu> #startmeeting powervm_driver_meeting  14:04
<openstack> Meeting started Tue Dec  5 14:04:17 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.  14:04
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  14:04
*** openstack changes topic to " (Meeting topic: powervm_driver_meeting)"  14:04
<openstack> The meeting name has been set to 'powervm_driver_meeting'  14:04
* efried_cya_wed is not really here  14:04
<efried_cya_wed> Unless anyone has any specific issues for me...  14:05
<esberglu> edmondsw: You around? We can just keep it informal today  14:05
<esberglu> efried_cya_wed: I don't have anything for ya  14:05
<edmondsw> esberglu yes, was waiting for you to login  14:05
<edmondsw> I'm fine with keeping it informal  14:06
<edmondsw> wanted to catch up on CI status and how OVS testing is going  14:06
<esberglu> edmondsw: I was working with drew last week. neo44 has been a PITA to get installed, I'm checking with the lab to see if it's cabled right  14:07
<esberglu> I'm thinking I might use one of the staging CI systems to start testing SEA in parallel  14:07
<edmondsw> sounds good  14:07
<edmondsw> I dropped you a few comments on those, but haven't had time to look all through them  14:08
<esberglu> I saw your comments on both of those, was gonna wait until you had a chance to look at that some more before testing  14:08
<esberglu> *before addressing  14:08
<edmondsw> I might be able to look this afternoon?  14:08
*** thorst has joined #openstack-powervm  14:09
<esberglu> Cool. Other than that I started putting together the first vSCSI patch  14:09
<esberglu> Will likely have some questions there as I go along  14:09
<esberglu> edmondsw: As far as CI goes, the only real thing hitting us consistently is the in-tree networking tempest failures  14:11
<esberglu> I haven't been eager to fix those since we don't actually have networking in-tree  14:11
<esberglu> So it's just a problem with certain tests trying to use networks when they can't  14:12
<edmondsw> how are we failing on networking in-tree if we don't have networking in-tree?  14:12
<edmondsw> oh, I see... some tests assume networking must be possible?  14:12
<edmondsw> and we're not skipping them for some reason?  14:12
<esberglu> What I think is happening is that certain other tests are creating networks. So if those networks exist at the time of the test they attempt to get used  14:13
<esberglu> If the networks don't exist all is good and we pass  14:13
* efried_cya_wed is actually leaving now. Have a great day!  14:13
<edmondsw> efried_cya_wed u2!  14:13
<edmondsw> esberglu yeah, sounds like a bug but one that should no longer impact us as soon as we get OVS merged, if you're right  14:14
<esberglu> edmondsw: Yep  14:14
<edmondsw> so I can understand why you wouldn't prioritize that  14:14
<edmondsw> as long as we're staying on top of rechecks  14:15
<esberglu> It's just part of the weirdness of not having networking for CI  14:15
<edmondsw> oh, I guess we will have this until SEA is merged, not just OVS, since CI uses SEA right?  14:15
<esberglu> edmondsw: Yeah  14:15
<edmondsw> I'd be surprised if SEA is merged before January... you ok living with this that long?  14:16
<esberglu> edmondsw: I can spend an afternoon looking at it this week or next. It probably comes down to disabling a group of tests  14:17
<edmondsw> I'll let you decide priority there at this point  14:18
<esberglu> edmondsw: That's all I had. Anything else from you?  14:19
<edmondsw> as for OVS testing... at what point should we talk about giving up on neo44 and finding another system to test that?  14:20
<esberglu> edmondsw: I'm opening a lab request right after this. See what they say and then come back to that question?  14:20
<esberglu> edmondsw: Unless you're aware of a free system  14:21
<edmondsw> with folks heading off on vacation, there should be systems available  14:21
<edmondsw> but I don't know of one in particular  14:21
<edmondsw> esberglu I also wanted to ask if your ecal is up-to-date?  14:21
<esberglu> Yeah I updated it yesterday  14:22
<edmondsw> cool, tx  14:22
<edmondsw> esberglu wow, you're really around that long?  14:22
<esberglu> I'm not taking anything off until christmas week  14:22
<esberglu> I burned almost all of my vacation last january  14:23
<edmondsw> ok man, sorry  14:23
<edmondsw> if anything comes up and you need help, don't hesitate to call me  14:23
<esberglu> edmondsw: No problem. Sounds good!  14:24
<edmondsw> that's it from me  14:24
*** openstack changes topic to "This channel is for PowerVM-related development and discussion. For general OpenStack support, please use #openstack."  14:24
<openstack> Meeting ended Tue Dec  5 14:24:41 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  14:24
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-12-05-14.04.html  14:24
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-12-05-14.04.txt  14:24
<openstack> Log:            http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-12-05-14.04.log.html  14:24
<catmando> hey chaps  14:30
<catmando> i thought i wouldn't interrupt the meeting  14:31
<catmando> i have a question regarding VIOS and NPIV (specifically how Storage Connectivity Groups work with NPIV)  14:31
<catmando> am i right in my assumption (not that i can find this in documents) that powervc cloud uses scg to dynamically create virtual fc mappings using npiv when one creates an instance?  14:34
<catmando> apologies if this is incorrect, i'm an app developer who's currently working on and learning about powervc  14:35
<catmando> the problem that I have is: we recently updated the firmware on the SVC (which is backed by a Storwize) and now the npiv is showing as broken in powervc  14:36
<catmando> the error is: "NPIV: Unknown fabric". However, the switches seem fine (nothing was changed on them) and the two VIOS report their physical FC adapters as having an NPIV fabric  14:37
<catmando> even stranger: existing LPARs that have vfc mappings are coming up as expected  14:38
<catmando> but with the scg failing, I cannot create any new LPARs  14:38
<catmando> no ideas?  14:48
<edmondsw> catmando sorry, looked away... reading  14:50
*** apearson has joined #openstack-powervm  14:51
<edmondsw> catmando sounds like you need to open a PMR  14:53
<catmando> edmondsw: you're right, of course. i am still relatively new within the company and i'm still chasing the internal teams to get our lab support correctly set up  14:54
<edmondsw> catmando let me see if I can wrangle one of the PowerVC storage guys to join here. This channel is typically for pure OpenStack and SCGs are PowerVC-specific  14:55
<catmando> i know :)  14:55
<catmando> :( not :)  14:55
<catmando> it's just that I have nowhere else to go  14:55
* catmando sheds a single tear  14:55
<edmondsw> catmando no worries, will try to get you some help  14:55
<edmondsw> but nowhere isn't true, is it? You can open a PMR, right? ;)  14:56
<catmando> @edmondsw i have no idea at the moment if the lab setup i am working on has a support agreement  15:00
<catmando> and it's been slow work getting people to answer me  15:00
<edmondsw> catmando are you IBM?  15:00
<edmondsw> if so, ping me on ST or Slack, same shortname  15:01
<catmando> nope, i work for an IBM partner  15:01
<edmondsw> ah, ok  15:01
<catmando> http://www-304.ibm.com/partnerworld/gsd/solutiondetails.do?solution=50455&expand=true&lc=en <<<--- this one  15:02
*** gman-tx has joined #openstack-powervm  15:10
*** thorst has quit IRC  16:15
<esberglu> edmondsw: Looking at the best way to bring in  18:30
<esberglu> We had something similar in networking and waited until the SEA change to split out the parent classes  18:31
<edmondsw> esberglu I'm tempted to rework the way that was done OOT...  18:32
<esberglu> edmondsw: What's your idea?  18:33
<edmondsw> FibreChannelVolumeAdapter adds one method with no impl, whereas VscsiVolumeAdapter is much more extensive  18:38
<edmondsw> esberglu I suspect it was done the way it was because FC was implemented first? But I'm thinking about changing the inheritance hierarchy  18:39
<edmondsw> making VscsiVolumeAdapter inherit from PowerVMVolumeAdapter  18:40
<edmondsw> And make FibreChannelVolumeAdapter be the one that hangs off on its own (inherits from object), if we even really need it (worth it for one method?)  18:43
<edmondsw> that also would make more sense to someone reviewing a vSCSI patch, which I think addresses your concern  18:43
<edmondsw> at least for now you could just add the wwpns method onto PVVscsiVolumeAdapter and not mess with FibreChannelVolumeAdapter  18:47
<edmondsw> the downside of this of course is that it's different from OOT, and it's nice when the two match up cleanly... so I'd love to hear what efried_cya_wed thinks when he's back tomorrow  18:47
<esberglu> edmondsw: I see where you are going with this. But I guess my initial question was whether or not we need all of that inheritance in the 1st vSCSI patch  18:57
<edmondsw> esberglu I don't think it hurts us or makes review much more difficult. And it will make things easier as we port more in. So I think I'd keep the different levels  18:59
<edmondsw> might even simplify review to an extent... 1) things that are common to all vol adapters, 2) things that are common to vscsi-based vol adapters, 3) the specific vscsi FC vol adapter we are adding here  19:01
<edmondsw> I would move all the volume.py content into vscsi.py, though, since it's all vscsi-specific... not sure why that was made a separate file and given such a generic name.  19:03
<edmondsw> esberglu ^  19:03
<edmondsw> or if we are trying to avoid doubling the size of vscsi.py, rename vscsi.py -> vscsifc.py & volume.py -> vscsi.py ?  19:07
<esberglu> edmondsw: I'm on board with this. I think it makes more sense to consolidate into vscsi.py  19:07
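[Editor's note: the hierarchy discussed above could be sketched roughly as follows. The class names come from the conversation; the method names and bodies are purely illustrative assumptions, not the actual nova-powervm code.]

```python
# Hypothetical sketch of the inheritance rework edmondsw proposes:
# VscsiVolumeAdapter inherits directly from PowerVMVolumeAdapter,
# while FibreChannelVolumeAdapter stands alone and contributes only
# its single FC-specific method.

class PowerVMVolumeAdapter:
    """Item 1: behavior common to all volume adapters."""
    def connect_volume(self):
        raise NotImplementedError

class VscsiVolumeAdapter(PowerVMVolumeAdapter):
    """Item 2: behavior common to all vSCSI-based volume adapters."""
    def _add_vscsi_mapping(self):
        pass  # shared vSCSI mapping logic would live here

class FibreChannelVolumeAdapter:
    """Hangs off on its own (inherits from object); adds one method
    with no impl."""
    def wwpns(self):
        raise NotImplementedError

class PVVscsiVolumeAdapter(VscsiVolumeAdapter, FibreChannelVolumeAdapter):
    """Item 3: the specific vSCSI FC adapter being added, with wwpns
    added directly onto it as suggested for the first patch."""
    def wwpns(self):
        return []  # would return the physical FC port WWPNs
```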
*** gman-tx has quit IRC  19:11
*** chhavi__ has quit IRC  19:16
*** chhavi has quit IRC  19:16
*** gman-tx has joined #openstack-powervm  19:32
*** csky has joined #openstack-powervm  19:32
<edmondsw> catmando I've got csky here to help you  19:32
<csky> catmando: Hi. Matthew was telling me about your question  19:35
<csky> A firmware update on the SVC should not have any effect on the fabric mapping status for your host/VM side ports  19:36
*** esberglu_ has joined #openstack-powervm  19:40
*** csky has quit IRC  19:41
*** csky has joined #openstack-powervm  19:42
<csky> catmando: Sorry I got temporarily disconnected for some reason  19:42
*** esberglu has quit IRC  19:43
<edmondsw> catmando csky told me "NPIV: Unknown fabric" means the FC port appears to be cabled, but not to a registered fabric.  19:51
*** csky_ has joined #openstack-powervm  19:54
*** csky has quit IRC  19:55
*** csky_ is now known as csky  19:55
<csky> catmando: edmondsw is right - my message didn't make it through. Go to Fabrics and make sure you have the switches registered that you expect. Then go to the Configuration-->FC Port Configuration page and see which VIOS ports are reporting Unknown fabric. Manually log into the fabric switch and confirm that those ports are reporting.  19:59
*** gman-tx has quit IRC  20:00
<catmando> csky: many thanks, will do  20:13
*** gman-tx has joined #openstack-powervm  20:15
*** efried_cya_wed has quit IRC  20:20
*** efried_cya_wed has joined #openstack-powervm  20:30
*** edmondsw has quit IRC  20:30
*** csky has quit IRC  20:53
*** csky has joined #openstack-powervm  21:20
*** edmondsw has joined #openstack-powervm  21:30
*** svenkat has quit IRC  21:50
*** apearson has quit IRC  21:51
*** apearson has joined #openstack-powervm  21:51
*** apearson has quit IRC  21:52
*** apearson has joined #openstack-powervm  21:52
*** apearson has quit IRC  21:53
*** apearson has joined #openstack-powervm  21:53
*** apearson has quit IRC  21:53
*** apearson has joined #openstack-powervm  21:54
*** apearson has quit IRC  21:54
*** gman-tx has quit IRC  22:04
*** apearson has joined #openstack-powervm  22:23
*** esberglu_ has quit IRC  23:01
*** esberglu has joined #openstack-powervm  23:01
*** svenkat has joined #openstack-powervm  23:03
*** esberglu has quit IRC  23:06
*** gman-tx has joined #openstack-powervm  23:10
*** esberglu has joined #openstack-powervm  23:14
*** apearson has quit IRC  23:14
*** esberglu has quit IRC  23:18
*** gman-tx has quit IRC  23:21
*** svenkat has quit IRC  23:26
*** edmondsw has quit IRC  23:34
*** edmondsw has joined #openstack-powervm  23:35
*** csky has quit IRC  23:51
*** edmondsw has quit IRC  23:57

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!