Wednesday, 2016-11-09

00:10 *** esberglu has quit IRC
00:12 *** thorst_ has joined #openstack-powervm
00:29 *** jwcroppe has joined #openstack-powervm
00:30 *** jwcroppe has quit IRC
00:30 *** jwcroppe has joined #openstack-powervm
00:44 *** chas_ has joined #openstack-powervm
00:49 *** chas_ has quit IRC
00:56 *** seroyer has joined #openstack-powervm
01:04 *** apearson has joined #openstack-powervm
01:22 *** wangqwsh has joined #openstack-powervm
01:45 *** chas_ has joined #openstack-powervm
01:49 *** chas_ has quit IRC
01:56 *** smatzek has joined #openstack-powervm
02:01 <adreznec> Hey thorst_. wangqwsh are you there?
02:02 <adreznec> All right, lets kick things off then
02:02 <adreznec> #startmeeting PowerVM CI Meeting
02:02 <openstack> Meeting started Wed Nov  9 02:02:35 2016 UTC and is due to finish in 60 minutes.  The chair is adreznec. Information about MeetBot at
02:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
02:02 <openstack> The meeting name has been set to 'powervm_ci_meeting'
02:03 <adreznec> #topic Current status
02:03 <adreznec> So, I haven't had a chance to look at the latest revision of your patchset that was put up wangqwsh. Do you want to run us through status?
02:04 <wangqwsh> 1. pypowervm adapter patch's done. 2. we can install openstack with osa
02:04 <thorst_> whoa, that's pretty good.
02:05 <wangqwsh> I update the script related to tempest variables
02:05 <thorst_> so what's next there?  Tempest tests?
02:05 <adreznec> Awesome, you've gotten things running in the staging environment then?
02:05 <wangqwsh> there is an issue in galera while running tempest
02:06 <wangqwsh> i am trying to find the root casue.
02:06 <wangqwsh> casue -> cause
02:06 <adreznec> In galera while running tempest? Are we seeing database failures?
02:06 <wangqwsh> the mysql is not running
02:06 <adreznec> Hmm ok
02:06 <adreznec> How much memory are we assigning to these VMs?
02:07 <adreznec> I know we've been seeing some issues in OSA AIO testing with 8g lately
02:07 <thorst_> that's too little?
02:07 <adreznec> lately being yesterday/today
02:07 <adreznec> Yeah, issues with mysql running out of memory. Still trying to debug exactly why
02:07 <thorst_> yikes...that's a fair amount.
02:07 <thorst_> Lets definitely not double that...but instead maybe step to 10...
02:08 <adreznec> It's the absolute minimum req for OSA AIO FWIW
02:08 <thorst_> and it probably doesn't help that we're SMT-8
02:08 <adreznec> Recommended is 16
02:08 <thorst_> though I doubt that has the same impact to OSA as it does devstack
02:08 <thorst_> yikes...and this would be all our nodes...
02:08 <thorst_> not just OSA nodes?
02:08 <adreznec> Yeah, I'm not sure how much we can tie back to that
02:08 <adreznec> Unless we decide to split nodes
02:09 <adreznec> I know, we already have memory issues
02:09 <adreznec> The OSA gate upstream uses 8gb
02:09 <thorst_> hmm...well, it generally isn't too stressed (each node runs 4 at once, each with 8 GB, but has 128 GB memory)
02:09 <adreznec> So I think we should be able to make it work with that amount with tweaking... just not sure what's the issue
02:09 <thorst_> so we have SOME space...but we need to leave more for the VMs during the tempest runs themselves.
02:10 <thorst_> lets maybe try 12 and back track from there...
02:10 <adreznec> wangqwsh: how far into tempest are you making it?
02:11 <adreznec> Tempest definitely shouldn't take that long to run
02:11 <wangqwsh> because I restart the db manually :)
02:12 <adreznec> Ah, I see
02:12 <wangqwsh> if not, a few seconds
02:12 <thorst_> tempest used to take 4 hours until we bumped up the threads
02:13 <adreznec> Yeah, I was just hoping we wouldn't have that same issue here
02:14 <thorst_> yeah....just need to make sure we have the right thread count
02:15 <thorst_> but it sounds like overall we're making good progress there  :-)
02:15 <adreznec> All right, so wangqwsh do you want to try bumping things to 10gb (or 12gb) and see if we still have galera issues?
02:15 <adreznec> All right
02:16 <wangqwsh> i used this image for CI, Ubuntu_16-04-01_2g.img
02:16 <adreznec> #action wangqwsh to try increasing OSA VM memory to 10gb/12gb and retesting
02:16 <wangqwsh> is that fine?
02:16 <adreznec> That should be right
02:17 <adreznec> All right. On to other status
02:17 <adreznec> thorst_ has his well named fifo_pipo patch on deck
02:17 <thorst_> actually efried has taken over driving that
02:17 <thorst_> and PipesOutLightsOut has become RESTAPIPipe
02:17 <thorst_> I like my name better.
02:18 <adreznec> It'll always be fifo_pipo to me
02:18 <adreznec> and git
02:18 <thorst_> so that's under testing
02:18 <adreznec> All right
02:18 <thorst_> making progress...hopefully more known tomorrow
02:19 <adreznec> I believe esberglu was testing that, right?
02:20 <adreznec> #action esberglu to continue testing RESTAPIPipe patch with assistance from efried
02:21 <adreznec> Who are here only in spirit
02:21 <adreznec> All righty then
02:21 <adreznec> #topic Next steps
02:21 <adreznec> Looking forward then, it seems like things are starting to shape up
02:22 <thorst_> definitely.  I think the next thing - perhaps out of CI - is the pypowervm integrated into global-requirements
02:22 <thorst_> asking for that to be done...but - not really tied to CI
02:22 <adreznec> wangqwsh: it sounds like next up after the galera issue you're going to be digging into tempest config?
02:23 <adreznec> thorst_: yeah, I spent some time with that today. I'm having some issues with getting setuptools to build with appropriate versioning
02:23 <thorst_> :-/  alright.  We need to get that into the priority view next week...once CI stabilizes
02:23 <adreznec> I talked with esberglu about how he supposedly got setuptools/pbr to disregard PEP440 before, but he couldn't remember and also had no notes
02:24 <adreznec> I'll continue digging and see what I can get worked out here
02:24 <adreznec> Hopefully the necessary pypowervm version will be up tomorrow
02:24 <adreznec> #action adreznec to get pypowervm update published to pypi to drive towards getting pypowervm in g-r
02:25 <thorst_> yikes...I thought he had a wiki page of notes somewhere on it...
02:25 <adreznec> unfortunate because this is the same issue I hit last time I tried
02:26 <thorst_> maybe we ask him to do it from scratch and he stumbles upon the same result?
02:26 <adreznec> This is where I'd normally put a random, semi-related giphy
02:26 <adreznec> I'll give it a bit more trying tomorrow
02:26 <adreznec> and if I can't get it I'll walk over to his desk
02:26 <thorst_> sounds good
02:26 <adreznec> and we'll figure it out
02:26 <thorst_> agenda for next week...getting the logs published on nova patches...
02:27 <adreznec> So I know esberglu isn't here, but it sounds like mriedem wants us to change the way we're handling our skip list?
02:28 <thorst_> yeah, I sent that to esberglu already and he took that as a TODO
02:28 <thorst_> but...for good measure
02:28 <thorst_> #action esberglu to switch tempest skip list from name to test id
02:28 <adreznec> Yeah, I saw that mentioned earlier
02:28 <adreznec> Are we going to have to start maintaining 2 skip lists now?
02:28 <adreznec> One for dsvm and one for osa?
02:29 <thorst_> I don't see a need ... yet
02:29 <thorst_> might need for integrated nova driver versus out of tree though
02:29 <adreznec> in osa since we're using ovs we'll be supporting a theoretical superset of function
02:29 <adreznec> sec groups, l3, etc
02:30 <thorst_> right...but lets start small and keep consistent
02:30 <thorst_> then add complexity as we go
02:30 <adreznec> integrated vs OOT we'll definitely need two lists, at least at first
02:31 <adreznec> All right, so wangqwsh should be fine using the existing tempest configs then
02:31 <adreznec> in terms of tests to run/skip
02:32 <adreznec> Ok, so I know we're over time here
02:32 <thorst_> its because we're all watching florida
02:32 <adreznec> It's always Florida
02:32 <thorst_> any other aspects we need to cover?
02:32 <adreznec> wangqwsh anything else from your end?
02:33 <adreznec> All right
02:33 <adreznec> That's a wrap then
02:33 <adreznec> Thanks all!
02:33 <openstack> Meeting ended Wed Nov  9 02:33:27 2016 UTC.  Information about MeetBot at . (v 0.1.4)
02:33 <openstack> Minutes (text):
02:39 *** thorst_ has quit IRC
02:40 *** thorst_ has joined #openstack-powervm
02:45 *** chas_ has joined #openstack-powervm
02:48 *** thorst_ has quit IRC
02:50 *** chas_ has quit IRC
03:06 *** seroyer has quit IRC
03:36 *** smatzek has quit IRC
03:46 *** thorst_ has joined #openstack-powervm
03:46 *** chas_ has joined #openstack-powervm
03:48 *** tjakobs has joined #openstack-powervm
03:51 *** chas_ has quit IRC
03:54 *** tjakobs has quit IRC
03:54 *** thorst_ has quit IRC
04:08 *** kylek3h has quit IRC
04:16 *** tjakobs has joined #openstack-powervm
04:45 *** tjakobs has quit IRC
04:52 *** thorst_ has joined #openstack-powervm
04:59 *** thorst_ has quit IRC
05:09 *** kylek3h has joined #openstack-powervm
05:15 *** kylek3h has quit IRC
05:31 *** apearson has quit IRC
05:48 *** chas_ has joined #openstack-powervm
05:49 *** tjakobs has joined #openstack-powervm
05:52 *** chas_ has quit IRC
05:56 *** tjakobs has quit IRC
05:56 *** thorst_ has joined #openstack-powervm
06:04 *** thorst_ has quit IRC
06:26 *** wangqwsh has quit IRC
06:48 *** chas_ has joined #openstack-powervm
06:49 *** esberglu has joined #openstack-powervm
06:53 *** chas_ has quit IRC
06:53 *** esberglu has quit IRC
07:01 *** thorst_ has joined #openstack-powervm
07:09 *** thorst_ has quit IRC
07:11 *** kylek3h has joined #openstack-powervm
07:16 *** kylek3h has quit IRC
07:19 *** chas_ has joined #openstack-powervm
07:48 *** esberglu has joined #openstack-powervm
07:53 *** esberglu has quit IRC
08:08 *** thorst_ has joined #openstack-powervm
08:13 *** thorst_ has quit IRC
08:48 *** openstackgerrit has quit IRC
08:48 *** openstackgerrit has joined #openstack-powervm
09:11 *** thorst_ has joined #openstack-powervm
09:12 *** kylek3h has joined #openstack-powervm
09:17 *** kylek3h has quit IRC
09:19 *** thorst_ has quit IRC
10:16 *** thorst_ has joined #openstack-powervm
10:23 *** thorst_ has quit IRC
10:32 *** esberglu has joined #openstack-powervm
10:36 *** esberglu has quit IRC
11:13 *** kylek3h has joined #openstack-powervm
11:17 *** kylek3h has quit IRC
11:20 *** thorst_ has joined #openstack-powervm
11:27 *** thorst_ has quit IRC
12:08 *** smatzek has joined #openstack-powervm
12:11 *** esberglu has joined #openstack-powervm
12:16 *** esberglu has quit IRC
12:20 *** seroyer has joined #openstack-powervm
12:48 *** thorst_ has joined #openstack-powervm
12:48 *** thorst_ has quit IRC
12:48 *** thorst_ has joined #openstack-powervm
12:59 *** seroyer has quit IRC
12:59 *** svenkat has joined #openstack-powervm
13:09 *** kylek3h has joined #openstack-powervm
13:15 *** edmondsw has joined #openstack-powervm
13:16 *** jwcroppe has quit IRC
13:20 *** tblakes has joined #openstack-powervm
13:33 *** seroyer has joined #openstack-powervm
13:51 *** jwcroppe has joined #openstack-powervm
13:57 *** seroyer has quit IRC
13:58 *** apearson has joined #openstack-powervm
14:05 *** seroyer has joined #openstack-powervm
14:11 *** mdrabe has joined #openstack-powervm
14:32 *** smatzek has quit IRC
14:33 *** esberglu has joined #openstack-powervm
14:44 <esberglu> thorst_: Seeing that read only filesystem issue on staging now as well as some of the production nodes. Any idea what could be causing this?
<openstackgerrit> Drew Thorstensen (thorst) proposed openstack/nova-powervm: Add delay queue for events
14:45 <thorst_> esberglu: check the backing SAN...lets see if we have a true SAN issue
14:45 <thorst_> or if its network.
14:46 <thorst_> do you know how to do that?
14:54 <thorst_> esberglu: PM'd you the SAN connection info.  Go into it, and look for critical errors.  I think bottom right.  See if any are new
14:56 *** efried has joined #openstack-powervm
14:56 <esberglu> Nothing there, last status alert is from a month ago
15:08 <thorst_> then its got to be network blips
15:08 <thorst_> I'd restart the VIOSes.  Clear the systems and reboot the system basically
15:08 <thorst_> let me check for a broadcast storm first.
15:11 <thorst_> doesn't seem like a broadcast storm
15:22 <esberglu> thorst_: Okay I will start rebooting.
15:22 <efried> adreznec, did you really mean to make the CI meeting 7:30am Central?
15:24 <adreznec> efried: that's when we'd been doing it to make it easier for wangqwsh to attend
15:24 <adreznec> If that doesn't work we could shift it around
15:24 <efried> Okay, I'm fine with that answer.  Just wanted to make sure it was intentional.
15:24 <adreznec> Yep. Like I said, if either of these meeting times end up not working out we can definitely reschedule.
15:25 <adreznec> Just trying to deal with timezones
15:31 *** seroyer has quit IRC
15:33 *** tjakobs has joined #openstack-powervm
15:47 *** seroyer has joined #openstack-powervm
16:01 *** mdrabe has quit IRC
16:15 *** mdrabe has joined #openstack-powervm
16:26 *** smatzek has joined #openstack-powervm
16:32 <thorst_> esberglu: neo40 still working?
16:32 <thorst_> I'm going to try that remote debug debacle
16:32 <esberglu> efried: ^^
16:33 <efried> thorst_ Yes.  This is my LPM stack, which finally seems to be relatively stable, so don't eff it up too badly.
16:33 * thorst_ apologizes in advance
16:33 <thorst_> I actually shouldn't be able to screw it up
16:33 <thorst_> unless I don't clean up my files...
16:34 <efried> Note that my SSP currently has the 2G Ubuntu 1604 image already in it.
16:34 <efried> So you may want to use a different one.
16:38 <thorst_> efried: I plan to use a 0's file
16:40 <thorst_> adreznec: I didn't realize I had actually named my branch fifo_pipo
16:40 <thorst_> that's 1 week ago me making a joke to today me
16:44 <thorst_> efried: can we install the package that allows remote connections?
16:49 <efried> thorst_, sure.
16:53 <adreznec> thorst_: lol, that makes more sense
17:12 <thorst_> efried: This is the hidden error:  SpecialFileError: SpecialF...d pipe',)
17:12 <efried> thorst_ - did you truncate that or does it show up that way in the log?
17:13 <thorst_> that's what shows up in pdb
17:15 <thorst_> "`/tmp/tmpNY1Ukv/REST_API_Pipe` is a named pipe"
17:17 <thorst_> probably just how I did this...
17:25 <efried> thorst_ - what's the stack when that happens?  The open on one side or the other?  The read or write?
17:25 <thorst_> its how I did it.  shutil can't copy (dunno why) to a named pipe
17:25 <thorst_> its not the real error
17:26 <thorst_> I see the issue.
17:26 <thorst_> we've got a threading issue.
17:27 <thorst_> glance issues the write command to the file.  It waits for the write command to come back.  The read of the named pipe never gets invoked because its blocked on the thread that is issuing the write.
17:27 <thorst_> the RESTAPIPipe needs to run in a different thread...
17:27 <thorst_> does that make sense?
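[Editor's note: the deadlock thorst_ describes above can be reproduced in a few lines. The following is a hedged sketch in plain Python on a POSIX system, not the actual nova-powervm/RESTAPIPipe code; the pipe name and payload are illustrative. A blocking write to a named pipe only completes once a reader is draining the other end, so the read side must run on its own thread.]

```python
import os
import tempfile
import threading

def drain_pipe(path, sink):
    # Opening the read end is what lets the writer's open()/write() proceed.
    with open(path, 'rb') as f:
        sink.append(f.read())

tmpdir = tempfile.mkdtemp()
pipe_path = os.path.join(tmpdir, 'REST_API_Pipe')  # name borrowed from the log
os.mkfifo(pipe_path)

received = []
reader = threading.Thread(target=drain_pipe, args=(pipe_path, received))
reader.start()

# The "glance write" side: this open() blocks until the reader thread has
# opened the other end. Doing the read and the write on one thread would
# deadlock, which is the bug being discussed.
with open(pipe_path, 'wb') as f:
    f.write(b'image-bytes')

reader.join()
print(received[0])
os.remove(pipe_path)
os.rmdir(tmpdir)
```

[If the reader thread is removed and the read attempted after the write on the same thread, the `open(pipe_path, 'wb')` call never returns.]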
17:31 <efried> thorst_, I thought it was.
17:33 <efried> thorst_, how determined are we to keep this in memory?
17:40 <efried> thorst_, and what's using shutil?
17:45 <efried> thorst_ must be deep in a burrito.
17:53 <thorst_> efried: just walked to cafe
17:53 <thorst_> all from the same thread we invoke the 'write to file' before we invoke the 'read from file' so we are stuck in io block
17:57 <efried> thorst_, where's the code that writes to the pipe?
17:57 <thorst_> customer provided function
17:57 <thorst_> that we invoke
17:57 <efried> where's that invocation?
17:59 *** k0da has joined #openstack-powervm
18:00 <thorst_> line 418
18:00 <thorst_> 'Have the client start writing to the pipe'
18:00 <efried> I see it
18:00 <thorst_> that is a blocking call
18:00 <efried> Okay, so let's shove that guy into its own thread.
18:00 <thorst_> efried: +1
18:01 <efried> thorst_, you or me?
18:01 <thorst_> but do we make it part of the bigger thread pool (in upload_stream_coordinated) or down IN that method
18:02 <thorst_> I can...just want to make sure my approach isn't something you'll hate
18:02 <efried> How would you get that thread pool down in there?  You would have to pass it through two method calls and the RESTAPIPipe constructor.
18:03 <thorst_> see ps1
18:03 <thorst_> easier there
18:05 <thorst_> efried: I'd just stick that thing in the thread wait pool.
18:05 <efried> what 'thing'?
18:06 <thorst_> efried: let me explain with code...I'll get a rev up in a few.
18:35 *** efried has quit IRC
18:44 <svenkat> efried: working on a problem with vNIC create events flowing to SEA agent which provisions vlan … it is due to VIFEventHandler in agent base treating all events same and sending them to both agents. I want to know how do I distinguish between CNA vs VNIC events in ProvisionRequest.for_event method. which bit in the incoming event can I use?
18:48 <thorst_> svenkat: efried hopped offline.  Lets get the bug open in launchpad for now
18:48 <svenkat> sure. i will.
18:48 <thorst_> svenkat: thx.  I think its a solid bug
<svenkat> thorst_: opened
18:53 <openstack> Launchpad bug 1640564 in networking-powervm "agent_base.VIFEventHandler not distinguishing between CNA and VNIC events" [Undecided,New]
<adreznec> thorst_: have you ever seen this with the new nova-powervm/pypowervm image code?
19:24 <adreznec> I just did a redeploy of the latest OSA and I'm hitting that
<adreznec> pypowervm is
19:25 <thorst_> adreznec: that is eerily similar to the chunk errors we hit with our own chunky reader
19:33 *** chas_ has quit IRC
19:42 *** efried has joined #openstack-powervm
19:48 *** kriskend has joined #openstack-powervm
20:04 <seroyer> I seem to remember seeing discussions around None getting returned when we were trying to figure out what was wrong with the chunky method.  We weren't hitting the None error, but I think others did...  Will try and search for it again.
20:06 <seroyer> Ah.  This is what I was remembering:
20:06 <openstack> Launchpad bug 1434040 in Glance Client "Integrity check for images with checksum = "None"" [Undecided,Incomplete]
20:06 <seroyer> Not the same problem.
20:33 *** tblakes has quit IRC
20:34 *** chas_ has joined #openstack-powervm
20:35 <thorst_> well, I may have remote API working
20:36 <thorst_> definition of working varies, but it limps along
20:39 *** chas_ has quit IRC
20:40 <adreznec> thorst_: slow?
20:43 <thorst_> adreznec: actually quite speedy
20:43 <thorst_> I'm just 100% sure that mr. efried will hack it apart into something better
20:44 <thorst_> efried esberglu: I put a new patch up on 4458.  Can we try that out?  It worked locally (to a remote server)
20:44 <efried> You do the hard part of making it function; I'll do the monkey work of reorganizing it so it's readable.
20:44 <thorst_> efried: see, I think its readable
20:44 <efried> I haven't looked yet.
20:44 <efried> But what we had before 4458 even started was approaching unreadable.
20:45 <thorst_> it is certainly a jack of all trades.
20:45 <esberglu> thorst_: Yeah I will try it out. Hopefully my systems don't die before then. Rebooting seemed to work temporarily
20:50 <thorst_> esberglu: we can't recreate the network issue at the moment....so
20:50 <thorst_> here's hoping
20:50 <efried> So thorst_, why did you find you had to use DISK_IMAGE?
20:50 <adreznec> FYI thorst_ that glance issue appears to have somehow been resolved, but I'm not entirely sure how I resolved it
20:50 <adreznec> So... that's uncomforting
20:51 <thorst_> efried: do you mean stream instead of coordinated?
20:51 <thorst_> because stream goes to the REST API
20:51 <thorst_> coordinated requires the REST API to be on the local server
20:51 <thorst_> which is kinda opposite of what we're doing for this CI env
20:51 <efried> figgered it was something like that.
20:51 <efried> I'm asking what about coordinated requires local?
20:52 <efried> fact that you're passing a file path?
20:52 <thorst_> the REST API makes a local file
20:53 <thorst_> they're doing a FIFO pipe themselves
20:53 <thorst_> so if the REST API is local, we can let them do the FIFO pipe
20:54 <thorst_> but if we're remote, then we need to do the FIFO pipe and send it to the HTTP server stream
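[Editor's note: the remote case described above — create the FIFO ourselves and stream its contents to the HTTP server — could take roughly the following generator shape. This is an assumption-laden sketch, not the real RESTAPIPipe patch: `pipe_chunks`, the 64 KiB chunk size, and the byte-counting stand-in for the HTTP send are all illustrative. Many HTTP clients (e.g. requests) accept such a generator as a streaming request body.]

```python
import os
import tempfile
import threading

def pipe_chunks(path, chunk_size=65536):
    # Generator over the pipe contents; exhausts when the writer closes.
    # In the remote flow these chunks would become the HTTP request body.
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk

tmpdir = tempfile.mkdtemp()
fifo = os.path.join(tmpdir, 'upload_pipe')  # hypothetical name
os.mkfifo(fifo)

# Stand-in for the client's blocking write, run on its own thread so the
# chunk reader below can drain the pipe concurrently.
def client_write():
    with open(fifo, 'wb') as f:
        f.write(b'x' * 200000)

writer = threading.Thread(target=client_write)
writer.start()

# Here we just count the bytes instead of POSTing them anywhere.
total = sum(len(c) for c in pipe_chunks(fifo))
writer.join()
print(total)
os.remove(fifo)
os.rmdir(tmpdir)
```

[Because the pipe's kernel buffer is smaller than the payload, the writer blocks until the reader drains it, which is exactly why the two ends must live on different threads.]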
20:56 <thorst_> esberglu: I think the edge switch is in a bad state...I think I can just reboot it
20:57 <thorst_> but your SSPs may be a little cranky after that
20:57 <thorst_> (but ultimately happier?)
20:58 <esberglu> Okay. Want to hold off and see if that patch works first?
20:58 <thorst_> this takes a minute or two?
20:58 <esberglu> Okay go for it
20:59 <esberglu> thorst_: what do you mean by cranky?
21:00 <thorst_> esberglu: ports were flapping, it was claiming there were broadcast storms and none of its peers agreed, things like that
21:01 <thorst_> the silence....of a network switch reboot...
21:01 <thorst_> will it come back...
21:01 <thorst_> and we're back
21:04 <adreznec> [15:01:13] <thorst_> will it come back...
21:04 <adreznec> [15:01:24] <thorst_> and we're back
21:04 <adreznec> Such suspense, 11 seconds
21:04 <apearson> Back, but...
21:05 <thorst_> apearson: still looking - cool yer jets
21:05 <thorst_> the two systems that work versus fail are one hop away
21:06 <thorst_> so comparing paths
21:26 <efried> thorst_, totally on board with the theme of 4458, but have a number of gripes around the flow and use of vars and such.
21:26 <efried> Do you want me to futz with it myself, or comment it up and let you do it?
21:30 *** smatzek has quit IRC
21:35 *** chas_ has joined #openstack-powervm
21:37 *** edmondsw has quit IRC
21:38 <thorst_> efried: I'm cool if you want to do it and I'll poke around the network to see what's what
21:39 <efried> thorst_, roger wilco.
21:40 *** chas_ has quit IRC
21:49 *** thorst_ has quit IRC
21:50 *** apearson has quit IRC
21:57 *** jwcroppe has quit IRC
21:58 *** jwcroppe has joined #openstack-powervm
22:03 *** jwcroppe has quit IRC
22:03 *** svenkat has quit IRC
22:31 *** seroyer has quit IRC
22:31 *** kriskend has quit IRC
22:36 *** tblakes has joined #openstack-powervm
22:36 *** chas_ has joined #openstack-powervm
22:39 *** thorst_ has joined #openstack-powervm
22:41 *** chas_ has quit IRC
22:42 *** mdrabe has quit IRC
22:45 *** mdrabe has joined #openstack-powervm
22:50 *** k0da has quit IRC
23:00 *** dwayne_ has quit IRC
23:07 *** esberglu has quit IRC
23:10 *** tjakobs has quit IRC
23:18 *** mdrabe has quit IRC
23:21 *** esberglu has joined #openstack-powervm
23:23 *** seroyer has joined #openstack-powervm
23:24 *** jwcroppe has joined #openstack-powervm
23:25 *** jwcroppe has quit IRC
23:25 *** esberglu has quit IRC
23:25 *** jwcroppe has joined #openstack-powervm
23:30 *** jwcroppe has quit IRC
23:32 *** thorst_ has quit IRC
23:32 *** thorst_ has joined #openstack-powervm
23:37 *** seroyer has quit IRC
23:40 *** thorst_ has quit IRC
23:42 *** dwayne_ has joined #openstack-powervm
23:56 *** chas_ has joined #openstack-powervm

Generated by 2.14.0 by Marius Gedminas - find it at!