Wednesday, 2016-08-31

cloudnullso nothing super important.00:00
* cloudnull just tooling about00:00
clarkbfungi: gerrit continues to look happy00:02
clarkbeven with all of today's extra change churn00:03
clarkband now time to start working on dinner00:03
*** xarses has joined #openstack-infra00:03
jeblairwell, except for *right now* of course, because of mysqldump :)00:03
*** aeng has quit IRC00:03
*** camunoz_ has joined #openstack-infra00:04
*** baoli has joined #openstack-infra00:04
*** sflanigan has quit IRC00:05
*** baoli_ has joined #openstack-infra00:05
*** zhurong has quit IRC00:06
*** camunoz has quit IRC00:06
openstackgerritPaul Belanger proposed openstack-infra/system-config: Update osic-cloud1 credential format  https://review.openstack.org/35670200:06
oomichihi, now we don't see testr_results.html.gz on the gate logs/00:07
openstackgerritCraige McWhirter proposed openstack-infra/puppet-phabricator: Patches Required to Deliver Pholio  https://review.openstack.org/34248100:07
clarkboomichi: I filed a bug against devstack about it. it's not writing the subunit file00:07
fungioomichi: if you happen to know someone on the qa team, maybe they could fix it ;)00:08
clarkbI didn't manage to root cause the issue in devstack though00:08
openstackgerritPaul Belanger proposed openstack-infra/system-config: Add credentials for osic-cloud8  https://review.openstack.org/35670300:08
oomichiclarkb: cool, thanks. could you share the LP link?00:08
oomichifungi: heh, yeah we should help out :)00:08
*** baoli has quit IRC00:09
clarkboomichi: https://bugs.launchpad.net/devstack/+bug/161747600:10
openstackLaunchpad bug 1617476 in devstack "devstack.subunit is not generated" [Undecided,Confirmed]00:10
oomichiclarkb: thanks :)00:10
pabelangerclarkb: fungi: osic-cloud8 looks to be ready now. I've updated the patches ^00:11
openstackgerritPaul Belanger proposed openstack-infra/system-config: Add credentials for osic-cloud8  https://review.openstack.org/35670300:11
*** dingyichen has joined #openstack-infra00:13
*** yamahata has quit IRC00:14
*** chlong has quit IRC00:16
*** sdake has joined #openstack-infra00:16
*** sputnik13 has quit IRC00:18
*** pvinci has joined #openstack-infra00:19
pvinciHi.00:20
*** gouthamr has joined #openstack-infra00:20
pvinciI have a change that is stuck in "Needs Verified" for the last 6 hours. https://review.openstack.org/35835900:21
ianwclarkb / oomichi : hmm, having it in that exit trap is rather annoying, since we've closed off all the logging before we run it.  no other trap calls either ... i wonder if there's other weird ways you can affect signal masks00:21
*** aeng has joined #openstack-infra00:21
*** sflanigan has joined #openstack-infra00:21
pvinciCan someone help me troubleshoot why?00:21
*** thorst has joined #openstack-infra00:21
*** sdake has quit IRC00:21
fungipvinci: it depends on an abandoned change for stable/mitaka00:22
clarkbpvinci: your depends on is abandoned and cannot merge00:22
pvinciIt needed the Depends-On initially to get through check.00:23
oomichiianw: oh, can't we see any log about that now? that seems difficult to dig into00:24
*** Swami__ has joined #openstack-infra00:24
*** gildub has joined #openstack-infra00:24
*** ddieterly has joined #openstack-infra00:25
*** Julien-zte has joined #openstack-infra00:25
clarkbianw: ya I think the exit trap does it so it happens on fails too but it is annoying00:25
*** sdake has joined #openstack-infra00:25
pvinciDo I just need to create a new patch-set removing the Depends-On:?00:25
clarkbmaybe do it in a controlled manner on success but also have trap do it for fails?00:25
pabelangerfungi: clarkb: confirmed osic-cloud8 works with SSL, puppetmaster credentials updated.00:25
pabelangerWe can start work in the morning to bring it online00:26
clarkbpvinci: you either need to get the depends on to merge or remove it00:26
clarkbpabelanger: yay00:26
*** dfflanders has joined #openstack-infra00:26
pabelangerI mean, we could start uploading nodepool images this evening00:26
pvinciclarkb: ok.  Thank you.00:26
clarkbrequires service restarts00:26
fungipvinci: yeah, taking the depends-on line out of the commit message now should work, as your master branch dependency already merged in neutron00:26
fungipvinci: but your change will need to get approved again00:27
fungisince the review and workflow votes will be cleared on a commit message update00:27
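For reference, the Depends-On being discussed is just a footer in the Gerrit commit message, next to the Change-Id; removing it means deleting one line of roughly this shape (both identifiers below are purely illustrative, not the real ones from this change):

    Add my feature

    Description of the change.

    Change-Id: I1111111111111111111111111111111111111111
    Depends-On: I2222222222222222222222222222222222222222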
pvincifungi: OK.  Thanks!00:27
*** AnarchyAo has quit IRC00:27
clarkbpabelanger: I am pretty checked out now with dinner and kids though00:27
ianwoomichi / clarkb : ahh, i guess at least we "set +o xtrace" for that little status message, then never turn it back on, so that might hide what's actually going on00:28
ianwwe could start there00:28
*** Swami has quit IRC00:28
fungimordred: i'm going to give launch-node.py a try with your working ansible virtualenv once 363312 makes it onto puppetmaster.o.o00:29
fungiwill let you know how it goes00:29
openstackgerritMorgan Fainberg proposed openstack-infra/shade: Prevent Negative Caching  https://review.openstack.org/36332100:30
*** itisha has quit IRC00:30
pabelangerclarkb: enjoy, we can wait until tomorrow00:31
clarkbpabelanger: do we have a mirror there yet?00:32
openstackgerritPaul Belanger proposed openstack-infra/project-config: Upload nodepool images to osic-cloud8  https://review.openstack.org/35736400:32
*** sdague has joined #openstack-infra00:32
pabelangerclarkb: no, that will be launched in the morning.00:32
*** tonytan4ever has quit IRC00:33
*** gildub has quit IRC00:33
*** gildub has joined #openstack-infra00:33
*** rfolco has joined #openstack-infra00:33
*** mtanino has quit IRC00:34
fungiargh. i should not have a system-config change bounce out of the gate twice on nondeterministic failures. first gate-openstackci-beaker-centos-7 ended up with no rspec installed, now gate-infra-puppet-apply-fedora-23 spontaneously failed to resolve git.openstack.org in dns00:34
*** thcipriani is now known as thcipriani|afk00:34
fungiyeah, the first was actually a connection timeout in osic: http://logs.openstack.org/12/363312/1/gate/gate-openstackci-beaker-centos-7/5b0764a/console.html#_2016-08-31_00_12_45_96007200:36
fungithe eventual failure was a cascade error stemming from a failure to connect to rubygems.org00:36
clarkbdid that go through nat?00:37
*** thorst_ has joined #openstack-infra00:37
clarkbwe might be seeing the limits of the nat there?00:37
fungigreat question... i think yes nat. there is no aaaa for that name00:37
*** spzala has joined #openstack-infra00:39
ianwpabelanger: urgh ... you know how i said fedora24 was ready .... 3 hour timeout on this one :( http://logs.openstack.org/12/363212/1/check/gate-tempest-dsvm-platform-fedora24-nv/fc07025/00:39
fungithe other failure was in bluebox though... http://logs.openstack.org/12/363312/1/gate/gate-infra-puppet-apply-fedora-23/c183f88/console.html#_2016-08-31_00_31_52_72477500:39
clarkbfungi: bluebox is also nat but 1:100:39
clarkbI like blaming nat00:39
*** thorst has quit IRC00:39
fungime too, but i think any further natshaming on my part will require beer00:41
fungiit's just too late in the evening for it not to00:41
fungianyway, here's hoping third time gating is a charm00:41
*** pvaneck has quit IRC00:41
oomichiclarkb: ianw: We can still see testr_results.html on stable branch tests; it seems to be happening on master only00:41
ianwoomichi: always?  or sometimes?00:43
*** spzala has quit IRC00:43
oomichiianw: always in my checking (10+ tests), but I'd like to check more00:43
fungioomichi: that suggests it's a recent regression in devstack's master branch which hasn't been backported to any stable branches?00:44
ianwoomichi: that sounds pretty deterministic00:44
pabelangerianw: ouch00:44
*** yamamoto_ has joined #openstack-infra00:44
oomichifungi: yeah, I guess so. and I checked the history of devstack and devstack-gate, but I can't spot it yet00:44
ianwpabelanger: i hate to say it, but i think ansible might share some of the blame -> http://paste.openstack.org/show/564955/00:45
ianwoomichi: i think to start we can put a "trap -p" at the end to make sure the trap is still registered, and turn on tracing.  that will give us a clue00:46
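A minimal sketch of the kind of check ianw is proposing to add near the end of devstack's exit path (where exactly it goes in stack.sh is not pinned down here):

    # re-enable the command tracing that an earlier "set +o xtrace" turned off,
    # so the exit path actually shows up in devstacklog.txt
    set -o xtrace

    # print the currently registered traps; if the EXIT trap that writes
    # devstack.subunit has been lost, it will be missing from this output
    trap -p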
ianwpabelanger: a 10734s timeout seems like an odd number00:47
*** yamahata has joined #openstack-infra00:47
pabelangerianw: 179 mins00:49
pabelangersomething is setting it to 3 hours00:49
*** M-docaedo_vector has quit IRC00:50
oomichiianw: that seems useful, could you help with that?00:50
ianwmaybe it's already been going for a minute and that's where it comes from?00:50
ianwoomichi: https://review.openstack.org/#/c/363326/00:51
*** caowei has joined #openstack-infra00:52
oomichiianw: awesome, thanks :) I'd like to check the result soon00:52
*** asettle has joined #openstack-infra00:54
ianwpabelanger: so the weird thing is, something oddly asynchronous is going on00:55
ianwpabelanger: console log ends at 21:27  http://logs.openstack.org/12/363212/1/check/gate-tempest-dsvm-platform-fedora24-nv/fc07025/console.html00:55
*** Goneri has quit IRC00:56
ianwpabelanger: but the ansible log ends at 00:29 http://logs.openstack.org/12/363212/1/check/gate-tempest-dsvm-platform-fedora24-nv/fc07025/_zuul_ansible/ansible_log.txt00:56
*** asettle has quit IRC00:59
*** gouthamr_ has joined #openstack-infra01:01
fungipabelanger: oomichi: ianw: clarkb: could it be an ansible-on-xenial behavior (stable branches are still using trusty)?01:01
ianwpabelanger: yeah, we got a problem growing disk on !raxspace hosts? http://paste.openstack.org/show/564957/01:02
ianwno ... same thing on rax hosts, i guess they just have enough disk to limp through01:02
*** Julien-z_ has joined #openstack-infra01:02
*** Julien-zte has quit IRC01:03
ianwarrgh!  why is nothing ever simple! :)01:03
fungiwow, my system-config change just hit another nondeterministic failure ("Unable to look up git.openstack.org" on gate-infra-puppet-apply-fedora-23 in bluebox again)01:03
*** salv-orlando has joined #openstack-infra01:03
ianwfungi: are there ipv6 addresses in there?  that was what i saw the other day01:03
fungiianw: looks like the only ipv6 we have in bluebox is linklocal01:04
ianwfungi: i mean did that message bail out and only put in an ipv6 address as the uncontactable address?  if that makes sense, probably not01:05
*** gouthamr has quit IRC01:05
ianwhttp://logs.openstack.org/12/363312/1/check/gate-infra-puppet-apply-fedora-23/57c387c/console.html <- nup, different error to what i saw yesterday01:05
*** Apoorva has quit IRC01:06
*** gongysh has joined #openstack-infra01:06
*** shashank_hegde has quit IRC01:06
oomichiianw: clarkb: we lost testr_result between 2016/08/22 - 2016/08/2301:07
oomichi08/22: Exist: http://logs.openstack.org/15/352715/1/check/gate-tempest-dsvm-neutron-dvr/2beb15a/logs/01:07
oomichi08/23: Lost: http://logs.openstack.org/37/356237/4/check/gate-tempest-dsvm-full-ubuntu-xenial/e476e95/logs/01:07
oomichiwill check git history01:08
*** Apoorva has joined #openstack-infra01:09
*** gouthamr_ is now known as gouthamr01:09
oomichiianw: clarkb: https://review.openstack.org/#/c/355234/ seems closely related...01:11
*** salv-orlando has quit IRC01:11
*** esp has quit IRC01:12
ianwmmm, could be ...01:12
*** salv-orlando has joined #openstack-infra01:13
sdakehey folks - noticed one of my cohort's commits is not listed in zuul01:13
sdakeare only certain projects listed in zuul?01:13
fungisdake: which change?01:13
*** zhurong has joined #openstack-infra01:13
sdakefungi https://review.openstack.org/#/c/363319/01:13
sdakenetworking-vpp repo - whatever that is :)01:13
fungii saw some networking-vpp changes showing up in the zuul status page earlier01:14
sdakeok then01:14
sdakemaybe its just gate overload01:14
fungii remember because there was a change chain several dozen long01:14
sdakelol01:15
fungiyeah, looks like that change is in a stack of 70+ changes01:15
openstackgerritMerged openstack-infra/system-config: Use regular expressions for wiki hostgroups  https://review.openstack.org/36331201:15
sdakefungi wow01:16
fungithat's crazy long for a dependent patch series01:16
*** psilvad has joined #openstack-infra01:17
*** ijw has joined #openstack-infra01:17
sdakefungi apparently it's initial git history01:17
fungithe zuul layout.yaml claims it should be running the python-jobs template, so presumably those changes would be tested normally in the check pipeline01:17
ijwfungi: sorry I'm a bad person.01:17
fungiijw: not at all, just a surprisingly long series of changes01:17
*** salv-orlando has quit IRC01:17
clarkbinitial history can be imported when we create the project fwiw01:18
fungiit's possible it confused something--i'm trying to track down what that might have been01:18
clarkbtoo late now though01:18
sdakeclarkb that is precisely what i told ian :)01:18
ijwclarkb: Yeah, but unfortunately I didn't create the project...01:18
oomichihttps://review.openstack.org/#/c/363336/ is a reverting patch01:18
*** M-docaedo_vector has joined #openstack-infra01:18
sdakefungi so the 70 commits - should punish the gate for several days I think :(01:19
fungizuul's debug log is taking a while to open01:19
ijwAnyway, if this patch chain turns out to be an eldritch horror, just make recommendations and I'll do what I can01:19
fungiijw: i recommend a gate seal ;)01:19
sdakeijw considered squashing commit?01:19
* fungi makes terrible hpl joke01:19
sdakefungi wtb gate seal for kolla :)01:19
ijwsdake: I was asked to preserve the history (and believe me it's already squashed from what it was)01:20
fungiit's too bad the great old ones weren't viable mascot choices01:20
ijwAnd the staypuft man is probably copyrighted01:20
sdakeby red hat no doubt ;-)01:20
ijwBut conveniently sticky if you do need to seal the gate01:20
sdakewhy oh why would they name a product staypuft01:21
sdakeijw I think the problem you will have is your first commit will fail the gate01:21
sdakealong with the other 7001:21
ijwYeah, understood01:22
fungisdake: ijw: looking back at https://review.openstack.org/363319 it seems it's not enqueued into the gate because "Change <Change 0x7facc939f690 363289,2> is needed but can not be merged"01:22
sdakefungi thanks i didn't catch it was 70 patches long :(01:22
ijwOK, that's probably a good thing01:22
*** david-lyle_ has joined #openstack-infra01:23
*** esikachev has quit IRC01:23
fungiyeah, so basically the changes have to merge in their expressed dependency order. start with the ones closest to the branch tip and see why they're not merging, then work your way up fixing whatever the ci complains about01:23
sdakefungi i think he will need a nonvoting gate for that to work01:23
sdakefungi but yup sounds viable ;)01:24
clarkbor just make it work...01:24
clarkbits trivial to bootstrap that01:24
clarkbfor python jobs at least01:24
*** baoli_ has quit IRC01:24
*** M-docaedo_vector has quit IRC01:24
ijwclarkb: ?01:24
sdakeclarkb agree - i'll get em going01:24
sdakeijw he means to make your gate job nonvoting01:24
ijwAh01:24
clarkbno01:24
clarkbI mean make them pass :)01:24
sdakeoh wrong sorry :)01:24
fungithe idea is to shim in enough boilerplate ahead of those changes to get all jobs passing, and then make sure they all pass at every stage in the series01:24
clarkbya that01:25
ijwfungi: you ask much01:25
ijwLet me see what I can do about it01:25
sdakefungi - gog:)01:25
fungihowever, doing that for 71 changes may be too daunting01:25
ijwI can squash it more01:25
ijwTill it squeaks01:25
sdakeijw  i'd just turn the gate nonvoting01:25
sdakeget the history in01:25
sdaketurn the gate voting01:25
sdakevictory01:25
*** akshai has quit IRC01:26
fungiyou might replace the python-jobs template in zuul's layout.yaml with the noop-jobs template if you're just going to set all those jobs nonvoting01:26
pabelangerianw: I can dig into it tomorrow morning01:27
fungithen put python-jobs back once you have your "import" completed and properly test subsequent proposed changes01:27
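Roughly what that zuul/layout.yaml swap would look like (an illustrative entry only; the real project stanza may carry other templates):

    projects:
      - name: openstack/networking-vpp
        template:
          # - name: python-jobs   # restore once the history import has landed
          - name: noop-jobs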
ijwThat's fine with me (not least because I don't want to be eating gate time)01:27
ianwpabelanger: yeah ... so no idea why growroot isn't working :(  i'll try and figure that out01:27
openstackgerritPaul Belanger proposed openstack-infra/system-config: Add credentials for osic-cloud8  https://review.openstack.org/35670301:27
*** fguillot has quit IRC01:27
fungiijw: sdake: but clarkb is right that making your changes pass the already defined jobs is less work on us, as we don't need to review project-config changes to turn your jobs off and on again01:28
pabelangerianw: do you need to revert?01:28
ijwGiven nothing has gone in, not yet01:28
*** rfolco has quit IRC01:28
*** Swami has joined #openstack-infra01:28
sdakefungi - understood - not sure if the cats working on this can get networking-vpp going and preserve the history01:28
ianwpabelanger: i think we might have to ... it's a crap shoot if it has enough space or not depending on which host it runs on01:28
clarkband not only that it helps you see that your code works01:28
pabelangerianw: also, do you have a minute to review 362908, 36290001:28
ianwpabelanger: it would be ok if the job died in a timely fashion ... but sitting there for hours is bad news01:29
ijwAnd fungi: let me see what I can do.  Again, this is history.  The historical versions didn't pass the tests, so I can condense them down to a handful of versions that do, or I can just erase all history and add the authors in (but I would prefer not to if I could avoid it)01:29
fungiijw: sdake: basically, it's probably possible to just configure the tox.ini to return true for the pep8 and py27 envs, but docs is going to be harder since that relies on `python setup.py build_sphinx` not bombing out with a nonzero exit code01:29
sdakefungi thats a good idea01:30
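For what it's worth, "return true for the pep8 and py27 envs" would amount to temporary stubs along these lines in tox.ini (a sketch, not a recommendation):

    [testenv:pep8]
    # temporarily a no-op so the standard check jobs pass during the import
    commands = python -c "pass"

    [testenv:py27]
    commands = python -c "pass"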
ijwArgh, two ianw's in the same channel, that's just confusing01:30
*** fguillot has joined #openstack-infra01:30
*** Apoorva has quit IRC01:30
ijwfungi: that one is comparatively easy - I had to put in a template doc, and I can just move that patch up to the beginning of the stack rather than the end trivially enough01:30
fungiijw: yeah, i also used to get the two of you confused occasionally01:30
* ijw is easily confused01:31
*** yamamoto_ has quit IRC01:31
sdakei feel used all the time ;)01:31
ijwLet me rewrite history01:31
* ijw feels like the next Dr Who01:31
*** yanyanhu has joined #openstack-infra01:32
*** Swami__ has quit IRC01:32
fungihave sonic screwdriver, will time-travel01:32
*** sdague has quit IRC01:32
fungiyay! my wiki ansible groups fix merged finally01:33
*** tonytan4ever has joined #openstack-infra01:33
openstackgerritIan Wienand proposed openstack-infra/project-config: Revert "Switch in Fedora 24 devstack job"  https://review.openstack.org/36334001:33
ianwpabelanger: ^ :(01:34
ijwOK, if I rewrite history I'm guessing I want shot of the change-ids in the commit messages, just to confirm01:34
sdake,ozzxjkzxxxhhhhhnope01:34
sdakenope01:34
sdakeijw no - change ids will just change ordering in gerrit01:34
fungiif rewriting history means just using rebase -i to reorder some commits, then you should be fine01:35
fungiwhat you _don't_ want is altering the change-ids in the commit messages, since that will cause the old changes associated with them to become orphaned cruft which will need to be separately abandoned01:35
fungiand with 70 some changes, that would be pretty annoying to clean up01:36
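The workflow fungi is describing is essentially an interactive rebase that reorders commits while leaving their Change-Id footers alone; a hedged sketch (remote and branch names are assumptions):

    # reorder (or squash) the stack on top of master, keeping Change-Ids intact
    git fetch origin
    git rebase -i origin/master

    # push the reworked stack back to Gerrit; matching Change-Ids update the
    # existing reviews instead of opening new ones
    git review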
openstackgerritMerged openstack-infra/irc-meetings: kuryr: update meeting settings  https://review.openstack.org/36285201:36
sdakefungi the only time that makes sense is during a squash01:36
fungisdake: yep, in a squash case you'll need to manually abandon the old changes associated with the no longer needed change-ids01:36
openstackgerritMerged openstack-infra/irc-meetings: Add a UTC clock and integrate local meeting time  https://review.openstack.org/35300801:37
ijwLet me see what I can do with this.  I should be able to script up a nasty little walk-through with tox locally to see if this is working01:37
openstackgerritMerged openstack-infra/irc-meetings: Check chairs are in the 'correct' format as part of pep8  https://review.openstack.org/34751101:38
*** thorst_ has quit IRC01:38
*** yamamoto_ has joined #openstack-infra01:38
*** thorst has joined #openstack-infra01:38
*** tonytan4ever has quit IRC01:38
openstackgerritMatt Riedemann proposed openstack-infra/project-config: Run placement-api job in devstack experimental  https://review.openstack.org/36334201:39
*** chlong has joined #openstack-infra01:41
*** changzhi has joined #openstack-infra01:42
*** kzaitsev_mb has quit IRC01:44
*** yuanying has quit IRC01:45
*** yuanying has joined #openstack-infra01:46
*** ddieterly has quit IRC01:46
*** Goneri has joined #openstack-infra01:46
*** thorst has quit IRC01:46
openstackgerritMerged openstack-infra/project-config: Run windmill-jobs-trusty(-nv) on master  https://review.openstack.org/36290001:48
openstackgerritMerged openstack-infra/project-config: Switch windmill-jobs-centos7 for voting  https://review.openstack.org/36290801:49
*** M-docaedo_vector has joined #openstack-infra01:52
openstackgerritMerged openstack-infra/irc-meetings: Remove the check target  https://review.openstack.org/34751201:52
*** netsin has joined #openstack-infra01:52
fungimordred: that (newer ansible) seems to have worked. now i'm just down to mundane vcsrepo troubleshooting01:53
*** zshuo has joined #openstack-infra01:55
*** spzala has joined #openstack-infra01:55
fungiianw: I'm +2 on that entire chain ending in 359500. worked great once i got past the group expansion related issues with the older ansible we're pinned to01:55
*** zhurong_ has joined #openstack-infra01:59
*** kuntelin has joined #openstack-infra02:00
*** pvinci has quit IRC02:01
*** pvinci has joined #openstack-infra02:02
*** zhurong has quit IRC02:02
amrithclarkb, thanks for confirming earlier today that you were able to install tempest in a venv. I'm re-running now ...02:03
amrithand it appears to be working better. I have enough to debug the failure (and it is local to my machines).02:04
amrithclarkb ^^02:04
*** kuntelin_ has joined #openstack-infra02:06
*** kuntelin has quit IRC02:06
*** ddieterly has joined #openstack-infra02:10
*** aeng has quit IRC02:10
*** dchen has joined #openstack-infra02:11
openstackgerritMichael Krotscheck proposed openstack-infra/project-config: NPM DSVM jobs are now voting.  https://review.openstack.org/36320002:11
*** sflanigan has quit IRC02:14
*** pblaho has quit IRC02:14
*** Goneri has quit IRC02:15
*** sdake has quit IRC02:16
*** shashank_hegde has joined #openstack-infra02:16
*** sdake has joined #openstack-infra02:17
*** kuntelin_ has quit IRC02:17
*** vinaypotluri has quit IRC02:22
*** tqtran has quit IRC02:22
*** Sukhdev has joined #openstack-infra02:23
*** aeng has joined #openstack-infra02:23
*** Swami__ has joined #openstack-infra02:25
openstackgerritIsaku Yamahata proposed openstack-infra/project-config: networking-odl: cover more combinations of version  https://review.openstack.org/34704502:26
openstackgerritIsaku Yamahata proposed openstack-infra/project-config: networking-odl: add job for OpenDaylight carbon  https://review.openstack.org/36335602:26
*** spzala has quit IRC02:26
*** Swami has quit IRC02:28
*** Swami_ has quit IRC02:29
*** Swami has joined #openstack-infra02:29
*** tphummel has joined #openstack-infra02:29
*** fguillot has quit IRC02:30
*** mriedem has quit IRC02:31
*** david-lyle_ has quit IRC02:31
*** gyee has quit IRC02:31
*** psilvad has quit IRC02:32
*** ddieterly has quit IRC02:33
*** baoli has joined #openstack-infra02:34
*** tonytan4ever has joined #openstack-infra02:34
*** yamahata has quit IRC02:35
*** dimtruck is now known as zz_dimtruck02:37
*** Jeffrey4l_ has joined #openstack-infra02:38
*** tonytan4ever has quit IRC02:38
*** tonytan4ever has joined #openstack-infra02:38
*** baoli has quit IRC02:40
*** gildub has quit IRC02:42
*** thorst has joined #openstack-infra02:45
*** reed_ has joined #openstack-infra02:46
*** reed_ has quit IRC02:47
*** yamamoto_ has quit IRC02:48
*** thorst has quit IRC02:52
*** pblaho has joined #openstack-infra02:54
*** pblaho has quit IRC02:59
*** gnuoy has quit IRC03:00
*** gnuoy` has joined #openstack-infra03:00
*** mriedem has joined #openstack-infra03:00
*** jamespage has quit IRC03:01
*** mriedem has quit IRC03:01
*** jamespag` has joined #openstack-infra03:01
*** mriedem has joined #openstack-infra03:01
*** mriedem has quit IRC03:01
*** yamamoto_ has joined #openstack-infra03:02
*** Genek has joined #openstack-infra03:16
*** salv-orlando has joined #openstack-infra03:17
*** tqtran has joined #openstack-infra03:21
*** yamamoto_ has quit IRC03:22
*** yamamoto_ has joined #openstack-infra03:22
*** yamamoto_ has quit IRC03:22
*** changzhi has quit IRC03:24
*** salv-orlando has quit IRC03:24
*** pblaho has joined #openstack-infra03:24
*** yamamoto_ has joined #openstack-infra03:25
*** aeng has quit IRC03:26
*** Ravikiran_K has joined #openstack-infra03:27
*** winggundamth has quit IRC03:27
*** winggundamth has joined #openstack-infra03:29
*** yamamoto_ has quit IRC03:29
*** flepied has joined #openstack-infra03:30
*** AnarchyAo has joined #openstack-infra03:32
*** esp has joined #openstack-infra03:34
*** shashank_hegde has quit IRC03:36
*** woodster_ has quit IRC03:39
*** vikrant has joined #openstack-infra03:40
*** adriant has quit IRC03:41
*** aeng has joined #openstack-infra03:43
openstackgerritCraige McWhirter proposed openstack-infra/puppet-phabricator: Configure HTTPD and HTTPS certificates  https://review.openstack.org/35037003:44
*** changzhi has joined #openstack-infra03:45
*** shashank_hegde has joined #openstack-infra03:45
ianwoomichi: so the trap is set ... that's something ... http://logs.openstack.org/26/363326/1/check/gate-tempest-dsvm-full-ubuntu-xenial/aec60b5/logs/devstacklog.txt.gz#_2016-08-31_01_12_51_02003:49
*** esp has quit IRC03:49
*** thorst has joined #openstack-infra03:50
*** gouthamr has quit IRC03:51
*** esp has joined #openstack-infra03:51
*** cody-somerville has quit IRC03:51
*** cody-somerville has joined #openstack-infra03:52
*** thorst has quit IRC03:57
openstackgerritMerged openstack-infra/project-config: Revert "Switch in Fedora 24 devstack job"  https://review.openstack.org/36334003:57
*** ijw has quit IRC04:04
*** dingyichen has quit IRC04:05
*** ijw has joined #openstack-infra04:08
*** cody-somerville has quit IRC04:08
*** cody-somerville has joined #openstack-infra04:08
*** cody-somerville has quit IRC04:08
*** cody-somerville has joined #openstack-infra04:08
ianwpabelanger: ^ see https://bugzilla.redhat.com/show_bug.cgi?id=1371761 .  i'll see what response i get in the bug then decide what to do04:11
openstackbugzilla.redhat.com bug 1371761 in util-linux "sfdisk return code breaks growpart" [Unspecified,New] - Assigned to kzak04:11
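For anyone trying to reproduce the growroot failure on a node, a minimal manual check looks something like this (device and partition number are examples):

    # growpart (from cloud-utils) should expand the root partition and exit 0;
    # a non-zero status here is the failure described in the bug
    sudo growpart /dev/vda 1
    echo "growpart exited with $?"

    # grow the filesystem into the new space (only meaningful if growpart worked)
    sudo resize2fs /dev/vda1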
openstackgerritsebastian marcet proposed openstack-infra/openstackid-resources: Upgrade Laravel Version and ORM Framework  https://review.openstack.org/32230704:14
*** crinkle_ has joined #openstack-infra04:16
*** cschwede has quit IRC04:17
*** crinkle has quit IRC04:17
*** cschwede has joined #openstack-infra04:17
*** kaisers has quit IRC04:17
*** kaisers has joined #openstack-infra04:19
*** Genek has quit IRC04:21
*** jdg has joined #openstack-infra04:24
jdg#join #openstack-cinder04:25
ijwOK, I have rewritten history.  In this version of history my changes should pass the tests.  Also, gerrit is not purple, so I consider that a bonus.04:26
*** spzala has joined #openstack-infra04:26
*** salv-orlando has joined #openstack-infra04:27
*** ijw has quit IRC04:27
*** pgadiya has joined #openstack-infra04:28
*** ijw has joined #openstack-infra04:28
*** yamamoto_ has joined #openstack-infra04:29
*** spzala has quit IRC04:31
*** jdg has quit IRC04:32
*** salv-orlando has quit IRC04:32
openstackgerritNate Johnston proposed openstack-infra/project-config: Make neutron-fwaas functional job not experimental  https://review.openstack.org/35932004:32
*** ijw has quit IRC04:33
*** yuanying has quit IRC04:34
*** yuanying has joined #openstack-infra04:34
*** dtantsur|afk has quit IRC04:36
*** coolsvap_ has joined #openstack-infra04:36
*** links has joined #openstack-infra04:36
openstackgerritNate Johnston proposed openstack-infra/project-config: Make neutron-fwaas functional job not experimental  https://review.openstack.org/35932004:37
*** senk has joined #openstack-infra04:38
*** chlong has quit IRC04:38
*** chlong has joined #openstack-infra04:39
*** vinaypotluri has joined #openstack-infra04:41
*** sarob has joined #openstack-infra04:41
ianwoomichi: ok, so exit trap is getting called -> http://logs.openstack.org/26/363326/2/check/gate-tempest-dsvm-full-ubuntu-xenial/ecaf107/logs/devstacklog.txt.gz04:43
*** esp has quit IRC04:45
*** chlong has quit IRC04:45
*** sarob has quit IRC04:46
*** tphummel has quit IRC04:47
*** esp has joined #openstack-infra04:54
*** thorst has joined #openstack-infra04:54
*** dingyichen has joined #openstack-infra04:55
*** ijw has joined #openstack-infra04:55
*** Julien-z_ has quit IRC04:56
*** Jeffrey4l_ has quit IRC04:56
*** asettle has joined #openstack-infra04:57
*** chlong has joined #openstack-infra04:58
*** ijw has quit IRC05:00
*** claudiub has joined #openstack-infra05:01
*** thorst has quit IRC05:01
*** asettle has quit IRC05:02
*** yamahata has joined #openstack-infra05:04
*** psachin has joined #openstack-infra05:05
*** yamahata has quit IRC05:06
*** sdake has quit IRC05:10
*** sdake_ has joined #openstack-infra05:10
*** salv-orlando has joined #openstack-infra05:13
*** ilyashakhat has joined #openstack-infra05:16
*** changzhi has quit IRC05:18
*** senk has quit IRC05:20
*** jaosorior has joined #openstack-infra05:21
*** yonglihe has joined #openstack-infra05:22
*** hichihara has joined #openstack-infra05:24
*** hichihara has quit IRC05:24
*** roxanaghe has joined #openstack-infra05:25
*** sdake_ has quit IRC05:26
*** javeriak has joined #openstack-infra05:26
*** ilyashakhat has quit IRC05:30
*** gildub has joined #openstack-infra05:30
*** roxanaghe has quit IRC05:32
*** sdake has joined #openstack-infra05:33
*** Na3iL has joined #openstack-infra05:34
craigeIs there a problem with the checks or am I missing something? http://logs.openstack.org/70/350370/3/check/gate-puppet-phabricator-puppet-lint/ec4c9d1/console.html05:38
AJaegermorning ianw, could you review https://review.openstack.org/#/c/362839/2 , please?05:38
AJaegercraige: http://logs.openstack.org/70/350370/3/check/gate-puppet-phabricator-puppet-lint/ec4c9d1/console.html#_2016-08-31_03_48_50_273894 - timed out05:39
*** Julien-zte has joined #openstack-infra05:39
craigeI thought that was it but was unsure. thanks for confirming AJaeger05:40
*** caowei has quit IRC05:41
openstackgerritMerged openstack-infra/project-config: Introduce functional/fullstack Neutron Xenial jobs  https://review.openstack.org/35984305:42
*** nwkarsten has quit IRC05:42
AJaegercraige: I expect that's it ;) So, recheck...05:42
*** nwkarsten has joined #openstack-infra05:42
*** caowei has joined #openstack-infra05:43
openstackgerritMerged openstack-infra/project-config: Remove unused gate-tempest-dsvm-full-ceph-plugin-src filter  https://review.openstack.org/36320505:44
*** nwkarsten has quit IRC05:47
*** mtanino has joined #openstack-infra05:48
*** ifarkas_afk has quit IRC05:50
openstackgerritDerek Higgins proposed openstack-infra/project-config: Change tripleo ha2 job types back too ha  https://review.openstack.org/36340905:51
openstackgerritDerek Higgins proposed openstack-infra/tripleo-ci: Remove the ha2 JOBTYPE  https://review.openstack.org/36341105:52
*** senk has joined #openstack-infra05:54
*** _nadya_ has joined #openstack-infra05:54
*** armax has quit IRC05:56
*** sandanar has joined #openstack-infra05:57
*** ilyashakhat has joined #openstack-infra05:57
*** aeng has quit IRC05:57
*** camunoz_ has quit IRC05:57
*** dingyichen has quit IRC05:58
*** markvoelker has joined #openstack-infra05:58
*** _nadya_ has quit IRC05:59
*** thorst has joined #openstack-infra06:00
openstackgerritEli Qiao proposed openstack-infra/project-config: Higgins: Add post script for tempest api testing  https://review.openstack.org/36341606:00
vrovachev1Hello, dear colleagues. I created a patch for the fuel-qa project fixing a misprint in the gates. Please take a look: https://review.openstack.org/#/c/36287106:01
*** martinkopec has joined #openstack-infra06:01
*** oanson has joined #openstack-infra06:02
*** Swami has quit IRC06:04
*** Swami__ has quit IRC06:04
odyssey4mehmm, it looks like there are a bunch of nodes in a waiting state which aren't transitioning to any other state06:06
*** thorst has quit IRC06:07
*** salv-orl_ has joined #openstack-infra06:07
*** salv-orlando has quit IRC06:10
*** aeng has joined #openstack-infra06:10
*** camunoz_ has joined #openstack-infra06:11
*** dingyichen has joined #openstack-infra06:11
*** Genek has joined #openstack-infra06:11
*** ianychoi has quit IRC06:12
*** ianychoi has joined #openstack-infra06:12
*** esikachev has joined #openstack-infra06:12
openstackgerritMerged openstack-infra/project-config: Adds Magnum API Reference jobs to gate and check builds  https://review.openstack.org/36283906:14
*** pcaruana has joined #openstack-infra06:15
*** claudiub has quit IRC06:17
*** javeriak has quit IRC06:18
*** ilyashakhat has quit IRC06:19
*** andreas_s has joined #openstack-infra06:20
*** rcernin has joined #openstack-infra06:21
*** sdake has quit IRC06:22
odyssey4mejhesketh AJaeger any idea what's up with the hanging nodes in nodepool?06:24
*** salv-orl_ has quit IRC06:24
jheskethodyssey4me: something does look odd there, yes06:27
odyssey4mejhesketh it looks like there are a small number running, but there are tons of merge check jobs waiting06:27
odyssey4meand the check queue is slowly climbing - even though there should be plenty of nodes to consume06:27
jheskethjobs aren't launching, so it's probably something to do with zuul or zuul-launcher06:28
*** ijw has joined #openstack-infra06:28
*** amotoki has joined #openstack-infra06:29
*** Genek has quit IRC06:29
*** Illumitardi has joined #openstack-infra06:30
*** abregman has quit IRC06:31
*** mtanino has quit IRC06:32
*** watanabe_isao has joined #openstack-infra06:32
*** markvoelker has quit IRC06:34
*** dingyichen has quit IRC06:35
HeOSHello, infra-team! I'd like to ask you to review the following request: https://review.openstack.org/#/c/362002/. I'd really appreciate your help.06:36
*** gildub has quit IRC06:37
*** nwkarsten has joined #openstack-infra06:44
ianwjhesketh: hmm, something's going on but it's not immediately obvious to me06:44
jheskethianw: yep, agreed... I'm poking around at the logs atm06:45
*** mikelk has joined #openstack-infra06:45
ianwyeah, nodepool seems to be assigning and deleting nodes06:45
jheskethianw: looks like something may have just gotten unstuck...06:47
ianwthere was something earlier with like 70 dependent changes, an initial import put through as separate changes06:48
ianwi wonder if that's being digested06:48
*** nwkarsten has quit IRC06:49
*** javeriak has joined #openstack-infra06:49
*** ilyashakhat has joined #openstack-infra06:49
jheskethI was wrong about it being unstuck..06:50
ianw2016-08-30 07:07:33,795 ERROR gear.Client.unknown: Exception in poll loop:06:53
ianwthis seems to be the most recent error in zuul06:53
*** ijw has quit IRC06:53
*** camunoz_ has quit IRC06:53
ianw2016-08-31 06:06:10,989 ERROR zuul.Scheduler: Exception in run handler:06:54
ianwzuul@zuul:/var/log/zuul$ date06:54
ianwWed Aug 31 06:54:08 UTC 201606:54
*** ilyashakhat has quit IRC06:54
ianwjhesketh: ^ maybe the scheduler loop has stopped?  50 minutes ago might be about right06:54
*** amotoki has quit IRC06:56
ianwhttp://paste.openstack.org/show/565070/06:56
jheskethianw: maybe, but it looks like it should continue okay : http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/scheduler.py#n99306:58
*** javeriak has quit IRC06:59
jheskethianw: gearman is showing no jobs requested or running... it shows a high number of workers07:00
jheskeththat supports your scheduler theory07:00
ianwjhesketh: i guess the logs don't really, right after that is "2016-08-31 06:06:10,991 INFO zuul.Scheduler:" so zuul.Scheduler is still logging07:01
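The check being done here amounts to grepping the scheduler's debug log and seeing whether anything is still logged after the last exception; roughly (the log filename is an assumption):

    grep 'ERROR zuul.Scheduler: Exception in run handler' /var/log/zuul/debug.log | tail -n 5
    tail -n 20 /var/log/zuul/debug.log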
*** javeriak has joined #openstack-infra07:02
dfflandersrcarrillocruz, will you be in Barcelona?  Hoping to request your skills to do team teaching with us on shade for AppDev again?07:03
*** thorst has joined #openstack-infra07:05
*** esikachev has quit IRC07:05
* AJaeger just saw one message where a change finished testing, so something is going on...07:05
AJaegermorning ianw and jhesketh , thanks for looking into this!07:05
jheskethianw: something may have unstuck now however07:06
jhesketh267 nodes building07:06
AJaegeryeah!07:07
ianwhmm, yes ...07:07
*** dtardivel has joined #openstack-infra07:07
*** tesseract- has joined #openstack-infra07:08
*** bethwhite_ has joined #openstack-infra07:08
ianwhmm, glitch in the matrix?07:09
*** florianf has joined #openstack-infra07:11
jheskethyeah, I don't like not knowing why it unstuck...07:11
jheskeththe merge-check pipeline is still huge...07:11
*** yaume has joined #openstack-infra07:11
*** thorst has quit IRC07:12
*** shardy has joined #openstack-infra07:12
*** kaisers_ has joined #openstack-infra07:13
AJaeger1100 entries...07:13
*** ihrachys has joined #openstack-infra07:14
*** sshnaidm|afk is now known as sshnaidm07:15
jheskethand growing :-s07:16
AJaegerjhesketh, ianw: Since yolanda is on vacation, I would appreciate it if you could help a bit more than usual with reviewing changes. Could you go over the open project-config changes some time this week, please?07:16
AJaegerjhesketh: so, back to normal ;)07:16
jheskethheh07:16
*** amotoki has joined #openstack-infra07:16
jheskethAJaeger: I'm always happy to help with reviews, just point me somewhere :-)07:17
jheskethI have done a poor job the last month or so on picking up fresh reviews sadly :-(07:17
ianwAJaeger: sure07:17
AJaegerA dashboard with jobs that have seen one +2 already:07:17
AJaegerhttps://review.openstack.org/#/dashboard/?foreach=%2528project%253Aopenstack%252Dinfra%252Fproject%252Dconfig%2529+status%253Aopen+NOT+label%253AWorkflow%253C%253D%252D1+label%253AVerified%253E%253D1%252Cjenkins+is%253Amergeable&title=AJ+Review+Inbox&Needs+final+%252B2=NOT+label%253ACode%252DReview%253C%253D%252D1%252Cproject%252Dconfig%252Dcore+label%253ACode%252DReview%253E%253D207:17
AJaegerNote that some might wait on dependent changes, some might wait on the Neutron Infra liaison.07:18
*** nmagnezi has joined #openstack-infra07:18
*** ifarkas has joined #openstack-infra07:18
AJaegerAnd there're also some changes that are unreviewed...07:19
*** jerryz has joined #openstack-infra07:19
*** dizquierdo has joined #openstack-infra07:20
* AJaeger is offline for a bit now...07:21
*** hichihara has joined #openstack-infra07:22
*** esikachev has joined #openstack-infra07:22
*** drifterza has joined #openstack-infra07:23
*** hichihara has quit IRC07:24
*** yanyanhu has quit IRC07:25
*** Hal1 has quit IRC07:26
*** yanyanhu has joined #openstack-infra07:26
*** spzala has joined #openstack-infra07:26
*** jerryz has quit IRC07:27
*** jlanoux has joined #openstack-infra07:29
*** markvoelker has joined #openstack-infra07:30
*** spzala has quit IRC07:31
*** markvoelker has quit IRC07:35
*** kaisers_ has quit IRC07:35
*** jamespag` is now known as jamespage07:35
*** abregman has joined #openstack-infra07:36
*** jpich has joined #openstack-infra07:37
*** abregman has quit IRC07:37
*** abregman has joined #openstack-infra07:38
*** shardy has quit IRC07:39
*** salv-orlando has joined #openstack-infra07:39
*** shardy has joined #openstack-infra07:40
*** gnuoy` is now known as gnuoy07:40
*** Hal1 has joined #openstack-infra07:41
*** vincentll has joined #openstack-infra07:42
*** spzala has joined #openstack-infra07:42
*** _nadya_ has joined #openstack-infra07:43
*** hashar has joined #openstack-infra07:44
*** vincentll has quit IRC07:44
*** zhurong_ has quit IRC07:45
openstackgerritVadim Rovachev proposed openstack-infra/project-config: Fix branches for fuel-qa gates  https://review.openstack.org/36287107:46
*** jlanoux has quit IRC07:47
*** spzala has quit IRC07:47
*** zhurong has joined #openstack-infra07:48
openstackgerritVadim Rovachev proposed openstack-infra/project-config: Fix branches for fuel-qa gates  https://review.openstack.org/36287107:50
*** Sukhdev has quit IRC07:52
*** cdent has joined #openstack-infra07:56
*** priteau has joined #openstack-infra07:57
*** shashank_hegde has quit IRC07:57
*** spzala has joined #openstack-infra07:58
*** zzzeek has quit IRC08:00
*** mugsie|alt has quit IRC08:00
*** hichihara has joined #openstack-infra08:00
*** mugsie|alt has joined #openstack-infra08:01
*** markvoelker has joined #openstack-infra08:01
*** matthewbodkin has joined #openstack-infra08:01
*** zzzeek has joined #openstack-infra08:01
AJaegerianw, jhesketh : Something is still odd with zuul - having entries sit in merge-check for 3+ hours isn't right. Also, the post and periodic queues are not being served.08:02
*** r-mibu has quit IRC08:02
*** spzala has quit IRC08:03
*** matrohon has joined #openstack-infra08:05
*** chlong has quit IRC08:06
*** markvoelker has quit IRC08:07
hasharthe job queue is quite fuel, maybe  it is a spam of  merge:merge jobs?08:09
*** thorst has joined #openstack-infra08:09
hasharfuel ..08:10
hasharfull08:10
*** claudiub has joined #openstack-infra08:11
hasharAJaeger: looks like nodepool is spinning up instances again as of 7:00utc with a lot of "in use" nodes.  So I guess the stack is going to catch up fine08:11
hasharAJaeger: and 'post' and 'periodic'  have a low precedence, so if something is starved those queues are probably not processed at all08:13
*** rossella_s has joined #openstack-infra08:16
*** thorst has quit IRC08:17
acabothi guys, sorry to interrupt, I don't really understand why jenkins fails on https://review.openstack.org/#/c/362984/. Any guess? Thx08:17
acabotlogs are http://pastebin.com/0T3xTLcB08:18
acabotand I dont see where I need to define those jobs...08:18
acabotthx08:18
jheskethAJaeger: Yeah, I'm going to let it clear out some of the other queues first... as hashar points out the post/periodic ones are low priority08:19
jheskeththe merge though isn't looking healthy08:19
jheskethacabot:08:19
hasharmaybe some zuul-merger have troubles catching up08:19
*** lucas-dinner is now known as lucasagomes08:20
*** javeriak has quit IRC08:20
jheskethacabot: you need to define the jobs in jenkins/jobs/ if you're familiar with that?08:20
AJaegerjhesketh: I would expect the proposal slave - translation jobs - to run. That slave is only used in periodic and post, so it should be served. But I don't see it in use.08:20
AJaegerjhesketh: nc proposal.slave.openstack.org 19885 - last action hours ago ;(08:20
AJaegeris it offline?08:20
jheskethhmm08:21
acabotjhesketh : I don't think I need to add a job as I'm using a standard job 'publish-to-pypi' in layout.yaml08:22
*** kong has quit IRC08:22
jheskethacabot: that only exists if you have that template instantiated in jjb08:22
*** javeriak has joined #openstack-infra08:22
*** yanyanhu has quit IRC08:22
acabotjhesketh : jjb ?08:23
AJaegeracabot: it's not a standard job. It's a job specific to your repository08:23
jheskethacabot: jenkins job builder08:23
AJaegeracabot: Read http://jaegerandi.blogspot.de/2016/02/creating-new-test-jobs-in-openstack-ci.html08:23
*** kong has joined #openstack-infra08:23
*** yanyanhu has joined #openstack-infra08:23
acabotAJaeger : thx let me look at this :-)08:24
*** e0ne has joined #openstack-infra08:24
AJaegeracabot: also documented in the Infra Manual docs.openstack.org/infra/manual08:24
jheskethAJaeger: zlstatic and proposal.slave look okay... (still poking)08:24
*** martinkopec has quit IRC08:25
*** martinkopec has joined #openstack-infra08:25
openstackgerritAntoine Cabot proposed openstack-infra/project-config: Add publish-to-pypi for watcher-dashboard  https://review.openstack.org/36298408:25
*** gongysh has quit IRC08:26
*** dchen has quit IRC08:26
*** oomichi has quit IRC08:27
*** zubchick has quit IRC08:27
*** javeriak has quit IRC08:28
*** javeriak has joined #openstack-infra08:29
*** oomichi has joined #openstack-infra08:29
*** markvoelker has joined #openstack-infra08:29
*** gildub has joined #openstack-infra08:29
openstackgerritVadim Rovachev proposed openstack-infra/project-config: Change ACLs for fuel-qa  https://review.openstack.org/35970408:30
*** auggy has quit IRC08:30
*** zubchick has joined #openstack-infra08:30
openstackgerritVadim Rovachev proposed openstack-infra/project-config: Change ACLs for fuel-qa project  https://review.openstack.org/35970408:30
*** derekh has joined #openstack-infra08:31
*** rossella_s has quit IRC08:31
*** auggy has joined #openstack-infra08:31
AJaegeracabot: you need to *Define* a job in jjb and then you can schedule the job in zuul.08:32
AJaegerYour change to jjb just sets a variable in an existing job; it does not define a new one08:32
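In other words, before zuul's layout.yaml can reference jobs for the repo, the repo needs an entry in project-config's jenkins/jobs/projects.yaml that instantiates the job templates; a rough sketch (the template/group names below are illustrative, following the pattern other repos use, not confirmed for this project):

    - project:
        name: watcher-dashboard
        tarball-site: tarballs.openstack.org
        jobs:
          - python-jobs
          - pypi-jobs    # hypothetical group name for the pypi upload jobs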
*** r-mibu has joined #openstack-infra08:33
*** Julien-zte has quit IRC08:33
*** markvoelker has quit IRC08:34
openstackgerritArie Bregman proposed openstack-infra/zuul: Add 'reset_branch' option to Merger  https://review.openstack.org/36204908:36
AJaegerjhesketh: is zlstatic processing merge-check as well?08:36
*** Julien-zte has joined #openstack-infra08:36
*** binbincong has joined #openstack-infra08:37
openstackgerritAntoine Cabot proposed openstack-infra/project-config: Add publish-to-pypi for watcher-dashboard  https://review.openstack.org/36298408:37
*** ihrachys has quit IRC08:37
*** samueldmq has quit IRC08:38
*** JerryOpenix has quit IRC08:39
*** samueldmq has joined #openstack-infra08:39
jheskethAJaeger: it shouldn't... merge-check should just go to the zuul-mergers08:39
*** mhickey has joined #openstack-infra08:39
*** javeriak has quit IRC08:40
AJaegerah, ok. Just wild guessing to see whether there was a direct connection08:41
ianwoomichi: http://logs.openstack.org/26/363326/3/check/gate-tempest-dsvm-full-ubuntu-xenial/3ccdbee/logs/devstacklog.txt.gz#_2016-08-31_07_17_14_33908:43
*** sarob has joined #openstack-infra08:43
ianwoomichi: in short, it seems to be running, and it seems it would output to the right location.  i think that maybe leaves something changing on the copying side.  if you don't get a chance, i'll look into it tomorrow my time08:44
*** JerryOpenix has joined #openstack-infra08:44
*** yolanda has joined #openstack-infra08:45
*** sarob has quit IRC08:48
*** watanabe_isao has quit IRC08:51
acabotAJaeger : thx for your help and sorry for not looking again at the doc ;-)08:54
*** pilgrimstack has quit IRC08:56
*** asettle has joined #openstack-infra08:58
openstackgerritEli Qiao proposed openstack-infra/project-config: Higgins: Add post script for tempest api testing  https://review.openstack.org/36341608:58
*** AnarchyAo has quit IRC08:59
*** hichihara has quit IRC08:59
*** markvoelker has joined #openstack-infra08:59
*** pilgrimstack has joined #openstack-infra08:59
*** andreykurilin_ has joined #openstack-infra09:00
*** markvoelker has quit IRC09:04
jheskethAJaeger: merge-check is going down... I think we just need to give it time :-)09:05
*** nijaba has quit IRC09:05
hasharis that a function to attempt to do a merge:merge via zuul-merger ?09:06
jheskethhashar: is that a question to me, I'm not sure I follow sorry09:07
*** nijaba has joined #openstack-infra09:07
*** nijaba has joined #openstack-infra09:07
*** nwkarsten has joined #openstack-infra09:07
*** electrofelix has joined #openstack-infra09:07
*** nwkarste_ has joined #openstack-infra09:09
*** nwkarst__ has joined #openstack-infra09:10
AJaegerjhesketh: great09:10
AJaegerjhesketh: and proposal slave is handling translation jobs...09:10
AJaegerThat took a long time to recover ;( Sorry for being impatient09:11
jheskethyeah I think it was just the priority settings09:11
jheskethall good.. zuul does a lot so I've learned to be patient09:11
*** nwkarsten has quit IRC09:11
AJaegerjhesketh: Ah, requirements updates first...09:12
*** nwkarste_ has quit IRC09:13
*** dtantsur has joined #openstack-infra09:13
*** cdent has left #openstack-infra09:14
*** vgridnev has quit IRC09:14
*** ihrachys has joined #openstack-infra09:14
*** thorst has joined #openstack-infra09:14
*** nwkarst__ has quit IRC09:14
*** abregman has quit IRC09:15
*** jlibosva has joined #openstack-infra09:16
jlibosvaAJaeger: hi09:16
AJaegerhi jlibosva09:16
jlibosvaAJaeger: thanks for help with the xenial jobs. I think we/I have a problem09:16
jlibosvaAJaeger: it seems the functional-ubuntu-trusty is now not part of check queue09:17
AJaegerjlibosva: that sounds right09:17
*** yanyanhu has quit IRC09:18
AJaegerjlibosva: you replaced gate-neutron-dsvm-functional-nv by gate-neutron-dsvm-functional-ubuntu-trusty-nv in the check queue09:18
AJaegerOr which job are you talking about?09:19
*** chem has joined #openstack-infra09:20
*** jlanoux has joined #openstack-infra09:20
jlibosvaAJaeger: I wanted to rename gate-neutron-dsvm-functional to gate-neutron-dsvm-functional-ubuntu-trusty09:21
*** vgridnev has joined #openstack-infra09:21
AJaegerand that happened, didn't it?09:22
*** thorst has quit IRC09:22
AJaegerjlibosva: you have me confused, please explain what went wrong and show some proof of it09:22
jlibosvaAJaeger: It seems it did not. The test disappeared09:22
jlibosvaAJaeger: ok :)09:22
openstackgerritMartin André proposed openstack-infra/tripleo-ci: Add quotes around systemctl command in test  https://review.openstack.org/36351809:22
jlibosvaAJaeger: gimme a minute I'll find examples09:22
jlibosvaAJaeger: previously, we ran functional tests on Neutron - it ran on trusty: https://review.openstack.org/#/c/351287/ - name of the job is gate-neutron-dsvm-functional09:23
*** pzhurba has joined #openstack-infra09:24
*** andreykurilin_ has quit IRC09:25
jlibosvaAJaeger: What I intended was to keep the functional tests running for Neutron on Trusty, until we have confidence to switch to Xenial. So when I modified the template, I renamed this job to gate-neutron-dsvm-functional-ubuntu-trusty.09:25
jlibosvaAJaeger: to keep it running as it was before - but that's not what happened - https://review.openstack.org/#/c/333804/09:25
pzhurbaHello09:25
jlibosvaAJaeger: I'd expect to have gate-neutron-dsvm-functional-ubuntu-trusty there09:25
pzhurbaReview please https://review.openstack.org/#/c/36295009:26
*** lock_ has joined #openstack-infra09:26
*** andreykurilin has joined #openstack-infra09:26
AJaegerjlibosva: checking...09:26
*** andreykurilin has left #openstack-infra09:27
AJaegerjlibosva: http://logs.openstack.org/43/359843/8/gate/gate-project-config-layout/e1563ca/console.html#_2016-08-31_05_41_42_84796609:27
*** abregman has joined #openstack-infra09:28
AJaegerthe change you did for  gate-neutron-dsvm-fullstack-ubuntu-trusty with the branch needs to be done for this job as well09:28
*** markvoelker has joined #openstack-infra09:29
jlibosvaAJaeger: it's shouldn't have -nv suffix09:29
jlibosvas/it's/it/09:29
*** ianychoi has quit IRC09:29
AJaegerjlibosva: http://logs.openstack.org/43/359843/8/gate/gate-project-config-layout/e1563ca/console.html#_2016-08-31_05_41_43_09804309:29
AJaegerLook elsewhere in the file - both are wrong ;)09:30
*** yolanda has quit IRC09:30
*** andreykurilin has joined #openstack-infra09:30
AJaegerjlibosva: do you know what you need to do?09:30
jlibosvaAJaeger: I need to define a regex for our job so it runs on all branches09:31
jlibosvaour job = functional-ubuntu-trusty09:31
*** andreykurilin has left #openstack-infra09:31
jlibosvaAJaeger: thanks for your help! And sorry for screwing up, I'm quite n00b in project-config :)09:31
AJaegerjlibosva: you're welcome - we need more help here ;)09:32
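What needs to change is a per-job branch override in the "jobs" section of zuul/layout.yaml, loosening the global ubuntu-trusty branch regex (which AJaeger points to later) for this one job; a hedged sketch:

    jobs:
      - name: gate-neutron-dsvm-functional-ubuntu-trusty
        branch: ^.*$    # run on every branch, not only the ones the global
                        # ubuntu-trusty regex allows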
*** andreykurilin has joined #openstack-infra09:33
*** andreykurilin has left #openstack-infra09:33
*** markvoelker has quit IRC09:33
*** andreykurilin has joined #openstack-infra09:34
rcarrillocruzdfflanders: it is a bit soon to know if i'll get funding for the summit09:34
*** andreykurilin__ has quit IRC09:36
BobBallmordred: https://review.openstack.org/gitweb?p=openstack%2Fnova.git;a=commitdiff;h=1bb5a0d1017dd634444932dc87dc8d6c4460934b - I guess it may not have made it into Rackspace's cloud yet, but XenAPI should indeed support disk labels09:36
BobBallmordred: Or perhaps I mis-understood the conversation... hmmmz09:37
BobBall:)09:37
openstackgerritJakub Libosvar proposed openstack-infra/project-config: Run functional-ubuntu-trusty jobs on all branches  https://review.openstack.org/36353109:37
jlibosvaAJaeger: ^^ I hope this is it :)09:37
openstackgerritMerged openstack-infra/project-config: Add publish-to-pypi for watcher-dashboard  https://review.openstack.org/36298409:37
AJaegerjlibosva: let'S wait for build results and then double check the gate-project-config-layout lines09:38
jlibosvaack09:38
openstackgerritAlexander Evseev proposed openstack-infra/puppet-os_client_config: Fill module by manifest  https://review.openstack.org/36353309:39
openstackgerritMerged openstack-infra/project-config: Fix branches for fuel-qa gates  https://review.openstack.org/36287109:40
*** chem has quit IRC09:41
*** chem has joined #openstack-infra09:42
openstackgerritJakub Libosvar proposed openstack-infra/project-config: Run functional-ubuntu-trusty jobs on all branches  https://review.openstack.org/36353109:43
openstackgerritMerged openstack-infra/storyboard-webclient: Re-add note that markdown formatting is supported  https://review.openstack.org/36316209:43
*** salv-orlando has quit IRC09:45
openstackgerritMerged openstack-infra/project-config: Add tripleo-centos-7-ovb-ha-ipv6 experimental job  https://review.openstack.org/36296609:47
openstackgerritMerged openstack-infra/project-config: Check for jobs without attributes in Zuul layout.yaml  https://review.openstack.org/36321409:48
*** pgadiya has quit IRC09:49
openstackgerritFatih Degirmenci proposed openstack-infra/jenkins-job-builder: Add support for Parameterized Scheduler Plugin  https://review.openstack.org/35316509:49
openstackgerritMartin André proposed openstack-infra/tripleo-ci: Fix scp command with IPv6 addresses  https://review.openstack.org/36354509:50
*** vinaypotluri has quit IRC09:52
openstackgerritMartin André proposed openstack-infra/tripleo-ci: Fix scp command with IPv6 addresses  https://review.openstack.org/36354509:52
openstackgerritAlexander Evseev proposed openstack-infra/puppet-os_client_config: Add manifest  https://review.openstack.org/36353309:56
*** markvoelker has joined #openstack-infra09:57
dlahnmorning09:59
*** yamamoto_ has quit IRC10:01
*** zhurong has quit IRC10:01
*** markvoelker has quit IRC10:02
*** salv-orlando has joined #openstack-infra10:02
*** eranrom has quit IRC10:02
*** ifarkas_ has joined #openstack-infra10:05
*** pgadiya has joined #openstack-infra10:07
openstackgerritMerged openstack-infra/project-config: networking-midonet: Fix gate_hook check  https://review.openstack.org/36031210:08
*** javeriak has joined #openstack-infra10:09
openstackgerritMerged openstack-infra/project-config: Add k8s-docker-suite-app-murano project to openstack  https://review.openstack.org/35774510:09
*** yaume_ has joined #openstack-infra10:09
*** yaume has quit IRC10:12
jlibosvaAJaeger: do we have any other magician who could help us merging it in emea region? :)10:13
AJaegerjlibosva: jhesketh is reviewing right now (thanks!)10:16
jlibosvathank you!10:16
AJaegerjlibosva: I mean reviewing in general...10:17
*** javeriak has quit IRC10:19
AJaegerjhesketh: https://review.openstack.org/#/c/363531 - and I would approve that without waiting for armax to fix a regression10:19
jheskethAJaeger, jlibosva: looking already ;-)10:19
*** Julien-zte has quit IRC10:19
*** thorst has joined #openstack-infra10:20
AJaegerthanks!10:20
jheskethAJaeger, jlibosva: I'm confused though.. if the job/branch were omitted wouldn't it already run on every branch?10:20
jlibosvajhesketh: IIUC there is some magic that makes ubuntu-trusty jobs run only on some branches10:21
AJaegerjhesketh: we have the global regexes for ubuntu-trusty and ubuntu-xenial10:21
jlibosvaby default10:21
AJaegerjhesketh: line 117110:21
jheskethah that's right, I see10:22
jheskeththanks10:22
* jhesketh missed the ubuntu part of the name10:22
*** _degorenko|afk is now known as degorenko10:22
jhesketh+w10:22
jlibosvajhesketh: AJaeger thanks! I owe you a beer or something :)10:23
jheskethanytime :-)10:23
AJaegerjlibosva: you're welcome10:23
*** markvoelker has joined #openstack-infra10:26
*** thorst has quit IRC10:26
openstackgerritMerged openstack-infra/project-config: add in missing gnocchi 2.2 job references  https://review.openstack.org/36126810:29
*** HeOS has quit IRC10:30
*** markvoelker has quit IRC10:30
*** HeOS has joined #openstack-infra10:30
openstackgerritMerged openstack-infra/project-config: Run functional-ubuntu-trusty jobs on all branches  https://review.openstack.org/36353110:31
*** bethwhite_ has quit IRC10:31
wznoinskhi infra, I get the below when running it manually or when building an image with the openstack-repos element:10:33
wznoinsk git clone git://git.openstack.org/openstack/k8s-docker-suite-app-murano.git10:33
wznoinskCloning into 'k8s-docker-suite-app-murano'...10:33
wznoinskfatal: remote error: access denied or repository not exported: /openstack/k8s-docker-suite-app-murano.git10:33
AJaegerwznoinsk: when was your change merged? It takes an hour or two for new repos to be set up properly. So, please try again later, and if it's still not working after two hours, tell us again.10:35
wznoinskAJaeger: yeah, I see it's a fresh project-config change, not mine, caught it only while rebuilding my image using dib10:37
AJaegerwznoinsk: So, either use an old config file that does not reference it or wait until it's cloned (and there are a few more repos that were just approved)10:39
*** ansiwen has quit IRC10:42
*** sarob has joined #openstack-infra10:44
AJaegerjlibosva: note that we run job changes via cron every 15 mins - it might take an hour until everything is updated. So, your change is not live yet.10:45
wznoinskAJaeger: not really doable, the element sources the project list from https://git.openstack.org/cgit/openstack-infra/project-config/plain/gerrit/projects.yaml, so I'll have to hack it by hand10:45
*** mestery has quit IRC10:46
AJaegerjlibosva: http://status.openstack.org/zuul/ has at the bottom "Last reconfigured: " - check when your change merged and whether the reconfigure is after that - and then your change is live. BUT any change currently being tested will get the new job after reconfiguring...10:46
openstackgerritYuval Brik proposed openstack-infra/project-config: Rename Smaug to Karbor  https://review.openstack.org/35330410:46
*** sdague has joined #openstack-infra10:46
*** sarob has quit IRC10:49
AJaegerwznoinsk: have a lunch break ;)10:49
*** sdake has joined #openstack-infra10:49
electrofelixzaro: could you check out my response on https://review.openstack.org/#/c/312885/2/git_review/cmd.py10:55
*** markvoelker has joined #openstack-infra10:56
*** markvoelker has quit IRC11:00
*** dizquierdo has quit IRC11:01
*** rhallisey has joined #openstack-infra11:02
*** esikachev has quit IRC11:03
*** salv-orlando has quit IRC11:04
*** vstoiko has quit IRC11:05
*** amotoki has quit IRC11:09
*** bethwhite_ has joined #openstack-infra11:09
*** bethwhite__ has joined #openstack-infra11:10
*** bethwhite_ has quit IRC11:10
*** bethwhite__ has quit IRC11:10
*** bethwhite_ has joined #openstack-infra11:10
*** Na3iL has quit IRC11:10
*** amotoki has joined #openstack-infra11:14
*** javeriak has joined #openstack-infra11:15
*** Ahharu has joined #openstack-infra11:16
AhharuHello11:16
AhharuAre there issues with openstack git?11:16
AJaegerAhharu: not a known one - what is your problem?11:16
AhharuSSL problems when trying to download the packages with r10k11:17
*** wznoinsk has quit IRC11:18
*** sarob has joined #openstack-infra11:19
*** ramishra has quit IRC11:21
AJaegerAhharu: sorry, can't help11:21
AJaegerAhharu: hope somebody else will be around to help soon - how can we reproduce your problem? Did it work before?11:22
*** ramishra has joined #openstack-infra11:23
*** shardy is now known as shardy_lunch11:24
*** sarob has quit IRC11:24
*** markvoelker has joined #openstack-infra11:25
Ahharuyes it was working before, the thing is that SSL connection to git.openstack.org sometimes gets stuck11:26
Ahharuplain http works fine11:26
*** jkilpatr has joined #openstack-infra11:26
*** salv-orlando has joined #openstack-infra11:26
sdagueAJaeger: yeh, my devstack is hung on a git fetch right now11:26
sdagueany infra-root up?11:27
AJaegersdague: not that I'm aware of11:27
jheskethsdague: I'm around11:28
sdaguesomething is up with the git servers11:28
*** rtheis has joined #openstack-infra11:28
openstackgerritBrad P. Crochet proposed openstack-infra/tripleo-ci: Only ask for the overcloud-full image  https://review.openstack.org/36359211:29
sdagueI had a dead hang locally when trying to run devstack11:29
sdagueand someone else above just reported similar issue11:29
*** markvoelker has quit IRC11:29
jheskethsdague: do you have any more to go on? eg can you reproduce with -vvv?11:31
sdaguenope, I restarted this devstack run. It was buried inside enough layers that all I had was the git fetch hang11:32
AJaegerjhesketh: I thought it was too late for you...11:32
*** wznoinsk has joined #openstack-infra11:33
jheskethAJaeger: it's evening, but always happy to help if I can11:33
openstackgerritLiam Young proposed openstack-infra/project-config: Add service-control charm interface  https://review.openstack.org/36359511:33
AJaegerthanks, jhesketh !11:33
*** esikachev has joined #openstack-infra11:34
*** nwkarsten has joined #openstack-infra11:34
jheskethsdague: which repo was it fetching?11:36
sdaguerequirements11:36
sdaguemaybe it was a blip, and it's back now, I seem to be getting further now11:36
AJaegerlooks also like puppet hasn't run in the last 90 minutes - looking at http://puppetboard.openstack.org/11:37
*** kzaitsev_mb has joined #openstack-infra11:37
*** YorikSar has quit IRC11:37
*** baoli has joined #openstack-infra11:37
jheskethsdague: hmm okay... let me know if you see it again.. fwiw there is nothing obvious that I can see in the server stats etc11:38
*** xyang1 has quit IRC11:39
jlibosvaAJaeger: ok, thanks for info11:39
*** nwkarsten has quit IRC11:39
*** ldnunes has joined #openstack-infra11:40
*** dfflanders has quit IRC11:41
openstackgerritLiam Young proposed openstack-infra/project-config: Add service-control charm interface  https://review.openstack.org/36359511:42
*** lucasagomes is now known as lucas-hungry11:43
rcarrillocruzmordred: when you get around, gimme a ping, i'm seeing some oddness in glance/nova interactions in the infracloud11:43
*** thorst has joined #openstack-infra11:43
*** YorikSar has joined #openstack-infra11:43
*** baoli_ has joined #openstack-infra11:46
*** rfolco has joined #openstack-infra11:47
*** coolsvap_ is now known as coolsvap11:47
*** baoli has quit IRC11:49
*** nwkarsten has joined #openstack-infra11:49
*** sshnaidm is now known as sshnaidm|afk11:50
fungisdague: i'm around now too for a bit11:51
*** asettle has quit IRC11:53
*** psilvad has joined #openstack-infra11:54
AJaegerjhesketh, fungi : It looks like none of the merges from over an hour ago are active - is puppet running that long? Looking at puppetboard, I see the last results are from 1h47m ago. Is that ok?11:54
AJaegermorning, fungi !11:54
*** rhallisey has quit IRC11:54
*** nwkarsten has quit IRC11:54
rcarrillocruzmordred: the issue is http://paste.openstack.org/show/565138/, i get failures creating instances (not sure when this started, it was working last week) because nova asks glance for the image id on the v1 endpoint, even though the image is published on v2. I've tried putting IMAGE_API_VERSION: '2' in the OSCC clouds.yaml but it doesn't make any difference11:55
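A minimal sketch of the os-client-config setting being described, assuming a clouds.yaml entry for the vanilla infra-cloud region (the cloud name and auth_url here are placeholders):

  clouds:
    infracloud-vanilla:
      auth:
        auth_url: https://controller00.vanilla.ic.openstack.org:5000
      # pin the image API version for this cloud entry
      image_api_version: '2'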
fungisdague: according to http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/nodepool.yaml#n8 our image updates start at 10:34 utc, but i don't see any significant traffic spikes on the git server from that (at least not in excess of what we see from our ci later in the day) http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=862&rra_id=all11:56
*** markvoelker has joined #openstack-infra11:56
fungiAJaeger: i have to step away for a few minutes, but i can check our ansible log when i get back and see if it's spending lots of excess time somewhere in the loop11:57
*** ddieterly has joined #openstack-infra11:57
fungishould be around again in ~30 minutes11:57
AJaegerfungi: sure - perhaps it'll be back to fine by then ;)11:57
*** asettle has joined #openstack-infra11:58
*** asettle has quit IRC11:58
*** larainema has quit IRC11:58
*** asettle has joined #openstack-infra11:59
*** yamamoto has joined #openstack-infra11:59
*** spzala has joined #openstack-infra11:59
*** psilvad has quit IRC11:59
*** rhallisey has joined #openstack-infra11:59
*** yamamoto_ has joined #openstack-infra11:59
*** markvoelker has quit IRC12:01
*** larainema has joined #openstack-infra12:02
*** ddieterly has quit IRC12:02
*** Na3iL has joined #openstack-infra12:03
*** tongli has quit IRC12:03
*** spzala has quit IRC12:03
*** yamamoto has quit IRC12:03
*** salv-orlando has quit IRC12:04
*** asselin_ has quit IRC12:04
jheskethAJaeger, fungi: ansible/puppet haven't run on non-infra-cloud nodes for a while... I can't tell if that's just because of locks though12:04
*** jaosorior has quit IRC12:04
*** jlibosva has quit IRC12:05
rcarrillocruzjhesketh , fungi : i run manual ansible puppet runs against infracloud to bring up computes, that's why there are recent runs against them12:05
rcarrillocruzfyi12:05
jheskethah okay, thanks rcarrillocruz12:06
*** jaosorior has joined #openstack-infra12:06
AJaegerrcarrillocruz: I'm talking about runs on zuul etc12:09
*** rodrigods has quit IRC12:09
*** rodrigods has joined #openstack-infra12:09
*** gouthamr has joined #openstack-infra12:09
*** Jeffrey4l_ has joined #openstack-infra12:10
openstackgerritSean Dague proposed openstack-infra/project-config: to accelerate placement api work, put it in nv check  https://review.openstack.org/36361312:11
openstackgerritBartosz Kupidura proposed openstack-infra/puppet-apps_site: [wip] Glare support for app-catalog  https://review.openstack.org/35902912:11
*** salv-orlando has joined #openstack-infra12:12
sdaguefungi / AJaeger / jhesketh / rcarrillocruz can I get a fast review on https://review.openstack.org/36361312:12
sdagueit's moving a job from experimental into nv check for nova12:12
*** dtantsur is now known as dtantsur|bbl12:12
AJaegerlooking...12:12
*** ddieterly has joined #openstack-infra12:12
*** dprince has joined #openstack-infra12:13
*** shardy_lunch is now known as shardy12:15
sdaguehmm... same issue12:16
sdaguesame git issue12:16
jheskethfungi: looks like ansible is running puppet across our nodes okay now (probably was before too)12:16
dhellmanngood morning12:16
jheskethsdague: same repo?12:16
sdagueyes12:16
jheskethhmm12:17
sdagueif I jump in from another session, and try again, it works12:17
sdaguebut this is going to be painful if 50% of devstack builds just hang locally12:17
jheskethsdague: where in devstack is it failing (do you have a line)12:19
jheskethgerrit appears to be syncing requirements without any trouble12:19
mordredsdague: also, any idea if we're seeing this in a specific region?12:19
AJaegerjhesketh: indeed, zuul was just reconfigured12:19
sdaguemordred: the region is my house12:20
*** pradk has joined #openstack-infra12:20
*** berendt has joined #openstack-infra12:20
mordredsdague: oh. right12:20
mordredsdague: sorry, still coffeeing - I remember you said that now12:20
sdagueyep, no worries12:20
jheskethAJaeger: and looks like puppetboard is updating now too12:20
AJaegerjhesketh, fungi: confirming ^12:20
sdaguejhesketh: it's running the git fetch12:20
AJaegerso, we're green again from my side ;)12:20
sdagueI blasted and restarted again, because I actually need to test patches12:21
sdagueyep, hanging again, this time on nova repo12:22
sdagueI at least changed the git timeout here, so maybe it will blast and retry12:22
sdagueyeh, so setting GIT_TIMEOUT=30 locally is making it retry these network hangs12:22
sdagueit defaults to 012:23
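A sketch of the override sdague is describing, assuming a stock devstack local.conf (devstack's git helper aborts and retries an operation that stalls longer than GIT_TIMEOUT seconds; 0 disables the timeout):

  [[local|localrc]]
  # retry git clones/fetches that hang for more than 30 seconds
  GIT_TIMEOUT=30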
jheskethfungi: hmm, this could be a problem: http://paste.openstack.org/show/565142/12:23
sdaguebut it's blipping a lot12:23
mordredsdague: quick unrelated question ... with https://review.openstack.org/#/c/346282 I get a shade gate failure: http://logs.openstack.org/01/362901/1/check/gate-shade-dsvm-functional-neutron/9698d83/console.html#_2016-08-30_18_56_58_83851212:23
jheskethfungi: if we're only running in a 2GB vm we should look to upgrade that12:23
mordredsdague: which is blocking landing the patch we need to get nodepool updated ... I tried this: https://review.openstack.org/#/c/363157/12:24
pabelangerjhesketh: no, that is a known issue12:24
dhellmannmordred : when you have a sec, I'd like to go ahead with enabling the tagging automation by landing https://review.openstack.org/#/c/36315612:24
*** markvoelker has joined #openstack-infra12:24
pabelangerjhesketh: we run out of memory when posting logs to puppetboard12:24
openstackgerritMartin André proposed openstack-infra/tripleo-ci: Fix scp command with IPv6 addresses  https://review.openstack.org/36354512:24
pabelangeralso, morning12:25
mordredbecause that's what we originally had a month ago or so ... but it's feeling like there is something I should be setting explicitly in the job config that I'm not12:25
jheskethpabelanger: oh, okay... we should fix that12:25
sdaguemordred: so, honestly, I don't know. I'd see what sc68cal says when he gets up. I'm kind of heads down on placement-api this week so can't really get this into my stack12:25
pabelangerjhesketh: yes, sadly we need to rebuild the server12:25
mordredsdague: k, no problem. mostly asked in case you happened to know the answer off the top of your head12:25
*** hashar has quit IRC12:25
fungiokay, back12:26
mordreddhellmann: okie. I have hit the +A12:26
*** hashar has joined #openstack-infra12:26
dhellmannmordred : thanks!12:26
sdagueon that one I was mostly trusting the other folks. But devstack config for neutron is a bit of a whack-a-mole at times, at least off the default path12:26
*** gordc has joined #openstack-infra12:27
*** Genek has joined #openstack-infra12:27
*** psilvad has joined #openstack-infra12:27
*** kgiusti has joined #openstack-infra12:28
mordredsdague: woot12:28
fungijhesketh: yeah, i seem to recall there were memory leaks/oom issues with ansible someone was trying to work out12:28
openstackgerritMerged openstack-infra/project-config: to accelerate placement api work, put it in nv check  https://review.openstack.org/36361312:28
sdaguemordred: at least we have a working default path now :)12:28
sdaguebaby steps12:28
fungioh, never mind, pabelanger replied12:28
*** markvoelker has quit IRC12:29
*** sshnaidm|afk is now known as sshnaidm12:29
*** nwkarsten has joined #openstack-infra12:29
*** trown|outtypewww is now known as trown12:29
*** Ravikiran_K has quit IRC12:29
*** ddieterly has quit IRC12:29
jhesketh:-)12:30
*** javeriak has quit IRC12:31
fungifwiw, when i hacked launch-node to add -vvv on ansible calls, i noticed it transfers an _insane_ amount of data during what looks like fact collection12:31
*** rossella_s has joined #openstack-infra12:32
mordredsdague: that's fantastic! now I just need to figure out how to test creating a provider network and I'm set :)12:32
fungii assume it does that every time it runs against a server but normally hides that from its stdout12:32
mordredfungi: it's possible to turn fact collection off12:32
sdagueyeh, sane provider network setup wasn't quite in the mix yet. Maybe next cycle.12:33
fungiwhile having the fact reporting in puppetdb is useful, we collect many orders of magnitude more detail than we really need to12:33
openstackgerritMerged openstack-infra/project-config: enable release tagging for all repos  https://review.openstack.org/36315612:33
mordredfungi: hrm. we have gather_facts set to true explicitly. I feel like we did that for a reason ... one sec12:33
*** openstackgerrit has quit IRC12:34
*** openstackgerrit has joined #openstack-infra12:34
fungii bet we could speed up launch-node.py by turning that off in its custom play at least12:34
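A minimal sketch of what turning fact collection off could look like in the play launch-node runs, assuming a simple puppet-apply task (the host group, task, and manifest path are placeholders, not the actual system-config playbook):

  - hosts: newly_launched_node
    gather_facts: false
    tasks:
      - name: run puppet without collecting facts first
        command: /usr/bin/puppet apply /opt/system-config/manifests/site.pp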
fungiooh, i managed to catch gerrit with 36 httpd threads running just now12:36
fungithat's the highest i've spotted yet12:36
AJaegerfungi, team: I did some short analysis on docs.o.o content and put it on https://etherpad.openstack.org/p/CGMHA4ANGZ12:36
AJaegerIf you have any questions or anything you want me to look at, please tell me.12:37
AJaegerI'll then talk with Docs team...12:37
fungiAJaeger: one comment on the _sources directories... the "show source" links in the rendered documentation rely on those so they can show you the restructuredtext version of a page12:39
*** markusry has joined #openstack-infra12:40
fungiat least in normal sphinx-type documents12:40
paulobanondoes anyone have any idea what might be causing this: https://storyboard.openstack.org/#!/story/200061812:40
dhellmannfungi, mordred : now that https://review.openstack.org/#/c/363156/ has landed, what's the lag between that version of the script being applied to the signing node? that's a puppet update, right?12:41
mordreddhellmann: yah. so 15-30 minutes ish12:42
fungipaulobanon: yes, jenkins job builder needs permissions to modify jobs12:42
dhellmannmordred : cool, thanks12:42
paulobanonfungi, it only works when the jjb apiuser I'm using has administrator permissions. Even with all Job permissions it gives the same error12:43
AJaegerfungi, I know we cannot remove it. But we could stop generating it going forward - and then the "show source" link will not be there12:43
fungipaulobanon: my guess is that jenkins changed their permission model for the api in 2.7 and 1.6.112:44
*** rlandy has joined #openstack-infra12:44
*** zhurong has joined #openstack-infra12:44
fungipaulobanon: we stopped using jenkins here so i haven't been following its development closely, but electrofelix or zxiiro might know12:44
paulobanonfungi, i see, ill try to look on jenkins side of things. Thanks!12:45
*** lucas-hungry is now known as lucasagomes12:45
*** ifarkas has quit IRC12:46
rcarrillocruzmordred: do you have a sec to check http://paste.openstack.org/show/565138/ ? not sure why nova hits v1 on glance....12:46
mordredrcarrillocruz: I think that's going to be in nova.ini12:46
mordredrcarrillocruz: there will be a glance endpoint configured?12:46
*** ifarkas_ is now known as ifarkas12:47
rcarrillocruzwhat we have in /etc/nova/nova.conf in the controller is this:12:47
rcarrillocruz[glance]12:47
rcarrillocruzapi_servers=https://controller00.vanilla.ic.openstack.org:929212:47
*** tongli has joined #openstack-infra12:47
rcarrillocruzshould it read /v2 ?12:47
openstackgerritMerged openstack-infra/project-config: Revert "Disable rax-iad due to launch failure rate"  https://review.openstack.org/36288512:47
*** tongli has quit IRC12:48
mordredrcarrillocruz: what version are we using?12:48
*** woodster_ has joined #openstack-infra12:48
*** tongli has joined #openstack-infra12:49
rcarrillocruzas in the cloud version? mitaka12:49
mordredrcarrillocruz: I think there is no support for v2 in nova in mitaka ... sdague, you added that in newton, right?12:50
mordredrcarrillocruz: so I think we need to tell glance to also run a v1 endpoint12:50
sdagueapi_servers don't have a version url in them12:50
sdagueso https://controller00.vanilla.ic.openstack.org:9292 is right12:50
rcarrillocruzsdague: mind checking http://paste.openstack.org/show/565138/12:51
rcarrillocruzthat's where i show the issue12:51
sdagueall the version selection is done behind the scenes12:51
rcarrillocruzlong story short: i upload an image, it gets published as v2/uuid12:51
rcarrillocruzbut then i boot the server with that image, nova fails cos it tries to find it on v1/uuid on glance12:51
mordredrcarrillocruz: it looks like we are running the v1 endpoint though12:51
*** ijw has joined #openstack-infra12:52
rcarrillocruzmordred: so, then we should instruct nodepool oscc and our manual glance commands to upload images with v112:52
rcarrillocruzso nova doesn't complain12:52
electrofelixpaulobanon: there were changes in Jenkins 2.0 and newer to require more privileges to get the plugins_info12:52
sdaguercarrillocruz: look at the nova logs12:52
rcarrillocruzlet me test that thing12:52
andreas_sHi fungi, do you know from where the email address is taken that is used for the Summit ATC emails?12:52
mordredrcarrillocruz: it shouldn't matter12:52
mordredrcarrillocruz: it's the same backend set of images ...12:52
sdaguercarrillocruz: Error finding address for12:52
sdaguereally sounds like DNS resolution fail12:52
*** mestery has joined #openstack-infra12:52
sdaguea 404 would probably be a different issue12:53
mordredsdague: that would be running on the nova computes, right?12:53
sdagueyes12:53
paulobanonelectrofelix, got it thank you12:53
fungiandreas_s: yes, i send them to every e-mail address you have configured in your gerrit account at https://review.openstack.org/#/settings/contact12:53
mordredsdague: thanks!12:53
electrofelixpaulobanon: you can either a) disable getting it at all, or b) provide a file containing what would have been if you had access, that allows for creating a job to be run by a user with more privs than your default one12:53
*** markvoelker has joined #openstack-infra12:54
sdaguemordred: it might be run from other nodes as well, but definitely from computes12:54
mordredrcarrillocruz: so we need to make sure that dns resolution is working on the compute nodes12:54
electrofelixpaulobanon: and then passing the plugin info to be used by a user with less privileges to only update jobs12:54
sdagueI can't remember if api hits it as a precheck12:54
rcarrillocruzroot@compute001:/var/log/nova# host controller00.vanilla.ic.openstack.org12:54
rcarrillocruzcontroller00.vanilla.ic.openstack.org has address 15.184.64.512:54
openstackgerritEmilien Macchi proposed openstack-infra/project-config: move tripleo scenario jobs to check pipeline, non-voting  https://review.openstack.org/36362912:55
rcarrillocruzi can run an ansible run with that against all computes, maybe i'm hitting a compute that has resolution hosed12:55
andreas_sfungi, ah ok. So if I change my mail there today, I'll get the ATC pass sent to that new email, also if my commits happened on the old email?12:55
*** javeriak has joined #openstack-infra12:55
sdaguercarrillocruz: yeh, or flip it to the ip addr to see if that makes it go away12:55
*** javeriak has quit IRC12:55
sdaguercarrillocruz: you know which compute failed?12:55
sdaguecan you get on it's logs and backtrack from there12:55
openstackgerritDerek Higgins proposed openstack-infra/tripleo-ci: Install tripleo-admin ssh keys on CI nodes  https://review.openstack.org/36363012:56
AJaegerandreas_s: didn't you get already one? fungi is only sending it out now to those that didn't get one yet.12:56
*** jcoufal has joined #openstack-infra12:56
rcarrillocruzsdague: how do I know? cos from the nova-scheduler i just see messages like: "Successfully synced instances from host 'compute032.vanilla.ic.openstack.org'"12:56
rcarrillocruzi would assume i'd get which compute gets selected on the scheduler12:56
rcarrillocruzso probably a log level tweak is needed?12:56
*** ijw has quit IRC12:57
fungiandreas_s: only if you didn't have any changes merged to official repos between april 7 (mitaka release day) and august 15 (when i sent the most recent batch). otherwise i've already sent one to whatever address(es) you have configured previously. though i can resend it to your updated address now if you tell me your account id number from https://review.openstack.org/#/settings12:57
mordredrcarrillocruz: perhaps ansible all of the hosts with "grep c6bd9eba-bf07-4320-99d8-e407d0d76331 /var/log/nova/*" :)12:57
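The ad-hoc run mordred suggests would look roughly like this from the puppetmaster, assuming the computes match a compute* inventory pattern (the pattern and log path are assumptions):

  ansible 'compute*' -m shell \
    -a 'grep -l c6bd9eba-bf07-4320-99d8-e407d0d76331 /var/log/nova/*.log'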
rcarrillocruzthat works too12:58
*** YorikSar has quit IRC12:58
*** markvoelker has quit IRC12:58
*** ilyashakhat has joined #openstack-infra12:59
sdagueyeh, if you have distributed logs, what mordred said12:59
sdagueELK would be a ++ as soon as you can get it up12:59
*** YorikSar has joined #openstack-infra13:00
openstackgerritMerged openstack-infra/jenkins-job-builder: Remove unused builder.Builder.update_job method  https://review.openstack.org/31975213:00
*** mriedem has joined #openstack-infra13:00
openstackgerritMerged openstack-infra/jenkins-job-builder: Rename Builder.delete_job to Builder.delete_jobs.  https://review.openstack.org/31975313:00
*** pradk has quit IRC13:01
electrofelixwaynr zxiiro: https://review.openstack.org/#/c/319754/15 on merging objects into a JenkinsManager: just a few minor nits, we can tidy these up subsequently if preferred; the main item is a question around what we're testing, which can be followed up later but not in that patch. Let me know whether you want to handle the nits in this patch or follow up13:01
*** dizquierdo has joined #openstack-infra13:03
*** gildub has quit IRC13:03
*** markvoelker has joined #openstack-infra13:04
*** spzala has joined #openstack-infra13:04
*** vikrant has quit IRC13:05
openstackgerritJim Rollenhagen proposed openstack/gertty: Use urlparse from six for python 3 compat  https://review.openstack.org/36363713:05
*** ddieterly has joined #openstack-infra13:05
*** Julien-zte has joined #openstack-infra13:06
*** amitgandhinz has quit IRC13:06
*** amitgandhinz has joined #openstack-infra13:07
openstackgerritPaul Belanger proposed openstack-infra/project-config: Run ansible-role-ubuntu-trusty jobs on master  https://review.openstack.org/36363913:08
*** ddieterly has quit IRC13:09
openstackgerritJesse Pretorius (odyssey4me) proposed openstack-infra/project-config: Add OSA keystone uwsgi functional tests  https://review.openstack.org/36364013:09
*** psachin has quit IRC13:09
rcarrillocruzbang13:10
rcarrillocruz2016-08-31 11:49:09.220 16435 ERROR nova.image.glance CommunicationError: Error finding address for https://controller00.vanilla.ic.openstack.org:9292/v1/images/b37fd797-f863-434d-ab9e-4d27557432f5: [Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed13:10
rcarrillocruzsdague , mordred , thx for the help, something with ssl13:10
*** markvoelker has quit IRC13:10
AJaegerfungi, mail sent out to the docs list to suggest removal of 887 MB from docs.openstack.org.13:10
sdaguercarrillocruz: nice13:11
mordredrcarrillocruz: woot! and we learn things13:11
fungiAJaeger: awesome--thanks for going through all that13:11
*** fguillot has joined #openstack-infra13:11
rcarrillocruzso yeah13:11
rcarrillocruzthere are couple things13:11
rcarrillocruzthat we have not puppetized13:11
rcarrillocruzlike13:11
rcarrillocruzyou know13:11
mordredcerts?13:11
rcarrillocruztrusting our self-signed certificates13:11
*** raildo has joined #openstack-infra13:11
rcarrillocruz:D13:11
mordred:)13:11
rcarrillocruzwhen i first created an instance last week13:12
mordredrcarrillocruz: I have this hunch that the next thing to puppet is trusting our self-signed certs13:12
rcarrillocruzit worked13:12
sdagueheh13:12
andreas_sAJaeger, fungi, ok, got it thanks. I have mine already - just was wondering if that works out without larger impacts - and ATC was one of them I had in mind thanks!13:12
rcarrillocruzcos i trusted the cert on the couple machines i provisioned13:12
rcarrillocruzbut i have not with the forty something i provisioned afterwards13:12
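For reference, the manual step rcarrillocruz describes is roughly the following on an Ubuntu node (the certificate filename is a placeholder):

  # copy the self-signed cert into the system trust store and refresh it
  sudo cp infracloud-vanilla.crt /usr/local/share/ca-certificates/infracloud-vanilla.crt
  sudo update-ca-certificates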
pabelangersdague: fungi: Did we figure out why git clones were hanging this morning? Or is that still an ongoing issue13:12
AJaegerwznoinsk: I can clone the repo now - hope everything works for you as well13:13
wznoinsklooks good, thanks13:14
fungiandreas_s: yeah, handling address changes is tricky. basically when i send updated batches of invites i diff against the previous list by gerrit account id number. that should _normally_ be enough but there are cases where people accidentally create and begin using new (duplicate) gerrit accounts or we combine duplicate accounts... the list generator tries to find and deduplicate them where possible13:14
fungibut there's not always enough overlap in data to find them all13:14
*** mtanino has joined #openstack-infra13:15
fungiandreas_s: it's further complicated by additions/removals from the extra-atcs lists in governance, since someone can be both an extra-atc and a contributor (perhaps to a different team than they're an extra-atc on), so identifying duplication and changes in that duplication over the course of a cycle between batches is especially complicated13:15
*** sdake_ has joined #openstack-infra13:16
andreas_sfungi, that seems to be a really complex process :D13:16
andreas_sfungi, thanks for the insights!13:17
*** hewbrocca has joined #openstack-infra13:17
fungiandreas_s: i'm hoping with the coming project-team gatherings we'll be able to just make admission free (or trivially inexpensive) for anyone who wants to attend, and then use attendance at one or more ptgs over a reasonably lengthy period of time to get discounted/free access to the summits. so hopefully this becomes much simpler in the future13:18
*** rhallisey has quit IRC13:18
*** mdrabe has joined #openstack-infra13:18
*** zz_dimtruck is now known as dimtruck13:18
*** sdake has quit IRC13:19
*** esberglu has joined #openstack-infra13:20
*** yamamoto_ has quit IRC13:20
*** ilyashakhat has quit IRC13:20
mat128Hi group, I'm getting IPv6 links for running jobs in the zuul status page13:21
mat128if, for example you search for "363294"13:21
mat128"telnet://2001:4800:1ae1:18:f816:3eff:fe6e:1042:19885"13:21
pabelangerfungi: sdague: it looks like tripleo-test-cloud-rh1 can reproduce the git clone hanging failures this morning. All of their devstack jobs are timing out after 3 hours13:21
AJaegermat128: YEs, that's correct. Welcome to the 21st century ;)13:22
pabelangermat128: yes, we have an IPv6 only cloud, osic-cloud113:22
mat128oh wow :)13:22
mat128now I have a real reason to ask my provider for ipv6..13:22
*** sdake_ is now known as sdake13:22
mat128lol13:22
sdaguewould be really great to get that web console proxy.... :)13:23
*** coolsvap is now known as _coolsvap_13:23
fungimat128: i've been having lots of luck with a free v6 tunnel from hurricane electric (tunnelbroker.net) for many years now13:24
fungimat128: but yes, just in case there are still isps out there who claim there's no rush to implement ipv6 because nobody actually has v6-only content... feel free to use us as an example! ;)13:25
*** Goneri has joined #openstack-infra13:27
rtheisnot sure if this is the correct channel for grafana dashboard questions ...13:28
*** flepied has quit IRC13:28
rtheisnetworking-ovn dashboard has several issues that I'm not sure how to resolve: http://grafana.openstack.org/dashboard/db/networking-ovn-failure-rate13:28
mat128fungi: thanks, I'll try that13:29
*** zul has quit IRC13:30
rtheisThe "Unit Test Failure Rate" graph has no data.  And while the job names look good in project config, I'm not finding success or fail data posted to graphite13:30
*** rossella_s has quit IRC13:31
*** jamesdenton has joined #openstack-infra13:31
zxiiroelectrofelix: cool, I'll leave that to waynr  since it's his patch. Let me fix the merge conflict in the last patch listed though since it doesn't depend on anything13:32
*** cardeois has joined #openstack-infra13:33
*** zul has joined #openstack-infra13:33
*** ansiwen has joined #openstack-infra13:34
openstackgerritThanh Ha proposed openstack-infra/jenkins-job-builder: Simplify delete by removing unnecessary loop  https://review.openstack.org/35799013:35
EmilienMhello infra! can I have a review on https://review.openstack.org/#/c/363629/ please?13:35
*** hongbin has joined #openstack-infra13:36
*** matt-borland has joined #openstack-infra13:36
fungirtheis: when was that graph added, and how often do changes get approved for that repo? do you also have rules to skip unit tests for certain kinds of patches (for example those only modifying documentation files)?13:37
*** mtanino has quit IRC13:37
rtheisfungi: the graph has been around for at least a few weeks13:38
zxiiroelectrofelix: fyi I added my patch fixing disabled always returning true to the v2 list since I think it's important to merge that13:38
rtheiswe have changes getting approved usually on a daily basis13:38
fungirtheis: i wonder if none of them have failed a unit test job in the gate pipeline since the graph was added13:39
rtheisI believe there are unit test skips for doc-only changes13:39
*** ddieterly has joined #openstack-infra13:39
zxiiroelectrofelix: can you review the patches after the one you left a comment on, so that if it's a trivial rebase we can merge it while you're gone?13:39
rtheisfungi: that may be possible13:39
waynrelectrofelix zxiiro: i'll probably try to address the nits in this patch, not sure until i have time to take a closer look though...did you see my comments the other day about renaming jenkins_jobs.builder to jenkins_jobs.manager? any thoughts on that?13:39
rtheisbut the check queue certainly had failures and the py34 and py35 data isn't shown13:40
AJaegerrtheis: create a change that fails and let's see whether it shows up;)13:40
zxiiroelectrofelix: waynr with that said I think we're very close to having this all done. Just need to figure out if we still need the 3 patches that are in merge conflict at the bottom https://review.openstack.org/#/q/status:open+project:openstack-infra/jenkins-job-builder+branch:master+topic:jjb-2.0.0-api13:40
*** Guest81 has joined #openstack-infra13:40
rtheisAJaeger: I'll see if we have one already13:41
AJaegerrtheis: but indeed, this looks odd...13:41
rtheisAJaeger: here is one https://review.openstack.org/#/c/362494/13:41
sambettsHi infra, where can I find the rules/regs for whether a project is allowed to publish docs to docs.openstack.org/developer ??13:41
dhellmannI have an unexpected job failure for pypi-both-upload on the monasca-events-api repo. The ACLs look like they use the default, and I don't think the release team tagged anything. Could the job have been triggered by a repo import? http://logs.openstack.org/00/004f1a23226101c6d4349a74462899f59c08dd93/release/monasca-events-api-pypi-both-upload/69f2304/console.html13:41
*** isaacb has joined #openstack-infra13:42
AJaegersambetts: any project in the big tent can. See also the infra manual Creator's guide13:42
fungiAJaeger: rtheis: getting a failing change to show up on the gate pipeline graph would be hard since it would need to succeed in check to get enqueued13:42
AJaegerfungi, oh, yes. Silly me...13:42
AJaegersambetts: which repo do you care about?13:43
sambettsAJaeger: thanks, I'm trying to work out if networking-cisco is still allowed to publish docs, I vaguely remember something about not being allowed to13:43
fungiusually the way you get hits in the gate pipeline failures is by having nondeterministic failures13:43
rtheisAnother oddity is that the "Integrated Failure Rates" graph still shows an old job name13:43
AJaegersambetts: it's not in governance/reference/projects.yaml13:43
rtheisgate-tempest-dsvm-networking-ovn-native-l3 instead of gate-tempest-dsvm-networking-ovn-native-services13:44
AJaegersambetts: so, should not publish on docs.o.o13:44
*** sai has joined #openstack-infra13:44
*** xyang1 has joined #openstack-infra13:44
*** Guest81 has quit IRC13:44
pabelangerfungi: sdague: Wonder if adding http://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=d53ad0b07d3e7bdd2668c2d3f1815d95d4b8f532 to devstack-gate could help with network issue for git clones13:45
fungirtheis: if you rename a job in the ci system, you'd also need to adjust the graphite counter names in your grafana dashboards to match13:45
rtheisfungi: we did13:45
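In other words, the dashboard's graphite targets have to be renamed along with the job; assuming the usual zuul statsd counter layout (the exact prefix is an assumption, not copied from the actual dashboard), the edit is along the lines of:

  # old target, stale job name
  stats_counts.zuul.pipeline.gate.job.gate-tempest-dsvm-networking-ovn-native-l3.FAILURE
  # new target, renamed job
  stats_counts.zuul.pipeline.gate.job.gate-tempest-dsvm-networking-ovn-native-services.FAILURE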
sambettsAJaeger: so I'm still a little confused where we stand now, because we were kicked out of the neutron umbrella thing, but I thought that meant we were still a big tent project13:45
jkilpatrpabelanger, is there any way to separate the pep8 and ansible linting jobs for openstack/browbeat?13:46
rtheisyet the old name still shows in the graphs13:46
*** amotoki has quit IRC13:46
rtheis*graph13:46
openstackgerritOpenStack Proposal Bot proposed openstack-infra/project-config: Normalize projects.yaml  https://review.openstack.org/36366913:46
sdaguepabelanger: I'm not running devstack-gate13:46
sdaguepabelanger: this is running devstack, locally13:46
fungirtheis: oh, so the grafana dashboard isn't updated to what the configuration in networking-ovn.yaml says it should be... interesting13:46
sdagueand we're hanging today, a lot13:47
pabelangerjkilpatr: yes it is possible, you'd create a new job breaking them apart. However, we're trying to use the linters / pep8 target to group them together13:47
AJaegersambetts: https://review.openstack.org/303026 removed networking-cisco from the Big Tent. Removal from the Stadium included removal from Big Tent13:47
rtheisfungi: yes, that appears to be the case13:47
pabelangersdague: right, it is 100% reproducible with tripleo jobs right now13:47
sdaguepabelanger: which indicates that it's going to be an issue for lots of folks13:47
pabelangeryes13:48
*** njohnston has left #openstack-infra13:48
jkilpatrpabelanger, The issue is we're likely to fail ansible linters for a while since we want to actually refactor what's failing instead of band-aiding it. But this means python errors don't get -1'd13:48
rtheisfungi: although I did notice that the latest "Periodic jobs" graph addition did show up13:48
AJaegerinfra-root, setup of https://github.com/openstack/k8s-docker-suite-app-murano did not work. The repo was approved earlier today but is not mirrored. Could somebody fix this, please?13:48
sambettsAJaeger: Can a project like networking-cisco still become part of the big tent? Do I need to reapply?13:49
rtheisfungi: it is almost like the existing graphs which were updated in the configuration aren't being used in the dashboard ... only new graphs13:49
fungidhellmann: https://pypi.org/project/monasca-events-api/#history shows a 0.0.5 version uploaded more than a year ago, and a 0.0.6 uploaded 10 months ago13:49
AJaegersambetts: I'm not in the TC. But according to my understanding and the messaging I remember: Yes.13:49
fungirtheis: pabelanger may have some more insights once he's freed up13:49
dhellmannfungi : yeah13:49
pabelangerjkilpatr: Yup, understood13:49
rtheisthanks13:49
fungihe's our grafana specialist and the grafyaml author13:50
AJaegersambetts: best talk with a TC member for a more educated answer13:50
sambettsAJaeger: Ok thanks! I'll go and do some more digging13:50
AJaegersambetts: there are conditions for a project to be part of big tent, check whether you meet them.13:50
*** dtantsur|bbl is now known as dtantsur13:51
sambettsAJaeger: We've been working to maintain our OpenStackness because we assumed we were still a part of it, just not in the neutron thingy anymore, so I hope we still meet them13:51
fungidhellmann: they very well may have manually uploaded early releases to pypi13:51
dhellmannfungi : yeah. I'm trying to understand what triggered a job to run today or yesterday, though13:52
fungidhellmann: oh! looking13:52
*** rossella_s has joined #openstack-infra13:52
AJaegerfungi, the repo was imported today13:52
dhellmannok, I thought that might be the case because it seemed new13:52
fungipabelanger: sdague: zuul-cloner added timeout and retry options, so maybe devstack-gate just isn't using them (yet)?13:52
dhellmannfungi, AJaeger: it's interesting that importing a repo triggers the jobs like that13:52
*** ekhugen has quit IRC13:53
dhellmannhmm13:53
AJaegerfungi, dhellmann https://review.openstack.org/362462 merged 5 hours ago13:53
*** amotoki has joined #openstack-infra13:53
dhellmannfungi, mordred : it doesn't look like the signing node has been updated with the new version of the release script (it still has the skip in)13:53
jkilpatrpabelanger, need me to do anything to make the job?13:53
pabelangerfungi: Okay, digging into that13:53
*** esp has quit IRC13:54
openstackgerritGabriele Cerami proposed openstack-infra/tripleo-ci: Add IPv6 network configuration for ipv6 job types  https://review.openstack.org/36367413:54
sdaguefungi: ... it's not devstack-gate13:54
*** eharney has joined #openstack-infra13:54
fungisdague: oh, i was responding to pabelanger's suggestion of adding git timeout support to devstack-gate13:54
pabelangerjkilpatr: sorry, distracted with another issue. What is the issue you are trying to solve?13:54
fungitoo many conversations at once for me today13:55
jkilpatrpabelanger, different jobs in CI for pep8 and ansible-linters, so that one can be voting and the other won't be for now13:55
*** jamielennox|away is now known as jamielennox13:55
pabelangerjkilpatr: I'd just comment out the code in tox.ini for now, make the existing job voting, then once you are ready, uncomment the tox change13:56
jkilpatrpabelanger, ok then, will do. Thanks for the help13:56
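A sketch of pabelanger's suggestion, assuming browbeat's tox.ini chains the two tools in one linters environment (the env name and commands are assumptions):

  [testenv:linters]
  # ansible-lint is temporarily dropped from this env while the playbooks
  # are refactored; add it back once they pass
  commands =
      flake8 {posargs}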
*** psachin has joined #openstack-infra13:57
*** ekhugen has joined #openstack-infra13:57
*** pgadiya has quit IRC13:57
waynrzxiiro electrofelix: I would like to get https://review.openstack.org/351743 and https://review.openstack.org/333076 into 2.0.0 API13:58
*** tongli_ has joined #openstack-infra13:58
fungidhellmann: so my guess is the reasons you don't normally see that happen: 1. projects are often imported into gerrit before they add release jobs; 2. most repos that get imported don't already have tags in them13:58
fungidhellmann: but yeah, we "import" projects with a scripted push of the content from an existing repository somewhere, and gerrit emits tag-related events when those tags are pushed in13:59
zxiirowaynr: ok cool, we just need to fix the merge conflicts, I might be able to help with that13:59
dhellmannfungi : those reasons make sense. I'm not too worried, since nothing is actually broken, but I wanted to make sure we understood what was going on.13:59
*** zshuo has quit IRC14:00
*** links has quit IRC14:00
dhellmannfungi : I have to step away. When you have a few minutes, could you look at puppet on the signing node to make sure it's running ok? we merged the change to enable release automation everywhere, but that version of the script doesn't seem to have been pushed out yet. The patch landed ~90 minutes ago. https://review.openstack.org/#/c/363156/14:01
*** tongli has quit IRC14:01
fungidhellmann: earlier today puppet went several hours without updating servers, so we may have some issues with our configuration management getting applied in a timely manner14:01
*** jamielennox is now known as jamielennox|away14:01
pabelangerfungi: do you have a link handy for the zuul-cloner timeout / retry you mentioned? I don't see anything in the documentation.14:02
dhellmannfungi : ah! ok.14:02
*** wgd3 has joined #openstack-infra14:02
*** jamielennox|away is now known as jamielennox14:02
*** pradk has joined #openstack-infra14:02
*** rfolco has quit IRC14:03
*** zhurong has quit IRC14:03
*** zhurong has joined #openstack-infra14:03
*** rfolco has joined #openstack-infra14:04
AJaegerpabelanger, fungi: and jobs don't use it either if my grep in jenkins/jobs is correct14:04
*** dimtruck is now known as zz_dimtruck14:04
fungii'm reading the zuul.lib.cloner module now to see where that is14:05
fungimaybe the change i'm remembering never merged?14:05
*** cdent_ has joined #openstack-infra14:05
mordreddhellmann: sorry - was writing a long email to someone ...14:05
mordredthanks fungi for responding14:06
*** sdake_ has joined #openstack-infra14:06
fungipabelanger: AJaeger: aha, 282099 and 282102 are what i was thinking of14:06
funginot yet merged14:06
*** akshai has joined #openstack-infra14:07
*** cardeois_ has joined #openstack-infra14:07
*** cardeois has quit IRC14:07
*** yamamoto has joined #openstack-infra14:07
*** cdent_ has quit IRC14:07
*** cdent has joined #openstack-infra14:07
*** sdake has quit IRC14:07
AJaeger;(14:08
*** tonytan4ever has quit IRC14:09
*** matt-borland has quit IRC14:09
*** tonytan4ever has joined #openstack-infra14:09
*** egarbade_ has joined #openstack-infra14:09
* sc68cal connects14:10
sc68calmordred: looking14:10
mordredsc68cal: yay! thanks14:11
*** rfolco has quit IRC14:11
*** cardeois_ is now known as cardeois14:11
sc68calmordred: I have to do a little research, but basically we create a physical network named public, and I don't think more than one network can use the same physical net14:11
*** eranrom has joined #openstack-infra14:11
*** spzala has quit IRC14:12
*** rfolco has joined #openstack-infra14:12
*** spzala has joined #openstack-infra14:12
*** rbrndt has joined #openstack-infra14:12
sc68calmordred: http://logs.openstack.org/01/362901/1/check/gate-shade-dsvm-functional-neutron/9698d83/logs/etc/neutron/plugins/ml2/ml2_conf.ini.txt.gz14:13
sc68calscroll to the bottom14:13
mordredsc68cal: ah - so, that's the thing that creates the public physical network14:13
*** matbu is now known as matbu|mtg14:14
*** rhallisey has joined #openstack-infra14:14
sc68calmordred: DevStack now creates a public network and uses the "public" physical network14:14
sc68cal.... we may need to rename it.14:14
sc68calbut basically we create a neutron network that then maps to the br-ex device14:14
*** eventingmonkey has joined #openstack-infra14:14
* sc68cal digs for it14:14
*** sandanar has quit IRC14:15
sc68calmordred: http://logs.openstack.org/01/362901/1/check/gate-shade-dsvm-functional-neutron/9698d83/logs/devstacklog.txt.gz#_2016-08-30_18_46_38_67114:15
mordredah.14:15
*** abregman has quit IRC14:16
mordredso - our test is trying to do that, but it's failing because devstack has already done that14:16
sc68calI believe so14:16
*** amotoki has quit IRC14:16
fungimordred: did /var/log/ansible.log replace /var/log/run_all.log on puppetmaster.o.o?14:16
sc68calit's a recent development, thanks to kevinbenton's https://review.openstack.org/34628214:16
*** tpsilva has joined #openstack-infra14:16
*** spzala has quit IRC14:17
sc68calmordred: The actual first commit - https://review.openstack.org/#/c/343072/ - has the details14:17
pabelangerfungi: Ah, thanks.14:17
sc68calmordred: we had to revert the revert a couple times :)14:17
mordredsc68cal: gotcha. so, it's a great thing generally, but may mean that this particular test won't work on devstack anymore, since the base devstack setup has already done that thing14:17
*** Guest81 has joined #openstack-infra14:17
mordredfungi: maybe? I dont remember thathappening14:17
jrollsc68cal: the latest revert of the revert broke ironic CI, fwiw. we're working on fixing it but would you prefer to revert in the meantime?14:17
sc68calmordred: yeah just put some logic in to check for a public network that has the physnet set14:17
mordredsc68cal: cool. and skip-if that is true I guess14:18
sc68caljroll: The issue is we've deprecated the way we were doing it before. Like, setting external_network_bridge=br-ex for the l3 agent will soon not work14:18
*** tonytan4ever has quit IRC14:18
sc68calit's going bye-bye14:18
sc68calper https://review.openstack.org/#/c/343072/14:19
sc68calI should write something on the ML14:19
fungimordred: nevermind, it was puppet_run_all.log (i was spacing on correct the name)14:19
mordredfungi: phew14:19
*** esp has joined #openstack-infra14:19
*** tonytan4ever has joined #openstack-infra14:20
jrollsc68cal: I don't know how all of this works, but I do hope we can find something workable for ironic before we make that not work14:20
fungiit looks like maybe puppeting all the infracloud nodes has significantly increased the time to complete our update loops14:20
fungito the point where it's taking a couple hours to complete now14:20
*** mtanino has joined #openstack-infra14:20
anteayafungi: :(14:20
mordredfungi: oh. perhaps we shold put infracloud puppet on a different loop14:20
mordredfungi: so that infracloud puppet does not block infra puppet?14:21
fungisome of this could be reachability issues with some of the infra-cloud nodes14:21
mordredyah14:21
fungiso it may be waiting for ssh timeouts to kick in14:21
mordredalso - perhaps increasing our parallelism is warranted14:21
fungii'm working out the phase timing now14:22
*** psachin has quit IRC14:22
fungilooks like we do the git servers first and that currently takes ~3 minutes14:22
AJaegerbbl14:22
jrollsc68cal: for now, we're setting Q_USE_PROVIDERNET_FOR_PUBLIC=False to get us back up and running, still waiting for CI on that though14:23
fungioh, looking at the run that started at 12:30 utc, the review.o.o phase took almost half an hour on its own14:23
sc68caljroll: OK. hope that unblocks you. Again sorry for the surprise....14:24
jrollsc68cal: yeah, I'd just like some help figuring out the future here (probably after freeze)14:24
mordredsc68cal: oh - if I do Q_USE_PROVIDERNET_FOR_PUBLIC=False ... that would not use the physical network for the public network?14:24
fungiafter review.o.o we take another ~3 minutes to do other nodes, then we start on the infracloud nodes14:24
*** zz_dimtruck is now known as dimtruck14:25
sc68calmordred: yes that should have that effect14:25
fungithen ansible times out for some 15+ minutes on "fatal: [controller00.vanilla.ic.openstack.org]: UNREACHABLE!"14:25
mordredsc68cal: ok. cool. I might add a job then that has that set ... so that I'll skip trying to create a provider net in one of the jobs, but create one in the other job14:25
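The devstack knob being discussed, as it would appear in a job's or a local local.conf (a sketch; everything else in the file is whatever the job already sets):

  [[local|localrc]]
  # fall back to the pre-provider-network style public network setup
  Q_USE_PROVIDERNET_FOR_PUBLIC=False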
sc68calmordred: k14:26
*** Guest81 has quit IRC14:26
*** amotoki has joined #openstack-infra14:26
fungiand it got a network disconnect after some 10+ minutes writing to compute037.vanilla.ic.openstack.org14:27
fungianother disconnect on compute040.vanilla.ic.openstack.org14:27
*** rajinir has joined #openstack-infra14:28
*** spzala has joined #openstack-infra14:28
fungiso basically on the current pulse it started working through infracloud nodes at ~1300 utc and is still going 1.5 hours later14:28
fungircarrillocruz: ^ ideas? this is pretty crippling at the moment14:29
*** yamamoto has quit IRC14:29
mordredsc68cal: https://review.openstack.org/363715 I think this should do it ... thnak you for your help!14:29
*** yamamoto has joined #openstack-infra14:29
fungircarrillocruz: can we temporarily disable ansible for infra-cloud until it can be switched to a separate loop?14:29
mordredfungi: ++14:29
rcarrillocruzunreachable controller00 ?14:30
rcarrillocruzi'm in now14:30
rcarrillocruzlet me check the others14:30
pabelangerfungi: So, I was thinking the other day, we should consider moving the infracloud ansible runs to a new server, so we don't affect our control plane for the project. This also starts our migration away from puppetmaster.o.o14:30
*** tonytan4ever has quit IRC14:30
rcarrillocruzok, so my ansible play run against infracloud just finished14:31
rcarrillocruzi get as unreachable:14:31
rcarrillocruzcompute01614:31
*** abregman has joined #openstack-infra14:31
rcarrillocruzcompute00514:31
*** tonytan4ever has joined #openstack-infra14:31
fungircarrillocruz: well, ansible timed out trying to reach it from puppetmaster. no idea if that means it's a network problem or an ansible problem. regardless the sheer number of nodes we're updating in infra-cloud coupled with the complexity of what's being done on them plus possible network problems there causes it to dwarf the time needed to run against our typical servers14:31
rcarrillocruzand compute04314:31
fungircarrillocruz: trace through /var/log/puppet_run_all.log and look at timestamps14:32
*** akshai has quit IRC14:32
*** ZZelle has joined #openstack-infra14:32
rcarrillocruzso, what's the usual pattern - put servers with problems in the emergency group, yeah?14:33
sc68calmordred: no problem..... sorry it kind of is after the fact rather than warning people in advance14:33
sc68caljroll: ^14:33
*** nmagnezi has quit IRC14:33
*** akshai has joined #openstack-infra14:33
rcarrillocruzso they don't get run by the run_all.sh playbooks14:33
fungircarrillocruz: yeah, though if they're all already in a group maybe the group itself can be disabled?14:33
rcarrillocruzwell, i believe putting the whole group in is a bit too much14:34
fungii am admittedly fuzzy on how ansible groups other than the disabled group play into disabling14:34
openstackgerritMonty Taylor proposed openstack-infra/system-config: Run puppet on infracloud in a different cron  https://review.openstack.org/36371914:34
mordredrcarrillocruz, fungi: ^^14:34
rcarrillocruzsome failures are legit, cos they are not completely configured/provisioned14:34
amrithis there a way to get to CI gate pipeline logs before the jobs are done? I was too late to grab the output with netcat from the job but it failed and I'd like to see the output if possible. commit 5b3f953b0d858f62a469a115fb5aa345333a9fd1, job output telnet://104.130.73.19:1988514:34
fungircarrillocruz: sure, but even the successful runs against those servers seem to be... slow14:34
fungicompared to our typical virtual machines anyway14:35
rcarrillocruzmordred: good, that'll give me breathing time till i pinpoint the individual offending ones14:35
mordredrcarrillocruz: ++14:35
*** amotoki has quit IRC14:35
fungiand it seems fine if we have a different update frequency for infra-cloud nodes vs our virtual servers14:36
*** oanson has quit IRC14:36
*** ansiwen has quit IRC14:36
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Run puppet on infracloud in a different cron  https://review.openstack.org/36371914:37
*** david-lyle_ has joined #openstack-infra14:37
fungiokay, so i've confirmed the delay this run on review.o.o is that Exec[manage_projects] took nearly half an hour to return14:37
*** ansiwen has joined #openstack-infra14:37
*** salv-orlando has quit IRC14:38
rcarrillocruzmordred: how about logging the ansible puppet runs to a different log file?14:38
pabelangermordred: fungi: rcarrillocruz: Any chance for reviews on https://review.openstack.org/#/c/356702/ https://review.openstack.org/#/c/356703 ? osic-cloud8 is ready and want to get the credentials onto puppetmaster.o.o14:38
*** ddieterly is now known as ddieterly[away]14:38
*** senk has quit IRC14:39
mordredrcarrillocruz: sure?14:39
*** ansiwen has quit IRC14:39
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Run puppet on infracloud in a different cron  https://review.openstack.org/36371914:39
*** ansiwen has joined #openstack-infra14:40
openstackgerritPaul Belanger proposed openstack-infra/system-config: Add credentials for osic-cloud8  https://review.openstack.org/35670314:40
*** ddieterly[away] is now known as ddieterly14:40
openstackgerritPeter Zhurba proposed openstack-infra/project-config: Add repo for openstack/puppet-glare.  https://review.openstack.org/36295014:40
rcarrillocruzmordred: ^14:40
rcarrillocruzi'm cool landing as is14:40
rcarrillocruzfungi, pabelanger ^14:40
*** kushal has joined #openstack-infra14:41
rcarrillocruzpabelanger: i just created the mirror on the infracloud and set up the DNS14:41
pabelangerrcarrillocruz: aren't there going to be issues with the update_puppet.yaml playbook?14:41
rcarrillocruzwill add the node on site.pp shortly14:41
rcarrillocruzlet me review your changes14:41
pabelangerthat new crontab will not update the puppet bits locally14:41
pabelangerso there could be some sync issues14:41
mordredyah. I didn't have a good answer for that14:42
mordredbut figured for now infracloud having eventual consistency was likely fine14:42
pabelangerokay14:42
*** eventingmonkey has quit IRC14:42
rcarrillocruzwe can always put an additional update_puppet in the run_all_infracloud afterwards, let's see how it goes for now14:42
fungilgtm, +314:43
*** tonytan4ever has quit IRC14:43
fungithanks!14:43
*** wgd3 is now known as wgd3[away]14:43
*** tonytan4ever has joined #openstack-infra14:43
fungiso digging further in the review.o.o logs, manage_projects was correctly triggered due to a change for k8s-docker-suite-app-murano14:44
*** ansiwen has quit IRC14:44
*** eventingmonkey has joined #openstack-infra14:44
*** ansiwen has joined #openstack-infra14:45
fungii wonder if the script simply needs some refactoring to handle the scale of repos we've grown to14:45
rcarrillocruzquestion: the emergency hosts group, is that tracked in a repo or is it just a file on the puppetmaster?14:45
rcarrillocruzversion controlled i mean14:45
*** mdrabe has quit IRC14:46
fungircarrillocruz: there are two. the emergency one is not in any git repo, but there is also a disabled group in the groups file in system-config14:46
fungifor longer-term disablement, we should use the file in system-config14:46
openstackgerritBartosz Kupidura proposed openstack-infra/puppet-apps_site: [wip] Glare support for app-catalog  https://review.openstack.org/35902914:46
rcarrillocruzah sweet14:46
*** sputnik13 has joined #openstack-infra14:46
rcarrillocruzi'll put the 3 servers i cannot provision in the long-term file then, thx14:46
*** amotoki has joined #openstack-infra14:48
*** amitgandhinz has quit IRC14:48
*** amitgandhinz has joined #openstack-infra14:49
*** tonytan_brb has joined #openstack-infra14:49
skraynevAJaeger: fungi: Sorry for the interruption. I just worry about patch https://review.openstack.org/#/c/357745/, I see that it was finally merged. And I can see a new repository, but it looks like the provided info about the existing repo was not used. Maybe I did something wrong or need to wait a bit more time?14:49
*** eventingmonkey has quit IRC14:50
*** tonytan4ever has quit IRC14:50
jrollsc68cal: no worries, it happens14:52
*** Swami has joined #openstack-infra14:52
fungiso looking at the manage-projects debug log, for every single repo we have in projects.yaml it's doing 11 git calls and a file copy (at least as far as what it logs) seemingly in an attempt to see whether re-pushing the acl into gerrit will result in an update? that definitely seems excessive when the update to project-config only touched one repo14:52
*** markvoelker has joined #openstack-infra14:52
skraynev AJaeger: fungi: oh Jesus. I missed ".git"14:52
openstackgerritMatthew Treinish proposed openstack-infra/project-config: Add non-voting governance tag validation job  https://review.openstack.org/36316514:52
vrovachev1andreas_s: I got +1 from fuel PTL for patch https://review.openstack.org/#/c/359704/14:53
andreas_svrovachev1, you probably meant AJaeger :)14:54
openstackgerritMonty Taylor proposed openstack-infra/shade: Skip test creating provider network if one exists  https://review.openstack.org/36371514:54
fungiskraynev: looks like it imported fine to me http://git.openstack.org/cgit/openstack/k8s-docker-suite-app-murano/14:54
*** mdrabe has joined #openstack-infra14:54
vrovachev1andreas_s: Oh, yes, I'm so sorry :)14:54
vrovachev1AJaeger:  I got +1 from fuel PTL for patch https://review.openstack.org/#/c/359704/14:55
*** afred312 has joined #openstack-infra14:55
anteayafungi: wow, that is a lot of work manage-projects is doing for little effect14:56
*** nt has joined #openstack-infra14:57
fungianteaya: yeah, i have a feeling it could be made _waaaay_ more efficient14:57
fungithough i'm not sure i have the available bandwidth to hack on it yet14:57
nthey folks, I get the following warning after updating from JJB 1.3.0 to JJB 1.6.1:  WARNING:jenkins_jobs.modules.publishers:trigger-parameterized-builds:Using deprecated order for parameter sets in triggered-parameterized-builds. This will be changed in a future release to inherit the order from the user defined yaml. To enable this behaviour immediately, set the config option '__future__.param_order_from_yaml' to 'true' and change the input job14:57
ntconfiguration to use the desired order14:57
skraynevfungi: Great. Then it's just some delay in updating the github copy of the repo. I checked https://github.com/openstack/k8s-docker-suite-app-murano14:57
ntis there something I should adjust in my JJB templates to resolve that warning?14:57
skraynevfungi: thank you. you saved my nerves :)14:58
anteayafungi: agreed on both points14:58
fungiskraynev: oh, github's api is terrible. we don't really support that github mirror more than on a best-effort basis. this is what it did according to our logs: http://paste.openstack.org/show/565208/14:58
anteayant: did you consider following the instructions that accompany the warning?14:59
*** david-lyle_ has quit IRC14:59
fungiskraynev: i'll poke at it and see what happened, but odds are it created the repo in github but returned a 404 response because github is terribly broken for real automation, so we subsequently assumed the repo wasn't there and never tried to grant gerrit permission to replicate into it14:59
skraynevfungi: O_o. I didn't know about that.14:59
*** asselin_ has joined #openstack-infra14:59
*** ijw has joined #openstack-infra15:00
ntanteaya, yes, but i just want to be sure that i don't need to adjust my templates in any way.15:00
fungiskraynev: yeah, git.openstack.org is where our official git repos are served from. github.com is not a service we control, and most of us wish we could just drop it entirely but then someone else would run an even worse mirror of our repos on github because there are too many people who think that github is the official place for everything open-source15:01
skraynevfungi: ok. thank you for the attention. It's really not a big deal to have a copy on github IMO. However it confused me, because I was not familiar with such an issue15:01
openstackgerritMatthew Treinish proposed openstack-infra/project-config: Add lpmqtt project  https://review.openstack.org/36329615:01
openstackgerritMatthew Treinish proposed openstack-infra/project-config: Add puppet-lpmqtt project  https://review.openstack.org/36329715:01
*** inc0 has joined #openstack-infra15:01
skraynevfungi: I WAS one of those people ;)15:01
*** tongli_ has quit IRC15:01
*** isaacb has quit IRC15:02
ntanteaya, i have my parameterized builds ordered already as i would prefer.15:02
*** tongli has joined #openstack-infra15:02
fungiskraynev: i've manually corrected the permissions for that repo in github and gerrit has successfully replicated into it now15:02
mordredelectrofelix: ^^ nt has some questions15:02
inc0hey guys, did you manage to check out our little cluster?15:03
inc0pabelanger ^15:03
openstackgerritafazekas proposed openstack/os-testr: Error on invalid list parameter combination  https://review.openstack.org/36373915:04
pabelangerinc0: Yes, have patches up now to bring it online.15:04
inc0sorry, hey guys and gals, ladies and gentlemen, hello to everyone ;)15:04
*** dmellado is now known as dmellado|mtg15:04
pabelangerhttps://review.openstack.org/#/q/topic:osic-cloud815:05
pabelangerrcarrillocruz: how did using cloud-launcher for infracloud mirror go?15:05
*** jamielennox is now known as jamielennox|away15:05
inc0thank you sir15:05
rcarrillocruzworked just fine15:06
rcarrillocruzi had issues, unrelated to launcher15:06
rcarrillocruzcomputes did not have the self-signed certificate for the controller trusted15:06
pabelangerrcarrillocruz: cool. I'll use it for osic-cloud8 then15:06
rcarrillocruzbut i did an ansible -a 'update-ca-certificates' against all and then i could create it fine15:06
rcarrillocruzpabelanger: ++15:06
skraynevfungi: thank you :)15:06
inc0btw pabelanger, me and mrhillsman have a talk in Barcelona about benchmarking your cloud with openstack infra15:06
inc0fyi:)15:07
*** tongli has quit IRC15:07
fungiooh, i'll need to sit in on that one15:07
fungii promise i won't heckle15:07
fungi(much)15:07
pabelangersounds fun15:07
inc0we'll try to convince people out there to do what we're doing15:07
pabelangerindeed15:07
fungisound logic. how could i disagree?15:07
inc0fungi, there always is a way15:08
inc0somebody is wrong somewhere, always15:08
inc0internet taught me this15:08
*** bethwhite_ has quit IRC15:08
*** jaosorior has quit IRC15:08
fungigranted, our application is a bit atypical for cloud apps in general, as we tend to do a lot of expensive (boot and delete) operations, repeated huge image uploads, et cetera15:09
*** ijw has quit IRC15:09
*** armax has joined #openstack-infra15:09
*** _coolsvap_ is now known as coolsvap15:09
*** thcipriani|afk is now known as thcipriani15:09
openstackgerritMerged openstack-infra/tripleo-ci: Split additional features across the periodic jobs  https://review.openstack.org/36290415:09
inc0fungi, well, benchmarking is about stressing your env right?15:09
fungiyeah, we'll stress it in some ways your typical customers probably won't15:09
inc0we'll try to characterize this workload and see if we need to add any artificial load on top of it15:09
*** yamahata has joined #openstack-infra15:10
fungii have a feeling we put a lot of load on the api and storage backend15:10
inc0we'll figure it out, still it will test out basic operations heavily - and that is valuable too15:10
mordredfungi: otoh - I think nodepool is what people tell me "cloud native" workloads are supposed to look like15:10
*** matbu|mtg is now known as matbu15:10
mordredfungi: you know, "cattle" that you delete and replace rather than fix - tons of elastic things15:11
*** zhurong has quit IRC15:11
*** ddieterly is now known as ddieterly[away]15:11
inc0mordred, another thing is that we intend to drop live migration and move stuff around too15:11
inc0at some point15:11
*** ddieterly[away] is now known as ddieterly15:11
fungimordred: until you say "oh, we delete these servers on average every 30 minutes"15:11
mordredfungi: funny enough - the cloudfoundry and kubernetes folks seem to think that's _slow_15:11
fungiwow15:11
*** ddieterly is now known as ddieterly[away]15:11
electrofelixnt: you don't need to update your templates, instead this is an ini file change, see http://docs.openstack.org/infra/jenkins-job-builder/execution.html#future-section15:11
mordredfungi: also, our control plane is apparently "not cloud enough"15:11
mordredfungi: so somehow we're both "too cloud" and "not cloud enough" simultaneously15:12
fungimordred: our control plane is not cloud at all. we hardly ever delete it, and we get upset when it breaks15:12
mordredit makes me wonder if any of the people making up these terms have ever actually used a cloud, or if they've ever even run a service15:12
inc0well, tbh k8s sucks ass in terms of databases, which are not cattle by definition15:12
*** sputnik13 has quit IRC15:12
mordredinc0: funny that, right?15:12
inc0mordred, that being said I am core in kolla-k8s too;)15:13
*** sputnik13 has joined #openstack-infra15:13
*** sputnik13 has quit IRC15:13
fungiyeah, i'm curious how you treat your cloudfoundry or kubernetes control plane as cattle15:13
mordredinc0: db is one of the things I always bring up when people tell me I should have no persistent or special computers15:13
*** dprince has quit IRC15:13
inc0we're actually trying to make control plane work and there are nice things about it15:13
*** ggnel_t has quit IRC15:13
inc0yeah, it's a religion which has holes, like most religions15:13
electrofelixnt: this will change the order in which the sets of parameters are combined when using the 'trigger-parameterized-builds' module, from the order defined in code to the order you define in the template definition. In certain cases, depending on the order of the various parameter sets, if you have the same parameter defined multiple times, the last definition of that parameter wins15:13
fungimordred: databases aren't computers! they're just a haze of magic electrons permeating all your cattle15:14
fungiyou can safely ignore databases15:14
fungibecause cloud15:14
ntelectrofelix, thank you, changed the ini file as suggested and the warnings went away.  i was just a bit confused on what the ordering stuff meant.  thanks for clarifying!15:14
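(For reference, the ini change nt describes boils down to adding the section named in the warning to the JJB config file; the path below is an assumption, since JJB also accepts an explicit --conf argument:)
    # append the future-behaviour flag to the jenkins_jobs ini file (path assumed)
    cat >> /etc/jenkins_jobs/jenkins_jobs.ini <<'EOF'
    [__future__]
    param_order_from_yaml = true
    EOF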
electrofelixnt: which can affect the values of the parameters in the triggered build15:14
*** tongli has joined #openstack-infra15:14
inc0when I get into this discussion I've even heard answers like "proper enterprises use an oracle cluster on the side"... which is, well...15:14
pleia2good morning15:14
inc0fungi, or even better, use mongo db15:14
pabelangermordred: so, speaking of not cloud enough. Thoughts on standing up a 2nd nodepool.o.o server to build DIBs for our control plane servers?  I'd like to see us do that15:14
inc0it's webscale15:14
jrollif you use /dev/null for your db you can totally do it cloudy15:14
electrofelixnt: it should only be an edge case where it bites people, hence the warning being present for a while15:14
fungiinc0: i happen to have seen what happens when your proper enterprise oracle cluster members get deleted or taken up and down rapidly15:14
*** vhosakot has joined #openstack-infra15:15
fungiit's... not pretty15:15
inc0jroll, https://github.com/dcramer/mangodb15:15
jrollinc0: ++15:15
*** vinaypotluri has joined #openstack-infra15:15
inc0so I need to teach openstack to run on top of riak15:15
*** tonytan4ever has joined #openstack-infra15:15
inc0riak works.15:15
openstackgerritPeter Zhurba proposed openstack-infra/project-config: Add repo for openstack/puppet-glare.  https://review.openstack.org/36295015:15
*** Guest81 has joined #openstack-infra15:16
mordredfungi: I honestly thought you said "premeating your cattle"15:18
*** tongli has quit IRC15:18
fungimordred: pre-meat is what cattle are, after all15:18
*** tonytan_brb has quit IRC15:18
mordredfungi: yah. but maybe if  you meat your meat something wonderful happens15:18
fungipre-meat your cattle for extra meatiness15:19
inc0kill a server before you even start it? that's so cloud!15:19
*** jamesdenton has quit IRC15:20
*** tphummel has joined #openstack-infra15:20
*** ddieterly[away] is now known as ddieterly15:20
*** jamesdenton has joined #openstack-infra15:20
inc0everything is cloud native if you kill it early enough15:20
inc0I think I'll make tshirt with this phrase.15:21
*** hockeynut has joined #openstack-infra15:21
*** pcaruana has quit IRC15:22
*** rcernin has quit IRC15:22
openstackgerritMerged openstack-infra/system-config: Run puppet on infracloud in a different cron  https://review.openstack.org/36371915:22
*** mdrabe has quit IRC15:24
*** mdrabe has joined #openstack-infra15:24
anteayagood morning pleia215:25
clarkbrcarrillocruz: fwiw I thought we had solved the cert trusting issue in puppet for the cloud because we ran into that previously15:25
mordredinc0: ++15:25
clarkbalso has anyone investigated devstack git timeouts further? does this affect our cloud instances or just sdague?15:25
rcarrillocruzclarkb: we have a cacert.pp manifest to do that, but i think the logic is off15:25
anteayaclarkb: I have not investigated nor have I witnessed that anyone else has either15:26
rcarrillocruzhttps://github.com/openstack-infra/puppet-infracloud/blob/master/manifests/cacert.pp#L2415:26
anteayathough I did miss a good bit of yesterday due to weather in my area15:26
rcarrillocruzthat looks to me like it's only going to exec if the file changes15:26
clarkbrcarrillocruz: ok I know there was some trouble with it in the past but I thought yolanda and pabelanger and crinkle_ sorted it out15:26
rcarrillocruzbut on first deploy, the update-ca-certificates never gets run15:26
*** yamamoto has quit IRC15:26
*** andreas_s has quit IRC15:27
pabelangerclarkb: I've poked around on tripleo-test-cloud-rh1, basically confirming there is an issue.  Sounds like we have a patch to zuul-cloner to expose timeouts and retries but that hasn't landed yet15:27
fungiclarkb: i made a cursory look at the cacti graphs for git.o.o, but didn't really dig deeper15:27
rcarrillocruzi mean, i ran an ansible -a 'update-ca-certificates' and pretty much all servers added the cert on the command output15:27
rcarrillocruzso it never ran15:27
rcarrillocruzdespite the file being on /usr/local/share/ca-certificates15:27
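(A minimal sketch of the manual workaround being described, with an assumed file name for the infracloud CA and an assumed ansible group name; update-ca-certificates only picks up files ending in .crt under /usr/local/share/ca-certificates:)
    # copy the self-signed CA into place and refresh the trust store by hand
    sudo cp infracloud-root-ca.crt /usr/local/share/ca-certificates/
    sudo /usr/sbin/update-ca-certificates
    # or across many hosts at once, roughly what was run here:
    ansible compute -b -a '/usr/sbin/update-ca-certificates'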
clarkbanyone have a source IP address for one of the timeouts? we can check against the haproxy log15:28
fungias far as i know it's not impacting our ci, but we also have a warm cache, so maybe this is more pronounced for people who don't have that to dramatically reduce their git remote operations15:28
anteayapabelanger: sorry I missed most of the conversation yesterday, what was the outcome of the rename all the things to windmill discussion?15:28
clarkbrcarrillocruz: or the command variable is wrong?15:28
*** Guest81 has quit IRC15:28
*** esp has quit IRC15:29
rcarrillocruzoh wait15:29
rcarrillocruzi believe fatih from opnfv had a change for that15:29
rcarrillocruzyou may be right15:29
rcarrillocruzsec15:29
pabelangeranteaya: no problem, I've abandoned the rename process for now. We can have some more discussions about it in the future15:29
*** sandanar has joined #openstack-infra15:29
rcarrillocruzyup15:29
rcarrillocruzhttps://review.openstack.org/#/c/361652/1/manifests/params.pp15:29
rcarrillocruzclarkb: ^15:29
clarkb`sudo tail -f /var/log/haproxy.log | grep -v -- --` is the command to see less normal connections on the haproxy instance15:30
anteayapabelanger: oh okay, thanks, I appreciate that, happy to participate in the future discussion if my participation is helpful15:30
pabelangerclarkb: 66.187.229.153 is from tripleo-test-cloud-rh115:30
anteayayay 3 things to look at for friday15:30
anteayafungi: did anyone create an etherpad for friday yet, do you know?15:30
clarkbpabelanger: grepping for that IP in the haproxy log doesn't show me anything that looks broken. There are connections that were ended normally according to the log15:31
clarkbfrom just over 2 hours ago15:31
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Add compute016.vanilla to disabled group  https://review.openstack.org/36375115:32
*** esikachev has quit IRC15:32
rcarrillocruzfungi: is that ^ the right way to disable a long-term server15:32
rcarrillocruz?15:32
fungianteaya: not yet, no. i'm still on the fence about using the documented playbook since it really only takes care of the easy parts (directory moves, database update queries, gerrit group renames) and punts on the hard things (restarting/requeuing zuul, moving/transferring in github)15:32
clarkbpabelanger: is there a job log I can compare timestamps with for that IP?15:32
rcarrillocruzpabelanger: i just completed puppet ansible run on infracloud mirror15:33
anteayafungi: I feel we need an etherpad regardless of the chosen workflow15:33
anteayafungi: would you agree?15:33
rcarrillocruzit's nice there's a wildcard on the site.pp, i didn't have to add anything15:33
fungianteaya: sure, but having an etherpad is as easy as making up a title for it15:33
rcarrillocruzSO15:33
*** dmellado|mtg is now known as dmellado15:33
rcarrillocruzgiven that we have 42 computes15:33
rcarrillocruzand the mirror is up15:33
rcarrillocruzand dns is up15:33
anteayafungi: yes, i just didn't want to do that if one existed already15:33
fungianteaya: it's what to put in that etherpad that still needs to be decided15:33
rcarrillocruzi think we are good to add some servers on infracloud to nodepool?15:33
mordredrcarrillocruz: wow, awesome15:33
rcarrillocruzclarkb, fungi , pabelanger  ^15:34
anteayafungi: agreed, though I can add the patches and their status and leave the remainder blank for the moment15:34
fungianteaya: thanks15:34
anteayaokay thank you15:34
*** jlanoux has quit IRC15:34
fungircarrillocruz: neat! did you do any benchmarking yet to figure out what the flavor needs to have (particularly from a cpu performance perspective)?15:35
clarkbconfirmed it is /usr/sbin/ on ubuntu not just debian15:35
rcarrillocruzfungi: i ran a nova dsvm full tempest last week15:35
rcarrillocruzit took a bit less than an hour15:35
rcarrillocruzwhich is in line with an osic run15:35
openstackgerritBen Nemec proposed openstack-infra/tripleo-ci: Set default for pingtest template  https://review.openstack.org/36375315:35
rcarrillocruzobv. , there were no neighbours on that compute :-)15:35
anteayafungi: https://etherpad.openstack.org/p/project-renames-Septemeber-201615:35
rcarrillocruzi suggest we bump little by little and see how it goes?15:36
mordredrcarrillocruz: ++15:36
*** tonytan_brb has joined #openstack-infra15:36
mordredrcarrillocruz: it'll be fun to watch load on the cloud as we increase nodepool load15:36
rcarrillocruzindeed15:36
rcarrillocruzi believe we'll have never seen so much capacity, if we are able to bring up the whole infracloud PLUS the new osic15:36
rcarrillocruz\o/15:36
*** tesseract- has quit IRC15:36
*** JerryOpenix has quit IRC15:37
* clarkb has started a local clone nova loop to see if the issue can be reproduced from here15:38
rcarrillocruzi'll propose the patch15:38
*** tonytan4ever has quit IRC15:38
openstackgerritMatthew Treinish proposed openstack-infra/project-config: Add lpmqtt project  https://review.openstack.org/36329615:39
clarkbrcarrillocruz: and you have trusted the cert everywhere?15:39
openstackgerritMatthew Treinish proposed openstack-infra/project-config: Add puppet-lpmqtt project  https://review.openstack.org/36329715:39
rcarrillocruzclarkb: yeah, i did an ansible -a 'update-ca-certificates' ~compute15:39
fungiclarkb: rcarrillocruz: it looks like the ca-certificates package provides /usr/sbin/update-ca-certificates on my debian systems15:39
fungisame as on ubuntu 14.0415:39
rcarrillocruzy15:40
*** salv-orlando has joined #openstack-infra15:40
clarkbrcarrillocruz: it's not just compute that needs it fwiw15:40
clarkbrcarrillocruz: the controller(s) should also get that as things talk to keystone for example15:40
*** sdake_ is now known as sdake15:40
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/project-config: Enable infracloud servers in Nodepool  https://review.openstack.org/36375615:41
rcarrillocruzthe controller i did myself manually15:41
clarkb(it's a good thing my ISP doesn't have a quota on my usage as I clone nova over and over and over again)15:42
* rcarrillocruz goes grab some coffee15:42
openstackgerritEmilien Macchi proposed openstack-infra/tripleo-ci: Implement scenari001, 002 and 003  https://review.openstack.org/36250415:43
clarkbgiven that there hasn't been tons of screaming on the git thing is it possible that sdague and tripleo cloud share a common networking issue to git.o.o? I haven't been able to reproduce this yet locally from the other coast and haproxy logs from a quick grep look happy15:43
*** mikelk has quit IRC15:44
*** cody-somerville has quit IRC15:45
*** nstolyarenko has joined #openstack-infra15:45
*** armax has quit IRC15:45
mordredclarkb: maybe sdague and the tripleo cloud are actually the same person ... have you ever seen them in the same place at the same time??15:45
anteayathe higgins to zun rename patch isn't a rename patch, I just discovered; it merged in june and changes the names of their irc channels15:46
*** salv-orlando has quit IRC15:46
*** _nadya_ has quit IRC15:46
anteayatrying to flag down one of their developers to get them to offer a rename patch15:46
*** esikachev has joined #openstack-infra15:47
*** matt-borland has joined #openstack-infra15:47
*** armax has joined #openstack-infra15:47
*** abregman has quit IRC15:47
*** mkarpin has joined #openstack-infra15:47
*** matrohon has quit IRC15:48
*** markvoelker has quit IRC15:48
*** kaisers_ has joined #openstack-infra15:48
mkarpinHello all! are there some issues with git.openstack.org or review.openstack.org?15:48
nstolyarenkoHello Folks! Could you please review my patch https://review.fuel-infra.org/#/c/25430/. It is very important for us. Thank you15:49
clarkbmkarpin: that is something we are investigating. Are you noticing timeouts to git.openstack.org? have any more info?15:49
openstackgerritDoug Hellmann proposed openstack-infra/release-tools: sort output of latest-deliverable-versions by team  https://review.openstack.org/36376315:49
mordredsc68cal: wanna see a funny typo?15:50
mordredsc68cal:             filters={'pysical_network': 'public'})15:50
rcarrillocruzfwiw i had random lags pulling from git.openstack.org about an hour and a half ago15:50
openstackgerritMerged openstack-infra/puppet-infracloud: Fix path to update-ca-certificates for Debian  https://review.openstack.org/36165215:50
mordredsc68cal: not surprisingly, that did not find the network15:50
sc68calmordred: haha so that's why it failed15:50
mat128mordred: too much python :)15:50
openstackgerritDoug Hellmann proposed openstack-infra/release-tools: sort output of latest-deliverable-versions by team  https://review.openstack.org/36376315:51
clarkbpabelanger: I am looking at https://review.openstack.org/#/c/356703/7/modules/openstack_project/templates/nodepool/clouds.yaml.erb will cloud8 not have the same network setup as cloud1? eg we need to explicitly list that the v6 network has public v6 and private v4?15:51
sc68calnstolyarenko: wrong channel. #fuel ?15:51
mkarpinclarkb yes, on my third party ci i have devstack stuck on15:51
mkarpin2016-08-31 14:26:37.704 | ++ /opt/stack/murano/devstack/plugin.sh:install_murano:357 :   git_clone git://git.openstack.org/openstack/murano.git /opt/stack/murano master15:52
openstackgerritMonty Taylor proposed openstack-infra/shade: Skip test creating provider network if one exists  https://review.openstack.org/36371515:52
jeblairclarkb, rcarrillocruz, mordred: i know everyone working on nodepool stuff has access to the server now, but it still might be nice to get this in: https://review.openstack.org/34694315:52
mkarpinclarkb https://murano-ci.mirantis.com/jenkins/job/gate-murano-dashboard-ubuntu/975/consoleFull15:52
rcarrillocruzlooking15:52
mordredjeblair: lgtm15:52
rcarrillocruzjeblair: lgtm15:53
clarkbmkarpin: does that host have ipv6?15:54
clarkbmkarpin: or would this be ipv4 only?15:54
*** rcernin has joined #openstack-infra15:54
mordredclarkb: btw - a devstack change made the shade gate not work, so we didn't land the revert so I didn't restart nodepool15:54
mkarpinI think ipv4 only15:54
clarkbmordred: kk15:55
mordredclarkb: fix to the test case is in flight, at which point we can land the revert15:55
nstolyarenkoSorry, wrong patch URL. Please review this one https://review.openstack.org/#/c/361965/15:55
mkarpinclarkb I think ipv4 only, need to check with it guys15:55
clarkbmkarpin: looking at the log, the origin is git.openstack.org not review.openstack.org. Do you have some other information that would indicate review.openstack.org is also affected?15:55
jeblairrcarrillocruz: are we about to turn on infracloud for realz?15:55
mkarpinclarkb i think i have something one moment15:56
*** bethwhite_ has joined #openstack-infra15:56
clarkb(if review.o.o is also affected I am more likely to blame the network as they sit in the same DC but are otherwise hosted on completely different systems)15:57
rcarrillocruzjeblair: yes, i think we are good to test it15:57
rcarrillocruzto recap:15:57
rcarrillocruzvanilla has 48 nodes15:57
rcarrillocruzmitaka15:58
jeblairrcarrillocruz: 48 compute hosts?15:58
rcarrillocruz2 nodes are taken for bifrost/baremetal and another one for controller15:58
bnemecclarkb: FWIW, this is what I get when I clone from the tripleo cloud: http://paste.openstack.org/show/565231/15:58
rcarrillocruzout of 46 compute hosts, 3 are not working, due to NIC / vlan issues15:58
clarkbwhy does bifrost need 2 instances?15:58
bookwarclarkb: we have the following bug reported https://bugs.launchpad.net/mos/+bug/161893615:58
jeblairgotcha15:58
openstackLaunchpad bug 1618936 in Mirantis OpenStack "[pkgs-ci-pub] Unusual upstream gerrit behavior" [Critical,New] - Assigned to Fuel CI (fuel-ci)15:58
bnemecOh wait, it finally got off 36%.15:58
rcarrillocruzclarkb: just one, bifrost/baremetal is the same thing15:59
mgagneis there a way to rebase a whole topic in Gerrit? suppose you don't have the commit locally yet15:59
rcarrillocruzit's just the bifrost machine is called baremetal0015:59
bnemecNow it's stuck at Receiving objects:   1% (134/13306), 44.00 KiB | 1024 bytes/s15:59
*** shamail has joined #openstack-infra15:59
*** ansiwen has quit IRC15:59
fungidhellmann: looks like puppet finally applied 363156 to signing01.ci.o.o15:59
rcarrillocruzi'll sort out the offending servers with the DC folks15:59
rcarrillocruzand as for chocolate15:59
mgagnegit review -d <change-id> will download the chain in its current state but won't update the chain to latest patchset15:59
rcarrillocruzwe have more machines15:59
jeblairclarkb: at one point i think we decided to do that for safety/redundancy15:59
rcarrillocruzbut HW is not as good15:59
dhellmannfungi : thanks for the heads-up15:59
clarkbbookwar: to be clear the https and ssh hosts are completely different there16:00
*** matthewbodkin has quit IRC16:00
mkarpinclarkb i have zuul often stuck with things like that, for example http://paste.openstack.org/show/565230/, it just gets stuck and does not merge; in the ps it looks like http://paste.openstack.org/show/565233/16:00
rcarrillocruzjeblair, clarkb : we had baremetal00 and baremetal01 cos we had east/west in different locations16:00
pabelangerclarkb: yes, I should fix that16:00
clarkbbookwar: you may want to clarify that in the bug as otherwise its really really confusing16:00
rcarrillocruzbut now we have just one bifrost serving pxe boot to both16:00
jeblairmgagne: not a whole topic at once, i don't think.  (in gerrit, the 'cherry-pick to branch' command will probably work for one change)16:00
jeblairmgagne: but!16:00
*** abregman has joined #openstack-infra16:00
mkarpinclarkb it's only today16:00
mgagnegertty again? =)16:00
openstackgerritPeter Zhurba proposed openstack-infra/project-config: Add repo for openstack/puppet-glare.  https://review.openstack.org/36295016:00
*** gyee has joined #openstack-infra16:00
fungimgagne: you might want to `git review -d somechange` followed by `git restack` https://pypi.org/project/git-restack/16:01
*** Ahharu has quit IRC16:01
fungimgagne: though as jeblair points out, a patch series and a topic are distinct concepts in gerrit. probably would help to know for sure which you mean16:01
mgagnefungi: how will restack know about the latest version of a change from Gerrit? it looks like git-review will cherry-pick and won't know about updated changes16:01
jeblairmgagne: you can download the last change, then run 'git restack'.  though that's usually best for *not* rebasing a series.  if you *do* want to rebase the series, then plain old 'git rebase' may be what you want.16:01
mgagnefungi: any form of chain of commits16:02
*** weshay is now known as weshay_food16:02
jeblairmgagne: 'git review -d <tip>' will download the whole patch series16:02
Zaraquestion from the StoryBoard meeting-- we have a pythonclient! it has docs (and further docs-in-progress)! the docs aren't rendered anywhere handy! we think it's better to keep things modular and keep them living in the pythonclient repo, so now we're wondering: how can we ensure they're tracked on docs.openstack.org?16:02
fungimgagne: oh, is the problem you're seeing that you have a series of dependent changes A,B,C and someone has uploaded a new version of B without rebasing C?16:02
mkarpinclarkb bookwar i am experiencing exactly the same as https://bugs.launchpad.net/mos/+bug/161893616:02
openstackLaunchpad bug 1618936 in Mirantis OpenStack "[pkgs-ci-pub] Unusual upstream gerrit behavior" [Critical,New] - Assigned to Fuel CI (fuel-ci)16:02
mgagnethe case is: the one performing the rebase doesn't have the latest version locally and can only access Gerrit16:02
anteayaI've informed hongbin in the -zun channel that we need a rename patch for their project to be renamed: http://eavesdrop.openstack.org/irclogs/%23openstack-zun/%23openstack-zun.2016-08-31.log.html#t2016-08-31T15:43:0516:02
clarkbrcarrillocruz: gotcha16:02
anteayain case someone shows up asking about it and I'm not around16:03
mgagnefungi: that's one example. I know you can rebase from the UI but let's say, for the sake of this example, that there are a million changes. can't click forever16:03
zaromorning16:03
*** apuimedo is now known as apuimedo|away16:03
*** trown is now known as trown|brb16:03
anteayamorning zaro16:03
jeblairZara: add jobs similar to the storyboard jobs in project-config16:03
clarkberror: RPC failed; curl 56 SSL read: error:00000000:lib(0):func(0):reason(0), errno 104 success! now to see what haproxy/apache say about my ip16:03
Zarajeblair: aha, thank you16:03
*** edtubill has joined #openstack-infra16:04
jeblairZara: 'infra-publish-jobs' looks like the name in zuul16:04
*** devananda|MOVING is now known as devananda16:05
*** ansiwen has joined #openstack-infra16:05
jeblairZara: looks like they are already setup in jjb, so just the zuul layout.yaml change is needed16:05
jeblairoh wait16:05
jeblairwhat's python-storyboardclient-infra-docs-tags-only16:06
ZaraI have no idea but it sounds exciting.16:06
*** ildikov has quit IRC16:06
jeblair    description: Publish infra documents, use when only publish on tag16:06
clarkb[31/Aug/2016:15:55:05.089] balance_git_https balance_git_https/git08.openstack.org 1/0/253881 108025 cD 73/58/58/7/0 0/016:06
jeblairZara: so it looks like storyboardclient docs are already set to be published, but only when the repo is tagged16:07
*** rockyg has joined #openstack-infra16:07
clarkband from the haproxy manual: this is often caused by network failures on the client side16:07
*** hewbrocca is now known as hewbrocca-afk16:07
jeblairZara: so when it is tagged, it should show up here: http://docs.openstack.org/infra/python-storyboardclient/16:08
openstackgerritAdam Coldrick proposed openstack-infra/storyboard: Send notifications to subscribers for worklists  https://review.openstack.org/35473016:08
clarkbthe tripleo IP that pabelanger provided did not close any connections with the cD state though16:08
openstackgerritAdam Coldrick proposed openstack-infra/storyboard: Make it possible to get worklist/board timeline events via the API  https://review.openstack.org/35472916:08
openstackgerritAdam Coldrick proposed openstack-infra/storyboard: Don't allow users to subscribe to private worklists they can't see  https://review.openstack.org/36377616:08
*** trown|brb is now known as trown16:08
jeblairZara: i don't know why that choice was made for that repo16:08
SotKI suspect it would be sensible for us to undo that choice16:09
*** drifterza has quit IRC16:09
*** hashar has quit IRC16:09
*** Julien-zte has quit IRC16:09
jeblairclarkb: http://grafana.openstack.org/dashboard/db/git-load-balancer?panelId=11&fullscreen16:10
jeblairclarkb: zoom out a bit there16:10
openstackgerritMerged openstack-infra/project-config: Normalize projects.yaml  https://review.openstack.org/36366916:10
Zara(I suppose the choice has meant I've just learned more about how project-config works, but I agree)16:10
clarkbthat's neat, I go from 7ms to 72ms rtt in seattle16:10
clarkbit's like they have thousands of miles of cable looped up to spin my packets around in16:10
jeblairclarkb: oh, here's how you link zoom: http://grafana.openstack.org/dashboard/db/git-load-balancer?from=1471968641519&to=147265984151916:10
anteayaclarkb: just for you16:10
jeblairgrr.. that's the whole dashboard16:11
*** links has joined #openstack-infra16:11
jeblairclarkb: so nevermind -- do that first link then zoom out :)16:11
anteayalike flash boys I think is the name of the book16:11
openstackgerritMerged openstack-infra/system-config: Update osic-cloud1 credential format  https://review.openstack.org/35670216:11
fungimgagne: yeah, i agree the git-review -d behavior seems to be to download the latest patchset of the change you specify (assuming you don't include a patchset number for it) along with the specific patchsets (not latest patchsets) it depends on in other changes16:11
*** Sukhdev has joined #openstack-infra16:11
clarkbjeblair: ok16:11
clarkbfungi: mgagne yes because its just fetching the patchset you told it to and git is pulling in the parents automagically16:12
mgagnefungi: yea, will check with the one with the problem, it's not me. Deps aren't showing well in Gerrit so it's hard to visualize the state of things. maybe it's just a matter of git review -d the chain and rebase against his own change.16:12
mgagneclarkb: true. I think it's just a matter of education around what git-review is really doing16:12
*** asettle has quit IRC16:12
clarkbjeblair: interesting. What does that map to in the haproxy logs? is it the health checks on the backend16:12
*** javeriak has joined #openstack-infra16:13
*** tongli has joined #openstack-infra16:13
*** ianw has quit IRC16:13
jeblairclarkb: i will refresh my memory :)16:13
openstackgerritAdam Coldrick proposed openstack-infra/storyboard: Send notifications to subscribers for worklists  https://review.openstack.org/35473016:13
openstackgerritAdam Coldrick proposed openstack-infra/storyboard: Make it possible to get worklist/board timeline events via the API  https://review.openstack.org/35472916:13
*** asettle has joined #openstack-infra16:13
*** abregman has quit IRC16:13
mordredodyssey4me, rbergeron: re: galaxy api ... it appears that there IS a REST API - it's just not documented. sniffing the network traffic of the web ui shows the API interactions to do things like "hey, please import repo X"16:13
fungimgagne: right, i'm thinking through ways we could enhance git-review to make that easier. it's a complex problem because you ultimately need to initiate a rebase for any changes after one which has a newer patchset in gerrit, and that cascades the rest of the way up the series, potentially resulting in merge conflicts on multiple commits along the way16:14
jeblairclarkb: it's eresp: http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/files/git/haproxy-statsd.py#n7616:14
clarkbAH01215: fatal: The remote end hung up unexpectedly I see those in the git error log for httpd on git0316:15
clarkb(picked git03 as it seemed to be errory according to the graph jeblair linked)16:15
Zarajeblair: hm, looking at `release:` there, it seems storyboard has a similar setting (storyboard-infra-docs-tags-only)16:16
Zarabut docs for storyboard are rendered16:16
Zaraso I'm wondering if there's anything else affecting it16:16
ihrachysso what about those network glitches when working with openstack infra resources (git, gerrit)? is it a known thing?16:16
clarkbjeblair: that includes write errors on the client sockets but haproxy claims that won't be counted against the server stats. Not sure how else it would show them in that case16:16
Zara(or hm, maybe the repo only needs one tag and doesn't have one)16:17
*** Sukhdev has quit IRC16:17
*** yamamoto has joined #openstack-infra16:17
Zara(I parsed it as 'release on new tag')16:17
openstackgerritJesse Pretorius (odyssey4me) proposed openstack-infra/project-config: Move unsuccessful non-voting OSA jobs to experimental  https://review.openstack.org/36378316:17
*** javeriak has quit IRC16:17
*** asettle has quit IRC16:17
openstackgerritPaul Belanger proposed openstack-infra/system-config: Add credentials for osic-cloud8  https://review.openstack.org/35670316:17
*** javeriak has joined #openstack-infra16:18
pabelangerraddaoui: ^ updates for osic-cloud816:18
*** kaisers_ has quit IRC16:18
*** akshai has quit IRC16:18
clarkbfor my specific connection git said: fatal: The remote end hung up unexpectedly on my desktop. Haproxy said client disconnected16:18
clarkbthis tells me neither end wanted to close the tcp connection but something did16:18
mgagnefungi: yes. I think what that person asked me is a bit far fetched, where he wanted to rebase someone else's series of changes, which you usually don't do yourself.16:18
mgagnefungi: and as you said, you could end up in merge conflicts hell16:19
openstackgerritPaul Belanger proposed openstack-infra/system-config: Add credentials for osic-cloud8  https://review.openstack.org/35670316:19
raddaouinice pabelanger . mrhillsman: ^16:19
openstackgerritMerged openstack-infra/puppet-nodepool: Proxy nodepool webapp status commands  https://review.openstack.org/34694316:19
clarkbnone of the git hosts appear anywhere near their bw limits. Grafana shows that haproxy statistics include an increase in https server eresp errors16:20
jeblairmgagne: sometimes i use gertty for this -- i walk down the tree and hit 'x' to cherry pick the latest version of each patchset in whatever order i want to do it in.16:20
*** ijw has joined #openstack-infra16:20
clarkbbut review.openstack.org is also apparently affected, making me further think it's less of a git.openstack.org issue, as those two stacks are so vastly different. Different gits, apaches, ssls, kernels, etc16:20
openstackgerritPaul Belanger proposed openstack-infra/system-config: Add credentials for osic-cloud8  https://review.openstack.org/35670316:21
mgagnejeblair: yes. It happens that the person isn't used to the gerrit workflow but to the github PR one instead, which hides a lot of git concepts =)16:21
pabelangerraddaoui: mrhillsman: if you want to check our configuration for cloud8 ^16:21
Zara(and hm, `git tag` doesn't show any tags for either repo; am I thinking of the right sort of tag?)16:21
AJaegerZara, just finished reading backscroll...16:21
AJaegerZara, let me help: We publish in general - the client projects only when there are tags, since most users will install a release.16:22
AJaegerAnd for server projects we publish with each version - so that developers get the info.16:22
*** martinkopec has quit IRC16:22
*** Benj_ has joined #openstack-infra16:22
mordredrbergeron, odyssey4me: I have updated the galaxy-issues bug with information16:22
AJaegerBut we can change that. If you want to change the job for your client, just send a proposal.16:22
AJaegerZara: Or do a release ;)16:22
jeblairclarkb: the only other thing i note from the graphs is that there seem to be slightly more http (not https) current sessions today than normal.  it's a stretch.  you kind of have to squint to see it.  i'm not giving it a lot of weight.16:23
anteayaAJaeger: any idea what yuval's irc nick is? https://review.openstack.org/#/c/35330416:23
*** Swami has quit IRC16:23
odyssey4memordred reverse engineers yet another api :)16:24
anteayaAJaeger: and i don't know if you caught it in backscroll but zun doesn't have a rename patch in gerrit16:24
AJaegeranteaya: http://stackalytics.com/?user_id=jhamhader -> https://launchpad.net/~jhamhader -> it's youval16:24
AJaegeranteaya: didn't catch that one ;(16:24
anteayaAJaeger: their link in the wiki was to a patch to rename their channels that you merged in june16:24
fungimgagne: what i've usually done is isolate the first change depending on an outdated patchset, then `git review -d` the latest patchset for its parent change id and `git review -x` the change in question followed by all child changes in the series one by one, fixing merge conflicts as i go16:24
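(A sketch of that manual workflow, with made-up change numbers standing in for the parent and its children:)
    git review -d 123456    # fetch the latest patchset of the parent change
    git review -x 123457    # cherry-pick the first outdated child on top of it
    git review -x 123458    # ...and each remaining child in order, resolving conflicts as they appear
    git review              # push the rebased series back to gerrit as new patchsets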
AJaegeryuval I mean16:24
rbergeronmordred: thank you! :)16:24
AJaegeranteaya: ;(16:24
*** ijw has quit IRC16:24
anteayaAJaeger: neither did I, so I posted in their channel16:24
ZaraAJaeger: Thanks, that makes things clearer. Though I'm still confused because the jobs suggest that docs for both projects are released on an 'only when tagged' basis, but we have docs up for one and not the other, and neither has any tags. I'm wondering if the client also needs to be listed explicitly here: https://git.openstack.org/cgit/openstack-infra/project-config/tree/docs-site/infra-documents.16:24
*** yaume_ has quit IRC16:25
Zarayaml ?16:25
anteayaAJaeger: so far no response16:25
Zarawhat a wonderful place to split a url16:25
Zarahttps://git.openstack.org/cgit/openstack-infra/project-config/tree/docs-site/infra-documents.yaml16:25
mordredrbergeron: honestly, just docs on auth handshake would be Good Enough16:25
AJaegeranteaya: https://review.openstack.org/#/c/32924716:25
mgagnefungi: yea but I strongly suggested to that person: yea, don't do that, let that person deal with his own changes.16:25
fungiZara: it's possible the one which has docs and no tags originally had a different docs publication job in place16:25
anteayaAJaeger: and thank you, yuval or someone else from smaug/karbor, this patch needs a rebase please: https://review.openstack.org/#/c/35330416:25
*** shashank_hegde has joined #openstack-infra16:25
anteayaAJaeger: ah thank you16:26
AJaegeranteaya: will you update the wiki? And update the topic on all changes for consistency?16:26
*** tongli_ has joined #openstack-infra16:26
clarkbjeblair: the connection retries for the backends seem to be sitting consistently at 0 which makes me think that the backends are pretty happy16:26
anteayatopic is updated as long as they keep the same topic during rebases16:26
anteayawill update the wiki, thank you16:27
jeblairZara: yes it does need to be listed there, but only after being published; that's just the index page16:27
clarkbtotal connections I don't see going over 100, so we should be well below any system fd limits16:27
AJaegerfungi, that's the same approach I use16:27
AJaegerthanks, anteaya16:27
clarkbthe cD state does seem to occur after roughly 120000 milliseconds between connection accept and close16:28
clarkb(there are some outliers to that but not many)16:28
Zarafungi: ah, okay. I don't have history so I don't know if that happened or not, am looking at storyboard over here: https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n470516:28
*** roxanaghe has joined #openstack-infra16:28
anteayaAJaeger: thank you, also fixed in etherpad which is https://etherpad.openstack.org/p/project-renames-Septemeber-201616:28
*** ddieterly is now known as ddieterly[away]16:28
AJaegerZara, yes, you need to list it there. But let's check the contents on docs.o.o...16:29
*** jpich has quit IRC16:29
clarkbit's like something is just nuking any connection taking longer than 2 minutes16:29
clarkb(except there are plenty of happy connections that take longer too)16:29
AJaegerZara, last push of storyboard docs was on 26th of August16:29
*** tongli has quit IRC16:29
*** dprince has joined #openstack-infra16:30
*** tongli_ has quit IRC16:30
AJaegerZara, for storyboard there's the template infra-publish-jobs used - which publishes after each commit.16:30
*** yamamoto has quit IRC16:30
AJaegerSo, we do both right now - publish after tag and publish after each commit.16:30
clarkbthe server and backend queues are also sitting at 0 telling me that haproxy isn't having to park things waiting on backends16:30
*** weshay_food is now known as weshay16:30
AJaegerZara, I can clean this up. Do you want to change python-storyboard publishing as well?16:31
clarkbwhich is expected when well below limits16:31
ZaraAJaeger: ahhhh, right. yeah, I'd like python-storyboardclient to publish on each commit, so I'm guessing we want it to use the same template16:32
*** tongli has joined #openstack-infra16:32
fungiclarkb: this is a pretty common behavior when another instance sharing the same hypervisor is eating up most of the nic buffers16:32
mgagneupdate on mtl01 at internap: replaced network hardware, tests are in progress and so far, it's doing well. will update once completed.16:33
fungithe host's tcp/ip stack begins to exhibit pathological behaviors that result in all manner of odd disconnects and timeouts for guests16:33
*** nstolyar_ has joined #openstack-infra16:33
*** nstolyarenko has quit IRC16:33
ZaraAJaeger: if you're able to do it, that would be wonderful (and thank you!) otherwise I'm happy to :)16:33
clarkbfungi: ya everything I can see from what we control and have access to looks happy16:34
mordredmgagne: cool. did we find a real problem for you then?16:34
*** wgd3[away] has quit IRC16:34
clarkbfungi: I think its something in the network stack between $client and git.openstack.org16:34
zaroOnline reindex testing on review-dev.openstack.org will commence in about 30 mins. Would appreciate volunteers to bang at gerrit.  Do as much banging as you like, a few mins to about 1 hr.16:34
*** ddieterly[away] is now known as ddieterly16:34
mgagnemordred: so far, all points to a faulty network hardware16:34
mordredexcellent16:34
jeblairclarkb, fungi: this is affecting all of the backend servers though, right?16:34
clarkbfungi: apparently review.openstack.org is also exhibiting some of this maybe it shares a hypervisor or switch or router or something16:34
anteayazaro: I'm not sure my timing will line up, I'm about to go offline for the rest of the day16:35
fungiclarkb: it could certainly be a switch with a nearly-full bridge table or something16:35
anteayazaro: will participate if I am online16:35
clarkbjeblair: yes my grep for cDs shows a pretty good spread16:35
mgagnemy own tests didn't show the problem since I landed on the non-faulty hardware. in fact, half of the hardware was faulty, not the other.16:35
clarkbjeblair: but the errors are happening in front of haproxy I think, not behind. We are not queuing or needing to retry any connections to the backends16:35
mordredmgagne: oh lovely16:35
openstackgerritAndreas Jaeger proposed openstack-infra/project-config: Update storyboard publishing  https://review.openstack.org/36379516:35
AJaegerZara: ^16:35
ntDoes Jenkins Job Builder still support Python 2.6?  A lot of projects are dropping 2.6 support and I'm just checking on JJB.  I looked through the docs and didn't see anything about version compatibility.16:35
jeblairclarkb: oh hrm, i thought that would be the error that's not counted in eresp...16:36
*** esp has joined #openstack-infra16:36
AJaegerZara: once that's in and you have documents published for the client, please send yourself a change for the yaml file so that the index file gets updated.16:36
clarkbnt: we don't have a test platform for python2.6 so it's not tested there, at least16:36
*** tongli has quit IRC16:36
zaront: it's no longer tested against py26 only py27 and py3416:36
clarkbjeblair: eresp can include client errors according to the haproxy manual. I think we are seeing these cDs show up in the eresp stats16:36
*** shashank_hegde has quit IRC16:37
clarkband it's pretty consistently 2 minutes and bam, the connection is cD16:37
jeblairclarkb: write error on the client socket (won't be counted for the #       server stat)16:37
ntclarkb, zaro, thanks for the info.  I get deprecation warnings about 2.6 from some of the dependencies, so this is good info.16:37
ZaraAJaeger: yay, thanks! Will do.16:37
zaroanteaya: sounds good.16:37
jeblairclarkb: that makes me think that client errors would show up in stats.haproxy.balance_git_https.BACKEND.eresp but not stats.haproxy.balance_git_https.git01.eresp16:37
clarkbjeblair: ya that's how I would interpret it too but the logs themselves don't seem to line up with that. eg there are no retries16:38
*** tongli has joined #openstack-infra16:38
clarkb(I would expect a retry when a backend eresps)16:38
fungiclarkb: thinking back, i've seen similar behavior on overloaded switchrouters, where the routing is tightly coupled to bridge flows and so actively disconnects open sockets it's tracking when the flow table begins to fill up16:38
Zarazaro: what's the best way for us to hammer gerrit?16:39
fungiby spoofing tcp/rst or similar16:39
clarkbgranted I am looking at the subset of the logs that matches grep -v -- --16:40
fungiclarkb: are we seeing this for git protocol too? or just http(s)?16:40
*** tphummel has quit IRC16:41
zaroZara: the test will be around changes.  so any type of operation that involves a change.  like create patchset/review it/update it/merge it/download it/etc..16:41
clarkbfungi: ya the cDs appear for all protocols. Interesting, I just saw some SDs16:41
clarkbthose would be server disconnects16:41
*** ddieterly is now known as ddieterly[away]16:42
anteayazaro: my company has arrived, I'm offline now, sorry for the poor timing hope you get some volunteers16:42
anteayazaro: thanks for testing16:42
*** tongli has quit IRC16:43
*** ianw has joined #openstack-infra16:43
*** _nadya_ has joined #openstack-infra16:43
clarkbSD "The connection to the server died with an error during the data transfer. This usually means that haproxy has received an RST from the server or an ICMP message from an intermediate equipment while exchanging data with the server. This can be caused by a server crash or by a network issue on an intermediate equipment."16:43
clarkbso still doesn't rule out network issues16:43
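(A rough way to tally how often each termination state is showing up, assuming the default haproxy log format where the two-letter state appears as its own field, as in the log line above:)
    sudo grep -c ' cD ' /var/log/haproxy.log    # client-side disconnects/timeouts
    sudo grep -c ' SD ' /var/log/haproxy.log    # server-side disconnects, for comparison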
Zarazaro: okay, sounds straightforward!16:43
*** tongli has joined #openstack-infra16:44
*** fguillot_ has joined #openstack-infra16:44
*** _nadya_ has quit IRC16:44
openstackgerritJesse Pretorius (odyssey4me) proposed openstack-infra/project-config: Add OSA keystone uwsgi functional tests  https://review.openstack.org/36364016:44
clarkbjeblair: the SDs are much less frequent and we got two in close proximity to each other for different backend hosts16:44
pleia2zaro: I got asked to join a call, is it ok to push out testing a few?16:44
clarkbbut that could possibly explain the server specific eresps16:44
clarkbin addition to the client disconnect cDs16:44
*** lucasagomes has quit IRC16:44
zaropleia2: well reindex will take 1.5 hrs. so as long as it's not a super long call you should have plenty of time to bang.16:45
*** tphummel has joined #openstack-infra16:45
*** lucasagomes has joined #openstack-infra16:46
jeblairclarkb: here's the doc; you can find the stats in sec 9.1: http://www.haproxy.org/download/1.5/doc/configuration.txt16:46
pleia2zaro: ok :)16:46
jeblairclarkb: i agree with your observation about eresp.  i can't explain the apparent discrepancy.  i don't feel like reading the haproxy source right now though.  :)16:46
*** yamamoto has joined #openstack-infra16:47
*** tongli has quit IRC16:48
*** Apoorva has joined #openstack-infra16:48
clarkbI am going to restart my clone nova test on a backend directly and have it talk to localhost16:48
clarkbto at least bolster the argument that the service itself is fine16:48
*** ddieterly[away] is now known as ddieterly16:49
*** daemontool has joined #openstack-infra16:50
*** akshai has joined #openstack-infra16:50
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Add compute038.vanilla to disabled group  https://review.openstack.org/36380516:51
*** ddieterly is now known as ddieterly[away]16:52
*** _nadya_ has joined #openstack-infra16:52
*** senk has joined #openstack-infra16:52
clarkbya I am seeing the SDs happen to all of the instances16:52
clarkber all of the backend instances. So this doesn't appear to be any single backend being unruly16:52
*** piet has joined #openstack-infra16:52
*** roxanaghe has quit IRC16:52
clarkbthat coupled with the cDs makes it hard for me to think our service is broken, it just can't tcp16:53
clarkbfungi: anything else you can think of that would be worth checking?16:54
*** roxanaghe has joined #openstack-infra16:54
openstackgerritMonty Taylor proposed openstack-infra/shade: Skip test creating provider network if one exists  https://review.openstack.org/36371516:54
* mordred hits head on wall16:54
clarkbwait16:54
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Disable compute025.vanilla  https://review.openstack.org/36380616:54
clarkbmordred: don't you want to test that regardless?16:54
mordredclarkb: it's not possible to test if the devstack has already created a network using that physical network16:55
fungiclarkb: my usual go-to would be analyzing the kernel's interface buffer/queue utilization... it's been a while so i'm rereading how one does it these days16:55
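(A few of the usual checks for that kind of thing, as a sketch; eth0 is assumed to be the interface in question:)
    ip -s link show eth0                        # per-interface RX/TX drops and overruns
    netstat -s | grep -iE 'retrans|overflow'    # tcp retransmits and listen-queue overflows
    cat /proc/net/softnet_stat                  # per-cpu softirq backlog drops (second column)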
mordredclarkb: in order to execute that test now, I need to create a new job that configures devstack differently16:55
clarkbmordred: you can't have two physical networks?16:55
clarkbthat seems broken16:55
mordredclarkb: you cannot have two neutron networks that map to the same underlying physical network16:55
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Disable compute008.vanilla  https://review.openstack.org/36380916:56
clarkbmordred: ya make another disjoint one16:56
clarkbor maybe even overlapping?16:56
mordredit's not possible16:56
clarkbeg /25 instead of /24?16:56
clarkbmordred: why not?16:56
mordredthat's not the conflict16:56
mordredyou define a pre-existing physical network in the ml2.conf file by name16:56
openstackgerritMatt Riedemann proposed openstack-infra/project-config: Move placement job to nova's check queue  https://review.openstack.org/36381016:57
mordredthen, when you create the neutron network object, you say "provider:physical_network = name_defined_in_file"16:57
mriedemsdague: ^16:57
mordredand that defines the mapping between the neutron network and the underlying physical network it represents16:57
*** salv-orlando has joined #openstack-infra16:57
mordredso what I need to do for the shade test is to make a new job that runs devstack without the provider network defined16:57
*** tongli has joined #openstack-infra16:57
clarkbwow16:57
mordredso that we can define it in the test16:57
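(Roughly the mapping mordred is describing, as a sketch with assumed names rather than the exact devstack values: the physical network is declared in ml2_conf.ini, e.g. flat_networks = public under [ml2_type_flat], and a matching neutron network is then created against that name:)
    neutron net-create public --provider:network_type flat --provider:physical_network public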
rcarrillocruzfolks, gotta run a few errands, i may be online later should the infracloud be put online today16:58
rcarrillocruzlaterz16:58
sdaguemordred / clarkb - quick hit on that - https://review.openstack.org/363810 ?16:58
*** trown is now known as trown|lunch16:58
*** shamail has left #openstack-infra16:59
sdagueI was apparently slightly uncaffeinated this morning when I made the first change16:59
fungiclarkb: jeblair: at least the system interrupts for eth0 only seem to be consuming ~10% of one cpu on the proxy server (and ~1% on each of the backends)16:59
zaroOk. online reindex testing on review.o.o will commence now: https://etherpad.openstack.org/p/gerrit-online-index-testing17:00
zaroi'm going to hijack #openstack-meeting channel for this so join me over there if interested.17:01
zaroopps meant openstack-sprint channel17:01
*** tongli has quit IRC17:02
mordredsdague: done17:03
clarkbfungi: let me know if I can help somehow with the linux tcp investigating17:04
clarkbthe clone nova over and over again on git03 is going fine17:04
mordredsc68cal: just for your amusement - it's "provider:physical_network" ... not "physical_network" ... also, I decided to change the test to be for provider:network_type instead, in case devstack decides it wants to change the network name17:05
*** yamahata has quit IRC17:05
clarkbfungi: also I think there is some weight to your idea that our caches improve things because the client disconnects at least all seem to happen after 2 minutes17:06
clarkbfungi: in theory our caches will reduce the total connection time to something much shorter17:06
*** akshai has quit IRC17:07
Shrewsmordred: we need a new job for the one shade test??17:07
Shrewsdid i read that sb right?17:07
clarkbShrews: I think you can likely replace your current job with the new job but yes17:07
*** jerryz has joined #openstack-infra17:08
*** derekh has quit IRC17:09
*** spzala has quit IRC17:09
pabelangerShrews: jeblair: any thoughts about exposing a CLI command to individually launch a node? Basically to be used for the purpose of debugging cloud failures. Today, I have a shade script to do it, but it's not exactly how nodepool launches a server17:10
*** sputnik13 has joined #openstack-infra17:10
openstackgerritJames E. Blair proposed openstack-infra/zuul: Re-enable test_failed_change_at_head  https://review.openstack.org/36382117:12
*** akshai has joined #openstack-infra17:12
pabelangerShrews: jeblair: or expanding our auto-hold feature to include launch failures (scp nodepool-script / ready-script)17:12
mordredShrews: if we want to execute that test, yes17:12
*** tongli has joined #openstack-infra17:12
jeblairpabelanger: can we do that as a v3 TODO?  i think it will be easier then17:12
*** amitgandhinz has quit IRC17:12
mordredShrews, clarkb: honestly, I think testing that shade works in clouds with provider networks is more important than testing that shade can create a provider network17:12
pabelangerjeblair: Of course17:13
mordredso I would not recommend replacing the job with a non-provider network config17:13
*** ddieterly[away] is now known as ddieterly17:13
*** amitgandhinz has joined #openstack-infra17:13
clarkbpabelanger: are there specific instances where nova boot hasn't been sufficient? might help shape the way the command works (fwiw I have had really good luck just nova booting things)17:13
mordredBUT - I do see value in testing that shade works with a provider-network config and also with a config that doesn't have one and requires floating ips ... so I could see making both of those17:13
*** annegentle has joined #openstack-infra17:13
mordredand don't think it would be a waste of energy17:13
*** akshai has quit IRC17:14
claytonit appears dmsimard is on pto, does anyone know if his ARA tool requires Ansible 2.x?17:14
pabelangerclarkb: no, that's what I do today, but with shade. I just figure it would be easier to debug failures using the same code path we use to launch servers.17:14
openstackgerritJesse Pretorius (odyssey4me) proposed openstack-infra/project-config: Add OSA keystone uwsgi functional tests  https://review.openstack.org/36364017:15
clarkbpabelanger: maybe? I haven't experienced any cases where I need nodepool to boot an instance to debug why it was failing so I don't know17:15
*** markvoelker has joined #openstack-infra17:15
*** e0ne has quit IRC17:16
pabelangerclarkb: Ya, it is rare. But that's what I am trying to figure out with rax-iad right now. Server boots, we cannot connect to it via SSH17:16
clarkbwas wondering if you had run into that with something like writing /etc/nodepool contents17:16
fungiooh, sorry, just got distracted by a large delivery. back now and looking at the git servers some more17:16
mordredclayton: I do not know - but I would be surprised if it didn't17:16
*** tongli has quit IRC17:17
clarkbpabelanger: thats likely not related to nodepool at all but instead glean?17:17
clarkbpabelanger: did you attach a config drive?17:17
mordredclayton: it's new enough that I'd be very surprised if he wrote it for 1.917:17
claytonmordred: that was my thought also.  the callback api changed between 1.9 and 2.x17:17
pabelangerclarkb: possible, yes using config-drive17:17
mordredclayton: yah17:17
claytonunfortunately we're still on 1.917:17
*** tphummel has quit IRC17:17
* mordred recommends upgrading ;)17:17
*** akshai has joined #openstack-infra17:17
clarkbfungi: fwiw I am basically ready to write a support ticket and see what rax says, but will wait on doing that17:18
pabelangerclarkb: agreed, I don't think it is nodepool either, but would make things easy if we could tell nodepool to keep that failure online then toss the UUID over the wall to rackspace.17:18
pabelangerright now, I'm manually trying to reproduce, if this server will boot17:18
*** rossella_s has quit IRC17:18
*** fguillot_ has quit IRC17:18
clarkbpabelanger: oh is it not consistent? gotcha that would be one case where it would be helpful (though you'd need more of an auto hold than a boot command)17:18
*** lucasagomes is now known as lucas-dinner17:19
*** mhickey has quit IRC17:19
*** rossella_s has joined #openstack-infra17:19
fungi`ethtool -S eth0` is decidedly unhelpful on xen guests, it seems17:19
jeblairclarkb, fungi: are we still interested in having nodepool image builds attempt to happen at a given time of day?  or would logic like "rebuild if the image is older than X hours, regardless of time of day" work?  (obvs that would probably start out being every 24 hours at the same time, but would probably fairly quickly start to move around).  cc: Shrews17:19
openstackgerritMerged openstack-infra/project-config: Move placement job to nova's check queue  https://review.openstack.org/36381017:20
clarkbjeblair: the benefit to having it happen at a certain time when snapshot/upload was consistent was that if an image had problems we could ensure they wouldn't start until roughly when the humans that could address those problems were present17:20
fungijeblair: i think either would work out for us, except that we might go into image rebuild loops when there are issues, unless we also added a throttle17:20
mordredjeblair: I like rebuild if image is older ... also, a while back I was thinking that nodepool considering images older than X hours as not being viable to boot content on would be another neat knob17:20
sc68calmordred: ok. :)17:20
clarkbjeblair: but now glance image upload is so unreliable that we can't depend on that and also we have more people around the globe making that less important17:20
pabelangerclarkb: Yup17:20
sc68calmordred: isn't openstack f.u.n.?17:21
mordredsc68cal: so f.u.n.17:21
clarkbmordred: if you put that in place most of our providers would stop working :)17:21
jeblairfungi: yeah, though i'm considering auto image rebuild loops a feature -- since right now, clarkb frequently manually executes a human-powered image rebuild loop :)17:21
jeblairwell, usually upload, not build, but sometimes build too.17:22
pabelanger\o/17:22
pabelangerhttp://grafana.openstack.org/dashboard/db/nodepool-osic?from=1472577734067&to=1472664134067&var-provider=All17:22
AJaegerproject-config cores, could you review the storyboard publishing change, please? https://review.openstack.org/36379517:22
clarkbthe build problems tend to be more consistent until addressed whereas upload tends to work eventually if you try hard enough17:22
pabelangerofficially 24 hours and osic-cloud1 has reported 0 launch node failures17:22
*** itisha has joined #openstack-infra17:22
pabelangercloudnull: ^17:22
jeblairclarkb: right, with the notable exception of 'git clone/jeepyb' :)17:22
*** ddieterly is now known as ddieterly[away]17:22
clarkbjeblair: so maybe builds should have a hard limit of retries but uploads could continue with a backoff17:22
*** tphummel has joined #openstack-infra17:22
*** dizquierdo has quit IRC17:22
cloudnullpabelanger: ++17:22
fungiclarkb: depends. we have plenty of transient image build problems too (especially around caching git repos when new projects are being added)17:23
pabelangerhttp://grafana.openstack.org/dashboard/db/nodepool-osic?from=1472577734067&to=1472664134067 is the actually URL17:23
cloudnullthats even after a massive spinup last evening17:23
clarkbfungi: thats true17:23
fungiprobably what jeblair also meant17:23
jeblairwords17:23
* fungi gets back to reading very dry rhel 7 performance tuning guides17:23
clarkbI think if we did a thing that retried a failed build once (or some small number of times), then fell back to the usual wait-until-the-image-is-X-hours-old retry, that would work well17:23
*** akshai has quit IRC17:23
pabelangercloudnull: Ya, seems to be holding its own well17:24
mtreinishAJaeger: any idea what I'm missing on: https://review.openstack.org/#/c/363297/3 it's probably something so obvious I'm blind to it :)17:24
clarkbthen for uploads have them retry over and over with a backoff between uploads that is reset whenever a new image is built17:24
cloudnulltime to find the next break point :)17:24
*** tongli has joined #openstack-infra17:24
clarkbis that complicated enough? :)17:24
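A minimal sketch of the policy clarkb is proposing (not nodepool code); build-image.sh and upload-image.sh are hypothetical stand-ins:
    # builds: retry a small, fixed number of times, then fall back to the
    # normal image-is-older-than-X-hours cycle
    for attempt in 1 2 3; do
        build-image.sh "$image" && break
        sleep 300
    done
    # uploads: keep retrying with a growing backoff, reset whenever a new build lands
    delay=60
    until upload-image.sh "$image"; do
        sleep "$delay"
        delay=$(( delay * 2 ))
    done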
mtreinishAJaeger: it's failing on the merge template check: http://logs.openstack.org/97/363297/3/check/gate-project-config-layout/0b2b23a/console.html#_2016-08-31_17_04_51_49429817:24
*** kzaitsev_mb has quit IRC17:24
fungifor those who need some good bedtime reading material... https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Performance_Tuning_Guide/chap-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Networking.html17:24
*** pt_15 has joined #openstack-infra17:24
jeblairclarkb: we could add limits, but honestly, i'd be okay with our dedicated image builder continually running trying to build an image, as long as it didn't starve other image builds (ie, it looped through all of them before retrying)17:24
pabelangerhopefully we can get osic-cloud8 up later today, once the fires are out17:24
clarkbjeblair: ya that would probably work too we would just churn the disk of that machine if/when things break17:25
jeblairpabelanger: also https://review.openstack.org/36375617:25
clarkbbut hey cloud :P17:25
pabelangerjeblair: woah17:25
*** akshai has joined #openstack-infra17:25
jeblairi think that's ready to go if anyone wants to babysit :)17:25
pabelanger+317:25
fungiclarkb: your idea holds merit: "Network performance problems are most often the result of hardware malfunction or faulty infrastructure. Red Hat highly recommends verifying that your hardware and infrastructure are working as expected before beginning to tune the network stack."17:25
pabelangerYup, have some cycles to watch17:25
fungiclarkb: red hat says we should check out our network hardware first17:26
clarkbfungi: wfm :)17:26
*** akshai has quit IRC17:26
clarkbpabelanger: I am trying to rereview cloud8 stack now that network info is updated17:26
clarkbalso I really want it to be cloud917:26
*** _nadya_ has quit IRC17:26
pabelangerclarkb: cool, I added both networks like we did with cloud117:27
clarkbyup looks good17:27
mordredclarkb: link?17:27
*** ddieterly[away] is now known as ddieterly17:27
clarkbmordred: topic:osic-cloud817:28
mordredthanks17:28
*** akshai has joined #openstack-infra17:28
*** tongli has quit IRC17:28
mordredpabelanger: +A17:29
AJaegermtreinish: looking...17:29
mordredpabelanger: I left the nodepool change cause it has 2 +2s17:29
mordredand likely wants to be watched17:29
pabelangerya17:29
pabelangerwe need to restart nodepool-builder for that too17:29
clarkbjeblair: the new scaling pain point definitely seems to be image upload reliability as we add more and more regions fwiw. So I think changing how we do those uploads is a good thing17:29
clarkbpabelanger: are you going to +A that one or should I?17:30
clarkbthen I need to submit a rax ticket17:31
pabelangerclarkb: sure if you want to17:31
*** tongli_ has joined #openstack-infra17:31
clarkbdone17:31
fungithe featureset for the nic driver in these xen guests is pretty limited in what we can adjust. it does at least have tso and gso support (and both are enabled)17:32
fungigro as well17:32
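Presumably a check along these lines, assuming the interface is eth0:
    ethtool -k eth0 | grep -E \
      'scatter-gather|tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload'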
* AJaeger sends two "-" signs to mtreinish to fix his change17:32
clarkbfungi: so tl;dr of ticket would be noticing tcp disconnects that both sides of the connection do not expect for git.openstack.org (uuid here), between hosts that are external to rax and those within rax using their public IPs. Provide a list of example IP addrs and timestamps for disconnects, then cross fingers?17:32
clarkbfungi: anything else you think we should add to that17:32
*** manjeets_ has quit IRC17:34
fungiclarkb: it's a shot in the dark, but sure. we should just expect a slow and fairly unsatisfying response17:34
*** tongli_ has quit IRC17:34
*** watanabe_isao has joined #openstack-infra17:34
*** yamamoto has quit IRC17:34
*** tongli_ has joined #openstack-infra17:35
fungii've not had much luck reporting nuanced network issues to rackspace in the past, except when they've been able to spot a noisy neighbor on the same hypervisor host17:35
* fungi remembers the ages we went round and round on their ipv6 dscp issues17:36
*** dtantsur is now known as dtantsur|afk17:36
*** watanabe_isao has quit IRC17:36
clarkbmy git03 clone against localhost https is continuing to be happy17:37
fungiwhere openssh changed their qos defaults and rackspace's gear was just outright dropping packets for ssh sessions after the qos shift in the session17:37
*** abregman has joined #openstack-infra17:37
openstackgerritSagi Shnaidman proposed openstack-infra/tripleo-ci: POC: WIP: oooq undercloud install  https://review.openstack.org/35891917:37
*** bradjones has quit IRC17:38
openstackgerritMerged openstack-infra/system-config: Add credentials for osic-cloud8  https://review.openstack.org/35670317:38
*** bradjones has joined #openstack-infra17:38
*** tonytan_brb has quit IRC17:38
*** pcaruana has joined #openstack-infra17:39
*** tongli_ has quit IRC17:39
*** bradjones is now known as Guest3037417:39
fungii suppose we could up our rmem_default (it's currently 212992 while rmem_max is 33554432). it'll potentially make latency a little worse but may smooth out some bumps if we're briefly overrunning the buffer at times17:39
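Roughly the knobs fungi is quoting; the new value below is purely illustrative:
    sysctl net.core.rmem_default net.core.rmem_max
    # bump the default receive buffer (example value only)
    sudo sysctl -w net.core.rmem_default=1048576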
openstackgerritMerged openstack-infra/project-config: Enable infracloud servers in Nodepool  https://review.openstack.org/36375617:39
*** tongli has joined #openstack-infra17:40
fungiclarkb: jeblair: looked into the mysterious xen_netfront/xennet errors in dmesg yet?17:41
mtreinishAJaeger: haha, yeah that is pretty obvious now that you've pointed it out. I knew it was gonna be a dumb mistake like that17:42
fungii have to say i find their wording amusing if nothing else17:42
fungi[Wed Aug 31 17:39:12 2016] xen_netfront: xennet: skb rides the rocket: 20 slots17:42
*** degorenko is now known as _degorenko|afk17:42
* fungi can't imagine a more cryptic error17:42
clarkbfungi: no, I checked dmesg and it was mostly just auth stuff17:42
clarkbdidn't see that17:42
*** yamahata has joined #openstack-infra17:42
*** electrofelix has quit IRC17:42
fungihttp://www.brendangregg.com/blog/2014-09-11/perf-kernel-line-tracing.html17:42
*** sambetts is now known as sambetts|afk17:43
*** ddieterly is now known as ddieterly[away]17:43
*** tongli has quit IRC17:43
*** markvoelker has quit IRC17:43
fungi"It's a driver bug with TSO. A very large skb can span too many pages (more than 16) to be put in the driver ring buffer. One workaround is "sudo ethtool -K eth0 tso off", for your interface. There's plenty of articles about this on the Internet, and they are easy to find thanks to our mysterious message."17:44
*** tongli has joined #openstack-infra17:44
openstackgerritJames E. Blair proposed openstack-infra/nodepool: Remove image-update cron  https://review.openstack.org/36383717:44
clarkbhuh do we want to try that before submitting the rax ticket?17:44
*** rwsu has quit IRC17:44
fungibug 131781117:45
openstackbug 1317811 in linux (Ubuntu Utopic) "Dropped packets on EC2, "xen_netfront: xennet: skb rides the rocket: x slots"" [Medium,Fix released] https://launchpad.net/bugs/131781117:45
clarkbfungi: we run centos on these machines fwiw17:45
pabelangerclarkb: okay, reproduced my network issue in rax-iad. Server online but no SSH access17:45
fungiclarkb: yep, just a datapoint. different distros, same kernel driver though17:46
clarkbya17:46
*** tongli has quit IRC17:46
*** tongli has joined #openstack-infra17:46
clarkbfungi: seems like it would be simple to disable tso, keep an eye on cpu utilization and see if haproxy is happier17:47
*** shashank_hegde has joined #openstack-infra17:48
fungiclarkb: yeah, i'm leaning that way but still reading17:48
fungii mean, tso exists for a reason. without it, our cpu utilization may go up a lot. on the other hand, we have way more available processing power on this machine than we use even at peak17:49
oomichipleia2: hello, thanks for reviewing.17:50
*** tongli has quit IRC17:50
clarkbdevstack-gate cores can I get reviews on https://review.openstack.org/#/c/312647/ thats another thing that will help increase the speed of our test jobs17:50
oomichipleia2: can you take a look at another https://review.openstack.org/#/c/358149/ ?17:50
clarkbfungi: yup, I think we will definitely want to watch cpu utilization closely17:50
*** tongli has joined #openstack-infra17:50
pleia2oomichi: I'll add it to my list17:50
pabelangerclarkb: we don't manage security groups in rackspace, do we?17:51
clarkbpabelanger: there are no security groups in rackspace so no17:51
*** dimtruck is now known as zz_dimtruck17:51
pabelangerclarkb: Ya, I thought that was the case.17:51
pabelangerso, networking issue or iptables for rax-iad17:52
jeblairclarkb, fungi: if you have a sec to give a quick "+1 in principle" to  https://review.openstack.org/363837  that would be nice17:52
jeblairmordred: ^17:52
clarkbjeblair: can trade you review for 312647? just another small tweak to speed up our jobs a little bit17:52
jeblairclarkb: we have xfs?17:53
*** Na3iL has quit IRC17:53
clarkbjeblair: we don't anymore because we switched to dib build images everywhere, but centos defaults to xfs and I think fedora does too17:54
*** rvasilets__ has left #openstack-infra17:54
clarkbjeblair: so people using devstack-gate on not our images may run into it17:54
*** ihrachys has quit IRC17:54
pabelangerrcarrillocruz: I manually killed some ansible-playbook processes on puppetmaster.o.o, I think there were 2 different playbooks running on infracloud between the 2 crontab processes17:55
clarkbactually that statement isn't entirely correct17:55
clarkbwe switched to dib-built "minimal" images which don't rely on the prebuilt distro images that ship with an opinion on fs17:55
clarkbif you use the non-minimal centos dib builds you will get xfs17:55
*** _nadya_ has joined #openstack-infra17:55
jeblairclarkb: +2 but i rechecked since i didn't see current logs17:56
jeblair(so couldn't double check that it worked)17:56
phschwartzfungi: who is best to talk to, I just realized today is the final day to get tickets with the ATC code only to be shocked that I never got my atc code17:56
clarkbjeblair: sounds good17:57
*** akshai has quit IRC17:57
mordredjeblair: ++17:57
fungiphschwartz: i'll check my logs17:57
phschwartzfungi: ty17:58
*** rbrndt has quit IRC17:58
*** tongli has quit IRC17:58
pabelangerjeblair: clarkb: I think 363837 is great! Thanks for doing that17:58
*** tongli has joined #openstack-infra17:58
pleia2oomichi: just a request to add some documentation about this change to the README17:58
*** e0ne has joined #openstack-infra17:59
fungiphschwartz: i sent it to both your linux.vnet.ibm.com and progmad.com e-mail addresses on june 2017:59
phschwartzfungi: ah, found it. It was flagged as spam by ibm's mail server :(17:59
phschwartzfungi: ty for looking for me18:00
*** dkehn has quit IRC18:00
*** tongli_ has joined #openstack-infra18:00
fungiphschwartz: use it now, before it no longer covers 100% of the registration cost18:00
*** dkehn_ has quit IRC18:00
fungiwhich is, like, tomorrow18:00
pabelangerjeblair: clarkb: does this mean moving forward we submit individual image builds to nodepool-builder, rather than all at once?18:00
oomichipleia2: so fast review, thanks :)  OK, I will update README ASAP18:00
clarkbpabelanger: it's more subtle than that. The builders will use a shared db (zk) to store info on when they last built and uploaded images. When they see that images need to be updated they will start doing the work18:01
clarkbpabelanger: so its less about instructing a builder from some central brain and more about decentralized coordination based on synchronized data18:01
*** zz_dimtruck is now known as dimtruck18:01
pabelangerclarkb: Right, thats much better18:02
*** trown|lunch is now known as trown18:02
*** tongli has quit IRC18:03
openstackgerritMatthew Treinish proposed openstack-infra/project-config: Add puppet-lpmqtt project  https://review.openstack.org/36329718:03
*** sarob has joined #openstack-infra18:03
*** tphummel has quit IRC18:04
AJaegermtreinish: do you have a governance change for that?18:04
*** spzala has joined #openstack-infra18:05
*** akshai has joined #openstack-infra18:06
*** tphummel has joined #openstack-infra18:07
*** dkehn has joined #openstack-infra18:07
*** pt_15 has quit IRC18:08
*** salv-orl_ has joined #openstack-infra18:08
*** larainema has quit IRC18:08
mtreinishAJaeger: not yet, I can push one up for it right now18:08
AJaegermtreinish: please do - and then amend your two changes and add Needed-By18:09
openstackgerritFatih Degirmenci proposed openstack-infra/jenkins-job-builder: Add support for Parameterized Scheduler Plugin  https://review.openstack.org/35316518:09
*** jkilpatr has quit IRC18:10
openstackgerritDavid Shrewsbury proposed openstack-infra/shade: Allow str for ip_version param in create_subnet  https://review.openstack.org/36384618:10
*** srobert has joined #openstack-infra18:10
*** salv-orlando has quit IRC18:10
*** pvinci has quit IRC18:11
*** tongli has joined #openstack-infra18:11
*** larainema has joined #openstack-infra18:11
*** pt_15 has joined #openstack-infra18:12
*** dkehn_ has joined #openstack-infra18:12
*** eeiden has quit IRC18:13
*** stewie925 has quit IRC18:13
*** wcriswell has quit IRC18:13
clarkbfungi: find anything else? want to go ahead and disable tso?18:14
*** skipp has quit IRC18:14
openstackgerritDoug Hellmann proposed openstack-infra/project-config: require CLA for release-tools  https://review.openstack.org/36385118:14
*** sarob has quit IRC18:14
*** tongli_ has quit IRC18:14
openstackgerritMatthew Treinish proposed openstack-infra/project-config: Add puppet-lpmqtt project  https://review.openstack.org/36329718:14
openstackgerritMatthew Treinish proposed openstack-infra/project-config: Add lpmqtt project  https://review.openstack.org/36329618:14
fungiclarkb: sorry, got pulled into troubleshooting two other things in the middle of this. getting back to it now18:15
AJaegerteam, FYI, I have serious problems reaching even docs.openstack.org - there might be other networking problems with Rackspace...18:15
AJaegera reload helps normally18:15
mtreinishAJaeger: ^^^18:15
AJaegerfungi, 363297 and 363296 need your review18:15
AJaegermtreinish: once fungi is happy, I'll review ;)18:16
AJaegerthanks, mtreinish18:16
*** florianf has quit IRC18:16
openstackgerritKen'ichi Ohmichi proposed openstack-infra/bugdaystats: Add "daily" argument to update_stats()  https://review.openstack.org/35814918:16
oomichipleia2: ^^^: thanks for your review. updated18:17
pleia2oomichi: thanks for adding that :)18:17
*** wcriswell has joined #openstack-infra18:17
AJaegermtreinish: +418:18
pleia2oomichi: hm, wouldn't this be optional?18:18
*** skipp has joined #openstack-infra18:18
*** tonytan4ever has joined #openstack-infra18:18
oomichipleia2: +1 for making it optional :)18:18
oomichipleia2: please give me -1 on the patch18:19
pleia2oomichi: ok18:19
mtreinishAJaeger: heh, is fungi ever unhappy :)18:19
mtreinishAJaeger: cool, thanks18:19
fungimtreinish: these hawaiian shirts reflect my state of mind18:20
AJaeger;)18:20
AJaegerproject-config cores, could you review the storyboard publishing change so that Zara has documents, please? https://review.openstack.org/36379518:21
* AJaeger waves good bye18:21
*** e0ne has quit IRC18:21
pleia2oh yay, storyboard docs18:21
pabelangerinfracloud-vanilla lives: http://logs.openstack.org/05/293305/49/check/gate-tempest-dsvm-neutron-linuxbridge/20aacae/console.html18:22
pabelangerjob failed however18:22
pabelangerrcarrillocruz: looks like quota issues in infracloud-vanilla18:22
*** e0ne has joined #openstack-infra18:22
*** hashar has joined #openstack-infra18:23
*** jkilpatr has joined #openstack-infra18:23
pabelangerhttp://mirror.regionone.infracloud-vanilla.openstack.org/18:23
pabelangerthat's the issue18:23
pabelangerwe have no AFS data18:23
pabelangerrcarrillocruz: ^18:23
*** hashar is now known as hasharAway18:23
*** ddieterly[away] is now known as ddieterly18:24
clarkbpabelanger: possible firewall issues?18:24
pabelangerclarkb: checking18:24
clarkbblocking our afs udp packets/18:24
*** shardy is now known as shardy_afk18:24
pabelangerOh18:25
*** shardy_afk has quit IRC18:25
pabelangerI don't think the server was rebooted after coming online18:25
pabelangerAFS module /lib/modules/3.13.0-93-generic/fs/openafs.ko does not exist.18:25
clarkbpabelanger: so the modules are not loaded? launch node should always restart the instances...18:25
pabelangerclarkb: rcarrillocruz used cloud-launcher18:25
pabelangerso this is the likely issue18:26
pabelangerlet me reboot and see if that fixes it18:26
fungiclarkb: at https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1317811/comments/22 it's suggested that disabling scatter-gather similarly solved it18:26
openstackLaunchpad bug 1317811 in linux (Ubuntu Utopic) "Dropped packets on EC2, "xen_netfront: xennet: skb rides the rocket: x slots"" [Medium,Fix released]18:26
clarkbkk18:26
fungiclarkb: which `ethtool -k eth0`says is also enabled18:26
*** zaro has quit IRC18:26
*** sandanar_ has joined #openstack-infra18:27
clarkbfungi: I like that the comment also confirms mtu is not at fault here since we are also at 1500 mtu18:27
*** Sukhdev has joined #openstack-infra18:27
*** zaro has joined #openstack-infra18:27
fungiright, i checked ip link show eth0 there to be sure18:27
*** sarob has joined #openstack-infra18:28
*** _nadya_ has quit IRC18:28
*** sandanar_ has quit IRC18:28
fungisupposedly linux 3.17 has a workaround, but no idea if that's backported to rhel 7's 3.1018:28
*** rbrndt has joined #openstack-infra18:28
clarkbturning off sg will also likely lead to more cpu utilization or at least more blocking for the reads and writes ya?18:28
fungiyep18:28
pabelangerclarkb: still no kernel module. Trying to find out why18:30
*** sandanar has quit IRC18:30
pabelangerlikely want to revert until we confirm mirror is working18:30
clarkbpabelanger: did the image we based that on use the hardware enablement kernel for ubuntu, which doesn't have a working afs module?18:30
pabelangerclarkb: I am not sure, rcarrillocruz launched the mirror this time18:31
clarkbpabelanger: what is the kernel version?18:31
*** amotoki has quit IRC18:31
pabelangerLinux mirror 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux18:31
clarkbI think that's the right one18:31
clarkbfungi: ethtool -k eth0 to see the features enabled?18:33
rcarrillocruzI launched it with the trusty dib image18:33
*** vhosakot has quit IRC18:34
openstackgerritMerged openstack-infra/shade: Skip test creating provider network if one exists  https://review.openstack.org/36371518:34
pabelangerOh18:34
pabelangerrcarrillocruz: ya, we've never tested that18:34
pabelangerrcarrillocruz: for now, I've just used ubuntu cloud image18:34
clarkbfungi: I say lets try that and watch the cpu utilization and grep -v -- -- /var/log/haproxy.log18:34
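For context, haproxy's two-letter termination flags record how a session ended; as I read them, "--" is a clean close, "cD" a client-side timeout during the data phase and "SD" a server abort during the data phase, so the grep below simply hides the clean closes:
    tail -f /var/log/haproxy.log | grep -v -- --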
*** _nadya_ has joined #openstack-infra18:35
*** rockyg has quit IRC18:35
fungiclarkb: yeah18:35
*** yamamoto has joined #openstack-infra18:35
*** vhosakot has joined #openstack-infra18:35
clarkbfungi: you going to run the command or should I?18:35
pabelangerso, I see some puppet error around openafs18:35
pabelangeropenafs-modules-dkms was also missing18:36
pabelangerso, I don't think we launched the server properly18:36
fungiclarkb: i can... did you have a reliable way to reproduce the issue other than tailing haproxy logs?18:36
mgagneclarkb: alright, mtl01 is ready to go. We tested all compute nodes and they didn't show any problem.18:36
*** crst has joined #openstack-infra18:36
pabelangerand we should delete and reprovision again18:36
*** dprince has quit IRC18:36
*** sshnaidm is now known as sshnaidm|afk18:36
*** crst is now known as Guest554118:36
clarkbfungi: no just tailing logs. I also have a local clone loop I can run against it which was good for confirming the failure here was unexpected by the client and unexpected in haproxy18:36
clarkbfungi: I can restart my clone loop as soon as you tell me sg is off18:36
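A guess at what that clone loop looks like; the repository and working directory are arbitrary:
    while true; do
        rm -rf /tmp/nova-clone-test
        git clone https://git.openstack.org/openstack/nova /tmp/nova-clone-test \
          || echo "clone failed at $(date -u)"
    done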
fungiclarkb: should we try disabling tso first, or sg, or both?18:37
clarkbfungi: I think sg18:37
clarkbas the tso thing seems more related to funny MTUs which we don't have18:37
*** dtardivel has quit IRC18:37
fungiagreed, that seems like it would be slightly lower-impact too18:37
fungiclarkb: oh, sg is needed for tso18:38
clarkbah18:38
openstackgerritPaul Belanger proposed openstack-infra/project-config: Revert "Enable infracloud servers in Nodepool"  https://review.openstack.org/36388118:38
pabelangerrcarrillocruz: clarkb^18:38
clarkbso both go off if we turn off sg?18:38
rcarrillocruzpabelanger: i was not sure which image i should based it on18:38
fungiso it's possible they saw disabling sg fix it because disabling sg disables tso too18:38
rcarrillocruzasked it and got to use dib18:38
rcarrillocruzwhich seems18:38
clarkbfungi: ya18:38
rcarrillocruzto lack features to have afs working :/18:38
rcarrillocruzso, kernel not having afs, no?18:38
fungiclarkb: anyway, http://paste.openstack.org/show/565280/18:38
pabelangerrcarrillocruz: what I've used for the last 3 mirrors: https://cloud-images.ubuntu.com/trusty/current/18:39
rcarrillocruzk18:39
fungidisabling sg also disabled tso and gso18:39
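Presumably the toggle in question, assuming eth0; turning scatter-gather off drags tso/gso down with it, and turning it back on restores them:
    sudo ethtool -K eth0 sg off   # also disables tso and gso
    # and later, to revert:
    sudo ethtool -K eth0 sg on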
pabelangerrcarrillocruz: ya, openafs is missing kernel modules18:39
rcarrillocruzthen leave that with me18:39
rcarrillocruzi'll nuke the image on the openstackci tenant18:39
rcarrillocruzupload from cloud images18:39
rcarrillocruzand launch it again18:39
clarkbfungi: my clone loop is running as is my tail piped through grep -v -- --18:39
rcarrillocruzi'll do when back at home18:39
fungiclarkb: so far cpu utilization seems unchanged18:39
pabelangerrcarrillocruz: we also need to update cloud-launcher to reboot the server after puppet runs18:39
rcarrillocruzi've +2 the change you linked18:39
zarofungi: are you monitoring http threads on javamelody?18:40
zarofor gerrit18:40
pabelangerrcarrillocruz: we also have a quota issue, going to dig into that now18:40
clarkbfungi: still seeing a few cD's maybe those settings won't take effect on old connections? I don't know how the kernel deals with that18:40
pabelangerrcarrillocruz: but for bluebox, I manually did it since we don't have ansible modules yet18:40
fungizaro: no, virtual network interface driver tuning on the git load balancer18:40
rcarrillocruzpabelanger: well, the pre-post provisioning should be really out of the launcher, launcher just provisions18:40
rcarrillocruzi'll do the swap18:40
rcarrillocruzset hostname18:40
rcarrillocruzand reboot18:40
fungiclarkb: yeah, i'm expecting a potentially delayed reaction18:40
rcarrillocruzmanually18:40
rcarrillocruzi really need to get back to fix os_server to make the pre/post actions on the launcher18:41
rcarrillocruzi def. want to chat with folks about what i have in mind at mid cycle18:41
*** ddieterly is now known as ddieterly[away]18:41
zarook, i wasn't able to find it on javamelody so was wondeering18:41
rcarrillocruzpotentially, it can even help for replacing / upgrading servers18:41
clarkbpabelanger: rcarrillocruz it is odd that the package manager wouldn't pull in what we need for dkmsing afs18:41
fungiclarkb: si on Cpu7 seems a little elevated now over what it was hovering at before the change18:41
rcarrillocruzpabelanger: back in a bit, leave the thing with me, i'll do in an hour when back18:42
fungiclarkb: also sy is up a bit18:42
*** yamamoto has quit IRC18:42
clarkbjust saw another cD in haproxy logs18:42
*** tongli has quit IRC18:43
openstackgerritHongbin Lu proposed openstack-infra/project-config: Zun: rename higgins to zun (2)  https://review.openstack.org/32924718:43
*** Sukhdev has quit IRC18:43
fungii assume the clarkb no new "rides the rocket" in dmesg since 18:37:2018:43
*** chem has quit IRC18:43
clarkbI haven't checked that but can18:43
pabelangerclarkb: looks like it tried, but failed for some reason18:43
fungithat was a weird started-typing-one-thing-then-decided-to-type-something-else18:44
*** _nadya_ has quit IRC18:44
*** tongli has joined #openstack-infra18:44
fungiclarkb: no new "rides the rocket" in dmesg since 18:37:2018:44
fungiis what i meant to say18:44
*** chem has joined #openstack-infra18:44
clarkbah yup I confirm18:44
clarkbuntil I can get my client to disconnect locally with a cD in haproxy we can't really be sure that those clients didn't just disappear on their own18:45
fungialso entirely possible our disconnects are not at all related to packet loss from skb reassembly issues18:45
clarkbyup18:45
*** ddieterly[away] is now known as ddieterly18:45
*** eeiden has joined #openstack-infra18:45
fungibut sy and si on Cpu7 have climbed quite a bit more, which lends credence to your theory that it doesn't take effect for established sockets18:46
*** ddieterly is now known as ddieterly[away]18:46
*** esp has quit IRC18:46
oomichipleia2: I did re-think more on https://review.openstack.org/#/c/358149/3/README.rst , I feel it is better to avoid making it optional18:46
pabelangerclarkb: for some reason, puppet didn't properly install openafs-client: http://paste.openstack.org/show/565281/18:46
oomichipleia2: because the HTML would have to contain data pages for both the current one and the daily one18:46
*** tongli_ has joined #openstack-infra18:46
zarofungi: we finished the online index testing and the result shows that the reindex worked without error while our small band of testers were banging on it during the index.18:47
*** stewie925 has joined #openstack-infra18:47
pleia2oomichi: I see, I'll make some time to play around with it then18:47
clarkbfungi: we can also watch that grafana graph that jeblair pointed out /me finds that tab18:47
pabelangerclarkb: and rather than shave the yak with an ubuntu-trusty DIB, I'd say switch back to the ubuntu-cloud image for now. Then work on our DIB images for control plane servers strategy18:48
pleia2oomichi: I'll comment in the review once I finish18:48
fungizaro: that's great news! i'm looking forward to the maintenance on friday18:48
*** tongli has quit IRC18:48
* pleia2 still hopes most folks will have wandered off by rename time on Friday ;)18:48
clarkbpabelanger: ya that seems fine18:48
oomichipleia2: thanks again :)18:48
pleia2it was a bit slow here and there18:48
zarofungi: only thing to note is that the online reindex pegged the CPU on review-dev instance. so it was slow during the index18:48
pleia2yeah, we'll want to keep an alert going throughout the reindex18:49
*** hasharAway is now known as hashar18:49
clarkbpabelanger: huh it couldn't get an apt lock18:49
clarkbpabelanger: possibly a dirty image build?18:49
pleia2noting about potential degraded performance or somesuch18:49
zaroyou can view the javamelody logs https://review-dev.openstack.org/monitoring18:49
fungizaro: yep, i expect performance may be slow18:49
*** senk has quit IRC18:49
fungibut it'll also be a very low-activity time for us18:49
pabelangerclarkb: possible18:50
clarkbpabelanger: rcarrillocruz so why didn't the ansible cloud launch fail ?18:50
openstackgerritArie Bregman proposed openstack-infra/zuul: Handle non-valid HEAD  https://review.openstack.org/36204918:50
clarkbI think we need to make sure that cloud launch reboots servers after booting and configuring them, and it should fail when puppet fails18:50
rcarrillocruzcos it just provisions18:50
rcarrillocruzit doesn't run puppet18:50
clarkbwhat runs puppet?18:50
pabelangerOh18:51
rcarrillocruzi ran the launcher from clouds_layouts18:51
zarofungi: i assume zuul will not be running?  but users will continue to be able to access it?18:51
rcarrillocruznot the launch-node ansible thingy18:51
clarkbrcarrillocruz: yes I know18:51
clarkbrcarrillocruz: I am saying that what you ran has some things we need to address, including failing when it should fail and rebooting the instance18:51
rcarrillocruzi kicked puppet manually myself afterwards18:51
clarkbuh18:51
fungizaro: zuul will be running, but we've not had trouble keeping up with volume this week so i don't expect it will be particularly hampered by gerrit slowness18:51
clarkbok I don't think we should be using that tool for now if it isn't going to do these things for us (we have automated them in launch node because they are important)18:51
pabelangerAgreed, don't want to loose that step right now18:52
pabelangerlose*18:52
*** Guest81 has joined #openstack-infra18:52
*** david-lyle_ has joined #openstack-infra18:52
*** david-lyle_ has quit IRC18:52
zarofungi: ok, we should at least notify users that they may see "Working" on Gerrit UI during downtime. that's what was happening for us18:53
clarkbfungi: I am not really seeing the SDs and cDs go away. I would've expected that after 10 minutes or so they should be down to a trickle18:53
clarkbfungi: probably time to submit that rax ticket after all18:53
clarkbthough my local clone is still happy, my throughput has fallen a little bit18:54
*** Guest81 has quit IRC18:54
clarkbwe were doing 3-7MBps but now it's 2-5MBps; might be within the margin of internet error though18:54
clarkboh just had one under 1MBps18:55
*** akshai has quit IRC18:55
*** senk has joined #openstack-infra18:55
*** senk has quit IRC18:55
*** akshai has joined #openstack-infra18:55
clarkbjeblair: also I have roughly been able to correlate SDs in haproxy log to the eresps in grafana18:56
*** ijw has joined #openstack-infra18:57
clarkbfungi: any opposition to me filing that ticket now? I don't want to file it if you think its something we need to fix on our end18:57
*** Guest5541 has quit IRC18:58
clarkbbut I need to pop out and find lunch so want to file it before leaving for a bit18:58
openstackgerritPaul Belanger proposed openstack-infra/system-config: Add osic-cloud8 to cloud-launcher  https://review.openstack.org/36389018:58
pabelangerclarkb: fungi: have a moment to review^, adds osic-cloud8 to cloud-launcher so we can setup security groups18:58
fungiclarkb: nah, i'm in favor of the ticket18:59
mgagneclarkb, pabelanger: Let me know when you are ready to enable mtl01. it's ready now.18:59
pabelangermgagne: sure, give me a few minutes18:59
*** ddieterly[away] is now known as ddieterly18:59
*** alexey_weyl has joined #openstack-infra18:59
clarkbpabelanger: mgagne did we still want to rerun a test job on it first?18:59
*** dprince has joined #openstack-infra18:59
clarkbI can do that after lunch its pretty simple. Or if we are confident the issue is resolved we can just reenable19:00
*** Guest81 has joined #openstack-infra19:00
mgagneclarkb: up to you at this point, better be safe I guess19:00
*** mwhahaha has joined #openstack-infra19:00
pabelangerclarkb: mgagne: Ya, lets do that19:00
openstackgerritPaul Belanger proposed openstack-infra/system-config: Add osic-cloud8 to cloud-launcher  https://review.openstack.org/36389019:00
*** kushal has quit IRC19:01
clarkbpabelanger: mgagne especially since its just before feature freeze we should avoid making tests unhappy as much as possible19:01
*** ildikov has joined #openstack-infra19:01
mgagnehehe19:01
*** esp has joined #openstack-infra19:01
clarkbpabelanger: I can run that test after lunch if you like or if you want to do it just boot the nodepool image then run the reproduce.sh script as jenkins19:01
clarkbI think that should work19:01
pabelangerYup, I can do that now19:01
clarkbfungi: jroll JayF 160831-dfw-0002533 has been submitted re the haproxy unhappiness19:02
*** hrybacki is now known as hrybacki|afk19:02
JayF*blink*19:03
JayFI have no context for that?19:03
jrollditto19:03
fungiclarkb: i guess in a few minutes i'll reenable sg/tso/gso19:03
clarkbJayF: jroll basically tcp connectivity in rax dfw seems to be flaky. Its really noticeable on our git mirror but there are some reports it may be affecting review.o.o and possibly docs hosting as well19:03
clarkbJayF: jroll client and server both notice that a tcp connection has gone away unexpectedly19:04
jrollclarkb: fun stuff19:04
fungiit definitely seems to have shifted more work into the cpu as expected, and got rid of the xen_netfront/xennet errors the kernel was spewing, but if it doesn't fix the disconnects i'd rather we switch back to kernel tuning defaults anyway19:04
clarkbJayF: jroll we see it from internet connections and between hosts in the same region so likely in rax and not on the internets19:04
clarkbfungi: yup19:04
jrollclarkb: not sure I can escalate that too much (mostly don't know how/where right off hand) but I'll take a look and see what I can do19:04
*** ilyashakhat has joined #openstack-infra19:05
clarkbok lunch now19:05
fungijroll: suspicion is some device in the network (probably in relatively close topological proximity to our git.openstack.org haproxy instance) is closing active flows >120 seconds in age19:05
clarkbjroll: JayF fungi has all the details too if you need more datas19:05
fungiclarkb: i'm curious though what it is about the sessions going through haproxy that hit this, while our ssh sessions aren't impacted19:06
jrollcool19:06
openstackgerritMerged openstack-infra/project-config: Revert "Enable infracloud servers in Nodepool"  https://review.openstack.org/36388119:07
fungithough there are a number of potential factors, not the least of which are ipv4 vs ipv6 and different qos levels/dscp precedence19:07
fungiinteractive openssh sessions use dscp 0x04 while bulk protocols tend to set 0x0219:09
*** egarbade_ has quit IRC19:10
*** ifarkas is now known as ifarkas_afk19:11
openstackgerritDoug Hellmann proposed openstack-infra/project-config: be more careful using setuptools commands in release script  https://review.openstack.org/36389519:12
*** Guest81 has quit IRC19:12
fungior 0x0 it looks like19:13
*** links has quit IRC19:13
fungiclarkb: are v4 and v6 sessions equally affected?19:16
*** ddieterly is now known as ddieterly[away]19:16
pabelangerclarkb: mgagne: running on 198.72.124.7019:16
clarkbfungi: it's mostly v4 but the occasional v6 is seen in the haproxy log19:16
fungik19:17
openstackgerritEmilien Macchi proposed openstack-infra/project-config: tripleo-ui: add missing jobs for release management  https://review.openstack.org/36389719:17
*** piet has quit IRC19:18
openstackgerritEmilien Macchi proposed openstack-infra/project-config: tripleo-ui: add missing jobs for release management  https://review.openstack.org/36389719:18
*** salv-orl_ has quit IRC19:19
*** e0ne has quit IRC19:20
*** sarob has quit IRC19:20
*** ddieterly[away] is now known as ddieterly19:20
*** coolsvap has quit IRC19:22
*** rfolco has quit IRC19:22
fungiclarkb: worth noting, if i `time nc git.openstack.org ssh` it closes at real 2m0.167s (coincidence?)19:23
*** daemontool has quit IRC19:23
*** rfolco has joined #openstack-infra19:23
fungihttp closed for me much earlier though, at real 0m51.389s19:24
*** Guest81 has joined #openstack-infra19:25
*** akshai has quit IRC19:25
fungiconsistently at 51s19:25
*** Guest81 has quit IRC19:27
fungithat may be apache on a backend closing it for lack of inbound request though19:27
*** kzaitsev_mb has joined #openstack-infra19:28
*** akshai has joined #openstack-infra19:28
fungiclarkb: `time nc git.openstack.org git` also ends at real 2m0.164s19:28
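The idle-socket timing fungi describes, more or less; nc resolves the service names (git=9418, ssh=22) and just waits until the far end closes:
    time nc git.openstack.org git
    time nc git.openstack.org ssh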
*** claudiub has quit IRC19:29
*** gyee has quit IRC19:29
rcarrillocruzcan i please get reviews for https://review.openstack.org/#/c/363751/ https://review.openstack.org/#/c/363805/ https://review.openstack.org/#/c/363806/19:30
*** Swami has joined #openstack-infra19:30
rcarrillocruzand https://review.openstack.org/#/c/363809/19:30
rcarrillocruzto pull them out infracloud19:30
*** jkilpatr has quit IRC19:30
pabelangerrcarrillocruz: wow, is that how we disable hosts?19:31
pabelanger1 massive line with hostnames?19:31
*** akshai has quit IRC19:31
pabelangermuch sadness19:31
rcarrillocruznot sure, that's why i wanted to ask for reviews19:31
rcarrillocruzi understand from fungi that for long-term disabling, that's the way19:31
rcarrillocruzbut i may be wrong19:32
*** akshai has joined #openstack-infra19:32
*** sdake_ has joined #openstack-infra19:32
rcarrillocruzwe can put them on emergency, not back in git repo, but not sure when those servers will be fixed19:32
openstackgerritAlexey Weyl proposed openstack-infra/project-config: Vitrage tempests  https://review.openstack.org/36390519:32
rcarrillocruzi agree the mechanism seems ugly enough :/19:32
fungircarrillocruz: well, that's the way to disable hosts from the dynamic inventory. if infra-cloud uses a static inventory you could just remove the inventory entries?19:32
rcarrillocruzok19:33
rcarrillocruzthat too19:33
rcarrillocruzi'll abandon19:33
*** sdake has quit IRC19:33
rcarrillocruzbetter to be inconsistent here than to put in a long line of hosts19:33
rcarrillocruzpabelanger: trusty image for mirror, yeah?19:34
*** _nadya_ has joined #openstack-infra19:34
alexey_weylHi,19:34
pabelangerrcarrillocruz: I just downloaded one into /tmp19:34
pabelangerfor osic-cloud819:34
alexey_weylplease approve the following change:19:34
alexey_weylhttps://review.openstack.org/#/c/363905/19:34
rcarrillocruzon puppetmaster?19:34
pabelangerrcarrillocruz: also, I think I just recreated the mirror in infra-cloud19:34
pabelangerrcarrillocruz: yes19:34
*** sdake has joined #openstack-infra19:35
rcarrillocruzlet me use it then19:35
pabelangerrcarrillocruz: I ran cloud-launcher to confirm osic-cloud8 settings19:35
rcarrillocruzi'll push the change tomorrow for being able to run the launcher against a single cloud19:35
*** larsks has joined #openstack-infra19:35
rcarrillocruzi.e. ansible-playbook run_launcher blah -e "cloud=osic-cloud1"19:35
openstackgerritRichard Theis proposed openstack-infra/irc-meetings: Add networking-ovn meeting  https://review.openstack.org/36390619:36
rcarrillocruzpabelanger: also, not sure if you know19:36
rcarrillocruzbut you can pass tags to the launcher19:36
rcarrillocruze.g.19:36
*** sdake_ has quit IRC19:36
rcarrillocruzyou want to create just projects19:37
rcarrillocruzyou run19:37
rcarrillocruzansible-playbook blah --tags projects19:37
pabelangerya19:37
rcarrillocruzand will just process projects from the clouds_layouts.yml19:37
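Collected, the invocations rcarrillocruz mentions might look like this; the playbook name is a stand-in, and the single-cloud selector is the change he plans to push, not something that exists yet:
    # limit the launcher to one resource type
    ansible-playbook run_cloud_launcher.yaml --tags projects
    # planned: run it against a single cloud
    ansible-playbook run_cloud_launcher.yaml -e "cloud=osic-cloud1"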
rcarrillocruzk19:37
*** flepied has joined #openstack-infra19:37
pabelangerI think tomorrow we should see how to loop it into our ansible wheel, its been stable for a while19:37
*** sarob has joined #openstack-infra19:38
*** tphummel has quit IRC19:38
rcarrillocruzmordred: hah, https://github.com/ansible/ansible-modules-core/issues/1658#issuecomment-236480459 , we were chatting about that yesterday, just got it updated on my mailbox :D19:39
mordredrcarrillocruz: haha19:39
rcarrillocruzpabelanger: if we had ^, we could just drive images with the launcher by pointing to the cloud-images url, however mordred says the 'feature' is not really a thing in v2 or smth :/19:40
fungiclarkb: on a lark, i'm doing an isolated packet capture of both ends for an idle netcat socket to git.o.o:git so i can compare what they see at the 2-minute mark socket termination19:41
mordredflaper87: ^^19:41
*** sarob has quit IRC19:42
clarkbfungi: ok curious what you find19:42
fungiclarkb: i'll paste.o.o the result19:42
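Roughly the capture being described, run on both ends, with a placeholder client address:
    sudo tcpdump -i eth0 -w /tmp/idle-git.pcap \
      'host CLIENT_ADDR and port 9418'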
pabelangerrcarrillocruz: Ya, would rather just work on 2nd nodepool instances to manage control plane DIBs19:42
*** pvaneck has joined #openstack-infra19:43
*** berendt has quit IRC19:43
*** jkilpatr has joined #openstack-infra19:43
*** rfolco has quit IRC19:43
pabelangermrhillsman: raddaoui: Around to help with a network issue for osic-cloud8? having issues sshing into 172.22.132.3919:44
*** tphummel has joined #openstack-infra19:44
mordredrcarrillocruz: yah - I see no mention of it in the glance v2 api docs19:44
pabelangermrhillsman: raddaoui: 544c3700-c31f-4270-ac3b-a7bea98fd742 in question19:44
*** rfolco has joined #openstack-infra19:44
mordredrcarrillocruz: it's possible there might be a Task that does it - but I do not think we should support that19:45
pabelangermtreinish: raddaoui: Did I need to attach a FIP? or will external-v4 provide me the address?19:45
rcarrillocruzpabelanger: i really think we should have maybe a dib for the mirror with our keys baked in... that or figure out why the current ubuntu-trusty dib image doesn't work for afs19:45
fungiclarkb: hrm... server initiates a fin at the 2-minute mark, so maybe this is a coincidence and it's also the default time to close a git socket with no request? http://paste.openstack.org/show/565286/19:45
rcarrillocruzcos with it we don't have to generate a tmp keypair, we just launch the mirror and we have our keys baked in19:45
*** ddieterly has quit IRC19:45
*** Guest81 has joined #openstack-infra19:46
pabelangerrcarrillocruz: yes, but I want us to have the build process automated first. Without that, we're basically in the same boat19:46
mtreinishpabelanger: ?19:46
pabelangermtreinish: mistab sorry19:46
mtreinishheh, no worries19:46
clarkbfungi: oh that's going through haproxy right? I think haproxy will time things out which lines up with the cD explanation19:46
clarkbfungi: maybe 2 minutes is that timeout19:46
openstackgerritMerged openstack-infra/project-config: require CLA for release-tools  https://review.openstack.org/36385119:46
openstackgerritDavid Shrewsbury proposed openstack-infra/shade: Allow str for ip_version param in create_subnet  https://review.openstack.org/36384619:46
clarkbfungi: that doesn't explain why ssh would do similar though19:46
fungiclarkb: yeah, i'm capturing one of those next19:47
rcarrillocruzyup, i def. want to work on that when infracloud is rolling19:47
fungihaving trouble figuring out how to tail sshd logs under systemd though19:47
clarkbfungi: journalctl -f something something somethign19:47
pabelangerrcarrillocruz: if we can get quorum, we'd launch nodepool02.o.o, have it use all-clouds.yaml and be the delivery system for images to our control plane. Starting with -minimal images, and iterating on that19:47
clarkbfungi: journalctl -f -u sshd19:48
fungihow appropriate19:48
clarkb-f for follow and -u to specify the unit19:48
fungiyeah, i read it another way, but that works too19:48
clarkb-u will accept a pattern too if you need to be fancier19:48
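The commands clarkb just gave, collected (the unit name may be ssh rather than sshd depending on the distro):
    journalctl -f -u sshd      # follow the sshd unit
    journalctl -f -u 'ssh*'    # -u also accepts a glob pattern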
*** alexey_weyl has quit IRC19:49
*** akshai has quit IRC19:49
pabelangermrhillsman: raddaoui: Just tried external-v6, but looks like we also got an ipv4 address19:50
pabelanger| addresses                            | external-v6=2001:4800:1ae1:17:f816:3eff:fe1d:6ef0, 172.22.180.5419:50
mordredpabelanger: I do not think that's a terrible idea19:50
pabelangermordred: right, we just need to make sure we have enough of them19:51
clarkbmordred: pabelanger fwiw I really don't want to rely on glance image upload for servers that matter right now19:51
clarkbIMO we really need much more reliable glance before thats doable19:51
mrhillsmanshould be good pabelanger19:51
mrhillsmani want to say that those ipv4 addresses are just tag alongs19:52
*** javeriak_ has joined #openstack-infra19:52
pabelangerclarkb: Yes, we need to make things more stable for sure19:52
mrhillsmanas single stack ipv6 is no bueno19:52
clarkbit's possible the fail rate is related to image size, which will be better for our other servers. Just don't want us to decide to switch before we have a reliable service backing the new images19:52
fungiclarkb: same story... server initiated a fin at the 2-minute mark http://paste.openstack.org/show/565287/19:53
mrhillsmanraddaoui am i correct assuming that?19:53
pabelangerclarkb: right, we'd need to do some testing for sure19:54
pabelangermrhillsman: okay, will relaunch here in a minute19:54
clarkbfungi: huh19:54
fungiclarkb: also sshd doesn't log the connection/disconnect when there were never any bytes transmitted over the socket19:55
*** annegentle has quit IRC19:55
fungiother than the sshd banner19:55
*** javeriak has quit IRC19:55
openstackgerritEmilien Macchi proposed openstack-infra/tripleo-ci: pingtest: run 'openstack stack failures list' when failure  https://review.openstack.org/36391819:56
fungiclarkb: strangely, if i sigint my nc before the timeout, sshd logs "Did not receive identification string"19:56
clarkbhuh19:57
clarkbso much huh19:57
fungibut if i let it go the full 2 minutes, it never logs anything about the termination19:57
*** tphummel has quit IRC19:58
*** tphummel has joined #openstack-infra19:59
fungii'll check some other systems to see if i can identify consistent behaviors20:00
*** aimeeu has quit IRC20:00
*** aimeeu has joined #openstack-infra20:00
clarkbfungi: and this is happening with nc, right? not with a proper interactive ssh connection20:00
*** ldnunes has quit IRC20:00
*** annegentle has joined #openstack-infra20:00
fungiclarkb: correct, i wanted to rule out ssh keepalives20:00
openstackgerritRichard Theis proposed openstack-infra/irc-meetings: Add networking-ovn meeting  https://review.openstack.org/36390620:00
fungiand key renegotiation and all that20:01
clarkbya20:01
*** scottynomad has quit IRC20:01
fungiconfirmed 2 minutes to a personal debian server in rackspace iad20:02
fungitrying a local debian server on the same lan as my client next20:02
*** ldnunes has joined #openstack-infra20:02
*** javeriak has joined #openstack-infra20:03
*** gyee has joined #openstack-infra20:03
*** javeriak_ has quit IRC20:03
*** _nadya_ has quit IRC20:03
*** tonytan4ever has quit IRC20:03
*** azvyagintsev has quit IRC20:03
*** jamesdenton has quit IRC20:04
*** rfolco has quit IRC20:04
*** sarob has joined #openstack-infra20:05
*** rfolco has joined #openstack-infra20:05
*** tphummel has quit IRC20:05
pabelangermrhillsman: still getting SSH timeouts, mind taking a look? e6f9fe21-24ea-4d5b-9682-779a69ce06f720:06
pabelangermrhillsman: using external-v6 network20:06
openstackgerritMerged openstack/os-client-config: Go ahead and handle YAML list in region_name  https://review.openstack.org/36248320:07
*** sigmavirus is now known as sigmavirus|awa20:07
fungiclarkb: any idea roughly how many lines of logs per day we index in logstash?20:08
clarkbfungi: I can give you exact numbers if you want :)20:08
raddaouipabelanger: was in meeting, reading20:08
fungiclarkb: it's for something anecdotal, so don't spend time looking it up20:08
clarkbwell I need to look it up to know anyways20:09
*** esp has quit IRC20:09
fungiright, i mean it's unimportant20:09
clarkbyseterday was 728 million documents20:09
clarkbone doc is roughly one line20:09
fungibut thanks!20:09
fungiclarkb: confirmed, a debian server on my local lan also disconnects an initially idle netcat socket to its sshd at the 2-minute mark as well20:10
raddaouipabelanger: yeah single stack ipv6 doesn't work, that's why external-v6 has two subnets, but you can only connect to the VM with the ipv6 one20:10
*** eharney has quit IRC20:11
pabelangerraddaoui: sure, the issue right now is, using external-v6, we're not able to SSH into the server on either ipv4 or ipv620:12
pabelangerraddaoui: our security groups look to be correct20:12
clarkbpabelanger: this is cloud8?20:12
pabelangerclarkb: yes20:12
clarkbI can take a quick look oh except osc is still broken with neutron /me shakes fist20:12
pabelanger2001:4800:1ae1:17:f816:3eff:fe04:b48520:12
*** ilyashakhat_mobi has joined #openstack-infra20:13
pabelangeris the IP is question20:13
*** bethwhite_ has quit IRC20:13
clarkbpabelanger: what's the instance uuid?20:13
pabelangerclarkb: e6f9fe21-24ea-4d5b-9682-779a69ce06f720:13
haleybclarkb: did you break ipv6 again? :)20:13
pabelangerclarkb: it might die shortly, launch-node about to time out20:13
pabelangerYa, just got deleted20:14
clarkbpabelanger: server list shows ya no instances20:14
pabelangerlet me launch another with --keep20:14
* clarkb will try booting one manually20:14
raddaouiso for your openstackci mirror project, you should attach your VM to an internal network attached to external-v4 and assign it the fip20:14
clarkboh I can wait20:14
raddaouilike the test VM I had before20:14
*** ilyashakhat has quit IRC20:14
pabelangerOh20:14
pabelangerokay, I am not doing that20:14
clarkbhrm?20:14
raddaouiand then you can use the public ip mapped to it as specified on the email20:15
pabelangerlet me quickly try that20:15
*** piet has joined #openstack-infra20:15
raddaouiyeah I tested that and works fine20:15
clarkboh its like the old style cloud120:15
clarkbso you have to create a network, router, subnet, dhcp range, dns servers, etc etc20:15
clarkbthen wire that all up to be able to get floating IPs on external-v420:16
fungiclarkb: `sudo ethtool -K eth0 sg on` conveniently reenabled tx-scatter-gather, tcp-segmentation-offload, tx-tcp-segmentation and generic-segmentation-offload again20:16
clarkbfungi: nice20:16
raddaouithere is one fip allocated to that project 172.22.132.3520:16
*** tongli_ has quit IRC20:16
raddaouiwhich is mapped internally to the public ip20:17
fungiclarkb: and the "xen_netfront: xennet: skb rides the rocket" lines in dmesg instantly came back20:17
pabelanger1 sec20:17
clarkbfungi: our low doc count on the weekend is 126 million per day and our high is 750 million per day20:17
clarkbraddaoui: uhm20:17
fungiclarkb: thanks!20:17
clarkbisn't 172.22.132.35 not routable?20:17
clarkber rather not globally routable?20:17
clarkbfungi: we consistently float right around 700million during work days20:17
*** kgiusti has left #openstack-infra20:17
raddaouiyeah but it is mapped to  72.3.183.4520:18
clarkbraddaoui: so we are behind two NATs?20:18
*** apuimedo|away is now known as apuimedo20:18
*** waht has joined #openstack-infra20:18
clarkb72.3.183.45 to 172.22.132.35 to whatever range we choose for our neutron subnet?20:18
raddaouiyeah I think those ipas are mapped in the firewall20:19
raddaouiips*20:19
raddaouiyes20:19
clarkb:(20:19
openstackgerritEmilien Macchi proposed openstack-infra/tripleo-ci: pingtest: run 'openstack stack failures list' when failure  https://review.openstack.org/36391820:19
*** esikachev has quit IRC20:20
clarkbpabelanger: fwiw it's almost impossible to get the router + subnet + network + wiring + dhcp + dns resolvers etc stuff correct from the command line. Since it only ever needs to be done once I have cheated and it's the one thing I use horizon for. That said I think this cloud may go away under us ya? so this is a good candidate to be added to cloud launcher if we can figure out the incantation20:20
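For the record, the incantation in question is roughly the following, sketched with python-openstackclient; the names and the 192.168.2.0/24 range are illustrative and the exact flags may differ on older client versions:

    openstack network create ci-net
    openstack subnet create ci-subnet --network ci-net \
        --subnet-range 192.168.2.0/24 --dns-nameserver 8.8.8.8
    openstack router create ci-router
    openstack router set ci-router --external-gateway external-v4
    openstack router add subnet ci-router ci-subnet
    # boot on ci-net, then attach a floating IP from external-v4
    openstack floating ip create external-v4
    openstack server add floating ip <server> <floating-ip-address>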
openstackgerritOleksandr Berezovskyi proposed openstack-infra/project-config: Clone sources of ironic-inspector and ironic-inspector-client  https://review.openstack.org/36392220:21
raddaouiyou can attach it directly to the external-v4 but you should make sure the vm has 172.22.132.3520:21
*** esp has joined #openstack-infra20:21
clarkbraddaoui: you mean by statically configuring the network on the host?20:22
clarkbour tooling assumes dhcp or config drive network info. Not sure we can hack that to get that addr20:22
*** dimtruck is now known as zz_dimtruck20:23
pabelangerclarkb: sure, I can work on cloud_launcher20:23
*** sarob has quit IRC20:23
*** sarob has joined #openstack-infra20:24
*** sarob has quit IRC20:25
*** akshai has joined #openstack-infra20:25
raddaouiwell that's why I allocated that fip from the beginning to the project so you can allocate it whenever you attach your VM to an internal network20:26
*** hockeynut has quit IRC20:26
*** cdent has quit IRC20:27
fungii get that neutron probably can't handle a /32 host route with an rfc-1918 serial, but i'm curious why that global address isn't itself the fip pool20:28
*** sarob has joined #openstack-infra20:28
pabelangerraddaoui: Ya, we'd like to automate that process. We've actually rebuilt our mirrors a fair bit recently20:29
openstackgerritafazekas proposed openstack/os-testr: Construct a list of test cases instead of passing a regexp  https://review.openstack.org/34887820:29
*** vhosakot has quit IRC20:29
*** jpeeler has quit IRC20:30
*** Goneri has quit IRC20:30
*** vhosakot has joined #openstack-infra20:30
*** ilyashakhat_mobi has quit IRC20:31
fungithough i guess the way it's worked around in the cloud1 redesign is that there's just a pat in front of the rfc-1918 "public" neutron net?20:31
fungiso no fip20:31
*** sarob has quit IRC20:31
fungino, wait, that's for the openstackjenkins project, but not the mirror server in the openstackci project20:32
clarkbfungi: in cloud1 they gave us a provider network with directly assigned "public" IPs20:32
clarkbno rfc-1918 involved20:32
openstackgerritMerged openstack-infra/tripleo-ci: Implement scenari001, 002 and 003  https://review.openstack.org/36250420:32
raddaouiI don't see the right fip allocated to the project anymore. let me fix this and then will assign it to one of your running VM20:32
clarkbbefore they did that the floating Ips were properly routable and we just had a single NAT20:32
fungiclarkb: that's not what i see bount to eth020:32
fungier, bound20:32
fungiinet 172.99.106.183/24 brd 172.99.106.255 scope global eth020:32
clarkbfungi: its possible the mirror is still using the old setup20:32
fungithat's currently mirror.regionone.osic-cloud1.o.o20:32
clarkbfungi: oh wait the mirror is funny because it's got both networks on it iirc20:32
*** claudiub has joined #openstack-infra20:32
* clarkb logs in20:32
fungiit's also still not correctly dual-stack20:33
clarkbya ok eth1 is ipv6 global network. eth0 is on the private network with rfc1918 addr and that is NATed by neutron with a floating IP20:33
clarkbfungi: it isn't?20:33
fungioh, eth1!20:33
fungino wonder i was confused20:34
*** Jeffrey4l__ has joined #openstack-infra20:34
clarkbya eth1 is separate20:34
*** markusry has quit IRC20:34
*** markusry has joined #openstack-infra20:34
fungiso 10.0.13.115 for the "public" v4 i guess20:34
*** markusry has quit IRC20:34
* mordred reads scrollback20:34
fungiamusing to see eth1 as the global/egress interface20:34
clarkb10.0.13.115 is the ipv4 addr from the ipv4 subnet that shares the network with the ipv6 subnet for the eth1 interface20:34
*** Jeffrey4l_ has quit IRC20:35
clarkbeth0 is the ipv4 addr on the ipv4 subnet that is on the network that we can attach floating IPs to20:35
clarkbso your ipv4 global traffic goes over eth0 and ipv6 over eth1. Then you just use local routes for eth1 ipv420:35
pabelangerokay, I have the mirror attaching to both networks20:35
mordredthis is the thing that's fixed by the new shade patch20:35
pabelangerjust working on assigning FIP now20:35
pabelangerclarkb: http://paste.openstack.org/show/565316/20:36
mordredwhich will get rolled out as soon as the caching revert patch makes its way through CI20:36
pabelangerokay, neat20:36
clarkbwell shade won't change how we have things configured20:36
clarkbit will just change how shade reports on them20:36
mordredthat is correct20:36
mordredjust saying - this is the networking setup that is at the root of why we had to make that change20:37
fungiclarkb: indeed, looks like routing table has "default via 172.99.106.1 dev eth0" and "default via fe80::def dev eth1"20:37
clarkbpabelanger: did you configure int_netw or was it preexisting?20:37
clarkbmordred: ya20:37
pabelangerclarkb: I used --nics to assign it.  It was existing20:37
clarkbmordred: it's also going to get more fun where the public addr shade sees is not the actual public addr because of another layer of nat we can't see20:37
pabelangerI have not created any networks20:37
clarkbpabelanger: ah ok then we will not need to add anything to cloud launcher20:38
fungiclarkb: so i guess in cloud1 there's a "provider" network for 172.99.106.0/24 and our eth0 is assigned into that? no nat involved?20:38
clarkbfungi: yup20:38
funginifty20:39
*** sarob has joined #openstack-infra20:39
clarkbfungi: before pabelanger rebuilt that mirror to have the v6 interface we had nat with a floating IP and only v4 connectivity20:39
mordredclarkb: wat?20:39
clarkbmordred: yes 72.3.183.45 to 172.22.132.35 to whatever range we choose for our neutron subnet?20:39
clarkbmordred: in this case looks like the neutron subnet addr is 192.168.2.920:39
fungiclarkb: i wonder if they couldn't extend that same 172.99.106.0/24 provider net to the openstackci tenant in cloud820:39
fungithen we could do something similar20:40
mordredholy crap that's terrible20:40
mordredI can't even ..20:40
clarkbit's definitely a bunch of magicalness20:40
clarkbraddaoui: are there no alternatives to that? eg we can't expose the 72.3.183.45 addr to neutron as a single ip floating ip pool or something?20:40
clarkbraddaoui: then that way the cloud reflects reality for us?20:41
mordredlike, honestly - it would be better to just not have the v4 'provider net' at all20:41
clarkbyou have to have it for afs20:41
mordredif it's not actually routing things20:41
mordredclarkb: afs works with floating ips20:41
*** ansiwen has quit IRC20:41
clarkbmordred: yes this is still floating IPs20:41
mordredwow20:41
mordredthat's20:41
clarkbmordred: it's a real public addr on a firewall outside of openstack to a neutron floating ip that is an rfc1918 addr to a private subnet rfc1918 addr20:42
raddaouiwell I don't think that is possible because 72.3.183.45 is not routed inside our private network20:42
raddaouimrhillsman: ^20:42
clarkbraddaoui: you just have to route it to the neutron router I think20:42
clarkbI don't know how that relates to your private network20:42
fungizaro: i wonder if you have any idea what the deal is with the gerrit behavior observed in this thread: http://lists.openstack.org/pipermail/openstack-dev/2016-August/102639.html20:43
mrhillsmanreading20:44
raddaouiyeah I can do that but then how will packets be routed internally from that VM to the firewall20:44
mordredoh - is this because the kolla team are themselves "clients" of the osic stuff, so the kolla control plane is not 'trusted' in the same way as the things run by the osic humans?20:45
*** sarob has quit IRC20:45
mrhillsmankolla is not relevant to cloud820:46
mrhillsmani'm still a bit lost on the issue20:46
rcarrillocruzpabelanger, clarkb : i'm having problems creating the mirror with launch-node.py20:47
rcarrillocruzhttp://paste.openstack.org/show/565318/20:47
clarkbmrhillsman: the issue is we don't want to be behind two nats preventing our cloud apis from tellin us what the reality is for our floating IP situation20:47
rcarrillocruzi believe it's because we don't have the neutron ns metadata proxy to inject the key20:47
rcarrillocruzthus20:47
mrhillsmanas i understood there was only a need for the one floating ip for the mirror VM20:47
clarkbmrhillsman: instead it would be nice if neutron could attach the actual public IP as a floating IP so that cloud queries return complete info20:47
rcarrillocruzwe need to pass  a config drive ?20:47
clarkbmrhillsman: there is a need for one globally routable ipv4 address20:47
rcarrillocruzwith the key to bake it in20:47
*** sarob has joined #openstack-infra20:47
clarkbmrhillsman: aiui the way this is being presented to us is via a magical NAT on a firewall somewhere that our cloud api queries will not be privy to20:48
pabelangerrcarrillocruz: launch node will try to use its own key20:48
clarkbso it will just happen to work which is less than ideal20:48
rcarrillocruzyeah, it creates one on the fly but isn't that injected on clouds with a neutron metadata server?20:48
pabelangerrcarrillocruz: I needed --config-drive with tripleo-test-cloud-rh1 I think20:48
rcarrillocruzyep, what i thought20:49
rcarrillocruzdo we have a config drive on the puppetmaster to use ?20:49
mgagnepabelanger: any success with mtl01?20:49
fungimrhillsman: yeah, the need isn't specifically for a floating ip. if for example the 172.99.106.0/24 provider network we've got an address in for cloud1 could also be extended to a provider network for cloud8 we could skip dealing with fips entirely20:49
mrhillsmanunfortunately cloud1 and cloud8 are segregated so this would not be possible20:50
*** pvinci has joined #openstack-infra20:50
pabelangerrcarrillocruz: we use cloud.create_keypair() then add the key to create_server20:50
fungia global ipv4 address pool of /32 size would suit us, even without a fip at all20:50
pabelangermgagne: test just finished20:51
pabelangermgagne: and passed20:51
mgagneawesome!20:51
pabelangerclarkb: mgagne: ^ so I think we can bring mtl01 back online20:51
clarkbpabelanger: mgagne yay20:51
mrhillsmanso are you wanting to just attach that public IP directly to the mirror VM?20:51
mordredthat would be perfect20:51
fungimrhillsman: but certainly if we do have to have a fip, then having the fip be a global ipv4 address rather than another rfc-1918 address would help. with the double-nat, the openstack api doesn't tell us at all what our routable ipv4 address for that instance is20:51
mordredyah. what fungi said20:52
zarofungi: the doc says that it will stop email only to reviewers and watchers.20:52
rcarrillocruzah ok, i thought i had to pass a path to the config-drive param20:52
mordredthe only way we know how to connect to the machine is to ask nova "what's your ip address" - and then we ssh to that20:52
rcarrillocruzgoing thru now20:52
pabelangerclarkb: mgagne: lets start with 10 nodes first?20:52
fungimrhillsman: if we could attach the global address to the mirror instance directly (e.g. neutron address pool just large enough to provide us 1 address) that would be even better than a fip, yes20:52
mrhillsmanthere is only one IP though am i right?20:52
clarkbpabelanger: mgagne 10 is fine by me20:52
mrhillsmanit should not change?20:52
mgagneI think it is a good thing to start slow20:52
zarofungi: i guess owners and subscribers still get emails20:53
zaroohh i guess subscriber and watcher are the same.20:53
openstackgerritPaul Belanger proposed openstack-infra/project-config: Slowly bring internap-mtl01 back online  https://review.openstack.org/36393120:53
pabelangerclarkb: mgagne:^20:54
fungimrhillsman: yeah, we only need one globally-routable ipv4 address. if we need to replace the server, we'll tear the old server down first or hot-attach the interface later when we move traffic20:54
mrhillsmanthere is only one floating ip address available20:54
zaroso it continues to email authors and people who starred the change.20:54
pabelangerrcarrillocruz: in fact, we should just make config-drive true by default20:54
mordredmrhillsman: our ansible inventory uses the openstack api to construct itself dynamically20:54
fungimrhillsman: or if it has to be a fip, we'll reassign the fip when we replace the server20:54
mordredmrhillsman: if nova lies to us about the actual public address, none of our automation will be able to talk to the server20:54
clarkband aiui in the current situation the ip we will see is the one for the neutron floating ip which is not the actual address we should talk to20:55
mordredmrhillsman: so it's not like we can just learn the address once and write it down20:55
mrhillsmanbut it should never lie because there is only one20:55
zarofungi: i don't think there's a way to completely stop emails and it's difficult to separate bot and human reviews since both use the same comments channel to post info about the change.20:55
mrhillsmanonly one floating ip available, only one public address available20:55
mrhillsmaneverything else is ipv6 except for that mirror VM20:55
clarkbmrhillsman: there is the publicly routable IP that is NAT'd to an rfc 1918 address which is then NAT'd to another rfc 1918 address20:55
*** eharney has joined #openstack-infra20:55
zaroalthough i believe newer version of Gerrit does have the feature to stop emails.20:55
mordredmrhillsman: the one that nova knows about is not the actual address20:55
clarkbmrhillsman: the cloud apis will show us the two rfc 1918 addresses not the actual publicly routable IP that is sitting on a firewall NATed to our floating IP20:56
mrhillsmanok, i got you20:56
pabelangerrcarrillocruz: if you don't mind: https://review.openstack.org/#/c/363931/20:56
*** e0ne has joined #openstack-infra20:56
mordredwoot20:56
*** raildo has quit IRC20:56
clarkb72.3.183.45 to 172.22.132.35 to 192.168.2.920:56
mrhillsmanright20:56
*** rtheis has quit IRC20:56
clarkbwe have no visibility into what 72.3.183.45 is20:56
rcarrillocruzsure20:56
clarkbwe only see 172.22.132.35 and 192.168.2.920:56
mrhillsmanlet me see if there is a way to change without having to make significant adjustment20:56
*** jamesdenton has joined #openstack-infra20:56
fungiso either we'd like to be able to bind 72.3.183.45 directly to a virtual ethernet interface in the server instance, or at worst have 72.3.183.45 be the fip20:56
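To make the visibility problem concrete, this is roughly all the cloud API can ever report, using the addresses from this discussion (a sketch; "mirror-vm" and "ci-net" are placeholder names):

    openstack server show mirror-vm -f value -c addresses
    #   ci-net=192.168.2.9, 172.22.132.35    <- fixed address plus floating IP
    # the 72.3.183.45 address lives on a firewall outside the cloud and never appears here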
rcarrillocruzjebus, MOAR CLOUDS pls20:57
pabelangerclarkb: stepping away here for family time, I'll let you decided on bringing mtl01 online tonight.20:58
clarkbpabelanger: I just approved it20:58
clarkbpabelanger: I will keep an eye on it20:58
pabelangerclarkb: Also, do you mind restarting nodepool-builder so we can pickup osic-cloud8?20:58
pabelangerclarkb: great, thanks20:58
clarkbpabelanger: well we have to figure out this networking thing before that matters20:59
clarkbbut yes I can do that if we sort out something that will work20:59
* dhellmann wonders what networking-vpp is and why they've chosen to import their repo via one git review at a time *this week*20:59
clarkbdhellmann: ijw and sdague can probably tell you about it20:59
*** rwsu has joined #openstack-infra21:00
pabelangerclarkb: Yup, figured we'd get the images in place in case it was a simple fix21:00
*** rfolco has quit IRC21:00
dhellmannclarkb : thanks21:00
*** ekhugen has left #openstack-infra21:00
fungiclarkb: dhellmann: ijw and sdake (i doubt sdague cares at all about it)21:00
dhellmannah21:01
clarkboh sorry my bad on tab completing21:01
*** rhallisey has quit IRC21:01
clarkbooh its completely stopped zuul from doing anything useful21:01
*** matt-borland has quit IRC21:01
dhellmannfungi : we may want to have a soft policy about not importing new repos during deadline weeks, esp. next cycle since it's so much shorter21:01
clarkbjeblair: ^ you may be interested in this21:01
dhellmannyeah, there's a huge check queue now21:01
*** jkilpatr has quit IRC21:01
clarkbdhellmann: well I think this repo was put into gerrit forever ago they just didn't import their code at that time (I dunno why)21:01
sdakedhellmann - no idea what it is, it was imported one commit at a time to preserve commit history - no idea why they didn't wait until after milestone321:01
fungidhellmann: it sounded something like someone at cisco created that as a new repo but continued to commit to an internal copy instead, and then later wanted their work imported into gerrit after the project already existed in it21:02
pabelangerdhellmann: fungi: Ah, yes. The old bomb zuul with 9k events patch-set21:02
sdakedhellmann they didn't use the upstream flag on their new repo submission21:02
sdague..... :(21:02
sdaguelet me say, I have words, which are not suitable for irc for that21:02
sdakedon't blame me, I wasn't consulted prior to the work ;)21:02
*** baoli_ has quit IRC21:02
dhellmannis there any way to do anything about it now?21:02
pabelangerzuul will eventually recover21:03
sdaguepabelanger: eventually21:03
clarkbdhellmann: we could probably dump zuul's queues, remove that project's changes, restart zuul, requeue everything else, and make that project read only21:03
pabelangerya21:03
sdaguethe point is we have this freeze21:03
*** baoli has joined #openstack-infra21:03
dhellmannI mean, zuul's doing its thing so we're not blocked, but the queue is pretty long21:03
sdagueand we have important content to land21:03
*** jamesdenton has quit IRC21:03
sdagueand this is effectively a DOS attack21:03
*** jamesdenton has joined #openstack-infra21:03
*** salv-orlando has joined #openstack-infra21:03
*** psilvad has quit IRC21:03
sdakei hear you, again I can't undo what they have done - they clearly should have read about the upstream: tag21:03
*** ansiwen has joined #openstack-infra21:03
sdakeif you would like me to get them to stop rebasing or a week21:04
*** tonytan4ever has joined #openstack-infra21:04
sdakei can do so21:04
sdakeor/for21:04
sdakei think there are continual rebases to get the gate jobs working21:04
dhellmannsdake : yes, please ask them to stop doing anything with those patches for now21:04
sdaguesdake: also, why are they doing this this way at all21:04
sdakedhellmann have timeframe when they can open it up again21:04
sdaguethere is a whole git import infrastructure21:04
pabelangersadly, I have to run. So I won't be able to support the effort.  But I think we need to come up with a proper fix, this is about the 4th time in 6 months this has happened21:04
dhellmannsdake : next week at the earliest21:04
sdakesdague what i was told was the documentation stated two different ways to import a repo21:04
sdakeand they did it the wrong way21:05
fungiyep. there's a delicate balance with new project creation. we purposely didn't create a provision for post-creation bulk import because it could be used to bypass code review entirely, but by the same token we don't want to set a precedent that the infra team will manually import your repo for you if you forget to specify it at creation time21:05
clarkbsdague: yes they failed to use that infrastructure so now its either push to gerrit like this or get a gerrit admin to force push (which we really don't like doing)21:05
dhellmannfungi : maybe we should remove the version of the import that doesn't expect an upstream repo21:05
jeblairdhellmann: i've never imported a repo21:05
sdagueclarkb: well force push is a lot less evil than destroying critical merge time21:05
dhellmannjeblair : you're special21:05
jeblairdhellmann: i always create new projects from scratch in our infra21:05
jeblairdhellmann: i hope not21:05
jeblairi believe in our community process21:06
clarkbsdague: yes but there is another alternative21:06
fungii see creating a project outside the openstack community and importing it later as a bit of an anti-pattern21:06
sdakejeblair i use cookiecutter and start from there21:06
dhellmannjeblair: In general I do, too, but I'm having a bit of trouble believing in a process that leads to this result.21:06
sdagueanyway, regardless of that21:06
clarkbthey could push a reasonable set of changes at a time in order to move things along without DOSing21:06
clarkbeg 5 instead of 7021:06
sdagueremember zuul is crazy slow on 170 deep patch series21:06
sdaguethere is an n^2 problem21:06
sdakefungi I think its ok in some circumstances - heat was created in this way, kolla and magnum all used the upstream patch21:07
sdaguewe hit this before21:07
clarkbsdague: yup21:07
sdagueso this either has to be dumped out of zuul21:07
sdagueotherwise we just destroyed freeze21:07
sdakeclarkb ya i hear ya - I guess their orders were to preserve history21:07
fungisdake: i agree there are times when projects start outside the community and join us later, but by the same token i don't want to make it seem like that's the preferred default behavior pattern21:07
sdakefungi I typically create a cookiecutter on github and use upstream21:07
openstackgerritMatt Riedemann proposed openstack-infra/project-config: Run with cells v2 in placement and neutron grenade jobs  https://review.openstack.org/36393721:07
sdakebut again, not consulted21:07
jeblairdhellmann: i don't even know what to say to that.  i'm clearly not defending this.  someone made an error.21:08
sdagueok, problem at hand.21:08
sdakewas pinged last night about why networking-vpp wasn't showing up in zuul21:08
sdaguelets take the philosophy to later21:08
*** Guest81 has quit IRC21:08
sdaguecan this get dumped?21:08
sdakeso reached out to openstack-infra21:08
dhellmannjeblair : yeah, I think my more common pattern is what sdake just said: create something with cookiecutter then import it. maybe I'm the special one.21:08
sdakesdague if thats possible21:08
sdakesdague do so, and these guys can do this disruptive work in a week21:08
jeblairdhellmann: i use cookiecutter to create the initial commit21:08
clarkbdoes zuul have an unenqueue to go with enqueue? that might be the other alternative but I think not without dumping, restarting, and enqueuing only what we want?21:08
jeblairi'm just incensed by the idea that we would force people to not use our system because of this21:09
sdakecookiecutter is a bit out of date so needs some manual fixups21:09
pabelangerwhat is the downside of zuul continuing to merge the patches? Missed deadlines right?21:09
*** tonytan4ever has quit IRC21:09
jeblairinstead of just saying, hey, someone messed up21:09
jeblairpeople do that21:09
jeblairlet's fix it21:09
sdaguejeblair: right, I agree21:09
dhellmannjeblair : I also like to try to set up the jobs when I import the repo, so the tests run from the start with any "real" content. like I said, maybe I'm doing it wrong.21:09
sdaguelets fix it21:09
fungiclarkb: the dequeue feature was never completed. i think that patch is still partially implemented and under review21:09
clarkbpabelanger: potentially yes, since ~12 hours from now is freeze crunch time I think21:09
sdaguepabelanger: right21:09
clarkbpabelanger: ttx's working day basically21:09
sdaguebasically it's a DOC on our release21:09
sdagueDOS21:09
jeblairi want to help, but i need a minute to cool off21:09
sdaguejeblair: ok, cool, np21:09
pabelangerIIRC, this happened this morning too. Has anybody seen what the downtime was?21:10
dhellmannjeblair : sorry, didn't mean to tick you off :-(21:10
clarkbpabelanger: there is no downtime21:10
clarkbpabelanger: it just slows things down due to the n^2 merge problem21:10
sdakesorry guys -if i had been consulted - different outcome21:10
clarkb(at least that is my understanding of it)21:10
dhellmannjeblair : I guess I'm just coming from a different perspective21:10
clarkbpabelanger: so new changes are not queued to run their jobs as the zuul mergers are all working overtime to enqueue these changes21:10
fungipabelanger: no downtime, just a prioritization concern. people want to make sure that release-critical (to openstack) work isn't slowed by non-release-critical/unofficial project testing that isn't tied to the release21:10
clarkbeventually it will get through it21:10
pabelangerclarkb: well, nodepool is currently not launching nodes.  That's the downtime I was referring too21:10
*** priteau has quit IRC21:11
clarkbpabelanger: yes its not doing that because zuul is only very slowly queueing new jobs due to the merge backlog (I think)21:11
pabelangerRight21:11
sdakeso AI for me is to get them to stop all work on networking-vpp until dhellmann gives me a green light21:11
sdakeanything else?21:11
fungiwe've generally treated all projects equally, in some part because implementing a project prioritization solution would be complicated21:11
jeblairyeah, the queue processors are running21:11
pabelangerclarkb: fungi: So a quick look at grafana shows it took about 4 hours for zuul to clear out21:11
jeblairmerge backlog seems plausible21:11
rcarrillocruzpabelanger: ok, the mirror is up21:11
rcarrillocruzhow can i make sure the afs is sane and all21:12
rcarrillocruz?21:12
jeblairdo we have a number on that?21:12
*** jcoufal has quit IRC21:12
jeblairrcarrillocruz: just access the mirror over http21:12
clarkbjeblair: I haven't checked gearman but can pretty quickly21:12
pabelangerrcarrillocruz: if you can access http://mirror.regionone.infracloud-vanilla.openstack.org/ and see repos, that is usually all I do21:12
clarkbmerger:merge    7668    8       821:12
clarkbso ya I think thats it21:12
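Those numbers come from gearman's admin interface; a sketch of the check, assuming geard is listening on its usual 4730 port on the zuul server:

    # columns: function  queued  running  registered workers
    echo status | nc localhost 4730 | grep '^merger:merge'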
rcarrillocruzo-k, that's fast, i thought it needed to transfer stuff or something21:13
pabelangernow I run, will catch up on backscroll21:13
mordredrcarrillocruz: it does the transfer in the background as needed21:13
fungiand probably ~1.3k of that is the backlog in the merge-check pipeline21:13
jeblairyeah, looking at a merger, it's chugging through vpp work21:13
fungisince it's low priority21:13
jeblairfungi: that's fine and shouldn't affect other things21:13
rcarrillocruzclarkb, clarkb : do I revert the revert for nodepool infracloud now21:13
rcarrillocruz?21:13
clarkbrcarrillocruz: if the mirror is working then sure21:14
sdakeclarkb fungi dhellmann need any other actions out o me?21:14
sdakeo/of21:14
rcarrillocruzyup, i see folders, and they have content21:14
fungiright, just saying we'll get to merge requests for all other changes before the merge requests for the merge-check pipeline are processed21:14
rcarrillocruzi'll push21:14
jeblairprobably we're looking at check changes being behind a backlog of merges for the vpp changes in check21:14
jeblairfungi: yeah21:14
raddaouiclarkb: just FYI I tried to create an instance with the image uploaded (trusty) and I couldn't ssh or ping its ipv6 address unlike the one we have21:14
fungiso ultimately it's more like a 6.6k backlog on merges21:14
jeblairand the mergers have to stack 177 changes up for each21:14
jeblairthe *good* news is this gradually gets faster...21:15
clarkbraddaoui: the image we uploaded?21:15
jeblairas the mergers start to remember changes they've seen and don't need to fetch them21:15
sdaguejeblair: umm... yeh, but iirc the last time it was like days21:15
clarkbsdague: no this happened last night it took a few hours21:15
sdagueand after 12 hours we just killed and restarted21:15
raddaouiyes the Ubuntu 14.04.5 LTS (Trusty Tahr) Daily21:15
sdagueclarkb: at 177?21:15
sdakeclarkb dhellmann crafting email now - please let me know if i should provide further instructions rather than "STOP ALL ACTIVITY"21:15
sdagueoh, I see the node graph now21:16
rcarrillocruzin other news, i got a quick reply from HPE DC folks, they claim they fixed the cabling issues of at least a couple servers21:16
*** ldnunes has quit IRC21:16
raddaouithe Ubuntu 14.04 LTS works fine with ipv621:16
rcarrillocruzi'll provision them tomorrow then21:16
sdagueright, so that's like a 6 hour halt?21:16
clarkbsdague: I think it would be ok for them to push a few patches at a time and iterate through them that way21:16
clarkber21:16
clarkbsdake: ^21:16
mordredrcarrillocruz: you're killing it with the new servers. I'm also impressed with the response you're getting from the DC ops folks21:16
*** tphummel has joined #openstack-infra21:16
sdagueclarkb: probably, but given that we've had directions challenges in the past, it's probably just safer to ask them to wait 2 weeks21:16
sdakercarrillocruz ++ :)21:16
clarkbraddaoui: we uploaded a daily? I am confused21:17
rcarrillocruz;-)21:17
*** shashank_hegde has quit IRC21:17
sdagueand regroup with a mentor after that21:17
sdakei should be their mentor21:17
sdakenot sure why they didn't ask21:17
*** shashank_hegde has joined #openstack-infra21:17
sdakeprobably because i am so overloaded i can barely come up for air.. :(21:17
clarkbraddaoui: pabelanger likely knows what the image story is but if you did two nics I want to say there may be issues auto configing the second21:17
*** matbu is now known as matbu|afk21:17
raddaouiyeah I didn't upload it and you guys were using it with your VMs21:17
fungijeblair: if we rebased just the change closest to the branch tip so that all the other changes are invalidated for an unmergable parent, would that clear them out quickly or do the mergers still have to try to merge each of those?21:18
clarkbraddaoui: I want to say we ran into that on cloud1 when we did the two nics there. We basically just had to enable eth1 and then it picked up the RAs21:18
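The fix there is small; roughly (a sketch, interface name as in this discussion):

    # bring up the second interface so it hears router advertisements and autoconfigures
    sudo ip link set eth1 up
    ip -6 addr show dev eth1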
*** rossella_s has quit IRC21:18
*** annegentle has quit IRC21:18
*** e0ne has quit IRC21:18
*** rossella_s has joined #openstack-infra21:18
jeblairfungi: checking21:18
*** gouthamr has quit IRC21:18
raddaouino actually I am just using one interface eth021:19
fungii guess the mergers would still have to pick the work requests up out of gearman, but still might short-circuit after that21:19
*** dprince has quit IRC21:19
openstackgerritDoug Hellmann proposed openstack-infra/infra-manual: emphasize the prefered way for importing repository history  https://review.openstack.org/36394121:19
dhellmannjeblair : ^^21:19
*** e0ne has joined #openstack-infra21:19
*** e0ne has quit IRC21:20
jeblairfungi: i don't think we cancel merger requests, and merger requests have the whole data set with them, so i don't think that would work21:21
*** ansiwen has quit IRC21:21
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/project-config: Revert "Revert "Enable infracloud servers in Nodepool""  https://review.openstack.org/36394221:21
jeblairfungi: it's a good idea, and if this weren't changing in v3 anyway, i'd say it should behave like that :)21:21
*** spzala has quit IRC21:21
clarkbI am watching the merge job count fall so we are working through it just not super quickly21:21
*** spzala has joined #openstack-infra21:22
clarkb7589 queued now21:22
rcarrillocruzclarkb, pabelanger : ^ if you don't mind...21:22
jeblairclarkb: have you worked out a slope yet to get an eta? (i think you can subtract 1.6k as fungi said)21:22
clarkbraddaoui: unfortunately I don't know where that image came from so don't know much about how it is configured to do networking21:22
clarkbjeblair: pabelanger said it took 4 hours last night21:22
* clarkb looks at irc timestamps to produce rough jobs per minute number21:23
fungigot it. so really if we're not going to wait it out, we need to dump a copy of the pipelines we can salvage (check and gate), restart zuul, edit the check export to remove all changes for networking-vpp, then start zuul and then reenqueue the other old check and gate changes21:23
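A rough sketch of that dump/edit/re-enqueue sequence, using the zuul-changes.py helper and zuul client as they were typically used at the time; paths and arguments here are from memory, not verified:

    # capture re-enqueue commands for the pipelines worth saving
    python /opt/zuul/tools/zuul-changes.py http://zuul.openstack.org check > check.sh
    python /opt/zuul/tools/zuul-changes.py http://zuul.openstack.org gate > gate.sh
    # drop the networking-vpp changes from the check list
    sed -i '/networking-vpp/d' check.sh
    # restart zuul, then replay the saved enqueue commands
    bash gate.sh && bash check.sh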
jeblairfungi: oh, one other option:21:23
rcarrillocruzthx pleia221:23
pleia2sure21:23
pleia2rcarrillocruz: exciting times :)21:24
rcarrillocruzindeed ! :D21:24
fungiit looks like the most recent tag for the release pipeline ran the important jobs, so we won't lose anything critical there21:24
fungithough we'll lose a couple hours of jobs queued up for the post pipeline21:24
*** mdrabe has quit IRC21:24
clarkbat this rate less the 1.6k we will be done in about 12.5 hours21:24
*** mdrabe has joined #openstack-infra21:25
clarkbrate is ~8 per minute21:25
jeblairfungi: we could write a zuul-merger that pops jobs off the stack and fails them quickly, and exits as soon as it sees something that isn't networking-vpp.21:25
fungiclarkb: taking acceleration into account, or was that just a linear burn-down estimate?21:25
clarkbfungi: linear21:25
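(For reference, the linear estimate works out: roughly 7589 queued merge jobs minus the ~1.6k merge-check backlog leaves about 6,000, and at ~8 jobs per minute that is ~750 minutes, or about 12.5 hours.)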
jeblairclarkb: the rate will increase, but not dramatically.  i think that helps us get an order of magnitude.21:25
fungifair enough21:25
jeblairi don't think the custom zuul-merger is a good idea.  i think we should dump/edit/reload.21:25
*** spzala has quit IRC21:26
*** aviau has quit IRC21:26
*** yolanda has joined #openstack-infra21:26
fungii think getting dequeue working (i guess for v3 at this point) is also a good longer-term idea21:26
*** aviau has joined #openstack-infra21:26
clarkbjeblair: ya custom zuul-merger seems like a way to potentially mangle things with bad merge info21:26
*** fguillot_ has joined #openstack-infra21:27
clarkb(if we get it wrong)21:27
fungibut yeah, i feel like there's a lot more risk if we rush a hacked-up merger into place to shear away the networking-vpp changes21:27
fungior what clarkb said21:27
jeblairoh, heh, that's actually a better idea than mine.  but they're still both harder than just restarting.21:27
fungiat least dump/reenqueue is a devil we know21:27
jeblairwhere are we on nodepool restart?21:27
jeblairshould we roll one into this?21:27
jeblairmordred: ^?21:27
clarkbthe builder needs a restart to pick up new cloud infos but unsure of the main daemon and shade situation21:28
*** sdague has quit IRC21:28
fungithe gearman server clearing is going to tank all our in progress image uploads anyway, right/.21:28
fungi?21:28
clarkbfungi: yup21:28
clarkbso I will restart the builder21:28
jeblairyeah, so we can do the builder restart with no additional impact21:28
fungiso might as well roll a nodepool and builder restart into the mix21:28
*** waht has quit IRC21:29
mordredjeblair: the shade patch still hasn't landed21:29
fungii'm happy to do either the zuul or the nodepool part21:29
fungimordred: did it pass tests?21:29
mordredfungi: yes. well, it passed the important ones21:29
mordredfungi: the current issue was battling the devstack config change21:30
fungiwe could just apply it while we're restarting21:30
jeblairmordred: is it a thing we should manually apply and restart, or does it still need more time?21:30
*** Illumitardi has quit IRC21:30
mordredwe could do that - I don't think it's terribly dangerous - shall I get prepped for that?21:30
jeblairmordred: yeah, let's21:30
mordredk. one sec21:30
*** zz_dimtruck is now known as dimtruck21:31
fungihopefully at least not dangerous the way asps are (very!) dangerous21:31
clarkbI can get the nodepool builder since I told pabelanger I would do that earlier21:31
jeblairdhellmann: ++21:31
*** nstolyar_ has quit IRC21:32
mordredfungi: ASPs?21:32
fungimordred: those are, in fact, especially dangerous21:32
fungiglad to no longer have to care about any of the webservers that ran them21:33
jeblairfungi: bad dates.21:33
mordredI'm installing new shade before restart21:33
*** fguillot_ has quit IRC21:33
fungiheh. nice cross-reference21:33
* clarkb hears a whoosh go over his head21:33
fungiclarkb: indiana jones and the lost ark quotes21:34
mordrednew shade installed - http://paste.openstack.org/show/565320/ are the dependencies that changed in case we need to revert21:34
fungithanks mordred21:34
jeblairthere are only 4 vpp changes in check21:34
*** annegentle has joined #openstack-infra21:35
jeblairso when we're ready to start --21:35
fungithe rest haven't been queued yet i guess?21:35
jeblairi'll stop/edit/start/re-enqueue zuul21:35
jeblairfungi: there's no backlog21:36
fungiahh21:36
jeblairfungi: i think it's just 4 changes with hundreds of dependencies21:36
jeblairclarkb: will restart builder21:36
fungioh, yep. that 'splains it21:36
jeblairmordred: will restart nodepool21:36
mordredyup21:36
jeblairfungi: will quote movies21:36
clarkband I can get nodepool-builder if mordred isn't including that in nodepoold restarting21:36
mordredI'm tailing the nodepool debug log to look for bad tracebacks21:36
jeblairclarkb: let's put you on builder duty21:37
clarkbwfm21:37
jeblairmordred: will just do main daemon21:37
jeblaireverybody set?21:37
mordredmain daemon, standing by21:37
*** sarob has quit IRC21:37
clarkbI am going to go ahead and stop builder now, then start again when mordred is happy with nodepoold21:37
mordred++21:37
jeblairoh wait21:38
clarkbthats done so ready when you are21:38
*** sarob has joined #openstack-infra21:38
jeblairfungi: i have a job for you21:38
*** thorst has quit IRC21:38
mordred2016-08-31 21:38:06,393 DEBUG nodepool.NodePool: Instance ubuntu-trusty-rax-dfw-9172205 (3aad4b32-694a-4964-9bce-9b67e1f20c2a) in rax-dfw has no nodepool metadata21:38
jeblairfungi: can you stand by to restart all the zuul mergers?21:38
*** rwsu has quit IRC21:38
mordredI don't believe I've seen that before21:38
fungijeblair: standing by now21:38
jeblairmordred: still ready?21:38
*** rwsu has joined #openstack-infra21:38
mordredstanding by21:39
jeblairmordred, clarkb, fungi: and go :)21:39
jeblairzuul is stopped21:39
mordrednodepool stopped21:39
mordrednodepool started21:40
mordredjeblair: do you care about this: http://paste.openstack.org/show/565321/21:40
jeblairzuul is restarted21:41
clarkbmordred: did that happen before or after shutdown?21:41
fungimergers have all restarted now21:41
mordredclarkb: after the start21:41
openstackgerritDavid Lyle proposed openstack-infra/project-config: Add craton-dashboard repository (Horizon Plugin)  https://review.openstack.org/35427421:41
mordredclarkb: but I'm not seeing it again21:41
clarkbmordred: ya I think thats a race between main thread initing db and setting global config and event handler for job finished21:42
jeblairmordred: huh, maybe there's a race... right that.  :)21:42
clarkbmordred: it may mean that we will leak that node though21:42
clarkbmordred: so maybe check that instance in 15 minutes to see if the cleanup routines don't somehow handle it21:42
fungijust confirmed, zuul-merger processes all have a recent start time now21:43
openstackgerritSagi Shnaidman proposed openstack-infra/tripleo-ci: POC: WIP: oooq undercloud install  https://review.openstack.org/35891921:43
mordredok. I'm not seeing anything particularly unhappy in the logs21:43
jeblairthe 8 hour timeout for used nodes would catch it eventually21:43
clarkbmordred: ya from what I see of the service it looks to be happy so far21:44
clarkbmordred: we want to see osic instances get the right ip info though right?21:44
ijwHey, sorry about the patch push earlier21:45
sdakedhellmann clarkb - all work on networking-vpp is halted until clarkb gives me a green light21:45
ijwAs we talked this over yesterday afternoon I thought I was in the clear21:45
mordredclarkb: yah21:45
*** thiagop has quit IRC21:46
clarkbwait I am the stoplight?21:46
sdakeiwj sierra happens :)21:46
clarkbijw: we didn't realize at the time that it was gumming up the works21:46
sdakeijw that is21:46
fungiijw: yesterday it was at a time of day where there was very little going on21:46
clarkbijw: and basically we want to get feature freeze out the door21:46
mrhillsmanmordred fungi clarkb can you all just use the mirror in cloud1 for cloud8?21:46
*** dimtruck is now known as zz_dimtruck21:46
mrhillsmanthey are both in the same DC21:46
dhellmannsdake, clarkb : I can act as stop light21:46
mordredmrhillsman: not really21:46
clarkbdhellmann: yes I think you would be better than me :)21:47
sdakedhellmann sounds good i'll sync up with you next week and the week after21:47
ijwAnd per previous comments, I've come to this somewhat late, so the problem we have is one group created the repo empty - hence the otherwise silly way to import history21:47
dhellmannsdake : ok21:47
mordredmrhillsman: the mirrors are inferred from cloudname+regionname in the setup scripts on the build nodes21:47
dhellmannijw : I think we're just going to want you to go a few patches at a time. zuul doesn't cope well with extremely deep series like that.21:47
ijwBut yes, sdake got to me and I can certainly stop.21:47
jeblairdhellmann, sdake, ijw: particularly -- merging the patches is important21:48
ijwdhellmann: can do if you like.21:48
fungimordred: while it does feel suboptimal, and might pose scaling problems, what are the other complications besides just having to put the cloud8 mirror name in dns for the same ip addresses as the cloud1 mirror?21:48
mrhillsmanok, i'll work on it21:48
mordredfungi: oh - that's an idea21:48
clarkbmordred: http://paste.openstack.org/show/565322/ is that logging what you expect out of osic cloud1?21:48
dhellmannijw, sdake : let's sync up tuesday (monday being a holiday) and see how things are looking21:48
ijwFor what it's worth, I think it is a shortcoming that we can't import history post-creation, though I understand your viewpoint21:48
jeblairdhellmann, sdake, ijw: so not only just pushing a small number of patches, but since they are dependent on each other, making sure that only a small number are open.  that's what hurt us this time, that zuul was preparing hundreds of patches together for a single change21:48
mordredclarkb: nope. one sec ...21:48
sdakejeblair ya makes sense21:49
dhellmannjeblair : I was going to suggest that they start cherry-picking from the bottom of their stack and go ~5 at a time. Would that work?21:49
ijwYeah, fine - I wasn't aware it was setting up an O(n^2) task (it's not obvious from the outside).21:49
clarkbfungi: I think that may be the best workaround if we can't get the IP into the cloud in a way that tools want21:49
ijwLet me go patch weeding, and I'll submit a couple this evening and see how that works.21:50
mrhillsmani'm quite sure this will not get looked at by networking until tomorrow21:50
clarkbmrhillsman: we aren't going to have to deal with weird bw bottlenecks if we do that?21:50
dhellmannijw : no, please do not submit any more patches until next week21:50
mrhillsmanbut should be possible21:50
clarkbmrhillsman: like only 100mbps between regions or similar?21:50
mrhillsmani do not believe so but would hate for that not to be the truth21:50
jeblairdhellmann: if you literally mean cherry-picking -- i think the same patch history can still be preserved -- ie, it's okay for them to be dependent on each other (still have the same git parents).  we just want to keep the number of outstanding unmerged patches small.21:51
mordredclarkb: oh! yes.21:51
mordredclarkb: that is correct21:51
mordredclarkb: nodepool is reporting public v4 and v621:51
mrhillsmanthey should all be next to each other but i'd imagine it would be a concern of the routing21:51
mordredand we expect this node to not have public v421:51
clarkbmordred: yup that sounds right to me21:51
mrhillsmani'll work on getting the proper setup in place regarding that address21:51
dhellmannjeblair : ok, I'm not sure how to take N patches in a series and only submit 5 of them without picking them into a new branch that doesn't include the N-5 patches.21:51
clarkbmordred: you think this is happy then? I can start the builder up?21:51
mordredclarkb: yah21:52
mrhillsmanjust hoping delay is not much21:52
openstackgerritK Jonathan Harker proposed openstack-infra/project-config: Add integration tests between system-config and logstash-filters  https://review.openstack.org/32072921:52
clarkbmordred: jeblair fungi I am starting the builder now21:52
*** zz_dimtruck is now known as dimtruck21:52
dhellmannjeblair : I mean, I guess just "git checkout $sha" at the 5th item?21:52
clarkband thats done. says it is listening for jobs21:52
jeblairdhellmann: ah yeah, i see what you're saying.  yes i think that will work.21:53
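Mechanically that looks something like the sketch below; the branch and remote names are just the usual gerrit conventions, nothing specific to this repo:

    # find the sha of the 5th commit up from the branch point, then push only up to it
    git log --oneline --reverse origin/master..HEAD | sed -n 5p
    git push gerrit <sha-of-5th-commit>:refs/for/master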
fungidhellmann: basically what we need to avoid is a zuul-triggering event for any change with lots of open parent _or_ child changes in gerrit21:54
*** tphummel has quit IRC21:54
dhellmannok. so there's also a long list of existing open patches in that repo https://review.openstack.org/#/q/project:openstack%2Fnetworking-vpp+is:open,n,z21:55
dhellmannit sounds like those should be merged before we do anything else?21:55
jeblairfungi, dhellmann, sdake: yeah, we'll want to be careful approving the existing changes in gerrit.  should only approve those one at a time, for starters, starting with the change closest to the branch tip21:55
mordredclarkb: uhm ...21:55
clarkbmordred: yes?21:56
jeblairzuul will need to walk the whole tree to see if it needs to enqueue the children.  that will take it a little while.21:56
jeblair(the first time, at least, when nothing is cached)21:56
dhellmannright21:56
raddaouiclarkb, pabelanger  can you look at the VMs in the openstackci project? they are both directly connected to the provider network and they are sshable21:56
mordredclarkb: we're at very low building-nodes count21:56
mordredclarkb: http://grafana.openstack.org/dashboard/db/nodepool21:56
jeblairmordred: i agree.  looking.21:56
clarkbmordred: we have a bunch ready21:56
*** yolanda has quit IRC21:56
clarkbwell relative to building21:56
*** nwkarsten has quit IRC21:57
jeblairoh, we probably leaked a bunch during the restart21:57
raddaouiyeah clarkb from reading history I think rcarrillocruz uploaded the trusty image21:57
clarkbyup there is a ton used21:57
*** jamesdenton has quit IRC21:57
clarkbraddaoui: ah in that case it may be a dib built image which we will want to replace with the canonical/ubuntu published ones for now21:57
jeblairclarkb, mordred: i'm going to clean some of those up21:58
clarkbmordred: jeblair I think we can just delete all used > 20 minutes21:58
raddaouiit does not work with both v4 and v6 I guess21:58
clarkbjeblair: kk21:58
clarkbraddaoui: it depends on config drive, is one attached?21:58
clarkbactually thats not true hrm21:58
ijwdhellmann, jeblair: I mean literally a couple of patches.  If I have to patch / approve / patch / approve, so be it, but I understand what I shouldn't be doing at this point, I think21:58
clarkbit should dhcp by default for v4 but maybe it doesn't v6 in that case21:58
*** manjeets- has joined #openstack-infra21:58
dhellmannijw : really, seriously, and truly please do not do anything with that repo this week.21:58
ijwdhellmann: would you like me to kill the patches up for review21:59
*** hrybacki|afk is now known as hrybacki21:59
openstackgerritDavid Lyle proposed openstack-infra/project-config: Add craton-dashboard repository (Horizon Plugin)  https://review.openstack.org/35427421:59
manjeets-hello infra folks I want to enable a extension_driver in conf file by default for some tempest tests21:59
*** javeriak has quit IRC21:59
dhellmannijw : I would like nothing to be touched at all, for now. When I'm done with the milestone I will have time to help get things merged carefully. I don't have that time this week.21:59
ijwdhellmann: ok, all good21:59
dhellmannijw : thanks21:59
raddaouino just eth022:00
*** tphummel has joined #openstack-infra22:00
mordredclarkb, raddaoui: I believe we've had issues with second interfaces not being configured to pick up stuff by default22:00
mordredpabelanger has puppet to fix it for the osic-cloud1 mirror22:01
manjeets- for a devstack-gate deployment i want the test to enable an extension driver by default,22:01
mordredso - it's a known thing with the base images aiui22:01
jeblair2016-08-31 22:01:07,294 INFO nodepool.NodePool: Need to launch 401 ubuntu-xenial nodes for zuul on osic-cloud122:01
mordredjeblair: that'll be fun22:01
mordredcloudnull: ^^ buckle up22:01
jeblairit's doing it right now :)22:01
openstackgerritSagi Shnaidman proposed openstack-infra/tripleo-ci: TEST: DONT RECHECK: periodic jobs  https://review.openstack.org/35921522:01
jeblairwell, actually it's issuing glance image list and flavor list over and over22:02
jeblaircause i guess we're not caching those right yet22:02
mordredjeblair: to the same cloud?22:02
jeblairyep22:02
mordredsigh22:02
mordredI thought that was long since sorted22:02
*** shashank_hegde has quit IRC22:02
clarkbmanjeets-: if you want something enabled by default the best place to do that is devstack22:03
clarkbmanjeets-: devstack-gate should really only be used to do things like configure non-defaults for specific tests or make testing work in a non-interactive manner22:03
mordredjeblair: I'll look in to that tomorrow - unless you think it's choking us too badly right now22:04
manjeets-clarkb, https://review.openstack.org/#/c/354447/ I have a api test which covers a scenario22:04
clarkbmordred: we have 955 in building so I think its probably working ok22:04
jeblairmordred: yeah, it's just slowing us, not killing us22:05
manjeets-all I want is to set a parameter in ml2_conf.ini: extension_drivers = dns22:05
*** ansiwen has joined #openstack-infra22:05
mordredjeblair: ok. good. it sounds like a good "first thing when I wake up" thing to fix - rather than a "last thing before I start drinking"22:05
dhellmannthanks for resetting things, everyone, it looks like the jobs at the front of the queues have started up again22:06
jeblairmordred: ++22:06
jeblairdhellmann: np22:06
*** fguillot has quit IRC22:07
cloudnullmordred: rutro-shaggy22:07
* cloudnull goes for a beer leaving pager on desk22:08
clarkbmanjeets-: as I said if you want to set a default configuration for one of the projects typically the best place for that is devstack22:08
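For a single job, the usual devstack mechanism for a non-default like this is a local.conf post-config section; a sketch, using the standard $Q_PLUGIN_CONF_FILE variable (note you may need to list any extension drivers that are already enabled alongside dns):

    cat >> local.conf <<'EOF'
    [[post-config|/$Q_PLUGIN_CONF_FILE]]
    [ml2]
    extension_drivers = dns
    EOF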
mordredcloudnull: I lost my pager ... golly, 15 years ago?22:08
*** hashar has quit IRC22:08
cloudnullsadly i have a "smart" phone now.22:09
*** signed8bit has joined #openstack-infra22:09
*** fguillot has joined #openstack-infra22:09
mordredcloudnull: my phone has been set in "do not disturb or even ring" mode for quite some time :)22:09
clarkbI remember when I had two because I refused to be on call on my personal phone22:09
cloudnullso far so good, life has not exploded yet.22:09
rbergeroni actually flushed my pager down a toilet once22:09
clarkbmordred: android has a handy feature that is only make noise if its from someone in this list22:09
rbergeronduring my on-call week no less22:10
clarkbrbergeron: were you testing its water resistance?22:10
*** spzala has joined #openstack-infra22:10
*** esp has quit IRC22:10
*** adriant has joined #openstack-infra22:11
clarkbmanjeets-: though devstack may just copy whatever is in neutron's example config for that22:11
*** esp has joined #openstack-infra22:11
clarkbmanjeets-: so you may have to update neutron's example config22:11
cloudnull271939 info and 355 error messages processed in the last 10 min. with the spike going down on every refresh.22:11
cloudnulli think we're through the build storm22:12
manjeets-clarkb, that's autogenerated i guess22:12
pabelangercatching up on backscroll22:12
*** mriedem has quit IRC22:12
manjeets-need to figure out what it reads before autogenerating22:12
cloudnullrbergeron: thats how you win the pager game.22:12
cloudnull:)22:12
*** akshai has quit IRC22:12
rbergeronclarkb: not really, it just fell out of its holster as i flushed... it's as though it knew its true destiny22:12
cloudnullhahahahaha ^22:12
clarkbmanjeets-: ya I think it uses the oslo config objects inside neutron to generate the file so if you edit that it may do the right thing22:13
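If the default does land in neutron itself, the sample config is regenerated from those oslo.config objects; a sketch, assuming the usual tox environment name:

    # regenerate neutron's example configuration files after changing a default in code
    tox -e genconfig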
rbergeroncloudnull: would have been better if it hadn't been my joke threat for years22:13
clarkbwhen I was oncall we did once get a nice smartphone photo of another smartphone in a toilet22:14
cloudnullit was a freudian slip, into the toilet22:14
rbergeron"if we dont get more than 3 ppl in this rotation, one of these days i swear i'm gonna flush this thing down the toilet"22:14
*** asettle has joined #openstack-infra22:14
*** gordc has quit IRC22:14
pabelangerraddaoui: clarkb: checking the server in osic-clou8 now22:14
*** spzala has quit IRC22:15
*** piet has quit IRC22:15
clarkbpabelanger: did you see the suggestion of maybe just putting dns records in place that point cloud8 at cloud1 for now22:15
*** spzala has joined #openstack-infra22:15
*** xyang1 has quit IRC22:15
pabelangerclarkb: Ya, we can do that if people are fine with that22:16
mrhillsmanon a side note i submitted change request regarding the IP; not sure again of time to resolution but ball is rolling22:16
mrhillsmanbut if we can do something temporarily to help out like the dns change, that would be great22:16
mrhillsmanat least the resources can be used in some manner22:16
pabelangerYup, if other infra-root are good with that, I can update DNS records now22:16
*** inc0 has quit IRC22:17
*** eharney has quit IRC22:17
*** esberglu has quit IRC22:17
clarkbya I think that would work fine for now particularly if we start with a small number of instances (which we have been)22:17
*** esberglu has joined #openstack-infra22:18
mordredyah22:19
mordredI thnk it's a fine thing to do22:19
pabelangerhttp://mirror.regionone.osic-cloud8.openstack.org22:19
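Once the records are repointed, a quick sanity check from any node would look roughly like this (a sketch):

    # both names should resolve to the cloud1 mirror's addresses
    dig +short mirror.regionone.osic-cloud8.openstack.org A
    dig +short mirror.regionone.osic-cloud8.openstack.org AAAA
    curl -sI http://mirror.regionone.osic-cloud8.openstack.org/ | head -1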
mordredand then also, if there are problems, we can know about it22:19
*** asettle has quit IRC22:19
mordredwoot22:19
clarkbpabelanger: also builder was restarted22:19
pabelangerclarkb: nice22:20
*** srobert has quit IRC22:20
*** cardeois has quit IRC22:21
pabelangerclarkb: ha, we need to restart it again. osic-cloud8 patch hasn't landed on nodepool.o.o yet22:21
*** piet has joined #openstack-infra22:21
jheskethMorning22:23
*** esberglu has quit IRC22:24
jeblairoh huh22:25
jeblairi don't really understand why we're doing weird things with dns for cloud922:25
jeblaircloud822:25
jeblaircan someone status log that?22:25
jeblairand maybe ping infra-root with an explanation22:26
jeblaircause that's super weird22:26
mordredjeblair: lemme tl;dr you first - and lets see if we can turn it into a useful status22:26
* clarkb can backup explain22:26
mordredjeblair: the networking in cloud8 is such that our mirror is behind the double nat - so our automation has no idea what the actual ip of the server is ... the cloud8 people are looking in to fixing this, but there are things outside of their immediate control22:27
mordredjeblair: in the mean time, it was suggested as a workaround to just use the cloud1 mirror since they're in the same data center by pointing the dns record there22:27
mordredthat way the cloud8 people can work on getting the ips sorted in parallel22:27
jeblairgotcha22:27
jeblairmordred: i think i'd just status log that :)22:27
mordredthat's a lot of status log :)22:28
* mordred tries22:28
*** Sukhdev has joined #openstack-infra22:28
mordred#status the networking in cloud8 is such that our mirror is behind the double nat - so our automation has no idea what the actual ip of the server is ... the cloud8 people are looking in to fixing this, but there are things outside of their immediate control22:28
openstackstatusmordred: unknown command22:28
*** chem is now known as chem|off22:28
mordredgah22:28
mordredit's #status log isn't it?22:28
jeblairyep22:28
clarkbya22:28
mordred#status log the networking in cloud8 is such that our mirror is behind the double nat - so our automation has no idea what the actual ip of the server is ... the cloud8 people are looking in to fixing this, but there are things outside of their immediate control22:28
openstackstatusmordred: finished logging22:28
mordred#status log in the mean time, it was suggested as a workaround to just use the cloud1 mirror since they're in the same data center by pointing the dns record there22:29
openstackstatusmordred: finished logging22:29
mordred#status log that way the cloud8 people can work on getting the ips sorted in parallel22:29
openstackstatusmordred: finished logging22:29
*** annegent_ has joined #openstack-infra22:30
jeblairmordred: why doesn't this affect nodepool's use?22:31
jeblair(only affects v4?)22:32
*** akshai has joined #openstack-infra22:32
mordredjeblair: yah - this is only for the floating ip22:32
mordredjeblair: the single floating ip we have there - because the cloud doesn't need any other ipv4 networks22:32
jeblairgotcha22:33
*** annegentle has quit IRC22:33
*** mdrabe has quit IRC22:33
*** Sukhdev has quit IRC22:34
*** akshai has quit IRC22:34
*** signed8bit is now known as signed8bit_Zzz22:35
*** sbezverk_ has quit IRC22:36
*** piet has quit IRC22:37
pabelangerthe other issue is, eth1 is currently down. So, if that is our ipv6 interface, we still cannot SSH22:37
*** shashank_hegde has joined #openstack-infra22:37
pabelangerwhen I did the mirror in osic-cloud1, ipv4 was eth022:37
mordredpabelanger: yah22:37
mordredI mean - one way to make that easier ...22:37
mordredwould be to attach the neutron router to GATEWAY_NET_V622:38
mordredso that the fip would attach to the ipv4 address on the same interface as the v622:38
cloudnullpabelanger: our error node launch attempt record is now ruined... my life is sad... :)22:38
mordredand the boot command would just be a single network - so a single nic22:38
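Roughly what that suggestion looks like against the cloud8 API (a sketch under assumptions: only GATEWAY_NET_V6 comes from the discussion; the router, subnet, image and flavor names are placeholders):

    import subprocess

    def osc(*args):
        # thin wrapper over the openstack CLI; assumes credentials are in the environment
        subprocess.run(("openstack",) + args, check=True)

    # attach the project router to the v6 network's subnet (names assumed)
    osc("router", "add", "subnet", "cloud8-router", "GATEWAY_NET_V6-subnet")

    # boot with a single nic on GATEWAY_NET_V6 so the floating ip lands on the
    # same interface that carries v6
    osc("server", "create",
        "--image", "ubuntu-xenial",    # assumed image name
        "--flavor", "8GB",             # assumed flavor name
        "--nic", "net-id=GATEWAY_NET_V6",
        "mirror.regionone.osic-cloud8.openstack.org")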
mordredcloudnull: yah?22:39
cloudnullhttp://grafana.openstack.org/dashboard/db/nodepool-osic?panelId=11&fullscreen22:39
cloudnull^ sad sad days.22:39
pabelangerpretty sure that is a nodepool failure, not osic-cloud122:39
pabelangerlet me confirm22:39
jeblairwhat an utter failure!22:39
jeblairlet's pack it all up and go home22:39
cloudnulljeblair: IRK...22:40
cloudnullit was good while it lasted22:40
cloudnull:)22:40
pabelangerYup, IndexError22:40
jeblairpabelanger: what happened with the logging change?22:40
pabelangerjeblair: I'm just checking that actually22:40
*** vhosakot has quit IRC22:41
pabelanger362455, needs a +322:41
*** akshai has joined #openstack-infra22:42
jeblairwell, that would have been nice to have in before this restart22:42
*** tphummel has quit IRC22:42
dhellmanndoes anything special need to be done on the signing node to have it start picking up tag-releases jobs again because of the restart?22:44
*** Sukhdev has joined #openstack-infra22:45
clarkbdhellmann: we might have to requeue jobs22:46
openstackgerritK Jonathan Harker proposed openstack-infra/project-config: Ensure that gerrit projects have zuul pipelines  https://review.openstack.org/36396922:47
dhellmannclarkb : I just approved a patch and there's a job queued up now22:47
clarkbjeblair: did you capture the release queue by chance when you restarted zuul?22:47
*** zhurong has joined #openstack-infra22:47
*** mriedem has joined #openstack-infra22:47
dhellmannclarkb : usually those are picked up in seconds22:47
dhellmannoh, this job wasn't even enqueued until 5 minutes ago22:47
clarkbdhellmann: hrm maybe I misunderstand what you are asking22:47
dhellmannthere's a job queued up in release-post. it has been waiting for 6 minutes for the only node that can run it, which shouldn't be doing anything else afaik. is there a way to see what's in the queue for the special signing node?22:48
dhellmannI'm wondering if a worker lost contact with a server or something22:48
jeblairclarkb: no, someone said it was okay22:49
clarkbdhellmann: oh gotcha22:49
dhellmannjeblair : yeah, there weren't any queued up anyway22:50
clarkbthe last log line for the zlstatic launcher is from 2211UTC22:50
clarkbI wonder if it didn't reregister with the gearman server after it restarted22:50
clarkbthat log line was an onfinalized message so presumably it finished that job then didn't start any others22:51
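One way to check that hunch from the outside (a sketch; the geard host and the "build:" function-name prefix are assumptions about the zuul v2 setup): gearman's plain-text admin protocol answers "status" with one function per line, so workers that failed to re-register show up with zero available workers.

    import socket

    def gearman_status(host="zuul.openstack.org", port=4730):  # assumed geard location
        # "status" returns "function\ttotal\trunning\tavailable_workers" lines, ending with "."
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(b"status\n")
            data = b""
            while not data.endswith(b".\n"):
                chunk = sock.recv(4096)
                if not chunk:
                    break
                data += chunk
        return data.decode()

    for line in gearman_status().splitlines():
        fields = line.split("\t")
        if len(fields) == 4 and fields[0].startswith("build:") and fields[3] == "0":
            print("no workers registered for", fields[0])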
jeblairclarkb:22:51
jeblairhttps://review.openstack.org/35080722:51
*** zhurong has quit IRC22:52
clarkbdoes that mean I should restart the zuul launcher on static now?22:52
jeblairclarkb: yep.  also +3ing the change would be a nice touch.22:52
clarkbya will review that as soon as I restart the launcher process22:53
clarkbhrm service restart didn't work22:53
clarkbit stopped but didn't start /me tries explicit start22:53
jeblairfungi: how would you feel about increasing the max line width to 100 in zuul?22:55
*** Sukhdev has quit IRC22:55
openstackgerritMatt Riedemann proposed openstack-infra/devstack-gate: DNM: testing cellsv2 grenade/devstack run  https://review.openstack.org/36397122:55
dhellmannclarkb : there it goes22:55
jeblairtests/test_scheduler.py:386:80: E501 line too long (80 > 79 characters) dict(name='project-merge', result='SUCCESS', changes='1,1 2,1 3,1'),22:55
clarkbdhellmann: cool means restart worked22:56
clarkbnow to review the fix22:56
jeblairfungi: i'm dealing with a bunch of those sorts of things -- it's a case where i think we're generally hurting legibility22:56
dhellmannclarkb , jeblair : thanks again!22:56
jeblairdhellmann: np, hopefully that's the last time we hit that error :)22:56
dhellmannjeblair : I have to wrap at 65 cols in my book, so I feel your pain22:56
dhellmannhmm, the lp comment script looks like it's hung again, though. telnet://signing01.ci.openstack.org:1988522:57
clarkbjeblair: I can do 3 columns at 80 wide on current monitor which is kind of nice22:58
jeblairdhellmann, clarkb: this diff makes me sad: http://paste.openstack.org/show/565447/22:58
dhellmannjeblair: yeah22:59
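To make the tradeoff concrete (illustrative only, not zuul code: just the entry from the paste, measured at its original indentation):

    # the single-line form from test_scheduler.py
    single = ("            "
              "dict(name='project-merge', result='SUCCESS', changes='1,1 2,1 3,1'),")
    # the wrap that the 79-column default forces
    wrapped = ["            dict(name='project-merge', result='SUCCESS',",
               "                 changes='1,1 2,1 3,1'),"]
    print(len(single))                         # 80 -> E501 today, fine with a 100 limit
    print(max(len(line) for line in wrapped))  # passes flake8, reads worse in a diff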
dhellmannand there goes the lp script, too22:59
dhellmannwoot, we just tagged a release in ci22:59
*** rbrndt has quit IRC22:59
jeblairall hail ci releasing overlords22:59
jeblairdhellmann: that means you're free to write books now, right?23:00
dhellmannjeblair : it at least means I can go on vacation on the final release date again, since I can do that from my phone23:00
pabelangerYay, that is nice23:01
*** annegent_ has quit IRC23:01
*** gouthamr has joined #openstack-infra23:02
openstackgerritJames E. Blair proposed openstack-infra/zuul: Re-enable test_failed_change_at_head  https://review.openstack.org/36382123:02
dhellmannI'm going to try another one, skipping the "clarkb restarts the service" step this time. ;-)23:03
fungisorry, had to disappear to make/eat food. catching back up23:03
dhellmannfungi : release automation is working!23:03
*** pradk has quit IRC23:03
openstackgerritMerged openstack-infra/project-config: Slowly bring internap-mtl01 back online  https://review.openstack.org/36393123:04
fungimordred: jeblair: clarkb: one thing i didn't even bring up, but i have a suspicion that afs through a double-nat would be... troublesome23:06
openstackgerritMerged openstack-infra/project-config: Revert "Revert "Enable infracloud servers in Nodepool""  https://review.openstack.org/36394223:07
fungithough maybe it would just be fine23:07
jeblairfungi: well, any nat is 'trouble' so a double nat is 'double trouble'....23:07
jeblairfungi: but theoretically, it maybe might still possibly work.23:07
rcarrillocruzweeeeee23:08
jeblairfungi: as long as you don't have more than one client23:08
*** esp has quit IRC23:08
*** Benj_ has quit IRC23:09
fungijeblair: you mean increase the max width of lines in code? i could survive it23:09
openstackgerritMatt Riedemann proposed openstack-infra/project-config: Add stable-maint-core to os-vif gerrit ACL  https://review.openstack.org/36397823:10
pabelangerhere we go23:10
pabelangerinternap-mtl01 and infracloud-vanilla just came online in nodepool23:10
mgagne:D23:10
*** edtubill has quit IRC23:10
fungijeblair: i mean, conventional wisdom is that if you have overly long lines because of lots of levels of indentation, you need to break it up into more modular functions/methods, but sometimes that's just bs23:10
openstackgerritDoug Hellmann proposed openstack-infra/project-config: wait to publish releases.o.o until after tagging  https://review.openstack.org/36397923:11
mgagneyea, saw the merge and update in grafana23:11
*** coreyob has quit IRC23:11
openstackgerritSagi Shnaidman proposed openstack-infra/tripleo-ci: POC: WIP: oooq undercloud install  https://review.openstack.org/35891923:11
mgagnefungi: when you know that the line length limit takes its origin from punch cards :D23:11
rcarrillocruz2016-08-31 23:12:37,481 INFO nodepool.NodeLauncher: Creating server with hostname ubuntu-xenial-infracloud-vanilla-4022814 in infracloud-vanilla from image ubuntu-xenial for node id: 402281423:12
pabelangerrcarrillocruz: did you say ipv6 is now working in infracloud network?23:13
openstackgerritMatt Riedemann proposed openstack-infra/project-config: Add nova-stable-maint to os-vif gerrit ACL  https://review.openstack.org/36397823:14
rcarrillocruzhaven't asked the NET folks yet23:14
rcarrillocruzwill follow up with my EMEA contact tomorrow23:14
*** tpsilva has quit IRC23:14
pabelangerrcarrillocruz: cool23:14
*** jamielennox|away is now known as jamielennox23:14
fungimgagne: i have a fondness for punch cards23:14
fungiway better than programming a computer by reordering circuit boards in the frame23:15
*** akshai_ has joined #openstack-infra23:15
mgagnehttps://en.wikipedia.org/wiki/Characters_per_line23:15
mgagnewith some models this number was reduced by half, to 40 CPL23:15
mgagne40 max line length, awesome :D23:16
jeblairfungi: http://paste.openstack.org/show/565447/ is specifically what i'm looking at23:17
pabelangerrcarrillocruz: clarkb: I'm going to work on a buildimage job tomorrow to create ubuntu-minimal images for our control plane, to at least get the ball rolling23:17
pabelangerthen see why the mirror failed with afs23:18
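For reference, roughly what such a buildimage job would invoke (a sketch, not the job as it ended up; the element list and output name are assumptions):

    import os
    import subprocess

    # DIB_RELEASE selects the Ubuntu release for the ubuntu-minimal element
    env = dict(os.environ, DIB_RELEASE="xenial")
    subprocess.run(
        ["disk-image-create", "-o", "ubuntu-minimal-xenial",
         "ubuntu-minimal", "vm",   # minimal Ubuntu plus a bootable image layout
         "simple-init"],           # glean-style network config, as on test nodes (assumed)
        env=env, check=True)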
rcarrillocruzcool23:18
*** Julien-zte has joined #openstack-infra23:19
*** akshai has quit IRC23:19
openstackgerritMerged openstack-infra/zuul: Ansible launcher: re-register functions after disconnect  https://review.openstack.org/35080723:21
*** krtaylor has quit IRC23:22
*** jamielennox is now known as jamielennox|away23:22
openstackgerritMerged openstack-infra/project-config: Upload nodepool images to osic-cloud8  https://review.openstack.org/35736423:23
*** gyee has quit IRC23:24
pabelangermgagne: nc 198.72.124.71 1988523:24
mgagneso you found the one job running in mtl01 =)23:24
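The same check in long form (a sketch; it just reads the raw console stream on port 19885 the way the nc one-liner does):

    import socket
    import sys

    def stream_console(host, port=19885):
        # the launcher serves the live console log for the job on that node here
        with socket.create_connection((host, port)) as sock:
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                sys.stdout.write(chunk.decode(errors="replace"))

    stream_console("198.72.124.71")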
clarkbwait did the change we needed for the builder not even merge yet :P ok we can restart it again now that there are no jobs23:24
pabelangerclarkb: ya23:25
pabelangerinfracloud failed to schedule the node23:25
*** salv-orlando has quit IRC23:25
*** Julien-zte has quit IRC23:25
*** hockeynut has joined #openstack-infra23:26
fungijeblair: yeah, the only way to wrap and keep it semi-readable is to switch to one parameter per line in those functions, but that just means a lot fewer function calls on your screen23:26
*** jamielennox|away is now known as jamielennox23:28
*** hongbin has quit IRC23:28
pabelangermgagne: success23:28
mgagne:D23:28
mgagnepabelanger: up to you to increase quota.23:29
pabelangermgagne: lets see what tomorrow holds23:30
pabelangerbut I don't see a reason not to23:30
*** nwkarsten has joined #openstack-infra23:30
mgagnejust fewer humans to respond in case of problems tonight =)23:30
mgagnepabelanger: 150 would be a reasonable value, (up from previous suggested 120)23:31
*** fguillot has quit IRC23:31
mgagnemaybe prepare the change and merge tomorrow23:31
pabelangermgagne: Yup, I'm about to walk away for the night, but feel free to propose it23:31
mgagneok, will do23:31
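Before approving a bump like 363984, a quick look at the current caps helps (a sketch; it assumes the pre-Zuulv3 nodepool.yaml layout in project-config, with max-servers on each provider entry):

    import yaml

    with open("nodepool/nodepool.yaml") as fh:  # path inside openstack-infra/project-config
        config = yaml.safe_load(fh)

    for provider in config.get("providers", []):
        print("{:<25} max-servers: {}".format(
            provider["name"], provider.get("max-servers", "n/a")))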
openstackgerritMatt Riedemann proposed openstack-infra/project-config: Add cinder-stable-maint to os-brick  https://review.openstack.org/36398223:32
rcarrillocruzi'm also walking away23:32
*** esp has joined #openstack-infra23:32
rcarrillocruztalk to you tomorrow folks23:32
rcarrillocruzg'night23:32
pleia2rcarrillocruz: nice job today! night :)23:33
*** kzaitsev_mb has quit IRC23:33
*** krtaylor has joined #openstack-infra23:34
*** coreyob has joined #openstack-infra23:35
*** jamielennox is now known as jamielennox|away23:36
*** gongysh has joined #openstack-infra23:37
*** Swami has quit IRC23:37
*** oomichi has quit IRC23:39
*** gildub has joined #openstack-infra23:39
openstackgerritMathieu Gagné proposed openstack-infra/project-config: Set max-servers value to 150 for internap-mtl01  https://review.openstack.org/36398423:39
*** oomichi has joined #openstack-infra23:39
*** oomichi has quit IRC23:40
*** gyee has joined #openstack-infra23:41
*** Sukhdev has joined #openstack-infra23:41
*** claudiub has quit IRC23:42
*** Sukhdev has quit IRC23:43
*** yuanying has quit IRC23:43
*** yuanying has joined #openstack-infra23:44
*** oomichi has joined #openstack-infra23:44
*** fguillot has joined #openstack-infra23:48
*** jerryz has quit IRC23:49
*** dingyichen has joined #openstack-infra23:54
clarkbmgagne: you happy for ^ to happen whenever?23:58
mgagnepabelanger: suggested we merge this change tomorrow23:58
mgagneclarkb: pabelanger suggested ^23:58
mgagneclarkb: since there will be more humans available to react23:59
clarkbkk I +2'd it23:59
mgagneclarkb: I'm leaving the office now, but it's up to you or any infra-root23:59
*** rwsu has quit IRC23:59
