Tuesday, 2018-01-30

00:12 <pabelanger> ianw: you online today?
00:12 <ianw> yep
00:13 <pabelanger> ianw: k, mind if I share an update on zuul.o.o?
00:13 <ianw> pabelanger: nope ... happy to help if i can
00:14 <pabelanger> basically, http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=64792&rra_id=all is pretty flat now, but if you don't mind keeping an eye out for it rising. I believe most issues with memory might have been around too many topic:zuulv3-projects approved, but we haven't completely confirmed
00:15 <pabelanger> but, when it was swapping, I was able to save queues using the crontab that dmsimard|afk setup.  I had issues stopping zuul-scheduler, had to use kill -9, haven't looked into why that was
00:15 <pabelanger> was going to do that in the morning
00:16 <pabelanger> that's basically it for zuul today
00:17 <ianw> ok
00:17 <pabelanger> ianw: 537933 might need to be promoted to top of integrated change queue to help fix a timeout issue with nova, that would help cut down on amount of gate resets
00:17 <pabelanger> ianw: in case mriedem is looking for it
00:17 <ianw> i haven't restored queues from the json, i think i saw some instructions go by?
00:17 <pabelanger> ianw: let me get the syntax I used
00:18 <ianw> 537933 worth a force merge?
00:19 <ianw> i will watch its progress, if it fails
00:19 <pabelanger> ianw: I had to copy /var/lib/zuul/www/backup/status_<timestamp> to /tmp
00:19 <pabelanger> then ran
00:19 <pabelanger> python /opt/zuul/tools/zuul-changes.py file:///tmp openstack gate >gate.sh
00:20 <pabelanger> and check too
00:20 <pabelanger> then bash -x gate.sh as normal
00:20 <pabelanger> and check
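
A minimal sketch of the restore procedure pabelanger describes above, assuming zuul-changes.py reads <url>/status.json (which is why the backup file is copied to that name first); the paths and the "openstack gate" arguments are taken from the conversation, not verified elsewhere:

    # Re-enqueue gate changes from the most recent pre-crash status dump (sketch).
    sudo cp /var/lib/zuul/www/backup/status_<timestamp> /tmp/status.json
    python /opt/zuul/tools/zuul-changes.py file:///tmp openstack gate > gate.sh
    less gate.sh        # sanity-check the generated enqueue commands first
    bash -x gate.sh     # replay them against the scheduler
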
00:21 <ianw> ok, cool
00:21 <pabelanger> a patch would be to symlink status_<timestamp>.json to status.json, then we can just use the /var/lib/zuul/www/backup folder
00:21 <pabelanger> something for another day
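
A hedged sketch of that symlink idea, as an extra line in the backup cron job; the backup directory and file naming are assumptions from the discussion, not the real cron:

    # Keep status.json pointing at the newest dump so the backup dir can be used directly.
    cd /var/lib/zuul/www/backup && ln -sf "$(ls -t status_* | head -1)" status.json
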
00:21 <pabelanger> ianw: good luck :D
00:22 <ianw> thanks :)  catch you later
00:31 <ianw> playing with bringing up a mirror in linaro cloud
00:31 <ianw> ironically, the thing it would benefit from mostly is probably a ... mirror :)
00:32 <ianw> getting to ubuntu upstream repos seems a little unreliable
00:35 <dmsimard|afk> ianw, pabelanger: I have a patch up to restore from file
00:36 <dmsimard|afk> https://review.openstack.org/#/c/536622/
00:36 <dmsimard|afk> There is a system-config doc change as well: https://review.openstack.org/#/q/topic:zuul-changes
00:37 <dmsimard|afk> Once that lands, we can consider moving the backups somewhere not web related
00:38 <dmsimard|afk> FWIW I wasn't aware file:/// worked o_O
00:41 <EmilienM> bnemec: ack, thx
00:52 <openstackgerrit> liusheng proposed openstack-infra/zuul master: Fix AttributeError when handle periodic job with github driver  https://review.openstack.org/536645
01:04 <openstackgerrit> Ian Wienand proposed openstack/diskimage-builder master: GPT partitioning support  https://review.openstack.org/533490
01:32 <EmilienM> ianw: hey
01:33 <EmilienM> ianw: I'm testing yum puppetlabs mirror
01:33 <EmilienM> ianw: it's failing on http://logs.openstack.org/72/537572/9/check/puppet-openstack-integration-4-scenario001-tempest-centos-7/2873abe/job-output.txt.gz#_2018-01-30_01_06_21_629963
01:33 <EmilienM> ianw: I wonder if we synced the mirror lately
02:07 <ianw> EmilienM: i'll check the job in a little
02:07 <ianw> and get back to you
02:07 <EmilienM> ianw: ack
02:07 <EmilienM> ianw: I'm here the next few hours
02:11 <prometheanfire> is there a bug I can subscribe to for why gate keeps on backing up?
02:13 <clarkb> prometheanfire: I don't think it was a single thing. log server lost a volume. eventlet broke some projects. zuul OOM'd. neutron unittests timeout. nova functional tests time out and so on
02:13 <clarkb> this is pretty normal for a feature freeze. We've just not had a busy feature freeze in a while :/
02:15 <prometheanfire> we haven't uncapped eventlet, was that an os package thing?
02:15 <prometheanfire> fair enough though :D
02:16 <clarkb> ya it was some ubuntu python update interaction that newer eventlet fixed
02:16 <clarkb> I'm not sure what actions have been taken to address it
02:17 <prometheanfire> well, eventlet was on my hopes for uncapping this cycle, but didn't work out
02:17 <prometheanfire> that's all
02:17 <prometheanfire> it's on the shortlist for rocky
03:34 <ianw> EmilienM: ok, the mirror job seems to be going
03:35 <EmilienM> ianw: syncing you mean?
03:35 <ianw> well, running, yeah
03:36 <ianw> that file isn't upstream? http://yum.puppetlabs.com/el/7/dependencies/
03:36 <openstackgerrit> Emilien Macchi proposed openstack-infra/project-config master: Remove reno jobs for some tripleo projects  https://review.openstack.org/539081
03:38 <EmilienM> ianw: weird
03:38 <EmilienM> ianw: yum trying to find it
03:38 <EmilienM> let me check repo config
03:39 <EmilienM> http://logs.openstack.org/72/537572/9/check/puppet-openstack-integration-4-scenario001-tempest-centos-7/2873abe/logs/repolist.txt.gz
03:39 <EmilienM> sounds broken
03:39 <EmilienM> config is here: http://logs.openstack.org/72/537572/9/check/puppet-openstack-integration-4-scenario002-tempest-centos-7/8c9a7f2/job-output.txt.gz#_2018-01-30_01_16_00_923179
03:40 <EmilienM> let me find why
03:42 <openstackgerrit> Ian Wienand proposed openstack-infra/system-config master: Add aarch64 sources configuration  https://review.openstack.org/539083
03:43 <EmilienM> ianw: I'll try without the second repo
03:43 <EmilienM> I found out that our existing jobs don't use it
03:46 <EmilienM> ianw: it's weird $basearch isn't used by the second repo
03:46 <ianw> hmm, it's all noarch ruby stuff?
03:47 <EmilienM> not sure
03:47 <EmilienM> anyway, trying without this repo: https://review.openstack.org/#/c/537572/
03:47 <EmilienM> let's see how that works
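
For reference, a hedged sketch of a .repo stanza of the kind being debugged here; the directory layout is inferred from the URL ianw linked above and is not verified, and gpgcheck is disabled only to keep the example short. The quoted heredoc also illustrates EmilienM's point: $basearch has to reach yum literally and be expanded by yum itself, not by whatever tool writes the file.

    # Minimal sketch of a puppetlabs dependencies repo using $basearch (assumed layout).
    sudo tee /etc/yum.repos.d/puppetlabs-deps.repo >/dev/null <<'EOF'
    [puppetlabs-deps]
    name=Puppet Labs dependencies el 7 - $basearch
    baseurl=http://yum.puppetlabs.com/el/7/dependencies/$basearch
    enabled=1
    gpgcheck=0
    EOF
    # The quoted 'EOF' keeps the shell from expanding $basearch; yum does that at runtime.
    yum --disablerepo='*' --enablerepo=puppetlabs-deps list available | head
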
03:53 <EmilienM> ianw: can you please review https://review.openstack.org/#/c/539081/ ?
04:02 <openstackgerrit> Matthew Treinish proposed openstack-infra/storyboard master: Add MQTT notification publisher  https://review.openstack.org/538575
04:05 <openstackgerrit> Matthew Treinish proposed openstack-infra/storyboard master: Add MQTT notification publisher  https://review.openstack.org/538575
04:08 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool master: Implement a static driver for Nodepool  https://review.openstack.org/535553
04:09 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool master: nodeutils: use socket.getaddrinfo instead of ipaddress  https://review.openstack.org/539086
04:13 <EmilienM> ianw: same problem with http://logs.openstack.org/72/537572/11/check/puppet-openstack-integration-4-scenario001-tempest-centos-7/702106e/job-output.txt.gz#_2018-01-30_03_54_23_730924
04:13 <EmilienM> mhh
04:15 <EmilienM> I feel like $basearch is interpreted
04:16 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool master: Implement a static driver for Nodepool  https://review.openstack.org/535553
04:20 <EmilienM> ianw: nevermind, I think I used the wrong repo, let me fix
04:47 <EmilienM> ianw: i've got something working
04:47 * EmilienM off
04:50 <ianw> ok, good :)  happy to help if the mirror needs tweaking
05:17 <openstackgerrit> Merged openstack-dev/pbr master: Updated from global requirements  https://review.openstack.org/537022
05:27 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool master: Implement a static driver for Nodepool  https://review.openstack.org/535553
05:38 <prometheanfire> AJaeger: old restart or new one?
05:43 <openstackgerrit> Merged openstack-infra/project-config master: Remove reno jobs for some tripleo projects  https://review.openstack.org/539081
05:47 <AJaeger> prometheanfire: what do you mean?
05:51 <AJaeger> ianw: could you review https://review.openstack.org/#/c/538182/ and https://review.openstack.org/#/c/538185/ , please? Those are needed for queens release
06:00 <prometheanfire> you rechecked something 30 min ago
06:02 <AJaeger> prometheanfire: ah - that was for the restart from yesterday around 14:00 UTC
06:06 <prometheanfire> ah, guess it was missed, thought the others went through
06:43 <ianw> zuul's memory usage looking pleasingly static
06:52 <openstackgerrit> Merged openstack-infra/project-config master: Use horizon publish job for blazar-dashboard  https://review.openstack.org/538182
06:56 <openstackgerrit> Ghanshyam Mann proposed openstack-infra/project-config master: Remove legacy ec2-api jobs  https://review.openstack.org/539109
06:56 <openstackgerrit> Merged openstack-infra/project-config master: Add publish-to-pypi in blazar-nova repo  https://review.openstack.org/538185
07:00 <openstackgerrit> Ghanshyam Mann proposed openstack-infra/openstack-zuul-jobs master: Remove legacy ec2-api jobs  https://review.openstack.org/539111
07:03 <openstackgerrit> Ghanshyam Mann proposed openstack-infra/project-config master: Remove legacy ec2-api jobs  https://review.openstack.org/539109
07:10 <AJaeger> ianw, prometheanfire: Thanks to that static memory usage, I'll recheck one by one.
07:11 <AJaeger> prometheanfire: we added all changes back after the restart with the exception of some that change zuul configs since we suspected them to be responsible. So, I'm rechecking them now..
07:15 <prometheanfire> AJaeger: do I need to recheck https://review.openstack.org/538842 ?
07:20 <AJaeger> prometheanfire: check http://zuul.openstack.org/ - and you see it's in the gate pipeline. But this is so long that the "Starting gate" message was not even sent ;( So, nothing to do for this one
07:20 <prometheanfire> ok
07:20 <prometheanfire> that's what I thought (been looking at the queue)
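
A hedged way to do that check from the command line rather than the web UI, assuming the status.json schema (pipelines -> change_queues -> heads -> items whose "id" looks like "538842,<patchset>") and that jq is available:

    # Is change 538842 currently enqueued anywhere, and in which pipeline?
    curl -s http://zuul.openstack.org/status.json | jq -r '
      .pipelines[] | .name as $p
      | .change_queues[].heads[][]
      | select(.id != null and (.id | startswith("538842,")))
      | "\($p): \(.id)"'
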
08:41 <cmurphy> i think the gate is broke https://zuul.openstack.org/builds.html
08:41 <cmurphy> specifically the novnc package on ubuntu http://logs.openstack.org/84/537784/4/check/tempest-full-py3/05fec34/job-output.txt.gz#_2018-01-30_07_07_20_473050
08:43 <prometheanfire> I just got a notification from gate
08:43 <prometheanfire> so not totally broken
08:53 <cmurphy> novnc seems okay to me, maybe the problem already fixed itself
08:54 <AJaeger> fixed by another mirror sync?
08:54 <cmurphy> maybe?
08:56 <AJaeger> hopefully ;)
08:56 * cmurphy crosses fingers
08:58 <openstackgerrit> Balazs Gibizer proposed openstack-infra/project-config master: consolidate nova job definitions  https://review.openstack.org/538908
09:46 <cmurphy> looks still broken :/ http://logs.openstack.org/46/525346/35/check/tempest-full/5ca494b/job-output.txt.gz#_2018-01-30_09_26_15_599484
09:54 <openstackgerrit> Matthieu Huin proposed openstack-infra/zuul master: zuul autohold: allow filtering per commit  https://review.openstack.org/536993
09:58 <eyalb> we also have the same problem http://logs.openstack.org/16/538816/1/gate/vitrage-dsvm-api-py27/b63f990/logs/devstacklog.txt.gz#_2018-01-30_07_38_41_362
10:06 <cmurphy> not finding any recent changes in the upstream novnc or python-numpy or websockify packages
10:10 <bauzas> is it me or are the top changes in the gate pipeline stuck?
10:11 <bauzas> speaking of the integrated queue
10:11 <bauzas> oh no
10:12 <bauzas> the top one is a requirements patch, which is potentially holding the nova change at the bottom
10:46 <lyarwood> also seeing the novnc deps issue here FWIW
11:11 <openstackgerrit> Matthieu Huin proposed openstack-infra/zuul master: zuul autohold: allow filtering per commit  https://review.openstack.org/536993
11:15 <openstackgerrit> Matthieu Huin proposed openstack-infra/zuul master: [WIP] zuul web: add admin endpoint, enqueue commands  https://review.openstack.org/539004
12:27 <tosky> anyone else noticed issues with the xenial images and devstack?
12:28 <tosky> I see "The following packages have unmet dependencies:" "novnc : Depends: python-numpy but it is not going to be installed" "Depends: websockify but it is not going to be installed"
12:28 <tosky> namely here https://review.openstack.org/#/c/304019/
12:28 * tosky bbl
12:29 <tosky> and I'm not the only one, see https://review.openstack.org/531206 (that's your job, chandankumar )
12:30 <chandankumar> tosky: yup
12:31 <chandankumar> tosky: cmurphy was talking about this in the morning, i do not know the status about the same
12:32 <cmurphy> i posted https://review.openstack.org/#/c/539178/2 to try to help debug but it would be nice if an infra-root was around to look
12:35 <cmurphy> aha, i can reproduce
12:36 <cmurphy> infra-root: I think it's something wrong with the latest image, I can reproduce the problem on a new image built with the build-image.sh script but on my regular ubuntu vms using the CI mirrors the packages are fine
12:37 <odyssey4me> that sounds like the images were built using packages more recent than available in the mirrors
12:38 <cmurphy> that sounds about right
12:38 <cmurphy> libgfortran3 : Depends: gcc-5-base (= 5.4.0-6ubuntu1~16.04.5) but 5.4.0-6ubuntu1~16.04.6 is to be installed
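
A quick, hedged way to confirm that diagnosis on a node booted from the suspect image, using the package cmurphy quotes; nothing is installed, it only compares the installed version against what the configured mirror can offer:

    dpkg-query -W -f='${Package} ${Version}\n' gcc-5-base
    sudo apt-get update
    apt-cache policy gcc-5-base novnc websockify python-numpy
    # An installed gcc-5-base newer than the mirror's candidate (16.04.6 installed vs
    # 16.04.5 candidate) reproduces the "is not going to be installed" errors in the jobs.
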
13:15 <openstackgerrit> Tobias Henkel proposed openstack-infra/nodepool master: Fix relaunch attempts when hitting quota errors  https://review.openstack.org/536930
13:27 <cmurphy> it actually just seems like the mirror is out of sync, on a new vm using archive.ubuntu.com things are fine
13:30 <tosky> cmurphy: but of course we can't just blindly retry until we find a mirror
13:30 <cmurphy> tosky: i think they're all out of sync
13:30 <tosky> ouch
13:34 <AJaeger> cmurphy: the mirrors are on AFS - so in sync ;(
13:34 <AJaeger> cmurphy, tosky, we need an infra-root to help again
13:34 <cmurphy> AJaeger: in sync with each other but not in sync with upstream i think
13:34 <AJaeger> cmurphy: yep
13:35 <AJaeger> time for another #status notice? What should I send?
cmurphy"devstack jobs are currently failing due to mirror issues, more info shortly" ?13:39
odyssey4menot just devstack - anything using ubuntu-xenial13:39
odyssey4mewell, at least every functional test13:40
cmurphyanything trying to install certain packages on ubuntu-xenial13:40
cmurphy"some ubuntu-xenial jobs are currently failing due to mirror issues, more info shortly" ?13:40
AJaegerWhat about #status notice Our ubuntu-xenial images (used for e.g. unit tests and devstack) are currently failing to install any packages, please restrain from *recheck* or *approve* until the issue has been investigated and fixed13:41
cmurphy+113:41
AJaegercmurphy, odyssey4me thanks13:41
odyssey4mesgtm13:41
AJaeger#status notice Our ubuntu-xenial images (used for e.g. unit tests and devstack) are currently failing to install any packages, restrain from *recheck* or *approve* until the issue has been investigated and fixed.13:41
openstackstatusAJaeger: sending notice13:41
13:42 <cmurphy> not seeing anything crazy in cacti so far
13:42 <AJaeger> #thanks cmurphy for your investigation
13:43 -openstackstatus- NOTICE: Our ubuntu-xenial images (used for e.g. unit tests and devstack) are currently failing to install any packages, restrain from *recheck* or *approve* until the issue has been investigated and fixed.
13:43 <cmurphy> :)
13:43 <fungi> catching up, do we need to pause image builds and roll back the most recent additions?
13:44 <fungi> ahh, image packages newer than mirror package lists
13:44 <openstackstatus> AJaeger: finished sending notice
13:44 <openstackstatus> AJaeger: Added your thanks to Thanks page (https://wiki.openstack.org/wiki/Thanks)
13:45 <AJaeger> fungi: according to what cmurphy found out: Yes, please roll back
13:45 <AJaeger> good morning, fungi!
13:51 <openstackgerrit> Jeremy Stanley proposed openstack-infra/project-config master: Pause ubuntu-xenial image uploads  https://review.openstack.org/539213
13:52 <AJaeger> fungi, took the liberty to +2A ^
13:52 <fungi> config-core: i think that ^ is how we pause uploads and can manually apply it (on the builders?) with puppet disabled if so
13:54 <fungi> #status log nb01, nb02 and nb03 have been placed in the emergency maintenance list in preparation for a manual application of https://review.openstack.org/539213
13:54 <openstackstatus> fungi: finished logging
13:56 <mordred> fungi, AJaeger: wow. what a week we've been having
13:56 <cmurphy> it must be an exciting life to wake up to fires every day
13:57 <AJaeger> mordred: when it rains, it pours - or what's the saying?
13:57 <mordred> AJaeger: yup. that's right
13:57 <mordred> cmurphy: 'exciting' ... yes
13:57 <mordred> fungi: can I do anything to help - I'm assuming you're on the rollback already
14:01 <fungi> yeah, i'm fumbling my way through it
14:01 <AJaeger> EmilienM: just saw your email on openstack-dev - are you tracking the same problem? ^
14:02 <EmilienM> AJaeger: in the tripleo meeting right now
14:02 <fungi> i've updated the configs on the three builders and am trying to remember which order to delete in (image-delete the individual uploads, then dib-image-delete the source image? or the other way around?)
14:02 <EmilienM> I'll catch up in 1h or less
14:02 <tosky> what is the trigger that promotes a newly built image? Shouldn't that be gated too with some tests? (any basic devstack test would have caught the issue)
14:05 <fungi> tosky: devstack fails too often due to other random reasons. we used to do something like that and had to stop because it was less reliable than just assuming the images are viable
14:05 <tosky> uhm
14:05 <rosmaita> maybe an "image freeze" during milestone week?
14:06 <rosmaita> which i guess was actually last week anyway, so nevermind
14:06 <tosky> maybe it's possible to isolate package installation failures from other failures - the logs from apt (or yum) can be identified
14:08 <tosky> and on the other side, a failed devstack job for issues not related to the packages may slow down a new image by one or two days, but a real issue blocks the development for half a day in the best case
14:08 <fungi> mordred: do we have a convenience tool for this? i'm resorting to munging the tabular output from image-list into the several required parameters to image-delete
14:09 <mordred> fungi: I don't think so - but I think we should have one. "roll back to the previous ubuntu-xenial image" is a thing
14:10 <mordred> Shrews: ^^ ?
14:13 <fungi> okay, i seem to have gotten the four required parameters to the delete command correct
14:13 <mordred> \o/
14:16 <fungi> working on dib-image-delete for that one next
14:16 * Shrews was heads down in code... catching up
14:17 <mordred> Shrews: tl;dr - convenience command for nodepool to say "hey man, please to roll back image $label on all providers"
14:18 <Shrews> mordred: fungi: dib-image-delete should take care of all uploads before deleting the on-disk image
14:18 <fungi> Shrews: oh, it'll delete ready images based on that image id?
14:18 <Shrews> fungi: it should
14:18 <fungi> neat, i'll give that a shot next time
14:18 <Shrews> fungi: https://docs.openstack.org/infra/system-config/nodepool.html#bad-images
14:18 <fungi> thanks!
14:19 <fungi> certainly easier than
14:19 <fungi> sudo -H -u nodepool nodepool image-list|grep ' ubuntu-xenial .* 00:0'|cut -d'|' -f2-4 | sed -e 's/^/sudo -H -u nodepool nodepool image-delete --image ubuntu-xenial --build-id/' -e 's/|/--upload-id/' -e 's/|/--provider/'>foo.sh
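
The simpler path Shrews is pointing at, sketched with a placeholder build id (the dib-image-* commands appear in the log and the linked nodepool documentation; <build-id> is hypothetical):

    # With ubuntu-xenial uploads paused in the builder config:
    sudo -H -u nodepool nodepool dib-image-list                 # find the bad build's id
    sudo -H -u nodepool nodepool dib-image-delete <build-id>    # one command instead of one per provider upload
    # The builder marks the build DELETED and cleans up its provider uploads itself.
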
14:19 <Shrews> fungi: if you do the uploads yourself first, the builder may aggressively try to re-upload them before you delete the dib
14:19 <fungi> well, i had preconfigured them to pause
14:20 <Shrews> fungi: ah
14:20 <Shrews> just an extra step for yourself  :)
14:20 <fungi> though yes, it did that anyway because puppet was apparently already running when i added them to the emergency list so it went behind me and undid the pause while i was working on deletion syntax
14:21 <fungi> i've currently got a bunch that i can't delete until their uploads finish
14:22 <fungi> though sounds like nodepool should delete them for me as soon as they finish uploading, since i also told it to delete the source image
14:24 <fungi> #status log most recent ubuntu-xenial images have been deleted from nodepool, so future job builds should revert to booting from the previous (working) image while we debug
14:24 <Shrews> fungi: actually, i'm not 100% certain what happens when you try to delete a dib when there is a currently uploading image
14:24 <openstackstatus> fungi: finished logging
14:24 <Shrews> fungi: will have to remind myself with a code dive
14:24 <cmurphy> yay thank you fungi and AJaeger
14:24 <lennyb> Is it possible to configure zuul v2 so that one pipeline (gate) will abort the other one (test)? I mean if commit A is already in the gate there is no reason to run other tests on it
14:25 <mordred> lennyb: I'm not sure what you mean by 'the other one'?
14:25 <mordred> lennyb: you mean if check is still running and it goes in to gate to abort the jobs running in check (for instance)
14:27 <fungi> about the only time that situation crops up with our zuul config is when a change is already enqueued in the gate pipeline and someone leaves a "recheck" comment on it
14:28 <AJaeger> config-core, could you review https://review.openstack.org/#/c/538306/ to remove a job that is moved in-tree, please?
14:28 <lennyb> mordred, yes exactly
14:28 <AJaeger> fungi: we should send a #status notice that it's safe to recheck/approve again - see my previous notice. Will you do that or shall I?
14:29 <fungi> AJaeger: are jobs succeeding again already?
14:30 <AJaeger> fungi: so, you suggest to better verify that everything works? Yeah, probably better...
14:30 <fungi> i'm not where i can check that easily yet, nor am i certain i read scrollback closely enough to know which jobs were impacted
14:30 <AJaeger> cmurphy, tosky , any ideas? ^
14:31 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/zuul-jobs master: Add buildset-artifacts-location and fetch roles  https://review.openstack.org/530679
14:31 <Shrews> fungi: so, i *think* the builder will dtrt when you delete a dib that has an in-progress upload
14:31 <fungi> need to finish up my morning routine and then revisit the details so i can start digging into the corresponding mirror problem
14:31 <fungi> Shrews: thanks again. just going to tell myself i'm fuzz-testing nodepool commands ;)
14:32 <Shrews> fungi: build is marked as DELETED, existing uploads removed, cleanup thread will periodically go back and keep trying to remove uploads until all are gone, then the dib is removed from disk
14:33 <fungi> sounds perfect
14:33 <cmurphy> AJaeger: i just sent a bunch of rechecks to the patches I care about so I can report back if they worked :)
14:33 <fungi> thanks cmurphy! that's a huge help
14:33 <cmurphy> i mostly just want my patches merged >.>
14:33 <fungi> don't we all? ;)
14:35 * fungi will be back in a few moments to resume troubleshooting
14:36 <tosky> AJaeger: I can recheck my job that was failing
14:40 <openstackgerrit> Fabien Boucher proposed openstack-infra/zuul-jobs master: Propose to move submit-log-processor-jobs and submit-logstash-jobs in zuul-jobs  https://review.openstack.org/537847
14:42 <pabelanger> morning
14:42 <openstackgerrit> Merged openstack-infra/project-config master: Pause ubuntu-xenial image uploads  https://review.openstack.org/539213
14:45 <pabelanger> cmurphy: AJaeger: Shrews: fungi: odd, we should have been protected from that on ubuntu, but I think we dropped some configuration when we migrated to nodepool v3
14:45 <pabelanger> https://review.openstack.org/511492/
14:45 <pabelanger> I can repropose that
14:46 <AJaeger> please do
14:50 <openstackgerrit> Paul Belanger proposed openstack-infra/project-config master: Switch (back) ubuntu DIB to use AFS mirror in rackspace  https://review.openstack.org/539228
14:50 <pabelanger> config-core: cmurphy: ^ will help fix our DIB issues from this morning
14:50 <pabelanger> sorry about that, I proposed the original change to remove it (by mistake)
14:51 <cmurphy> thanks pabelanger
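
What that fix amounts to, sketched as a manual diskimage-builder run: DIB_DISTRIBUTION_MIRROR is the diskimage-builder knob involved, while the mirror URL and element list here are assumptions rather than the exact nodepool configuration.

    # Build the image against the in-cloud AFS mirror instead of upstream archive.ubuntu.com,
    # so image contents can never be newer than what the jobs' mirror serves.
    export DIB_RELEASE=xenial
    export DIB_DISTRIBUTION_MIRROR=http://mirror.dfw.rax.openstack.org/ubuntu
    disk-image-create -o ubuntu-xenial-test ubuntu-minimal simple-init vm
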
14:52 <openstackgerrit> Paul Belanger proposed openstack-infra/project-config master: Revert "Pause ubuntu-xenial image uploads"  https://review.openstack.org/539229
14:52 <pabelanger> and revert
14:53 <pabelanger> I'll see if AFS mirrors are working properly
14:56 <AJaeger> thanks, pabelanger
14:56 <pabelanger> yah, I've seen an issue
14:56 <openstackgerrit> Steve McLellan proposed openstack-infra/project-config master: Change searchlight-ui to use publish-to-pypi-horizon  https://review.openstack.org/539231
14:56 <pabelanger> let me get some coffee and start to clean it up
15:02 <mordred> lennyb: if so, no - I do not believe that is possible.
15:04 <cmurphy> novnc successfully installs now https://zuul.openstack.org/stream.html?uuid=4728ed69d92048698a99fe3f597820e3&logfile=console.log
15:05 <pabelanger> yah, our AFS mirrors haven't updated since 01-25-2018. Working to remove lock and verify repo, then will run reprepro again
15:05 <pabelanger> for ubuntu
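
For readers unfamiliar with the mirror machinery, a rough, hedged shape of that recovery; every path and volume name below is a placeholder rather than the real mirror-update configuration:

    # Clear the stale reprepro lock left by the dead run, verify, re-sync, then publish.
    rm -f /srv/reprepro/ubuntu/db/lockfile
    reprepro -b /srv/reprepro/ubuntu check
    reprepro -b /srv/reprepro/ubuntu update
    vos release mirror.ubuntu    # push the refreshed AFS volume out to the read-only replicas
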
15:09 <fultonj> hi pabelanger you mentioned on openstack-dev you have zuul/github doing initial testing w/ ansible/ansible ? is there a URL i could see for that?
15:12 <pabelanger> fultonj: You'll see the PRs on http://zuul.openstack.org/ but in check pipeline, they'll only run against shade jobs. Maybe mordred has a PR which shows the results, I'm not 100% everything is working correctly
15:12 <pabelanger> fultonj: they will just show as ansible/ansible for the project on zuul.o.o
15:13 <AJaeger> fultonj, pabelanger - not anymore, see https://review.openstack.org/538447
15:14 <pabelanger> Oh, right. I forgot about that
15:15 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul master: Set remote url on every getRepo in merger  https://review.openstack.org/535716
15:16 <pabelanger> clarkb: chown for jenkins user on logs.o.o is done
15:16 <EmilienM> AJaeger: back here - not exactly I think, we had some timeouts in our centos multinode jobs
15:17 <pabelanger> clarkb: we might want to 0775 directories too? Or update upload-logs to confirm that directory permission
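
A hedged one-liner for the 0775 idea; the log tree path is an assumption:

    # Normalize directory permissions under the published log tree (path assumed).
    sudo find /srv/static/logs -type d ! -perm 0775 -exec chmod 0775 {} +
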
15:17 <EmilienM> trown|rover, weshay|ruck : do we know the reasons of our timeouts yet? or at least which jobs are affected?
15:17 <openstackgerrit> Andreas Jaeger proposed openstack-infra/project-config master: Add ansible-role-k8s-cinder to zuul.d  https://review.openstack.org/534608
15:18 <AJaeger> EmilienM: we had xenial problems as well - just wanted to understand whether you're hit by the same or have some others...
15:19 <fultonj> AJaeger: i see you yanked out 14 lines of yaml, i'm now trying to understand what they did between zuul and github and clicking around in https://github.com/openstack-infra/project-config
15:19 <EmilienM> AJaeger: I don't think our issues were xenial problems although I'll need to check
15:19 <mordred> fultonj, pabelanger: yah - we had to turn them off for now
15:19 <EmilienM> AJaeger: our main issue now is multinode timing out a lot
15:20 <EmilienM> AJaeger: I'm checking details now
15:20 <mordred> fultonj: there is a bug currently where we were reporting merge issues on changes that weren't what we had configured to report on (it's an edge case)
15:20 <pabelanger> fetch-subunit-out just had a MODULE_FAILURE: http://logs.openstack.org/08/536508/2/gate/openstack-tox-validate/dd01df1/job-output.txt.gz#_2018-01-30_14_29_18_562411
15:20 <EmilienM> AJaeger: I'll continue on #tripleo
15:20 <pabelanger> does somebody have a minute to dig into why?
15:21 <fultonj> mordred: OK, so you plan to introduce it back after the bug is dealt with?
15:21 <mordred> fultonj: absolutely
15:22 <fultonj> mordred: tripleo is using ceph-ansible (github hosted) and i am interested in getting cross-ci working
15:22 <mordred> fultonj: yah - I think there's several good opportunities for using the support once we've got things ironed out
15:22 <fultonj> and using ansible/ansible as an example to ci how that could be done
15:22 <mordred> yes!
15:23 <fultonj> mordred: would you mind pointing me in the direction of the right submissions to read on how the ansible/ansible submission works?
15:23 <fultonj> by the time i figure it out you might have it fixed :)
15:24 * fultonj currently clicking through history of https://github.com/openstack-infra/project-config
15:26 <pabelanger> fultonj: https://docs.openstack.org/infra/zuul/admin/drivers/github.html is specific to the driver integration for zuul.  Then, users are able to use PRs from github as normal, and zuul will get the events.  Zuul docs then expand on ideas of pipelines / jobs, which is the same as gerrit integration
15:27 <fultonj> pabelanger: excellent, thank you!
15:27 <weshay|ruck> EmilienM, I wanted to get w/ you re: the timeouts and start to record each step of the deployment
15:27 <EmilienM> weshay|ruck: see #tripleo
15:27 <EmilienM> weshay|ruck: it's on our side probably, the undercloud takes more time than before to deploy
15:27 <weshay|ruck> k
15:28 <pabelanger> fungi: AJaeger: https://review.openstack.org/539228/ and https://review.openstack.org/539229/ fix and unpause ubuntu DIBs. Are we ready to land it?
15:28 <pabelanger> actually, I have to run an errand for a few hours in 20mins. So, also happy to wait until I am back to support
15:28 <sc`> from the peanut gallery: one of my xenial chef jobs completed its first of three chef runs successfully. the three times is a legacy thing, because once is good enough
15:30 <AJaeger> sc`: and it failed before?
sc`i had refrained from kicking off jobs earlier this morning. it's likely that it would have failed a couple hours ago15:31
*** b_bezak has joined #openstack-infra15:31
*** eharney has quit IRC15:32
*** armax has joined #openstack-infra15:33
*** kiennt26 has quit IRC15:33
pabelangerokay, no longer running errand, due to weather.15:33
*** lucas-hungry is now known as lucasagomes15:34
AJaegerpabelanger: 539229 has now two +2s - didn't +A - wanted you or fungi to approve and babysit15:34
*** Goneri has quit IRC15:35
*** b_bezak has quit IRC15:35
fungipabelanger: that MODULE_FAILURE looks like it happened in the "Check for stestr directory" task. i wonder if it should have been created during the run phase?15:36
*** adarazs_brb is now known as adarazs15:36
*** dsariel has quit IRC15:37
bcafarelhi infra-core, can you take a look at https://review.openstack.org/#/c/535390/ ? small review mostly to get meetbot in our channel (#networking-sfc)15:37
AJaegerbcafarel: note we can only merge this while no meetings happen15:39
*** andreas_s has quit IRC15:40
corvusfungi: i believe we decided we should build images from our mirrors, in order to solve the problem of mismatches.15:41
fungipabelanger: should i remove nb* and mirror-update from the emergency maintenance list?15:41
mordredfultonj: also, http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/main.yaml#n1361 is relevant config15:41
fungicorvus: correct, and we just approved the fix for the regression in that15:42
fultonjmordred: thank you15:42
mordredfungi: which MODULE_FAILURE?15:42
mordredah - I see it15:42
fungimordred: the one pabelanger linked15:42
fungiyeah15:42
corvusfungi: cool15:42
fultonjmordred: is there a bug for the revert patch?15:42
bcafarelAJaeger: ok thanks, as long as a merge window opens sometimes it will be fine :)15:42
fultonjhttps://review.openstack.org/#/c/538447 has no topic15:42
* fultonj just wants to follow the bug if there is one (clicking through launchpad)15:43
EmilienMwe are seeing a bunch of multinode performances issues, at least with rax-dfw now15:44
mordredfultonj: I do not believe we've created one yet - I'll let you know when we do15:45
*** andreas_s has joined #openstack-infra15:45
fultonjthanks15:45
*** slaweq_ has joined #openstack-infra15:45
mordredfungi, pabelanger: that's weird- that's a stat task - I'm not sure why that would MODULE_FAILURE15:46
EmilienMAJaeger, pabelanger, mordred : not sure if you're aware about RAX DFW issue but I opened 5 random gate timeouts in tripleo gate, and all from RAX DFW15:46
fungimordred: fultonj: https://storyboard.openstack.org/#!/story/2001333 looks close, from the one-sentence description, but hard to know without more detail15:46
EmilienMother clouds are fine15:46
fultonjthanks fungi15:46
corvusfungi, mordred, fultonj: we also need to work with the tc to come up with a plan for approving the use of openstack zuul for third-party reporting on non-openstack git repos15:46
clarkbtosky: fungi we also dont want to end up with special images that are required to get tests to pass. Its hard to determine if failure is "global" or "local" in that case15:46
mordredcorvus: yah15:46
*** felipemonteiro__ has joined #openstack-infra15:47
corvusfungi, mordred, fultonj: i'm sure it won't be a big deal, but we need at least something slightly more official than "it sounded good to the infra folks" :)15:47
toskyclarkb: isn't it possible to think about a test for mirrors? Something that just installs packages?15:47
fungicorvus: what are the potentially controversial bits? do you think it warrants a tc resolution, or what?15:47
*** eharney has joined #openstack-infra15:47
EmilienMrax-ord and rax-dfw cause TripleO timeouts in gate15:47
toskyclarkb: that would not be a test for a special image, just a normal acceptance test for a guest image15:48
corvusfungi: i'm thinking we should have a resolution establishing a process or criteria or something.  because we're using openstack project resources for not-openstack-project things.  the things we want to use them on are good things to use them on because they help advance the project, but it seems that the openstack project should decide to do that, and not just us.15:49
*** Goneri has joined #openstack-infra15:49
*** dsariel has joined #openstack-infra15:50
fungiEmilienM: what are the nature of the timeouts you're seeing? job/playbook timed out results? network timeouts (git fetch, package installs...) logged in the job run? disk access timeouts? some other sort of timeout?15:50
corvus(i think we've informally established that desire enough on all sides for testing on ansible/ansible to proceed, but when we move past that, we should have a plan)15:50
*** slaweq_ has quit IRC15:50
fungicorvus: makes sense. so basically we need to draft a process for determining how/when/whether we support that and seek edits/blessing from the tc15:51
openstackgerritMerged openstack-infra/project-config master: Switch (back) ubuntu DIB to use AFS mirror in rackspace  https://review.openstack.org/53922815:51
EmilienMfungi: performances and network15:51
EmilienMlet me get numbers15:51
fungiEmilienM: so job runs exceeding their configured playbook timeouts?15:51
clarkbtosky: ya we can test specific things but if the acceptance test is our tests run and pass (which is typically what is suggested) we run the risk of requiring our "special" images15:52
fultonjcorvus: perhaps a ptg topic?15:52
EmilienMpackages take 837.54s to be downloaded + installed in RAX while 300s elsewhere15:52
corvusfultonj: i'm hoping it's not controversial enough to warrant that.  maybe we can write something up and get it approved before then.15:52
EmilienMit's 8+ min more15:52
fultonjcorvus: ok15:53
EmilienMkeystone resources (endpoints, services, etc) take 10 min to be created!15:53
EmilienMinstead of 315:53
fungiEmilienM: ooh, so also possible slowness on the package mirrors in those regions15:53
EmilienMyeah probably that15:53
toskyclarkb: right, and it's not what I'd request; even when I spoke about running our tests, the goal was to focus on the part which allows us to call an image "working"15:53
fungiEmilienM: but non-package-related slowness may indicate actual performance issues on the job nodes themselves, i guess15:53
EmilienMfungi: sure, but the mirrors slowness on RAX is killing us15:54
fultonjcorvus: if you draft a spec i'd be happy to review15:54
fungiEmilienM: thanks, i'm taking a look at our resource graphs for those now15:54
corvusfultonj: cool, i'll let you know15:55
clarkbEmilienM: keystone resources if created by osc are directly tied to disk performance due to how osc's plugin mechanism uses entry points15:55
fultonjthanks15:55
fungilooks like we might be capping out our network interface on mirror.dfw.rax.o.o15:55
corvusfungi, EmilienM: we seem to peak at 100Mbit outbound on mirror.dfw.rax http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=3104&rra_id=all15:55
fungiyep, exactly what i was just looking at15:56
corvuswhat's our max there?15:56
fungithe historic graphs appear to indicate that it used to be higher15:56
corvusord goes well past 200 http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=3063&rra_id=all15:56
corvusthey should be the same flavors and have the same cap15:56
fungiyeah, they look like the same flavor15:57
*** pcichy has joined #openstack-infra15:57
*** xarses_ has joined #openstack-infra15:57
*** david-lyle has joined #openstack-infra15:57
fungibut it really does appear that around utc midnight on the morning of january 25 it started getting capped to 100mbps outbound for the dfw mirror15:58
*** andreww has joined #openstack-infra15:58
*** felipemonteiro has joined #openstack-infra15:58
*** slaweq_ has joined #openstack-infra15:58
fungithe server's been up and running since days before that, so doesn't correspond to a reboot/replacement15:59
*** rosmaita has quit IRC15:59
EmilienMI have 11 jobs that ran over the last hours which have the same problem15:59
EmilienMthey all run on dfw rax15:59
*** andreas_s has quit IRC15:59
*** rosmaita has joined #openstack-infra15:59
fungiEmilienM: yeah, that could easily be caused by packet loss we'd experience from the rate limiting on the mirror's network interface15:59
pabelangerokay, reprepro finished and packages all look to be there16:00
*** olaph1 has quit IRC16:00
pabelangergoing to update packages now16:00
fungii'm going spelunking in rackspace tickets for any hints16:00
clarkb100mbps is suspicious, could just be a link negotiating slower unexpectedly16:01
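A rough way to sanity-check the mirror's effective outbound bandwidth from a test node; the file path below is a placeholder, assuming some reasonably large file is published under the mirror docroot:

    # Measure average download speed (bytes/sec) from the mirror; substitute
    # any large file actually served by the mirror for the placeholder path.
    curl -s -o /dev/null -w 'avg speed: %{speed_download} bytes/sec\n' \
        http://mirror.dfw.rax.openstack.org/ubuntu/dists/xenial/Contents-amd64.gz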
*** andreas_s has joined #openstack-infra16:01
fungiwe can scale down our max-servers in dfw in the interim while we get the provider to look into it16:01
*** rosmaita has quit IRC16:01
*** felipemonteiro__ has quit IRC16:01
*** xarses_ has quit IRC16:02
EmilienMfungi: that would be great until they reply to your ongoing ticket16:02
pabelangerfungi: not mirror-update, I am still working on that. but nb* can be16:02
*** olaph has joined #openstack-infra16:02
fungiwell, i don't know whether there is (yet) an ongoing ticket16:02
weshay|ruckEmilienM, I see one on ovh-gra1 too16:02
EmilienMweshay|ruck: link?16:02
weshay|ruckhttp://logs.openstack.org/06/534706/3/check/tripleo-ci-centos-7-scenario001-multinode-oooq-container/4615b2f/job-output.txt.gz#_2018-01-30_09_04_51_56183816:03
*** alexchadin has joined #openstack-infra16:03
EmilienMweshay|ruck: no, it's not the same issue16:03
EmilienMweshay|ruck: see http://logs.openstack.org/06/534706/3/check/tripleo-ci-centos-7-scenario001-multinode-oooq-container/4615b2f/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-01-30_09_43_3416:03
EmilienMweshay|ruck: package download was "fast", 300s16:03
weshay|ruckk16:04
EmilienMversus RAX which takes 800+16:04
honzais there a way to use different templates in project-config/zuul.d/projects.yaml based on the branch?  kind of like the branch: under jobs: etc16:04
trown|roverweshay|ruck: ya that was different16:04
*** slaweq_ has quit IRC16:04
fungi#status log removed nb01, nb02 and nb03 from the emergency maintenance list now that it's safe to start building new ubuntu-xenial images again16:04
openstackstatusfungi: finished logging16:04
trown|roverweshay|ruck: http://logs.openstack.org/06/534706/3/check/tripleo-ci-centos-7-scenario001-multinode-oooq-container/4615b2f/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-01-30_09_43_3416:04
trown|rovertoo slow :(16:04
*** armaan has quit IRC16:04
*** andreas_s has quit IRC16:05
prometheanfirefungi: oh, cool16:05
prometheanfireso, rechecks?16:05
corvushonza: if the template itself is in a repo with branches, it will automatically apply implied branch matchers to all of its jobs.  so a template defined on stable/pike should end up only adding jobs which run on stable/pike.16:05
*** andreas_s has joined #openstack-infra16:06
fungitrown|rover: weshay|ruck: EmilienM: what we're specifically looking into now is (seemingly low) network rate limiting on the mirror server in the rax dfw region causing packet loss (and so slowness/errors) for package retrieval in jobs16:06
corvushonza: if you want to do something more explicit (ie, define a template on master which runs jobs on stable/pike) you can add a 'branches:' attribute to the jobs themselves in the template.16:06
EmilienMfungi: exactly16:06
fungiprometheanfire: sure, though we're getting some poor performance in rax.dfw at the moment16:06
honzacorvus: this template is general (doesn't specify branches), and i need to apply an older version of it to stable/pike16:06
pabelangeris it possible our cache in mirror.ord.rax is getting invalidated too fast and we have to download packages more often from the web?16:07
EmilienMfungi: I said to tripleo folks to not approve patches or recheck for now until we sort things out16:07
prometheanfirefungi: just generally bad perf or something specific (so I can ping people)16:07
pabelangerI know we made it smaller because of tripleo-test-cloud-rh1, maybe we can raise the storage size, because we are now taking that cloud offline16:07
*** rosmaita has joined #openstack-infra16:08
pabelanger/dev/mapper/main-proxycache   99G   48G   46G  52% /var/cache/apache216:08
pabelangercurrently16:08
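A quick way to see how the apache proxy cache space is actually being used, plus an optional dry run of what htcacheclean would prune; the cache path and the 40G limit are assumptions based on the df output above:

    # Show overall usage of the proxy cache filesystem and its biggest subtrees.
    df -h /var/cache/apache2
    du -sh /var/cache/apache2/* 2>/dev/null | sort -h | tail -5

    # Dry-run htcacheclean against an assumed 40G limit to see what it would
    # evict; flag values and path are illustrative, not the managed config.
    htcacheclean -D -v -p/var/cache/apache2/proxy -l40G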
honzacorvus: my project uses nodejs8-x for master, and nodejs6-x for stable/pike16:09
fungiprometheanfire: http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=3104&rra_id=all shows what looks like our rate limit for the mirror in dfw suddenly dropped to 100mbps at midnight utc on the morning of the 25th16:09
honzacorvus: nodejs{x}-jobs is a template which takes a param (nodejs version)16:09
prometheanfireok16:10
fungiprometheanfire: as clarkb said, could indicate an interface negotiation problem on a physical link somewhere, so might be impacting more than just us16:10
*** sree has joined #openstack-infra16:10
honzacorvus: maybe: templates: - name: nodejs6-jobs branch: stable/pike instead of just template: - nodejs6-jobs16:11
* honza shrugs16:11
fungithough if it is, we're monopolizing most of that link ;)16:11
corvushonza: are you asking about openstack's zuul, or something else?16:11
honzacorvus: yes16:11
corvushonza: i don't think our templates take parameters16:12
*** e0ne has quit IRC16:12
efriedAre we cleared to start rechecking at this point?16:12
*** ldnunes has quit IRC16:12
corvushonza: can you point me at your project's configuration?16:12
honzacorvus: https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L16792-L1680116:13
AJaegerwow, what's this? " : ERROR Project openstack/python-ironicclient does not have the default branch master"16:13
*** Guest41574 has quit IRC16:13
AJaegeron https://review.openstack.org/53882616:13
honzacorvus: the question is: can i set a specific template for a branch in that place?16:13
*** ldnunes has joined #openstack-infra16:13
prometheanfirefungi: pinged the team internally16:14
fungiprometheanfire: server show on mirror01.dfw.rax.openstack.org indicates it's built from their performance1-8 flavor, and flavor show on that claims a rxtx_factor of 1600.0 so i don't think we're getting anywhere near that16:14
AJaegerhonza: you cannot. You can take the single jobs from the template and add the branches to those16:14
corvushonza: the best thing to do i think is remove the nodejs templates from that file, then create a .zuul.yaml in openstack/tripleo-ui on each branch, and add the appropriate template to the project in those files16:15
mordredhonza, corvus: i was just about to type the same thing :)16:15
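As a concrete illustration of that suggestion, a hedged sketch of what the in-repo files might look like (the template names come from the discussion above; the exact file contents are an assumption):

    # Sketch of the .zuul.yaml that would live on the master branch of
    # openstack/tripleo-ui (written to a temp file here for illustration).
    cat > /tmp/zuul-yaml-master-sketch.yaml <<'EOF'
    - project:
        templates:
          - nodejs8-jobs
    EOF

    # And the version that would live on stable/pike of the same repo.
    cat > /tmp/zuul-yaml-pike-sketch.yaml <<'EOF'
    - project:
        templates:
          - nodejs6-jobs
    EOF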
prometheanfirefungi: oh, do you not mean the rackspace mirror?16:15
mordredprometheanfire: no, we don't use the rackspace mirror16:15
pabelangerfungi: replacement mirrors are usually easy (what could go wrong?) to bring online. We could consider that if we think something is wrong with our current instance, hoping we get onto better hardware.16:16
fungiprometheanfire: correct, a cloud compute instance we're running named mirror01.dfw.rax.openstack.org... i'm in the process of filling in a trouble ticket describing the issue now16:16
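The lookups fungi describes can be reproduced with the OpenStack CLI, assuming credentials for the rackspace account are loaded in the environment:

    # Find which flavor the mirror instance was built from, then inspect the
    # flavor's rxtx_factor to compare against the observed ~100Mbps ceiling.
    openstack server show mirror01.dfw.rax.openstack.org -c flavor -f value
    openstack flavor show performance1-8 -c rxtx_factor -f value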
prometheanfirepabelanger: that's what I'd suggest16:16
pabelangerprometheanfire: ah, missed that.16:16
pabelangeroh, suggest :)16:17
pabelangerneed more coffee16:17
prometheanfirepabelanger: I didn't get around to it, didn't know that it was an os-infra run mirror :P16:17
*** Goneri has quit IRC16:17
prometheanfire:D16:17
*** Nil_ has joined #openstack-infra16:17
mordredhonza, corvus: that said - if the node6 vs. node8 is global across openstack javascript projects ... like, all openstack projects doing javascript are expected to be doing node8 in master but node6 in stable/pike, then I bet we could make an 'openstack-javascript-jobs' for that - but we'd likely want to get that defined in the PTI first16:17
*** yamamoto has quit IRC16:17
mordredmy understanding is that so far we're not quite that coordinated yet, right?16:18
*** olaph1 has joined #openstack-infra16:19
*** olaph has quit IRC16:19
*** ykarel|away has quit IRC16:20
*** yamamoto has joined #openstack-infra16:20
fungi#status log ticket 180130-ord-0000697 filed to investigate an apparent 100Mbps rate limit on the mirror01.dfw.rax instance16:24
openstackstatusfungi: finished logging16:24
*** yamamoto has quit IRC16:25
AJaegerinfra-root, once again "ERROR Project openstack/python-ironicclient does not have the default branch master" - now on https://review.openstack.org/53884516:26
AJaegerand "ERROR Project openstack/tripleo-heat-templates does not have the default branch master" on https://review.openstack.org/537787 ;(16:26
corvusAJaeger: could be a problem on a merger or executor16:26
honzacorvus: AJaeger: mordred: cool, thanks16:27
corvusAJaeger: i'll see if i can correlate16:27
AJaegerthanks for checking, corvus - hope those three changes (first 12 mins ago) help.16:27
* AJaeger will be back later...16:28
corvusoh that's an executor only error16:29
*** dhajare_ has quit IRC16:29
*** chandankumar is now known as chkumar|off16:30
*** Goneri has joined #openstack-infra16:33
*** inc0 has quit IRC16:33
*** inc0 has joined #openstack-infra16:33
openstackgerritDavid Shrewsbury proposed openstack-infra/project-config master: Use max-ready-age on nodepool nodes  https://review.openstack.org/53924816:33
Shrewsoops. didn't mean to throw that out yet (stupid fast fingers)16:34
*** david-lyle has quit IRC16:34
honzaAJaeger: corvus: would you mind having a quick look?  https://review.openstack.org/#/c/539250/16:35
*** alexchadin has quit IRC16:35
*** jtomasek has quit IRC16:36
*** dhill_ has joined #openstack-infra16:36
pabelangerokay, we have 2 corrupt packages in ubuntu reprepro: http://paste.openstack.org/show/658030/16:38
pabelangerdeleting them now to force download again16:38
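Roughly the kind of cleanup being described, sketched with hypothetical paths; reprepro's checkpool can identify corrupt pool files, after which the approach taken here was to delete them and re-run the normal mirror sync:

    # Identify pool files whose checksums/sizes no longer match (basedir and
    # the example .deb path below are assumptions, not the real file names).
    reprepro -b /afs/.openstack.org/mirror/ubuntu checkpool fast

    # Remove the corrupt files reported above, then re-run the update so the
    # mirror is repopulated (exact re-fetch behaviour may need extra steps).
    rm /afs/.openstack.org/mirror/ubuntu/pool/main/x/example/example_1.0_amd64.deb
    reprepro -b /afs/.openstack.org/mirror/ubuntu update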
*** andreas_s has quit IRC16:40
openstackgerritJeremy Stanley proposed openstack-infra/project-config master: Temporarily reduce max-servers in rax-dfw  https://review.openstack.org/53925516:41
openstackgerritJeremy Stanley proposed openstack-infra/project-config master: Revert "Temporarily reduce max-servers in rax-dfw"  https://review.openstack.org/53925616:41
fungii'll wip 539256 for now16:41
bnemecEmilienM: weshay|ruck trown|rover: https://bugs.launchpad.net/tripleo/+bug/174628716:42
openstackLaunchpad bug 1746287 in tripleo "RDO cloud ci jobs using Rackspace mirror" [High,Triaged]16:42
fungii went through all the other mirrors and didn't see any similar bandwidth capping behavior16:42
bnemec^may be related to the rax-dfw issues.16:42
bnemecI just noticed that last night.16:42
fungibnemec: eep, thanks for finding16:42
pabelangerwait, only the rackspace dfw mirror is having issues?16:43
weshay|ruckbnemec, that's what we were asked to do16:43
weshay|ruckpabelanger, and I discussed this a while back16:43
weshay|ruckcan't prove that16:43
weshay|ruckbut I think that's right16:43
weshay|ruckmaybe david16:43
weshay|ruck:)16:43
bnemecSeriously?  Why would we use another cloud's mirror when the repos are hosted _in_ RDO cloud?16:43
fungii'd be surprised if we asked rdo to use our mirror with their ci system16:44
EmilienMbnemec: someone has to do the work and setup the mirror I guess16:44
pabelangerright, RDO is using mirror.dfw.rax.openstack.org too16:44
bnemecI'm arguing that we don't need a mirror.16:44
bnemecThe repos are local to the cloud.16:44
pabelangerI would bet that is sucking up the bandwidth now16:44
*** andreas_s has joined #openstack-infra16:44
fungiwe've previously held the position that our mirrors are not a stable api, and that we take them offline, rename or reorganize them at our whim without coordinating with outside organizations or broadly announcing16:45
pabelangerYes, I've been pushing for RDO cloud to run an AFS mirror for months, sadly this hasn't happened yet16:45
weshay|ruckpabelanger, what ever you suggest my man16:45
*** ramishra has quit IRC16:46
fungiit would be unfortunate if us taking an extended maintenance in rax-dfw broke rdo's ci system16:46
bnemecI guess I could see an argument for an in-cloud mirror to take load off the repo vms, but it seems wrong to be round-tripping through another cloud.16:46
*** david-lyle has joined #openstack-infra16:47
*** david-lyle has quit IRC16:48
pabelangerfungi: yes, RDO has been made aware of the issue, but they haven't launched their own AFS mirror yet. There are open patches to support centos upstream now, but we also haven't reviewed them16:48
*** david-lyle has joined #openstack-infra16:49
pabelangergiven that OVB jobs are now running live in RDO, and no longer using tripleo-test-cloud-rh1, it makes sense if that mirror is overrun. The amount of jobs it has to support has likely doubled16:49
*** olaph1 has quit IRC16:51
*** slaweq_ has joined #openstack-infra16:52
*** olaph has joined #openstack-infra16:53
corvusthe 3 default branch issues i've checked all localize to ze0416:53
corvusbut i don't see anything wrong with those repos16:53
*** andreas_s has quit IRC16:53
*** andreas_s has joined #openstack-infra16:54
*** tmorin has quit IRC16:55
*** jtomasek has joined #openstack-infra16:56
*** jtomasek has quit IRC16:56
*** slaweq_ has quit IRC16:57
*** jtomasek has joined #openstack-infra16:57
*** jtomasek has quit IRC16:58
*** dprince has quit IRC16:59
*** gcb has quit IRC17:00
*** andreas_s has quit IRC17:00
*** aviau has quit IRC17:01
*** aviau has joined #openstack-infra17:01
*** slaweq has quit IRC17:03
*** erlon_ has quit IRC17:03
*** slaweq has joined #openstack-infra17:03
*** slaweq_ has joined #openstack-infra17:03
clarkbanticw: ^ possible that your issues are related to the rax dfw mirror slowness?17:04
corvusah ok, i think the executor must have run out of space17:04
corvusfatal: unable to write new index file17:04
corvuswarning: Clone succeeded, but checkout failed.17:04
corvusthat ^ is in the logs17:04
corvusso it looks like we're not trapping that error early enough17:05
corvusAJaeger: i think the 'default branch' errors are transient; it looks like we had some significant load spikes earlier, should be better now17:05
corvuswe probably do need to add another executor or two :/17:06
*** tosky has quit IRC17:06
openstackgerritPaul Belanger proposed openstack-infra/system-config master: Revert "Reduce apache reverse proxy cache to 40GB"  https://review.openstack.org/53926317:07
*** slaweq_ has quit IRC17:07
*** Goneri has quit IRC17:07
*** sree has quit IRC17:08
*** slaweq has quit IRC17:08
*** janki has quit IRC17:11
*** sree has joined #openstack-infra17:12
*** kopecmartin has quit IRC17:12
openstackgerritsebastian marcet proposed openstack-infra/openstackid-resources master: Added endpoint add summit speaker assistance  https://review.openstack.org/53926417:13
*** felipemonteiro has quit IRC17:14
*** felipemonteiro__ has joined #openstack-infra17:14
pabelangerclarkb: this also might explain timeouts I've been seeing in the integrated gate this week, if the mirror is oversubscribed for jobs17:16
pabelangerand no bandwidth17:16
pabelangerokay, AFS mirror for ubuntu released17:16
pabelangerreprepro happy again17:16
pabelangerremoving mirror-update.o.o from emergency file17:16
*** sree has quit IRC17:17
clarkbdo we know why it got unhappy?17:17
pabelangerclarkb: yah, we had a networking issue to the AFS volume and then reprepro timed out, was eventually killed, and left a lock file17:18
pabelangerwe had 2 packages corrupt, so had to manually delete and redownload17:18
clarkbso even if the lock file was cleaned it would've needed intervention?17:18
openstackgerritMerged openstack-infra/openstackid-resources master: Added endpoint add summit speaker assistance  https://review.openstack.org/53926417:18
*** janki has joined #openstack-infra17:19
pabelangerclarkb: yah, we wouldn't have released the volume, since CRC checks failed. So we were protected in that case17:19
*** janki has quit IRC17:19
*** Goneri has joined #openstack-infra17:20
*** dprince has joined #openstack-infra17:20
*** yamamoto has joined #openstack-infra17:21
*** yamamoto has quit IRC17:27
pabelangerclarkb: we just had another gate reset for a nova timeout, 536936 is the fix but due to the dib failure this morning it didn't merge. Any objection if we enqueue and promote to land it?17:27
pabelangerwe've been dealing with this issue for a few days now17:27
corvuspabelanger: yeah, we should promote important integrated gate fixes17:28
*** ralonsoh_ has joined #openstack-infra17:28
pabelangercorvus: okay, will do so now17:29
*** dprince has quit IRC17:29
clarkb++17:29
*** dprince has joined #openstack-infra17:29
pabelangersorry, 537933 is what I'll be promoting17:31
*** Pramod has joined #openstack-infra17:31
*** ralonsoh has quit IRC17:31
inc0good morning, is zuul down?17:33
inc0http://zuulv3.openstack.org no worky for me :(17:33
smcginnisinc0: drop the "v3" part of that.17:34
fungiinc0: that was only a temporary (302) redirect from zuul.openstack.org. the official status interface url is and has been http://zuul.openstack.org/17:34
*** panda is now known as panda|bbl17:34
inc0ok, thanks, that works:)17:34
pabelanger#status log 537933 promoted to help address with integrated gate timeout issue with nova17:34
openstackstatuspabelanger: finished logging17:34
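For context, promoting a change to the head of a shared gate queue is done with the zuul RPC client; a hedged sketch follows, where the tenant and pipeline names are the obvious ones and the patchset number is a placeholder:

    # Move change 537933 to the front of the integrated gate queue; <patchset>
    # must be replaced with the actual patchset number being gated.
    zuul promote --tenant openstack --pipeline gate --changes 537933,<patchset>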
fungiinc0: also findable via the status tab on http://status.openstack.org/17:35
fungiwhen in doubt17:36
*** markvoelker has joined #openstack-infra17:36
fungier, via the zuul tab on that page i mean17:36
mriedempabelanger: thanks for promoting https://review.openstack.org/#/c/539266/117:37
mriedemoops17:37
mriedem53793317:37
*** dsariel has quit IRC17:41
openstackgerritDoug Hellmann proposed openstack-infra/storyboard master: do not set OS_TEST_TIMEOUT in tox.ini  https://review.openstack.org/53927317:44
openstackgerritDoug Hellmann proposed openstack-infra/storyboard master: only try to reset the db user password if it needs to be  https://review.openstack.org/53927417:44
openstackgerritDoug Hellmann proposed openstack-infra/storyboard master: add a script to clean up test databases  https://review.openstack.org/53927517:44
openstackgerritDoug Hellmann proposed openstack-infra/storyboard master: log database management actions in tests  https://review.openstack.org/53927617:44
*** myoung is now known as myoung|food17:44
*** tesseract has quit IRC17:46
*** felipemonteiro has joined #openstack-infra17:49
*** electrofelix has quit IRC17:49
*** electrofelix has joined #openstack-infra17:50
*** felipemonteiro__ has quit IRC17:51
*** ralonsoh__ has joined #openstack-infra17:54
*** jamesmcarthur has quit IRC17:56
*** trown|rover is now known as trown|lunch17:57
*** ralonsoh_ has quit IRC17:58
*** alexchadin has joined #openstack-infra17:58
*** iyamahat has joined #openstack-infra17:58
*** derekh has quit IRC17:59
fungiconfig-core: any takers for temporarily reducing our max-servers in rax-dfw to lower the load on the mirror there? https://review.openstack.org/53925518:00
fungii haven't seen any response on our ticket yet, fwiw18:01
*** kjackal has quit IRC18:01
corvusfungi: done18:01
fungii also have a revert for that prepped and uploaded, but set to wip for now18:01
fungithanks corvus!18:01
*** bhavik1 has joined #openstack-infra18:02
*** kjackal has joined #openstack-infra18:03
*** Pramod has quit IRC18:05
*** Pramod has joined #openstack-infra18:06
*** Swami has joined #openstack-infra18:07
*** slaweq has joined #openstack-infra18:08
corvusEmilienM: i believe the change to use override-checkout in job branch matching is in production, so you might try that when you get a sec18:09
*** bhavik1 has quit IRC18:10
EmilienMcorvus: that? https://review.openstack.org/#/c/537612/6/.zuul.yaml18:10
EmilienMI just did recheck to see how that works now18:10
EmilienMcorvus: looks like it works18:10
EmilienMI see the jobs now \o/18:10
EmilienMcorvus: I'll confirm once the jobs have run and looked at logs to confirm the right branch was checked out18:11
EmilienMcorvus: thanks18:11
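A hedged sketch of the override-checkout usage being tested; the job, project and branch names here are illustrative, not the actual tripleo change:

    # Hypothetical job definition: build against a required project but check
    # out a branch other than the one the triggering change targets.
    cat > /tmp/override-checkout-sketch.yaml <<'EOF'
    - job:
        name: example-cross-branch-job
        parent: base
        required-projects:
          - name: openstack/tripleo-quickstart
            override-checkout: stable/pike
    EOF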
*** slaweq has quit IRC18:13
*** david-lyle has quit IRC18:13
openstackgerritMerged openstack-infra/project-config master: Temporarily reduce max-servers in rax-dfw  https://review.openstack.org/53925518:15
*** andreww has quit IRC18:18
*** xarses_ has joined #openstack-infra18:19
*** cshastri has quit IRC18:19
corvusEmilienM: great, thanks!18:20
openstackgerritsebastian marcet proposed openstack-infra/openstackid-resources master: added endpoint update summit speaker assistance  https://review.openstack.org/53928518:23
*** yamamoto has joined #openstack-infra18:23
*** priteau has quit IRC18:24
*** priteau has joined #openstack-infra18:25
*** alexchadin has quit IRC18:27
openstackgerritMatthieu Huin proposed openstack-infra/zuul master: zuul autohold: allow filtering per commit  https://review.openstack.org/53699318:27
pabelangerso, lost internet access at home, down at coffee shop now18:28
pabelangercatching up18:28
*** priteau has quit IRC18:29
*** yamamoto has quit IRC18:29
AJaegerconfig-core, infra-root, I would like some feedback on https://review.openstack.org/#/c/538908 - that one removes integrated-gate from nova and adds the single jobs to it so that the irrelevant files lists are respected. I don't like the removal of the template, we will forget to update it ;( Does anybody have better ideas on how to solve these?18:31
*** ralonsoh__ has quit IRC18:33
*** slaweq has joined #openstack-infra18:33
*** myoung|food is now known as myoung18:34
openstackgerritMonty Taylor proposed openstack-infra/project-config master: Add third-party-check pipeline  https://review.openstack.org/53928618:34
openstackgerritMonty Taylor proposed openstack-infra/project-config master: Revert "Temporarily remove ansible/ansible from pipelines"  https://review.openstack.org/53928718:34
mordredcorvus, pabelanger: ^^ there's a stab at re-enabling ansible/ansible18:35
mordredAJaeger: looking18:35
mordredAJaeger: ugh. no - I don't have a beter idea yet18:36
AJaegerwe could move all the lists to the template so that all projects share it...18:37
*** alexchadin has joined #openstack-infra18:38
*** slaweq has quit IRC18:38
*** salv-orl_ has quit IRC18:39
*** salv-orlando has joined #openstack-infra18:39
*** alexchadin has quit IRC18:41
openstackgerritMerged openstack-infra/openstackid-resources master: added endpoint update summit speaker assistance  https://review.openstack.org/53928518:42
*** armaan has joined #openstack-infra18:43
*** salv-orlando has quit IRC18:44
AJaegerconfig-core, could you review https://review.openstack.org/538632 https://review.openstack.org/538631 and https://review.openstack.org/538306 , please? Those remove some jobs from project-config18:46
mordredShrews: I put an autohold into zuul and it didn't seem to hold the node18:46
mordredShrews: do I need to put in the autohold before the job starts?18:47
*** olaph1 has joined #openstack-infra18:47
*** olaph has quit IRC18:49
mordredclarkb, pabelanger : I'm still seeing devstack jobs using rax dfw for ubuntu-cloud-archive, but we landed the hack to always use rax for UCA https://review.openstack.org/#/c/535522/18:50
mordredexample: http://logs.openstack.org/32/539032/1/check/osc-functional-devstack/c1bde97/job-output.txt.gz#_2018-01-30_17_53_10_07148318:50
mordredI don't believe that's why that job failed - but I just noticed it and was confused18:51
clarkbmordred: the jobs running on rax dfw should use rax dfw but the jobs running in other clouds should use the other clouds right?18:52
pabelangeryah, thought we did revert on that18:55
clarkboh did we revert the fix?18:55
clarkbinfra meeting in ~5 minutes. I am currently working to sort out the agenda much later than anticipated as babysitter didn't arrive until a few minutes ago18:56
mordredpabelanger: oh - well, that would explain it at leat18:56
fungiso far, reducing rax-dfw max-servers doesn't seem to have done much to lower the mirror network utilization18:56
fungipabelanger: do you have any rough idea what job volume rdo has? is it possible _most_ of the utilization on that mirror is rdo and not us?18:57
mordredpabelanger: why did we revert it? and I don't see a revert for that in the tree18:57
pabelangermordred: no, I thought we did. But I might be confused18:57
pabelangermordred: Oh, 535522 is the revert. So yes, ignore me18:58
*** armaan has quit IRC18:58
pabelangerfungi: yah, I think the amount of jobs has grown now that they moved from tripleo-test-cloud-rh1 into RDO cloud. It looks like mirror.regionone.rdo-cloud.rdoproject.org is online now, working to switch RDO jobs to use that now18:59
Shrewsmordred: just before it finishes, iirc18:59
openstackgerritMatthieu Huin proposed openstack-infra/zuul master: zuul autohold: allow filtering per commit  https://review.openstack.org/53699318:59
fungipabelanger: should we stop using nodes in dfw entirely until we can get rdo admins to stop using our mirror in their ci system?18:59
*** armaan has joined #openstack-infra19:00
clarkbmeeting time19:00
mordredShrews: ok. well, the node doesn't seem to be held :(19:00
*** jpena is now known as jpena|off19:00
AJaegerfungi, pabelanger, we have such a high load here that we need every node.19:00
pabelangerfungi: i think we are still a few hours out from stopping RDO, so we can either firewall their IP range (breaks them) or turn off the cloud?19:01
fungii guess we can discuss during open discussion in the meeting, or after19:01
Shrewsmordred: try autohold-list19:01
AJaegerI'm happy to give them a chance to update it - for a few hours19:01
mordredShrews: yah- I did that: | openstack | git.openstack.org/openstack/python-openstackclient | osc-functional-devstack |   1   | mordred OSC release bug19:02
Shrewsmordred: then it didn't match b/c it would have been removed from the list19:03
anticwdoes setting WF -1 kill any builds in progress?19:03
Shrewsmordred: i can help more after the meeting19:03
Shrewsmordred: possible you might not need the git.o.o part? i forget19:04
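For reference, a hedged sketch of the autohold workflow being debugged here, with the project, job and reason taken from the conversation above; note that a hold only fires on a normal FAILURE result, not RETRY_LIMIT:

    # Ask Zuul to hold the node(s) from the next failing run of this job.
    zuul autohold --tenant openstack \
        --project openstack/python-openstackclient \
        --job osc-functional-devstack \
        --reason "mordred OSC release bug" \
        --count 1

    # Confirm the request is registered (it disappears once consumed).
    zuul autohold-list --tenant openstack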
corvusanticw: no (on purpose)19:04
*** trown|lunch is now known as trown19:05
*** trown is now known as trown|rover19:05
anticwcorvus: is there a mechanism to kill jobs in progress?19:06
*** amoralej is now known as amoralej|off19:06
*** SumitNaiksatam has joined #openstack-infra19:07
AJaegeranticw: abandon19:07
anticwthanks19:08
*** alexchadin has joined #openstack-infra19:09
AJaegeranticw: and when you restore, it goes to check again and needs a +A again19:09
mordredAJaeger: abandon doesn't kill the in-progress jobs, does it?19:10
AJaegermordred: abandon kills running jobs AFAIK19:11
*** alexchadin has quit IRC19:11
AJaegeranticw: rebase also dequeues19:11
*** Pramod has quit IRC19:12
*** slaweq has joined #openstack-infra19:12
EmilienMcorvus: I confirm, I checked logs and everything is fine, nice work!19:12
openstackgerritsebastian marcet proposed openstack-infra/openstackid-resources master: added endpoint get summit attendance by id  https://review.openstack.org/53930119:13
*** camunoz has quit IRC19:13
corvusEmilienM: yay!  mordred ^ EmilienM tested the feature we plan to use to resolve ansible/devel19:14
mordredEmilienM: woot!19:14
funginice!19:15
*** slaweq has quit IRC19:16
*** harlowja has joined #openstack-infra19:17
openstackgerritMerged openstack-infra/openstackid-resources master: added endpoint get summit attendance by id  https://review.openstack.org/53930119:18
*** jamesmcarthur has joined #openstack-infra19:21
*** armaan has quit IRC19:21
mordredShrews: fwiw, I didn't create the autohold with the git.o.o prefix19:22
*** armaan has joined #openstack-infra19:23
Shrewsmordred: that job did fail, yes?19:24
*** yamamoto has joined #openstack-infra19:25
mordredShrews: oh yah19:25
mordredShrews: I hit a recheck on it19:25
mordredso we can see if it catches this time19:26
mordredShrews: oh - it did fail with RETRY_LIMIT rather than with a normal FAILURE19:26
Shrewsmordred: ah, has to be FAILURE iirc19:26
mordredGOTCHA19:26
mordredI'll resubmit and change the job so that it'll report that as a failure so we can see what's broken19:26
*** lucasagomes is now known as lucas-afk19:26
EmilienMcorvus: btw I need this feature too, for testing github/ceph-ansible with tripleo jobs running in openstack infra19:27
mordredShrews: thanks19:28
corvusEmilienM: yes -- we're not quite ready for that yet (both technically and policy-wise) but we're getting close and eager to do it19:28
*** Pramod has joined #openstack-infra19:28
EmilienMawesome19:29
*** yamamoto has quit IRC19:31
*** camunoz has joined #openstack-infra19:34
*** shardy has quit IRC19:42
*** alexchadin has joined #openstack-infra19:43
openstackgerritKendall Nelson proposed openstack-infra/storyboard master: Create a StoryBoard gui manual  https://review.openstack.org/32547419:47
openstackgerritKendall Nelson proposed openstack-infra/storyboard master: Create a StoryBoard gui manual  https://review.openstack.org/32547419:48
*** david-lyle has joined #openstack-infra19:51
*** pcaruana has quit IRC19:52
clarkbShrews: pabelanger AJaeger I think Shrews intends on fixing the nodepool pool deadlock issue properly so this would be just a temporary workaround20:01
clarkbso I'm fine with it20:01
AJaegerclarkb: as mentioned, let's try - and monitor ;)20:01
*** sambetts is now known as sambetts|afk20:01
clarkbI've +2'd it but not approved as I am not in a great place to do the monitoring20:01
AJaegerclarkb: same with me - I let others approve20:02
tobiashups, missed the meeting, looks like I'm already mentally in summer time mode20:02
AJaegertobiash: it's already dark outside ;)20:02
pabelangerclarkb: Shrews: AJaeger: yah, anything to help keep nodes in-use helps, I've +2'd20:03
*** alexchadin has quit IRC20:03
AJaegerconfig-core, could you review https://review.openstack.org/538632 https://review.openstack.org/538631 and https://review.openstack.org/538306 , please? Those remove some jobs from project-config20:03
AJaegernothing urgent - but if you review, consider mine as well, please ;)20:04
Shrewsclarkb: AJaeger: pabelanger: how about I WIP it until tomorrow morning my time. We can approve it then and I'll have all day to watch it?20:04
AJaegerShrews: wfm20:04
Shrewsonly a couple of hours more until I EOD20:04
pabelanger++20:04
Shrewsdone20:05
mordredclarkb: got a sec for a devstack change? https://review.openstack.org/#/c/510939/1 ?20:06
tobiashAJaeger: almost forgot about that while doing reviews ;)20:07
AJaeger;)20:08
*** jamesmcarthur has quit IRC20:11
*** dsariel has joined #openstack-infra20:11
*** rfolco is now known as rfolco|off20:14
*** iyamahat has quit IRC20:14
*** felipemonteiro has quit IRC20:15
*** felipemonteiro__ has joined #openstack-infra20:15
*** ldnunes has quit IRC20:15
*** slaweq has joined #openstack-infra20:17
openstackgerritDoug Hellmann proposed openstack-infra/storyboard master: set up tests to run with sqlite  https://review.openstack.org/53930820:17
fungi#status log reenqueued release pipeline jobs for openstack/tripleo-ipsec 8.0.020:18
openstackstatusfungi: finished logging20:18
openstackgerritDoug Hellmann proposed openstack-infra/storyboard master: set up tests to run with sqlite  https://review.openstack.org/53930820:18
clarkbmordred: done20:24
mriedemcan we be rechecking things now?20:26
mriedemhttps://review.openstack.org/#/c/537933/ is merged20:26
*** d0ugal has quit IRC20:26
*** yamamoto has joined #openstack-infra20:27
pabelangermriedem: yah, believe so. We're still having an issue with the rax-dfw mirror, but working to address that20:29
*** dprince has quit IRC20:29
mriedemack20:29
*** sai is now known as sai-pto20:29
*** pcichy has quit IRC20:30
openstackgerritsebastian marcet proposed openstack-infra/openstackid-resources master: Added endpoint get promo code csv format  https://review.openstack.org/53931220:32
openstackgerritMerged openstack-infra/openstackid-resources master: Added endpoint get promo code csv format  https://review.openstack.org/53931220:33
*** yamamoto has quit IRC20:33
*** sshnaidm is now known as sshnaidm|afk20:38
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool master: WIP: Provider wedge test  https://review.openstack.org/53931620:40
*** d0ugal has joined #openstack-infra20:41
*** e0ne has joined #openstack-infra20:42
*** jamesmcarthur has joined #openstack-infra20:46
mordredShrews: poo. the autohold seems like it maybe worked this time (no autohold in the list) but the node is not actually held and is something else now20:47
Shrewsmordred: gremlins?20:47
mordredShrews: I suppose so20:48
Shrewsmordred: 0002276213 node is held for you20:49
* Shrews hands mordred more coffee20:49
Shrewsdmsimard|afk: you have 2 held nodes that are several days old20:50
dmsimard|afkShrews: yes, it's for a failing fedora job that I haven't figured out why it's failing; iirc the review number it's held for is noted20:51
dmsimard|afkShrews: between logs.o.o exploding and other things this week I won't have time to look20:51
dmsimard|afkbut it's preventing a stack of patches from landing that we've been waiting on for a while20:51
dmsimard|afkfeel free to release them and I can re-hold them once I have time to spend on it20:52
Shrewsmordred: are you saying that held node is not what you were expecting?20:52
Shrewsdmsimard|afk: i'm just poking you about them b/c we usually forget about our held nodes. i'll let you delete them when you're ready20:52
dmsimard|afkShrews: I won't have time to look at it this week so they're probably wasted unless someone else wants to check :(20:53
*** wolverineav has quit IRC20:53
dmsimard|afkit's this patch: https://review.openstack.org/#/c/514489/20:53
pabelangergoing to start work on mirror0220:54
*** e0ne has quit IRC20:55
mordredShrews: oh - well - it's possible I looked for the node info wrongly21:00
*** panda|bbl is now known as panda21:02
mordredShrews: yup. pebkac. thank you21:03
pabelangerclarkb: looks like we have 2 volumes for afscache / proxycache in rax. Any objections to see about just using 1 now?21:03
pabelangerwonder if that was because we added proxycache after afscache volume was created21:04
fungipabelanger: yeah, we could i suppose switch to different 100gb logical volumes on a single 200gb cinder volume21:04
fungiand you are correct about the timeline there21:04
pabelangerkk21:05
fungiwe already had a 100gb cinder volume for afs cache, and so i added another for th apache proxy cache rather than take the outage necessary to migrate between filesystems21:05
pabelangerI might need some help with logical volumes, I usually just use mount_volume.sh script21:07
*** iyamahat has joined #openstack-infra21:08
*** slaweq_ has joined #openstack-infra21:08
*** iyamahat has quit IRC21:09
*** iyamahat has joined #openstack-infra21:09
*** felipemonteiro has joined #openstack-infra21:10
*** slaweq_ has quit IRC21:13
*** felipemonteiro__ has quit IRC21:13
pabelangerokay, I believe I've done it21:15
pabelangerrebooting to confirm21:16
*** olaph has joined #openstack-infra21:18
pabelangerhttp://mirror02.dfw.rax.openstack.org/21:19
pabelangerstill waiting for DNS21:19
pabelangeroh, ha. I've spotted an issue21:19
*** olaph1 has quit IRC21:19
pabelangerI might need to expand LV for proxycache, i should have used 100% for lvcreate, but did 50%, so it is only using 1/4 of volume21:20
pabelangerfungi: any suggestions to do that?^21:21
*** eharney_ has joined #openstack-infra21:21
pabelangerI think lvextend21:21
pabelangeryah21:22
pabelangerhttps://docs.openstack.org/infra/system-config/sysadmin.html#expanding-an-existing-logical-volume21:22
fungiyep21:22
fungiyou got it21:22
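The documented procedure boils down to something like the following, using the volume group / LV names visible in the earlier df output; the ext4 assumption is mine:

    # Grow the proxycache LV into the remaining free space in the 'main' VG,
    # then resize the filesystem (resize2fs assumes ext4) to match.
    lvextend -l +100%FREE /dev/main/proxycache
    resize2fs /dev/mapper/main-proxycache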
mordredShrews: ok. SO - that autohold held a change that is not one of the failing changes21:23
*** eharney has quit IRC21:23
*** eharney_ is now known as eharney21:23
pabelangerthere we go21:24
pabelangeryah21:24
openstackgerritPaul Belanger proposed openstack-infra/system-config master: Add mirror02.dfw.rax.o.o to cacti  https://review.openstack.org/53932821:26
mordredShrews: but I'm juggling too many brain balls, so it's still probably my fault - I'm taking a stab at debugging via adding lots of prints21:26
pabelangerinfra-root: adds mirror02.dfw.rax.o.o to cacti^ I haven't changed the DNS yet on mirror.dfw.rax.o.o.21:27
smcginnisWe have a project-config change that needs to merge before we can do a release.21:28
smcginnisCan someone other than AJaeger take a look at https://review.openstack.org/#/c/539231/21:28
* mnaser looks21:28
mnasersmcginnis: done21:29
smcginnismnaser: Thanks!21:29
*** kgiusti has left #openstack-infra21:29
mnasernp21:29
*** yamamoto has joined #openstack-infra21:29
*** eharney has quit IRC21:30
pabelangerfungi: clarkb: okay, RDO is building replacement DIBs, but it doesn't look to be a fast filesystem.  Something going on with hardware on their side, but rough math is another 8 hours for image to finish building (unless something changes behind the scenes).21:32
*** ianychoi has quit IRC21:34
*** ianychoi has joined #openstack-infra21:35
*** yamamoto has quit IRC21:35
openstackgerritDoug Hellmann proposed openstack-infra/storyboard master: set up tests to run with sqlite  https://review.openstack.org/53930821:36
openstackgerritDoug Hellmann proposed openstack-infra/storyboard master: convert list of auth_codes to single value  https://review.openstack.org/53933121:36
openstackgerritDoug Hellmann proposed openstack-infra/storyboard master: do not allow story id to change via payload  https://review.openstack.org/53933221:36
fungipabelanger: thanks, i've also sort of been following along in #rdo21:36
openstackgerritMerged openstack-infra/zuul master: Remove zuul._projects  https://review.openstack.org/53551821:37
Shrewsmordred: that seems unpossible that it's not a failing change21:38
openstackgerritMerged openstack-infra/project-config master: Change searchlight-ui to use publish-to-pypi-horizon  https://review.openstack.org/53923121:38
*** eharney has joined #openstack-infra21:39
pabelangerfungi: kk21:39
*** threestrands has joined #openstack-infra21:40
*** trown|rover is now known as trown|outtypewww21:42
*** armaan has quit IRC21:46
pabelangerfungi: clarkb: what time do we want to consider DNS update for mirror02?21:46
*** armaan has joined #openstack-infra21:47
*** andreww has joined #openstack-infra21:47
clarkbcorvus mentioned a few hours from meeting so probably anytime after now if still no update on the ticket21:48
clarkbpabelanger: ^21:48
pabelangerclarkb: okay, lets first land https://review.openstack.org/539328/ and make sure cacti is happy, then we can update DNS21:49
*** xarses_ has quit IRC21:50
clarkbstill no update on the ticket? (I'm not in a good spot to check)21:51
clarkbI +2'd that change too21:51
*** xarses_ has joined #openstack-infra21:51
pabelangertrying to pull that up myself21:51
*** andreww has quit IRC21:51
pabelangerclarkb: yah, nothing back yet21:52
mordredShrews: yah - I'm pretty sure I'm just looking at things wrong21:53
*** Goneri has quit IRC21:54
*** slaweq has quit IRC21:55
openstackgerritDoug Hellmann proposed openstack-infra/storyboard master: set up tests to run with sqlite  https://review.openstack.org/53930821:55
*** panda is now known as panda|off21:57
*** felipemonteiro has quit IRC21:57
*** eharney has quit IRC21:57
smcginnisI see high-ish queue lengths in zuul.o.o. Something to worry about?21:57
*** felipemonteiro has joined #openstack-infra21:57
openstackgerritMerged openstack-infra/nodepool master: zk: check for client in properties  https://review.openstack.org/53555921:58
clarkbsmcginnis: if they dont fall in 5-10 minutes then maybe21:58
openstackgerritMerged openstack-infra/project-config master: Add third-party-check pipeline  https://review.openstack.org/53928621:58
openstackgerritMerged openstack-infra/project-config master: Revert "Temporarily remove ansible/ansible from pipelines"  https://review.openstack.org/53928721:58
smcginnisclarkb: Oh, already have dropped.21:58
smcginnisHmm, actually, fluctuating all over. Guess that means things are moving.21:59
*** mylu has joined #openstack-infra21:59
clarkbya moving is good as long as it goes both directions21:59
pabelangertopic:zuulv3-projects has started again, I think we've entered a period of increased dynamic reloads on zuul, but memory looks flattish22:00
*** olaph1 has joined #openstack-infra22:00
*** olaph has quit IRC22:03
corvuspabelanger: it has?22:08
corvus(i haven't done anything)22:10
pabelangercorvus: I'm not sure how to confirm, aside from looking at the topic and +W, but it does seem zuul has increased dynamic reloads. Again, I am not sure how best to track that22:11
*** edmondsw has quit IRC22:11
corvuspabelanger: is it something we need to track?22:12
*** edmondsw has joined #openstack-infra22:12
*** mihalis68 has quit IRC22:13
*** priteau has joined #openstack-infra22:13
pabelangercorvus: I think so, we spent a lot of time yesterday recovering from outage, but results queue grew, then emptied, and repeated for a few hours22:14
*** salv-orlando has joined #openstack-infra22:14
corvuspabelanger: that's pretty normal.22:14
pabelangerokay, maybe I'm over thinking it22:14
pabelangerbut we seem to have been growing ready nodes (upwards of 300 ready); then once the results queue empties they get allocated to zuul.22:15
*** xarses_ has quit IRC22:15
*** edmondsw has quit IRC22:17
*** rcernin has quit IRC22:18
*** xarses_ has joined #openstack-infra22:19
*** andreww has joined #openstack-infra22:20
*** andreww has quit IRC22:20
*** andreww has joined #openstack-infra22:21
*** tpsilva has quit IRC22:22
*** xarses_ has quit IRC22:23
pabelangerianw: I'll be EOD shortly, do you mind taking over on https://review.openstack.org/539328/ ? I think we want to confirm cacti.o.o is good for mirror02.dfw.rax.o.o, then, if no reply to the open ticket in rax, update DNS for mirror.dfw.rax.o.o to point to mirror0222:24
pabelangerclarkb: ^22:24
ianwpabelanger: ok, so we're replacing or round-robining?22:25
pabelangerianw: I think replace to start, I'm not sure what is involved with round-robining right now.22:26
corvusyou can't rr a cname, you'd have to replace it with a records anyway.22:26
corvus(which is fine; just pointing out if we wanted to rr, we'd need to drop the cname and add 2 A records and 2 AAAA records)22:27
*** yamamoto has joined #openstack-infra22:27
*** priteau has quit IRC22:27
*** felipemonteiro__ has joined #openstack-infra22:27
corvusi don't see a need to rr though -- one server should supply plenty of bandwidth.22:28
ianwyep, anyway let's KISS with a replacement.  so i'll wait until cacti is seeing it, so we've actually got stats on whether it's any better, then update22:28
corvus++22:28
pabelangerwfm22:28
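A quick way to see the difference corvus describes, checking what the name resolves to before and after the swap; the record values shown by these lookups are illustrative:

    # Before the swap, mirror.dfw.rax.openstack.org is expected to be a CNAME
    # pointing at mirror01; after the replacement it should point at mirror02.
    host -t CNAME mirror.dfw.rax.openstack.org
    host mirror02.dfw.rax.openstack.org
    # Round-robin would instead require dropping the CNAME and publishing two
    # A records plus two AAAA records, one pair per mirror host.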
*** felipemonteiro has quit IRC22:31
ianwshould we run an md5sum or something on the afs to prime it?22:31
corvuswe use a really small subset, that would probably take unreasonably long22:32
*** esberglu has quit IRC22:32
corvushowever, a pip install of openstack/requirements might be worthwhile22:32
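One hedged way to do that priming from a test node pointed at the new mirror; the /pypi/simple path follows the usual infra mirror layout and should be treated as an assumption:

    # Pull the openstack/requirements constraints through the new mirror so
    # its pypi proxy cache and AFS cache get warmed before DNS is switched.
    git clone https://git.openstack.org/openstack/requirements /tmp/reqs
    pip download -d /tmp/prime \
        -i http://mirror02.dfw.rax.openstack.org/pypi/simple \
        -r /tmp/reqs/upper-constraints.txt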
*** jcoufal has quit IRC22:35
*** bfournie has quit IRC22:44
*** hemna_ has quit IRC22:45
*** andreww has quit IRC22:51
*** rcernin has joined #openstack-infra22:52
*** xarses_ has joined #openstack-infra22:54
*** xarses_ has quit IRC22:54
*** xarses_ has joined #openstack-infra22:55
*** armax has quit IRC22:56
*** andreas_s has joined #openstack-infra22:58
*** xarses_ has quit IRC22:59
*** mriedem is now known as mriedem_afk23:02
*** andreas_s has quit IRC23:02
*** dave-mccowan has quit IRC23:03
*** ianychoi has quit IRC23:05
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Merger: retry network operations  https://review.openstack.org/53935623:11
*** yamamoto_ has joined #openstack-infra23:12
ianwinteresting that 8.8.8.8 still seems to work in cn123:12
ianwbut whatever resolver is given by default has omissions23:14
ianw$ host twitter.com23:14
ianwHost twitter.com not found: 2(SERVFAIL)23:14
*** yamamoto has quit IRC23:15
clarkbpuppet should set up unbound then use opendns and google dns23:16
clarkbso that may not be a major issue for us23:16
ianwyep, it seems resolving via them is fine ... just as long as people aren't doing "ping -c1 facebook.com # is the network up?" :)23:19
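What clarkb describes would look roughly like this in unbound's config; a minimal sketch, assuming the puppet-managed file simply forwards everything to the Google and OpenDNS resolvers (path and exact contents are assumptions):

    # Sketch of an unbound forwarding stanza, written to a temp file here.
    cat > /tmp/unbound-forwarding-sketch.conf <<'EOF'
    forward-zone:
      name: "."
      forward-addr: 8.8.8.8
      forward-addr: 8.8.4.4
      forward-addr: 208.67.222.222
      forward-addr: 208.67.220.220
    EOF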
fungisome firewalls are just greater than others, that's all23:20
*** salv-orlando has quit IRC23:21
*** salv-orlando has joined #openstack-infra23:22
mordredcorvus (and other infra-root) ... we are running an integration test on an ansible PR!23:22
* fungi cheers from the sidelines23:24
*** jamesmcarthur has quit IRC23:24
mordredcorvus: fwiw, the status link we provided on the GH PR is "https://zuul01.openstack.org/"23:24
mordredhttps://github.com/ansible/ansible/pull/20974 is the PR I just triggered, in case anyone wants to see23:25
*** xarses_ has joined #openstack-infra23:25
*** salv-orlando has quit IRC23:26
*** andreww has joined #openstack-infra23:27
*** andreww has quit IRC23:27
*** andreww has joined #openstack-infra23:28
*** jamesmcarthur has joined #openstack-infra23:28
*** andreww has quit IRC23:28
*** andreww has joined #openstack-infra23:28
corvusmordred: i don't see any zuul-related links on that; maybe 'contributors' only?23:29
*** jamesmcarthur has quit IRC23:29
corvusmordred: oh, the little yellow dot opens up a popup with zuul in it23:29
*** xarses_ has quit IRC23:30
dmsimard|afkHuh, Red Hat acquires CoreOS.23:30
corvusmordred: looks like we need to set webapp.status_url in zuul.conf?23:31
corvusmordred: (which probably needs to change to web.status_url ?)23:32
openstackgerritMonty Taylor proposed openstack-infra/system-config master: Set status_url for zuul  https://review.openstack.org/53935723:33
mordredcorvus: yah ^^23:33
mordredcorvus: but also, yeah - we can hold off on that until we update that to web.status_url23:34
corvusmordred: let's move in parallel23:35
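The setting being discussed is a one-liner in zuul.conf; a minimal sketch, with the section and key names taken from the conversation above and the value assumed to be the canonical status page:

    # Sketch of the zuul.conf stanza 539357 adds, so GitHub status links point
    # at the public status page rather than the scheduler host.
    cat > /tmp/zuul-status-sketch.conf <<'EOF'
    [webapp]
    status_url = https://zuul.openstack.org
    EOF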
*** olaph has joined #openstack-infra23:36
*** olaph1 has quit IRC23:37
*** jamesmcarthur has joined #openstack-infra23:38
*** jamesmcarthur has quit IRC23:38
*** slaweq has joined #openstack-infra23:40
fungi#status log scheduled maintenance will make the citycloud-sto2 api endpoint unavailable intermittently between 2018-01-31 07:00 and 2018-02-01 07:00 utc23:42
openstackstatusfungi: finished logging23:42
mordredfungi: have a sec for an easy +A? https://review.openstack.org/#/c/539357/23:43
fungi#status log scheduled maintenance will make the citycloud-kna1 api endpoint unavailable intermittently between 2018-02-07 07:00 and 2018-02-08 07:00 utc23:43
openstackstatusfungi: finished logging23:43
*** yamahata has quit IRC23:44
*** slaweq has quit IRC23:45
*** armaan has quit IRC23:45
*** iyamahat has quit IRC23:47
*** dhill_ has quit IRC23:47
*** dsariel has quit IRC23:48
*** stakeda has joined #openstack-infra23:49
*** abelur_ has joined #openstack-infra23:50
*** ganso has quit IRC23:53
*** dingyichen has joined #openstack-infra23:54
*** rcernin has quit IRC23:57
tonybI saw something saying that installing packages on ubuntu nodes is failing and we should avoid rechecks .... is that still the case?23:58
ianwtonyb: pabelanger fixed the mirror, so should be ok now23:59
tonybianw: Thanks.23:59
