Friday, 2018-10-05

*** jamesmcarthur has quit IRC00:00
*** hamzy_ has joined #openstack-infra00:01
*** hamzy_ is now known as hamzy00:02
*** Swami has quit IRC00:04
*** jamesmcarthur has joined #openstack-infra00:07
*** jamesmcarthur has quit IRC00:08
*** jamesmcarthur has joined #openstack-infra00:11
*** jamesmcarthur has quit IRC00:11
*** lathiat has quit IRC00:14
*** slagle has quit IRC00:15
*** slagle has joined #openstack-infra00:15
*** rcernin has quit IRC00:27
*** rcernin has joined #openstack-infra00:29
*** longkb has joined #openstack-infra00:36
*** ijw has joined #openstack-infra00:38
*** dave-mccowan has quit IRC01:16
*** mrsoul has joined #openstack-infra01:17
*** jamesmcarthur has joined #openstack-infra01:17
*** lathiat has joined #openstack-infra01:17
*** felipemonteiro has joined #openstack-infra01:20
*** jamesmcarthur has quit IRC01:22
*** diablo_rojo has quit IRC01:24
*** aluria has quit IRC01:29
*** felipemonteiro has quit IRC01:36
*** felipemonteiro has joined #openstack-infra01:36
*** edmondsw has joined #openstack-infra01:45
*** rickhlwong has joined #openstack-infra01:53
*** jamesmcarthur has joined #openstack-infra01:54
*** hongbin has joined #openstack-infra01:57
*** agopi has joined #openstack-infra01:57
rickhlwongexit01:58
rickhlwongquit01:58
*** rickhlwong has quit IRC01:58
*** gyee has quit IRC02:01
*** ramishra has joined #openstack-infra02:09
*** ijw has quit IRC02:14
*** felipemonteiro has quit IRC02:17
*** jamesmcarthur has quit IRC02:20
*** yamamoto has quit IRC02:23
*** yamamoto has joined #openstack-infra02:23
*** jamesmcarthur has joined #openstack-infra02:24
*** jamesmcarthur has quit IRC02:35
*** dosaboy has quit IRC02:35
*** jamesmcarthur has joined #openstack-infra02:36
*** vivsoni has quit IRC02:56
*** eernst has joined #openstack-infra02:56
*** rh-jelabarre has quit IRC02:57
*** annp has joined #openstack-infra03:01
*** lathiat has quit IRC03:08
*** felipemonteiro has joined #openstack-infra03:12
*** agopi has quit IRC03:21
*** agopi has joined #openstack-infra03:21
*** rlandy has quit IRC03:22
*** sung-il9 has joined #openstack-infra03:23
*** lathiat has joined #openstack-infra03:34
openstackgerritDominique Martinet proposed openstack/gertty master: gitrepo DiffFile: convert tab to » + spaces  https://review.openstack.org/60815403:35
openstackgerritDominique Martinet proposed openstack/gertty master: typo - s/fileojb/fileobj/  https://review.openstack.org/60815503:35
*** kaiokmo has quit IRC03:41
*** kaiokmo has joined #openstack-infra03:41
*** hongbin has quit IRC03:42
*** udesale has joined #openstack-infra03:49
*** jamesmcarthur has quit IRC03:57
*** witek has quit IRC04:06
*** ykarel__ has joined #openstack-infra04:15
*** eernst has quit IRC04:15
*** eernst has joined #openstack-infra04:16
*** ykarel__ is now known as ykarel04:34
*** roman_g has quit IRC04:41
*** rcernin has quit IRC04:42
*** eernst has quit IRC04:45
*** rcernin has joined #openstack-infra04:46
*** kukacz has quit IRC04:52
*** flaper87 has quit IRC04:52
*** niedbalski has quit IRC04:52
*** jbadiapa has quit IRC04:52
*** openstackgerrit has quit IRC04:52
*** d0ugal has quit IRC04:52
*** dhill_ has quit IRC04:52
*** kukacz has joined #openstack-infra04:57
*** niedbalski has joined #openstack-infra04:57
*** flaper87 has joined #openstack-infra04:57
*** jbadiapa has joined #openstack-infra04:57
*** openstackgerrit has joined #openstack-infra04:57
*** d0ugal has joined #openstack-infra04:57
*** dhill_ has joined #openstack-infra04:57
*** Bhujay has joined #openstack-infra04:59
*** Bhujay has quit IRC05:00
*** Bhujay has joined #openstack-infra05:00
*** agopi has quit IRC05:03
*** felipemonteiro has quit IRC05:03
*** jamesmcarthur has joined #openstack-infra05:04
AJaegerfrickler, could you review https://review.openstack.org/601583 and https://review.openstack.org/608069 , please?05:05
*** jamesmcarthur has quit IRC05:09
*** kjackal has joined #openstack-infra05:15
openstackgerritMerged openstack-infra/zuul-jobs master: support passing extra arguments to bdist_wheel in build-python-release  https://review.openstack.org/60790005:22
*** agopi has joined #openstack-infra05:24
*** kjackal has quit IRC05:33
*** e0ne has joined #openstack-infra05:34
*** jtomasek has joined #openstack-infra05:45
*** e0ne has quit IRC05:50
openstackgerritMatthew Thode proposed openstack-infra/project-config master: enable sqlite in python  https://review.openstack.org/60816105:51
prometheanfire:(05:51
*** e0ne has joined #openstack-infra05:53
*** Emine has quit IRC05:56
*** kjackal has joined #openstack-infra06:01
*** e0ne has quit IRC06:12
AJaegerfrickler: could you review prometheanfire's change above as well, please? ^06:27
AJaegerthanks!06:27
openstackgerritMerged openstack-infra/project-config master: remove job settings for trove repositories  https://review.openstack.org/60158306:34
*** udesale has quit IRC06:39
*** markvoelker has joined #openstack-infra06:40
*** kopecmartin|scho is now known as kopecmartin|ruck06:41
openstackgerritMerged openstack-infra/project-config master: always build universal wheels  https://review.openstack.org/60790206:42
*** markvoelker has quit IRC06:45
*** Bhujay has quit IRC06:46
*** quiquell|off is now known as quiquell|brb06:50
*** udesale has joined #openstack-infra06:53
*** pcaruana has joined #openstack-infra06:57
*** kjackal has quit IRC06:58
openstackgerritMerged openstack-infra/project-config master: enable sqlite in python  https://review.openstack.org/60816107:00
*** ijw has joined #openstack-infra07:03
*** rcernin has quit IRC07:04
*** roman_g has joined #openstack-infra07:07
*** ijw has quit IRC07:07
*** witek has joined #openstack-infra07:08
prometheanfireAJaeger: mind kicking off another gentoo image?07:08
*** quiquell|brb is now known as quiquell07:08
*** rtjure has quit IRC07:09
AJaegerprometheanfire: I don't have those permissions, only infra-root can help ^07:10
*** jpena|off is now known as jpena07:10
*** kjackal has joined #openstack-infra07:10
prometheanfireah, k07:11
*** rtjure has joined #openstack-infra07:12
*** njohnston has quit IRC07:17
*** amoralej|off is now known as amoralej07:17
*** shardy has joined #openstack-infra07:28
*** olivierb_ has joined #openstack-infra07:36
*** dosaboy has joined #openstack-infra07:38
*** bauzas is now known as PapaOurs07:39
*** tosky has joined #openstack-infra07:45
*** ykarel has quit IRC07:46
*** ykarel has joined #openstack-infra07:46
*** aluria has joined #openstack-infra07:47
*** e0ne has joined #openstack-infra07:49
*** lpetrut has joined #openstack-infra07:54
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: Revert "Revert "web: rewrite interface in react""  https://review.openstack.org/60747908:13
*** priteau has joined #openstack-infra08:25
*** aojeagarcia has joined #openstack-infra08:26
quiquellHello, any infra-root here who can prioritize promotion blocker review https://review.openstack.org/#/c/608080/ at the tripleo gate?08:35
*** derekh has joined #openstack-infra08:35
openstackgerritAndreas Jaeger proposed openstack-infra/project-config master: remove job settings for neutron repositories  https://review.openstack.org/59786508:37
quiquellprometheanfire: Are you infra-root ?08:38
*** panda|off is now known as panda08:40
*** markvoelker has joined #openstack-infra08:41
*** electrofelix has joined #openstack-infra08:42
*** ykarel_ has joined #openstack-infra08:44
*** ykarel has quit IRC08:47
*** ykarel_ is now known as ykarel|lunch08:51
*** gfidente has joined #openstack-infra09:01
openstackgerritPierre Riteau proposed openstack/diskimage-builder master: Fail build due to missing kauditd only when SELinux is enabled  https://review.openstack.org/60818709:03
openstackgerritColleen Murphy proposed openstack-infra/puppet-pip master: [DNM] debugging  https://review.openstack.org/60704809:03
*** janki has joined #openstack-infra09:05
*** chandankumar has joined #openstack-infra09:06
*** agopi has quit IRC09:14
*** markvoelker has quit IRC09:15
*** efried has quit IRC09:17
*** efried1 has joined #openstack-infra09:17
*** efried1 is now known as efried09:19
openstackgerritmelissaml proposed openstack/diskimage-builder master: fix a typo  https://review.openstack.org/60819109:26
*** Emine has joined #openstack-infra09:32
*** stephenfin is now known as finucannot09:35
*** chandankumar has quit IRC09:48
*** ykarel|lunch is now known as ykarel09:54
*** chandankumar has joined #openstack-infra09:59
*** yamamoto has quit IRC10:07
*** yamamoto has joined #openstack-infra10:08
*** yamamoto has quit IRC10:09
*** yamamoto has joined #openstack-infra10:10
*** markvoelker has joined #openstack-infra10:12
*** yamamoto has quit IRC10:13
*** yamamoto has joined #openstack-infra10:14
AJaegerdhellmann: https://review.openstack.org/#/c/597576 is ready for review, isn't it? And please WIP https://review.openstack.org/597865 again10:14
*** yamamoto has quit IRC10:16
*** yamamoto has joined #openstack-infra10:17
*** Emine has quit IRC10:17
*** udesale has quit IRC10:18
*** yamamoto has quit IRC10:21
*** chandankumar has quit IRC10:26
*** yamamoto has joined #openstack-infra10:27
*** yamamoto has quit IRC10:32
*** yamamoto has joined #openstack-infra10:33
*** sshnaidm is now known as sshnaidm|off10:35
*** yamamoto has quit IRC10:38
*** longkb has quit IRC10:41
*** longkb has joined #openstack-infra10:42
*** longkb has quit IRC10:43
*** jtomasek has quit IRC10:43
*** markvoelker has quit IRC10:44
*** kjackal has quit IRC10:44
*** ijw has joined #openstack-infra10:45
*** ijw has quit IRC10:50
*** jamesmcarthur has joined #openstack-infra10:51
*** annp has quit IRC10:53
*** kjackal has joined #openstack-infra10:56
*** jamesmcarthur has quit IRC10:58
*** jchhatbar has joined #openstack-infra10:59
*** janki has quit IRC11:00
*** jchhatbar has quit IRC11:01
*** jchhatbar has joined #openstack-infra11:02
*** jpena is now known as jpena|lunch11:04
*** jesusaur has quit IRC11:06
*** jchhatba_ has joined #openstack-infra11:06
*** jtomasek has joined #openstack-infra11:07
*** yamamoto has joined #openstack-infra11:08
*** jchhatbar has quit IRC11:09
*** dtantsur|afk is now known as dtantsur11:09
*** jesusaur has joined #openstack-infra11:14
*** rpittau_ has quit IRC11:17
*** yamamoto has quit IRC11:18
*** yamamoto has joined #openstack-infra11:19
*** quiquell is now known as quiquell|lunch11:20
*** yamamoto has quit IRC11:23
*** tpsilva has joined #openstack-infra11:23
*** yamamoto has joined #openstack-infra11:24
fungiquiquell|lunch: i've promoted 608080,2 just now11:38
*** markvoelker has joined #openstack-infra11:41
*** quiquell|lunch is now known as quiquell11:42
quiquellfungi: thanks !11:42
mrhillsmanhey folks, any thoughts on this:11:45
mrhillsmanOct  5 11:44:56 openlab-zuul zuul-scheduler[6129]: 2018-10-05 11:44:56,309 WARNING kazoo.client: Transition to CONNECTING11:45
mrhillsmanOct  5 11:45:03 openlab-zuul zuul-scheduler[6129]: 2018-10-05 11:45:03,108 WARNING kazoo.client: Connection dropped: outstanding heartbeat ping not received11:45
dhellmannAJaeger : done11:45
mrhillsman`nc -v -w2 192.168.200.147 2181` returns success so i know the issue is not the port being inaccessible11:46
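A reachable port (per the nc test above) rules out firewalling but not session expiry; the kazoo warnings are about missed heartbeats. A minimal sketch of raising the client-side session timeout, assuming the kazoo library and the address from the log (the 30s value is an illustrative assumption, not OpenLab's actual setting):

    # Give the ZooKeeper session more heartbeat headroom; host/port are
    # taken from the nc test above, the timeout value is an assumption.
    from kazoo.client import KazooClient

    client = KazooClient(hosts="192.168.200.147:2181", timeout=30.0)
    client.start()  # raises if no session can be established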
dhellmannfungi , lifeless : I think we don't need to care about the contents of the setup.cfg files. My reasoning is in http://lists.openstack.org/pipermail/openstack-dev/2018-October/135461.html11:46
dhellmannlet me know if my logic is faulty, of course11:46
*** jtomasek has quit IRC11:48
sekharvajjulaHi guys. I am trying to set up a 3rd-party CI for running Ironic test cases on Nokia hardware. Still having issues related to zuul configurations and the project dependencies. Planning to post my logs and config files to some mailing list, to get some support. Do you think openstack-infra@lists.openstack.org is the right place?11:51
*** e0ne has quit IRC11:54
*** rh-jelabarre has joined #openstack-infra11:56
fungidhellmann: i've also replied. depends on what you mean by "we"11:56
fungidhellmann: if "we" means "openstack, don't care about anybody else" then sure11:57
dhellmannI'm not sure why updating the setup.cfg in oslo.config affects anyone else?11:57
fungisekharvajjula: yes11:58
*** agopi has joined #openstack-infra11:58
dhellmannI'm not saying no one anywhere needs to update that setting. I'm saying the reason given for doing it in all of *our* files doesn't make sense to me.11:58
fungidhellmann: the concern seemed to be that if pbr updates its setup.cfg file to respect the transition in latest wheel, then there will be an avalanche of changes proposed to other projects by people who don't know any better?11:59
*** agopi has quit IRC12:00
dhellmannfungi : there is already such an avalanche12:00
*** jtomasek has joined #openstack-infra12:00
dhellmannthere are at least 3 people writing those patches12:00
dhellmannthe idea that it is somehow "more correct" confuses me12:01
*** jtomasek has quit IRC12:01
fungii just want to make it clear when we provide such advice that we're only talking about projects using openstack-project-specific release workflow not needing to worry about it12:01
dhellmannwhat benefit do you perceive in any project anywhere including that flag in setup.cfg?12:01
*** jtomasek has joined #openstack-infra12:02
fungiwhat benefit do you perceive in any project anywhere building universal wheels?12:02
fungior ceasing to use deprecated section names?12:02
fungi(which was the actual question in the pbr change)12:02
dhellmannuniversal wheels published to PyPI are more useful to more consumers. I don't see any real point in using them locally.12:02
fungii'll repeat: i just want to make it clear when we provide such advice that we're only talking about projects using openstack-project-specific release workflow not needing to worry about it12:03
dhellmannhaving the flag in that file is a convenience so that when running "python setup.py bdist_wheel" you don't have to add the extra argument. So projects that publish packages by hand may want it.12:03
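For reference, the setup.cfg stanza under discussion; [bdist_wheel] is the section name current wheel releases expect (wheel 0.32 dropped the deprecated [wheel] spelling):

    [bdist_wheel]
    universal = 1

With that present, a plain `python setup.py bdist_wheel` produces a py2.py3 wheel without any extra flag.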
dhellmannok, I guess I assumed that if we're talking about this on the openstack mailing list that was implied12:03
fungiother non-openstack projects will still want a mechanism to build universal wheels, using standard commands recommended by python packaging guides12:04
dhellmannare you talking about non-openstack projects hosted on openstack.org? or globally?12:04
fungimore the former12:04
dhellmanndo we have any of those who are not using the CI system to publish packages after a tag is applied?12:05
dhellmannI imagine we do12:05
fungii'm not going to recommend openstack's special release build jobs to non-openstack projects hosted on out infrastructure12:05
fungis/out/our/12:05
*** boden has joined #openstack-infra12:05
dhellmannsure12:05
fungithere are also potentially non-openstack projects following code review discussions on pbr changes. it's quite popular12:06
*** jpena|lunch is now known as jpena12:06
*** felipemonteiro has joined #openstack-infra12:07
fungiand one of the initial takeaways which started that ml thread was that someone got the impression configuring your setup.cfg to build universal wheels was actively harmful and should be avoided12:07
*** udesale has joined #openstack-infra12:12
fungi(a ptl, in fact)12:13
*** markvoelker has quit IRC12:15
sekharvajjulafungi: Thanks. should I subscribe to the mailing list first in order to send mails?12:15
fungisekharvajjula: i recommend doing so, yes, for two reasons. first, posts from non-subscribers will get stuck in the moderation queue until someone gets around to approving them; second, replies are likely to go to the list only and subscribing will still get you a copy without having to periodically check the list archives12:17
mrhillsmanany idea why i see success status and logs but zuul is not reporting back to github12:17
mrhillsmanand all of our nodepool nodes are stuck in-use12:18
mrhillsmanpayloads are successfully delivered12:18
mrhillsmanjobs are queued up12:18
fungimrhillsman: i'm not really familiar with github... you may have more luck asking in #zuul12:18
mrhillsmanthx12:19
sekharvajjulafungi: Sure. I have sent a subscription request. I will await its approval before sending my query.12:19
AJaegerfungi, could you review next python3-first change, please? https://review.openstack.org/#/c/597576 openstack-ansible is ready...12:19
fungisekharvajjula: subscription confirmation is automated. if you try to subscribe you should immediately receive a request confirmation reply with instructions to follow12:20
TheJuliafungi: I am aware of the arm64 notes, but I haven't ever gotten far enough to give them a spin and build a job on them. They essentially would be needed to prepare a ramdisk since we can't cross-os build it. Maybe it might make sense to try and move one of our IPA jobs to arm64 at some point.12:23
*** rlandy has joined #openstack-infra12:25
*** amoralej is now known as amoralej|lunch12:27
*** agopi has joined #openstack-infra12:27
fungiTheJulia: out of curiosity, is it that you really can't use a multiarch environment to cross-build those from amd64, or simply that it's a lot more complicated to set up than it's worth?12:27
*** dtantsur is now known as dtantsur|brb12:27
fungii've done a fair amount of arm/armel/armhf/arm64 compiles on amd64 systems in service of slow socs12:28
fungiand i agree at least that it's a pain12:29
*** e0ne has joined #openstack-infra12:29
sekharvajjulafungi: Hmm. Did not get any mail still. waiting :)12:29
fungisekharvajjula: did you subscribe using the web form at http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra or by sending a message to openstack-infra-request@lists.openstack.org with a subject line that says "subscribe"?12:32
*** aidin has joined #openstack-infra12:32
sekharvajjulafungi: 1st option (on the web page).12:34
openstackgerritMerged openstack-infra/project-config master: remove job settings for OpenStackAnsible repositories  https://review.openstack.org/59757612:35
*** dosaboy has quit IRC12:35
AJaegeryeah! Only three repos to go - out of 65 - for python3-first ^12:35
*** dosaboy has joined #openstack-infra12:36
*** dosaboy has quit IRC12:36
fungisekharvajjula: looks like you tried twice? 12:03:53 and 12:26:38 utc12:40
*** trown|outtypewww is now known as trown12:41
fungisekharvajjula: the confirmation messages were both delivered to your mta at least. 250 response from nokia-com.mail.protection.outlook.com12:41
fungiinternal queue ids on your end are 2645699854801 and 2645699857133 respectively12:42
fungisekharvajjula: have you checked your spam folder or similar?12:42
sekharvajjulafungi: I checked in spam, Junk. not found anywhere. :(12:45
fungisekharvajjula: we've had problems with outlook.com/hotmail/office-365 not delivering messages from our listserv in the past (users of that service incorrectly reporting moderator notifications as spam), so if you have another e-mail address you can use you might have more luck12:47
sekharvajjulafungi: Tried from Gmail and it worked. Thank you!12:49
fungiglad you found a workaround. now i get to start keeping an eye out for microsoft blocking our mailing lists again12:49
*** ansmith has joined #openstack-infra12:50
*** dosaboy has joined #openstack-infra12:56
*** dims_ has quit IRC12:58
sekharvajjulafungi: I got a confirmation mail. So I have sent an email with log attachment to the mailing list. when does it show in the archives?12:58
*** mriedem has joined #openstack-infra12:59
*** jcoufal has joined #openstack-infra13:01
* mordred waves to people13:05
fungisekharvajjula: if the attached log was large, it may be stuck in the moderation queue. i'll check in a few minutes13:08
fungimordred: have you made it all the way back from texas to... texas now?13:08
sekharvajjulafungi: It has 21MB attachment.13:09
fungiyes, that's fairly large. i think anything over 10mb ends up needing a moderator to check it out first13:10
mordredfungi: yes! it was a long journey13:10
fungibeset with armadillos at every turn?13:10
mordredSO MANY DILLOS13:11
*** ykarel has quit IRC13:12
*** efried is now known as fried_rice13:14
*** dave-mccowan has joined #openstack-infra13:15
*** Seb-Solon has joined #openstack-infra13:18
*** dave-mccowan has quit IRC13:21
*** kjackal has quit IRC13:24
openstackgerritStephen Finucane proposed openstack-dev/pbr master: Special case long_description_content_type  https://review.openstack.org/56517713:24
*** evrardjp has joined #openstack-infra13:25
openstackgerritStephen Finucane proposed openstack-dev/pbr master: Use templates for cover and lower-constraints  https://review.openstack.org/60094313:26
*** kjackal has joined #openstack-infra13:26
*** eernst has joined #openstack-infra13:27
*** kjackal_v2 has joined #openstack-infra13:28
*** ramishra has quit IRC13:28
TheJuliafungi: The limitation is DIB has no understanding of it and when I took like five minutes to ponder it, it seemed very invasive.13:29
TheJuliafungi: part of that being because binaries are executed in the chroot which means the loader is going to go "nope nope nope"13:30
*** kaiokmo has quit IRC13:30
*** panda is now known as panda|off13:31
fungisekharvajjula: i looked in the moderation queue and your post wasn't held or anything. what address did you send it to? openstack-infra@lists.openstack.org or something else?13:32
sekharvajjulafungi: openstack-infra-request@lists.openstack.org13:33
sekharvajjulaSorry. I will send it again to openstack-infra@lists.openstack.org13:33
fungisekharvajjula: that's the problem. the request address is basically the listserv's api for sending commands (subscribe, unsubscribe, and a few others)13:33
*** quiquell is now known as quiquell|off13:34
slaweqmordred: hi, quick question, if I would like to tell devstack to clone neutron from specific patch, like: https://review.openstack.org/#/c/608259/1 how I should configure my local.conf file for that? do You know?13:35
sekharvajjulafungi: sent13:35
AJaegerslaweq: "depends-on: https://review.openstack.org/#/c/608259/"13:36
AJaegerslaweq: https://docs.openstack.org/infra/manual/developers.html#cross-repository-dependencies13:36
slaweqAJaeger: it will not work because I need to use it in grenade job as "old" version13:36
slaweqand depends-on not works like that there AFAIK13:37
mordredslaweq: it should actually13:37
mordredsince that's a patch to stable/rocky, it'll get applied to the 'old' version in a grenade run triggered by a master patch13:37
*** ykarel has joined #openstack-infra13:38
openstackgerritMerged openstack-dev/pbr master: Support wheel 0.32.0+  https://review.openstack.org/60789413:38
mordred(zuul stacks patches onto branches for all of the branches involved in the dependency chain)13:38
slaweqso if I will do my "master patch" as depends-on this one from rocky it will deploy old version from this one?13:38
*** amoralej|lunch is now known as amoralej13:39
*** ykarel has quit IRC13:39
*** ykarel_ has joined #openstack-infra13:39
mordredslaweq: yes - at least it should13:40
slaweqmordred: AJaeger: thx, I will check that then :)13:41
fungiit will deploy any dependencies targeting stable/rocky on the stable/rocky branches of those projects which grenade will use to populate the "old" state and any master branch dependencies on the master branches of those projects which grenade will use to set up the "new" state13:41
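Concretely, the footer mordred and AJaeger describe goes in the commit message of the master patch, e.g. (using the stable/rocky change from the log):

    Depends-On: https://review.openstack.org/608259

Zuul then checks the dependency out on its own branch, which is how grenade's "old" side picks it up.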
sekharvajjulafungi: thanks for the support. I can now see my mail in the archive.13:43
sekharvajjulaCan someone help me on this issue http://lists.openstack.org/pipermail/openstack-infra/2018-October/006203.html ?13:43
prometheanfirecan someone kick another gentoo rebuild?  I think the last one happened just before https://review.openstack.org/608161 merged13:44
fungisekharvajjula: it may make sense to continue discussing in the #openstack-third-party-ci channel too, since in theory there are a greater number of openstack third-party ci operators there13:45
*** eernst has quit IRC13:45
mriedemmordred: AJaeger: corvus: clarkb: is there any way to make zuulv3 required-projects branch-specific?13:46
mriedemi.e. only require a project on master for example13:47
fungiprometheanfire: most recent image completed building 5 hours 44 minutes ago so did indeed probably start building before 608161 merged13:47
sekharvajjulafungi: Yes I have this issue open there as well. But I am in Europe/Finland. I see that people in the #openstack-third-party-ci channel are active in US timezones. It would be good to carry this issue in a mail thread rather than in irc, where I can also attach logs easily!13:47
prometheanfirefungi: timezones :(13:47
prometheanfireI did a recheck, so should see13:47
AJaegermriedem: what's the use case?13:48
mriedemAJaeger: devstack on master is going to require the openstack/placement repo13:48
mriedemand i'd rather not have to update 300 individual job definitions13:48
*** dtantsur|brb is now known as dtantsur13:48
AJaegermriedem: devstack is branched, so you can update only master.13:49
mriedemlegacy-devstack-dsvm-updown isn't,13:49
AJaegermriedem: convert to Zuul v3 ;)13:49
mriedemthat's still in openstack-zuul-jobs,13:49
mriedemthere are probably going to be more13:49
fungiahh, so for the legacy jobs specifically13:49
mriedemi've had to do this already for a few grenade jobs13:49
mriedemyes13:49
mriedemhttps://review.openstack.org/#/q/topic:cd/placement-solo+(status:open+OR+status:merged)13:50
AJaegerJust add it, a stable/rocky change would then checkout master of placement...13:50
fungimriedem: the legacy devstack jobs don't automatically copy required-projects into the devstack projects list though, right?13:50
AJaegerfungi: they don't...13:50
fungiso zuul will simply be pushing a copy of the repo which devstack doesn't know to use anyway13:50
AJaegermriedem: and we don't really touch legacy jobs anymore, these are old and should be thrown away...13:50
mriedemso i can just add openstack/placement to required-projects in legacy-dsvm-base?13:51
fungii _think_ it will be effectively ignored by devstack on earlier branches13:51
mriedemand that will be ok for stable branch job runs too?13:51
fungiin the legacy jobs, that is13:51
fungishould be, but a depends-on will tell you13:51
mriedemok13:52
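The change mriedem pushes next amounts to roughly the following Zuul syntax in openstack-zuul-jobs (a sketch, not the exact diff):

    - job:
        name: legacy-dsvm-base
        required-projects:
          - openstack/placement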
openstackgerritMatt Riedemann proposed openstack-infra/openstack-zuul-jobs master: Add openstack/placement to legacy-dsvm-base required-projects  https://review.openstack.org/60826613:54
fungiprometheanfire: i see gentoo-17-0-systemd-0000000896 building on nb02 for ~7 minutes now13:54
openstackgerritMatt Riedemann proposed openstack-infra/devstack-gate master: Include openstack/placement in PROJECTS starting in Stein  https://review.openstack.org/60685313:55
prometheanfirefungi: that works too :D13:55
fungiprometheanfire: well, it's because i deleted the previous two13:56
prometheanfireah13:56
fungii suppose i should have clarified13:56
*** kaiokmo has joined #openstack-infra13:56
prometheanfirewell, wonder if my rekick worked :P13:56
prometheanfireI think it might have13:56
*** felipemonteiro has quit IRC13:57
*** EmilienM is now known as EvilienM13:57
*** eharney has joined #openstack-infra14:02
openstackgerritJeremy Stanley proposed openstack-infra/system-config master: Add note about mounting one AFS volume in another  https://review.openstack.org/60827214:03
*** sthussey has joined #openstack-infra14:06
*** ykarel_ is now known as ykarel14:09
*** dansmith is now known as SteelyDan14:20
*** ykarel is now known as ykarel|afk14:23
*** aidin has quit IRC14:24
*** kjackal has quit IRC14:25
*** olivierb_ has quit IRC14:27
*** auristor has quit IRC14:27
*** auristor has joined #openstack-infra14:28
*** panda|off has quit IRC14:36
*** panda has joined #openstack-infra14:37
*** udesale has quit IRC14:38
*** e0ne has quit IRC14:38
*** munimeha1 has joined #openstack-infra14:43
TheJuliaHey infra folks, are we aware of any CI infrastructures that support our CI infrastructure that do not permit nested virt?14:46
*** hwoarang has quit IRC14:47
fungiTheJulia: if you mean cloud providers where we boot nodes not having nested virt support or, in some cases, having very broken behavior when you try to engage nested virt, definitely14:47
fungii should clarify, nested virt acceleration. we of course support non-accelerated booting of virtual machines inside the test node virtual machines14:48
* TheJulia wonders how we could detect or something14:48
fungiTheJulia: https://docs.openstack.org/infra/manual/testing.html#known-differences-to-watch-out-for14:48
fungi"Nested virt is not available in all clouds. And in clouds where it is enabled we have observed a higher rate of crashed test VMs when using it. As a result we enforce qemu when running devstack and may further restrict the use of nested virt."14:48
fungiTheJulia: clouds which simply don't provide it are easy to detect from within the vm. clouds which break terribly when you try to use the advertised nested virt acceleration on the other hand are nontrivial to identify14:49
fungialso, obviously, we have no access to troubleshoot such breakage since we lack direct access to the hypervisor layer14:50
fungiso basically your test nodes just suddenly stop responding and you get no ability to debug the cause14:50
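A minimal sketch of the easy half of that detection - whether the guest CPU advertises hardware virt flags at all - which cannot catch the "advertised but broken" case fungi describes (assumes a Linux guest):

    # Runs inside the test VM; vmx = Intel VT-x, svm = AMD-V.
    def has_virt_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            cpuinfo = f.read()
        return "vmx" in cpuinfo or "svm" in cpuinfo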
TheJuliaYeah, except I also don't want to extend timeouts to be 2.5-6x longer to not enable nested virt for a job. :(14:50
* TheJulia sighs14:51
fungiconvince cloud providers to replace old hardware and upgrade the software they're using?14:51
fungialso we have a mix of kvm and xen providers, so nesting acceleration options differ among them14:51
fungimix of intel and amd cpus and varying generations of those even within providers leads to inconsistency there as well14:53
*** jamesmcarthur has joined #openstack-infra14:56
logan-the last few cycles of nested virt breakage on xenial have been triggered by guest kernel updates in the nodepool images14:59
*** hwoarang has joined #openstack-infra15:00
logan-currently, with xenial running in both the guest and hv, nested virt seems to only work if the hv is running a 4.15 hwe kernel.15:01
*** boden has quit IRC15:01
*** dkrol has joined #openstack-infra15:02
dkrolHello, I'm openstack trove developer and I have a problem with running tests on openstack infra. As a part of our testing job we are building vm image to be used during tests. During this job I'm getting an error - "Sorry, user stack is not allowed to execute '/bin/bash -c dd if=/opt/stack/trove/integration/scripts/files/requirements/ubuntu-requirements.txt of=/tmp/dib_build.HmlRR06T/hooks/requirements.txt' as stack on ubuntu15:04
dkrolDo you know what may be the reason for this and what I can do about it?15:04
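For context, devstack normally grants its stack user passwordless sudo with a rule along these lines - shown as a hedged example of what the error suggests is missing or being overridden, not a confirmed diagnosis:

    # /etc/sudoers.d/stack
    stack ALL=(ALL) NOPASSWD: ALL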
fungidkrol: looks like you need devstack troubleshooting help. i recommend the #openstack-qa channel since they're the devstack maintainers15:10
dkrolgreat, thank you15:11
openstackgerritIlya Shakhat proposed openstack-infra/project-config master: Move Stackalytics jobs to project repository  https://review.openstack.org/60828615:15
*** munimeha1 has quit IRC15:22
*** Emine has joined #openstack-infra15:23
prometheanfirecloudnull: /win 2015:23
prometheanfirewell, oops15:23
*** gyee has joined #openstack-infra15:24
*** bnemec is now known as beekneemech15:27
*** munimeha1 has joined #openstack-infra15:27
clarkbTheJulia: fungi: at the very least building relationships with cloud providers so that we can work together with them to debug issues. We've been successful more recently doing this with logan- and mnaser and dpawlik.15:27
openstackgerritIlya Shakhat proposed openstack-infra/project-config master: Move performa jobs to project repository  https://review.openstack.org/60828915:28
clarkbmriedem: fyi https://review.openstack.org/#/c/608038/ is an e-r query that would be good to get in15:29
*** ykarel|afk is now known as ykarel|ptotillwe15:33
*** ykarel|ptotillwe is now known as ykarel|ptotwedne15:34
prometheanfirenb01 and nb02 are not mirrored to eachother?15:34
clarkbprometheanfire: no, they are independent distributed builders15:34
fungias is nb0315:35
*** pcaruana has quit IRC15:39
prometheanfireah15:39
pabelanger+1 open cloud ops!15:39
*** boden has joined #openstack-infra15:44
*** e0ne has joined #openstack-infra15:50
*** e0ne has quit IRC15:55
*** diablo_rojo has joined #openstack-infra15:55
TheJuliaclarkb: Yeah, and I recognize it's not great that we've enabled some jobs with nested virt because of timeouts and constraints, and not wanting to keep things tied up too long running tests. Kind of created my own nightmare in a sense.15:58
openstackgerritClark Boylan proposed openstack-infra/zuul-jobs master: Retry failed git pushses on workspace setup  https://review.openstack.org/60830315:58
*** lpetrut has quit IRC15:59
openstackgerritDean Troyer proposed openstack-infra/project-config master: Add publish jobs for StarlingX api-ref and releasenotes  https://review.openstack.org/60830415:59
openstackgerritMerged openstack-infra/elastic-recheck master: Generic OOM killer query  https://review.openstack.org/60803815:59
*** emerson has quit IRC16:00
openstackgerritClark Boylan proposed openstack-infra/zuul-jobs master: Retry failed git pushses on workspace setup  https://review.openstack.org/60830316:00
*** jchhatba_ has quit IRC16:00
*** emerson has joined #openstack-infra16:01
clarkbTheJulia: ironic may be in a special place due to its tight coupling to "hardware", but I've tried to push containers more to support the arbitrary compute in a VM use case. Unfortunately it hasn't caught on much16:01
*** PapaOurs is now known as bauzas16:08
TheJuliaI guess it kind of is a special case in a sense, but still :(16:08
*** armax has quit IRC16:09
clarkbya whether virtual or baremetal ironic wants to talk to a "machine" and containers are all the bits you get after the machine stuff is sorted16:09
*** mriedem has quit IRC16:09
*** armax has joined #openstack-infra16:11
*** jpena is now known as jpena|off16:12
openstackgerritClark Boylan proposed openstack-infra/zuul-jobs master: Retry failed git pushses on workspace setup  https://review.openstack.org/60830316:12
*** ginopc has quit IRC16:17
*** ianychoi_ is now known as ianychoi16:19
*** derekh has quit IRC16:19
*** jtomasek has quit IRC16:21
johnsomclarkb FYI, I finally got a good test run using a bionic nodepool instance with OVH hosts that have vmx enabled. They are crashing out the same way as the xenial instances are. So, we are stuck waiting for another kernel rev that fixes the last regression.16:26
clarkbjohnsom: logan- has indicated that the xenial hardware enablement kernels fix things in limestone. I don't think that bionic has hwe kernels yet?16:27
johnsomYeah, maybe not.  I know it was working on the last round of kernels, but this round has a regression. sigh16:28
logan-yeah i've noticed bionic guests living on base xenial kernel hvs crash the same way as xenial guests16:28
johnsomFYI: http://logs.openstack.org/39/600539/18/check/octavia-v2-dsvm-scenario/6c294a7/controller/logs/libvirt/qemu/instance-0000000a_log.txt.gz16:28
johnsomI'm going to update my kernel bug with the new kernel version info.16:28
logan-xenial hwe magically allows regular xenial guests to work, no idea why. we are about 75% upgraded to hwe on the limestone nodepool cloud.16:28
*** trown is now known as trown|lunch16:30
*** aojeagarcia has quit IRC16:32
TheJuliaoh joy :(16:33
*** gfidente has quit IRC16:37
*** ansmith has quit IRC16:42
*** pcaruana has joined #openstack-infra16:42
johnsomlogan- Good to know. I might keep randomly doing re-checks to see if I win the limestone lottery with my test patch and see if they are booting there.16:44
*** ansmith has joined #openstack-infra16:45
logan-johnsom: i'll ping ya once I get the rest of the HVs cycled thru for updates to hwe too. hopefully it is a stable fix.. it seems like we get new brokenness every time there's a new kernel package16:46
johnsomYeah, recently it has been alternating for sure.16:46
*** dims has joined #openstack-infra16:46
*** pcaruana has quit IRC16:50
*** jtomasek has joined #openstack-infra16:52
*** jamesmcarthur has quit IRC17:02
*** amoralej is now known as amoralej|off17:05
*** dave-mccowan has joined #openstack-infra17:08
*** yamamoto has quit IRC17:08
*** yamamoto has joined #openstack-infra17:09
openstackgerritJeremy Stanley proposed openstack-infra/puppet-gerritbot master: Use default IRC port  https://review.openstack.org/60831317:13
*** yamamoto has quit IRC17:14
*** dave-mccowan has quit IRC17:14
openstackgerritJeremy Stanley proposed openstack-infra/gerritbot master: Identify with SASL  https://review.openstack.org/60831417:14
fungiclarkb: mwhahaha: proposed fix for join race in gerritbot ^17:14
openstackgerritMerged openstack-infra/nodepool master: Run release-zuul-python on release  https://review.openstack.org/60764917:17
*** dtantsur is now known as dtantsur|afk17:18
*** aidin has joined #openstack-infra17:25
openstackgerritJeremy Stanley proposed openstack-infra/puppet-gerritbot master: Use port 6697 for IRC over SSL/TLS  https://review.openstack.org/60831317:25
*** ansmith has quit IRC17:25
*** aidin has quit IRC17:29
*** florianf has quit IRC17:29
*** ansmith has joined #openstack-infra17:30
*** shardy has quit IRC17:31
openstackgerritJeremy Stanley proposed openstack-infra/zuul-website master: Update events lists/banner after 2018 Ansiblefest  https://review.openstack.org/60832017:31
*** trown|lunch is now known as trown17:32
*** ykarel|ptotwedne has quit IRC17:33
*** eharney has quit IRC17:34
*** jamesmcarthur has joined #openstack-infra17:35
*** jamesmcarthur has quit IRC17:39
*** felipemonteiro has joined #openstack-infra17:41
*** jamesmcarthur has joined #openstack-infra17:48
*** eharney has joined #openstack-infra17:49
*** felipemonteiro has quit IRC17:49
openstackgerritJeremy Stanley proposed openstack-dev/specs-cookiecutter master: Clean up .gitignore references to personal tools  https://review.openstack.org/60832917:50
AJaegerinfra-root, is nodepool somehow miscounting nodes? Looking at http://grafana.openstack.org/d/T6vSHcSik/zuul-status?panelId=20&fullscreen&orgId=1 , we always stay around 950 - but have a capacity of 1114. And there are 1000 node requests waiting...17:51
AJaegerwow, 668 of our used nodes are CentOS 7...17:52
clarkbAJaeger: dmsimard pointed out at ansiblefest that that is a really good number to look at to see how much of the usage is tripleo17:52
clarkbsince most of the centos usage is tripleo17:52
clarkbas for why the counting seems off Iam not sure17:53
*** jamesmcarthur has quit IRC17:53
clarkbhrm looking at ovh bhs1 we aren't using the full capacity there17:53
pabelangerAJaeger: looks like packethost could maybe be dropped in max-servers: http://grafana.openstack.org/dashboard/db/nodepool-packethost17:53
clarkbI think that implies a quota issue in that region17:53
clarkbprobably a quota issue in packethost depressing usage too17:53
pabelangerand will reduce nodepool launch errors too17:53
clarkbmy guess is leaked ports in both17:54
AJaegerpabelanger: that's 20 nodes for packethost - but where are the other 144?17:54
clarkbAJaeger: ovh bhs117:54
clarkbthis is a side effect of nodepool checking quota I think17:54
clarkbwe set max-servers and boot min(max-servers, whatever cloud quota says we can boot)17:54
pabelangerAJaeger: OVH is the rest: http://grafana.openstack.org/dashboard/db/nodepool-ovh17:54
AJaegerclarkb, pabelanger : indeed ovh bhs117:54
AJaegerthanks!17:55
pabelangerlots of errors there17:55
clarkbpabelanger: likely due to port quota17:55
pabelangerYah17:55
clarkbI'm checking bhs1 one17:55
clarkbs/one/now/17:55
AJaegerclarkb: yeah, centos = tripleo looks like a good approximation.17:55
clarkbya 263 DOWN ports17:55
clarkbI will start a loop to delete those17:55
AJaegerthanks, clarkb17:56
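A minimal sketch of the kind of cleanup loop meant here, assuming openstacksdk and a clouds.yaml entry named "ovh-bhs1" (the entry name and the unattached-port check are assumptions):

    import openstack

    conn = openstack.connect(cloud="ovh-bhs1")
    for port in conn.network.ports(status="DOWN"):
        if not port.device_id:  # only remove ports no server still claims
            conn.network.delete_port(port)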
pabelangerbut yah, centos-7 is hungry today: http://grafana.openstack.org/d/rZtIH5Imz/nodepool?orgId=1&from=now-1h&to=now17:56
AJaegerpabelanger: for CentOS, I think the problem is that they had gate problems - abandoned everything in gate and restored.17:56
AJaegerAnd a restore pushes everything through check again17:56
pabelangeryah17:57
AJaegermwhahaha ^17:57
pabelangerwe have zuul dequeue CLI now17:57
AJaeger(can't find other tripleo folks on IRC)17:57
pabelangerbut, that would still pushin it via check17:57
clarkbI don't think dequeuing is the solution though, it is going to be prioritizing stabilization fixes over features17:57
AJaegermight be worth discussing options with tripleo. promotion is also an option.17:58
pabelangerAJaeger: is tripleo doing a release this week?17:58
clarkbwe promoted a change early today (thanks fungi) but I read the change and it was to fix periodic jobs17:58
AJaegerno idea ;(17:58
clarkbI don't think those changes should be promoted17:58
mwhahahathe rocky release screwed our upgrade jobs17:58
mwhahahawhich hosed the gate17:58
AJaegermwhahaha: your abandon and restore crashes our CI resources ;/17:58
AJaegermwhahaha: we need a better solution since restore means everything goes through check again...17:59
*** yamamoto has joined #openstack-infra17:59
fungii'll admit i didn't read the change quiquell|off asked to have promoted, just knew they were struggling with a bunch of new issues and took his word for it that it was urgent. i'll look more closely in the future17:59
*** josephrsandoval has joined #openstack-infra17:59
mwhahahaAJaeger: so would failing the gate and resetting every patch18:00
clarkbfungi: no worries, but something we should check in the future as promoting a change resets the entire gate queue (so should really be reserved for changes that give the existing changes in the gate a better chance at merging)18:00
AJaegermwhahaha: right now 660 out of 950 nodes are CentOS-7...18:00
AJaegermwhahaha: I agree - I'm just saying, we need to find a better approach together.18:00
mwhahahayes i am working on reducing the load18:00
mwhahahahttps://review.openstack.org/#/q/topic:reduce-consumption+(status:open+OR+status:merged)18:00
mwhahahaso everything takes time18:01
clarkbAJaeger: bhs1 availability is increasing as the ports are deleted18:01
fungiclarkb: yep, it basically didn't even cross my mind that someone would ask to have a change promoted for tripleo without understanding what it did18:01
AJaegermwhahaha: I mean also for abandon/restore - and promote. I hope we can do  this together in a better way...18:01
AJaegerclarkb: great, thanks18:01
mwhahahasure18:01
mwhahahawhat was promoted earlier?18:01
AJaegermwhahaha: 608080,218:01
pabelangerfungi: maybe a discussion about a smaller window size in the gate pipeline can help minimize the number of changes that would reset on a promote18:01
AJaegerpabelanger: we limit it for tripleo ;)18:02
clarkbpabelanger: I'm not concerned about it if we promote changes that actually fix bugs in the gate18:02
mwhahahaAJaeger: yea that didn't cause the problem18:02
AJaeger(at least on request)18:02
clarkbpabelanger: the problem is that the changes being promoted don't always do that18:02
mwhahahahttps://review.openstack.org/#/c/608056/ that broke the gate18:03
mwhahahacause upgrade jobs suddenly weren't getting the latest version of code18:03
mwhahahabecause version numbers :/18:03
mwhahahahappy friday18:03
AJaegerah, a release...18:03
pabelangerAJaeger: clarkb: yah, if the concern is that the gate is not stable, maybe a new window-floor change that reduces the number of nodes any given queue takes from nodepool is something to try. Not a long-term solution, since all other projects also get bumped down18:04
clarkbpabelanger: right, I'm saying I'd much rather put effort on directly addressing the problems18:04
clarkbthis is why I keep pushing elastic-recheck so that we can understand what the problems are and work from there to fix them18:05
clarkband I've been picking infra related ones off of that list and trying to push fixes too18:05
pabelangerclarkb: Yup, agree. hopefully everybody is onboard with that18:05
clarkb(like the stackviz python3 issue and the git push retries)18:05
pabelanger++18:05
AJaegerclarkb: much appreciated - thanks18:05
*** felipemonteiro has joined #openstack-infra18:06
AJaegeranybody else to review clarkb's  git push retries? https://review.openstack.org/#/c/608303/18:06
clarkbAJaeger: we may want to test that one a bit before merging just because it will affect most jobs18:07
AJaegerclarkb: ovh bhs1 looks good again...18:07
AJaegerclarkb: yes, indeed...18:07
AJaegerDo we know what the  fix for packethost is?18:09
clarkbAJaeger: not yet, I suspect it is also port leaks18:09
clarkbI can check as soon as the current bhs1 port delete script completes18:10
clarkbbasically both clouds exhibit the same issue in neutron18:10
clarkband soon gra1 will too18:10
AJaegerargh ;(18:10
*** electrofelix has quit IRC18:10
clarkbthis is quite literally a bug in openstack affecting our ability to consume openstack for testing18:10
clarkbunfortunately I think it was fixed in newer openstack and so the answer from openstack is going to be "upgrade"18:11
pabelangeris it something we can handle in nodepool, like we do for clean-floating-ips?18:12
clarkbpabelanger: possibly, we'd have to sort out the lifecycle of states and metadata info for a port and whether or not it is safe to delete any DOWN port that doesn't have an instance that maps to a nodepool instance18:13
fungiclarkb: are you sure it's leaks and not just a lag in them getting deleted? the response from ovh seemed to suggest it was a performance problem on their backend18:13
clarkbfungi: yes aiui when you delete a nova instance the nova service tells neutron to delete the port. If that request from nova to neutron times out then the port leaks18:13
clarkbfungi: the performance issue is in handling that request between nova and neutron18:13
clarkbif it fails/times out the port leaks18:14
clarkbfixing the performance issue makes neutron more reliable for nova and you leak less18:14
fungiexcept when we turned off our use of the region, all the seemingly "leaked" ports freed up on their own a few minutes later18:14
clarkbfungi: I think those may not have leaked yet18:14
clarkbthe outstanding requests to remove those ports were still in flight?18:14
fungithen again, might also have been some poor sysadmin doing cleanup behind the scenes too18:15
clarkbwhen I deleted ports a previous time some of them were definitely days old18:15
fungigood to know18:15
clarkbunfortunately I didn't care to check the age of the ports in this list18:15
fungiso, yeah, maybe that bug was still present in newton (what they were upgrading to)?18:15
clarkbnewton ish iirc18:16
clarkb| created_at            | 2018-10-05T10:37:53Z is a randomly selected packethost port18:17
*** jcoufal has quit IRC18:18
clarkbthat is DOWN18:18
*** jcoufal has joined #openstack-infra18:18
clarkb{"forbidden": {"message": "Quota exceeded for cores: Requested 8, but already used 3600 of 3600 cores", "code": 403}} is packethost issue18:19
clarkbit thinks we've booted ~450 instances18:20
clarkbquota isn't updating there I guess18:20
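A hedged way to see that mismatch from the consumer side, again assuming openstacksdk and a clouds.yaml entry named "packethost" (the total_cores_used attribute follows openstacksdk's compute limits object; treat it as an assumption to verify):

    import openstack

    conn = openstack.connect(cloud="packethost")
    limits = conn.compute.get_limits().absolute
    servers = list(conn.compute.servers())
    print(f"quota reports {limits.total_cores_used} cores used "
          f"across {len(servers)} visible servers")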
openstackgerritJeremy Stanley proposed openstack-infra/puppet-infra-cookiecutter master: Dissuade .gitignore references to personal tools  https://review.openstack.org/60833418:20
*** imacdonn has quit IRC18:22
*** imacdonn has joined #openstack-infra18:22
clarkbcleaning up the ports in packethost likely won't help like it did in bhs118:22
clarkbI've got to pop out for a few right now too, back after lunch things18:22
fungiyeah, if it's like last time, scaling waaaay back on max-servers in that region is likely the only solution until they resolve the performance issue there again18:23
openstackgerritDean Troyer proposed openstack-infra/project-config master: Add publish jobs for StarlingX api-ref and releasenotes  https://review.openstack.org/60830418:29
*** Swami has joined #openstack-infra18:36
*** slaweq has quit IRC18:37
*** slaweq has joined #openstack-infra18:38
*** eharney has quit IRC18:40
*** felipemonteiro has quit IRC18:49
*** jamesmcarthur has joined #openstack-infra18:49
*** jmorgan1 has quit IRC18:51
*** jmorgan1 has joined #openstack-infra18:56
*** ansmith has quit IRC18:58
*** josephrsandoval has quit IRC19:02
*** zul has quit IRC19:02
*** zul has joined #openstack-infra19:03
*** graphene has joined #openstack-infra19:10
*** ansmith has joined #openstack-infra19:14
*** felipemonteiro has joined #openstack-infra19:14
*** diablo_rojo has quit IRC19:14
*** jesusaur has quit IRC19:16
*** jamesmcarthur has quit IRC19:17
*** jesusaur has joined #openstack-infra19:23
*** dave-mccowan has joined #openstack-infra19:23
*** diablo_rojo has joined #openstack-infra19:28
openstackgerritMerged openstack-infra/zuul-website master: Update events lists/banner after 2018 Ansiblefest  https://review.openstack.org/60832019:29
*** felipemonteiro has quit IRC19:30
openstackgerritsebastian marcet proposed openstack-infra/openstackid-resources master: Fix on heuristic to get Summit Pages  https://review.openstack.org/60834119:30
openstackgerritClark Boylan proposed openstack-infra/zuul-jobs master: Retry failed git pushses on workspace setup  https://review.openstack.org/60830319:31
openstackgerritClark Boylan proposed openstack-infra/zuul-jobs master: Add test workspace setup role  https://review.openstack.org/60834219:31
openstackgerritClark Boylan proposed openstack-infra/project-config master: Test workspace setup role changes  https://review.openstack.org/60834319:33
*** slagle has quit IRC19:35
*** e0ne has joined #openstack-infra19:38
*** jcoufal has quit IRC19:38
*** ansmith has quit IRC19:39
*** ansmith has joined #openstack-infra19:40
*** e0ne has quit IRC19:43
*** kjackal_v2 has quit IRC19:46
*** eharney has joined #openstack-infra19:46
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP: docker-compose quickstart example  https://review.openstack.org/60834419:46
*** Emine has quit IRC19:47
openstackgerritMichael Vollman proposed openstack-infra/project-config master: Add os_manila role  https://review.openstack.org/60834519:53
*** e0ne has joined #openstack-infra19:55
*** e0ne has quit IRC19:56
openstackgerritsebastian marcet proposed openstack-infra/openstackid-resources master: Fix on heuristic to get Summit Pages  https://review.openstack.org/60834119:57
*** agopi is now known as agopi|away19:59
*** kjackal has joined #openstack-infra19:59
*** ansmith has quit IRC20:03
openstackgerritMichael Vollman proposed openstack-infra/project-config master: Add os_manila role  https://review.openstack.org/60834520:16
openstackgerritMerged openstack-infra/nodepool master: Fix race in test_launchNode_delete_error  https://review.openstack.org/60467820:18
*** graphene has quit IRC20:23
*** graphene has joined #openstack-infra20:24
openstackgerritMichael Vollman proposed openstack-infra/project-config master: Add os_manila role  https://review.openstack.org/60834520:27
*** e0ne has joined #openstack-infra20:43
openstackgerritDean Troyer proposed openstack-infra/project-config master: Add publish jobs for StarlingX api-ref and releasenotes  https://review.openstack.org/60830420:47
*** e0ne has quit IRC20:49
*** munimeha1 has quit IRC20:52
mtreinishfungi: did you ever get a chance to check the proxy for the subunit2sql db?20:56
fungimtreinish: thanks for the reminder! and no ;)20:59
fungichecking now20:59
mtreinishfungi: heh, no worries. Thanks for looking into it20:59
fungiwhat's the hostname again?20:59
mtreinishthe proxy runs on logstash.o.o21:00
*** kopecmartin|ruck is now known as kopecmartin|off21:00
mtreinishI don't know the hostname for the trove instance where the db actually lives though21:00
clarkbI think the firewall on the proxy node (logstash.o.o) is what needs fixing21:01
funginp21:01
fungiclarkb: yeah, that was my first hunch21:01
fungijust trying to remember which server ;)21:01
fungiwe've got a listening service on 0.0.0.0:3306 but no firewall entry for that21:02
mtreinishfungi: heh, well at least simpleproxy is still up and running21:03
fungihttps://git.openstack.org/cgit/openstack-infra/system-config/tree/playbooks/group_vars/logstash.yaml#n321:03
fungilooks like it _should_ be open?21:04
fungioh, bad grep21:04
fungiACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:mysql21:04
fungiokay, so maybe not the fw21:04
clarkbiptables -n ftw21:05
fungitelnet logstash.openstack.org mysql21:05
fungii can issue a `help` to it21:05
fungimtreinish: what symptom are you seeing?21:05
mtreinishfungi: it's not able to open a mysql session with the server: http://paste.openstack.org/show/731606/21:07
*** rtjure has quit IRC21:07
mtreinishI'm able to open up telnet on logstash.o.o 3306 and issue help (it doesn't get a response though)21:08
fungiinteresting. i definitely get help output returned21:09
fungioh, but only sometimes21:09
fungii got lucky the first try, it seems21:10
fungii just got it to return some help output again21:10
fungibut that took a lot of tries21:10
fungiokay, so likely something screwy here21:10
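The by-hand telnet sampling above can be made systematic; a minimal sketch (host and port from the log, try count arbitrary) that counts how often a fresh TCP connection actually gets the MySQL greeting back through the proxy:

    import socket

    def probe(host="logstash.openstack.org", port=3306, tries=20):
        ok = 0
        for _ in range(tries):
            try:
                with socket.create_connection((host, port), timeout=5) as s:
                    if s.recv(1):  # server greeting started flowing
                        ok += 1
            except OSError:  # refused, unreachable, or timed out
                pass
        return ok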
clarkbfungi: probably test the backend from logstash.o.o and see if that connectivity is bad?21:13
fungisomething in rackspace is hitting that port at the moment21:13
clarkbwe point the health status app/server at it21:14
clarkbiir21:14
fungihow do we configure simpleproxy?21:14
mtreinishfungi: there was a puppet file in puppet-subunit2sql for simpleproxy iirc21:14
mtreinishone sec I'll dig it up21:14
clarkbfungi: https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/logstash.pp#n43 is the start of the config21:15
mtreinishheh, clarkb is faster21:15
fungioic, it's all cli opts21:16
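From those puppet parameters the resulting invocation is roughly of this shape - the -d/-L/-R flags are my recollection of simpleproxy's interface and the backend host is elided because it isn't in the log, so treat both as assumptions:

    simpleproxy -d -L 3306 -R <trove-db-host>:3306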
fungiyeah, i'm getting "no route to host" when i try hitting the backend21:17
fungifrom logstash.o.o21:17
clarkbweird I'd point at the trove instance except you get help sometimes21:18
fungiincomplete arp for the backend routed gateway where that address lies21:19
fungi10.208.160.1 dev eth1  INCOMPLETE21:19
fungimust be getting an arp response for it occasionally and then dropping out again21:20
fungi10.208.160.1 dev eth1  FAILED21:20
fungiit's retrying quite a bit21:20
*** trown is now known as trown|outtypewww21:21
fungioh, fun, there's some layered routing here21:22
fungiso the db server lies in 10.176.0.0/1221:22
fungiwhich is reached as such:21:22
fungi10.176.0.0/12 via 10.208.160.1 dev eth121:22
*** bobh has joined #openstack-infra21:23
fungioh, nevermind, the other i was seeing is the local source route21:23
fungi10.208.160.0/19 dev eth1  proto kernel  scope link  src 10.208.160.2921:23
fungianyway, 10.208.160.29 seems to be unable to get arp replies for 10.208.160.121:23
fungi*most* of the time21:24
fungireview.o.o can reach it fine21:25
fungiyeah, from review i can connect to the subunit2sql backend database and query it, no problem21:27
fungilogstash.o.o on the other hand seems to have layer 2 connectivity problems to the backend gateway21:27
fungiyay clouds!21:27
fungisame backend gateway address for both of them (10.208.160.1)21:28
*** kjackal has quit IRC21:29
lifelessdhellmann: fungi: re: universal setting in setup.cfg. It's better to set it in setup.cfg because for projects that *aren't* universal wheel compatible, building a universal wheel by adding the CLI flag will create corrupt wheels.21:30
lifelessdhellmann: fungi: no one anywhere should ever use the CLI flag without understanding the details of what that particular project does, to know that it's universal wheel compatible.21:31
lifelessdhellmann: fungi: *our* projects are (*now*) universal wheel compatible because we tracked down and made sure no one was doing conditional dependencies via any other method than the declarative metadata one21:32
lifelessdhellmann: fungi: but in general that's not true. So it's unreasonable to expect that folk outside of our community, building wheels of our projects, will know whether it's safe or not.21:33
lifelessuntil they check. etc.21:33
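The "declarative metadata" method lifeless refers to is environment markers in the requirements metadata; a hypothetical example (package name illustrative only):

    # requirements.txt: conditional dependency declared as data,
    # not as if/else logic in setup.py
    pywin32>=1.0 ; sys_platform == 'win32'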
lifelessif I can make an observation, the actual issue here seems to be that we're doing these 600 patches by hand. Surely an appropriately privileged bot could just commit them itself.21:33
*** graphene has quit IRC21:35
*** graphene has joined #openstack-infra21:37
fungiyeah, that was sort of where i was going with the earlier discussion of bypassing review (and perhaps even ci) for mechanically-generated changes21:38
mtreinishfungi: hah, well that's fun21:38
fungimtreinish: yeah, i have a feeling a reboot of logstash.o.o _might_ fix it, but there's a good chance it'll require a trouble ticket with the provider instead21:39
*** boden has quit IRC21:39
fungiclarkb: how disruptive would rebooting logstash.o.o be?21:39
clarkbfungi: not very, we'll lose log indexing for jobs that complete while it is offline but that should be fine21:40
clarkbespecially now that job demand has fallen way off21:40
fungiokay, i'll give that a shot next21:40
fungiunless you have other suggestions21:40
clarkbOnly other idea would be to trigger a hypervisor move somehow21:43
*** priteau has quit IRC21:46
funginot sure how to do that in rackspace21:47
fungigiving the reboot a shot21:47
*** jamesmcarthur has joined #openstack-infra21:47
fungiit's on its way back up now21:49
fungii can ssh into it21:49
fungi10.208.160.1 dev eth1 lladdr 00:00:0c:9f:f0:01 REACHABLE21:49
fungii can hit the db from it21:50
fungii can hit the db through the proxy again21:50
fungimtreinish: give it another try?21:50
clarkbI had to start the jenkins log client to get gearman running again. I seem to remember writing a change to switch us over to boring old geard21:51
clarkbbut I guess that never merged21:51
fungi#status log rebooted logstash.openstack.org to "fix" broken layer 2 connectivity to backend gateway21:51
openstackstatusfungi: finished logging21:51
*** jamesmcarthur has quit IRC21:52
*** slagle has joined #openstack-infra22:00
mtreinishfungi: yay, it works for me now22:00
mtreinishthanks22:00
*** bobh has quit IRC22:04
clarkblifeless: fungi: in the past at least we've automated the proposal of changes so that testing can happen and to give project teams autonomy over whether or not things go into their project. A good example is the translations. Whether testing, or even the ability to refuse merging, is desirable in this case I have no idea22:04
clarkbI don't really know what happened other than that wheel updated and didn't do so in a backward-compatible manner22:04
*** EvilienM is now known as EmilienM22:06
clarkblifeless: and for 4) in most cases zuul job config is testable pre-merge. So you can totally push up ad hoc testing changes to have things tested in unique/new/odd/etc ways per patchset in gerrit22:09
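
As a sketch of that pre-merge testability (job and playbook names below are hypothetical, not from the log): an in-repo .zuul.yaml change like this runs on the very patchset that proposes it, so ad hoc jobs can be tried without merging anything:

    - job:
        name: adhoc-exploration        # hypothetical job name
        parent: base
        run: playbooks/adhoc.yaml      # hypothetical playbook in the same repo

    - project:
        check:
          jobs:
            - adhoc-exploration
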
clarkblifeless: and we are doing continuous deployment to at least one cloud from Zuul, then running zuul jobs on it :)22:16
*** rlandy has quit IRC22:24
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP: docker-compose quickstart example  https://review.openstack.org/60834422:25
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP: docker-compose quickstart example  https://review.openstack.org/60834422:26
lifelessclarkb: that's awesome22:28
lifelessclarkb: so that's an infra-run cloud?22:29
clarkblifeless: no, it is a cloud run by limestone networks (logan-), but logan- is willing to have us/zuul run the deploys post-merge22:29
lifelessclarkb: (pkging) yes, I know we push up proposals programmatically; I rather suspect that at 600 contact points it would be better to deal with the fallout from some % being inappropriate than to manually review them all.22:30
lifelessclarkb: wahoo - that is brilliant.22:30
lifelessclarkb: is infra the only customer of the cloud?22:30
clarkblifeless: no I think logan- said their business developmenty stuff runs on that cloud too, but it isn't exposed to their customers22:30
lifelessclarkb: (4) - yeah, I know you can poke at the metadata and the code under test now, but it's still very action-at-a-distance. Having some contained but direct mechanism might be useful - I'm specifically thinking about vendor drivers here22:31
lifelesse.g. if I wanted to do ad hoc exploration of an ESXi-connected nova-compute, how would I?22:32
clarkbwell you'd need vmware licenses first and those cost something like what my house does22:32
clarkbbut if you figured that out, I'd store that info in zuul secrets and deploy them directly?22:32
lifelessclarkb: so no - what I'm proposing there is that there is an ESXi wired into deployment logic, and I can either get my nova-compute deployed on it or get a limited shell of some sort on it, brokered by infra22:33
clarkblifeless: yes, if you, vmware, said we have an esxi pool over "here" you'd put the credentials for it in zuul secrets and talk to that pool22:34
lifelessclarkb: ESXi is perhaps a bad example, because I work at VMware, and I'm not actually thinking VMware-specific here. Uh, what's that proprietary container driver - windriver or something... etc22:34
clarkbthen your jobs can interact with that remote set of resources as necessary.22:35
lifelessclarkb: and I'm *definitely* not wearing a VMware hat for this discussion22:35
clarkbsure22:35
clarkbI guess what I'm suggesting is that regardless of what it is, vmware or cisco switching gear etc, it has to exist somewhere (and be paid for)22:35
*** eernst has joined #openstack-infra22:35
lifelessyes, absolutely.22:36
clarkbbut if you've done that, zuul allows you to feed in the secrets needed to authenticate to it.22:36
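
A sketch of the Zuul secrets mechanism clarkb is describing, with hypothetical names throughout; values under data are normally ciphertext produced with Zuul's encrypt_secret.py tool, so only the executor can decrypt them:

    - secret:
        name: esxi-pool-creds            # hypothetical
        data:
          username: esxi-ci              # hypothetical
          password: !encrypted/pkcs1-oaep |
            BFhtdnm8uXz7Qq...            # illustrative truncated ciphertext

    - job:
        name: deploy-to-esxi-pool        # hypothetical
        parent: base
        secrets:
          - esxi-pool-creds              # exposed to the job's playbooks as a variable
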
lifelessthe delta vs what the 3rd party test rigs do today is being part of a persistent cloud, rather than being a deploy-a-cloud-via-devstack-or-whatever job endpoint22:36
lifelesspersistent community-run cloud22:36
clarkbUnfortunately I think we've found that our ability to run such a thing is basically nil22:37
clarkbit is incredibly difficult to get access to those resources in a community space22:38
clarkband the one set we've had was taken from us without even a single email22:38
lifelessfor the proprietary bits, and for the actual-hardware bits - yes, that's certainly a big thing; I don't think we can shift that needle easily.22:39
lifelessFor the devstack jobs, the vms that infra has access to don't have that problem; so the basic architecture I describe could be realised - and give benefits, I think - and if/when vendors see the light they could join in22:40
*** eernst has quit IRC22:42
clarkbyes the proposal is more feasible if constrained to what is possible given current resource access22:43
*** eernst has joined #openstack-infra22:43
fungiclarkb: sorry, the wheel update is a bit of a red herring. the actual situation is that there are still a lot of openstack projects which are pure python but don't declare in setup.cfg that they should build universal wheels, which makes running python3 by default in some jobs install them from sdist. dhellmann wrote a set of changes (which i believe we merged) to have the openstack-specific wheel22:47
fungibuilds (overridably) pass a command-line option to build universal wheels by default so that projects don't need to update their configuration22:47
clarkbgotcha22:47
fungimainly so as to avoid yet another round of mass patches22:48
*** eernst has quit IRC22:48
*** eernst has joined #openstack-infra22:48
fungiit came up on the pbr change to update testing for latest wheel release because wheel has now deprecated the [wheel] section in favor of [bdist_wheel] which has also been supported for a very long time. the pbr change updated pbr's setup.cfg to change that section name so as to silence the deprecation warning22:49
fungithis caused AJaeger to ask whether we should also consider porting that setup.cfg change to projects with an existing [wheel] section22:49
fungiso that was the (tenuous) connection22:51
fungisome discussions which arose from that led some people to misinterpret the situation as suggesting that configuring packages in setup.cfg to build universal wheels was actively harmful and to be avoided22:52
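
Concretely, the setup.cfg rename in question is only the section header; both spellings have the same effect for a pure-Python project, the old one just now emits a deprecation warning:

    # deprecated section name:
    [wheel]
    universal = 1

    # current section name:
    [bdist_wheel]
    universal = 1
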
*** eernst has quit IRC22:54
lifelessSounds like an antipattern that has never happened before.22:56
*** eernst has joined #openstack-infra23:00
*** Swami has quit IRC23:04
*** eernst has quit IRC23:05
*** pbourke has quit IRC23:05
*** eernst has joined #openstack-infra23:05
*** pbourke has joined #openstack-infra23:05
fungiwell, the main concern raised is that we have people not particularly communicative within the community who are prone to push massive batches of identical changes to every project, without warning and uncoordinated with the rest of the cross-project efforts we have going on23:05
lifelessI know, been there, seen it.23:07
lifelessGoogle has a team specifically chartered to do such holistic cross-the-code-base maintenance.23:08
fungiit's been asserted that these individuals are tasked with pumping their employers' contribution statistics23:08
lifeless(On their code, not ours :P)23:08
*** eernst has quit IRC23:12
*** ijw has joined #openstack-infra23:23
*** rfolco has quit IRC23:23
*** felipemonteiro has joined #openstack-infra23:24
*** gyee has quit IRC23:31
openstackgerritMerged openstack-dev/pbr master: Special case long_description_content_type  https://review.openstack.org/56517723:32
*** bdodd_ has joined #openstack-infra23:34
*** roman_g has quit IRC23:35
*** bdodd has quit IRC23:35
*** sthussey has quit IRC23:36
*** ijw has quit IRC23:53
*** felipemonteiro has quit IRC23:56
