Tuesday, 2019-11-26

04:05 <tpatil> #startmeeting masakari
04:05 <openstack> Meeting started Tue Nov 26 04:05:41 2019 UTC and is due to finish in 60 minutes.  The chair is tpatil. Information about MeetBot at http://wiki.debian.org/MeetBot.
04:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
04:05 *** openstack changes topic to " (Meeting topic: masakari)"
04:05 <openstack> The meeting name has been set to 'masakari'
04:05 <tpatil> Roll call?
04:09 <tashiromt> Hi
04:09 <kiyofujin> hi
04:10 <tpatil> kiyofujin, tashiromt: Hi
04:12 <tpatil> Do you have anything for discussion?
04:14 <tpatil> As samP is not available, let's end this meeting. If there is something urgent, please use the openstack-discuss ML or the #openstack-masakari IRC channel.
04:14 <tashiromt> I don't have any topic today.
04:14 <tpatil> Ok
04:15 <tpatil> #endmeeting
04:15 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
04:15 <openstack> Meeting ended Tue Nov 26 04:15:01 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
04:15 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/masakari/2019/masakari.2019-11-26-04.05.html
04:15 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/masakari/2019/masakari.2019-11-26-04.05.txt
04:15 <openstack> Log:            http://eavesdrop.openstack.org/meetings/masakari/2019/masakari.2019-11-26-04.05.log.html
16:00 <slaweq> #startmeeting neutron_ci
16:00 <openstack> Meeting started Tue Nov 26 16:00:04 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00 *** openstack changes topic to " (Meeting topic: neutron_ci)"
16:00 <openstack> The meeting name has been set to 'neutron_ci'
16:00 <njohnston> o/
16:00 <slaweq> hi
16:01 <bcafarel> o/
16:02 <slaweq> liuyulong: ralonsoh: CI meeting, are You around?
16:03 <ralonsoh> hi, sorry
16:03 <slaweq> ok, let's start
16:03 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:03 <slaweq> please open the link first
16:04 <slaweq> #topic Actions from previous meetings
16:04 *** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)"
16:04 <slaweq> first one is
16:04 <slaweq> njohnston prepare etherpad to track stadium progress for zuul v3 job definition and py2 support drop
16:04 <njohnston> Done!  Do you want to go over it now or in the stadium section?
16:05 <njohnston> #link https://etherpad.openstack.org/p/neutron-train-zuulv3-py27drop
16:06 <slaweq> thx njohnston
16:06 <slaweq> we can go over this list in the stadium section
16:06 <njohnston> sounds good
16:07 <slaweq> we also have a second action item
16:07 <slaweq> njohnston to check failing NetworkMigrationFromHA in multinode dvr job
16:07 <njohnston> working on that this morning, haven't gotten to the bottom of it yet
16:08 <slaweq> ok, sure
16:08 <njohnston> I didn't get to it until about 30 minutes ago :(
16:08 <slaweq> ok
16:09 <slaweq> can I assign it to You for next week also, as a reminder?
16:09 <njohnston> sure
16:09 <slaweq> #action njohnston to check failing NetworkMigrationFromHA in multinode dvr job
16:09 <slaweq> thx
16:09 <slaweq> and those are all the actions from last week for today
16:10 <slaweq> #topic Stadium projects
16:10 *** openstack changes topic to "Stadium projects (Meeting topic: neutron_ci)"
16:11 <slaweq> tempest-plugins migration
16:11 <slaweq> njohnston recently pushed the phase 2 patch for neutron-dynamic-routing: https://review.opendev.org/#/c/695014/
16:11 <slaweq> I already +2'd it
16:11 <slaweq> please also review this one
16:11 <slaweq> after this is merged, only vpnaas will still be left to do
16:12 <slaweq> and I know mlavalle is making some progress with it
16:12 <bcafarel> the removal step is usually easier, yes
16:13 <slaweq> bcafarel: exactly
16:13 <njohnston> So just to show the link again: https://etherpad.openstack.org/p/neutron-train-zuulv3-py27drop
16:13 <slaweq> so this should basically be all about the migration of plugins
16:14 <slaweq> yes, let's go with Your link njohnston
16:14 <njohnston> I went through all the projects looking for zuul legacy jobs for zuulv3
16:14 <njohnston> For py27, I wanted to note that the goal is just to drop testing and other flags (like in setup.cfg), not necessarily to merge code hostile to py27 at this time
16:15 <njohnston> Also, a phased approach has been agreed upon, so I designated each project with a phase for when to remove support
16:15 <njohnston> We are in phase 1; phase 2 starts at week R-22
16:16 <njohnston> Obviously there is a lot of content in the etherpad, are there specific points or projects people would like to discuss?
16:16 <njohnston> I specifically exempted networking-ovn since master development is going to cease there
16:17 <njohnston> and networking-tempest-plugin because the branchless tempest makes py27 something dependent on tempest and I need to check exactly where we are on that
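
[Illustration: njohnston links an example patch later in the meeting (https://review.opendev.org/#/c/692031/); the typical "drop py27 testing flags" diff is a hedged sketch like the following, with illustrative env and classifier lists rather than any particular project's real ones.]

    # tox.ini -- drop the py27 env from the default list
    [tox]
    -envlist = py27,py36,pep8
    +envlist = py36,pep8

    # setup.cfg -- remove the Python 2 trove classifiers
    -    Programming Language :: Python :: 2
    -    Programming Language :: Python :: 2.7
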
16:17 <slaweq> one thing about the legacy to zuulv3 conversion
16:17 <slaweq> all grenade jobs are still legacy
16:17 <slaweq> so we will need to convert them, but it has to be done first in the grenade repo
16:17 <njohnston> What work do we need to do in order to convert them, if the job definition comes from the grenade repo?
16:18 <slaweq> for jobs which are defined in the grenade repo I don't think we need to do anything
16:18 <slaweq> but we have e.g. the neutron-grenade-multinode job which is defined in the neutron repo
16:18 <njohnston> For this survey I only examined jobs that are natively defined in the repo, so I did not consider grenade at all except where jobs were defined natively for grenade, like networking-midonet-grenade-ml2 or networking-odl-grenade
16:19 <slaweq> and we will need to convert it at some point
16:19 <bcafarel> any specific work items we should start looking into first? or just pick one/add our name/mark as done steps
16:19 <njohnston> pick anything in a phase 1 project
16:20 <slaweq> ok, I will try to pick some of those soon too
16:20 <njohnston> thanks!
16:21 <bcafarel> ack, I'll clean the sfc py2 stuff as a start
16:21 <slaweq> thanks for preparing this etherpad - it's a great list of things to do
16:21 <slaweq> :)
16:21 <njohnston> this is a good example of a py27 change: https://review.opendev.org/#/c/692031/
16:21 <njohnston> I'll link that at the top
16:22 <bcafarel> +1, I was looking at the neutron removal, but better if we have a sample for a stadium project
16:22 <slaweq> thx
16:24 <slaweq> ok, do You have anything else regarding stadium projects?
16:24 <slaweq> or can we move on?
16:24 <njohnston> no, I think that is all.
16:25 <slaweq> ok, so let's move on
16:25 <slaweq> #topic Grafana
16:25 *** openstack changes topic to "Grafana (Meeting topic: neutron_ci)"
16:25 <slaweq> #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:26 <slaweq> the neutron-grenade job has had a high failure rate recently
16:26 <slaweq> but it is now removed from the queues as part of dropping py27 support
16:29 <njohnston> we'll have to see if the spike in e.g. check queue unit test jobs continues today
16:29 <slaweq> yes, and also functional/fullstack tests
16:30 <slaweq> but for functional and fullstack I saw at least a couple of changes today where the job failed due to the patch it was run on
16:30 <slaweq> so those were not issues with the job itself
16:30 <njohnston> right, so hopefully we can wait and see it revert to mean
16:30 <slaweq> yes
16:31 <slaweq> from what I was checking today
16:31 <slaweq> I noticed only one issue which hits us quite often
16:31 <slaweq> https://bugs.launchpad.net/neutron/+bug/1850557
16:31 <openstack> Launchpad bug 1850557 in neutron "DHCP connectivity after migration/resize not working" [Medium,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:31 <slaweq> I just raised the importance of this bug to High
16:31 <slaweq> I was trying to reproduce it locally but I wasn't able to
16:32 <slaweq> but I will continue investigating this issue
16:33 <slaweq> and that's all I have about grafana for today
16:33 <slaweq> anything else You want to add/ask?
16:33 <ralonsoh> in https://bugs.launchpad.net/neutron/+bug/1850557
16:33 <openstack> Launchpad bug 1850557 in neutron "DHCP connectivity after migration/resize not working" [High,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:33 <ralonsoh> we have only one compute node
16:34 <ralonsoh> actually this is functional
16:34 <slaweq> there is one compute and one "all-in-one" node
16:34 <ralonsoh> ok, so the OF rules should be the same
16:35 <slaweq> yes, and as I wrote in comment #2 the DHCP request comes from the VM to the DHCP server after migration
16:35 <slaweq> so connectivity at least in this direction works fine
16:35 <slaweq> I can't say where exactly the response is lost
16:38 <slaweq> #action slaweq to continue investigating issue https://bugs.launchpad.net/neutron/+bug/1850557
16:38 <openstack> Launchpad bug 1850557 in neutron "DHCP connectivity after migration/resize not working" [High,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:38 <slaweq> ok, let's move on
16:38 <slaweq> #topic Tempest/Scenario
16:38 *** openstack changes topic to "Tempest/Scenario (Meeting topic: neutron_ci)"
16:39 <slaweq> as I already said, I'm aware of only this one issue which impacts us now
16:39 <slaweq> but ralonsoh, did You maybe have a chance to check my proposal for merging/dropping some tempest/neutron-tempest-plugin jobs?
16:39 <njohnston> cool
16:40 <ralonsoh> slaweq, yes, but not enough time
16:40 <ralonsoh> slaweq, I created a list of tests, from both jobs
16:40 <ralonsoh> to see how many of them can be merged
16:40 <ralonsoh> but I didn't finish, sorry
16:41 <slaweq> ralonsoh: no problem, if You have some time please reply to my email about it and we will continue this discussion there
16:41 <ralonsoh> sure
16:41 <slaweq> thx a lot
16:41 <slaweq> and one last thing from me
16:41 <slaweq> like a heads-up
16:42 <slaweq> we are starting the process of merging the networking-ovn driver into neutron
16:42 <njohnston> +100
16:42 <slaweq> so I pushed the patch https://review.opendev.org/#/c/696094/ to move the ovn jobs to the neutron repo
16:42 <slaweq> it's not ready yet, but please be aware of it and take a look at this patch when it is no longer WIP
16:42 <njohnston> ok
16:42 <slaweq> also, I'm wondering if we should move from the .zuul.yaml file to a zuul.d/ directory
16:43 <slaweq> and add the definitions of the different jobs there
16:43 <slaweq> as this .zuul.yaml file is getting to be a nightmare now
16:43 <slaweq> what do You think about such a potential change?
16:43 <ralonsoh> slaweq, we can move it once we finish the migration
16:43 <njohnston> I think it's a good idea to do
16:43 <ralonsoh> but yes, it's better to have several files in a directory
16:44 <slaweq> ok, thx for supporting this idea
16:44 <slaweq> ralonsoh: I will try to do it "in parallel" to the ovn-merge work
16:44 <ralonsoh> sure
16:44 <slaweq> if this merges first I will rebase my networking-ovn jobs patch
16:44 <slaweq> or I will rebase this "zuul.d dir" patch if needed :)
16:45 <slaweq> #action slaweq to move job definitions to zuul.d directory
16:45 <bcafarel> +1 to moving to split files in zuul.d later, it is easier to read
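
[Illustration: Zuul loads configuration either from a single .zuul.yaml or from every YAML file under a zuul.d/ directory (read in alphabetical order), so the split being proposed is mechanical; a hedged sketch with illustrative file and job names.]

    zuul.d/
      base.yaml       # shared job and nodeset definitions
      tempest.yaml    # tempest/scenario job variants
      project.yaml    # the project stanza wiring jobs into pipelines

    # zuul.d/project.yaml
    - project:
        check:
          jobs:
            - neutron-functional
        gate:
          jobs:
            - neutron-functional
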
16:45 <slaweq> ok, that's all I have for today
16:46 <slaweq> the periodic jobs have been working very well for at least a few days
16:46 <slaweq> so we are good there
16:46 <slaweq> do You have anything else to talk about now?
16:46 <slaweq> if not, we can finish a bit earlier today
16:46 <ralonsoh> just 10 secs, to "sell" my patches related to bugs in Neutron
16:46 <ralonsoh> https://review.opendev.org/#/c/695060/
16:46 <ralonsoh> njohnston, ^^ as an expert in DBs
16:46 <ralonsoh> that's all
16:46 <njohnston> ralonsoh: I'll take a look right away
16:47 <slaweq> I even set it as an "important change" to review :)
16:47 <ralonsoh> thanks!!
16:47 <njohnston> thanks for a great meeting all
16:48 <slaweq> ok, so I think we can finish earlier today
16:48 <slaweq> thx for attending
16:48 <slaweq> and see You online
16:48 <slaweq> o/
16:48 <njohnston> o/
16:48 <slaweq> #endmeeting
16:48 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
16:48 <openstack> Meeting ended Tue Nov 26 16:48:18 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
16:48 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-11-26-16.00.html
16:48 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-11-26-16.00.txt
16:48 <openstack> Log:            http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-11-26-16.00.log.html
16:48 <ralonsoh> bye
19:00 <clarkb> anyone else here for the infra meeting? we'll get started shortly
19:00 <fungi> i'm here for something
19:01 <fungi> i suspect it's probably the infra meeting
19:01 <ianw> o/
19:01 <clarkb> #startmeeting infra
19:01 <mkolesni> hi
19:01 <openstack> Meeting started Tue Nov 26 19:01:17 2019 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01 *** openstack changes topic to " (Meeting topic: infra)"
19:01 <openstack> The meeting name has been set to 'infra'
19:01 <clarkb> #link http://lists.openstack.org/pipermail/openstack-infra/2019-November/006528.html Our Agenda
19:01 <clarkb> #topic Announcements
19:01 *** openstack changes topic to "Announcements (Meeting topic: infra)"
19:01 <clarkb> Just a note that this is a big holiday week(end) for those of us in the USA
19:02 <clarkb> I know many are already afk and I'll be afk no later than thursday :)
19:02 <fungi> i'll likely be increasingly busy for the next two days as well
19:03 <fungi> (much to my chagrin)
19:03 <clarkb> I guess I should also mention that OSF individual board member elections are coming up and now is the nomination period
19:04 <clarkb> #topic Actions from last meeting
19:04 *** openstack changes topic to "Actions from last meeting (Meeting topic: infra)"
19:04 <clarkb> thank you fungi for running last week's meeting, I failed to account for DST changes when scheduling a dentist visit
19:04 <clarkb> #link http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-11-19-19.05.txt minutes from last meeting
19:04 <clarkb> No actions recorded. Let's move on
19:04 <fungi> no sweat
19:04 <clarkb> #topic Priority Efforts
19:04 *** openstack changes topic to "Priority Efforts (Meeting topic: infra)"
19:05 <clarkb> #topic OpenDev
19:05 <fungi> i half-assed it in hopes nobody would ask me to run another meeting ;)
19:05 *** openstack changes topic to "OpenDev (Meeting topic: infra)"
19:05 <clarkb> #link https://etherpad.openstack.org/p/rCF58JvzbF Governance email draft
19:05 <clarkb> I've edited that draft after some input at the ptg
19:05 <clarkb> I think it is ready to go out; I'm mostly waiting for monday (to avoid getting lost in the holiday) and for thoughts on who to send it to
19:06 <clarkb> I had thought about sending it to all the top level projects that are involved (at their -discuss mailing lists)
19:06 <clarkb> but worry that might create too many separate discussions? we unfortunately don't have a good centralized mechanism yet (though this proposal aims to create one)
19:06 <clarkb> I don't need an answer now, but if you have thoughts on the destination of that email please leave a note in the etherpad
19:07 <fungi> you could post it to the infra ml, and then post notices to other related project mailing lists suggesting discussion on the infra ml
19:07 <fungi> and linking to the copy in the infra ml archive
19:07 <fungi> that should hopefully funnel discussion into one place
19:07 <clarkb> I'm willing to try that. If there are no objections I'll give that a go monday-ish
19:08 <clarkb> ianw: any movement on the gitea git clone thing tonyb has run into?
19:08 <fungi> we upgraded gitea and can still reproduce it, right?
19:08 <ianw> not really, except that it still happens with 1.9.6
19:08 <clarkb> fungi: yes
19:09 <ianw> tonyb uploaded a repo that replicates it for all who tried
19:09 <clarkb> across different git versions and against specific and different backends
19:09 <ianw> it seems we know what happens; git dies on the gitea end (without anything helpful) and the remote end doesn't notice and sits waiting ~ forever
19:09 <clarkb> which I think points to a bug in the gitea change to use go-git
19:10 <clarkb> ianw: and tcpdump doesn't show any FINs from gitea
19:10 <clarkb> we end up playing ack pong, with each side acking bits that were previously transferred (to keep the tcp connection open)
19:10 <ianw> i'm not sure it's go-git; it seems it's a result of "git upload-pack" dying, which is (afaics) basically just called out to via system()
19:10 <clarkb> ah
19:11 <fungi> how long is that upload-pack call running, do we know?
19:11 <fungi> could it be that go-git decides it's taking too long and kills it?
19:11 <clarkb> when I reproduce it, it takes about 20 seconds to hit the failure case
19:12 <clarkb> I don't think that is long enough for gitea to be killing it. However, we can double check those timeout values
19:12 <ianw> from watching what happens, it seems to chunk the calls, and so about 9 go through, then the 10th (or so) fails quickly
19:12 <ianw> we see the message constantly in the logs; but there don't seem to be that many reports of issues, though, only tonyb
19:12 <fungi> this is observed straight to the gitea socket, no apache or anything proxying it, right?
19:12 <clarkb> fungi: correct
19:13 <fungi> ianw: there can be only one tonyb
19:13 <clarkb> (there is no apache in our gitea setup, just haproxy to gitea fwiw)
19:13 <fungi> that's what i thought, thanks
19:15 <ianw> i think we probably need custom builds with better debugging around the problem area to make progress
19:15 <clarkb> I guess the next step is to try and see why upload-pack fails (strace it maybe?) and then trace back up through gitea to see if it is the cause or simply not handling the failure properly?
19:15 <clarkb> I would expect gitea to close the tcp connection if the git process under it failed
19:15 <ianw> yeah, i have an strace in the github bug, that was sort of how we got started
19:15 <clarkb> ah
19:16 <ianw> it turns out the error message is ascii bytes in decimal, which when you decode is actually a base-64 string, which when decoded shows the same message captured by the strace :)
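
[Illustration: the double decoding ianw describes is easy to reproduce; a minimal Python round-trip sketch, with a hypothetical error message standing in for the real log payload.]

    import base64

    # What the log shows: space-separated decimal ASCII codes of a base64 string.
    message = b"fatal: the remote end hung up unexpectedly"  # hypothetical
    codes = " ".join(str(b) for b in base64.b64encode(message))

    # The decoding described above: decimal bytes -> base64 -> original message.
    decoded = base64.b64decode(bytes(int(c) for c in codes.split()))
    assert decoded == message
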
19:16 <clarkb> wow
19:17 <ianw> i know mordred already has 1.10 patches up
19:17 <ianw> i'm not sure if we want to spend effort on old releases?
19:17 <clarkb> yeah, there were a few issues he had to work through, but maybe we address those and get to 1.10, then try to push upstream to help us debug further?
19:17 <fungi> that sounds good
19:18 <clarkb> seems like a good next step. let's move on
19:18 <clarkb> #topic Update Config Management
19:18 *** openstack changes topic to "Update Config Management (Meeting topic: infra)"
19:18 <clarkb> zbr_ had asked on the mailing list about helping out, and I tried to point him to this topic
19:19 <clarkb> Long story short, if you'd like to help us uplift our puppet into ansible and containers we appreciate the help greatly. Also, most of the work can be done without root, as we have a fairly robust testing system set up which will allow you to test it all before merging anything
19:19 <fungi> it was a great read
19:19 <clarkb> Then once merged, an infra-root can help deploy to production
19:19 <ianw> ++ i think most tasks there stand alone, have templates (i should reply with some prior examples) and are gate-testable with our cool testinfra setup
19:20 <clarkb> That was all I had on this topic. Anyone have other related items?
19:21 <clarkb> #topic Storyboard
19:21 *** openstack changes topic to "Storyboard (Meeting topic: infra)"
19:21 <clarkb> fungi: diablo_rojo: anything to mention about storyboard?
19:22 <fungi> the api support for attachments merged
19:22 <fungi> next step there is to negotiate and create a swift container for storyboard-dev to use
19:22 <clarkb> exciting
19:23 <fungi> then the storyboard-webclient draft builds of the client-side implementation for story attachments should be directly demonstrable
19:23 <fungi> (now that we've got the drafts working correctly again after the logs.o.o move)
19:24 <fungi> i guess we can also mention that the feature to allow regular expressions for cors and webclient access in the api merged
19:24 <fungi> since that's what we needed to solve that challenge
19:25 <fungi> so storyboard-dev.openstack.org now allows webclient builds to connect and be usable from anywhere, including your local system i suspect
19:25 <fungi> (though i haven't tested that bit, not sure if it needs to be a publicly reachable webclient to make openid work correctly)
19:25 <clarkb> sounds like good progress on a couple of fronts there
19:26 <fungi> any suggestions on where we should put the attachments for storyboard-dev?
19:26 <fungi> i know we have a few places we're using for zuul build logs now
19:26 <clarkb> maybe vexxhost would be willing to host storyboard attachments, as I expect there will be far fewer of them than job log files?
19:27 <fungi> for production we need to make sure it's a container which has public indexing disabled
19:27 <fungi> less critical for storyboard-dev but important for production
19:27 <clarkb> fungi: I think we control that at a container level
19:27 <clarkb> (via x-meta settings)
19:28 <fungi> (to ensure just anyone can't browse the container and find attachments for private stories)
19:28 <fungi> cool
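
[Illustration: the container-level control being discussed maps to swift read ACLs; a hedged sketch with a hypothetical container name -- omitting .rlistings allows anonymous reads of objects whose URLs are known, without public listing.]

    # objects fetchable by URL, but the container cannot be browsed
    swift post -r ".r:*" storyboard-attachments
    # a read ACL of ".r:*,.rlistings" would also allow anonymous listing
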
19:28 <fungi> and yeah, again for storyboard-dev i don't think we care if we lose attachment objects
19:28 <fungi> for production there wouldn't be an expiration on them though, unlike build logs
19:29 <fungi> maybe we should work out a cross-cloud backup solution for that
19:29 <fungi> to guard against unexpected data loss
19:29 <clarkb> I think swift supports that somehow too, but maybe we also have storyboard write twice?
19:30 <fungi> yeah, we could probably fairly easily make it write a backup to a second swift endpoint/container
19:31 <fungi> that at least gets us disaster recovery (though not rollback)
19:31 <fungi> certainly enough to guard against a provider suddenly going away or suffering a catastrophic issue though
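
[Illustration: a hedged sketch of the write-twice idea using openstacksdk; the cloud names are hypothetical clouds.yaml entries and the container name is assumed, not the eventual storyboard implementation.]

    import openstack

    def store_attachment(name, data):
        # Upload to two independent providers so one outage cannot lose data.
        for cloud in ("backup-cloud-a", "backup-cloud-b"):
            conn = openstack.connect(cloud=cloud)
            conn.object_store.upload_object(
                container="storyboard-attachments",
                name=name,
                data=data,
            )
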
19:32 <fungi> anyway, that's probably it for storyboard updates
19:32 <clarkb> #topic General Topics
19:32 *** openstack changes topic to "General Topics (Meeting topic: infra)"
19:32 <clarkb> fungi: anything new re wiki?
19:33 <fungi> nope, keep failing to find time to move it forward
19:34 <clarkb> ianw: for the static replacement, are we ready to start creating new volumes?
19:34 <clarkb> I think the afs server is fully recovered from the outage? and we are releasing volumes successfully
19:34 <ianw> yes, i keep meaning to do it, maybe give me an action item so i don't forget again
19:34 <fungi> yeah, some releases still take a *really* long time, but they're not getting stuck any longer
19:35 <clarkb> #action ianw create AFS volumes for static.o.o replacement
19:35 <fungi> though on a related note, we need to get the reprepro puppetry translated to ansible so we can move our remaining mirroring to the mirror-update server. none of the reprepro mirrors currently take advantage of the remote release mechanism
19:35 <ianw> fungi: yeah, the "wait 20 minutes from last write" we're trying with fedora isn't working
19:36 <ianw> yeah, i started a little on reprepro but haven't pushed it yet, i don't think it's too hard
19:36 <fungi> i think it shouldn't be too hard, it's basically a package, a handful of templated configs, maybe some precreated directories, and then cronjobs
19:36 <clarkb> it's mostly about getting files in the correct places
19:36 <clarkb> there are a lot of files, but otherwise not too bad
19:36 <fungi> and a few wrapper scripts
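
[Illustration: a hedged sketch of the translation fungi outlines -- package, directories, templated config, cron; the paths, schedule, and script names are assumptions, not the eventual opendev implementation.]

    - name: Install reprepro
      package:
        name: reprepro
        state: present

    - name: Create reprepro config directory
      file:
        path: /etc/reprepro/debian
        state: directory

    - name: Template the reprepro configuration
      template:
        src: distributions.j2
        dest: /etc/reprepro/debian/distributions

    - name: Cron the mirror update wrapper script
      cron:
        name: reprepro-debian
        minute: "0"
        hour: "*/2"
        job: /usr/local/bin/reprepro-mirror-update debian
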
19:37 <clarkb> Next up is the tox python version default changing due to the python used to install tox
19:37 <clarkb> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010957.html
19:37 <clarkb> ianw: fwiw I agree that the underlying issue is tox targets that require a specific python version and don't specify what that is
19:38 <clarkb> these tox configs are broken anywhere someone has installed tox with python3 instead of 2
19:38 <ianw> yeah, i just wanted to call out that there wasn't too much of a response, so i think we can leave it as is
19:38 <clarkb> wfm
19:38 <fungi> yep, with my openstack hat on (not speaking for the stable reviewers though) i feel like updating stable branch tox.ini files to be more explicit shouldn't be a concern
19:39 <fungi> there's already an openstack stable branch policy carve-out for updating testing-related configuration in stable branches
19:39 <clarkb> I think we're just going to have to accept there will be bumps on the way to migrating away from python2
19:39 <clarkb> and we've run into other bumps too, so this isn't unique
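
[Illustration: "being more explicit" means pinning basepython in each env so the result no longer depends on which interpreter tox itself was installed with; a minimal tox.ini sketch, env names illustrative.]

    [testenv:pep8]
    basepython = python3

    [testenv:py27]
    basepython = python2.7
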
19:40 <clarkb> And that takes us to mkolesni's topic
19:40 <clarkb> Hosting submariner on opendev.org
19:40 <mkolesni> thanks
19:40 <clarkb> I think we'd be happy to have you, but there were questions about CI?
19:40 <mkolesni> dgroisma, you there?
19:40 <fungi> mkolesni: thanks for sticking around through 40 minutes of other discussion ;)
19:40 <mkolesni> fungi, no prob :)
19:40 <mkolesni> let me wake dgroisma ;)
19:41 <dgroisma> we wanted to ask what it takes to move some of our repos to opendev.org
19:41 <mkolesni> currently we have all our ci in travis on the github
19:42 <clarkb> The git import is usually pretty painless. We point our gerrit management scripts at an existing repo source that is publicly accessible and suck the existing repo content into gerrit. This does not touch the existing PRs or issues though
19:42 <dgroisma> there are many questions around ci, the main one being whether we could keep using travis
19:42 <clarkb> For CI we run Zuul, and as far as I know travis doesn't integrate with gerrit
19:42 <mkolesni> clarkb, yeah for sure the prs will have to be manually migrated
19:43 <clarkb> It may be possible to write zuul jobs that trigger travis jobs
19:43 <clarkb> That said, my personal opinion is that much of the value in hosting with opendev is zuul
19:43 <clarkb> I think it would be a mistake to put effort into continuing to use travis (though maybe it would help us to understand your motivations for the move if Zuul is not part of that)
19:43 <fungi> the short version on moving repos is that you define a short stanza for the repository including information on where to import existing branches/tags from, also define a gerrit review acl (or point to an existing acl), and then automation creates the projects in gerrit and gitea. after that you push up a change into your repo to add ci configuration for zuul so that changes can be merged (this can be a no-op job to just merge whatever you say should be merged)
19:44 <mkolesni> btw are there any k8s projects hosted on opendev?
19:44 <fungi> how do you define a kubernetes project?
19:44 <fungi> airship does a bunch with kubernetes
19:44 <fungi> so do some openstack projects like magnum and zun
19:44 <clarkb> fungi: as does magnum, and mnaser's k8s deployment tooling
19:44 <mkolesni> one that's in golang for example :)
19:44 <clarkb> there are golang projects
19:45 <fungi> yeah, programming language shouldn't matter
19:45 <clarkb> bits of airship are golang, as an example
19:45 <fungi> we have plenty of projects which aren't even in any programming language at all, for that matter
19:45 <fungi> for example, projects which contain only documentation
19:46 <mkolesni> you guys suggested a zuul-first approach
19:46 <mkolesni> to transition to zuul and then do a migration
19:46 <mkolesni> but there was hesitation about that as well, since zuul would have to test github-based code for a while
19:47 <clarkb> mkolesni: dgroisma: how many jobs are we talking about, and are they complex or do they do simple things like "execute unittests", "build docs", etc?
19:47 <fungi> well, it wouldn't have to be the zuul we're running. zuul is free software anyone can run wherever they like
19:47 <clarkb> My hunch is they can't be too complex due to travis' limitations
19:48 <mkolesni> clarkb, dgroisma knows best and can answer that
19:48 <clarkb> and if that is the case, quickly adding jobs in zuul after migrating shouldn't be too difficult and is something we can help with
19:48 <dgroisma> the jobs are a bit complex, we are dealing with multicluster and require multiple k8s clusters to run for e2e stuff
19:48 <clarkb> dgroisma: and does travis provide that, or do your jobs interact with external clusters?
19:48 <dgroisma> the clusters are kind based (kubernetes in docker), so it's just running a bunch of containers
19:48 <fungi> is that a travis feature, or something you've developed that happens as part of your job payload?
19:49 <mkolesni> fungi, well currently we rely on github and travis and don't have our own infra, so we'd prefer to avoid standing up infra just for migration's sake
19:49 <fungi> mkolesni: totally makes sense, just pointing that out for clarity
19:49 <mkolesni> ok sure
19:49 <dgroisma> it's our bash/go tooling
19:50 <dgroisma> our tooling, not a travis feature
19:50 <mkolesni> we use dapper images for the environment
19:50 <fungi> okay, so from travis's perspective it's just some shell commands being executed in a generic *nix build environment?
19:50 <dgroisma> yes
19:50 <fungi> in that case, making ansible run the same commands ought to be easy enough
19:50 <dgroisma> the migration should be ok, we just run some make commands
19:51 <clarkb> fungi: dgroisma: mkolesni: and we can actually prove that out pre-migration
19:51 <clarkb> we have a sandbox repo which you can push job configs to, which will run your jobs premerge
19:51 <clarkb> That is probably the easiest way to make sure zuul will work for you; then if you decide to migrate to opendev, simply copy that job config into the repos once they migrate
19:52 <clarkb> that should give you some good exposure to gerrit and zuul too, which will likely be useful in your decision making
19:52 <ianw> yeah, you will probably also find that while you start with basically ansible running "shell: mycommand.sh" ... you'll find many advantages in getting ansible to do more and more of what mycommand.sh does over time
19:52 <mkolesni> clarkb, so you mean do an initial migration, test the jobs, and if all is good sync up whatever is left and carry on?
19:52 <mkolesni> or is the sandbox where we stick the project itself?
19:52 <clarkb> mkolesni: no, I mean push jobs into opendev/sandbox, which already exists in opendev, to run your existing test jobs against your software
19:52 <fungi> you could push up a change to the opendev/sandbox repo which replaces all the files with branch content from yours and a zuul config
19:52 <fungi> it doesn't need to get approved/merged
19:53 <mkolesni> ah ok
19:53 <clarkb> Then if you are happy with those results you can migrate the repos and copy the config you've built in the sandbox repo over to your migrated repos
19:53 <clarkb> this way you don't have to commit too much while you test it out, and don't have to run your own zuul
19:53 <fungi> zuul will test the change as written, including job configuration
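
[Illustration: the kind of change being suggested for opendev/sandbox might look like the following; the job name, playbook path, node label, and make target are placeholders for submariner's real entry points.]

    # .zuul.yaml in the change pushed to opendev/sandbox
    - job:
        name: submariner-e2e
        run: playbooks/e2e.yaml
        nodeset: ubuntu-bionic

    - project:
        check:
          jobs:
            - submariner-e2e

    # playbooks/e2e.yaml
    - hosts: all
      tasks:
        - name: Run the existing CI entry point
          command: make e2e
          args:
            chdir: "{{ zuul.project.src_dir }}"
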
19:53 <mkolesni> dgroisma, does that approach sound good to you? for a poc of the CI?
19:53 <dgroisma> yes, sounds good
19:53 <mkolesni> ok cool
19:53 <mkolesni> do you guys have any questions for us?
19:54 <mkolesni> i think the creators guide covers everything else we need
19:54 <fungi> not really. it's all free/libre open source software right?
19:54 <clarkb> I'm mostly curious to hear what your motivation is if not CI (most people we talk to are driven by the CI we offer)
19:54 <clarkb> also we'd be happy to hear feedback on your experience fiddling with the sandbox repo and don't hesitate to ask questions
19:54 <dgroisma> gerrit reviews
19:54 <fungi> sounds like the ci is a motivation and they just want a smooth transition from their existing ci?
19:54 <mkolesni> github sucks for collaborative development :)
19:54 <clarkb> oh neat, we agree on that too :)
19:54 <dgroisma> :)
19:55 <mkolesni> and as former openstack devs we're quite familiar with gerrit and its many benefits
19:55 <fungi> at least i didn't hear any indication they wanted a way to keep using travis any longer than needed
19:55 <mkolesni> no i don
19:55 <mkolesni> i don't think we're married to travis :)
19:56 <clarkb> ok, sounds like we have a plan for moving forward. Once again, feel free to ask questions as you interact with Zuul
19:56 <fungi> welcome (back) to opendev! ;)
19:56 <clarkb> I'm going to quickly try to get to the last couple topics before our hour is up
19:56 <mkolesni> ok thanks, we'll check out the sandbox repo
19:56 <dgroisma> thank you very much
19:56 <mkolesni> thanks for your time
19:56 <clarkb> ianw: want to tl;dr the dib container image fun?
19:56 <clarkb> mkolesni: dgroisma: you're welcome
19:57 <ianw> i would say my idea is that we have Dockerfile.opendev, Dockerfile.zuul, Dockerfile.<insertwhateverhere>
19:57 <clarkb> ianw: if I read your email correctly, it is that layering doesn't work for our needs here and maybe we should just embrace that and have different dockerfiles?
19:57 <ianw> and just build layers together that make sense
19:58 <ianw> i don't know if everyone else was thinking the same way as me, but I had in my mind that there was one zuul/nodepool-builder image and that was the canonical source of nodepool-builder images
19:58 <clarkb> It did make me wonder if a sidecar approach would be more appropriate here
19:58 <clarkb> but I'm not sure what kind of rpc that would require (and we don't have it in nodepool)
19:58 <ianw> but i don't think that works, and isn't really the idea of containers anyway
19:58 <fungi> and then we would publish container images for the things we're using into the opendev dockerhub namespace, even if there are images for that software in other namespaces too, as long as those images don't do what we specifically need? (opendev/gitea being an existing example)
19:58 <clarkb> fungi: ya, that was how I read it
19:58 <ianw> fungi: yep, that's right ... the opendev namespace is just a collection of things that work together
19:59 <fungi> i don't have any objection to this line of experimentation
19:59 <clarkb> with the sidecar idea I had, it was don't try to layer everything but instead incorporate the various bits as separate containers
19:59 <ianw> it may be useful for others, if they buy into all the same base bits opendev is built on
19:59 <clarkb> nodepool-builder would run in its own container context, then execute dib in another container context and somehow get the results (shared bind mount?)
19:59 <fungi> yeah, putting those things in different containers makes sense when they're services
20:00 <fungi> but putting openstacksdk in a different container from dib, and nodepool in yet another container, wouldn't work i don't think?
20:00 <clarkb> We are at time now
20:00 <clarkb> The last thing I wanted to mention is that I've started to take some simple notes on maybe retiring some services
20:00 <ianw> no, adding openstacksdk basically brings you to multiple inheritance, which complicates matters
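
[Illustration: a hedged sketch of the per-consumer Dockerfile idea; the base image and package set are assumptions for illustration, not the eventual opendev images.]

    # Dockerfile.zuul -- plain nodepool-builder, as upstream would publish it
    FROM opendevorg/python-base
    RUN pip install nodepool
    CMD ["nodepool-builder"]

    # Dockerfile.opendev -- the same service plus the extra layers opendev needs
    FROM opendevorg/python-base
    RUN pip install nodepool diskimage-builder openstacksdk
    CMD ["nodepool-builder"]
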
20:00 <clarkb> #link https://etherpad.openstack.org/infra-service-list
20:00 <fungi> thanks clarkb
20:00 <clarkb> ianw: fungi: ya, I don't think the sidecar is a perfect fit
20:01 <clarkb> re opendev services, if you have a moment over tea/coffee/food it would be great to get a quick look and thoughts
20:01 <clarkb> I think if we can identify a small number of services then we can start to retire them in a controlled fashion
20:01 <clarkb> (mostly the ask stuff is what brought this up in my head, because it comes up periodically that ask stops working and we really don't have the time to keep it working)
20:02 <clarkb> thanks everyone!
20:02 <clarkb> #endmeeting
20:02 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
20:02 <openstack> Meeting ended Tue Nov 26 20:02:03 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
20:02 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-11-26-19.01.html
20:02 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-11-26-19.01.txt
20:02 <openstack> Log:            http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-11-26-19.01.log.html
20:02 <fungi> thanks!
20:02 <clarkb> Feel free to continue discussion in #openstack-infra or on the openstack-infra mailing list
21:00 <oneswig> #startmeeting scientific-sig
21:00 <openstack> Meeting started Tue Nov 26 21:00:08 2019 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00 *** openstack changes topic to " (Meeting topic: scientific-sig)"
21:00 <openstack> The meeting name has been set to 'scientific_sig'
21:00 <oneswig> I remembered the runes this week...
21:00 <oneswig> #link agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_November_29th_2019
21:00 <janders> hey Stig!
21:00 <trandles> Hello all
21:00 <oneswig> Hi janders trandles
21:00 <slaweq> hi
21:01 <oneswig> I understand we'll be missing a few due to Thanksgiving
21:01 <oneswig> Hi slaweq, welcome!
21:01 <oneswig> Thanks for coming
21:01 <slaweq> thx for inviting me :)
21:02 <oneswig> np, I appreciate it.
21:02 <oneswig> Let's go straight to that - hopefully jmlowe will catch us up
21:03 <oneswig> #topic Helping supporting Linuxbridge
21:03 *** openstack changes topic to "Helping supporting Linuxbridge (Meeting topic: scientific-sig)"
21:03 <slaweq> ok
21:03 <oneswig> Hi slaweq, so what's the context?
21:03 <slaweq> so, first of all, sorry if my message after the PTG was scary for anyone
21:04 <slaweq> basically, in the neutron developers team we had the feeling that the linuxbridge agent is almost unused, as there was almost no development on this driver
21:04 <slaweq> and as we now want to include the ovn driver as one of the in-tree drivers
21:04 <slaweq> our idea was to maybe start thinking slowly about deprecating the linuxbridge agent
21:05 <slaweq> but apparently it is used by many deployments
21:05 <slaweq> so we will for sure not deprecate it
21:05 <slaweq> You don't need to worry about it
21:05 <oneswig> I think it is popular in this group because it's quite simple and faster
21:05 <oneswig> slaweq: thanks!
21:05 <rbudden> hello
21:05 <oneswig> Is it causing overhead to keep linuxbridge in tree?
21:05 <slaweq> as we have a clear signal that this driver is used by people who need a simple solution and don't want other, more advanced features
21:05 <oneswig> hi rbudden
21:06 <slaweq> but we have almost nobody in our dev team who would take care of the LB agent and mech driver
21:06 <oneswig> does it need much work?
21:06 <janders> apologies for a dumb question - we're only talking about the mech driver here, not the LB between the instance and the hypervisor where secgroups are applied, right?
21:06 <rbudden> i was going to ask, what's the overhead to continue support? what's needed?
21:07 <slaweq> so if You are using it, it would be great if someone of You could be a point of contact in case there are e.g. LB-related gate issues or things like that
21:07 <slaweq> janders: that's correct
21:07 <slaweq> oneswig: no, I don't think this would require a lot of work
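
[Illustration: for readers outside the neutron team, the pieces under discussion are selected roughly like this in an ML2 deployment -- a hedged sketch; the interface mapping is an assumed example.]

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2]
    mechanism_drivers = linuxbridge

    # /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [linux_bridge]
    physical_interface_mappings = provider:eth1

    [securitygroup]
    firewall_driver = iptables
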
21:07 <noggin143> Tim joining
21:07 <oneswig> slaweq: does that mean joining openstack-neutron etc?
21:08 <oneswig> Hi noggin143, evening
21:08 <slaweq> oneswig: yes, basically that's what is needed
21:08 <slaweq> and I would like to put someone on the list in https://docs.openstack.org/neutron/latest/contributor/policies/bugs.html
21:08 <b1airo> o/
21:08 <slaweq> as a point of contact for linuxbridge stuff
21:08 <oneswig> Hi b1airo
21:09 <oneswig> #chair b1airo
21:09 <openstack> Current chairs: b1airo oneswig
21:09 <b1airo> morning
21:09 <slaweq> so sometimes we can ping that person to triage some LB-related bug or gate issue etc.
21:09 <oneswig> Is there a backlog of LB bugs somewhere that would be good examples?
21:09 <slaweq> then I would be sure that it's really "maintained"
21:09 <b1airo> oneswig: sorry, could you repeat your question to slaweq - i assume it was something about what is needed from the neutron team's perspective to avoid deprecation of linuxbridge?
21:10 <oneswig> b1airo: that was it, only it has been decided not to deprecate.  It would still be good to help out with its maintenance though.
21:10 <slaweq> oneswig: the list of bugs with the linuxbridge tag is here:
21:10 <slaweq> https://bugs.launchpad.net/neutron/?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=linuxbridge&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
21:11 <janders> what's the performance advantage of LB over ovs, roughly? are we talking 20% or 500%?
21:11 <slaweq> b1airo: yes, we now have a clear signal that this driver is widely used, so we will not plan to deprecate it
21:11 <rbudden> good to hear!
21:11 <noggin143> it is good to have a simple driver, even if it is not so functional for those who liked nova-network
21:12 <slaweq> so basically that's all from my side
21:12 <oneswig> performance data I have
21:12 <oneswig> #link presentation from 2017 https://docs.google.com/presentation/d/1MSTquzyPaDUyqW2pAmYouP6qdYAi6G1n7IJqBzqPCr8/edit?usp=sharing
21:12 <slaweq> if someone of You would like to be such a linuxbridge contact person, please reach out to me on irc or email, or speak now if You want :)
21:12 <oneswig> Slide 30
21:12 <jmlowe> woohoo!
21:13 <oneswig> Thanks slaweq, appreciated.
21:13 <noggin143> +1
21:14 <b1airo> i suspect someone from Nectar might be happy to be a contact - LB is still widely used there as the default plugin, with more advanced networking provided by Midonet and more performant local networking done with passthrough
21:14 <oneswig> Midonet still going?  Forgive my ignorance.
21:14 <b1airo> no sorrison online to ping at the moment though...
21:15 <oneswig> b1airo: seconded :-)
21:15 <b1airo> I think Midonet looked to be in trouble but then got bought by Ericsson, who are presumably using Midonet in their deployments
21:16 <oneswig> b1airo: good to hear it found a safe harbour of sorts.
21:16 <b1airo> so possibly an 11th hour reprieve
21:16 <b1airo> it's a really nice platform, despite java
21:18 <jmlowe> oof, documentation stops at Newton
21:18 <oneswig> Hi jmlowe
21:18 <jmlowe> Hi
21:19 <oneswig> Perhaps it was just, you know, completed with nothing more to add?
21:19 <jmlowe> Could be, always a concern though
21:20 <oneswig> Just kidding :-)
21:20 <oneswig> OK, shall we move on?
21:21 <oneswig> #topic Supercomputing round-up
21:21 *** openstack changes topic to "Supercomputing round-up (Meeting topic: scientific-sig)"
21:21 <oneswig> 13300 people apparently - it was big and busy!
21:22 <oneswig> We met a few new people who may find their way to this SIG
21:23 <janders> great! where from?
21:23 <b1airo> fingers crossed - did anyone get the name of that guy asking about performance issues?
21:23 <oneswig> No, he didn't hang around afterwards unfortunately, afaict
21:23 <jmlowe> wow, I guess my dreams of going back to Austin are forever dashed with those kinds of attendance numbers
21:24 <noggin143> sounds worse than kubecon
21:24 <jmlowe> Don't worry b1airo, he'll be back next year, he was there last year
21:24 <trandles> It was interesting to see where cloud and cloud-like tech is starting to show up in product lines. In many cases, cloud is creeping in quietly. It's only when you ask about specifics that vendors tell you things like k8s and some SDN are being used as part of their plumbing.
21:25 <b1airo> yeah, i remember :-)
21:26 <oneswig> noggin143: by my calculation, it was as big as kubecon + openstack shanghai combined!
21:26 <janders> what would be the top3 most interesting k8s use cases?
21:26 <trandles> janders: most "interesting" or "scariest?"
21:27 <janders> let's look at one of each? :)
21:27 <oneswig> janders: the Cray Shasta control plane is hosted on K8S, for example...
21:27 <oneswig> file it in whichever category you like!
21:27 <janders> does Bright Computing have anything to do with that?
21:27 <trandles> That covers the scary use case, oneswig ;)
21:28 <oneswig> janders: perhaps they use that to provision the controller nodes, but it wasn't disclosed.
21:28 <trandles> AFAIK Bright doesn't have anything to do with it...but I'm not 100% sure about that
21:28 <jmlowe> Shasta is the scariest
21:28 <janders> off the record, don't be surprised if you see a similar arch by Bright
21:29 <janders> this came up in a discussion I had with them a while back, after we had a really bad scaling problem with our BCM
21:29 <janders> how about the interesting bits?
21:30 <janders> (now that we have scary ticked off)
21:30 <oneswig> So i was interested in these very loosely coherent filesystems developed for Summit
21:30 <trandles> Interesting use case: someone had a poster talking about k8s + OpenMPI
21:30 <oneswig> trandles: not using ORTE again?
21:30 <janders> trandles: do you recall how they approached the comms challenge?
21:30 <noggin143> oneswig: any references?
21:31 <oneswig> looking...
21:31 <trandles> oneswig: that I'm not sure about...the poster was vague and the author standing there was getting picked apart by someone who seemed to know all of the gory comms internals
21:31 <noggin143> oneswig: for loosely coherent filesystems, that is.
21:31 <trandles> I didn't stay to hear details, I just kept wandering off thinking "wow, look for more on that at sc20"
21:32 <janders> for sure
21:32 <janders> shameless plug: https://www.youtube.com/watch?v=sPfZGHWnKNM
21:33 <trandles> janders: thx for the link ;)
21:33 <janders> I can confirm there's a lot of interesting work on K8s-RDMA and there's more coming
21:33 <oneswig> noggin143: it was this session - two approaches presented https://sc19.supercomputing.org/presentation/?id=pap191&sess=sess167
21:33 <oneswig> There's a link to a paper on the research.
21:34 <janders> whoa... these numbers make io500 look like a sandpit :)
21:35 <oneswig> It was interesting because a lot of talk to this point has been on burst buffers, but this work was all about writing data to local NVMe and then draining it to GPFS afterwards.  The coherency is loose to the point of non-existence, but it may work if the application knows it's not there.
21:35 <b1airo> janders: re Bright, I spoke to them (including CTO - Martjian?), and got the impression they had not done any significant work on containerising the control plane with k8s or anything else - they had looked at it a couple of years ago but decided it was too fragile at the time... which is kinda laughable when in the next breath they tell you their upgrade strategy for Bright OpenStack is a complete reinstall and migration
21:35 <noggin143> janders: quite an expensive alternative to the sandpit :-)
21:36 <janders> indeed :)
21:36 <janders> b1airo: when did that conversation take place?
*** ijw_ has quit IRC21:36
jandersI had two myself, one similar to what you described and another one some time later where I got the impression of a 180-degree turn21:36
b1airolast week :-)21:37
jandersoh dear... another 180?21:37
janderssomeone might be going in circles21:37
oneswigOn a related matter, I went to the OpenHPC Bof.  There was a guy from ARM/Linaro who was interested in cloud-native deployments of OpenHPC but I think the steering committee are lukewarm at best on this.21:38
b1airothey don't seem to have moved past saying "OpenStack is too hard to upgrade" as an excuse21:38
oneswigb1airo: you're in that boat with Bright aren't you?21:38
*** d34dh0r53 has quit IRC21:39
noggin143oneswig: +1 on the lukewarm for ARM in HPC/HTC here too21:39
*** igordc has joined #openstack-meeting21:39
jandersrepo repoint; yum -y update; db-sync; service restart - I wonder what's so hard about that...21:39
rbuddenlast i recall they used warewulf for provisioning bare metal21:39
jandersand with containers it should be even easier or so I hear21:39
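Spelled out, the recipe janders lists maps onto the usual per-service OpenStack upgrade dance. A hedged sketch for a Nova controller on an RDO-style host - the repository names, package glob and service names vary by distribution and release, so treat them as placeholders:

    # 1. repo repoint: move to the next release's package repository
    yum-config-manager --disable centos-openstack-stein
    yum-config-manager --enable centos-openstack-train

    # 2. yum -y update: pull in the new packages
    yum -y update 'openstack-nova*'

    # 3. db-sync: apply the new release's schema migrations
    nova-manage api_db sync
    nova-manage db sync

    # 4. service restart: pick up the new code and schema
    systemctl restart openstack-nova-api openstack-nova-scheduler openstack-nova-conductor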
rbuddeni forget if they still maintain an openstack/cloud release tool21:39
oneswignoggin143: au contraire, on the ARM front there appears to be much interest around the new Fujitsu A64FX21:39
rbuddenthey did at one point21:39
*** d34dh0r53 has joined #openstack-meeting21:39
noggin143oneswig: is the Fujitsu thing real ? I'd heard 202[2-9]21:40
b1airoyep, it's really ugly. i haven't dived depth-first into the details yet, but they are basically saying that to get from 8.1 to 9.0 we should completely reinstall21:40
janders+121:40
oneswigrbudden: All kinds of deployment covered - both warewulf and xcat :-)21:40
trandlesLooks like the 2020 KubeCon North America event once again overlaps SC perfectly. I heard twice from vendors at SC19 who said "I don't know, my kubernetes guy is at KubeCon this week." ...sigh...21:40
janderstrandles: thank you for the early heads-up... Boston this time I see21:41
jandersnice to see Open Infra back in Vancouver too21:41
oneswignoggin143: I spoke to some people at Riken, they are expecting samples imminently, deployment through next year.  It might have been lost in translation but that was the gist afaict21:41
janderslooks like I've got my base conference schedule set :)21:41
b1airotrandles: better than "Hi, did you know Kubernetes is replacing traditional HPC..."21:42
noggin143Kubecon had an interesting launch of the "Research SIG" which was kind of a k8s version of the Scientific WG21:42
oneswignoggin143: were you there?21:42
trandlesb1airo: HA! I didn't see anyone from Trilio at SC...21:42
jandersnoggin143: unfortunately I wasn't able to attend this one - would you be happy to tell us about it?21:42
jandersquite interested21:42
noggin143oneswig: no, but Ricardo was (https://www.youtube.com/watch?v=g9FQxzK9E_M&list=PLj6h78yzYM2NDs-iu8WU5fMxINxHXlien&index=108&t=0s)21:43
b1airotrandles: they were there actually, but just a couple of guys lurking - all the sales folks were at kubecon21:43
*** senrique__ has quit IRC21:43
oneswigI wonder how Kubecon compared to OpenStack Paris, ah those heady days21:44
jandersI wasn't in Paris but San Diego was massive21:44
janders10k ppl21:44
noggin143kubecon had lots of Helm v3, Machine Learning, GPUs, binder etc. Parties were up to the Austin level21:44
jonmills_nasaWe are still trying to use Trilio...21:44
b1airointeresting choice of words jonmills_nasa ...21:44
noggin143We'll be going to the Kubecon Amsterdam one in 202021:45
oneswigWe should probably pay a bit more lip-service to the agenda... any more on SC / Kubecon?21:46
jandersKubecon had a significant ML footprint21:46
jandersquite interesting21:46
noggin143the notebook area seems very active too21:46
oneswigjanders: does seem a good environment for rapid development in ML.21:46
jandersboth from the web companies' side (the Ubers of the world)21:46
jandersand more traditional HPC (Erez gave a good talk on GPUs)21:47
oneswigOK let's move on21:47
jandersI was happy to add RDMA support into the picture with my talk21:47
oneswig#topic issues mixing baremetal and virt21:47
*** openstack changes topic to "issues mixing baremetal and virt (Meeting topic: scientific-sig)"21:47
oneswigjanders: good to hear that.21:47
jandersit's good/interesting to hear the phrase "K8s as de-facto standard for ML workloads"21:47
*** jakecoll has quit IRC21:48
jandersI feel this is driving K8s away from being just-a-webhosting-platform21:48
jandersand there's now buy-in for that - good for our group I suppose21:48
oneswigquick one this - back at base the team hit and fixed some issues with nova-compute-ironic last week while we were off gallivanting with the conference crowd21:49
noggin143janders: yup, tensorflow, keras, pytorch - lots of opportunities for data scientists21:49
oneswigAs I understand it, if you're running multiple instances of nova-compute-ironic using a hash ring to share the work, you can sometimes lose resources.21:49
jmloweI'm somewhat interested in mixing baremetal and virt, anybody doing this right now?21:49
noggin143oneswig: do you have a pointer ? We're looking to start sharding for scale with Ironic21:50
rbudden+1 is this production ready?21:50
oneswigjmlowe: we have some clients doing this...21:50
raubjmlowe: you are not alone21:50
janders+1, I am very interested, too21:50
oneswig#link brace of patches available here https://review.opendev.org/#/c/694802/21:50
* trandles too21:50
oneswigI think there were 4-5 distinct patches in the end.21:50
jandersoneswig: can you elaborate more on "losing resources"?21:51
jandershow does this manifest itself?21:51
jandersnodes never getting scheduled to?21:51
janderswhat scale are we thinking?21:51
noggin143oneswig: BTW, we're working with upstream in several ironic areas such as incremental inspection (good for firmware level checks) and benchmarking/burn in as part of the standard cleaning process21:51
oneswigjanders: As I understand it, nodes would become lost from the resource tracker.  But I only have the sketchiest details.21:52
jmloweI have let's say 400 ironic instances in mind, 20-30 sharing with virt21:52
jandersjmlowe: sounds like you're progressing with the nova-compute-as-a-workload architecture? :)21:53
oneswigIt only occurs when using multiple instances of nova-compute-ironic, afaik21:53
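For anyone wanting to reproduce the sharded setup under discussion: since Stein, several nova-compute services can share an Ironic conductor group through a hash ring, configured with the [ironic] partition_key and peer_list options in nova.conf. A sketch using crudini, with the hostnames and group name invented for illustration:

    # Run on each nova-compute host that should share the bare metal workload
    # ("rack-a" is an assumed Ironic conductor group name).
    crudini --set /etc/nova/nova.conf ironic partition_key rack-a
    crudini --set /etc/nova/nova.conf ironic peer_list compute-ironic-1,compute-ironic-2,compute-ironic-3
    systemctl restart openstack-nova-compute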
noggin143jmlowe: we're running about 4000 at the moment - Arne's presentation from Shanghai at https://twiki.cern.ch/twiki/pub/CMgroup/CmPresentationList/From_hire_to_retire!_Server_Life-cycle_management_with_Ironic_at_CERN_-_OpenInfraSummit_SHANGHAI_06NOV2019.pptx21:53
jmlowejanders: that's what I have in mind21:53
jandersjmlowe: great to hear! :)21:53
oneswigjanders: what do you do for tagged interfaces - only 1 network to the hypervisor, or something crafty with the IB?21:53
janders1 network per port21:54
jandersI had dual and quad port nodes so that was enough21:54
jandersbut trunking vlans/pkeys is definitely something I want going forward21:54
oneswigI thought it was something like that.  A good use case for trunking Ironic ports (if that's ever a thing).21:55
janders(if you're wondering why/how I had quad port IB in servers - the answer is Dell blades)21:55
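On the trunking aside: Neutron's trunk API already does this for virtual instances, which gives a flavour of what trunked Ironic ports could look like. A sketch with invented port and network names; at the time of this discussion it is a VM feature rather than something generally available for bare metal ports:

    # The parent port carries untagged traffic; the subport arrives tagged
    # on the same NIC.
    openstack port create --network access-net parent0
    openstack port create --network tenant-net child0
    openstack network trunk create --parent-port parent0 trunk0
    openstack network trunk set \
        --subport port=child0,segmentation-type=vlan,segmentation-id=100 trunk0
    # Boot against the parent port; the guest sees VLAN 100 as a tagged interface.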
oneswigWe've let the slug of time crawl away from us again.21:55
oneswig5 mins on Gnocchi?21:55
noggin143oneswig: hah!21:56
jandersok!21:56
oneswig#topic Gnocchi21:56
*** openstack changes topic to "Gnocchi (Meeting topic: scientific-sig)"21:56
b1airommm soft fluffy potato21:56
jandersand I propose we re-add the virt+bm again to next week's agenda21:56
oneswigOK, wasn't expecting that21:56
*** diablo_rojo has quit IRC21:56
oneswigjanders: happy to do so21:56
jandersgreat, thanks oneswig21:56
b1airoi suggest shunting Gnocchi to next week21:57
jandersgood idea, but maybe let's quickly draw context?21:57
jmlowewhen paired with the carbonara backend it was delicious21:57
jonmills_nasafair enough21:57
oneswigb1airo: makes sense to me.21:57
rbuddenyep thanks oneswig21:57
b1airoi could ask Sam or someone else from Nectar to come talk about that and LB + Mido...21:57
oneswigb1airo: would be great, thanks.21:57
jmlowegnocchi isn't getting any more dead, so I second pushing it21:57
b1airo:'-D21:58
oneswigNext week I'm also hoping johnthetubaguy will join us to talk about unified limits (quotas for mixed baremetal and virt resources, hierarchical projects, among other things)21:58
jandersgnocchi is good at killing ceph afaik21:58
noggin143oneswig: :-)21:58
janderseven if almost dead itself21:58
jmloweoooh, I'd love to quota vGPUs, I think that's part of it21:58
noggin143now, back to InfluxDB tuning21:58
oneswignoggin143: ha, what a rich seam21:59
noggin143jmlowe: we're looking to do some GPU quota too, another topic ?21:59
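As background on unified limits: Keystone stores the limits and services enforce them, so a quota becomes a registered default plus optional per-project overrides. A hedged sketch of the CLI shape - the resource name cores is a real Nova resource, while the project, region and values are invented:

    # Register a deployment-wide default for a compute resource
    openstack registered limit create --service nova \
        --default-limit 100 --region RegionOne cores

    # Override the default for a single project
    openstack limit create --service nova --project demo-project \
        --resource-limit 500 --region RegionOne cores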
jmloweyeah, really need an NVMe ceph pool to keep gnocchi from killing things21:59
jandersjmlowe: indeed.21:59
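jmlowe's NVMe-pool remedy can be made concrete with Ceph device classes: pin a dedicated pool for Gnocchi's many small objects to NVMe-backed OSDs only. A sketch; the rule and pool names, PG count and replica size are illustrative:

    # CRUSH rule that only places data on OSDs with the nvme device class
    ceph osd crush rule create-replicated gnocchi-nvme default host nvme

    # Dedicated pool for gnocchi measures, bound to the NVMe-only rule
    ceph osd pool create gnocchi 64 64 replicated gnocchi-nvme
    ceph osd pool set gnocchi size 3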
oneswigThese are multi-region discussions, clearly!21:59
jandersokay - almost over time, so thank you all, great chat - I just wish we had another hour! :)21:59
jmloweI found out the hard way21:59
janderswe shall continue next week21:59
b1airoi thought that whole gnocchi on ceph thing looked weird when it first came up21:59
jandersyep, DDoS-a-a-S!22:00
oneswigjmlowe: wasn't it obvious with those billions of tiny writes ...22:00
b1airono-one mentioned iops in the Ceph for HPC BoF at SC19...22:00
trandlesBye folks, I'll read next week's log. ;)22:00
*** jonmills_nasa has quit IRC22:00
oneswigThanks all, alas we must close22:00
oneswig#endmeeting22:00
b1airoi attempted a roundabout question on lazy writes22:00
rbuddenbye all!22:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"22:00
jmloweoneswig: Like most things it starts off well, then all goes to hell about a week after you stop paying attention22:00
openstackMeeting ended Tue Nov 26 22:00:42 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-11-26-21.00.html22:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-11-26-21.00.txt22:00
openstackLog:            http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-11-26-21.00.log.html22:00
b1airobfn22:00
noggin143jmlowe: quote of the week22:01
*** noggin143 has quit IRC22:01
oneswigjmlowe: ah, so often the case22:02
oneswigThanks all, over and out22:02
*** oneswig has quit IRC22:02
*** diablo_rojo has joined #openstack-meeting22:02
*** ociuhandu has joined #openstack-meeting22:02
*** rcernin has joined #openstack-meeting22:04
*** jmlowe has left #openstack-meeting22:05
*** ykatabam has joined #openstack-meeting22:06
*** janders has quit IRC22:06
*** ociuhandu has quit IRC22:08
*** ykatabam has quit IRC22:10
*** trandles has quit IRC22:11
*** pcaruana has quit IRC22:16
*** ykatabam has joined #openstack-meeting22:35
*** slaweq has quit IRC22:37
*** armax has quit IRC22:42
*** simon-AS559 has quit IRC22:45
*** slaweq has joined #openstack-meeting23:11
*** strobert1 has joined #openstack-meeting23:12
*** slaweq has quit IRC23:15
*** ociuhandu has joined #openstack-meeting23:30
*** ociuhandu has quit IRC23:35
*** armax has joined #openstack-meeting23:38
*** rfolco has joined #openstack-meeting23:47
*** rfolco has quit IRC23:51
*** rfolco has joined #openstack-meeting23:51
*** rfolco has quit IRC23:55
*** rfolco has joined #openstack-meeting23:55
*** diablo_rojo has quit IRC23:58
