Tuesday, 2019-05-21

*** jamesmcarthur has joined #openstack-meeting00:04
*** carloss has quit IRC00:06
*** panda|rover has quit IRC00:10
*** panda has joined #openstack-meeting00:11
*** ekcs has quit IRC00:28
*** jamesmcarthur has quit IRC00:35
*** markvoelker has joined #openstack-meeting00:41
*** markvoelker has quit IRC00:45
*** mattw4 has quit IRC00:46
*** ttsiouts has quit IRC01:00
*** jamesmcarthur has joined #openstack-meeting01:05
*** jamesmcarthur_ has joined #openstack-meeting01:10
*** jamesmcarthur has quit IRC01:11
*** ttsiouts has joined #openstack-meeting01:15
*** ttsiouts has quit IRC01:20
*** baojg has joined #openstack-meeting01:34
*** baojg has quit IRC01:38
*** ykatabam has quit IRC01:43
*** ttsiouts has joined #openstack-meeting01:49
*** ttsiouts has quit IRC01:54
*** whoami-rajat has joined #openstack-meeting02:00
*** ttsiouts has joined #openstack-meeting02:27
*** yamahata has quit IRC02:32
*** iyamahat has quit IRC02:32
*** ykatabam has joined #openstack-meeting02:34
*** _alastor_ has joined #openstack-meeting02:36
*** ricolin has joined #openstack-meeting02:44
*** _alastor_ has quit IRC02:50
*** wwriverrat has quit IRC02:51
*** hongbin has joined #openstack-meeting03:00
*** ttsiouts has quit IRC03:01
*** wwriverrat has joined #openstack-meeting03:04
*** hemna has joined #openstack-meeting03:07
*** kaisers has quit IRC03:17
*** jamesmcarthur_ has quit IRC03:23
*** hemna has quit IRC03:26
*** psachin has joined #openstack-meeting03:28
*** jamesmcarthur has joined #openstack-meeting03:54
*** armax has quit IRC03:54
*** shilpasd has joined #openstack-meeting03:55
*** ykatabam has quit IRC03:57
*** jamesmcarthur has quit IRC03:59
*** tashiromt has joined #openstack-meeting03:59
*** imsurit_ofc has joined #openstack-meeting04:01
*** ttsiouts has joined #openstack-meeting04:03
*** zerick has quit IRC04:03
*** zerick has joined #openstack-meeting04:04
*** ykatabam has joined #openstack-meeting04:13
*** shilpasd has quit IRC04:18
*** hongbin has quit IRC04:21
*** ykatabam has quit IRC04:24
*** tashiromt has quit IRC04:29
*** markvoelker has joined #openstack-meeting04:35
*** ttsiouts has quit IRC04:36
*** janki has joined #openstack-meeting04:37
*** markvoelker has quit IRC04:39
*** slaweq has joined #openstack-meeting04:49
*** slaweq has quit IRC04:59
*** jamesmcarthur has joined #openstack-meeting05:01
*** pcaruana has joined #openstack-meeting05:02
*** jamesmcarthur has quit IRC05:07
*** vishalmanchanda has joined #openstack-meeting05:10
*** pcaruana has quit IRC05:11
*** ykatabam has joined #openstack-meeting05:14
*** slaweq has joined #openstack-meeting05:16
*** slaweq has quit IRC05:20
*** Lucas_Gray has joined #openstack-meeting05:23
*** e0ne has joined #openstack-meeting05:23
*** e0ne has quit IRC05:24
*** radeks_ has joined #openstack-meeting05:36
*** ttsiouts has joined #openstack-meeting05:39
*** Luzi has joined #openstack-meeting05:43
*** zbr_ has quit IRC06:05
*** slaweq has joined #openstack-meeting06:12
*** ttsiouts has quit IRC06:12
*** lpetrut has joined #openstack-meeting06:15
*** Lucas_Gray has quit IRC06:18
*** ykatabam has quit IRC06:30
*** kaisers has joined #openstack-meeting06:34
*** ykatabam has joined #openstack-meeting06:34
*** jamesmcarthur has joined #openstack-meeting06:35
*** markvoelker has joined #openstack-meeting06:36
*** jamesmcarthur has quit IRC06:39
*** rubasov has quit IRC06:48
*** rubasov has joined #openstack-meeting06:48
*** rubasov has quit IRC06:53
*** rubasov has joined #openstack-meeting06:54
*** trident has quit IRC07:03
*** trident has joined #openstack-meeting07:05
*** tesseract has joined #openstack-meeting07:06
*** markvoelker has quit IRC07:09
*** kopecmartin|off is now known as kopecmartin07:12
*** ykatabam has quit IRC07:18
*** rcernin has quit IRC07:19
*** ralonsoh has joined #openstack-meeting07:19
*** tssurya has joined #openstack-meeting07:20
*** pcaruana has joined #openstack-meeting07:21
*** jamesmcarthur has joined #openstack-meeting07:25
*** ttsiouts has joined #openstack-meeting07:27
*** apetrich has joined #openstack-meeting07:28
*** jamesmcarthur has quit IRC07:30
*** pcaruana has quit IRC07:39
*** yikun_ has joined #openstack-meeting07:40
*** e0ne has joined #openstack-meeting07:42
*** kashyap has left #openstack-meeting07:53
*** rubasov has quit IRC07:55
*** rubasov has joined #openstack-meeting07:57
*** e0ne has quit IRC07:57
*** joxyuki has joined #openstack-meeting08:00
*** e0ne has joined #openstack-meeting08:01
*** JangwonLee has joined #openstack-meeting08:04
*** jaewook_oh has joined #openstack-meeting08:05
*** markvoelker has joined #openstack-meeting08:06
*** takahashi-tsc has joined #openstack-meeting08:07
*** jaewook_oh has quit IRC08:07
*** shubham_potale has joined #openstack-meeting08:12
*** diablo_rojo has joined #openstack-meeting08:17
*** priteau has joined #openstack-meeting08:20
*** hadai has joined #openstack-meeting08:24
*** priteau has quit IRC08:24
*** priteau has joined #openstack-meeting08:29
*** takahashi-tsc has left #openstack-meeting08:31
*** markvoelker has quit IRC08:39
*** ttsiouts has quit IRC08:48
*** joxyuki has left #openstack-meeting08:50
*** ykatabam has joined #openstack-meeting08:56
*** ykatabam has quit IRC08:57
*** hadai has quit IRC08:59
*** tetsuro has joined #openstack-meeting09:01
*** diablo_rojo has quit IRC09:03
*** ttsiouts has joined #openstack-meeting09:18
*** imsurit_ofc has quit IRC09:20
*** jamesmcarthur has joined #openstack-meeting09:26
*** electrofelix has joined #openstack-meeting09:30
*** jamesmcarthur has quit IRC09:31
*** markvoelker has joined #openstack-meeting09:35
*** diablo_rojo has joined #openstack-meeting09:38
*** ttsiouts has quit IRC09:50
*** aarents has quit IRC10:07
*** markvoelker has quit IRC10:09
*** Lucas_Gray has joined #openstack-meeting10:11
*** mmethot has quit IRC10:11
*** boxiang has joined #openstack-meeting10:20
*** boxiang has quit IRC10:21
*** boxiang has joined #openstack-meeting10:22
*** boxiang has quit IRC10:24
*** boxiang has joined #openstack-meeting10:25
*** boxiang has quit IRC10:30
*** bbowen has quit IRC10:51
*** mmethot has joined #openstack-meeting11:01
*** ttsiouts has joined #openstack-meeting11:02
*** markvoelker has joined #openstack-meeting11:06
*** njohnston has joined #openstack-meeting11:06
*** panda is now known as panda|rover|eat11:13
*** jamesmcarthur has joined #openstack-meeting11:27
*** jamesmcarthur has quit IRC11:33
*** ttsiouts has quit IRC11:36
*** markvoelker has quit IRC11:39
*** jamesmcarthur has joined #openstack-meeting11:43
*** bobh has joined #openstack-meeting11:45
*** bobh has quit IRC11:46
*** aarents has joined #openstack-meeting11:52
*** raildo has joined #openstack-meeting11:54
*** janki has quit IRC11:54
*** jamesmcarthur has quit IRC11:55
*** bbowen has joined #openstack-meeting11:57
*** jamesmcarthur has joined #openstack-meeting12:00
*** ttsiouts has joined #openstack-meeting12:06
*** ttsiouts has quit IRC12:10
*** ttsiouts has joined #openstack-meeting12:12
*** jamesmcarthur has quit IRC12:16
*** jamesmcarthur has joined #openstack-meeting12:16
*** tetsuro has quit IRC12:26
*** panda|rover|eat is now known as panda|rover12:27
*** jamesmcarthur has quit IRC12:32
*** ttsiouts has quit IRC12:32
*** diablo_rojo has quit IRC12:33
*** lseki has joined #openstack-meeting12:34
*** psachin has quit IRC12:34
*** priteau has quit IRC12:36
*** priteau has joined #openstack-meeting12:40
*** jamesmcarthur has joined #openstack-meeting12:42
*** boxiang_ has joined #openstack-meeting12:42
*** baojg has joined #openstack-meeting12:46
*** priteau has quit IRC12:51
*** enriquetaso has joined #openstack-meeting12:59
*** mriedem has joined #openstack-meeting13:07
*** eharney has quit IRC13:11
*** janki has joined #openstack-meeting13:11
*** baojg has quit IRC13:22
*** rossella_s has quit IRC13:34
*** ttsiouts has joined #openstack-meeting13:34
*** Lucas_Gray has quit IRC13:38
*** jamesmcarthur has quit IRC13:39
*** Lucas_Gray has joined #openstack-meeting13:39
*** jamesmcarthur has joined #openstack-meeting13:48
*** tidwellr has joined #openstack-meeting13:49
*** mlavalle has joined #openstack-meeting13:56
*** jrbalderrama has joined #openstack-meeting13:56
*** lajoskatona has joined #openstack-meeting13:56
*** jchhatbar has joined #openstack-meeting13:57
*** rf0lc0 has joined #openstack-meeting13:57
*** ttsiouts has quit IRC13:57
*** dmacpher_ has joined #openstack-meeting13:57
*** jrbalderrama has quit IRC13:57
*** mmethot_ has joined #openstack-meeting13:57
*** jrbalderrama has joined #openstack-meeting13:57
*** davidsha has joined #openstack-meeting13:59
*** boden has joined #openstack-meeting13:59
*** jamesmcarthur_ has joined #openstack-meeting13:59
*** Wryhder has joined #openstack-meeting13:59
*** adrianc_ has joined #openstack-meeting13:59
*** shintaro has joined #openstack-meeting14:00
mlavalle#startmeeting networking14:00
openstackMeeting started Tue May 21 14:00:05 2019 UTC and is due to finish in 60 minutes.  The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.14:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:00
*** openstack changes topic to " (Meeting topic: networking)"14:00
openstackThe meeting name has been set to 'networking'14:00
njohnstono/14:00
davidshao/14:00
lajoskatonao/14:00
rubasovo/14:00
*** gary_perkins_ has joined #openstack-meeting14:00
*** isq_ has joined #openstack-meeting14:00
*** lifeless_ has joined #openstack-meeting14:00
mlavalleok, let's get going14:02
bcafarelo/14:02
*** tidwellr_ has joined #openstack-meeting14:02
mlavalle#topic Announcements14:02
*** openstack changes topic to "Announcements (Meeting topic: networking)"14:02
mlavalleThe next milestone is June 3 - 714:02
mlavalleTrain 114:03
mlavalleso it is not that far away14:03
mlavallea little more than 2 weeks away14:03
mlavalleThis is the complete Train schedule:14:03
mlavalle#link https://releases.openstack.org/train/schedule.html14:03
slaweqhi14:03
*** ktsuyuzaki has joined #openstack-meeting14:04
mlavalleThe call for papers for the Shanghai Summit is open. The deadline for proposals is July 2nd:14:05
amotokihi14:05
mlavalle#link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006262.html14:05
*** sw3_ has joined #openstack-meeting14:05
*** Lucas_Gray has quit IRC14:06
*** janki has quit IRC14:06
*** mriedem has quit IRC14:06
*** tidwellr has quit IRC14:06
*** kota_ has quit IRC14:06
*** SpamapS has quit IRC14:06
*** rfolco has quit IRC14:06
*** adrianc has quit IRC14:06
*** markmcclain has quit IRC14:06
*** lifeless has quit IRC14:06
*** dosaboy has quit IRC14:06
*** sw3 has quit IRC14:06
*** bswartz has quit IRC14:06
*** Wryhder is now known as Lucas_Gray14:06
*** sw3_ is now known as sw314:06
mlavalleand here's a couple of pictures of a bunch of very handsome gentlemen from the recent Denver PTG:14:06
mlavalle#link https://www.dropbox.com/sh/fydqjehy9h5y728/AAC1gIc5bJwwNd5JkcQ6Pqtra/Neutron?dl=0&subfolder_nav_tracking=114:06
*** jamesmcarthur has quit IRC14:06
*** mmethot has quit IRC14:06
*** electrofelix has quit IRC14:06
*** ralonsoh has quit IRC14:06
*** toabctl has quit IRC14:06
*** yamahata__ has quit IRC14:06
*** dmacpher has quit IRC14:06
*** _pewp_ has quit IRC14:06
*** cheng1 has quit IRC14:06
*** aspiers has quit IRC14:06
*** johnthetubaguy has quit IRC14:06
*** smcginnis has quit IRC14:06
*** gary_perkins has quit IRC14:06
*** mjturek has quit IRC14:06
*** strigazi has quit IRC14:06
*** edmondsw_ has quit IRC14:06
davidshaMass quits on the pictures.... Ouch14:06
*** _pewp_ has joined #openstack-meeting14:06
njohnstonoops netsplit14:06
bcafarelcome on these pictures are not *that* scary ;)14:07
davidshaWas thinking on making the rainbow hair a staple ;)14:07
*** electrofelix has joined #openstack-meeting14:07
mlavalleOne thing to note is that we don't have ladies in the team anymore14:08
*** mriedem has joined #openstack-meeting14:08
mlavallewe are a bunch of bros14:08
mlavalleFinally, I sent out the PTG summary to the ML:14:08
mlavalle#link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006408.html14:09
*** Luzi has quit IRC14:09
mlavallePlease look at your sections of interest. Hopefully, I captured most of the topics faithfully14:09
mlavalleand thanks to davidsha for the great note taking14:09
njohnstonthanks to you both for the great notes14:10
amotokithanks for the great summary!14:10
tidwellr_++14:10
davidshanp, thanks for summary!14:10
haleyb+2 on the notes, thanks mlavalle14:11
slaweqyes, thx for great summary :)14:11
mlavalleWe will keep the pointer to the notes in the announcements section of our agenda for the rest of the cycle14:11
mlavalleany other announcements?14:11
*** rf0lc0 is now known as rfolco14:12
*** mattw4 has joined #openstack-meeting14:12
mlavalleok, let's move on14:12
mlavalle#topic Blueprints14:12
*** openstack changes topic to "Blueprints (Meeting topic: networking)"14:12
*** eharney has joined #openstack-meeting14:12
*** ralonsoh has joined #openstack-meeting14:12
njohnstonnetsplit rejoin14:12
mlavalleThis is what I have so far in the Train-1 dashboard:14:13
*** SpamapS has joined #openstack-meeting14:13
mlavalle#link https://launchpad.net/neutron/+milestone/train-114:13
mlavalleOn https://blueprints.launchpad.net/neutron/+spec/enginefacade-switch14:14
mlavalleI want to mention that I will start this one taking one weekly bite out of https://review.opendev.org/#/c/545501/14:15
mlavalleI will rebase it later today14:15
mlavalleso we have a recent picture of it14:15
njohnstonmlavalle: How do you want to split the work between you, ralonsoh, and me?  The thing about the engine facade switch is that when you change a high-level function you have to delve through everything it calls to make sure they are also using the facade, otherwise Bad Stuff can happen.14:16
mlavalleand afterwards, if any of you want to nibble out of it, just leave a comment in the patch in the corresponding file14:16
mlavallenjohnston: how about ^^^?14:16
njohnstonmlavalle: cool, sounds good14:16
*** boxiang_ has quit IRC14:17
mlavalleand as we merge patches, keep rebasing this one14:17
mlavallemakes sense?14:17
ralonsohyes14:17
njohnstondefinitely.14:17
bcafarelno promise but I'll try to take a few bites off if I get time14:18
mlavallethanks to the 3 of you :-)14:18
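A minimal sketch of the enginefacade pattern njohnston describes above, using oslo.db directly; the function and query are purely illustrative, not actual Neutron code:

    from oslo_db.sqlalchemy import enginefacade


    @enginefacade.transaction_context_provider
    class _Context(object):
        """Stand-in for a request context; Neutron's own context already provides this."""


    @enginefacade.reader
    def count_ports(context):
        # The decorator opens (or joins) a reader transaction and exposes it as
        # context.session; every helper called from here must reuse that session
        # instead of opening its own, which is why switching one high-level
        # function means auditing everything it calls.
        return context.session.execute("SELECT COUNT(*) FROM ports").scalar()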
mlavalleOn https://blueprints.launchpad.net/neutron/+spec/smart-nic-ovs14:19
mlavalleI see that slaweq, tidwellr_ and njohnston reviewed the patch14:20
mlavalleand something went wrong with the latest revision14:21
mlavallebut we are moving forward14:21
slaweqyes, I think that it's quite close to being ready overall14:21
*** amorin has joined #openstack-meeting14:21
slaweqI was also talking with hamdyk about it today on IRC14:21
mlavalleso thanks for keeping it moving forward14:21
mlavalleI also talked to him a few weeks ago. He is in the West Bank, Palestine14:22
mlavallewe don't get many devs from there14:22
*** aspiers has joined #openstack-meeting14:22
mlavallein regards to https://blueprints.launchpad.net/neutron/+spec/multiple-segment-per-network-per-host14:23
mlavallethanks rubasov and lajoskatona for taking a look at the spec: https://review.opendev.org/#/c/65717014:23
lajoskatonamlavalle: no problem14:24
rubasovof course14:24
mlavallelast time I looked at it (about a week ago) it seemed to me he still needs help fleshing out the non-linuxbridge parts14:24
mlavallethis week I will try to help with ovs14:24
mlavallerubasov, lajoskatona: could you help him with sr-iov?14:25
*** shintaro has quit IRC14:25
mlavalleor should we reach out to Adrian Chiris of Mellanox?14:25
lajoskatonamlavalle: perhaps; sadly we have no hardware for it, so our possibilities are limited14:26
*** armax has joined #openstack-meeting14:26
mlavalleok, I'll ping Adrian14:26
rubasovmlavalle: I'm in the same shoes as lajoskatona14:26
mlavalleFinally, about https://blueprints.launchpad.net/neutron/+spec/event-notifier-ironic14:27
lajoskatonamlavalle: but otherwise hope that we can help in this feature14:27
*** lpetrut has quit IRC14:27
*** _erlon_ has joined #openstack-meeting14:27
mlavalleI can see that hjensas is moving fast, as usual: https://review.opendev.org/#/c/658787/14:28
mlavallethanks ralonsoh and slaweq for the reviews :-)14:29
slaweqmlavalle: sure :)14:29
ralonsohnp14:29
mlavalleI will be going over the PTG summary again this week and will add more blueprints that we need to track14:30
mlavalleany other blueprints we should discuss today?14:30
rubasovthe extraroute improvement spec has an open question14:30
rubasovthat could use more opinions14:31
rubasovhttp://lists.openstack.org/pipermail/openstack-discuss/2019-May/006440.html14:31
rubasovhttps://review.opendev.org/65568014:31
mlavallerubasov: I'll take a stab today14:32
amotokiyeah, it raised an open question on the router abstraction.14:32
rubasovmlavalle: thanks a lot14:32
rubasovamotoki: I'm also trying to involve heat folks, since they raised the use case14:33
amotokirubasov: thanks for looping them in. we would like to see a good consensus.14:33
mlavalleok, moving on14:34
mlavalle#topic Bugs14:34
*** openstack changes topic to "Bugs (Meeting topic: networking)"14:34
*** mattw4 has quit IRC14:35
mlavalleLast week our deputy was tidwellr_14:35
tidwellr_hi14:35
mlavallehis report is here:14:35
mlavalle#link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006410.html14:36
mlavalleit seems it was a relatively quiet week14:36
tidwellr_unless I missed something, yes :)14:36
mlavalleI want to highlight https://bugs.launchpad.net/neutron/+bug/182457114:37
openstackLaunchpad bug 1824571 in neutron "l3agent can't create router if there are multiple external networks" [Critical,Confirmed] - Assigned to Miguel Lavalle (minsel)14:37
mlavalleThis isn't from last week14:37
mlavallebut was recently promoted to critical by slaweq14:37
mlavalleso I am looking at it now14:38
slaweqyes, I know that at least couple of people hit this recently14:38
haleybmlavalle: let me know if you need help, although a fix wasn't quickly obvious to me14:38
mlavallehaleyb: thanks14:38
*** liuyulong has joined #openstack-meeting14:39
mlavallein regards to the RFE from last week: https://bugs.launchpad.net/neutron/+bug/182944914:39
openstackLaunchpad bug 1829449 in neutron "Implement consistency check and self-healing for SDN-managed fabrics" [Undecided,New]14:39
njohnstonI'd also like to highlight https://bugs.launchpad.net/bugs/1829042 just because it missed the report14:40
openstackLaunchpad bug 1829042 in neutron "Some API requests (GET networks) fail with "Accept: application/json; charset=utf-8" header and WebOb>=1.8.0" [Undecided,New]14:40
mlavallethis is the result of a conversation I had with Jacob in Denver14:40
tidwellr_njohnston: yes, I did forget to include that one in the report as I was compiling it14:40
mlavalleso thanks to tidwellr_ and slaweq for commenting on it14:41
tidwellr_mlavalle: you have some more insight into this?14:41
mlavalleand yes, njohnston thanks. I wasn't going to skip it. I was going to point to bcafarel's message: http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006429.html14:42
njohnstonthanks mlavalle, sorry for jumping the gun :-)14:42
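A rough reproducer sketch for bug 1829042, which njohnston highlighted above; the endpoint URL and token are placeholders:

    import requests

    resp = requests.get(
        "http://controller:9696/v2.0/networks",
        headers={
            "X-Auth-Token": "<token>",
            # The charset parameter on Accept is what newer WebOb releases
            # reject, producing the failure described in the bug.
            "Accept": "application/json; charset=utf-8",
        },
    )
    print(resp.status_code)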
mlavalletidwellr_: not much more than what is in the report. I asked Jacob to submit it so we can flesh it out as a team14:43
tidwellr_mlavalle: thanks, I guess we wait :)14:43
mlavalleany other bugs we should discuss today?14:44
tidwellr_https://bugs.launchpad.net/neutron/+bug/1829414 came up in the report, I'm not sure what to make of it since I can't conceive of a real-life scenario where this occurs, but it is interesting14:45
openstackLaunchpad bug 1829414 in neutron "Attribute filtering should be based on all objects instead of only first" [Medium,In progress] - Assigned to Tom Stappaerts (tstappae)14:45
lajoskatonaPerhaps https://bugs.launchpad.net/neutron/+bug/182854314:45
openstackLaunchpad bug 1828543 in neutron "Routed provider networks: placement API handling errors" [Medium,New] - Assigned to Lajos Katona (lajos-katona)14:45
mlavallethanks for bringing them up14:46
mlavallelet's move on14:46
lajoskatonaI started a discussion on ml: http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006377.html14:46
mlavalleThis week's deputy is Swami. I already exchanged emails with him, so he is on it14:47
mlavalleand next week it is amotoki's turn14:47
lajoskatonait seems that placement started to follow some guidance from the API-SIG that nobody else follows, not even keystoneauth; some advice on how to proceed would be good, as it seems to be a cross-project problem14:47
mlavallelajoskatona: I will add my two cents soon14:47
lajoskatonamlavalle: thanks14:48
mlavalle#topic neutron-lib14:48
*** openstack changes topic to "neutron-lib (Meeting topic: networking)"14:48
mlavalleboden: you around?14:48
bodenhi14:48
bodenbusiness as usual; working some consumption patches, but have been going a little slow as it seems the gate has been fussy14:49
bodenI don't see any items worth discussing, unless someone has something?14:49
mlavallenot from me14:50
bodenI will note that we got 14 inches of snow last night in Colorado!!14:50
bodenbe glad you already came to the PTG :)14:50
bodenthat's it14:50
njohnstonyowza14:50
mlavalleLOL14:50
mlavallealmost summer14:51
bodenwinter's here14:51
bcafarelso the few snowflakes we got for the summit were really nothing :)14:51
slaweq:D14:51
mlavalle#topic On demand agenda14:51
*** openstack changes topic to "On demand agenda (Meeting topic: networking)"14:51
mlavallenjohnston, ralonsoh: you are up14:52
ralonsohvery quick14:52
njohnstonOK, so this has to do with everyone's favorite topic: OVO.14:52
ralonsohnjohnston, please14:52
mlavalleyaay!14:52
njohnstonIn change https://review.opendev.org/#/c/658155/ ralonsoh proposed removing old OVO compatibility code for QoS objects. The OVO compatibility code is meant to facilitate rolling upgrades so that a server can communicate with an agent when the version difference is one off.14:53
njohnston Therefore there should not be any need to keep code beyond 2 cycles after the object version change; all of the code ralonsoh proposed removing was >1 year old. I think this is useful housecleaning and I wanted to suggest to the community that we undertake such cleaning more broadly.14:53
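For readers unfamiliar with the mechanism: the compatibility code in question lives in each versioned object's obj_make_compatible() downgrade hook. A hypothetical (non-Neutron) object showing the kind of branch that becomes dead weight once no supported upgrade path needs the old version:

    from oslo_utils import versionutils
    from oslo_versionedobjects import base as obj_base
    from oslo_versionedobjects import fields as obj_fields


    @obj_base.VersionedObjectRegistry.register
    class QosRuleExample(obj_base.VersionedObject):
        # 1.0: initial version
        # 1.1: added 'direction'
        VERSION = '1.1'

        fields = {
            'id': obj_fields.UUIDField(),
            'max_kbps': obj_fields.IntegerField(),
            'direction': obj_fields.StringField(default='egress'),
        }

        def obj_make_compatible(self, primitive, target_version):
            super(QosRuleExample, self).obj_make_compatible(
                primitive, target_version)
            target = versionutils.convert_version_to_tuple(target_version)
            if target < (1, 1):
                # An agent one release behind only understands 1.0, so drop
                # the newer field when it asks for that version; once no
                # supported upgrade path needs 1.0 any more, this branch is
                # exactly the kind of code the cleanup above removes.
                primitive.pop('direction', None)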
njohnstonMy main concern is that unnecessary evaluation for potential downgrades may slow down RPC messaging14:54
njohnstonralonsoh: your turn14:54
ralonsohas I commented in the agenda, first I'll submit a doc in devref with examples for future developments14:54
slaweq+1 from me14:55
ralonsohthen I'll clean those compatibility functions not needed anymore14:55
slaweqwe should support N -> N+1 upgrades, right?14:55
davidshaN -> N+1 would mean backporting changes wouldn't it?14:55
njohnstonslaweq: Correct, we're only talking about code that would be N -> N+2,N+3,etc14:55
ralonsohslaweq, if someone adds new modifications, this person will need to add the conversion methods14:56
ralonsohbut what we don't support is N -> N-1 with older versions (>1 year)14:56
mlavalleIt makes sense to me14:56
ralonsohand that's all, if you accept it14:56
njohnstonto put it another way: we don't need code in Train to handle an object version we added in Pike14:56
slaweqralonsoh: yes, maybe I took too big a shortcut here :)14:57
mlavallecorrect14:57
ralonsohnjohnston, exactly14:57
slaweqI agree with what You said14:57
amotokido we support only N->N+1 in RPC messaging?14:57
tidwellr_I thought so.....14:57
amotokiit means all compute nodes should be upgraded before upgrading N+214:57
njohnstonamotoki: I believe that is all we have ever promised.  I went back and reviewed the old summit presentations about rolling upgrades, and that was the offered solution14:58
ralonsohamotoki, we support N > N+1 and N >N-1 if versions have less than 1 year14:58
ralonsohsorry, not  N > N+114:58
ralonsohalways downgrade14:58
*** bbowen_ has joined #openstack-meeting14:58
davidshaI was thinking14:58
amotokiI understand the upstream community only ensures N->N+1 upgrades.14:58
amotokifor example db migration only supports N->N+114:59
ralonsohonly one version difference, yes amotoki14:59
mlavalleok, guys, we ran out of time15:00
mlavallethanks for attending15:00
amotokiI think guaranteeing the N->N+1 upgrade and supporting only N->N+1 are a bit different15:00
mlavallehave a great week15:00
davidshaThanks15:00
bcafarelbye!15:00
amotokilet's continue this discussion somewhere15:00
mlavalle#endmeeting15:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"15:00
lajoskatonabye15:00
openstackMeeting ended Tue May 21 15:00:37 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/networking/2019/networking.2019-05-21-14.00.html15:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/networking/2019/networking.2019-05-21-14.00.txt15:00
openstackLog:            http://eavesdrop.openstack.org/meetings/networking/2019/networking.2019-05-21-14.00.log.html15:00
slaweqbye15:00
ralonsoh#startmeeting neutron_qos15:01
openstackMeeting started Tue May 21 15:01:03 2019 UTC and is due to finish in 60 minutes.  The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.15:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:01
*** openstack changes topic to " (Meeting topic: neutron_qos)"15:01
openstackThe meeting name has been set to 'neutron_qos'15:01
ralonsohhi!15:01
*** bbowen has quit IRC15:01
davidshaHey!15:01
ralonsohdavidsha, I have you topic in Open Discussion15:01
davidshaperfect!15:01
rubasovhi15:02
lajoskatonao/15:02
mlavalleit's what we talked about in Denver, right?15:02
*** boden has left #openstack-meeting15:02
ralonsohmlavalle, yes, the classifier15:02
mlavallecool15:02
*** jchhatbar is now known as janki15:02
ralonsohok, let's go15:02
ralonsoh#topic RFEs15:02
*** openstack changes topic to "RFEs (Meeting topic: neutron_qos)"15:02
ralonsoh#link https://bugs.launchpad.net/neutron/+bug/157898915:02
openstackLaunchpad bug 1578989 in neutron "[RFE] Strict minimum bandwidth support (egress)" [Wishlist,Fix released] - Assigned to Bence Romsics (bence-romsics)15:02
ralonsohWe still have two patches under review15:03
ralonsoh#link https://review.opendev.org/#/c/629142/15:03
ralonsoh#link https://review.opendev.org/#/c/629253/15:03
ralonsohrubasov, lajoskatona any update?15:03
rubasovlajoskatona: please go ahead15:03
lajoskatonaralonsoh: yes, I think you refer to tempest tests15:03
ralonsohyes15:04
ralonsohI see you are still waiting for Nova 2.70 and 2.7115:04
lajoskatonaI think I started to get some review from tempest folks, so I hope that I can push them forward15:04
lajoskatonaralonsoh: yes, I took over those patches to push mine forward :-)15:04
ralonsohok, so your patch is rebased on top of those ones15:05
ralonsohperfect15:05
lajoskatonaralonsoh: so I think the progress is slow, but there's some15:05
ralonsohI'll put this one in my review pile15:05
lajoskatonaralonsoh: thanks15:05
ralonsoh#link https://review.opendev.org/#/c/629253/ is still  failing15:06
ralonsohok, any other update on this RFE?15:06
*** cheng1__ has joined #openstack-meeting15:07
rubasovralonsoh: not from me15:07
lajoskatonaralonsoh: yeah, I wanted to check that after there is some progress on the nova v2.70/71 schema checks15:07
ralonsohthank you!15:07
ralonsohlajoskatona, ok15:07
ralonsohperfect, next one15:08
ralonsoh#link https://bugs.launchpad.net/neutron/+bug/156096315:08
openstackLaunchpad bug 1560963 in neutron "[RFE] Minimum bandwidth support (egress)" [Wishlist,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)15:08
ralonsohSlow progress15:08
ralonsohOne patch is almost merged, but I need to send a new version for https://review.opendev.org/#/c/655969/15:08
ralonsohI'll update the status next week15:09
ralonsohthe last RFE I have15:09
ralonsoh#link https://bugs.launchpad.net/neutron/+bug/181847915:09
openstackLaunchpad bug 1818479 in neutron "RFE Decouple placement reporting service plugin from ML2" [Wishlist,Incomplete] - Assigned to Bence Romsics (bence-romsics)15:09
ralonsohI think this one has been discussed in the previous meeting15:09
rubasovralonsoh: that's another one15:10
ralonsohoh, yes!!15:10
ralonsohsorry15:10
rubasovralonsoh: this one we agreed on the ptg to postpone indefinitely15:10
mlavalleyes15:10
ralonsohyou are right15:10
mlavallewe don't have the use case15:10
rubasovI think I set it to Incomplete15:10
ralonsohdo you want me to keep it there? I can freeze it for now15:11
rubasovand added the rfe-postponed tag15:11
rubasovit's probably a good idea to freeze it and not discuss every 2nd week15:11
ralonsohok, I'll remove it from the etherpad for now15:11
rubasovbecause there won't be much happening15:11
rubasovralonsoh: thank you15:11
ralonsohthanks!15:11
ralonsohnext topic15:11
ralonsoh#topic Bugs15:11
*** openstack changes topic to "Bugs (Meeting topic: neutron_qos)"15:11
ralonsoh#link https://bugs.launchpad.net/neutron/+bug/182409515:12
openstackLaunchpad bug 1824095 in neutron "Fullstack job broken in stable/stein branch" [High,Fix committed] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)15:12
ralonsohan small problem with fullstack tests15:12
ralonsohsolved in https://review.opendev.org/#/c/651458/15:12
ralonsohI'll remove this one from the backlog, just for the records15:12
ralonsohnext one15:12
ralonsoh#link https://bugs.launchpad.net/neutron/+bug/182669515:12
openstackLaunchpad bug 1826695 in neutron "[L3][QoS] cache does not removed when router is down or deleted" [Medium,In progress] - Assigned to LIU Yulong (dragon889)15:12
ralonsohpatch #link https://review.opendev.org/#/c/656105/15:12
ralonsohliuyulong, hi15:13
liuyulonghi15:13
ralonsohabout https://review.opendev.org/#/c/656105/ and slaweq's comment15:13
ralonsohwill you push a fullstack test for this one?15:14
liuyulongI will15:14
ralonsohperfect, IMO, the code is OK now15:15
*** cheng1 has joined #openstack-meeting15:15
liuyulongA long test case, set router floating IP QoS, set router down and up.15:15
liuyulongThen verify the TC rules.15:15
ralonsohboth when the router is up, then down and up again15:16
ralonsohnot both, in those three statuses15:16
liuyulongadmin_state down/up, this is the point of the bug.15:17
ralonsohok, just to verify the first UP status is the same as the second15:17
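A rough sketch of the assertion such a fullstack test would make; the device name and rate are made-up placeholders:

    import subprocess


    def tc_filters(device):
        # Dump the tc filters the L3 agent programs for floating IP QoS.
        return subprocess.check_output(
            ['tc', 'filter', 'show', 'dev', device], universal_newlines=True)


    # After flipping admin_state_up off and back on, the rate limit should be
    # programmed again exactly as before the restart.
    assert 'rate 2048Kbit' in tc_filters('qg-0123abcd-ef')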
liuyulongThe patch is now based on this: https://review.opendev.org/#/c/656163/15:17
ralonsohbut it's ok15:17
liuyulongThat l3 agent lock minimizing patch.15:18
ralonsohyeah, well, I still have some concerns about https://review.opendev.org/#/c/656163/ and the helpers used15:18
ralonsohin https://review.opendev.org/#/c/656163/8/neutron/common/coordination.py15:18
ralonsohbut this is other topic and we can discuss it in the review15:19
liuyulongSure15:19
ralonsohthank you very much!15:19
ralonsohI don't have more QoS bugs in the list, am I missing someone?15:20
ralonsohcool, next topic15:20
ralonsoh#topic Open Discussion15:21
*** openstack changes topic to "Open Discussion (Meeting topic: neutron_qos)"15:21
ralonsoh#link https://bugs.launchpad.net/neutron/+bug/1476527 - Adoption of Neutron Classifier API into QoS15:21
openstackLaunchpad bug 1476527 in neutron "[RFE] Add common classifier resource" [Wishlist,Triaged] - Assigned to Igor D.C. (igordcard)15:21
ralonsohthe main patch is #link https://review.openstack.org/#/c/63633315:21
ralonsohI have some concerns about it15:21
ralonsohthis patch is modifying the DB and making references to an external project (classifier)15:22
ralonsohI tried to figure out how to make those API and DB changes generic and not dependent on the classifiers (classification groups in the DSCP QoS rule)15:23
ralonsohbut I didn't find the way15:23
ralonsohhttps://review.opendev.org/#/c/636333/9/neutron/db/migration/alembic_migrations/versions/stein/expand/cd28a37b069f_classifier_dscp.py15:23
*** armstrong has joined #openstack-meeting15:23
*** jrbalderrama has quit IRC15:23
ralonsohthe main problem here is https://review.opendev.org/#/c/636333/9/neutron/db/migration/alembic_migrations/versions/stein/expand/cd28a37b069f_classifier_dscp.py@10315:23
*** priteau has joined #openstack-meeting15:24
ralonsohadding the possibility to create several dscp rules per QoS policy15:24
ralonsohbut depending on something that doesn't exist in Neutron, the classification groups15:24
ralonsohdavidsha, ?15:24
davidshaIf NC isn't part of a deployment then that would function the same as the original unique constraint15:25
davidshaclassification_group being None and it couldn't create a duplicate with None as the classification group, is that the concern?15:26
ralonsohby default, "classification_group_id" can't be a reference to another table ID15:27
ralonsohright?15:27
davidshayup, it defaults to None15:27
ralonsohmlavalle, is it OK to have something like this in the DSCP table? A column which, by default, is going to be None and which, if NC is loaded, will be populated15:28
ralonsoh?15:28
mlavalleI'd say yes15:29
mlavallewhat is your concern about it?15:29
ralonsohbecause we'll have something like "classification_group_id" in that table15:30
ralonsohbut this is something not directly related to Neutron itself15:30
davidshaA DB entry thats always there and unless an external project is enabled can't be anything but None15:30
ralonsohBTW, for me it is OK15:30
ralonsohI'm just playing devil's advocate here15:31
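An illustrative alembic expand migration for what is being discussed; the table and column names are assumptions drawn from the conversation, not the actual patch under review:

    from alembic import op
    import sqlalchemy as sa


    def upgrade():
        # Nullable on purpose: the column stays None unless the external
        # neutron-classifier plugin is deployed and populates it, which keeps
        # the default behaviour equivalent to the original unique constraint.
        op.add_column(
            'qos_dscp_marking_rules',
            sa.Column('classification_group_id', sa.String(length=36),
                      nullable=True),
        )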
ralonsohthe other concern is: how to "fill" correctly the agents qos extension15:31
davidshakk, It was the way it needed to be written, just the way DB migrations work for extensions15:31
ralonsohok15:32
ralonsohabout the agent extensions15:32
ralonsohhttps://review.opendev.org/#/c/636333/9/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py15:32
mlavalleI'm ok with it15:32
ralonsohmlavalle, perfect15:32
ralonsohdavidsha, how are the agents going to add the classification information to the DSCP rules? Will you create something like a QoS extension?15:33
davidsharalonsoh: the actual backend implementation went out of scope for Neutron Classifier; it was always presumed that the consuming extension would be the one implementing the classification15:34
ralonsohbut the drivers will need to have this information15:35
davidshaAs in the classification definition?15:35
ralonsohthe DSCP rules are going to be different depending on whether NC is loaded15:36
davidshaYes15:36
davidshaThis is done through the agent extensions API15:36
davidshahttps://review.opendev.org/#/c/636333/9/neutron/agent/common/agent_extension_api.py15:36
ralonsohyes, that's the point15:37
ralonsohthis code refers to something that doesn't exist in Neutron15:37
davidshaSpecifically this part, or generally15:38
davidsha?15:38
ralonsohwhat you are doing here is to create a base agent extension, inherited by the other extensions15:38
ralonsohadding API calls to something that doesn't exist in Neutron, the classification groups15:39
ralonsoh?15:40
davidshaagent extension APIs* (just to clarify). Ya, it functions in the same way QoS would retrieve the definitions of its QoS policies15:40
davidshabut this is something inside Neutron that would need something in Neutron Classifier to propagate the classification definitions15:41
ralonsohI know, but as I said, we introduce the concept of classification groups in Neutron but Neutron doesn't know anything about this15:42
ralonsohthis is not even an in-tree extension15:42
ralonsohalthough we add it15:43
ralonsohdavidsha, ?15:44
ralonsohsorry for being a pain in the neck15:44
davidshaIt's fine, I'm just trying to find words ;P15:45
davidshaThis is kinda just the result of how the original approval team wanted this implemented, I'm not sure how to work around it15:46
ralonsohmlavalle, is that ok to have something like this in Neutron?15:46
ralonsohhttps://review.opendev.org/#/c/636333/9/neutron/agent/common/agent_extension_api.py15:46
* mlavalle looking15:47
mlavallewe seem to be stumbling with the same thing, don't we?15:48
mlavalleit's the fact that we are using something that is external and that we don't know about15:48
mlavalleright?15:48
ralonsohthat's the point15:49
davidshaYa, it's the introduction of a soft external dependency15:49
mlavallebut how is this "library" different from say neutron-lib?15:49
davidshaIt's a service plugin + extension rather than a library.15:50
mlavalleit's a library, after all, right?15:50
davidshaIt creates resources to model classifications for other Neutron resources to consume.15:50
*** Lucas_Gray has quit IRC15:51
*** cheng1__ has quit IRC15:51
mlavalleso we are creating a consumption model in Neutron that didn't exist before?15:51
mlavalleshould we have an abstraction layer in Neutron to consume it?15:51
davidshayes, but through the AgentExtensionAPI, which has been there since Newton.15:52
mlavalleahhh, that's the abstraction layer15:52
davidshaYup, it's how L2/L3 extensions can retrieve info from the agents.15:53
davidshaNC is adding an extra bit so that extensions can ask it what a classification group id actually means.15:54
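On the agent side, what davidsha describes would look roughly like the sketch below; only the AgentExtension interface is existing Neutron machinery, while the classifier lookup call is hypothetical:

    from neutron_lib.agent import extension


    class DscpClassifierExtension(extension.AgentExtension):

        def consume_api(self, agent_api):
            # The agent hands every loaded extension its AgentExtensionAPI
            # object at startup; the extension keeps a reference to it.
            self.agent_api = agent_api

        def initialize(self, connection, driver_type):
            pass

        def _match_fields(self, rule):
            # Hypothetical lookup: resolve a classification group id into the
            # packet-match fields it represents, treating None (no classifier
            # deployed) as "match all traffic".
            if getattr(rule, 'classification_group_id', None) is None:
                return {}
            return self.agent_api.get_classification_group(
                rule.classification_group_id)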
*** e0ne has quit IRC15:55
*** janki has quit IRC15:55
mlavalleI think I need to spend some time with https://review.opendev.org/#/c/63633315:55
*** macza has joined #openstack-meeting15:55
mlavalleto wrap my mind around it15:56
*** macza has quit IRC15:56
davidshakk, thanks15:56
ralonsohmlavalle, I'll ping slaweq for this too, if he has some time15:56
mlavalleok15:56
*** macza has joined #openstack-meeting15:56
*** smcginnis has joined #openstack-meeting15:57
ralonsohdavidsha, sorry but we need to find a way to introduce NC in a good way15:57
davidshaack, thats fair15:57
ralonsohok, guys, almost time to finish the meeting15:57
ralonsohsomething else in the agenda?15:57
davidshaThere was a suggestion to move the Service plugin itself into Neutron, is that feasible?15:58
ralonsohdavidsha, I don't remember this suggestion15:58
ralonsohwho did it?15:58
davidsharalonsoh: kk, I thought you mentioned it at the PTG, I must have misunderstood15:58
njohnstonI never understood why NC was not done inside neutron-lib15:59
ralonsohnjohnston, that could be a good solution15:59
ralonsohanyway guys, we need to finish now15:59
ralonsohthank you all15:59
davidshanjohnston, if it was a Mixin for things to include, it would make sense, but not a resource unto itself I think15:59
davidshathanks!16:00
ralonsoh#endmeeting16:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"16:00
ralonsohbye16:00
openstackMeeting ended Tue May 21 16:00:07 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/neutron_qos/2019/neutron_qos.2019-05-21-15.01.html16:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_qos/2019/neutron_qos.2019-05-21-15.01.txt16:00
rubasovthanks16:00
openstackLog:            http://eavesdrop.openstack.org/meetings/neutron_qos/2019/neutron_qos.2019-05-21-15.01.log.html16:00
rubasovbye16:00
lajoskatonabye16:00
*** lajoskatona has left #openstack-meeting16:00
slaweq#startmeeting neutron_ci16:00
openstackMeeting started Tue May 21 16:00:47 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
slaweqhi16:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
ralonsohhi16:00
*** openstack changes topic to " (Meeting topic: neutron_ci)"16:00
openstackThe meeting name has been set to 'neutron_ci'16:00
njohnstono/16:00
slaweqlet's wait a few more minutes for mlavalle and others16:01
bcafareljust passing by to say hi before I leave :)16:02
slaweqhi bcafarel :)16:02
bcafarelhi and bye!16:02
*** carloss has joined #openstack-meeting16:02
*** wwriverrat has quit IRC16:02
njohnstonsee you later, bcafarel16:02
haleybhi16:02
slaweqok, lets start16:02
slaweqfirst of all16:03
slaweq#link http://grafana.openstack.org/dashboard/db/neutron-failure-rate16:03
slaweqplease load it now to have it ready later :)16:03
*** armstrong has left #openstack-meeting16:03
slaweqand one small announcement - I moved agenda of this meeting to the etherpad: https://etherpad.openstack.org/p/neutron-ci-meetings16:03
slaweqso You can take a look at it and add anything You want to it :)16:03
slaweqfirst topic for today16:04
slaweq#topic Actions from previous meetings16:04
*** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)"16:04
slaweqand first action16:04
slaweqmlavalle to continue debuging reasons of neutron-tempest-plugin-dvr-multinode-scenario failures16:04
slaweqI think that mlavalle is not here now16:05
slaweqso lets skip to actions assigned to other people16:05
slaweqralonsoh to debug issue with neutron_tempest_plugin.api.admin.test_network_segment_range test16:05
ralonsohslaweq, sorry again but I didn't find anything yet16:06
slaweqok, no problem16:06
slaweqI don't think it is very urgent issue for now16:06
slaweqcan I assign it to You for next week also?16:06
ralonsohyes but I'll need a bit of help16:07
ralonsohI don't find the problem there16:07
slaweqTBH I haven't seen this issue recently16:07
mlavalleslaweq: I have to look at an internal issue16:08
slaweqso maybe let's just wait until it happens again, then report a proper bug on Launchpad and work on it16:08
slaweqralonsoh: how about that?16:08
ralonsohslaweq, perfect16:08
slaweqralonsoh: ok, thx16:08
slaweqmlavalle: sure, no problem :)16:08
*** panda|rover has quit IRC16:09
slaweqok, so lets go back to mlavalle's actions now16:09
slaweqmlavalle to continue debuging reasons of neutron-tempest-plugin-dvr-multinode-scenario failures16:10
slaweqany updates on this one?16:10
mlavallenot much time doing that16:10
slaweqok, can I assign it to You for next week also?16:10
*** panda has joined #openstack-meeting16:10
mlavalleyes16:10
slaweq#action mlavalle to continue debuging reasons of neutron-tempest-plugin-dvr-multinode-scenario failures16:11
slaweqthx :)16:11
slaweqso next one16:11
slaweqmlavalle to talk with nova folks about slow responses for metadata requests16:11
mlavalledidn't have time, sorry :-)16:11
mlavalle;-(16:11
slaweqno problem :)16:11
slaweqcan I assign it to You for next week then?16:11
mlavalleyes16:11
slaweq#action mlavalle to talk with nova folks about slow responses for metadata requests16:12
slaweqthx16:12
slaweqand the last one:16:12
slaweqslaweq to fix number of tempest-slow-py3 jobs in grafana16:12
slaweqI didn't but haleyb fixed this16:12
slaweqthx haleyb :)16:12
slaweqany questions/comments?16:12
haleyb:)16:12
haleybi don't know what i did though16:13
slaweqYou fixed number of jobs in grafana dashboard's config :)16:13
slaweqI don't have link to patch now but it was merged for sure already16:14
haleyboh yes, that one16:14
slaweqyep16:14
slaweqok, lets move forward then16:14
slaweqnext topic16:15
slaweq#topic Stadium projects16:15
*** openstack changes topic to "Stadium projects (Meeting topic: neutron_ci)"16:15
slaweqPython 3 migration16:15
slaweqStadium projects etherpad: https://etherpad.openstack.org/p/neutron_stadium_python3_status16:15
slaweqany updates on this?16:15
*** gyee has joined #openstack-meeting16:15
mlavalleI have to talk to yamamoto about this, in regards to midonet16:16
njohnstondid not have a chance this week to work on it; current state of the fwaas tests in neutron-tempest-plugin is that they wait forever and all tests die with timeouts16:17
slaweqthat's bad :/16:18
slaweqso fwaas isn't py3 ready yet?16:18
njohnstonI think it is py3 ready, it just has massive issues in other areas16:18
slaweqahh ok :)16:18
*** igordc has joined #openstack-meeting16:20
slaweqok, so I think that we can move forward as there is not a lot of update on py3 migration16:20
slaweqtempest-plugins migration16:20
njohnstonyeah16:20
slaweqEtherpad: https://etherpad.openstack.org/p/neutron_stadium_move_to_tempest_plugin_repo16:20
njohnstonI realize I gave my tempest plugin update in the py3 section, sorry about that.16:20
slaweqhere I know that bcafarel did some progress and his first patch is even merged16:21
slaweqso he is the first one :)16:21
slaweqnjohnston: no problem :)16:21
slaweqfor networking-bgpvpn I have patches ready for review:16:21
slaweqStep 1: https://review.openstack.org/65299116:21
slaweqStep 2: https://review.opendev.org/#/c/657793/16:21
slaweqso I kindly ask for reviews :)16:22
mlavalleok16:22
slaweqany other comments/questions on this topic?16:23
njohnstonI'll take a look16:23
slaweqthx guys16:23
mlavallenot from me16:23
slaweqok, so lets move on then16:24
slaweq#topic Grafana16:24
*** openstack changes topic to "Grafana (Meeting topic: neutron_ci)"16:24
slaweq#link http://grafana.openstack.org/dashboard/db/neutron-failure-rate (was alredy given at the beginning)16:24
slaweqthere are some gaps from the weekend but apart from that I think our CI has looked quite good in the last few days16:26
njohnstonagreed16:26
slaweqand I also saw many patches merged even without rechecking, which is surprising :)16:26
mlavalleyes, it looks good16:26
njohnstonyep!16:27
*** ricolin has quit IRC16:27
slaweqeven SSH problems happened less often recently, but I'm not sure if that is just by accident or if there was e.g. some change in infra which helped with it16:28
slaweqone problem which I still see are fullstack and (bit less but still) functional tests16:28
slaweqboth of them are at quite high failure rates16:29
njohnstonfullstack is high, around 30%16:29
slaweqyep16:29
slaweqfunctional was like that last week too16:30
njohnstonbut in grafana, in check queue, functional seems much less, between 5 and 11%, so hopefully we fixed something compared to last week16:30
slaweqbut then we merged https://review.opendev.org/#/c/657849/ from ralonsoh and I think that helped a lot16:30
slaweqso lets talk about fullstack tests now16:31
slaweq#topic fullstack/functional16:31
*** openstack changes topic to "fullstack/functional (Meeting topic: neutron_ci)"16:32
slaweqI was checking results of fullstack jobs from last couple of days16:32
slaweqand I found couple of failure examples16:32
slaweqone is (again) problem with neutron.tests.fullstack.test_l3_agent.TestHAL3Agent16:33
slaweqlike in http://logs.openstack.org/78/653378/7/check/neutron-fullstack-with-uwsgi/d8b47f9/testr_results.html.gz16:33
slaweqbut I think I saw it more times during last couple of days16:33
slaweqI can find and reopen the bug related to this16:34
slaweqbut I don't think I will have time to look into this in the next few days16:35
slaweqso maybe someone else will want to look16:35
slaweq#action slaweq to reopen bug related to failures of neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost16:35
slaweqI will ask liuyulong tomorrow if he can take a look at this once again16:36
*** mattw4 has joined #openstack-meeting16:36
slaweqfrom other errors I saw also failed neutron.tests.fullstack.test_dhcp_agent.TestDhcpAgentHA.test_reschedule_network_on_new_agent test16:36
slaweqhttp://logs.openstack.org/87/658787/5/check/neutron-fullstack/7d35c49/testr_results.html.gz16:36
slaweqralonsoh: I know You were looking into such failure some time ago, right?16:36
*** davidsha has quit IRC16:37
ralonsohslaweq, I don't remember this one16:37
slaweqralonsoh: ha, found it: https://bugs.launchpad.net/neutron/+bug/179955516:38
openstackLaunchpad bug 1799555 in neutron "Fullstack test neutron.tests.fullstack.test_dhcp_agent.TestDhcpAgentHA.test_reschedule_network_on_new_agent timeout" [High,Confirmed] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)16:38
ralonsohslaweq, yes, I was looking at the patch16:38
ralonsohhttps://review.opendev.org/#/c/643079/16:38
ralonsohsame as before, I didn't find anything relevant to solve the bug16:38
ralonsohI'll take a look at those logs16:39
slaweqmaybe we should add some additional logging to master branch16:39
slaweqit may help us investigate when the same issue happens again16:39
slaweqwhat do You think about it?16:40
ralonsohslaweq, I'll propose a patch for this16:40
slaweqralonsoh++ thx16:40
slaweq#action ralonsoh to propose patch with additional logging to help debug https://bugs.launchpad.net/neutron/+bug/179955516:40
openstackLaunchpad bug 1799555 in neutron "Fullstack test neutron.tests.fullstack.test_dhcp_agent.TestDhcpAgentHA.test_reschedule_network_on_new_agent timeout" [High,Confirmed] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)16:40
slaweqthere was also once failure of test_min_bw_qos_policy_rule_lifecycle test:16:41
slaweqhttp://logs.openstack.org/46/453346/11/check/neutron-fullstack/7c6c94b/testr_results.html.gz16:41
slaweqand there are some errors in log there http://logs.openstack.org/46/453346/11/check/neutron-fullstack/7c6c94b/controller/logs/dsvm-fullstack-logs/TestMinBwQoSOvs.test_min_bw_qos_policy_rule_lifecycle_egress,openflow-cli_/neutron-openvswitch-agent--2019-05-21--00-19-19-155407_log.txt.gz?level=ERROR16:42
ralonsohslaweq, but I think that was solved16:42
ralonsohslaweq, I'll review it too16:42
slaweqresults are from 2019-05-21 00:1916:43
slaweqbut it is quite an old patch, so it might be that this is just an old error16:43
slaweqit happened on this patch https://review.opendev.org/#/c/453346/16:44
ralonsohyes, I saw this. I think the patch I applied some weeks ago solved this16:44
slaweqok, so we should be good with this one then :)16:44
slaweqthx ralonsoh for confirmation16:45
ralonsohthere should be no complaint about deleting a non-existing QoS rule16:45
slaweqok, so lets move on to functional tests now16:45
slaweqI saw 2 "new" issues there16:45
slaweqfirst is test_ha_router_failover test fails again16:45
slaweqhttp://logs.openstack.org/61/659861/1/check/neutron-functional/3708673/testr_results.html.gz - I reported it as new bug https://bugs.launchpad.net/neutron/+bug/182988916:45
openstackLaunchpad bug 1829889 in neutron "_assert_ipv6_accept_ra method should wait until proper settings will be configured" [Medium,Confirmed] - Assigned to Slawek Kaplonski (slaweq)16:45
slaweqI will take care of this one16:46
slaweqI think I already know what is the issue there16:46
slaweqand I described it in bug report16:46
slaweqand second issue which I found is:16:46
slaweqneutron.tests.functional.agent.linux.test_bridge_lib.FdbInterfaceTestCase16:46
slaweqhttp://logs.openstack.org/84/647484/9/check/neutron-functional/6709666/testr_results.html.gz16:46
slaweqfor which njohnston reported a bug https://bugs.launchpad.net/neutron/+bug/182989016:46
openstackLaunchpad bug 1829890 in neutron "neutron-functional CI job fails with InterfaceAlreadyExists error" [Undecided,New]16:46
slaweqit failed on patch https://review.opendev.org/#/c/64748416:47
*** tssurya has quit IRC16:47
slaweqand I saw it only on this patch16:47
slaweqbut it doesn't look like it's related to this patch IMO16:47
njohnstonhttp://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Interface%20interface%20already%20exists%5C%2216:47
njohnstonand expand out to 7 days16:48
njohnstonI get 11 failures on 5 different changes16:48
slaweqok, so it is an issue, thx njohnston16:48
slaweqis there any volunteer to look into this one?16:49
mlavalleI have some catch up to do this week16:49
mlavallebut if by next week, nobody volunteers, I'll jump in16:49
*** whoami-rajat has quit IRC16:49
slaweqok, I will add it to my todo for this week, but I'm not sure if I will have time16:50
slaweqso I will not assign it to myself for now, maybe there will be someone else who wants to take it16:50
njohnstonI'm in the same boat; I have a number of things ahead of it, but if I get a chance I'll jump in16:50
slaweqok16:51
slaweqso that's all regarding fullstack/functional jobs16:51
slaweqany questions/comments?16:51
mlavallenot from me16:51
slaweqok16:52
slaweqlets then move on quickly to next topic16:52
slaweq#topic Tempest/Scenario16:52
*** openstack changes topic to "Tempest/Scenario (Meeting topic: neutron_ci)"16:52
*** armax has quit IRC16:52
slaweqfirst of all, I want to mention that we have quite often failures with errors like16:52
slaweq"Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'"16:52
slaweqit causes errors in devstack deployment and job is failing16:52
slaweqit's not related to neutron directly16:53
slaweqand AFAIK infra people are aware of this16:53
slaweqand second thing which I want to mention16:53
slaweqI did patch with summary of tempest jobs16:53
slaweqhttps://review.opendev.org/#/c/660349/ - please review it,16:53
slaweqit's follow up after discussion from Denver16:54
mlavallenice!16:54
slaweqwhen this is merged, my plan is to maybe replace some of those tempest jobs with neutron-tempest-plugin jobs16:54
slaweqas IMHO we don't need to test tempest-full-xxx jobs with every possible config like dvr/l3ha/lb/ovs/....16:55
slaweqwe can IMHO run tempest-full with some default config and then test neutron related things with other configurations16:55
njohnstoncool idea, I like it16:56
slaweqbut I want to have this list merged first and then use it as a "todo" list :)16:56
mlavalleok16:56
slaweqthere is also list of grenade jobs in this patch16:56
slaweqand speaking about grenade jobs, I sent email some time ago16:56
slaweqhttp://lists.openstack.org/pipermail/openstack-discuss/2019-May/006146.html16:57
slaweqplease read it, and tell me what You think about it16:57
slaweqI will also ask gmann and other qa folks about their opinion on this16:57
slaweqand that's all from my side :)16:58
slaweqany questions/comments?16:58
slaweqwe have about 1 minute left16:58
mlavallenot from me16:58
njohnstonyeah I was waiting for gmann's response to that email16:58
slaweqok, I will ping him also :)16:58
njohnstonthanks slaweq16:59
slaweqso if there is nothing else16:59
slaweqthx for attending16:59
slaweqand have a nice week :)16:59
slaweq#endmeeting16:59
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"16:59
slaweqo/16:59
openstackMeeting ended Tue May 21 16:59:28 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:59
ralonsohbye16:59
openstackMinutes:        http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-05-21-16.00.html16:59
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-05-21-16.00.txt16:59
openstackLog:            http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-05-21-16.00.log.html16:59
*** mattw4 has quit IRC17:07
*** iyamahat has joined #openstack-meeting17:12
*** igordc has quit IRC17:12
*** igordc has joined #openstack-meeting17:13
*** whoami-rajat has joined #openstack-meeting17:15
*** mattw4 has joined #openstack-meeting17:20
*** iyamahat has quit IRC17:24
*** armax has joined #openstack-meeting17:27
*** iyamahat has joined #openstack-meeting17:27
*** eharney has quit IRC17:27
*** yamahata has joined #openstack-meeting17:27
*** enriquetaso has quit IRC17:28
*** enriquetaso has joined #openstack-meeting17:30
*** kopecmartin is now known as kopecmartin|off17:31
*** electrofelix has quit IRC17:36
*** ekcs has joined #openstack-meeting17:36
*** ralonsoh has quit IRC17:37
*** jamesmcarthur_ has quit IRC17:42
*** priteau has quit IRC17:42
*** igordc has quit IRC17:46
*** jamesmcarthur has joined #openstack-meeting17:58
*** enriquetaso has quit IRC17:58
*** rossella_s has joined #openstack-meeting18:00
*** jamesmcarthur has quit IRC18:02
*** jamesmcarthur has joined #openstack-meeting18:12
*** panda has quit IRC18:14
*** panda has joined #openstack-meeting18:15
*** igordc has joined #openstack-meeting18:18
*** eharney has joined #openstack-meeting18:21
*** jamesmcarthur_ has joined #openstack-meeting18:22
*** jamesmcarthur has quit IRC18:24
*** hyunsikyang__ has joined #openstack-meeting18:24
*** mattw4 has quit IRC18:24
*** hyunsikyang has quit IRC18:26
*** mattw4 has joined #openstack-meeting18:28
*** Shrews has joined #openstack-meeting18:40
clarkbanyone else here for the infra meeting?19:00
clarkbwe will get started momentarily19:00
ianwo/19:00
clarkb#startmeeting infra19:01
openstackMeeting started Tue May 21 19:01:13 2019 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.19:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:01
*** openstack changes topic to " (Meeting topic: infra)"19:01
openstackThe meeting name has been set to 'infra'19:01
clarkb#link http://lists.openstack.org/pipermail/openstack-infra/2019-May/006382.html19:01
clarkbyou will find today's agenda at that link19:01
clarkb#topic Announcements19:01
*** openstack changes topic to "Announcements (Meeting topic: infra)"19:01
*** enriquetaso has joined #openstack-meeting19:01
cmurphyo/19:01
clarkbThis one is sort of last minute but I'm going to be afk tomorrow to spend time with family fishing19:01
fungigood idea!19:02
fungiyou might be able to catch more family19:02
fungijust need the right bait19:02
clarkbor allergies from the great outdoors (though apparently there is a line of thought that allergies are more of a city thing because the USDA said to grow male trees in cities back in the 40s)19:03
clarkb#topic Actions from last meeting19:03
*** openstack changes topic to "Actions from last meeting (Meeting topic: infra)"19:03
clarkb#link http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-05-14-19.01.txt minutes from last meeting19:03
clarkbNo actions from last meeting as I had completed my action from the prior meeting19:03
clarkb#topic Specs approval19:04
*** openstack changes topic to "Specs approval (Meeting topic: infra)"19:04
*** zaneb has quit IRC19:04
clarkbNo specs up for approval but I did want to make a note that there are a small number of specs starting to trickle out from ptg/summit conversations/work19:04
*** zaneb has joined #openstack-meeting19:05
clarkbso if you've got a moment, skimming through or, even better, doing proper reviews on those would be great19:05
clarkbwe do have a rather packed agenda (I expect) so let's dive right in19:05
clarkb#topic Priority Efforts19:05
*** openstack changes topic to "Priority Efforts (Meeting topic: infra)"19:05
clarkb#topic Update Config Management19:05
*** openstack changes topic to "Update Config Management (Meeting topic: infra)"19:05
clarkbOne major item to take note of here is puppet inc deleted their puppet 3 and 4 apt/rpm repos19:06
fungi(on the previous topic, there was some renewed interest in the irc bot spec too)19:06
clarkbas a result we have switched to installing puppet 4 from the archive via a direct package install of the .deb19:06
clarkbwe no longer have centos 7 machines in production so no rpms to worry about, and the puppet-agent package includes everything it needs to function, at least with our puppet apply method19:06
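For reference, a direct .deb install of that sort typically reduces to something like the following minimal sketch; the package URL is a placeholder, not the exact archived package opendev pinned:

    #!/bin/bash
    # Sketch only: install puppet-agent from a downloaded .deb now that the
    # upstream apt repository is gone. Substitute the real archived package URL.
    set -e
    PKG_URL="https://example.org/archive/puppet-agent_amd64.deb"  # placeholder
    TMP_DEB="$(mktemp --suffix=.deb)"
    wget -O "$TMP_DEB" "$PKG_URL"
    # dpkg installs the package; apt-get -f pulls in any missing dependencies
    # (the puppet-agent AIO package is largely self-contained, so usually none).
    dpkg -i "$TMP_DEB" || apt-get -f install -y
    /opt/puppetlabs/bin/puppet --version  # AIO packages install under /opt/puppetlabs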
clarkbAnd we only have one last remaining puppet3 instance and that is ask.o.o19:07
*** gyee has quit IRC19:07
clarkb#link https://review.opendev.org/#/c/647877/ Last puppet 4 upgrade19:07
clarkbianw: any reason to not merge ^ today? I didn't want to step on your xenial upgrade for ask19:07
ianwclarkb: should be good, i can watch that in today19:07
clarkbgreat, I can help watch it too as I've learned some of the patterns for how puppet 4 gets unhappy19:08
clarkbOn the zuul driven CD side of things the reorg of base.yaml to split it up into a bunch of separate playbooks called by run_all.sh has merged19:08
clarkbthis means if you are making changes to base.yaml you will need to rebase and split your stuff out too19:08
fungiuneventfully too as far as i can tell19:08
fungiother than it seems to have (maybe?) shrunk the duration of our ansipup pulses19:09
clarkbone nice side effect of this is we actually fully test our ansible + puppet stuff in those system-config-run-base type jobs now19:09
fungiyes, over time we ought to be able to whittle away at the beaker and apply jobs in favor of these19:09
clarkbAnd over the longer term we can break stuff out of run_all.sh and have zuul jobs trigger those playbooks instead19:10
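The shape of that reorg is roughly the following; the playbook names and checkout path here are illustrative, not the exact ones in system-config:

    #!/bin/bash
    # Illustrative run_all.sh after splitting base.yaml into per-purpose
    # playbooks; each iteration could later become its own zuul-triggered job.
    set -e
    cd /opt/system-config  # path is an assumption for the sketch
    for playbook in base.yaml service-gitea.yaml service-nodepool.yaml; do
        ansible-playbook -f 20 "playbooks/${playbook}"
    done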
clarkbany questions, concerns, or things I've missed on this topic?19:10
ianwfungi: i think the overall time is still reflected in http://grafana.openstack.org/d/qzQ_v2oiz/bridge-runtime?orgId=1 and hasn't shrunk; though i will update the graph to take into account the other stats now being sent19:11
fungiahh19:11
fungisounded like we'd gone from 45 minutes between starts to 3019:11
*** senrique_ has joined #openstack-meeting19:11
fungibut that could also be coincidence19:11
clarkbI did want to say thank you to cmurphy for getting us on to more modern puppet even if puppet pulled the rug out from under us in the end. It was a fair bit of work and should hopefully result in a more sustainable setup going forward19:11
ianw++ !19:12
fungi#thanks cmurphy for driving our massive puppet upgrade19:12
fungioh, right, we don't keep statusbot in here19:12
cmurphy:)19:12
* mordred hands cmurphy a nicely glazed antelope19:13
*** enriquetaso has quit IRC19:13
clarkbWe do still have unhappy docker jobs in limestone. I think the plan to use the mirror nodes for that is still a good one, but we started brainstorming other debugging ideas in -infra today19:13
* cmurphy prefers cantaloupe tbh19:13
fungialso known as deglazed antelope19:14
clarkbI expect using the mirror will solve all the problems so this is mostly an exercise in understanding the quirks and features of docker tooling19:14
clarkb(I expect that because other people/jobs have used those mirrors successfully)19:14
fungii'm leaning toward missing a v6 iptables rule as a likely suspect. it fits the observed behavior19:15
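If that suspicion pans out, confirming it is a quick check along these lines; the port (8080) and the rule itself are assumptions for illustration, not a confirmed diagnosis:

    # Look for an existing IPv6 allow rule for the suspected port.
    ip6tables -L INPUT -n --line-numbers | grep 8080 || echo "no v6 rule for 8080"
    # If it really is missing, a rule like this would open it up:
    ip6tables -A INPUT -p tcp --dport 8080 -j ACCEPT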
clarkbalright anything else on the topic of config management before we move on?19:15
funginothing springs to mind19:16
clarkb#topic OpenDev19:16
*** openstack changes topic to "OpenDev (Meeting topic: infra)"19:16
clarkbI deleted the cgit cluster yesterday19:16
mordred\o/19:16
clarkbI have not heard any screaming yet19:16
* mordred screams in joy19:17
clarkbAs for next steps, I'm personally interested in improving the stability and sustainability of our new tooling19:18
clarkbI think we also have some cleanup work like project renames (which we have later on the agenda)19:18
clarkbfor stability and sustainability I'd like to see our gitea image builds become reliable (hence the docker job debugging), rebuild the gitea06 server which has a corrupted fs, and eventually rebuild all gitea servers with more disk and less cpu (based on cacti data)19:19
clarkb#link https://review.opendev.org/#/c/640027/ control plane clouds.yaml on nodepool builders19:19
fungiyeah, i'm fine putting off further service migrations until we get cleanup from the last one behind us19:19
*** gyee has joined #openstack-meeting19:19
clarkb#link https://review.opendev.org/#/c/640044/5 Build ubuntu bionic control plane images with nodepool19:19
clarkbThese two changes from mordred are part of being able to sanely rebuild the gitea servers19:20
clarkbif people have time to review them that would be great19:20
fungiawesome19:20
mordredlong live our new nodepool-created-base-images-overlords19:21
clarkbThat said I don't think this is the only opendev work that has to be done. I think ianw's work to build opendev in-region mirrors is helpful because it should result in more reliable jobs and starts the process of rotating out the old names19:21
mordred++19:21
clarkbI think we can probably start to entertain the idea of people picking off services like etherpad and paste and the like as well19:21
clarkbsince those should be self contained19:21
fungii see that as part of getting the image builds more reliable anyway19:21
fungi(the mirrors work)19:21
clarkbya19:22
mordredya19:22
ianwthat would be migrations of such things to more containerised approaches?19:23
clarkbianw: in many cases yes I expect we'll couple the container deployment to the new naming scheme. In particular we have to update apache configs for many things so may as well take that on with the container approach (or ansible if containers just don't make sense)19:24
mordred++19:24
fungispeaking of, has the approach so far been application container with apache proxy in the outer system context?19:25
fungior are we deploying apache container proxying to application container?19:25
clarkbI'm not sure any of our currently docker'd services have an apache container (grafana might?)19:26
mordredI agree with clarkb19:26
mordredwe're not apache-ing gitea19:26
fungithinking about upcoming work i expect to be doing on mailman 3 and maybe mediawiki19:26
clarkbfungi: but because we use the system network namespace we should be able to have an apache container listen on 443 that proxies to port 8080 or whatever just like we do today19:26
mordredyup. but we could also install apache on the host os and have it proxy to port 8080 too, if we find that to be more pleasant19:27
ianwclarkb: grafana not dockered yet ...19:27
clarkbianw: ah19:27
fungigot it. and if the application needs a database, is that in yet another container or in the system context?19:27
mordredcontainer19:27
clarkbin any case we don't have to think about what traffic looks like over a docker bridge19:27
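Either way, the proxy arrangement being discussed comes down to something like the following sketch; the image name, port, paths and certificates are all placeholders, not the actual opendev configuration:

    # Run the application container in the host network namespace, listening
    # on a local high port (8080 is just an example).
    docker run -d --name app --network host example/app
    # Apache (on the host, or in its own container sharing the host network)
    # terminates TLS on 443 and proxies to that port. A minimal vhost:
    cat > /etc/apache2/sites-available/app.conf <<'EOF'
    <VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/app.pem
        SSLCertificateKeyFile /etc/ssl/private/app.key
        ProxyPass / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>
    EOF
    a2enmod ssl proxy proxy_http && a2ensite app && systemctl reload apache2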
*** mlavalle has left #openstack-meeting19:28
clarkbfungi: gitea runs a mariadb container19:28
fungicool19:28
clarkbfungi: with all of the state mounted from the host19:28
clarkb(which was important after we nuked docker with k8s)19:28
fungibindmount?19:28
clarkbfungi: ya19:28
mordredyeah. and I think for stuff like that it gives us a good way to get modern db services from our upstreams19:28
fungiso do we have a standard mariadb container image, or is it gitea-specific?19:29
mordredso I could see us deciding to docker the apache when we get to it, for consistency's sake and easy access to the latest apache19:29
fungilike, for example if i wanted to dockerize the mediawiki deployment19:29
*** whoami-rajat has quit IRC19:29
mordredfungi: standard mariadb container19:30
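Reduced to plain docker commands, the pattern being described (a stock mariadb container with its state bind-mounted from the host) looks roughly like this; names, paths and credentials are placeholders, and the real deployment drives this through the docker-compose template linked later in the meeting:

    # Placeholder values throughout; this mirrors the pattern, not the real config.
    docker run -d --name gitea-db \
        -v /var/lib/gitea-db:/var/lib/mysql \
        -e MYSQL_ROOT_PASSWORD=changeme \
        -e MYSQL_DATABASE=gitea \
        mariadb:10.4
    # Because the datadir is a bind mount, the database contents survive the
    # container (or whatever orchestrates it) being removed and recreated.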
fungigranted, i haven't thought about what this looks like for applications which run in the context of an apache plugin19:30
mordredfungi: we have one of those already!19:30
fungioh?19:30
mordredfungi: zuul-proxy is an apache thing - so for that, we build a container based on the apache container and then install zuul-proxy in it19:30
clarkboh right we do have an apache example then19:31
fungiaha, so i could in theory do the same with mod_php and fastcgi and whatever else19:31
clarkbfungi: yup19:31
mordredfungi: yup19:31
clarkbfungi: also I've successfully dockered my local nextcloud which does php cgi things and can look at how they've constructed things19:31
mordredand the nice part about doing that is it locates most of the work in the CI step19:31
fungiand then bindmount in all the application scripts and data19:31
mordredfungi: exactly19:31
fungior would the application scripts go in the image and then just bindmount the data?19:32
mordredI'd put the application scripts in the image19:32
mordredand just bind mount data19:32
mordredand any config files19:32
mordredfungi: https://opendev.org/zuul/zuul-preview/src/branch/master/Dockerfile is the zuul-preview Dockerfile19:32
fungii guess depends on whether you want to rebuild the container image each time the application source changes19:33
mordredand is a good example19:33
clarkbthey use fpm in a dedicated container that runs separately from nginx19:33
fungioh interesting19:33
mordredfungi: I'd argue we do want to rebuild container image with each source change, since then we can CI the change19:33
fungisure, makes sense19:33
clarkbwhich does the port proxy thing19:33
mordredfungi: also https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/gitea/templates/docker-compose.yaml.j2 is good to cargo-cult from for things with databases19:34
fungiso what is our take if the application already has a recommended docker image? rebuild that if we can rather than writing from scratch?19:35
clarkbfungi: in some cases we use it as is (mariadb) in others we layer over the top (zuul-preview + apache) and in others we build from scratch (gitea)19:35
mordredfungi: kind of depends - looking at how they're building their docker image and whether it would be nice to use theirs or build our own is a judgement call19:35
clarkblikely comes down to how many changes we need to make19:35
fungithe two examples i gave both also maintain docker images (mailman3 and mediawiki)19:35
mordredgitea maintains a docker image - we do not use it19:36
clarkbya not sure there is a hard and fast rule here19:36
mordredbut - I think it'll mostly come down to analyzing whether the one that's there is good for our use19:36
fungiwill keep that in mind19:36
ianwi guess this is actually very similar to choosing a puppet module -- if upstream stops/goes away etc you're left holding the upgrade bag anyway (see: ask.openstack.org)19:36
mordredthe gitea upstream image puts multiple processes in a single image19:36
mordredso it's more like a lightweight vm19:36
mordredand we were wanting to embrace the single-process-container model a bit more (iirc)19:37
fungiahh, so that's one of the smells we'd check for19:37
mordredianw: ++19:37
clarkbAnything else on opendev before we move on? running out of time and want to get to the other topics19:38
fungihuh, the mediawiki dockerfile sets it up with sqlite19:38
fungian interesting choice19:39
clarkb#topic Storyboard19:39
*** openstack changes topic to "Storyboard (Meeting topic: infra)"19:39
clarkblooks like storyboard has elected to have the openstack release team manage its release process19:39
fungithe telemetry team for openstack moved all their task tracking over to storyboard.openstack.org at the end of last week and i imported all their bugs over the weekend19:39
fungiyeah, i'm not sure what the deal is with release process choices there, maybe diablo_rojo_phon or SotK can comment19:40
clarkbI don't mind as long as they are happy with it. Also we'll have to keep in mind we can't just push a tag if they are using the managed process19:41
fungii don't think storyboard has been doing releases up to now19:41
clarkbah19:41
fungibut maybe the idea is to start? i really don't know19:41
clarkbseems like a reasonable thing to do19:42
clarkbanything else? We've got a number of topics to go so I'll keep this moving19:42
fungialso on import tooling, the patch to make it so we can import launchpad blueprints as stories is semi-usable now, i pushed some fixes for it last week, but there are still a number of gotchas i spotted which need to be addressed before it's usable for production imports19:42
fungii didn't have anything else to mention19:43
clarkb#topic General Topics19:43
*** openstack changes topic to "General Topics (Meeting topic: infra)"19:43
clarkbFirst up Gerrit project renames19:43
fungifriday the 31st still?19:43
clarkbI think so. mordred has indicated he can't help that day iirc and I imagine ianw would rather sleep/weekend19:44
clarkbbut I expect fungi corvus and myself to be around19:44
clarkbThere are a number of things we should get sorted out before that day though19:44
clarkb#link https://review.opendev.org/#/c/655476/ Fixes for rename playbook19:44
*** jamesmcarthur_ has quit IRC19:44
clarkbI think we want that change or something like it ready to go first19:44
mordredclarkb: yes - I cannot help that day - but I doubt I'll be needed19:44
clarkbThen we also want to generate changes for the openstack-infra stragglers and the openstack-dev straggler repos19:45
clarkbso that we can follow normal renaming process19:45
clarkbother projects like airship have already started to push their changes19:45
clarkbany volunteers to do this for the infra repos?19:45
clarkbmaybe we can start with an etherpad to collect the ones we think need to be renamed. How about https://etherpad.openstack.org/openstack-infra-stragger-opendev-renames19:46
clarkbI just made that url up hopefully not a name collision19:46
ianwi can help get things ready ... but yeah a list would be good19:47
clarkb#link https://etherpad.openstack.org/openstack-infra-stragger-opendev-renames Put list of repos that need to be renamed for openstack-infra and openstack-dev here19:47
clarkbNext up is the trusty server upgrades19:47
clarkb#link https://etherpad.openstack.org/p/201808-infra-server-upgrades-and-cleanup19:47
*** radeks_ has quit IRC19:48
clarkbWe are down to status, static, refstack, and wiki19:48
clarkbthank you ianw for taking care of ask.o.o19:48
fungi#link https://review.opendev.org/651352 Replace transitional package names for Xenial19:48
fungithat could use another review19:48
mordredI've got status on my list next after gitea 1.8.0 patches - now that the per-service playbook patch has landed19:48
ianw... if we have a sec, i would like to discuss our thoughts on the future of ask.o.o, but maybe at the end if there's time19:49
clarkbianw: ya should have a few minutes at the end19:49
fungior we can discuss it in #openstack-infra after if we run out19:49
clarkbThe last major item on the agenda is the opendev in region mirrors deployment19:49
clarkb#link https://review.opendev.org/#/c/658281/ actual change to implement opendev.org mirrors19:49
clarkbif you have time to review that please do. This will get us tls on our mirrors19:50
clarkband update us to bionic19:50
fungithanks mordred! i'll take another go at wiki-dev after that merges19:50
clarkb#topic Open Discussion19:51
*** openstack changes topic to "Open Discussion (Meeting topic: infra)"19:51
clarkbaka the future of ask19:51
fungias we've discussed previously, it's a bit of an unusual service in that the sort of folks who are relying on it are highly unlikely to be the folks who want to help us maintain it19:51
clarkbianw: I was thinking about this earlier today and one idea I had was to send email to the openstack-discuss list explaining that it is basically on life support via some hacky workarounds (link to those details), then ask people to help us do it better and mention that docker/ansible are options19:52
clarkbAnd let that serve as notice that we'll probably just turn the service off when xenial is eol19:52
fungiif it's got any future, it probably needs people who have an interest in it continuing to exist because their project benefits from it being available to their users19:52
clarkbassuming no improvements are made to make it sustainable19:52
ianwyeah, that was sort of my plan, draft an email and send something, just wanted to make sure others were ok with it19:52
mordredclarkb, ianw: should we wait just a little bit on that mirror patch and boot them using the ubuntu-bionic-minimal images we're about to start building?19:52
clarkbmordred: that seems reasonable if we can get these changes in soonish19:53
mordredalso - is linaro-london a thing?19:53
clarkbmordred: however our mirrors have long been run on cloud provided images so probably not a major deal if we don't get that in first19:53
clarkbmordred: aiui linaro-london is a thing but linaro the china cloud is no longer19:53
ianwmordred: ok, i have a couple of changes out to add dns entries and set up the first mirror, which i've launched. i wouldn't mind debugging issues on that19:53
mordredgotcha - so we should add linaro-london to the image-building patch19:54
ianw#link https://review.opendev.org/#/c/660235/ dns entries19:54
fungiwe could also stand to learn from history. we did send a similar message two or three years ago about turning off the wiki, and there was outcry that people who are unable to help maintain it heavily use it, and then we ended up not turning it off even though it's still mostly unmaintained19:54
ianw#link https://review.opendev.org/#/c/660237/ system-config updates19:54
mordredcool- mostly was thinking it's not a huge deal - but since we're about to boot a bunch of things, it seems like good timing if we can make it work19:54
mordredbut if not - I don't think we should hold up progress19:54
clarkbfungi: ya though in the case of the wiki the people that heavily use it are the types that can direct resources to help which maybe makes that a worse situation than ask19:55
fungiyep19:55
mordredwe can always just make jimmy run it19:55
clarkbianw: sending that email has my vote19:55
clarkbianw: just to be clear on that19:55
mordredhe's not busy enough already :)19:55
ianwyeah, i mean we don't want to be taking on major puppet refactors at this point19:56
ianwtaking over dead upstream puppet modules etc19:57
ianwstackoverflow have a sort of community thing  https://area51.stackexchange.com/faq19:57
clarkbIn general I think we need to start making arguments for helping the opendev team from the constituent projects, particularly when a service (like ask) doesn't have a ton of overlap in end users19:58
clarkbthis seems like a reasonable start since the service is on life support19:58
ianwreally, in terms of "critical mass of people looking at your problems, one of whom might have an answer" being on a larger site like stackoverflow probably has a lot of benefits ...19:58
*** senrique_ has quit IRC19:58
mordredyeah. it's also openstack specific - so it's only a service serving one of our tenants19:58
mordred(ask is)19:58
mordred(that's me agreeing with clarkb)19:58
mordred(parenthetically)19:59
fungiwell, the same could be said of any of our services which currently say "openstack" on them19:59
mordredfungi: sure - but some of them are more likely to be renamed .opendev.org  - like paste or etherpad - and serve the general opendev world19:59
fungiit could be made not-openstack-specific if we wanted, but at that point we really are just running a site competing with far better-funded alternatives19:59
mordredothers are specific to serving the openstack project ecosystem - which is also fine19:59
mordredfungi: exactly19:59
clarkbalso many of them are developer tools and developers seem to have a much easier time of understanding "this says openstack but it's an open tool for me to use"20:00
mordredyah20:00
clarkband we are at time20:00
clarkbthank you everyone20:00
clarkb#endmeeting20:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"20:00
openstackMeeting ended Tue May 21 20:00:22 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-05-21-19.01.html20:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-05-21-19.01.txt20:00
openstackLog:            http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-05-21-19.01.log.html20:00
*** Shrews has left #openstack-meeting20:01
*** jamesmcarthur has joined #openstack-meeting20:02
*** senrique_ has joined #openstack-meeting20:07
*** bbowen_ has quit IRC20:18
*** senrique_ has quit IRC20:37
*** e0ne has joined #openstack-meeting20:39
*** enriquetaso has joined #openstack-meeting20:50
*** e0ne has quit IRC20:56
*** jamesmcarthur has quit IRC21:11
*** diablo_rojo has joined #openstack-meeting21:19
*** bbowen_ has joined #openstack-meeting21:27
*** slaweq has quit IRC21:27
*** Lucas_Gray has joined #openstack-meeting21:28
*** _alastor_ has joined #openstack-meeting21:40
*** slaweq has joined #openstack-meeting21:43
*** raildo has quit IRC21:45
*** _alastor_ has quit IRC21:46
*** Lucas_Gray has quit IRC21:47
*** slaweq has quit IRC21:48
*** mriedem has quit IRC21:51
*** Lucas_Gray has joined #openstack-meeting21:52
*** ykatabam has joined #openstack-meeting22:00
*** senrique_ has joined #openstack-meeting22:01
*** enriquetaso has quit IRC22:02
*** _alastor_ has joined #openstack-meeting22:05
*** rcernin has joined #openstack-meeting22:06
*** tesseract has quit IRC22:07
*** _alastor_ has quit IRC22:10
*** senrique_ has quit IRC22:14
*** ttsiouts has joined #openstack-meeting22:15
*** carloss has quit IRC22:22
*** senrique_ has joined #openstack-meeting22:29
*** ayoung has quit IRC22:31
*** ekcs has quit IRC22:31
*** ekcs has joined #openstack-meeting22:35
*** senrique_ has quit IRC22:37
*** senrique_ has joined #openstack-meeting22:38
*** rubasov has quit IRC22:38
*** _alastor_ has joined #openstack-meeting22:39
*** rcernin has quit IRC22:40
*** ttsiouts has quit IRC22:40
*** rcernin has joined #openstack-meeting22:41
*** armax has quit IRC22:50
*** ttsiouts has joined #openstack-meeting22:56
*** ttsiouts has quit IRC23:01
*** ttsiouts has joined #openstack-meeting23:05
*** mattw4 has quit IRC23:11
*** mattw4 has joined #openstack-meeting23:12
*** _erlon_ has quit IRC23:17
*** Lucas_Gray has quit IRC23:20
*** senrique_ has quit IRC23:38
*** macza has quit IRC23:39
*** macza has joined #openstack-meeting23:40
*** lseki has quit IRC23:43
*** macza has quit IRC23:47
