Tuesday, 2019-09-03

*** jamesmcarthur has quit IRC00:06
*** mmethot has joined #openstack-meeting00:40
*** markvoelker has joined #openstack-meeting00:46
*** markvoelker has quit IRC00:51
*** jamesmcarthur has joined #openstack-meeting01:00
*** yamamoto has joined #openstack-meeting01:00
*** cheng1 has quit IRC01:05
*** yamamoto has quit IRC01:05
*** cheng1 has joined #openstack-meeting01:07
*** radeks has quit IRC01:15
*** jamesmcarthur has quit IRC01:17
*** dviroel has quit IRC01:31
*** hongbin has joined #openstack-meeting01:44
*** mhen has quit IRC01:50
*** yamamoto has joined #openstack-meeting02:01
*** apetrich has quit IRC02:10
*** larainema has joined #openstack-meeting02:12
*** jamesmcarthur has joined #openstack-meeting02:17
*** bobh has joined #openstack-meeting02:25
*** jamesmcarthur has quit IRC02:27
*** bobh has quit IRC02:32
*** bobh has joined #openstack-meeting02:40
*** bobh has quit IRC02:47
*** hongbin has quit IRC03:26
*** hongbin has joined #openstack-meeting03:28
*** e0ne has joined #openstack-meeting03:47
*** ykatabam has quit IRC03:50
*** tpatil has joined #openstack-meeting04:00
*** ykatabam has joined #openstack-meeting04:18
*** hongbin has quit IRC04:29
*** tpatil has quit IRC04:30
*** jamesmcarthur has joined #openstack-meeting04:31
*** jamesmcarthur has quit IRC04:35
*** links has joined #openstack-meeting04:49
*** e0ne has quit IRC04:49
*** hongbin has joined #openstack-meeting04:49
*** hongbin has quit IRC04:49
*** e0ne has joined #openstack-meeting04:50
*** e0ne has quit IRC04:51
*** Luzi has joined #openstack-meeting04:59
*** Garyx has quit IRC05:07
*** e0ne has joined #openstack-meeting05:08
*** Garyx has joined #openstack-meeting05:09
*** yamamoto has quit IRC05:12
*** e0ne has quit IRC05:16
*** yamamoto has joined #openstack-meeting05:29
*** dmacpher_ has joined #openstack-meeting05:36
*** dmacpher has quit IRC05:39
*** jraju__ has joined #openstack-meeting05:42
*** links has quit IRC05:42
*** yamamoto has quit IRC05:50
*** ykatabam has quit IRC05:57
*** e0ne has joined #openstack-meeting06:00
*** e0ne has quit IRC06:03
*** e0ne has joined #openstack-meeting06:12
*** e0ne has quit IRC06:20
*** yamamoto has joined #openstack-meeting06:21
*** yamamoto has quit IRC06:25
*** radeks has joined #openstack-meeting06:31
*** yamamoto has joined #openstack-meeting06:32
*** keiko-k has joined #openstack-meeting06:49
*** aloga has joined #openstack-meeting06:55
*** tridde has quit IRC07:00
*** hyunsikyang has quit IRC07:02
*** pcaruana has joined #openstack-meeting07:06
*** tesseract has joined #openstack-meeting07:06
*** trident has joined #openstack-meeting07:09
*** keiko-k has quit IRC07:09
*** slaweq has joined #openstack-meeting07:09
*** keiko-k has joined #openstack-meeting07:13
*** rcernin has quit IRC07:23
*** kaisers has joined #openstack-meeting07:24
*** joxyuki has joined #openstack-meeting07:29
*** tpatil has joined #openstack-meeting07:37
*** jbadiapa has quit IRC07:40
*** hyunsikyang has joined #openstack-meeting07:41
*** kopecmartin|off is now known as kopecmartin07:44
*** priteau has joined #openstack-meeting07:47
*** dkushwaha has joined #openstack-meeting07:55
*** jbadiapa has joined #openstack-meeting07:56
*** keiko-k has quit IRC07:57
dkushwaha#startmeeting tacker08:02
openstackMeeting started Tue Sep  3 08:02:17 2019 UTC and is due to finish in 60 minutes.  The chair is dkushwaha. Information about MeetBot at http://wiki.debian.org/MeetBot.08:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.08:02
*** openstack changes topic to " (Meeting topic: tacker)"08:02
openstackThe meeting name has been set to 'tacker'08:02
*** nitinuikey has joined #openstack-meeting08:02
dkushwaha#topic Roll Call08:02
*** openstack changes topic to "Roll Call (Meeting topic: tacker)"08:02
tpatilHi08:02
*** keiko-k has joined #openstack-meeting08:02
dkushwahatpatil, hello08:02
*** rubasov has quit IRC08:04
dkushwahawho is here for Tacker weekly meeting?08:04
keiko-kHello08:04
dkushwahahi keiko-k08:04
*** rubasov has joined #openstack-meeting08:05
joxyukihi08:05
*** links has joined #openstack-meeting08:05
dkushwahahello joxyuki08:05
hyunsikyanghi08:06
*** jraju__ has quit IRC08:06
nitinuikeyhi08:06
dkushwahahi all08:07
dkushwahalets start..08:08
*** rubasov has quit IRC08:08
*** rubasov has joined #openstack-meeting08:08
dkushwaha#topic announcement08:08
*** openstack changes topic to "announcement (Meeting topic: tacker)"08:08
*** e0ne has joined #openstack-meeting08:09
dkushwahaPTL nominations are in progress. I have nominated myself again for the U cycle08:09
dkushwaha#link http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009005.html08:09
joxyukigreat :)08:11
dkushwaha2: We have less than a month until the Train cycle release08:11
dkushwaha#link https://releases.openstack.org/train/schedule.html08:11
dkushwahawe need to hurry with our patches08:12
dkushwaha#chair joxyuki08:12
openstackCurrent chairs: dkushwaha joxyuki08:12
dkushwaha#topic BP08:12
*** openstack changes topic to "BP (Meeting topic: tacker)"08:12
dkushwahatpatil, anything from you on VNF package..08:13
tpatilYes, uploaded an updated spec, and I see you have voted +2 on it08:13
tpatilabout the implementation, we have made major changes to the patches; we will push them by tomorrow.08:14
*** ociuhandu has joined #openstack-meeting08:14
*** rubasov has quit IRC08:14
*** rubasov has joined #openstack-meeting08:14
dkushwahatpatil, nice08:14
tpatilpython-tackerclient: moved the commands from python-openstackclient to tackerclient and currently working on writing the unit tests08:14
tpatilin the current python-tackerclient, there are no unit tests written for the currently supported OSC commands08:15
tpatiladded the framework and also using requests_mock here08:15
tpatiltackerclient: patches will be pushed by e.o.d tomorrow08:15
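A minimal sketch of the kind of unit test that requests_mock enables for these client commands; the client class, endpoint and payload below are hypothetical illustrations of the pattern, not taken from the actual tackerclient patches.

    # Hypothetical sketch: testing an HTTP client method against a mocked API.
    import unittest

    import requests
    import requests_mock


    class FakeVnfPackagesClient(object):
        """Stand-in for a tackerclient-style client; not the real class."""

        def __init__(self, endpoint):
            self.endpoint = endpoint

        def list_vnf_packages(self):
            resp = requests.get(self.endpoint + '/vnfpkgm/v1/vnf_packages')
            resp.raise_for_status()
            return resp.json()['vnf_packages']


    class TestListVnfPackages(unittest.TestCase):

        @requests_mock.Mocker()
        def test_list_vnf_packages(self, m):
            # Register a canned response so no real HTTP request is made.
            m.get('http://tacker.example/vnfpkgm/v1/vnf_packages',
                  json={'vnf_packages': [{'id': 'pkg-1'}]})
            client = FakeVnfPackagesClient('http://tacker.example')
            packages = client.list_vnf_packages()
            self.assertEqual(1, len(packages))
            self.assertEqual('pkg-1', packages[0]['id'])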
dkushwahatpatil, sounds great.08:15
tpatiltosca-parser: patch is already up on the community gerrit. please review08:15
tpatil#link : https://review.opendev.org/#/c/675561/08:16
dkushwahatpatil, i will review it.08:18
dkushwahaIt gets tough on the tosca-parser side as it is not under the Tacker repo08:18
dkushwahaI will ping the tosca-parser folks to see if they are available08:18
tpatilI can understand. Niraj has already sent a mail to Bob asking him to review the tosca-parser patch08:18
tpatilWe will try to follow-up with Bob again08:19
dkushwahatpatil, hmm, i also pinged him about some other patches, but there seems to be no response08:19
dkushwahatpatil, i think the doc is still missing in the package support patch. Once the client patch is available, could you please give some highlights on how to test?08:21
tpatilSure, I will add the doc to explain how to setup and test vnf packages08:22
dkushwahatpatil, thanks08:22
dkushwahamoving next..08:23
dkushwaha#topic Gate Jobs08:23
*** openstack changes topic to "Gate Jobs (Meeting topic: tacker)"08:24
dkushwahaIn the functional tests, I observed multiple jobs failing randomly. I am working on fixing them08:25
dkushwahaMy plan is: 1: Fix the current functional jobs. 2: Functional jobs are currently non-voting; we need to make them voting for the stability and health of the project08:26
dkushwahaworking on the same08:26
joxyuki+1 :)08:27
dkushwahamoving next..08:28
dkushwaha#topic multi-container support08:28
*** openstack changes topic to "multi-container support (Meeting topic: tacker)"08:28
dkushwahahyunsikyang, anything from your side08:29
hyunsikyangI uploaded the initial-stage code first. I checked your review.08:30
hyunsikyangThanks.08:30
hyunsikyangAccording to your comment, I will fix it.08:31
dkushwahahyunsikyang, thanks08:32
hyunsikyangAnd about the user guide, I will add mine to the kubernetes VIM user guide.08:32
dkushwahahyunsikyang, yea, I think it is not a good idea to have a separate doc for multi-interface, so it can be merged into the existing one08:33
*** rubasov has quit IRC08:33
hyunsikyangOK08:34
dkushwahamoving next..08:34
dkushwaha#topic Open Discussion08:35
*** openstack changes topic to "Open Discussion (Meeting topic: tacker)"08:35
hyunsikyangCan I ask one thing? Are there any docs on writing test code?08:36
*** ralonsoh has joined #openstack-meeting08:37
dkushwahahyunsikyang, do you mean a doc on adding test cases?08:38
hyunsikyangyes:)08:38
joxyukithis is a guide for running unit tests, but not for creating them. https://docs.openstack.org/project-team-guide/project-setup/python.html#running-python-unit-tests08:38
hyunsikyangAh Thanks Jo:)08:38
joxyukiI do not know of any others. I learned from the existing code.08:39
hyunsikyangok! thanks08:40
*** tesseract has quit IRC08:40
dkushwahahyunsikyang, that doc is about running/debugging existing test cases, but there is no doc on how to add test cases. You can check existing test cases and test-case patches for reference08:40
hyunsikyangok:)08:40
hyunsikyangI will check other patches08:41
dkushwahahyunsikyang, https://review.opendev.org/#/c/582487/ https://review.opendev.org/#/c/662171/08:42
dkushwahathese patches will give some idea of how to add FTs and UTs08:43
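For reference, a bare-bones skeleton of the shape a new unit test usually takes in OpenStack projects (testtools/stestr style); the function under test here is invented purely for illustration, and real Tacker tests live under tacker/tests/unit/ and run via tox.

    # Hypothetical skeleton of an OpenStack-style unit test; normalize_name
    # is a made-up stand-in for the real code under test.
    import testtools


    def normalize_name(name):
        """Toy helper standing in for the code being tested."""
        return name.strip().lower()


    class TestNormalizeName(testtools.TestCase):

        def test_strips_and_lowercases(self):
            self.assertEqual('vnf-a', normalize_name('  VNF-A '))

        def test_empty_string(self):
            self.assertEqual('', normalize_name(''))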
*** ociuhandu has quit IRC08:43
dkushwahaotherwise you can ping me on the Tacker channel for any help08:43
dkushwahaDo we have anything else to discuss? otherwise we can close this meeting08:45
hyunsikyangOk thanks all:)08:48
dkushwahaThanks folks, closing this meeting. Please keep helping with reviews as well.08:50
dkushwaha#endmeeting08:50
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"08:50
openstackMeeting ended Tue Sep  3 08:50:21 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)08:50
openstackMinutes:        http://eavesdrop.openstack.org/meetings/tacker/2019/tacker.2019-09-03-08.02.html08:50
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/tacker/2019/tacker.2019-09-03-08.02.txt08:50
openstackLog:            http://eavesdrop.openstack.org/meetings/tacker/2019/tacker.2019-09-03-08.02.log.html08:50
*** tpatil has quit IRC08:54
*** rcernin has joined #openstack-meeting09:00
*** slaweq has quit IRC09:22
*** slaweq has joined #openstack-meeting09:23
*** brinzhang_ has joined #openstack-meeting09:24
*** brinzhang_ has quit IRC09:25
*** brinzhang_ has joined #openstack-meeting09:25
*** brinzhang has quit IRC09:27
*** apetrich has joined #openstack-meeting09:36
*** Zoup has joined #openstack-meeting09:48
*** Zoup has quit IRC09:55
*** yamamoto has quit IRC09:56
*** ianw has quit IRC10:00
*** ianw has joined #openstack-meeting10:08
*** artom has joined #openstack-meeting10:13
*** dviroel_ has joined #openstack-meeting10:21
*** keiko-k has quit IRC10:25
*** ianw has quit IRC10:27
*** ianw has joined #openstack-meeting10:28
*** ianw has quit IRC10:28
*** ianw has joined #openstack-meeting10:29
*** yamamoto has joined #openstack-meeting10:29
*** ianw has quit IRC10:34
*** yamamoto has quit IRC10:35
*** ianw has joined #openstack-meeting10:36
*** ociuhandu has joined #openstack-meeting10:37
*** ianw_ has joined #openstack-meeting10:41
*** ianw_ has quit IRC10:41
*** ianw_ has joined #openstack-meeting10:43
*** ianw_ has quit IRC10:44
*** ianw has quit IRC10:44
*** ianw has joined #openstack-meeting10:44
*** bbowen has quit IRC10:46
*** ianw has quit IRC10:46
*** ianw has joined #openstack-meeting10:50
*** ianw has quit IRC10:52
*** ianw has joined #openstack-meeting10:52
*** nitinuikey has quit IRC11:01
*** rubasov has joined #openstack-meeting11:04
*** markvoelker has joined #openstack-meeting11:09
*** shilpasd has joined #openstack-meeting11:14
*** markvoelker has quit IRC11:14
*** ociuhandu has quit IRC11:23
*** carloss has joined #openstack-meeting11:31
*** njohnston has joined #openstack-meeting11:34
*** Garyx has quit IRC11:38
*** Garyx has joined #openstack-meeting11:41
*** Garyx has quit IRC11:41
*** Garyx has joined #openstack-meeting11:41
*** ociuhandu has joined #openstack-meeting11:43
*** ociuhandu has quit IRC11:48
*** bbowen has joined #openstack-meeting11:51
*** ricolin_ has joined #openstack-meeting11:51
*** ociuhandu has joined #openstack-meeting11:52
*** ricolin has quit IRC11:54
*** bbowen_ has joined #openstack-meeting11:54
*** bbowen has quit IRC11:56
*** Lucas_Gray has joined #openstack-meeting12:01
*** larainema has quit IRC12:15
*** markvoelker has joined #openstack-meeting12:18
*** yamamoto has joined #openstack-meeting12:25
*** ociuhandu has quit IRC12:30
*** ociuhandu has joined #openstack-meeting12:30
*** dviroel_ has quit IRC12:31
*** ociuhandu has quit IRC12:31
*** ociuhandu_ has joined #openstack-meeting12:31
*** ociuhandu has joined #openstack-meeting12:32
*** Lucas_Gray has quit IRC12:32
*** Lucas_Gray has joined #openstack-meeting12:35
*** ociuhandu_ has quit IRC12:36
*** bbowen_ has quit IRC12:37
*** bbowen has joined #openstack-meeting12:39
*** dviroel_ has joined #openstack-meeting12:45
*** jamesmcarthur has joined #openstack-meeting12:46
*** dviroel_ is now known as dviroel12:47
*** redrobot has quit IRC12:55
*** dmsimard has quit IRC12:56
*** dmsimard has joined #openstack-meeting12:58
*** david-lyle has quit IRC13:02
*** dklyle has joined #openstack-meeting13:02
*** pcaruana has quit IRC13:03
*** ociuhandu has quit IRC13:21
*** ociuhandu has joined #openstack-meeting13:22
*** enriquetaso has joined #openstack-meeting13:23
*** ociuhandu has quit IRC13:28
*** ociuhandu has joined #openstack-meeting13:29
*** mriedem has joined #openstack-meeting13:29
*** Garyx has quit IRC13:30
*** cheng1 has quit IRC13:31
*** cheng1 has joined #openstack-meeting13:32
*** pcaruana has joined #openstack-meeting13:34
*** redrobot_ has joined #openstack-meeting13:36
*** redrobot_ is now known as redrobot13:37
*** cheng1 has quit IRC13:38
*** Luzi has quit IRC13:41
*** cheng1 has joined #openstack-meeting13:41
*** brinzhang_ has quit IRC13:46
*** brinzhang_ has joined #openstack-meeting13:46
*** pcaruana has quit IRC13:47
*** eharney has joined #openstack-meeting13:49
*** yamamoto has quit IRC13:55
*** yamamoto has joined #openstack-meeting13:59
*** pcaruana has joined #openstack-meeting13:59
*** Roamer` has joined #openstack-meeting14:00
*** Garyx has joined #openstack-meeting14:03
*** yamamoto has quit IRC14:04
*** pcaruana has quit IRC14:07
*** pcaruana has joined #openstack-meeting14:10
*** ociuhandu has quit IRC14:13
*** ociuhandu has joined #openstack-meeting14:17
*** links has quit IRC14:20
*** Lucas_Gray has quit IRC14:27
*** rcernin has quit IRC14:32
*** ociuhandu has quit IRC14:34
*** cheng1 has quit IRC14:35
*** ociuhandu has joined #openstack-meeting14:35
*** cheng1 has joined #openstack-meeting14:38
*** ociuhandu has quit IRC14:39
*** Garyx has quit IRC14:47
*** Garyx has joined #openstack-meeting14:52
*** links has joined #openstack-meeting14:55
*** ociuhandu has joined #openstack-meeting15:07
*** pcaruana has quit IRC15:27
*** ociuhandu has quit IRC15:43
*** ociuhandu has joined #openstack-meeting15:44
*** ociuhandu has quit IRC15:47
*** ociuhandu has joined #openstack-meeting15:47
*** dtrainor has quit IRC15:50
*** beekneemech is now known as bnemec15:51
slaweq#startmeeting neutron_ci16:00
openstackMeeting started Tue Sep  3 16:00:06 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
*** e0ne has quit IRC16:00
*** openstack changes topic to " (Meeting topic: neutron_ci)"16:00
openstackThe meeting name has been set to 'neutron_ci'16:00
*** mlavalle has joined #openstack-meeting16:00
slaweqwelcome back after short break :)16:00
mlavallethanks!16:00
ralonsohhi16:00
slaweqlets wait 1 or 2 minutes for njohnston and others16:01
bcafarelo/ (though I will probably leave soon)16:01
*** cheng1 has quit IRC16:02
slaweqok, let's start then16:02
slaweqfirst of all: Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate16:03
slaweqplease open it now so that it will be ready later :)16:03
slaweq#topic Actions from previous meetings16:03
*** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)"16:03
slaweqralonsoh will continue working on error patterns and open bugs for functional tests16:03
ralonsohyes, I found today something that could be an error16:03
ralonsohrelated to pyroute216:03
ralonsohtomorrow, reviewing the CI tests, I'll report an error if there is one16:04
*** cheng1 has joined #openstack-meeting16:04
ralonsohno other patterns found this week16:04
ralonsohjust for information: the possible error is in test_get_devices_info_veth_different_namespaces16:05
ralonsohthat's all16:05
slaweqthx ralonsoh16:05
slaweqI also saw today 2 failures which I wanted to raise here but lets do it later in functional tests section16:05
slaweqnext one:16:05
slaweqmlavalle will continue debugging https://bugs.launchpad.net/neutron/+bug/183844916:05
openstackLaunchpad bug 1838449 in neutron "Router migrations failing in the gate" [Medium,Confirmed] - Assigned to Miguel Lavalle (minsel)16:05
mlavallelast night I left my latest comments here: https://bugs.launchpad.net/neutron/+bug/183844916:06
*** macz has joined #openstack-meeting16:07
slaweqmlavalle: so, based on Your comment, it looks like it could have been introduced by https://review.opendev.org/#/c/597567, right?16:07
slaweqas it is some race condition in case when router is updated16:08
mlavallewell, not necessarily that patch16:08
mlavallethere are 2 other patches that touched the "related routers" code later16:08
mlavallewhich I also mention in the bug16:08
mlavallethe naive solution would be for each test case in the test_migrations script to create its router in separate nets / subnets16:09
mlavallethat would fix our tests16:09
mlavallebut this is a real bug IMO16:10
mlavallewhich we need to fix16:10
mlavalleright?16:10
slaweqso different routers from those tests are using same networks/subnets now?16:10
bcafareland backport :)16:10
mlavalleI am assuming that16:11
slaweqisn't it that there is one migration "per test"?16:11
slaweqand then new network/subnet created per each test16:11
mlavallewell, if that is the case, the problem is even worse16:11
mlavallewhy would the router under migration have related routers?16:12
*** gyee has joined #openstack-meeting16:12
slaweqbut here: https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/scenario/test_migration.py#L124 it looks like every test has got own network and subnet16:13
slaweqIMO it's created here: https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/scenario/test_migration.py#L12916:13
slaweqthis method is defined here https://github.com/openstack/neutron-tempest-plugin/blob/d11f4ec31ab1cf7965671817f2733c362765ebb1/neutron_tempest_plugin/scenario/base.py#L17316:14
mlavallesee my comment just before yours please ^^^^16:14
mlavallewe agree16:14
slaweqyes, so it seems that it is "even worse" :)16:15
mlavalleI have a question that I want to confirm....16:16
slaweqshoot16:16
mlavallethe related routers ids are sent to the L3 agent in the router update rpc message, right?16:17
slaweqyes (IIRC)16:18
slaweqbut delete router should also be sent in such case to controller, no?16:18
mlavallein other words, this line https://github.com/openstack/neutron/blob/master/neutron/agent/l3/agent.py#L72016:18
mlavallereturns a non-empty list, correct?16:18
mlavallein the case of a router migration, what is sent from the server to the agent is a router update16:19
mlavallebecause we are not deleting the router16:19
mlavallewe are just setting admin_state_up to false16:20
mlavallelet's move on16:20
mlavalleI can test that locally16:20
slaweqbut, according to Your paste:16:20
slaweqhttp://paste.openstack.org/show/769795/16:21
slaweqrouter 2cd0c0f0-75ab-444c-afe1-be7c6a1a7702 was deleted on compute16:21
slaweqand was only updated on controller16:21
slaweqwhy?16:21
mlavalleyes, but that is the result of processing an update16:21
mlavallean update message16:21
mlavallenot a delete message16:21
slaweqtrue16:22
mlavallethat I am 100% sure16:22
mlavallethe difference is that when the update message contains "related routers" updates16:23
mlavalleit is processed in a different manner16:23
mlavalleand therefore the router is not deleted locally in the agent16:23
mlavalledeleting the router locally means removing the network namespace from that agent, even though the router still exists16:24
slaweqbut what are related routers in such case?16:24
mlavallethat is exactly the conundrum :-)16:24
slaweq:)16:24
mlavallewhy does the router under migration have related routers?16:25
slaweqmaybe You should send a patch with some additional debug logging to get that info later16:25
*** ricolin_ has quit IRC16:25
mlavallethat is exactly the plan16:25
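A rough sketch of the throwaway debug logging such a patch could add around the related-routers handling; the function and variable names below are assumptions for illustration, not the real code in neutron/agent/l3/agent.py.

    # Hypothetical placement only -- this stands in for the real
    # related-routers processing in the L3 agent.
    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)


    def log_related_routers(router_id, related_router_ids):
        # Temporary diagnostics: record which "related routers" arrive with a
        # router update so the migration failure can be traced in agent logs.
        LOG.debug("Router update for %(router)s carries related routers: "
                  "%(related)s",
                  {'router': router_id, 'related': related_router_ids})
        if related_router_ids:
            LOG.debug("Update for %s treated as a related-routers update; "
                      "the local namespace will not be removed", router_id)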
slaweqok16:25
slaweq#action mlavalle to continue investigating router migrations issue16:26
slaweqfine ^^?16:26
mlavalleyeap16:26
slaweqok16:26
slaweqthx a lot mlavalle for working on this16:27
slaweqlet's move on16:27
mlavalle:-)16:27
*** kopecmartin is now known as kopecmartin|off16:27
slaweqnext action16:27
slaweqnjohnston will get the new location for periodic jobs logs16:27
njohnstonI did that at the end of the last meeting16:27
njohnstonand then contacted the glance folks for their broken job that was affecting all postgresql16:27
slaweqnjohnston: do You have link to those logs then?16:28
slaweqjust to add it to CI meeting agenda for the future :)16:28
njohnstonyou can look here: http://zuul.openstack.org/builds?pipeline=periodic-stable&project=openstack%2Fneutron16:29
njohnstonor in the buildsets view http://zuul.openstack.org/buildsets?project=openstack%2Fneutron&pipeline=periodic16:30
*** ociuhandu_ has joined #openstack-meeting16:30
* mlavalle needs to step out 5 minutes16:31
mlavallebrb16:31
slaweqthx a lot16:31
slaweqand thx a lot for taking care of broken postgresql job16:31
slaweqok, let's move on to the next topic16:32
slaweq#topic Stadium projects16:32
*** openstack changes topic to "Stadium projects (Meeting topic: neutron_ci)"16:32
slaweqfirst16:32
slaweqPython 3 migration16:32
slaweqStadium projects etherpad: https://etherpad.openstack.org/p/neutron_stadium_python3_status16:32
slaweqI saw today that we are in pretty good shape with networking-odl now thanks to lajoskatona16:32
slaweqand we made some progress with networking-bagpipe16:32
slaweqso only "not touched" yet is networking-midonet16:33
*** ociuhandu has quit IRC16:33
slaweqanything else You want to add regarding the python 3 migration?16:33
njohnstonNo, I think that covers it, thanks!16:33
slaweqthx16:34
slaweqso next is16:34
*** ociuhandu_ has quit IRC16:35
slaweqtempest-plugins migration16:35
slaweqEtherpad: https://etherpad.openstack.org/p/neutron_stadium_move_to_tempest_plugin_repo16:35
slaweqany updates on this one?16:35
*** dtrainor has joined #openstack-meeting16:36
njohnstonI don't have any; I don't see tidwellr and mlavalle is AFK, they are the ones I would look to for updates.16:36
slaweqnjohnston: right16:36
slaweqso lets move on then16:36
slaweq#topic Grafana16:36
*** openstack changes topic to "Grafana (Meeting topic: neutron_ci)"16:36
slaweq#link http://grafana.openstack.org/dashboard/db/neutron-failure-rate16:36
slaweqI see that neutron-functional-python27 in gate queue was failing quite often last week16:38
slaweqwas there any known problem with it?16:38
slaweqdo You know?16:38
ralonsohno sorry16:38
slaweqok, maybe it wasn't a big problem as there were only a few runs of this job then16:39
slaweqso relatively speaking there weren't that many failures16:39
* mlavalle is back16:39
*** yamamoto has joined #openstack-meeting16:39
slaweqamong other things, I see that neutron-tempest-plugin-scenario-openvswitch in the check queue is failing quite often16:40
slaweqany volunteer to check this job more deeply?16:40
slaweqif not, I will assign it to myself for this week16:41
ralonsohI can take a look, but I don't know when16:41
ralonsohI have a very busy agenda now16:41
slaweqralonsoh: thx16:42
slaweqbut if You are busy, I can take a look into this16:42
slaweqso I will assign it to myself16:42
slaweqto not overload You :)16:42
slaweqok?16:42
ralonsohthanks!16:42
slaweq#action slaweq to check reasons of failures of neutron-tempest-plugin-scenario-openvswitch job16:43
slaweqanother thing I noticed from grafana: neutron-functional-with-uwsgi has a high failure rate, but it's non-voting so no big problem (yet)16:43
*** yamamoto has quit IRC16:43
slaweqI hope we will be able to focus more on stabilizing uwsgi jobs in next cycle :)16:44
njohnstonunit test failures look pretty high - 40% to 50% today16:44
slaweqnjohnston: correct16:45
slaweqbut again, please note that it's "just" 6 runs today16:45
slaweqso it doesn't have to be a big problem16:45
njohnstonok16:46
njohnston:-)16:46
slaweqand I think that I saw some patches with failures related to the patch itself16:46
slaweqso lets keep an eye on it :)16:46
slaweqok?16:46
njohnstonsounds good.  We'll have better data later - I see 10 neutron jobs in queue right now16:47
slaweqagree16:47
njohnstonFYI for those that don't know, you can put "neutron" in http://zuul.openstack.org/status to see all neutron jobs16:47
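The same filter can also be done programmatically; a rough sketch below pulls the status JSON and prints in-flight neutron items. The endpoint path and field names are assumptions about the Zuul status payload and may need adjusting against the live data.

    # Hypothetical sketch: list in-flight Zuul items for a given project.
    # The JSON layout (pipelines -> change_queues -> heads -> items) is an
    # assumption; verify it against the actual status endpoint.
    import json
    import urllib.request

    STATUS_URL = 'http://zuul.openstack.org/api/status'


    def print_items(project_substring='neutron'):
        with urllib.request.urlopen(STATUS_URL) as resp:
            status = json.load(resp)
        for pipeline in status.get('pipelines', []):
            for queue in pipeline.get('change_queues', []):
                for head in queue.get('heads', []):
                    for item in head:
                        project = item.get('project') or ''
                        if project_substring in project:
                            print(pipeline.get('name'), project, item.get('id'))


    if __name__ == '__main__':
        print_items()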
ralonsohbtw, test_get_devices_info_veth_different_namespaces is a problem now16:48
*** links has quit IRC16:48
ralonsohI see many CI jobs failing because of this16:48
ralonsohdo we have a new pyroute2 version?16:48
slaweqthx njohnston16:48
slaweqok, so as ralonsoh started with it, let's move to next topic :)16:48
slaweq#topic fullstack/functional16:48
*** openstack changes topic to "fullstack/functional (Meeting topic: neutron_ci)"16:49
ralonsohso yes, I need to check what is happening with this test16:49
slaweqand indeed I saw this test failing today also; at first I thought it was maybe related to the patch on which it was run16:49
ralonsohI'm on it now16:49
slaweqbut now it's clear it's some bigger issue16:49
slaweqtwo examples of failure:16:49
slaweq    https://a23f52ac6d169d81429a-a52e23b005b6607e27c6770fa63e26fe.ssl.cf1.rackcdn.com/679462/1/gate/neutron-functional/6d6a4c1/testr_results.html.gz16:49
slaweq    https://e33ddd780e29e3545bf9-6c7fec3fffbf24afb7394804bcdecfae.ssl.cf5.rackcdn.com/679399/6/check/neutron-functional/bc96527/testr_results.html.gz16:49
ralonsohyes, at least it is consistent16:50
slaweqohh, so it's now failing 100% times?16:50
ralonsohyes16:50
slaweqralonsoh: will You report bug for it?16:50
ralonsohyes16:51
slaweqthx16:51
slaweq#action ralonsoh to report bug and investigate failing test_get_devices_info_veth_different_namespaces functional test16:51
slaweqplease set it as critical :)16:51
ralonsohok16:51
slaweqthx16:52
slaweqas I said before, I also saw 2 other failures in functional tests today16:52
slaweq1. neutron.tests.functional.agent.test_firewall.FirewallTestCase.test_rule_ordering_correct16:52
slaweqhttps://019ab552bc17f89947ce-f1e24edd0ae51a8de312c1bf83189630.ssl.cf2.rackcdn.com/670177/7/check/neutron-functional-python27/74e7c20/testr_results.html.gz16:52
slaweqI saw it first time16:52
slaweqdid You maybe see something similar before?16:53
ralonsohno, first time16:53
ralonsohok, that's because we need https://review.opendev.org/#/c/679428/16:53
slaweqok, I will check tomorrow whether this failed test_rule_ordering_correct test was related to the patch on which it was running16:54
*** dtrainor has quit IRC16:54
slaweq#action slaweq to check reason of failure neutron.tests.functional.agent.test_firewall.FirewallTestCase.test_rule_ordering_correct16:54
slaweqralonsoh: we need https://review.opendev.org/#/c/679428/ to fix issue with test_get_devices_info_veth_different_namespaces ?16:55
slaweqor for what?16:55
ralonsohno no16:55
ralonsohthe last one, test_rule_ordering_correct16:55
*** dtrainor has joined #openstack-meeting16:55
ralonsohthe error in this test16:55
ralonsohFile "neutron/agent/linux/ip_lib.py", line 941, in list_namespace_pids16:55
ralonsoh    return privileged.list_ns_pids(namespace)16:55
slaweqahh, ok16:55
slaweq:)16:55
slaweqright16:56
slaweqmlavalle: njohnston: if You have some time, please review https://review.opendev.org/#/c/679428/ :)16:56
njohnstonslaweq ralonsoh: +2+w16:56
ralonsohthanks!16:56
slaweqnjohnston: thx :)16:56
slaweqYou're fast16:56
mlavallehe is indeed16:56
slaweqLOL16:57
slaweqok, and the second failed test which I saw:16:57
slaweqneutron.tests.functional.agent.linux.test_keepalived.KeepalivedManagerTestCase.test_keepalived_spawns_conflicting_pid_vrrp_subprocess16:57
slaweqhttps://storage.bhs1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/logs_66/677166/11/check/neutron-functional-python27/864c837/testr_results.html.gz16:57
slaweqbut I think I already saw something similar before16:57
ralonsohsame problem16:58
slaweqahh, right16:58
slaweqdifferent stack trace but there is list_ns_pids in it too :)16:58
slaweqok, so we should be in better shape in functional tests with Your patch16:58
slaweqwe are almost out of time16:59
slaweqso quickly, one last thing for today16:59
slaweq#topic Tempest/Scenario16:59
*** openstack changes topic to "Tempest/Scenario (Meeting topic: neutron_ci)"16:59
*** bbowen_ has joined #openstack-meeting16:59
slaweqRecently I noticed that we are testing all jobs with MySQL 5.716:59
slaweqSo I asked on ML about Mariadb: http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008925.html16:59
slaweqAnd I will need to add periodic job with mariadb to Neutron16:59
slaweqare You ok with such job?16:59
njohnston+116:59
ralonsohsure16:59
slaweqmlavalle? I hope You are fine too with such job :)17:00
mlavalleI'm ok17:00
slaweq#action slaweq to add mariadb periodic job17:00
slaweqthx17:00
slaweqso we are out of time now17:00
ralonsohbye!17:00
slaweqthx for attending17:00
mlavalleo/17:00
slaweq#endmeeting17:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"17:00
slaweqo/17:00
openstackMeeting ended Tue Sep  3 17:00:47 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-09-03-16.00.html17:00
njohnstono/17:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-09-03-16.00.txt17:00
openstackLog:            http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-09-03-16.00.log.html17:00
*** bbowen has quit IRC17:02
*** shilpasd has quit IRC17:02
*** ociuhandu has joined #openstack-meeting17:02
*** ociuhandu has quit IRC17:07
*** e0ne has joined #openstack-meeting17:12
*** enriquetaso has quit IRC17:13
*** mattw4 has joined #openstack-meeting17:21
*** e0ne has quit IRC17:23
*** e0ne has joined #openstack-meeting17:23
*** mattw4 has quit IRC17:29
*** pcaruana has joined #openstack-meeting17:29
*** e0ne has quit IRC17:30
*** artom has quit IRC17:31
*** artom has joined #openstack-meeting17:31
*** mattw4 has joined #openstack-meeting17:34
*** lseki has joined #openstack-meeting17:41
*** priteau has quit IRC17:43
*** brinzhang has joined #openstack-meeting17:45
*** brinzhang_ has quit IRC17:48
*** enriquetaso has joined #openstack-meeting18:01
*** markvoelker has quit IRC18:03
*** enriquetaso has quit IRC18:07
*** armax has joined #openstack-meeting18:08
*** enriquetaso has joined #openstack-meeting18:12
*** e0ne has joined #openstack-meeting18:15
*** jamesmcarthur has quit IRC18:19
*** markvoelker has joined #openstack-meeting18:20
*** markvoelker has quit IRC18:24
*** enriquetaso has quit IRC18:27
*** enriquetaso has joined #openstack-meeting18:28
*** bbowen__ has joined #openstack-meeting18:31
*** bbowen_ has quit IRC18:35
*** markvoelker has joined #openstack-meeting18:43
*** mlavalle has left #openstack-meeting18:46
*** jamesmcarthur has joined #openstack-meeting18:47
*** Shrews has joined #openstack-meeting18:49
*** gyee has quit IRC18:53
*** AJaeger_ has joined #openstack-meeting18:56
clarkbAnyone else here for the infra meeting? we'll get started in a few minutes18:59
fungisure, why not ;)18:59
clarkboh we have a fungi18:59
corvusclarkb: i won't be able to attend the meeting today, but should be back around tomorrow.  i plan on writing up a report from the gerrit user summit and sending it out this week.  we learned a lot about upgrading, developed closer ties with the gerrit community, and there's a lot of excitement around zuul.18:59
clarkbcorvus: that is great to hear18:59
* fungi is taking a quick break between completion of storm prep and start of evacuation prep19:00
clarkbI don't think we'll have mordred today either19:00
ianwo/19:00
corvuscorrect, i believe mordred is pto next 2 weeks19:00
clarkb#startmeeting infra19:01
openstackMeeting started Tue Sep  3 19:01:04 2019 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.19:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:01
*** openstack changes topic to " (Meeting topic: infra)"19:01
openstackThe meeting name has been set to 'infra'19:01
clarkb#link http://lists.openstack.org/pipermail/openstack-infra/2019-September/006474.html19:01
clarkb#topic Announcements19:01
*** openstack changes topic to "Announcements (Meeting topic: infra)"19:01
dmsimardclarkb: added a last minute topic on the wiki.19:01
AJaeger_o/19:02
clarkbdmsimard: ok if we have time we can get to it19:02
*** igordc has joined #openstack-meeting19:02
dmsimardsure19:02
clarkbWe are in openstack TC and PTL election period. If you would like to run for a position you have about ~4 hours left to nominate yourself19:02
AJaeger_any nominations for Infra yet?19:02
clarkbI've nominated myself for Infra PTL. As noted in that change If I do get elected for the U cycle I'd like to plan for this to be my last cycle19:02
AJaeger_thanks, clarkb !19:03
clarkbAJaeger_: at last check, just the one (myself)19:03
*** AJaeger_ is now known as AJaeger19:03
clarkbGiven that I'd like to put effort into mentoring more team members and leaders in the group if there is interest19:03
clarkbplease feel free to reach out if you have any interest in being more involved19:03
clarkbOther announcements: fungi I suppose we shouldn't count on your presence much the next few days?19:04
*** jamesmcarthur has quit IRC19:04
fungiprobably not, no19:05
fungii'll check in when i can19:05
clarkbfungi: let us know if you need help with stuff that is time sensitive19:05
clarkb#topic Actions from last meeting19:06
*** openstack changes topic to "Actions from last meeting (Meeting topic: infra)"19:06
fungiwill do, but i'm mostly avoiding time-sensitive stuff ;)19:06
clarkb#link http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-08-27-19.01.txt minutes from last meeting19:06
clarkbfungi: good idea :)19:06
clarkbI see no actions from last meeting19:07
*** gyee has joined #openstack-meeting19:07
clarkb#topic Priority Efforts19:07
*** openstack changes topic to "Priority Efforts (Meeting topic: infra)"19:07
clarkb#topic OpenDev19:07
*** openstack changes topic to "OpenDev (Meeting topic: infra)"19:07
clarkbYesterday we noticed that 4 of 8 gitea backends had stale gitconfig lockfiles which prevented gerrit git replication from happening19:08
clarkbthank you ianw for cleaning that up19:08
clarkbI think ianw tracked that back to poor IO performance based on journald complaints in dmesg at roughly the same time19:08
clarkbianw: ^ anything to add to that?19:08
clarkbI've still not come up with a great way to work around that. Maybe we need to update gitea to clean up stale locks?19:09
ianwi just had a thought that maybe we should put some randomisation into the git gc cron job, i think they all kick off at the same time?19:09
clarkbthey do kick off at the same time19:09
fungiyeah, maybe staggering them will help avoid a thundering herd on the storage backend19:11
clarkbseems like a reasonable thing to try19:11
AJaegergood idea19:11
clarkbianw: do you want to write that change or should we find a volunteer?19:12
ianwclarkb: just doing it now :)19:12
clarkbgreat thanks19:12
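One way to express that staggering; a sketch that derives a stable per-host minute offset so the backends stop kicking off gc at the same instant. The hostnames and the cron template here are illustrative, not the actual system-config change.

    # Hypothetical sketch of the stagger: hash each backend's hostname into a
    # minute offset so the git gc cron jobs no longer all fire together.
    import hashlib


    def gc_minute(hostname, spread=60):
        """Deterministic offset in [0, spread) derived from the hostname."""
        digest = hashlib.sha256(hostname.encode('utf-8')).hexdigest()
        return int(digest, 16) % spread


    if __name__ == '__main__':
        # Example: feed the offset into a cron entry like "<minute> 4 * * *".
        for host in ('gitea01.opendev.org', 'gitea02.opendev.org',
                     'gitea03.opendev.org', 'gitea04.opendev.org'):
            print(host, gc_minute(host))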
clarkbAny other opendev business before we move on?19:12
clarkb#topic Update Config Management19:14
*** openstack changes topic to "Update Config Management (Meeting topic: infra)"19:14
clarkbWe merged a bunch of changes from mordred to build gerrit images19:14
Shrewsclarkb: he also put up this one: https://review.opendev.org/67842819:15
clarkb#link https://review.opendev.org/#/q/status:open+(topic:gus2019+OR+topic:gerrit-images) Gerrit and docker related changes that need review19:15
clarkbShrews: cool that should be captured by the query above19:16
Shrewsyah19:16
clarkbWould be great if people can take some time to get through those as I think interesting and useful things were learned at the gerrit user summit19:16
Shrewsany reason we haven't +A'd https://review.opendev.org/678413 yet?19:18
clarkbShrews: fungi notes that it will need someone to restart gerrit19:18
clarkbI can probably work on that tomorrow19:19
fungiyeah, alternatively it could just happen as part of the rename maintenance?19:19
*** jamesmcarthur has joined #openstack-meeting19:20
clarkbprobably best to make sure we don't break things ahead of time?19:21
clarkbjust to reduce potential impacts during renames. I can likely work on that this week (today is weird as I have school stuff with the kids this afternoon)19:22
clarkbAnything else? if not we'll move on to storyboard19:23
diablo_rojo_phonNothing currently.19:23
clarkb#topic Storyboard19:24
*** openstack changes topic to "Storyboard (Meeting topic: infra)"19:24
clarkbdiablo_rojo_phon: says nothing to report :)19:24
diablo_rojo_phonI do.19:24
fungii did get mysql slow query logging reestablished and whip up a refreshed analysis19:24
diablo_rojo_phonOh cool.19:24
clarkboh that was for last topic then? in any case go for it on storyboard19:24
*** e0ne_ has joined #openstack-meeting19:24
fungi#link http://files.openstack.org/user/fungi/storyboard/19:24
fungimentioned last week in #storyboard19:25
*** ralonsoh has quit IRC19:25
funginow that i've worked out the process, i'm happy to do further updates so we can compare19:25
diablo_rojo_phonfungi: I suppose I should commence asking mordred to take a look?19:25
fungiSotK also mentioned he might try to tease out some useful nuggets for ideas of what to improve19:26
*** e0ne_ has quit IRC19:26
*** e0ne has quit IRC19:27
clarkbdiablo_rojo_phon: he is currently on vacation, but I imagine as soon as he is back bugging him would be great :)19:28
clarkbAnything else on storyboard?19:29
diablo_rojo_phonWill do!19:29
diablo_rojo_phonNo I don't think so.19:29
clarkb#topic General Topics19:29
*** openstack changes topic to "General Topics (Meeting topic: infra)"19:29
clarkb#link https://etherpad.openstack.org/p/201808-infra-server-upgrades-and-cleanup19:30
dmsimardNeed to pick up kids from school, if I'm not back we can talk about my topic in the infra channel19:30
clarkbdmsimard: ok19:30
clarkbfirst up is server upgrades19:30
clarkbfungi: ^ anything new to add re wiki?19:30
funginot really, no19:30
clarkbRemoving/replacing static.o.o is next19:31
clarkb#link https://etherpad.openstack.org/p/static-services19:31
fungiother community responsibilities have taken priority this past week. i hope to pick it back up soon (but welcome anyone who wants to help out with it too)19:31
clarkbianw: ^ I tried to capture a todo list at the bottom of that document19:31
clarkbianw: can you check if it is accurate and we can start to ask for volunteers?19:31
ianwoh sorry, yeah forgot to do that, will do19:32
clarkbthanks19:32
clarkbthanks19:32
clarkbsilly keyboard19:32
clarkbNext up is AFS mirrors and debugging failures to vos release them19:32
clarkbianw: auristor mentioned rsync may be to blame (and possibly the switch from rsync on xenial to bionic?))19:33
clarkbhave we learned anything concrete yet and/or are there more changes for us to review?19:33
ianwi think fedora has locked itself again19:34
ianwthe reason it takes forever to release now is still unclear ...19:34
ianwlooking at the rsync "-i" output, it does not do something like touch every single file19:34
fungiyou ruled out metadata being modified i guess19:35
ianwbut maybe updating the metadata on the upper directories is enough19:35
ianwto determine this i think we need to turn on file auditing on the servers again19:35
ianw#link https://lists.openafs.org/pipermail/openafs-info/2019-August/042861.html19:36
ianwthe only follow up was basically that version mis-match of client/server shouldn't matter ... that really only leaves bionic rsync v xenial rsync i guess :/19:37
clarkbat least we could rule out the version mismatch19:37
ianwi will keep poking so we can get it stable19:38
clarkbthanks19:38
clarkbNext up is pointing out the current plan to do project renames on September 1619:38
clarkbI expect we'll do that about 7am Pacific time to get as early a start as possible?19:39
clarkb(assuming I'm helping could be earlier if others want to drive it)19:39
clarkbIf you know of projects that need to be renamed they are highly encouraged to do that during this round of renames :)19:39
AJaegerwe should send out an announcement...19:40
clarkbAJaeger: good idea. I'm guessing your day is about done. I'll do that after the meeting19:40
AJaegerthanks, clarkb19:41
clarkbNext is PTG planning. I've been unable to spend time on this yet, but wanted to point out that it isn't too early for anyone to suggest topics19:42
clarkb#link https://etherpad.openstack.org/p/OpenDev-Shanghai-PTG-2019 Feel free to add PTG topics19:42
clarkbThat was it for the agenda that I sent out yesterday. dmsimard are you back to talk about ara and swift hosted logs?19:43
clarkbif not we can pick up the conversation in the infra channel as you mentioned /me gives it a few minutes before opening the floor19:43
fungiwe also tend to be pretty efficient at coming up with things to talk about or hack on at the ptg once we get into a room together19:46
clarkb#topic Open Discussion19:46
*** openstack changes topic to "Open Discussion (Meeting topic: infra)"19:46
fungii like to treat the planning list as more of a resource to help jog our memories so we don't forget things we wanted to talk about19:47
clarkbWe'll talk ara in the infra channel. Anything else before we call it a meeting19:47
funginothing on my end19:47
clarkbfungi: ++ yeah it's more about having good notes so that we can let the room move without spending a ton of time looking info up19:47
fungiagreed19:47
clarkbsounds like that is it19:51
clarkbthank you everyone. We'll end a bit early today19:51
clarkb#endmeeting19:51
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"19:51
openstackMeeting ended Tue Sep  3 19:51:37 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)19:51
openstackMinutes:        http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-09-03-19.01.html19:51
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-09-03-19.01.txt19:51
openstackLog:            http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-09-03-19.01.log.html19:51
fungithanks clarkb!19:51
*** AJaeger has left #openstack-meeting19:53
*** Shrews has left #openstack-meeting19:53
*** pcaruana has quit IRC20:00
*** bbowen__ has quit IRC20:01
*** Lucas_Gray has joined #openstack-meeting20:14
*** slaweq has quit IRC20:17
*** dmacpher_ has quit IRC20:20
*** jamesmcarthur has quit IRC20:27
*** jamesmcarthur has joined #openstack-meeting20:30
*** artom has quit IRC20:30
*** senrique_ has joined #openstack-meeting20:33
*** enriquetaso has quit IRC20:34
*** artom has joined #openstack-meeting20:36
*** radeks has quit IRC20:37
*** markvoelker has quit IRC20:45
*** Lucas_Gray has quit IRC20:46
*** Lucas_Gray has joined #openstack-meeting20:47
*** ociuhandu has joined #openstack-meeting20:48
*** Lucas_Gray has quit IRC20:50
*** rbudden has joined #openstack-meeting20:50
*** trident has quit IRC20:52
*** Lucas_Gray has joined #openstack-meeting20:52
*** jamesmcarthur has quit IRC20:53
*** janders has joined #openstack-meeting21:01
*** martial has joined #openstack-meeting21:01
jandersg'day21:01
martialhello21:01
martial#start-meeting Scientific-SIG21:02
martial(sigh I do this all the time :) )21:02
martial#startmeeting Scientific-SIG21:02
openstackMeeting started Tue Sep  3 21:02:30 2019 UTC and is due to finish in 60 minutes.  The chair is martial. Information about MeetBot at http://wiki.debian.org/MeetBot.21:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:02
*** openstack changes topic to " (Meeting topic: Scientific-SIG)"21:02
openstackThe meeting name has been set to 'scientific_sig'21:02
*** janders42 has joined #openstack-meeting21:02
martialquite an informal meeting today as well :)21:02
janders42I'm making up the numbers by joining twice21:03
janders42trying to get my mac to cooperate21:03
*** trident has joined #openstack-meeting21:03
martialjanders & janders42 : good man !21:04
martialI am nowadays using irccloud, simpler way to go online21:04
janders42good idea - I should look at this, too21:05
janders42what are you up to these days martial?21:05
martialtrying to train some models for ML for a Yolo test21:05
janders42I'm fighting our GPFS hardware, finally winning21:05
janders42(looking up initial benchmark of what the kit is capable of)21:06
janders42 (pasted fio benchmark output and the bw.fio job file):21:06
Run status group 0 (all jobs):
   READ: bw=36.1GiB/s (38.7GB/s), 4618MiB/s-4745MiB/s (4842MB/s-4976MB/s), io=800GiB (859GB), run=21579-22174msec
Run status group 1 (all jobs):
  WRITE: bw=24.0GiB/s (26.8GB/s), 3194MiB/s-3224MiB/s (3349MB/s-3381MB/s), io=800GiB (859GB), run=31757-32057msec
[root@s206 ~]# cat bw.fio
# Do some important numbers on SSD drives, to gauge what kind of
# performance you might get out of them.
#
# Sequential read and write speeds are tested, these are expected to be
# high. Random reads should also be fast, random writes are where crap
# drives are usually separated from the good drives.
#
# This uses a queue depth of 4. New SATA SSD's will support up to 32
# in flight commands, so it may also be interesting to increase the queue
# depth and compare. Note that most real-life usage will not see that
# large of a queue depth, so 4 is more representative of normal use.
#
[global]
bs=10M
ioengine=libaio
iodepth=32
size=100g
direct=1
runtime=60
#directory=/mnt
filename=/dev/md/stripe
numjobs=8
[seq-read]
rw=read
stonewall
[seq-write]
rw=write
stonewall
[root@s206 ~]#
janders4239GB/s read, 27GB/s write per node21:06
janders42there will be 621:06
janders42with HDR20021:07
janders42so should be good21:07
janders42this is evolution of our BeeGFS design - more balanced, non-blocking in terms of PCIe bandwidth21:07
martialthat is quite good21:07
janders42unfortunately the interconnect doesn't fully work yet, but I'll be at the DC later this morning hopefully getting it to work21:08
janders42have you ever looked at k8s/GPFS integration?21:08
janders42this is meant to be an HPC OpenStack storage backend and an HPC storage backend, but I think there's some interesting work happening in the k8s-gpfs space21:09
*** bbowen has joined #openstack-meeting21:09
martialI confess we have not had the need for now21:10
martialwhat are your reading recommendations on this topic?21:10
janders42I think all I had was some internal email from IBM folks21:11
janders42I hope to learn more about this in the coming months - and will report back21:11
martialthat would indeed be useful21:12
martialmaybe a small white paper on the subject?21:13
*** b1airo has joined #openstack-meeting21:13
martial#chair b1airo21:14
openstackCurrent chairs: b1airo martial21:14
janders42if it is possible to remote into the Shanghai SIG meetup, I can give a lightning talk about GPFS as an OpenStack and/or k8s backend21:14
janders42hey Blair!21:14
b1airoo/21:14
b1airoissues connecting today sorry21:14
janders42another "interesting" thing I've come across lately is how Mellanox splitter cables work on some switches21:14
b1airohow goes it janders4221:14
martialnot sure if Stig will have that capability at the Scientific SIG meet in Shanghai but I hope so21:14
b1airoi'd be keen to see that lightning talk janders4221:15
janders42yeah that would be great! :)  - unlikely I will be able to attend in person at this stage21:15
martialnot sure if Blair will be there. I will not21:15
martial(Shanghai)21:15
janders42I plugged a 50GE to 4x10gE cable into my mlnx-eth switch yesterday and enabled the splitter function on the port21:16
janders42and the switch went "port eth1/29 reconfigured. Port eth1/30 reconfigured"21:16
janders42erm... I did NOT want to touch port 30 - it's an uplink for an IPMI switch...21:16
janders42boom21:17
janders42another DC trip21:17
janders42nice "feature"21:17
janders42with some switches it is possible to use splitters and still use all of the ports21:17
janders42with others - the above happens - connecting a splitter automagically kills off the port immediately below21:17
*** rmcall has joined #openstack-meeting21:17
janders42oh well lesson learned21:17
janders42hopefully will undo this later this morning21:18
*** ociuhandu has quit IRC21:18
martialor maybe you will lose your uplink ... <disconnect>21:18
martial(that's why we love technology :) )21:19
b1airono, i won't make it to Shanghai. already have a couple of trips before the end of the year and another to the States early Jan21:19
martialjoining us in Denver ?21:20
janders42Denver = SC?21:20
b1airojanders42: iirc that functionality is documented regarding the breakout ports21:20
janders42yeah... @janders21:21
janders42RTFM21:21
janders42:D21:21
janders42just quite counter-intuitive21:21
martialSC yes21:21
b1airoreconfiguring the port requires a full switchd restart and associated low-level changes, which interrupts all the way down to L2 i guess21:21
b1airoSC yes21:22
janders42just looked at the calendar and SC19 and Kubecon clash21:22
janders42I'm hoping to go to Kubecon21:22
janders42otherwise would be happy to revisit Denver21:22
janders42how long of a trip is it for you Blair to get to LAX? 14 hrs? Bit closer than from here I suppose..21:22
b1airoyeah bit closer, just under 12 i think21:23
janders42nice!21:23
b1airousually try to fly via San Fran though21:24
janders42yeah LAX can be hectic21:24
janders42I quite like Dallas21:24
janders42very smooth21:24
janders42good stopover while heading to more central / eastern parts of the States21:24
janders42not optimal for San Diego though - that'd be a LAX/SFO connection21:25
martial(like I said a very AOB meeting today ;) )21:25
janders42since Blair is here I will re-post the GPFS node benchmarks21:26
b1airo:-)21:26
b1airoyes please21:26
janders42(re-pasted the same fio benchmark output and bw.fio job file shown at 21:06 above)21:26
janders42this is from a single node with 12NVMes - no GPFS yet21:26
janders42but we did manage to get a 12NVMe non-blocking PCIe topology going21:26
janders4239GB/s read 27GB/s write21:26
janders42we'll have six of those puppies on HDR200 so should be good21:27
janders42but having said that, I need to head off to the DC soon to bring this dropped-out IPMI switch back on the network - otherwise I can't build the last node...21:27
b1airowould be interesting to see how those numbers change with different workload characteristics21:28
janders42bad news is I haven't found any way whatsoever to build these through Ironic21:28
*** rmcall has quit IRC21:28
janders4214 drives per box booting thru UEFI from drives number 8 and 9 is too much of an ask21:28
b1airobut those look like big numbers for relatively low depth21:28
janders42and these drives need to be SWRAID, too21:28
janders42I think there's a fair bit of room for tweaking, this was just to prove that the topology is right21:29
janders42it would be very interesting to see how the numbers stack up against our IO500 #4 BeeGFS cluster21:30
b1airowhat's the GPFS plan with these? declusered raid?21:30
janders42in the ten node challenge21:30
janders42GPFS-EC21:30
*** markvoelker has joined #openstack-meeting21:31
janders42though we will run 4 nodes of EC and 2 nodes of non-EC just to understand the impact (or lack of) of EC on throughput/latency21:31
janders42for prod it will be all EC21:31
janders42the idea behind this system is a mash up of Ceph style arch, RDMA transport and NVMes connected in a non-blocking fashion21:32
janders42hoping to get the best of all these worlds21:32
janders42so far the third bit looks like a "tick"21:32
janders42these run much smoother than our BeeGFS nodes in the early days21:32
janders42these do have some blocking which is causing all sorts of special effects if not handled carefully21:33
janders42the new GPFSes have none21:33
janders42which gets a little funny cause people hear this and ask me - so what's the difference between BeeGFS and GPFS, why do we need both?21:34
janders42I used to say BeeGFS is a Ferrari and GPFS is a beefed up Ford Ranger21:34
b1airoha!21:34
janders42but it really comes down to a Ferrari and a Porsche Cayenne now21:34
*** markvoelker has quit IRC21:35
b1airoi guess a lot of it comes down to the question of what happens when either a) the network goes tits up, and/or b) the power suddenly goes out from a node/rack/room21:35
janders42all good questions21:35
janders42and with BeeGFS I have a catch-all answer21:35
janders42it's scratch21:35
janders42it's not backed up21:35
janders42if it goes it goes21:35
b1airoi.e., do you still have a working cluster tomorrow :-), and is there any data still on it21:35
janders42GPFS on the other hand... ;)21:36
b1airoget out of jail free card ;-)21:36
janders42if one was to go through a car crash, I recommend being in the Cayenne not the Ferrari21:36
janders42let's put it this way21:36
janders42but yeah it's an interesting little project21:37
*** senrique_ has quit IRC21:37
janders42if it wasn't for all the drama with HDR VPI firmware slipping and splitter cables giving us shit, it would be up and running by now21:37
janders42hopefully another month21:37
janders42and on that note I better start getting ready to head out to the DC to fix that IPMI switch..21:37
janders42we got mlnx 1GE switches with Cumulus on them for IPMI21:38
janders42I can't remember why21:38
janders42they are funky, but it is soo distracting switching between mlnx-os and cumulus21:38
janders42completely different philosophy and management style21:38
janders42probably wouldn't get those again - just some braindead cisco-like ones for IPMI21:38
janders42MLNX-100GE and HDR ones are great to work with though21:39
janders42(except the automatic shutdown of the other port when enabling splitters :)  )21:39
b1airoif you go Cumulus all the way then you can obviously get 1G switches too, but then bye bye IB21:39
b1airoi'm slightly distracted in another Zoom meeting now sorry21:40
janders42no worries21:40
janders42wrapping up here, too21:40
janders42I hope I haven't bored you to death with my GPFS endeavours21:40
b1airono, sounds fun!21:41
janders42if you happen to have a cumulus-cisco cheat sheet I'd love that21:41
janders42I wish I could just paste config snippets to Google Translate21:41
janders42:D21:41
martialwith that, should we move to AOB? :)21:44
martialIf not, I will propose we adjourn Today's meeting21:45
martialsee you all next week21:47
martial#endmeeting21:47
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"21:47
openstackMeeting ended Tue Sep  3 21:47:21 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:47
openstackMinutes:        http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-09-03-21.02.html21:47
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-09-03-21.02.txt21:47
openstackLog:            http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-09-03-21.02.log.html21:47
*** janders42 has quit IRC21:52
*** dtrainor has quit IRC22:04
*** dtrainor has joined #openstack-meeting22:04
*** mriedem has quit IRC22:10
*** Lucas_Gray has quit IRC22:12
*** Wryhder has joined #openstack-meeting22:12
*** Wryhder is now known as Lucas_Gray22:13
*** ociuhandu has joined #openstack-meeting22:30
*** ociuhandu has quit IRC22:35
*** senrique_ has joined #openstack-meeting22:45
*** armax has quit IRC22:48
*** diablo_rojo has joined #openstack-meeting22:56
*** Lucas_Gray has quit IRC22:56
*** rcernin has joined #openstack-meeting23:06
*** carloss has quit IRC23:18
*** macz has quit IRC23:41
*** armax has joined #openstack-meeting23:50
*** b1airo has quit IRC23:52
