Tuesday, 2018-09-18

*** gyee has quit IRC00:04
*** jamesmcarthur has joined #openstack-meeting00:10
*** slaweq has joined #openstack-meeting00:11
*** slaweq has quit IRC00:16
*** aagate has quit IRC00:19
*** e0ne has joined #openstack-meeting00:20
*** jamesmcarthur has quit IRC00:29
*** longkb has joined #openstack-meeting00:29
*** markvoelker has quit IRC00:44
*** jamesmcarthur has joined #openstack-meeting00:49
*** jamesmcarthur has quit IRC01:10
*** jamesmcarthur has joined #openstack-meeting01:12
*** e0ne has quit IRC01:20
*** bobh has joined #openstack-meeting01:22
*** jamesmcarthur has quit IRC01:22
*** yamahata has quit IRC01:25
*** iyamahat has quit IRC01:25
*** e0ne has joined #openstack-meeting01:27
*** jamesmcarthur has joined #openstack-meeting01:33
*** felipemonteiro has quit IRC01:38
*** felipemonteiro has joined #openstack-meeting01:54
*** mriedem_away is now known as mriedem01:55
*** jamesmcarthur has quit IRC02:01
*** diablo_rojo has quit IRC02:02
*** manjeets has joined #openstack-meeting02:06
*** felipemonteiro has quit IRC02:10
*** slaweq has joined #openstack-meeting02:11
*** slaweq has quit IRC02:16
*** tetsuro has joined #openstack-meeting02:26
*** hongbin has joined #openstack-meeting02:37
*** mriedem has quit IRC02:38
*** tetsuro has quit IRC02:38
*** tetsuro has joined #openstack-meeting02:38
*** e0ne has quit IRC02:40
*** e0ne has joined #openstack-meeting02:48
*** jamesmcarthur has joined #openstack-meeting02:52
*** apetrich has quit IRC02:52
*** sagara has joined #openstack-meeting02:56
*** jamesmcarthur has quit IRC02:57
*** felipemonteiro has joined #openstack-meeting02:58
*** lbragstad has quit IRC03:04
*** e0ne has quit IRC03:05
*** lbragstad has joined #openstack-meeting03:11
*** jamesmcarthur has joined #openstack-meeting03:13
*** jamesmcarthur has quit IRC03:17
*** felipemonteiro has quit IRC03:23
*** hongbin has quit IRC03:25
*** jamesmcarthur has joined #openstack-meeting03:33
*** jamesmcarthur has quit IRC03:37
*** erlon has joined #openstack-meeting03:42
*** felipemonteiro has joined #openstack-meeting03:46
*** erlon has quit IRC03:48
*** lbragstad has quit IRC03:48
*** felipemonteiro has quit IRC03:52
*** jamesmcarthur has joined #openstack-meeting03:54
*** felipemonteiro has joined #openstack-meeting03:56
*** jamesmcarthur has quit IRC03:58
*** tpatil has joined #openstack-meeting03:58
*** samP has joined #openstack-meeting03:59
samPHi all for masakari04:00
tpatilHi04:00
samP#startmeeting masakari04:00
openstackMeeting started Tue Sep 18 04:00:30 2018 UTC and is due to finish in 60 minutes.  The chair is samP. Information about MeetBot at http://wiki.debian.org/MeetBot.04:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.04:00
*** openstack changes topic to " (Meeting topic: masakari)"04:00
openstackThe meeting name has been set to 'masakari'04:00
samPHi all04:00
*** felipemonteiro has quit IRC04:00
*** tetsuro has quit IRC04:01
sagaraHi04:01
samPsagara: Hi04:01
tpatilsamP: Today, I'm in India, waiting for taxi to go to Office04:01
tpatilI might disconnect in 10 mins, is it ok?04:01
samPtpatil: no problem04:02
tpatilThanks04:02
samPright after the PTG, nothing much to discuss, unless you have any topics04:02
samPHere is the etherpad for PTG discussion in Stein.04:03
samP#link https://etherpad.openstack.org/p/masakari-ptg-stein04:03
tpatilNo new topics to be discussed, but awaiting for patches to be merged pushed into stable branches04:04
samPI will prepare the spec repo to Stein.04:04
samPtpatil: Sure, I will take a look.04:04
sagaraI also don't have any special topic.04:04
samPsagara: Thanks04:05
tpatilsamP: You have already approved those patches, need another +204:05
*** yamamoto has joined #openstack-meeting04:06
samPtpatil: Ah ok, Sagara san please take a look, when you have time.04:07
sagarasamP: OK04:08
samPI will try to add more details or specs for Stein work items. Then, we can discuss more details in coming meetings04:08
samPIf no other topics to discuss, let's end today's meeting04:09
tpatilOk04:09
samPtpatil: have a good time in India!04:09
sagaratpatil: have a good time!04:09
tpatilThank you04:09
samPplease pass my regards to team !!04:10
tpatilSure04:10
tpatilI have got few mascots, will share with them04:10
samPOK then, CU on next week.04:10
tpatilYes04:10
samPtpatil: great.. thanks04:10
samPTill then,04:11
*** slaweq has joined #openstack-meeting04:11
samPPlease use openstack-dev ML with [masakari] or #openstack-masakari @freenode IRC for further discussion04:11
samPThank you !04:11
samP#endmeeting04:11
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"04:11
openstackMeeting ended Tue Sep 18 04:11:47 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)04:11
openstackMinutes:        http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-09-18-04.00.html04:11
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-09-18-04.00.txt04:11
openstackLog:            http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-09-18-04.00.log.html04:11
*** tpatil has quit IRC04:12
*** diablo_rojo has joined #openstack-meeting04:14
*** jamesmcarthur has joined #openstack-meeting04:14
*** slaweq has quit IRC04:16
*** bobh has quit IRC04:18
*** sagara has quit IRC04:19
*** jamesmcarthur has quit IRC04:19
*** tetsuro has joined #openstack-meeting04:20
*** diablo_rojo has quit IRC04:48
*** jamesmcarthur has joined #openstack-meeting04:55
*** jamesmcarthur has quit IRC05:00
*** slaweq has joined #openstack-meeting05:01
*** slaweq has quit IRC05:06
*** diablo_rojo has joined #openstack-meeting05:10
*** armax has quit IRC05:21
*** jamesmcarthur has joined #openstack-meeting05:22
*** jamesmcarthur has quit IRC05:27
*** belmoreira has joined #openstack-meeting05:30
*** diablo_rojo has quit IRC05:44
*** tetsuro has quit IRC05:49
*** tetsuro has joined #openstack-meeting05:50
*** tetsuro has quit IRC06:03
*** jamesmcarthur has joined #openstack-meeting06:04
*** tetsuro has joined #openstack-meeting06:04
*** psachin has joined #openstack-meeting06:05
*** janki has joined #openstack-meeting06:08
*** jamesmcarthur has quit IRC06:08
*** slaweq has joined #openstack-meeting06:19
*** joxyuki has joined #openstack-meeting06:22
*** jamesmcarthur has joined #openstack-meeting06:25
*** jamesmcarthur has quit IRC06:29
*** ykatabam has quit IRC06:44
*** jamesmcarthur has joined #openstack-meeting06:46
*** jamesmcarthur has quit IRC06:50
*** rossella_s has quit IRC07:01
*** rcernin has quit IRC07:03
*** jamesmcarthur has joined #openstack-meeting07:27
*** jamesmcarthur has quit IRC07:31
*** akozhevnikov has joined #openstack-meeting07:41
*** priteau has joined #openstack-meeting07:41
*** ahrechny has joined #openstack-meeting07:57
*** kopecmartin has joined #openstack-meeting08:00
*** cloudrancher has joined #openstack-meeting08:02
*** YanXing_an has joined #openstack-meeting08:02
*** phuoc_ has joined #openstack-meeting08:03
*** persia has quit IRC08:06
*** persia has joined #openstack-meeting08:07
*** AndreyS_ has joined #openstack-meeting08:12
*** belmoreira has quit IRC08:15
*** AndreyS_ has quit IRC08:16
*** cloudrancher has quit IRC08:16
*** AndreyS has joined #openstack-meeting08:16
*** cloudrancher has joined #openstack-meeting08:17
*** trinaths has joined #openstack-meeting08:21
*** psachin has quit IRC08:22
phuoc_#startmeeting tacker08:30
openstackMeeting started Tue Sep 18 08:30:39 2018 UTC and is due to finish in 60 minutes.  The chair is phuoc_. Information about MeetBot at http://wiki.debian.org/MeetBot.08:30
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.08:30
*** openstack changes topic to " (Meeting topic: tacker)"08:30
openstackThe meeting name has been set to 'tacker'08:30
YanXing_ancool08:30
longkbGreat LOL08:31
phuoc_#chair YanXing_an08:31
openstackCurrent chairs: YanXing_an phuoc_08:31
phuoc_Because Tacker PTL is not here08:31
phuoc_I and YanXing_an host this meeting08:31
trinathsOk08:31
phuoc_#topic Roll Call08:32
*** openstack changes topic to "Roll Call (Meeting topic: tacker)"08:32
phuoc_anyone here08:32
trinathsHi08:32
trinathsI'm trinaths08:32
joxyukihi08:32
phuoc_hi trinaths, joxyuki, YanXing_an08:33
YanXing_anphuoc_, hi08:33
phuoc_#topic Announcements08:33
*** openstack changes topic to "Announcements (Meeting topic: tacker)"08:33
phuoc_I remind that we will have vPTG on this Friday08:34
phuoc_#link https://etherpad.openstack.org/p/Tacker-PTG-Stein08:35
phuoc_the time is on 21 September08:35
*** markvoelker has joined #openstack-meeting08:36
YanXing_anwow, only a few days left08:36
phuoc_you guys can add your idea in that link, each of topic will be presented08:36
phuoc_I think the time for each of them is 20~30 minutes08:37
joxyukiI want to talk my topic first because I am available til 8:30 UTC08:37
phuoc_joxyuki, I will set your topic in the top08:38
joxyukiphouc_, ok :)08:38
trinathsOk08:39
phuoc_hi ahrechny, AndreyS, akozhevnikov08:39
ahrechnyhi08:39
phuoc_I see that you guys have proposed some important blueprints08:39
ahrechnyyes, and we try to update our reviews before08:40
akozhevnikovhi08:41
phuoc_I will discuss with other core reviewers like dkushwaha (PTL), YanXing_an, gongysh (ex-PTL) to set you guys blueprints to high priority08:41
*** alexchadin has joined #openstack-meeting08:41
*** tetsuro has quit IRC08:41
phuoc_and VDU healing too (with contribution from joxyuki and tpatil)08:42
joxyukiyea, I will upload spec by vPTG08:43
ahrechnyYes, we interested in the heal too08:43
*** cloudrancher has quit IRC08:43
phuoc_did you guys check the time slot, it is from 7:00 UTC to 11:00 UTC08:43
phuoc_in vPTG, you guys can prepare a simple or detail presentation, and we can discuss and give advices08:45
phuoc_YanXing_an, I hope you can attend in that day :-)08:45
ahrechnyok, we will prepare it08:45
phuoc_ahrechny, thanks08:46
YanXing_anphuoc_, i think i can attend it if no urgent things08:46
phuoc_I think in this meeting, I just remind about vPTG08:47
YanXing_anphuoc_ thanks a lot08:48
phuoc_We will spend more time on 21 September08:48
phuoc_Thank you guys08:48
phuoc_anyone has other thing to talk now08:49
trinathsCan someone share tacker odl devstack localrc .. where I can run through latest addition to tacker08:49
phuoc_trinaths, I think we can discuss on tacker channel08:51
trinathsOk08:51
trinathsNo issues08:51
phuoc_I will end the meeting now, thank you ^^08:52
phuoc_#endmeeting08:52
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"08:52
openstackMeeting ended Tue Sep 18 08:52:13 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)08:52
openstackMinutes:        http://eavesdrop.openstack.org/meetings/tacker/2018/tacker.2018-09-18-08.30.html08:52
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/tacker/2018/tacker.2018-09-18-08.30.txt08:52
openstackLog:            http://eavesdrop.openstack.org/meetings/tacker/2018/tacker.2018-09-18-08.30.log.html08:52
*** joxyuki has left #openstack-meeting08:52
*** trinaths has left #openstack-meeting08:53
*** lbragstad has joined #openstack-meeting08:59
*** belmoreira has joined #openstack-meeting09:04
*** cloudrancher has joined #openstack-meeting09:08
*** belmoreira has quit IRC09:10
*** cloudrancher has quit IRC09:10
*** markvoelker has quit IRC09:11
*** belmoreira has joined #openstack-meeting09:11
*** hyunsikyang has quit IRC09:13
*** YanXing_an has quit IRC09:25
*** kopecmartin has quit IRC09:26
*** alexchadin has quit IRC09:26
*** tetsuro has joined #openstack-meeting09:30
*** alexchadin has joined #openstack-meeting09:34
*** sambetts_ is now known as sambetts09:44
*** longkb has quit IRC09:56
*** Emine has joined #openstack-meeting09:58
*** kopecmartin has joined #openstack-meeting10:02
*** e0ne has joined #openstack-meeting10:04
*** markvoelker has joined #openstack-meeting10:07
*** iyamahat has joined #openstack-meeting10:29
*** yamamoto has quit IRC10:31
*** kopecmartin has quit IRC10:31
*** alexchadin has quit IRC10:33
*** iyamahat has quit IRC10:35
*** markvoelker has quit IRC10:41
*** yamamoto has joined #openstack-meeting10:44
*** litao__ has quit IRC10:51
*** Luzi has joined #openstack-meeting10:53
*** tetsuro has quit IRC11:00
*** alexchadin has joined #openstack-meeting11:07
*** alexchadin has quit IRC11:11
*** alexchadin has joined #openstack-meeting11:14
*** kopecmartin has joined #openstack-meeting11:14
*** artom has joined #openstack-meeting11:15
*** phuoc_ has left #openstack-meeting11:18
*** imacdonn has quit IRC11:19
*** imacdonn has joined #openstack-meeting11:19
*** akozhevnikov has quit IRC11:19
*** tssurya has joined #openstack-meeting11:26
*** alexchadin has quit IRC11:36
*** yamamoto has quit IRC11:37
*** alexchadin has joined #openstack-meeting11:40
*** e0ne has quit IRC11:51
*** cloudrancher has joined #openstack-meeting11:55
*** yamamoto has joined #openstack-meeting12:01
*** markvoelker has joined #openstack-meeting12:04
*** priteau has quit IRC12:14
*** priteau has joined #openstack-meeting12:17
*** kopecmartin has quit IRC12:22
*** cloudrancher has quit IRC12:24
*** raildo has joined #openstack-meeting12:26
*** kopecmartin has joined #openstack-meeting12:54
*** efried_off is now known as efried13:05
*** mriedem has joined #openstack-meeting13:07
*** r-mibu_ has joined #openstack-meeting13:07
*** r-mibu_ has quit IRC13:08
*** ahrechny has quit IRC13:12
*** r-mibu has joined #openstack-meeting13:13
*** r-mibu has quit IRC13:14
*** mjturek has joined #openstack-meeting13:15
*** r-mibu has joined #openstack-meeting13:15
*** sfotony has joined #openstack-meeting13:21
*** alexchadin has quit IRC13:22
*** dustins has joined #openstack-meeting13:26
*** jamesmcarthur has joined #openstack-meeting13:28
*** awaugama has joined #openstack-meeting13:31
*** alexchadin has joined #openstack-meeting13:32
*** efried is now known as efried_afak13:32
*** efried_afak is now known as efried_afk13:32
*** jamesmcarthur has quit IRC13:32
*** alexchadin has quit IRC13:45
*** efried_afk is now known as efried13:47
*** apetrich has joined #openstack-meeting13:48
*** erlon has joined #openstack-meeting13:51
*** lbragstad has quit IRC13:53
*** lbragstad has joined #openstack-meeting13:54
*** alexchadin has joined #openstack-meeting13:54
*** awaugama has quit IRC13:55
*** mjturek has quit IRC13:57
*** awaugama has joined #openstack-meeting13:57
*** njohnston has joined #openstack-meeting13:58
*** radeks_ has joined #openstack-meeting13:58
*** sfotony1 has joined #openstack-meeting14:06
*** mjturek has joined #openstack-meeting14:06
*** Emine has quit IRC14:06
*** annabelleB has joined #openstack-meeting14:07
*** sfotony has quit IRC14:07
*** sfotony1 has quit IRC14:10
*** jgriffith has quit IRC14:10
*** jgriffith has joined #openstack-meeting14:10
*** alexchadin has quit IRC14:11
*** alexchadin has joined #openstack-meeting14:12
*** radeks_ has quit IRC14:14
*** Emine has joined #openstack-meeting14:19
*** yamamoto has quit IRC14:19
*** rossella_s has joined #openstack-meeting14:21
*** annabelleB has quit IRC14:22
*** bobh has joined #openstack-meeting14:22
*** yamamoto has joined #openstack-meeting14:24
*** yamamoto has quit IRC14:24
*** yamamoto has joined #openstack-meeting14:24
*** annabelleB has joined #openstack-meeting14:28
*** rossella_s has quit IRC14:29
*** yamamoto has quit IRC14:29
*** Luzi has quit IRC14:30
*** rossella_s has joined #openstack-meeting14:33
*** mjturek has quit IRC14:43
*** hongbin has joined #openstack-meeting14:46
*** mjturek has joined #openstack-meeting14:47
*** iyamahat has joined #openstack-meeting14:52
*** rossella_s has quit IRC14:56
*** rossella_s has joined #openstack-meeting15:01
*** armax has joined #openstack-meeting15:02
*** AndreyS has quit IRC15:09
*** jamesmcarthur has joined #openstack-meeting15:12
*** belmoreira has quit IRC15:15
*** Leo_m has joined #openstack-meeting15:22
*** felipemonteiro has joined #openstack-meeting15:24
*** Emine has quit IRC15:31
*** felipemonteiro has quit IRC15:39
*** ssbarnea has joined #openstack-meeting15:41
*** bobh has quit IRC15:46
slaweq#startmeeting neutron_ci16:00
openstackMeeting started Tue Sep 18 16:00:37 2018 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
*** jamesmcarthur has quit IRC16:00
*** openstack changes topic to " (Meeting topic: neutron_ci)"16:00
slaweqhi16:00
openstackThe meeting name has been set to 'neutron_ci'16:00
*** mlavalle has joined #openstack-meeting16:00
mlavalleo/16:00
*** sfotony has joined #openstack-meeting16:01
slaweqlets wait few more minutes for others16:01
slaweqmaybe someone else will join16:01
mlavallenp16:02
*** longkb has joined #openstack-meeting16:02
mlavalleI didn't show up late, did I?16:02
slaweqmlavalle: no, You were just on time :)16:02
mlavalleI was distracted doing the homework you gave me last meeting and I was startled by the time16:03
* haleyb wanders in16:03
slaweq:)16:03
*** jamesmcarthur has joined #openstack-meeting16:04
njohnstono/16:04
slaweqhi njohnston :)16:04
slaweqlets start then16:04
njohnstonhello slaweq, sorry I am late - working on a bug16:05
slaweq#topic Actions from previous meetings16:05
*** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)"16:05
slaweqnjohnston: no problem :)16:05
slaweq* mlavalle to talk with mriedem about https://bugs.launchpad.net/neutron/+bug/178800616:05
openstackLaunchpad bug 1788006 in neutron "Tests fail with "Server [UUID] failed to reach ACTIVE status and task state "None" within the required time ([INTEGER] s). Current status: BUILD. Current task state: spawning."" [Critical,Fix released] - Assigned to Slawek Kaplonski (slaweq)16:05
slaweqI think it's done, right? :)16:05
mlavallewe did and we fixed it16:05
mriedemyar16:05
mlavalle\o/16:05
mriedemvirt_type=qemu16:05
mlavallethanks mriedem16:05
slaweqthx mriedem for help on that16:05
njohnston\o/16:06
slaweqok, next one16:06
slaweq* mlavalle continue debugging failing MigrationFromHA tests, bug https://bugs.launchpad.net/neutron/+bug/178943416:06
openstackLaunchpad bug 1789434 in neutron "neutron_tempest_plugin.scenario.test_migration.NetworkMigrationFromHA failing 100% times" [High,Confirmed] - Assigned to Miguel Lavalle (minsel)16:06
mlavallemanjeets took over that bug last week16:06
slaweqyes, I saw but I don't think his approach to fix that is good16:06
manjeets++16:06
mlavallehe literally stole it from my hands, despite my strong resistance ;-)16:07
slaweqLOL16:07
slaweqI can imagine that mlavalle :P16:07
mlavallemanjeets: I am going to assign the bug to you, ok?16:07
manjeetsmlavalle, ++16:07
slaweqI was looking on it also quickly during the weekend16:07
slaweqand I think that it is again some race condition or something like that16:08
manjeetsIt could be some subscribed callback as well ?16:08
slaweqIMO this ports set to down should comes from L2 agent, not directly from l3 service plugin16:08
slaweqbecasuse it is something like: neutron-sever sends notification that router is disabled to L3 agent, L3 agent removes ports so L2 agent updates ports to down status (probably)16:09
slaweqand probably there is no this notification send properly and because of that other things didn't happens16:09
slaweqhaleyb: mlavalle does it makes sense for You? or I missunderstood something in this workflow maybe?16:10
*** mjturek has quit IRC16:10
haleybslaweq: yes, that makes sense.  originally i thought it was in the hadbmode code, but l2 is maybe more likely16:11
mlavalleagree16:11
* manjeets take a note will dig into l2 16:11
haleybbut it's someone getting the event and missing something16:11
slaweqgive me a sec, I will check one thing according to that16:12
*** mjturek has joined #openstack-meeting16:12
slaweqso what I found was that when I was checking migrtation from Legacy to HA, ports were down after this notification: https://github.com/openstack/neutron/blob/master/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L5516:14
slaweqin case of migration from HA it didn't happen16:14
slaweqbut I didn't have more time on the airport to dig more into it16:15
manjeetsslaweq, you that notification wasn't called in case of HA ?16:15
manjeetsyou mean**16:15
slaweqI think that it was called but then "hosts" list was empty and it wasn't send to any agent16:16
slaweqbut it has to be checked still, I'm not 100% sure16:16
manjeetsi'll test that today16:16
slaweqok, thx manjeets16:17
manjeetsthe host thing if its empty in case16:17
slaweqso I will assign this to You as an action, ok?16:17
manjeetssure !16:17
slaweqthx16:17
slaweq#action manjeets continue debugging why migration from HA routers fails 100% of times16:17
slaweqok, lets move on16:17
slaweqnext one16:18
slaweq* mlavale to check issue with failing test_attach_volume_shelved_or_offload_server test16:18
manjeetsbut the issue occurs after migration to HA , migration from HA worked16:18
manjeetsslaweq, its migration to HA, from HA i think is fine16:18
slaweqmanjeets: http://logs.openstack.org/29/572729/9/check/neutron-tempest-plugin-dvr-multinode-scenario/21bb5b7/logs/testr_results.html.gz16:19
slaweqfirst example16:19
slaweqfrom HA to any other fails always16:19
mlavalleyes, that was my experience when I tried it16:19
manjeetsah ok got it !16:20
slaweq:)16:20
mlavalleare we moving on then?16:20
slaweqI think we can16:20
mlavalleThat bug was the reason I was almost late for this meeting16:21
mlavallewe have six instances in kibana over the past 7 days16:21
mlavalletwo of them is with the experimental queue16:21
slaweqso not too many16:21
*** mjturek has quit IRC16:22
mlavallenjohnston is playing with change 580450 and the experimental queue16:22
njohnstonyes16:22
slaweqso it's njohnston's fault :P16:22
njohnston=> fault <=16:22
njohnston:-)16:22
mlavallethe other failures are with neutron-tempest-ovsfw and tempest-multinode-full, which are non voting if I remember correctly16:22
slaweqIIRC this issue was caused by instance was not pinging after shelve/unshelve, right?16:23
mlavalleyeah16:24
mlavallewe get a timeout16:24
slaweqmaybe something was changed in nova then and this is not an issue anymore?16:24
mlavalleI'll dig a little longer16:25
mlavallebefore concluding that16:25
slaweqok16:25
mlavallefor the time being I am just making the point that it is not hitting us very hard16:25
slaweqthat is good information16:26
slaweqok, lets assign it to You for one more week then16:26
mlavalleyes16:26
slaweq#action mlavale to check issue with failing test_attach_volume_shelved_or_offload_server test16:26
slaweqthx mlavalle16:26
slaweqcan we go to the next one?16:26
* mlavalle has long let go of the hope of meeting El Comandante without getting homework16:27
slaweqLOL16:27
slaweqmlavalle: next week I will not give You homework :P16:27
mlavallenp whatsoever.... just taking the opportunity to make a joke16:28
slaweqI know :)16:28
slaweqok, lets move on16:28
slaweqlast one16:28
slaweqnjohnston to switch fullstack-python35 to python36 job16:28
*** bnemec has quit IRC16:28
*** bnemec has joined #openstack-meeting16:28
slaweqnjohnston: are You around?16:29
njohnstonyes, I have a change for that up; I think it just needs a little love now that the gate is clear16:29
njohnstonI'll check it and make sure it's good to go16:29
slaweqok, thx njohnston16:29
slaweqsounds good16:30
njohnstonhttps://review.openstack.org/59971116:30
slaweq#action njohnston will continue work on switch fullstack-python35 to python36 job16:30
slaweqok, that's all from last meeting16:30
slaweq#topic Grafana16:31
*** openstack changes topic to "Grafana (Meeting topic: neutron_ci)"16:31
slaweqhttp://grafana.openstack.org/dashboard/db/neutron-failure-rate16:31
slaweqI was checking grafana earlier today and there wasn't many problems there16:32
slaweqat least not problems which we are not aware of :)16:32
haleybnot since we removed the dvr-multinode from the gate :(16:32
slaweqhaleyb: yes16:32
slaweqso we have still neutron-tempest-plugin-dvr-multinode-scenario 100% failures but it's related to issue with migration from HA routers16:33
slaweqand issue related to grenade job16:33
slaweqother things I think I in quite good shape now16:34
slaweqI was recently checking also reasons of some failures in tempest jobs and it was usually some issues with volumes (I don't have links to examples now)16:34
mlavalleI have a question16:34
slaweqsure mlavalle16:34
mlavalleThis doesn't have an owner: https://bugs.launchpad.net/neutron/+bug/179198916:35
openstackLaunchpad bug 1791989 in neutron "grenade-dvr-multinode job fails" [High,Confirmed]16:35
slaweqyes, sorry16:35
*** longkb has quit IRC16:35
slaweqI forgot to assign myself to it16:35
slaweqI just did it now16:35
*** sambetts is now known as sambetts|afk16:35
mlavalleit is not voting for the time being, but we need to fix it right?16:35
slaweqyes, I was checking that even today16:35
mlavalleah ok, question answered16:35
mlavallethanks16:36
slaweq:)16:36
*** bobh has joined #openstack-meeting16:36
slaweqand I wanted to talk about it now as it's last point on my list for today :)16:36
mlavalleok16:36
slaweq#topic grenade16:36
*** openstack changes topic to "grenade (Meeting topic: neutron_ci)"16:36
slaweqso speaking about this issue16:36
slaweqyesterday I pushed patch https://review.openstack.org/#/c/602156/6/playbooks/legacy/neutron-grenade-dvr-multinode/run.yaml to neutron16:37
*** annabelleB has quit IRC16:38
*** gyee has joined #openstack-meeting16:38
slaweqtogether with depends-on from grenade https://review.openstack.org/#/c/602204/7/projects/60_nova/resources.sh it allowed me to log into at least controller node in this job16:38
slaweqso I tried today and then I spawned manually same vm as is spawned by grenade script16:39
slaweqand all worked perfectly fine, instance was pinging after around 5 seconds :/16:39
slaweqso now I added some additional logs to this grenade script: https://review.openstack.org/#/c/602204/9/projects/60_nova/resources.sh16:40
slaweqand I'm running this job once again: http://zuul.openstack.org/stream.html?uuid=928662f6de054715835c6ef9599aefbd&logfile=console.log16:40
slaweqI'm waiting for results of it16:40
*** mjturek has joined #openstack-meeting16:40
*** annabelleB has joined #openstack-meeting16:40
*** bobh has quit IRC16:40
slaweqI also compared packages installed on nodes in such failed job from this week with packages installed before 7.09 on job which passed16:41
slaweqI have list of packages which have different versions16:41
slaweqthere is different libvirt, linux-kernel, qemu, openvswitch16:41
slaweqso many potential culprits16:42
slaweqI think I will start with downgrading libvirt as it was updated in cloud-archive repo on 7.0916:42
haleybslaweq: we will eventually figure that one out!16:42
slaweqany ideas what else I can check/test/do here?16:43
mlavallechecking packages seems the right way to go16:43
slaweqyes, so I will try to send some DNM patches with downgraded each of those packages (except kernel) and will try to recheck them few times16:44
haleybyes, other than that you can keep adding debug commands to the script - eg for looking at interfaces, routes, ovs, etc, but packages is a good first step16:44
slaweqand see if issue will still happen on each of them16:44
slaweqhaleyb: yes, I just don't know if it's possible (and how to do it) to run such commands on subnode16:45
*** mjturek has quit IRC16:45
slaweqso currently I only added some OSC commands to check status of instance/port/fip on control plane level16:45
*** bobh has joined #openstack-meeting16:47
slaweqso I will continue debugging of this issue16:47
slaweqohh, one more thing, yesterday we spotted it with haleyb also in stable/pike job16:47
slaweqand when I was looking for this issue in logstash, I found that it happend couple of times in stable/pike16:47
slaweqless than in master but still it happend there also16:48
mlavallenice ctach16:48
mlavallecatch16:48
slaweqstrange thing is that I didn't saw it on stable/queens or stable/rocky branches16:48
slaweqso as this is failing on "old" openstack this means that it fails on neutron with stable/rocky and stable/ocata branches16:49
*** yamahata has joined #openstack-meeting16:50
slaweqand that's all as summary of this f..... issue :/16:50
mlavalleLOL16:50
slaweqI will assign it to me as an action for this week16:51
mlavalleI see you are learning some Frnech16:51
mlavalleFrench16:51
slaweq#action slaweq will continue debugging multinode-dvr-grenade issue16:51
slaweqmlavalle: it can be "French" :P16:51
* slaweq is becoming Hulk when has to deal with grenade multinode issue ;)16:52
njohnstonLOL!16:52
slaweqok, that's all from me about this issue16:52
slaweq#topic Open discussion16:52
*** openstack changes topic to "Open discussion (Meeting topic: neutron_ci)"16:52
slaweqdo You have anything else to talk about?16:53
njohnstonI just sent email to openstack-dev to inquire about the python3 conversion status of tempest and grenade16:53
slaweqthx njohnston, I will read it after the meeting then16:53
njohnstonif those conversions have not happened yet, and further if they need to be done globally, that could be interesting.16:54
njohnstonBut I'll try not to borrow trouble.  Thats it from me.16:55
slaweqspeaking about emails, I want to ask mlavalle one thing :)16:55
mlavalleok16:55
slaweqdo You remember to send email about adding some 3rd party projects jobs to neutron?16:55
mlavalleyes16:55
slaweqok, great :)16:55
slaweqok, so if there is nothing else to talk, I think we can finish now16:56
mlavalleThanks!16:56
slaweqthanks for attending16:57
slaweqand see You next week16:57
slaweqo/16:57
slaweq#endmeeting16:57
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"16:57
openstackMeeting ended Tue Sep 18 16:57:11 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:57
manjeetsbye16:57
openstackMinutes:        http://eavesdrop.openstack.org/meetings/neutron_ci/2018/neutron_ci.2018-09-18-16.00.html16:57
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2018/neutron_ci.2018-09-18-16.00.txt16:57
openstackLog:            http://eavesdrop.openstack.org/meetings/neutron_ci/2018/neutron_ci.2018-09-18-16.00.log.html16:57
njohnstonthanks all! o/16:57
*** cloudrancher has joined #openstack-meeting16:59
*** kopecmartin has quit IRC17:01
*** jchhatbar has joined #openstack-meeting17:14
*** mlavalle has left #openstack-meeting17:16
*** janki has quit IRC17:17
*** sfotony has quit IRC17:20
*** alexchadin has quit IRC17:25
*** arne_wiebalck has quit IRC17:29
*** cloudrancher has quit IRC17:40
*** jchhatbar has quit IRC17:40
*** jamesmcarthur has quit IRC17:41
*** bobh has quit IRC17:42
*** electrofelix has quit IRC17:45
*** hemna_ has joined #openstack-meeting17:48
*** goldenfri has joined #openstack-meeting17:52
*** diablo_rojo has joined #openstack-meeting17:57
*** bobh has joined #openstack-meeting18:02
*** mjturek has joined #openstack-meeting18:16
*** iyamahat has quit IRC18:17
*** yamahata has quit IRC18:17
*** alexchadin has joined #openstack-meeting18:20
*** alexchadin has quit IRC18:24
*** iyamahat has joined #openstack-meeting18:27
*** priteau has quit IRC18:28
*** iyamahat_ has joined #openstack-meeting18:30
*** iyamahat has quit IRC18:34
*** alexchadin has joined #openstack-meeting18:37
*** alexchadin has quit IRC18:40
*** alexchadin has joined #openstack-meeting18:41
*** alexchadin has quit IRC18:44
*** yamahata has joined #openstack-meeting18:48
*** dustins has quit IRC18:52
*** Shrews has joined #openstack-meeting18:54
*** annabelleB has quit IRC18:56
clarkbHello, anyone here for the infra meeting?19:00
*** annabelleB has joined #openstack-meeting19:00
clarkbI expect it may be a light crowd, but I'll probably run through the topics anyway as there is PTG related material and not everyone would have been able to attend the PTG19:00
ianwo/19:00
clarkb#startmeeting infra19:01
openstackMeeting started Tue Sep 18 19:01:09 2018 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.19:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:01
*** openstack changes topic to " (Meeting topic: infra)"19:01
openstackThe meeting name has been set to 'infra'19:01
clarkb#link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting19:01
clarkb#topic Announcements19:01
*** openstack changes topic to "Announcements (Meeting topic: infra)"19:01
Shrewsohai19:01
diablo_rojoo/19:01
clarkbMostly just a reminder that last week was the PTG and this week it seems like a good chunk of the team is either on vacation or something resembling vacation so that they can deal with a hurricane19:02
clarkb#topic Actions from last meeting19:02
*** openstack changes topic to "Actions from last meeting (Meeting topic: infra)"19:02
clarkb#link http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-09-04-19.01.txt minutes from last meeting19:02
clarkbThere weren't any concrete actions listed. It was prep for PTG for many of us I think19:03
clarkbMaybe I should do a high level recap of the PTG then we can dig in through the rest of our agenda19:03
clarkb#topic High level PTG recap19:03
*** openstack changes topic to "High level PTG recap (Meeting topic: infra)"19:03
ianw++19:04
clarkbThe first two days of the PTG the infra team was in the "Ask Me Anything" helproom19:04
clarkbThis ended up being fairly productive. We ended up helping OSA with various zuul related job questions as well as pointing them in the direction of how to reproduce those builds locally (I think we may see them push changes to make that possible in the near future)19:04
clarkbSwift came by a couple times as they were working on multinode functional testing and doc build updates19:05
notmynameclarkb: and we're just waiting on zuul to land it! thanks for your help (https://review.openstack.org/#/c/601686/)19:05
tdasilvathanks for the help :)19:05
clarkbTripleo/RDO are interesting in how we might do OVB (something virtual baremetal) testing. Which requires working PXE boot which requires layer 2 broadcast for dhcp in the current design19:06
clarkbone suggestion was to test the pxe provisioning independent of the workload those nodes run so that OVB can boot lightweight instances using our existing network overlays and qemu then test the workload on nodepool provisioned nodes so that nested virt isn't a problem19:07
clarkbSahara and others were interested in third part CI as well as zuul v319:07
clarkbianw: ^ I tried pushing people to your spec to see if we could get a volunteer to write out what that looks like in practice19:07
clarkbianw: the other popular idea was to just use software factory19:07
clarkbwhich apparently works very well out of the box at small scale (but needs tuning as things grow)19:07
clarkbI also helped StarlingX sketch out their docs publishing which intersected with AFS ( you may have seen chagnes from dtroyer and myself for that which I think are not yet merged )19:08
ianwi've actually updated that spec, so was interested if there was talk on it19:08
clarkbianw: the sense I got was that for most users they would be fine just using software factory19:08
clarkbwhich is maybe a reasonable thing to point people at19:09
clarkbJan from I forget South African networking company in particular was a fan of that19:09
clarkb(I'm really bad with names sorry)19:09
Shrewsyeah, i think SF is pretty solid and close to what we run19:09
Shrewsa few extras19:09
clarkbAnother recurrent topic in the helproom was working around nested virt19:10
clarkbI think where we have gotten with that is if we can continue to push on our cloud providers to have better communications channels we can explore the possibility of nested virt enabled flavors. Then jobs are best effort if they run on that. Basically you'd have to talk to the cloud not infra if they break in weird ways. I think that should happen once we've largely moved to bionic though19:10
clarkbnew kernels seem to lead to new nested virt trouble19:11
clarkbThat was Monday and Tuesday19:11
clarkbWednesday and Thurday were infra dedicated time. The first day was largely spent getting Zuul to trigger CD updates of Zuul via bridge.o.o19:11
clarkbTurns out this is more complicated than we originally anticipated. Current status is we have a job that will add bridge.o.o to its inventory then fail to ssh to bridge.o.o because we only allow ssh in as root from bridge and puppetmaster19:12
clarkb(we can talk about this more in the priority topic section of the meeting)19:12
* fungi is sorta here for a few minutes, but not completely and may disappear again shortly19:13
clarkbTo get to that point we addressed quite a few of the smaller issues we've run into with ansible. Including upgrading ansible to 2.7.0rcX on bridge.o.o which seemed to speed up execution of ansible19:13
clarkbOverall it was quite useful in getting ansible things running more reliably19:13
clarkbThursday was the last day of Infra dedicated time and we used it to talk about the other items on our etherpad. I and others tried to take notes about the conversation there. /me picks out some highlights19:13
clarkbThe openstack transition from xenial to bionic will ideally be managed by the openstack project with infra and zuul teams assisting19:14
*** raildo has quit IRC19:14
clarkbfungi: will be sending out mailing list merge plan details as soon as the hurricane allows19:14
fungiyup19:14
fungithe list creation change has been up for review since friday, btw19:15
clarkbThe lounge has come up again as a thing we shoudl try to run to make irc more friendly to new users19:15
clarkbttx volunteered to make that a 20% time project of his19:15
fungi#link https://review.openstack.org/602781 Create the OpenStack discussion mailing list19:15
clarkbmostly need to figure out authentication for our users there19:15
clarkbAnd finally there was a conversation in which kmalloc volunteered to start buildign a proof of concept for authentication/identity aggregation so that we can point gerrit and wiki and storyboard etc at a single identity provider that proxies for say launchpad, openstackid, github, or whatever else people want19:16
fungiideally 602781 will be merged and the resulting list locked down before i announce its existence19:16
clarkbfungi: would it be best for us to +2 then you can approve and lock down when ready?19:16
fungisure, wfm19:17
clarkbI think that is it for a high level recap19:17
clarkbBefore we talk the config management update priority effort anything I missed people want more info on?19:17
clarkbor topics that I skimmed too much on in general?19:17
*** mrhillsman has joined #openstack-meeting19:18
Shrewsany experimenting/discussing the dockerhub zuul/nodepool containers?19:18
clarkbShrews: we did discuss some of the pain points that SpamapS ran into with the pbrx generated images in the helproom19:18
clarkbShrews: I think we had general consensus that we should improve zuul's default behavior around daemonizing to be more container friendly as well as improving the documentation aroudn what each image is/does19:19
clarkbbut otherwise SpamapS said the images were working for him they just had to have some minor tweaks done19:19
Shrewscool19:19
clarkbzuul-base vs zuul image is confusing. Also you don't want to daemonize by default in docker containers19:20
*** raildo has joined #openstack-meeting19:20
clarkb#topic Update Config Management19:20
*** openstack changes topic to "Update Config Management (Meeting topic: infra)"19:20
clarkbDon't forget to review 'topic:puppet-4 and topic:update-cfg-mgmt' changes in gerrit19:21
clarkbThis is what msot of the time was dedicated to at the PTG. cmurphy has got quite a few services puppeting under puppet 4 parser (and improved the inventory listing to make it less merge conflicty)19:21
clarkbThen mordred corvus jhesketh myself et al worked on setting up Zuul to trigger ansible playbook runs on bridge.o.o so that we can run CD from zuul19:22
clarkbThere are two big things we need to address to make ^ possible. The first is we only allow ssh as root into the bridge from the bridge and puppetmaster. We need a zuul user on bridge or we need to allow the zuul executors to ssh into bridge.o.o as root19:23
clarkbI'm actually leaning towards adding a zuul user myself since we already manage arbitrary users that can ssh in and sudo19:23
clarkbThe other item is our support for nil nodeset jobs is pretty basic in our base jobs19:23
clarkbwe don't get all the same logging for example nor do we clean out the master ssh key from the ssh agent like we do on jobs with a nodeset19:24
clarkbAnother item to be aware of is mordred is writing a new inventory plugin for us that will more quickly filter and add nodes to groups than the current constructed plugin can do (the constructed plugin accounts for like 2 minutes of every ansible-playbook run right now, it isn't fast)19:25
clarkbmordred: ^ you aren't around today to talk about that are you19:25
clarkbthere were some unexpected runtime behaviors around removing nodes from groups that I don't think have been solved yet19:25
Shrews2 minutes? eek19:26
clarkbShrews: ya its creating and evaluating a jinja template for all our groups X all our hosts19:26
clarkbsomething like 10k jinja template evaluations19:26
fungimatrix expansion at its finest19:26
Shrewscartesian inventory ftw19:27
clarkbto summarize this topic we are in a much more reliable state for running ansible frequently in a loop than we were last week. But still work to be done to get zuul talking to bridge.o.o to trigger ansible runs there19:27
clarkbAlso the puppet-4 futureparser work continues and if you can help review those changes I'm more than happy to approve them when I can monitor them :)19:28
clarkb#topic General Topics19:28
*** openstack changes topic to "General Topics (Meeting topic: infra)"19:28
clarkbLets keep moving because we have half an hour left and plenty of other items related to PTG to talk about :)19:28
clarkbOpenDev19:28
clarkbit hda been about 3 weeks without seeing any strong opposition to the OpenDev name (the only thing I saw aws concern about confusion with the conference and fungi made a strong argument for why that shouldn't be a problem and that seemed to settle it on the therpad)19:29
clarkbI went ahead and decided it was important for us to keep making progress on this and did the no news is good news thing and sent email  to the list saying we should go with it19:29
clarkbone thing corvus brought up at the PTG is that we should enlist the help of the OpenStack Foundation to draft clear communications about what this means for existing and potential new projects19:30
*** priteau has joined #openstack-meeting19:30
*** tobiash has joined #openstack-meeting19:30
clarkbone concern in particular being that if we aren't careful we could send the message that this is just a rename of the openstack infra team and they exist only to serve openstack the project19:30
clarkbI've got it on my todo to talk to the foundation about drafting this19:31
clarkbFrom a technical perspective we will spin up ns1.opendev.org ns2.opendev.org and adns1.opendev.org. Host opendev.org DNS there and then migrate zuul-ci.org DNS there then delete ns*.openstack.org19:31
clarkbthat is sort of a step 0 to starting to host things with that name branding.19:32
clarkbcorvus: has volunteered to spin those up19:32
fungijust want to chime in with my support for self-hosting dns as the day-zero prerequisite19:32
clarkbThe last related item on my list si that we should consider booting new servers eg etherpad01.openstack.org replacement as etherpad01.opendev.org in nova so that we don't have to replace everything once we do trusty upgrades19:33
clarkbthat will likely be a case by case basis (and in that particular example it would continue to run etherpad.openstack.org since dns won't necessarily be up yet and comms won't have been sent out yet)19:33
clarkbjust a nova api bookkeeping item.19:33
*** mjturek has quit IRC19:34
*** alexchadin has joined #openstack-meeting19:34
clarkbQuestions, concerns, thoughts on OpenDev before we move on to the next item?19:34
mnaser(yay for opendev)19:34
*** bobh has quit IRC19:34
Shrewsare we going to use resources donated for openstack for opendev?19:35
Shrewsany concerns around that?19:35
* fungi is looking forward to typing two fewer keys for our urls ;)19:35
clarkbShrews: the clouds we've spoken too so far (mostly vexxhost and potential arm resources) seem tothink that being a little more generic is easier to sell19:36
clarkbShrews: but we haven't talked to all of them yet.19:36
clarkbShrews: what I like to point out is that tripleo + nova + neutron are ~80% of our resource utilization19:36
clarkband I don't expect that will change19:36
fungisome of the sentiment was that resources are being donated to primarily serve the communities operated under the osf, and that won't really change19:36
Shrewsmakes sense19:36
clarkbShrews: and we've always hosted related projects its just that many are unwilling to do so with "openstack" featured so prominently with branding and naming19:37
clarkbits definitely somethign to be aware of though19:37
Shrews19:37
Shrewsk19:37
*** alexchadin has quit IRC19:38
clarkbSpeaking of ARM we have a new linaro cloud region19:38
clarkbianw: ^ thank you for setting that up19:38
ianwi think fungi has leads on more arm resources?19:39
clarkbin parallel to that Gary Perkins is spinning up another arm cloud https://review.openstack.org/#/c/602436/19:39
ianwthere's certainly a review out for adding credentials19:39
clarkbya I think fungi got the secret portions of the credentials ^ fungi any idea if we are good to move forward on that at this point?19:39
fungiyeah, if someone has time to pick up the creds are in a file in my homedir on bridge.o.o at the moment19:39
clarkbah, we need to add them to hiera but after that should be good to go?19:40
ianwfungi: ahh, cool, i can take a look.  i know we got confirmation it works with the same images nb03.o.o is producing, which is good19:40
fungi~fungi/temp_clouds.yaml19:40
fungii was trying to get it working well enough with osc to reset the password19:40
fungibut didn't get quite that far yet19:40
fungiwas running into some sort of error i think with my syntax there19:40
clarkbianw: ^ let me know if I can help too19:41
fungianyway, feel free to take that over, i was mostly acting as an in-person secrets receptacle19:41
clarkbprometheanfire has been working to get gentoo images running in nodepool19:42
clarkbI think those are close but may need some small tweaks still. Mostly a heads up that this is happening and probably still slightly broken19:42
ianwi guess we should promote that to voting jobs in dib19:43
clarkbmost of the issues so far have been around unbound config, gentoo is different than other distros19:43
clarkbI think we are past that particular set of problems now though19:43
ianwmy one concern is that nobody but prometheanfire has really every maintained those in dib19:43
clarkbya, I'm not really a gentooian myself having never run it19:44
clarkbbut we do have other gentoo people in the larger community (like ttx)19:44
Shrewswhat's the need for gentoo?19:44
Shrewsjust more os coverage?19:44
clarkbShrews: OSA is interested in supporting it, and the consequence of that will be that gentoo doesn't need their own tooling for openstack support19:44
Shrewsgotcha19:45
fungicertainly a good option for some kinds of bleeding-edge versions of dependencies and platform testing19:45
*** awaugama has quit IRC19:45
Shrewsor for folks who really like waiting on compiles19:46
persiaThere was some talk of precompiling most things for the test nodes19:46
clarkbHeads up that OVH is doing openstack upgrades in BHS1 tonight. THis has a couple of consequences for us. First is that we'll disable nodepool in BHS1 this afternoon (I'll approve that change later today). The other is my IRC bouncer runs on ovh openstack in BHS1 so uh I may not be around in the morning while I recover :)19:47
fungiif i ever get around to it i'll try to get debian unstable images building too (which is remarkably stable, contrary to its name, i've run it for a nearly two decades)19:47
clarkbOn October 10th they will do the same in GRA119:47
clarkbAnd the last thing on my list of agenda items is there is an openstack baord meeting in 13 minutes19:47
*** bobh has joined #openstack-meeting19:47
clarkb#link http://lists.openstack.org/pipermail/foundation/2018-September/002620.html OpenStack Board Meeting after this meeting19:48
mordredclarkb: ohai19:48
clarkbThis meeting will cover the plans for focus areas and new projects like zuul and starlingx and airship and kata19:48
mordredclarkb: sorry my brain still isn't quite working19:48
fungia wild board member appears!19:48
clarkbI know that particular topic is of interest to many around here so feel free to join19:48
*** dustins has joined #openstack-meeting19:48
Shrewsmordred: that implies that it once *did* work19:48
clarkbI'll dial in then work on this conference talk slide deck probably :)19:49
*** mjturek has joined #openstack-meeting19:49
clarkbalso lunch19:49
fungii have to miss the board call this time, but have confidence that others will fill me in19:49
clarkb#topic Open Discussion19:49
*** openstack changes topic to "Open Discussion (Meeting topic: infra)"19:49
ianwfungi: my laptop sleep state would like to disagree with you on debian unstable stability ;)19:49
clarkbAnything else? feel free to dig into mroe PTG related stuff too19:49
clarkbianw: ha19:50
fungiianw: fair, let's try to avoid putting our servers to sleep ;)19:50
clarkbfungi: that is how live migration works fwiw19:50
fungisomething tells me we don't care if a live migration of one of our test nodes fails19:50
*** bobh has quit IRC19:52
clarkbmordred: when your brain does start working you may want to sync up on the current zuul cd to brideg issues19:52
clarkbmordred: they are just tricky enough that I think we should consider them a bit before choosing a path forward19:52
ianwif people want to look at the 3rd party ci spec, i think it's about done19:54
*** alexchadin has joined #openstack-meeting19:54
ianw#link https://review.openstack.org/#/c/563849/19:54
mordredclarkb: ++19:54
ianwhowever, i'm also willing to promote softwarefactory to a more prominent "this is a solution", if that's what people want19:54
clarkbianw: I think its what people are doing, unsure if its what they all want19:55
clarkbbut those doing it with software factory do seem happy with it19:55
kmalloco/19:55
* kmalloc reads backscroll and confirms what folks said re: volunteering19:56
clarkbSeems like we are winding down now. Thank you everyone. If any of this doesn't make sense or is crazy or needs further clarification please do reach out to me or start a mailing list thread and we can discuss it further19:57
clarkb(I'm happy to start a thread if you aren't comfortable with it either)19:58
clarkbwith that I'll give you all a minute to dial into the baord meeting if you choose or have breakfast or dinner etc :)19:58
clarkb#endmeeting19:58
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"19:58
openstackMeeting ended Tue Sep 18 19:58:35 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)19:58
openstackMinutes:        http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-09-18-19.01.html19:58
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-09-18-19.01.txt19:58
openstackLog:            http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-09-18-19.01.log.html19:58
*** Shrews has left #openstack-meeting19:58
*** bobh has joined #openstack-meeting19:58
*** gouthamr has quit IRC20:00
*** stevebaker has quit IRC20:01
*** dmellado has quit IRC20:01
*** david-lyle is now known as dklyle20:02
*** gouthamr has joined #openstack-meeting20:03
*** bobh has quit IRC20:03
*** raildo_ has joined #openstack-meeting20:04
*** alexchadin has quit IRC20:04
*** raildo has quit IRC20:06
*** bobh has joined #openstack-meeting20:16
*** erlon has quit IRC20:16
*** bobh has quit IRC20:20
*** rmascena__ has joined #openstack-meeting20:26
*** jamesmcarthur has joined #openstack-meeting20:26
*** bobh has joined #openstack-meeting20:27
*** raildo_ has quit IRC20:29
*** munimeha1 has joined #openstack-meeting20:31
*** bobh has quit IRC20:31
*** rmascena__ has quit IRC20:35
*** bobh has joined #openstack-meeting20:37
*** bobh has quit IRC20:41
*** janders_ has joined #openstack-meeting20:43
*** efried has quit IRC20:44
*** efried has joined #openstack-meeting20:44
*** mjturek has quit IRC20:47
*** bobh has joined #openstack-meeting20:48
*** bobh has quit IRC20:52
*** oneswig has joined #openstack-meeting20:54
*** rbudden has joined #openstack-meeting20:55
*** r-mibu has quit IRC20:56
*** b1air has joined #openstack-meeting20:57
*** priteau has quit IRC20:57
*** bobh has joined #openstack-meeting20:57
*** priteau has joined #openstack-meeting20:59
oneswigit's that time again21:00
oneswig#startmeeting scientific-sig21:00
openstackMeeting started Tue Sep 18 21:00:26 2018 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
janders_g'day everyone21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: scientific-sig)"21:00
openstackThe meeting name has been set to 'scientific_sig'21:00
oneswiggreetings janders_ and all21:00
oneswigwhat's new?21:00
oneswig#link agenda for today is https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_September_18th_201821:01
*** bobh has quit IRC21:02
oneswigTomorrow is upgrade day over here... but this specific time it's Pike->Queens21:02
oneswigWe've been doing the drill on the staging environment but there's nothing quite like the real thing ...21:03
janders_oneswig: what are the main challenges?21:03
oneswigIn this case, not too many.  One concern is correctly managing resource classes in Ironic21:04
janders_right! are you doing BIOS/firmware upgrades as well?21:04
oneswigoh no.  That's not in the plan (should it be I wonder?)21:05
b1airo/21:05
oneswigG'day b1air, which airport are you in today? :-)21:05
b1airDo all the changes all at once!!21:05
janders_if you were to, would you use something like lifecycle manager, or would you temporarily boot ironic nodes into a "service image" with all the tools?21:05
b1airVery near AKL as it happens21:05
oneswigFighting talk from a safe distance, that21:06
b1air;-)21:06
oneswigjanders_: last time we did this, it was the latter - a heat stack for all compute instances with a service image in it.21:06
*** martial_ has joined #openstack-meeting21:07
janders_right! in a KVM-centric world, it's easy - just incorporate all the BIOS/FW management tools in the image. Ironic changes this paradigm so I was wondering how do you go about it. Might be an interesting forum topic.21:07
martial_(difficulty joining on the phone)21:07
oneswigHave you seen an Ansible playbook for doing firmware upgrades via the dell idrac?21:07
oneswigHello martial_21:07
*** diablo_rojo has quit IRC21:07
oneswig#chair b1air martial_21:07
openstackCurrent chairs: b1air martial_ oneswig21:07
oneswig(remiss of me)21:07
janders_do you pxeboot the service image via ironic or outside ironic?21:07
oneswigIn that case we booted it like a standard compute instance, via Ironic21:08
b1airKVM world easy? Pull the other one @janders_ ! :-)21:08
janders_no.. I looked at the playbooks managing the settings but not the BIOS/FW versions. If it works (and I'm not worried about the playbooks, I'm worried about the Dell hardware side :) it'd be gold21:08
janders_oneswig: does this mean you had to delete all the ironic instances first?21:09
janders_b1air: KVM world is easy in this one sense :)21:09
oneswigIn that case, yes - I guess the lifecycle manager could have avoided that, do you think?21:09
*** bobh has joined #openstack-meeting21:09
janders_oneswig: yes - it will do all of this in the pre-boot environment (if it works..)21:10
*** tssurya has quit IRC21:10
janders_when I say "if it works" - on our few hundred nodes of HPC it definitely works for 70-95% nodes. Success rates vary. The ones that failed usually just need more attempts.. (thanks, Dell)21:10
b1airPower drain?21:11
janders_however I am unsure if Mellanox firmware can be done via Lifecycle Controller (we usually do this part from the compute OS)21:11
oneswigjanders_: is this the playbooks at https://github.com/dell/Dell-EMC-Ansible-Modules-for-iDRAC ?21:11
b1airjanders_: only if it is a Dell OEM Mellanox part - that's the value add21:12
janders_b1air: most of our HCAs are indeed OEM - I need to revisit this (I guess the guys have always done this with mft & flint, cause it works 99/100) - in the ironic world doing everything from LC could simplify things21:13
*** bobh has quit IRC21:14
janders_closer to the main topics - from your experience, how big do the forum sessions typically get?21:14
oneswigjanders_: there has also been talk previously of performing these actions as a manual cleaning step - less obtrusive but without out-of-band dependencies on idrac21:14
b1airAt Monash we found the LCs to be ok reliability-wise from 13G21:15
oneswigjanders_: perhaps we should, indeed, look at the agenda..21:15
oneswig#topic Forum sessions21:15
*** openstack changes topic to "Forum sessions (Meeting topic: scientific-sig)"21:15
oneswigForum sessions I've been in have ranged in size from ~8 people to ~50 (but about 12 holding court)21:16
janders_oneswig: this is a neat way to do it in a rolling fashion - however the drawback is having a mix of versions for quite a while as users delete/reprovision the nodes. I'm trying to come up with an option of doing it all in a defined downtime window, without affecting existing ironic instances.21:16
janders_b1air: that is great to hear! :)21:16
janders_oneswig: that is good - it shouldn't be impossible to get some bandwidth in these sessions! :)21:16
oneswigI get the feeling one on Ironic and BIOS firmware management could be interesting!21:17
oneswigFacilitating it but also, conversely, preventing it21:17
*** bobh has joined #openstack-meeting21:17
*** gouthamr_ has joined #openstack-meeting21:18
priteaujanders_: I think at CERN they have a way of letting the instance owner select their downtime period21:19
priteauI am trying to find where I saw it described21:19
oneswigGood evening priteau!21:20
janders_wow - very cool idea.. I wonder if it's leveraging AZs (which might have different downtime windows) or something else21:20
priteauHi everyone by the way :-)21:20
priteaujanders_: it may even be per-host21:20
*** dmellado has joined #openstack-meeting21:20
b1airSounds a bit like AWS' reboot/downtime scheduling API21:21
*** bobh has quit IRC21:22
janders_thinking about it - if it's just the instance that's supposed to be up and it has no volumes etc attached it can be quite fine grained21:22
janders_however if the instance is leveraging any services coming off the control plane, it might be tricky to go below AZ-level downtime21:23
janders_or at least that's my quick high level thought without looking into details21:23
janders_very interesting topic though! :)21:23
oneswigquestion of procedure - do we add a proposal like this to the Ironic forum etherpad, or mint our own SIG etherpad and add it to the list?21:24
*** rmascena__ has joined #openstack-meeting21:25
priteauI found http://openstack-in-production.blogspot.com/2018/01/keep-calm-and-reboot-patching-recent.html, but it's not how I remember it21:25
*** rmascena__ has quit IRC21:26
*** gouthamr has quit IRC21:26
oneswigAnother area I am interested in pursuing is support for the recent features introduced to Ironic for alternative boot methods (boot from volume, boot to ramdisk) - is there scope for getting these working with multi-tenant networking?21:26
priteauMaybe there is another procedure for the less critical upgrades21:26
*** bobh has joined #openstack-meeting21:27
*** Emine has joined #openstack-meeting21:28
janders_oneswig: alternative boot methods would definitely be of interest. Looking at the PTG notes there are some good ideas so it looks like the next step would be to find out if/when these ideas can be implemented21:30
janders_something from my side (across all the storage-related components) would be BeeGFS support/integration in OpenStack21:31
oneswigOoh, interesting.21:31
janders_would you guys be interested in this, too?21:31
oneswigLike, in Manila?21:31
janders_yes, that's the most powerful scenario21:31
oneswigAbsolutely!  We've got playbooks for it, but nothing "integrated"21:32
*** bobh has quit IRC21:32
oneswig(but does it need to be?)21:32
janders_but running VM instances (for those who still need VMs) and cinder volumes off BeeGFS would be of value as well21:32
oneswigThat follows quite closely what IBM was up to with SpectrumScale21:32
janders_given no kerberos support in BeeGFS for the time being I think it would be very useful to have some smarts there21:33
oneswigOK, let's get these down...21:33
janders_haha! you found the logic behind my thinking21:33
oneswig#link SIG brainstorming ideas https://etherpad.openstack.org/p/BER-stein-forum-scientific-sig21:33
janders_I liked what IBM have done with GPFS/Spectrum however I find deploying and maintaining this solution more and more painful as time goes21:33
janders_I see the same sentiment on the storage side21:34
janders_"it's good, but..."21:34
janders_I'll add some points to the etherpad now21:34
janders_ok, you already have - thank you! :)21:35
janders_another storage related idea21:36
*** stevebaker has joined #openstack-meeting21:36
janders_would you find it useful to be able to separate storage backends for instance boot drives and ephemeral drives?21:36
janders_I like the raw performance of node-local SSD/NVMe21:36
janders_however having something more resilient (and possibly shared) for the boot drive is good, too21:37
janders_I would happily see support for splitting the two up (I do not think this is possible today, please correct me if I am wrong)21:37
goldenfriI was just thinking about that today, so I 2nd that21:37
janders_in this case, we could even wipe ephemeral on live migration (this would have to be configurable) so only the boot drive needs to persist21:38
oneswigIt seems like a good idea to me, certainly worth suggesting21:38
janders_ok!21:38
*** bobh has joined #openstack-meeting21:38
oneswighello goldenfri!21:38
goldenfrio/21:39
priteaujanders_: if the ephemeral storage is mounted while live migrating, wouldn't the guest OS complain if data gets wiped out?21:40
*** Emine has quit IRC21:40
janders_good point, there would have to be some smarts around it. I don't have this fully thought through yet, but I think the capability would be useful. Perhaps cloud-* services could help facilitate this?21:42
oneswigOK we are linked up to https://wiki.openstack.org/wiki/Forum/Berlin2018#Etherpads_from_Teams_and_Working_Groups21:42
janders_but obviously if there's heavy IO hitting ephemeral, some service trying to umount /dev/sdb won't have a lot of luck..21:42
b1air+1 to janders_ ephemeral separation feature request21:42
*** bobh has quit IRC21:43
priteaujanders_: VM-aware live migration?21:43
b1airI see it more likely to be used with cold migration21:43
b1airWhere you have a fleet of long lived instances that you want to move around due to underlying maintenance etc21:44
janders_another thing I'm looking at is using trim/discard like features for node cleaning - however bits of this might be already implemented, looking at ironic and pxe_idrac/pxe_ilo bits21:46
janders_have any of you used this with success?21:46
janders_(I might have asked this question here already, not sure)21:46
b1airYes I recall discussing this before, but don't think anything came of it yet21:47
oneswigDid we cover this last week?  I think there's an Ironic config parameter for key rotation21:47
b1airWith hardware encrypted storage?21:47
*** bobh has joined #openstack-meeting21:47
oneswigWe use it, and when I checked up I believe it was as simple as that - with the caveat that some of the drives needed a firmware update (of course!)21:48
priteaujanders_: you asked last week ;-) http://eavesdrop.openstack.org/meetings/scientific_sig/2018/scientific_sig.2018-09-12-11.00.log.html#l-13921:48
oneswigb1air: hardware encryption as I understand it but with an empty secret.21:48
*** bobh has quit IRC21:48
oneswigSo not really encryption...21:49
*** bobh has joined #openstack-meeting21:49
b1airCunning - the baddies will never suspect an empty password!21:49
janders_oneswig: :) I've discussed this with too many parties and lost track (scientific-sig, RHAT, Dell, ... )21:50
oneswigjanders_: your comrades here are the source of truth, you can't trust those other guys :-)21:50
janders_that's right :) can't trust those sales organisations21:51
oneswigThere was one other matter to cover today, before I forget21:51
priteaukeycloack21:51
oneswig#topic SIG event space at Berlin21:51
*** openstack changes topic to "SIG event space at Berlin (Meeting topic: scientific-sig)"21:51
oneswigpriteau: I think we have that on the agenda for next week21:52
priteauOh, I looked at the wrong week :-)21:52
oneswigI know - it's a handy aide memoire for me, probably confusing for anyone else!21:52
oneswigAnyway - We have the option of 1 working group session + 1 bof session (ie, what we've had at previous summits).21:53
oneswigI think this works well enough, unless anyone prefers to shorten it?21:53
oneswigb1air? martial_? Thoughts on that?21:53
janders_I have couple more forum ideas - given we're running low on time I will fire these away now21:56
oneswigPlease do.21:57
janders_1) being able to schedule a bare-metal instance to a specific piece of hardware (I don't think this is supported today) - would this be useful to you?21:57
janders_think --availability-zone host:x.y.z equivalent for Ironic21:57
oneswigOn the SIG events - looks like Wednesday morning is clear for the AI-HPC-GPU track21:57
oneswigjanders_: I believe that exists, in the form of a three-tuple delimited by colons21:58
janders_2) I don't think "nova rebuild" works with baremetal instances - I think it would be something useful21:58
oneswigThe form might be nova::<Ironic uuid of the node>21:58
oneswigOn 2, are you sure? I think I've rebuilt Ironic instances before21:59
*** dustins has quit IRC21:59
oneswigLet's follow up on that...21:59
janders_in this case, I will retest both and update the etherpad as required21:59
oneswiggood plan, let us know!21:59
oneswigOK, we are out of time22:00
oneswigThanks everyone22:00
oneswigkeep adding to that etherpad if you get more ideas we should advocate22:00
*** goldenfri has quit IRC22:00
oneswighttps://etherpad.openstack.org/p/BER-stein-forum-scientific-sig22:00
janders_thanks guys!22:00
oneswig#endmeeting22:01
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"22:01
openstackMeeting ended Tue Sep 18 22:01:02 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:01
openstackMinutes:        http://eavesdrop.openstack.org/meetings/scientific_sig/2018/scientific_sig.2018-09-18-21.00.html22:01
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/scientific_sig/2018/scientific_sig.2018-09-18-21.00.txt22:01
openstackLog:            http://eavesdrop.openstack.org/meetings/scientific_sig/2018/scientific_sig.2018-09-18-21.00.log.html22:01
oneswiguntil next time!22:01
b1airBye!22:01
*** oneswig has quit IRC22:02
*** rbudden has quit IRC22:03
*** janders_ has quit IRC22:05
*** priteau has quit IRC22:05
*** slaweq has quit IRC22:26
*** annabelleB has quit IRC22:36
*** diablo_rojo has joined #openstack-meeting22:36
*** annabelleB has joined #openstack-meeting22:37
*** tetsuro has joined #openstack-meeting22:45
*** rcernin has joined #openstack-meeting22:47
*** ykatabam has joined #openstack-meeting22:47
*** janki has joined #openstack-meeting22:47
*** tpsilva has quit IRC22:52
*** annabelleB has quit IRC22:56
*** munimeha1 has quit IRC22:57
*** mriedem has quit IRC22:59
*** jamesmcarthur has quit IRC23:00
*** jamesmcarthur has joined #openstack-meeting23:01
*** hongbin has quit IRC23:03
*** jamesmcarthur has quit IRC23:07
*** slaweq has joined #openstack-meeting23:11
*** Leo_m has quit IRC23:11
*** slaweq has quit IRC23:16
*** martial_ has quit IRC23:16
*** dmacpher has quit IRC23:26

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!