Tuesday, 2018-11-27

*** _alastor_ has quit IRC00:02
*** jesusaur has quit IRC00:02
*** tetsuro has joined #openstack-meeting00:04
*** cloudrancher has quit IRC00:08
*** cloudrancher has joined #openstack-meeting00:09
*** gouthamr has left #openstack-meeting00:14
*** gouthamr has joined #openstack-meeting00:14
*** slaweq has joined #openstack-meeting00:16
*** mriedem_away has quit IRC00:23
*** jesusaur has joined #openstack-meeting00:23
*** slaweq has quit IRC00:24
*** ijw has quit IRC00:28
*** ijw has joined #openstack-meeting00:29
*** Liang__ has joined #openstack-meeting00:32
*** armstrong has quit IRC00:34
*** longkb has joined #openstack-meeting00:39
*** longkb has quit IRC00:39
*** _alastor_ has joined #openstack-meeting00:54
*** _alastor_ has quit IRC00:59
*** jamesmcarthur has joined #openstack-meeting01:04
*** jamesmcarthur has quit IRC01:08
*** slaweq has joined #openstack-meeting01:16
*** slaweq has quit IRC01:24
*** ijw has quit IRC01:47
*** ijw has joined #openstack-meeting01:48
*** ijw has quit IRC01:52
*** hongbin has joined #openstack-meeting01:55
*** slaweq has joined #openstack-meeting02:13
*** bzhao__ has joined #openstack-meeting02:20
*** slaweq has quit IRC02:24
*** bzhao__ has quit IRC02:27
*** bzhao__ has joined #openstack-meeting02:34
*** cloudrancher has quit IRC02:38
*** cloudrancher has joined #openstack-meeting02:39
*** psachin has joined #openstack-meeting02:40
*** yamahata has quit IRC02:41
*** iyamahat_ has quit IRC02:41
*** mhen has quit IRC02:43
*** mhen has joined #openstack-meeting02:47
*** whoami-rajat has joined #openstack-meeting02:50
*** ijw has joined #openstack-meeting02:56
*** tashiromt__ has joined #openstack-meeting02:59
*** ijw has quit IRC03:01
*** tashiromt__ has quit IRC03:03
*** Liang__ is now known as LiangFang03:06
*** dtroyer has quit IRC03:10
*** slaweq has joined #openstack-meeting03:16
*** artom has quit IRC03:21
*** artom has joined #openstack-meeting03:24
*** slaweq has quit IRC03:24
*** sridharg has joined #openstack-meeting03:33
*** ijw has joined #openstack-meeting03:34
*** ijw has quit IRC03:39
*** diablo_rojo has quit IRC03:42
*** tpatil has joined #openstack-meeting03:52
*** tashiromt has joined #openstack-meeting03:59
tpatilToday Sampath might join late or skip this meeting as he is busy attending a conference, so I will chair today's Masakari meeting04:01
tpatil#startmeeting Masakari04:01
openstackMeeting started Tue Nov 27 04:01:15 2018 UTC and is due to finish in 60 minutes.  The chair is tpatil. Information about MeetBot at http://wiki.debian.org/MeetBot.04:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.04:01
*** openstack changes topic to " (Meeting topic: Masakari)"04:01
openstackThe meeting name has been set to 'masakari'04:01
tpatil#topic Bugs (stuck/critical)04:01
*** openstack changes topic to "Bugs (stuck/critical) (Meeting topic: Masakari)"04:01
tpatilLast week this patch was merged : https://review.openstack.org/#/c/615752/04:02
tpatilNeed to confirm if this issue is present in stable branch Rocky release04:02
tpatilSampath was going to check it at his end04:03
*** sagara has joined #openstack-meeting04:04
tpatilSince he is not here, I will ask him about this status later04:04
*** hongbin has quit IRC04:06
tpatilDo you want to discuss any other critical bugs?04:06
*** tashiromt_ has joined #openstack-meeting04:07
tpatilOk. Moving ahead04:08
tpatil#topic Stein Work Items04:08
*** openstack changes topic to "Stein Work Items (Meeting topic: Masakari)"04:08
tpatilAdd functional tests to openstacksdk04:09
*** iyamahat has joined #openstack-meeting04:09
tpatilTwo days ago, patch proposed to add functional tests in openstacksdk got merged04:10
tpatil#link https://review.openstack.org/#/c/615744/04:10
tpatilNow, need to work on adding detailed functional tests in masakari04:10
*** tashiromt has quit IRC04:10
*** slaweq has joined #openstack-meeting04:11
sagaratpatil: Are there other work items?04:13
tpatilNotification implementation04:13
tpatilA few unit tests are failing; the team is looking into this issue04:14
tpatilwill propose the patches as soon as it is done04:14
*** ShilpaSD has joined #openstack-meeting04:14
tpatilThat's all the updates from my end about stein work items04:14
sagaratpatil: Thanks04:15
tpatilIf no other topic is left for discussion, let's end this meeting early04:16
sagaratpatil: From my side, nothing.04:17
tpatilOK, let's end this meeting04:17
tpatilThank you, All04:18
tpatil#endmeeting04:18
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"04:18
openstackMeeting ended Tue Nov 27 04:18:05 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)04:18
openstackMinutes:        http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-11-27-04.01.html04:18
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-11-27-04.01.txt04:18
openstackLog:            http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-11-27-04.01.log.html04:18
*** sagara has quit IRC04:18
*** tashiromt_ has quit IRC04:22
*** slaweq has quit IRC04:24
*** yamahata has joined #openstack-meeting04:27
*** janki has joined #openstack-meeting04:49
*** ijw has joined #openstack-meeting04:50
*** tpatil has quit IRC04:52
*** ijw has quit IRC04:55
*** ijw has joined #openstack-meeting05:10
*** ijw has quit IRC05:14
*** ijw has joined #openstack-meeting05:28
*** ijw has quit IRC05:33
*** ijw has joined #openstack-meeting05:47
*** ijw has quit IRC05:51
*** jamesmcarthur has joined #openstack-meeting06:04
*** ijw has joined #openstack-meeting06:06
*** jamesmcarthur has quit IRC06:09
*** ijw has quit IRC06:10
*** slaweq has joined #openstack-meeting06:11
*** slaweq has quit IRC06:24
*** _alastor_ has joined #openstack-meeting06:35
*** arne_wiebalck_ has joined #openstack-meeting06:36
*** arne_wiebalck_ has quit IRC06:37
*** _alastor_ has quit IRC06:40
*** Luzi has joined #openstack-meeting06:45
*** rcernin has quit IRC06:58
*** dkushwaha has joined #openstack-meeting07:02
*** jawad_axd has joined #openstack-meeting07:16
*** aojea has joined #openstack-meeting07:21
*** pcaruana has joined #openstack-meeting07:23
*** kopecmartin|off is now known as kopecmartin07:26
*** slaweq has joined #openstack-meeting07:42
*** bhagyashris has joined #openstack-meeting07:52
*** joxyuki has joined #openstack-meeting07:53
*** phuoc has joined #openstack-meeting07:59
*** longkb has joined #openstack-meeting08:00
*** YanXing_an has joined #openstack-meeting08:02
dkushwaha#startmeeting tacker08:03
openstackMeeting started Tue Nov 27 08:03:10 2018 UTC and is due to finish in 60 minutes.  The chair is dkushwaha. Information about MeetBot at http://wiki.debian.org/MeetBot.08:03
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.08:03
*** openstack changes topic to " (Meeting topic: tacker)"08:03
openstackThe meeting name has been set to 'tacker'08:03
phuochi08:03
dkushwaha#topic Roll Call08:03
*** tpatil has joined #openstack-meeting08:03
*** openstack changes topic to "Roll Call (Meeting topic: tacker)"08:03
YanXing_anhi08:03
phuoco/08:03
dkushwahawho is here for Tacker weekly meeting ?08:03
bhagyashrisO/08:04
dkushwahahello YanXing_an phuoc08:04
dkushwahaok lets start08:05
dkushwaha#chair phuoc YanXing_an08:05
openstackCurrent chairs: YanXing_an dkushwaha phuoc08:05
dkushwaha#topic BerlinSummit08:05
*** openstack changes topic to "BerlinSummit (Meeting topic: tacker)"08:05
dkushwahaI missed all the core members this time, as no one was at the summit08:06
joxyukihi08:07
dkushwahahope to see in next summit :)08:07
dkushwahahi joxyuki08:07
phuocme too08:08
YanXing_anwish to be there at the next summit08:08
dkushwahaI was asked a question at the summit regarding Tacker deployment on containers, but I had not tried it, so I was not able to explain it at the time. Has anyone deployed on containers?08:09
*** hadai has joined #openstack-meeting08:09
phuocusers can use kolla to deploy tacker on container08:09
dkushwahaphuoc, yea I see. have you tried?08:10
phuocnot yet08:11
dkushwahaok, I will try it some day.08:11
phuocbut I will try it soon08:11
dkushwahaphuoc, cool08:11
phuockolla seems to be good for deploying openstack08:11
dkushwahaphuoc, yea, so many attendees joined kolla project onboarding & update session this time. and room was almost packed08:12
*** tssurya has joined #openstack-meeting08:13
dkushwahaok, move on.08:13
dkushwaha#topic BPs08:13
*** openstack changes topic to "BPs (Meeting topic: tacker)"08:13
dkushwahaseems there are multiple patches on test-addition-refactoring08:14
dkushwahaThat is great08:15
dkushwahaYanXing_an, Thanks for leading this BP08:15
YanXing_anhttps://etherpad.openstack.org/p/test-addition-refactoring08:15
YanXing_anyou can see the detail plan and the status08:16
phuocthat looks good to me08:17
joxyukigreat08:17
dkushwahaYanXing_an, nice work08:17
YanXing_an:)08:18
dkushwahaYanXing_an, could you please prioritize the skipped cases? I think it will help reduce some rework08:18
dkushwahaYanXing_an, as in point 308:18
YanXing_andkushwaha, sure, these skipped cases will be reopened during point 2, and will have high priority08:20
*** lpetrut has joined #openstack-meeting08:21
dkushwahaYanXing_an, Thanks08:21
dkushwahaphuoc, do you have something to talk ?08:22
*** priteau has joined #openstack-meeting08:23
phuocdkushwaha, I plan to help force delete resources08:23
phuocI will upload some patches soon08:23
dkushwahaphuoc, sounds good.08:23
tpatilphouc: Will it delete resource from heat as well or only tacker?08:24
tpatilphuoc: Sorry to misspell your name08:24
dkushwahaphuoc, i had just submited initial draft on spec,08:24
dkushwahaphuoc, https://review.openstack.org/#/c/602528/08:25
phuocdkushwaha, I will look at it08:25
tpatilWe are also trying to fix one similar issue : https://review.openstack.org/#/c/618086/08:25
tpatilGot one comment from Yan Xing an, will address his comment soon08:26
tpatilphuoc: Please take a look at this patch and give us your feedback08:26
phuoctpatil, I saw your patch08:26
tpatilphuoc: We will upload a new PS which will cover interacting with heat to ensure all resources from VIM are deleted before deleting the VNF08:27
phuocI will add --force in tacker delete commands first08:28
phuoctpatil, and I will make it compatible with your patch too08:28
*** ralonsoh has joined #openstack-meeting08:29
tpatilphuoc: Ok, Great. Thank you08:29
phuocnp :)08:31
dkushwahatpatil, phuoc IMO, force-delete should be only for the case when a normal delete is not able to clean the resources. So even if there is an error from the backend (i.e. heat), it should move forward and clean entries from tacker08:33
phuocyes, it should cover all cases in which we cannot remove resources08:35
YanXing_andkushwaha, agree with you, force-delete should hardly ever fail to delete08:36
tpatildkushwaha: Yes, I have understood it but it's also important to delete the heat resources before actually cleaning entries from tacker. That's my main point. Otherwise, as an operator, those resources would need to be cleaned up manually08:36
*** ijw has joined #openstack-meeting08:36
dkushwahaanother thing is, we cannot control all the external behavior where the normal workflow fails. So instead of blocking on the backend delete, we can just log an error message and move forward08:37
tpatilSome indication to operator is useful. Maybe an event will do with info like stack id or whatever.08:39
phuocyes, we may log heat and mistral resources to let users delete them manually08:39
dkushwahaphuoc, +108:39
dkushwahatpatil, yes, but my point is, we might get stuck in a never-ending loop, so for a cleaner approach, just request the delete; if it fails on the backend, log an error message, and then clean from the tacker side.08:41
*** ijw has quit IRC08:41
dkushwahamoving on08:43
dkushwahajoxyuki, do you have something to talk ?08:43
joxyukinothing from me08:44
tpatildkushwaha: sounds good to me08:44
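The force-delete behaviour the participants converge on above (attempt the backend cleanup, log on failure, always remove the Tacker-side record) can be sketched as follows. This is a hypothetical illustration only, not actual Tacker code: `HeatError`, the client object, and the dict-based "database" are all stand-ins for the real orchestration client and DB layer.

```python
# Sketch of the agreed force-delete flow: backend failures are logged
# for the operator (so they can clean up heat/mistral resources
# manually) but never block removal of the Tacker-side entry.

class HeatError(Exception):
    """Stand-in for any error raised by the orchestration backend."""


def force_delete_vnf(vnf_id, heat_client, db, log):
    """Best-effort delete; returns True if the local entry is gone."""
    try:
        heat_client.delete_stack(vnf_id)
    except HeatError as exc:
        # Per the discussion: do not retry in a loop; record enough
        # detail (e.g. the stack/VNF id) for manual cleanup, move on.
        log.append("heat delete failed for %s: %s" % (vnf_id, exc))
    db.pop(vnf_id, None)  # always remove the Tacker-side entry
    return vnf_id not in db
```

The key design choice, as dkushwaha argues, is that the backend call is advisory: a heat outage degrades to a logged warning rather than a stuck DELETE state.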
bhagyashrisHi08:44
bhagyashrisAs comment given on patch https://review.openstack.org/#/c/612595/9  and discussion in the summit : ”vdu_autoheal" monitoring policy action implementation08:44
bhagyashrisshould be as per ETSI standard HealVnfRequest interface https://www.etsi.org/deliver/etsi_gs/NFV-SOL/001_099/003/02.05.01_60/gs_NFV-SOL003v020501p.pdf08:44
*** irclogbot_1 has quit IRC08:44
bhagyashrisStarted working on the same but have some queries; the detailed description is at: http://paste.openstack.org/show/735705/ and http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000172.html so I need some feedback and inputs.08:44
phuocbhagyashris, IMO in tosca.datatypes.nfv.VnfHealOperationConfiguration, there are missing actions on it08:47
joxyukibhagyashris, I will investigate and reply to your query.08:48
*** irclogbot_1 has joined #openstack-meeting08:48
dkushwahabhagyashris, I just got back to work yesterday after the summit, and unfortunately I could not look into that. I will check and respond.08:48
bhagyashrisok08:49
YanXing_anbhagyashris, what's the difference between VnfHealRequest and the alarm monitor?08:49
bhagyashrisThank you :)08:49
joxyukiphuoc, additional parameters such as action can be defined as a parameter.08:49
*** tssurya has quit IRC08:49
phuocjoxyuki, yes, we should have to define them08:50
joxyukimain oi08:50
joxyukisorry08:50
joxyukimove on08:51
dkushwaha#topic Open Discussion08:52
*** openstack changes topic to "Open Discussion (Meeting topic: tacker)"08:52
dkushwahano any update from my side08:53
YanXing_anmy team has a focus period before the end of this year, so I hope we can finish all the UT case refactoring next month; it would be very kind of you to review all the patches and give feedback, thanks.08:53
*** irclogbot_1 has quit IRC08:53
dkushwahasure YanXing_an , and thanks again for being more active on that08:54
dkushwahaDo we have something to talk now? otherwise we can close this meeting.08:55
dkushwahaok, Thanks to all Folks :)08:56
dkushwahaClosing this meeting for now08:56
*** joxyuki has left #openstack-meeting08:57
dkushwaha#endmeeting08:57
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"08:57
openstackMeeting ended Tue Nov 27 08:57:05 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)08:57
openstackMinutes:        http://eavesdrop.openstack.org/meetings/tacker/2018/tacker.2018-11-27-08.03.html08:57
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/tacker/2018/tacker.2018-11-27-08.03.txt08:57
openstackLog:            http://eavesdrop.openstack.org/meetings/tacker/2018/tacker.2018-11-27-08.03.log.html08:57
*** ijw has joined #openstack-meeting08:57
*** irclogbot_1 has joined #openstack-meeting08:57
*** tssurya has joined #openstack-meeting09:01
*** ijw has quit IRC09:01
*** tpatil has quit IRC09:02
*** rossella_s has joined #openstack-meeting09:04
*** hadai has quit IRC09:14
*** shrasool has joined #openstack-meeting09:17
*** dmacpher has joined #openstack-meeting09:19
*** YanXing_an has quit IRC09:20
*** bhagyashris has quit IRC09:44
*** dkushwaha has left #openstack-meeting09:47
*** longkb has quit IRC09:52
*** erlon has joined #openstack-meeting10:26
*** _alastor_ has joined #openstack-meeting10:37
*** _alastor_ has quit IRC10:41
*** jlvillal has joined #openstack-meeting10:58
*** LiangFang has quit IRC11:11
*** electrofelix has joined #openstack-meeting11:17
*** phuoc has left #openstack-meeting11:17
*** dosaboy has quit IRC11:24
*** dosaboy has joined #openstack-meeting11:31
*** sambetts_ is now known as sambetts|afk11:32
*** raildo has joined #openstack-meeting11:37
*** armstrong has joined #openstack-meeting11:43
*** rfolco is now known as rfolco_doctor11:45
*** janki has quit IRC11:59
*** tetsuro has quit IRC12:13
*** yamamoto has quit IRC12:16
*** yamamoto has joined #openstack-meeting12:16
*** shrasool has quit IRC12:27
*** Luzi has quit IRC12:32
*** Luzi has joined #openstack-meeting12:47
*** erlon has quit IRC12:48
*** vishalmanchanda has joined #openstack-meeting12:48
*** takamatsu has quit IRC12:58
*** erlon has joined #openstack-meeting12:59
*** aojeagarcia has joined #openstack-meeting13:09
*** aojea has quit IRC13:09
*** shrasool has joined #openstack-meeting13:14
*** takamatsu has joined #openstack-meeting13:21
*** _alastor_ has joined #openstack-meeting13:31
*** takamatsu has quit IRC13:37
*** takamatsu has joined #openstack-meeting13:43
*** jhesketh_ has joined #openstack-meeting13:44
*** whoami-rajat has quit IRC13:49
*** jhesketh has quit IRC13:50
*** armstrong has quit IRC13:53
*** davidsha has joined #openstack-meeting13:53
*** bobh has joined #openstack-meeting14:14
*** cloudrancher has quit IRC14:15
*** cloudrancher has joined #openstack-meeting14:15
*** mriedem has joined #openstack-meeting14:18
*** eharney has joined #openstack-meeting14:19
*** dustins has joined #openstack-meeting14:20
*** _alastor_ has quit IRC14:33
*** hongbin has joined #openstack-meeting14:41
*** rfolco_doctor is now known as rfolco14:53
*** awaugama has joined #openstack-meeting14:59
*** awaugama has quit IRC14:59
*** edmondsw has joined #openstack-meeting14:59
*** awaugama has joined #openstack-meeting15:01
*** ianychoi_ is now known as ianychoi15:06
*** lpetrut has quit IRC15:18
*** eharney has quit IRC15:19
*** eharney has joined #openstack-meeting15:26
*** mjturek has joined #openstack-meeting15:30
*** jawad_axd has quit IRC15:49
*** jawad_axd has joined #openstack-meeting15:50
*** Luzi has quit IRC15:51
*** dtrainor__ is now known as dtrainor15:52
*** jawad_axd has quit IRC15:55
*** ttsiouts has joined #openstack-meeting15:58
slaweq#startmeeting neutron_ci16:00
openstackMeeting started Tue Nov 27 16:00:18 2018 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
*** openstack changes topic to " (Meeting topic: neutron_ci)"16:00
slaweqhi16:00
openstackThe meeting name has been set to 'neutron_ci'16:00
bcafarelo/16:00
slaweqlet's wait a few minutes for the others16:01
*** njohnston has joined #openstack-meeting16:02
hongbino/16:02
*** dansmith has quit IRC16:02
slaweqI pinged them on openstack-neutron channel16:02
njohnstono/16:02
*** dansmith has joined #openstack-meeting16:02
*** diablo_rojo has joined #openstack-meeting16:02
*** mlavalle has joined #openstack-meeting16:03
mlavallesorry, I thought it was 1 hour from now16:03
mlavallenot used to winter time yet16:04
slaweq:)16:04
bcafarel:)16:04
slaweqok, so lets start16:04
slaweq#topic Actions from previous meetings16:04
*** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)"16:04
slaweqmlavalle to continue tracking not reachable FIP in trunk tests16:04
mlavalleyes16:04
mlavallethat entails merging https://review.openstack.org/#/c/61875016:04
*** cloudrancher has quit IRC16:04
slaweqthis was added only to not forget about it IIRC, as we first want to get my patch merged16:05
mlavalle(I just approved it)16:05
slaweqright mlavalle :)16:05
slaweqthx16:05
mlavalleand then looking at the effects for some days16:05
slaweqso I will add this action for next week too to remember it, ok?16:05
mlavalleyes16:05
slaweq#action mlavalle to continue tracking not reachable FIP in trunk tests16:05
slaweqthx16:05
*** cloudrancher has joined #openstack-meeting16:05
slaweqthat was quick :)16:05
slaweqnext one16:05
slaweqslaweq to check which experimental jobs can be removed16:05
mlavalleI was actually going to ping you....16:05
slaweqwhy?16:06
mlavalledo you have a pointer to a traceback of the failure that the patch^^ is supposed to fix16:06
mlavalle?16:06
slaweqsure16:06
*** munimeha1 has joined #openstack-meeting16:06
slaweqthere is a lot of such issues recently16:06
mlavalleI want to see if I find it in the trunk test failure16:07
slaweqe.g. http://logs.openstack.org/23/619923/2/check/neutron-tempest-dvr-ha-multinode-full/e356b9a/logs/subnode-2/screen-q-l3.txt.gz?level=ERROR16:07
slaweqand it happens in different tests, not only in trunk16:07
mlavalleexactly, that is what I was looking for16:07
slaweqit's general problem with FIP connectivity16:08
mlavallethanks16:08
slaweqyw16:08
slaweqok, so going back to not needed experimental jobs16:08
haleybslaweq: sorry, i need to help a repair man here, just assign me some tasks :)16:08
slaweqI did patch to remove some of them: https://review.openstack.org/61971916:08
slaweqhaleyb: sure - that we can do definitelly16:09
mlavallewe are all repair men here16:09
mlavallehaleyb: ^^^^16:09
slaweq#action haleyb takes all this week :D16:09
njohnstonlol16:09
slaweqmlavalle: please take a look at this patch https://review.openstack.org/619719 - it has already +2 from haleyb16:10
mlavalleslaweq: added to the pile16:10
slaweqmlavalle: thx16:10
slaweqok, moving on16:10
slaweqnext one was: slaweq to start migrating neutron CI jobs to zuul v3 syntax16:10
slaweqI opened bug for that https://bugs.launchpad.net/neutron/+bug/180484416:11
openstackLaunchpad bug 1804844 in neutron "CI jobs definitions should be migrated to Zuul v3 syntax" [Low,Confirmed] - Assigned to Slawek Kaplonski (slaweq)16:11
*** _alastor_ has joined #openstack-meeting16:11
slaweqAnd I pushed first patch for functional tests but it’s WIP now: https://review.openstack.org/#/c/619742/16:11
*** jamesmcarthur has joined #openstack-meeting16:11
njohnstonthanks for working on that16:11
slaweqso if someone wants to work on migration for some job, please feel free to do it and push patch related to this bug16:11
slaweqit is in fact a lot of patches to do but I thought that one bug to track them all would be enough16:12
mlavallegood idea16:12
slaweqok, next one16:13
slaweqnjohnston to switch neutron to use integrated-gate-py35 with grenade-py3 job instead of our neutron-grenade job16:13
slaweqnjohnston: any update on this one?16:14
njohnstonSo the grenade-py3 job is already in check and gate queue.  I am watching it for a few runs16:14
njohnstonJust for due diligence, then I'll push up a change to disable neutron-grenade.16:15
slaweqwhere is it in the gate already?16:16
slaweqI don't see it16:16
njohnstonit is inherited from one of the templates we include16:16
njohnstonbut if you look at any neutron job in zuul.openstack.org you'll see grenade-py316:17
slaweqahh, ok16:18
slaweqI see it now16:18
slaweqso we only need to remove neutron-grenade job now and we will be done with this, right?16:18
njohnstonyep!16:19
slaweqgood16:19
slaweqwill You do it this week?16:19
njohnstonI should have the change up within the hour16:19
slaweq#action njohnston to remove neutron-grenade job from neutron's CI queues16:19
slaweqthx njohnston16:19
njohnstonjust waiting for the job I am watching to finish16:19
slaweqok16:20
slaweqso lets move on to the next one16:20
slaweqslaweq to check bug 179847516:20
openstackbug 1798475 in neutron "Fullstack test test_ha_router_restart_agents_no_packet_lost failing" [High,Confirmed] https://launchpad.net/bugs/179847516:20
slaweqI sent patch to store all journal logs in fullstack results: https://review.openstack.org/#/c/619935/16:20
slaweqI hope this will help to debug this issue as we will be able to see what is keepalived doing then.16:20
mlavalleI'll review it today16:20
slaweqin the future, when jobs are migrated to the zuulv3 format, I think this can be added as a role applied to all jobs, as it can be helpful for keepalived or dnsmasq logs16:21
njohnstonit's a great idea regardless16:21
mlavalleyeap16:21
slaweqbut for now I want it only in fullstack job as first step16:21
*** artom has quit IRC16:22
slaweq#action slaweq to continue debugging bug 1798475 when journal log will be available in fullstack tests16:22
openstackbug 1798475 in neutron "Fullstack test test_ha_router_restart_agents_no_packet_lost failing" [High,Confirmed] https://launchpad.net/bugs/179847516:22
*** artom has joined #openstack-meeting16:22
slaweqok, lets move on16:22
*** eharney has quit IRC16:23
slaweqslaweq to check why db_migration functional tests don't have logs16:23
slaweqpatch https://review.openstack.org/61926616:23
*** armax has quit IRC16:23
slaweqit's merged already16:23
slaweqso now we should have logs from all functional tests in job results16:23
slaweqnext one was:16:24
slaweqnjohnston to remove neutron-fullstack-python36 from grafana dashboard16:24
njohnstonOne side note on the removal of the neutron-grenade job; that job is actually in the check and gate queue for the grenade project so I'll push a change in grenade to remove those first, and use a Depends-On to make sure that goes through before the neutron change16:24
njohnstonRegarding neutron-fullstack-python36, I remember adding it, but when I went to project-config I could find no reference to it.  So that is a no-op.16:25
slaweqahh, that's good16:26
slaweqso it's done :)16:26
slaweqthx njohnston for checking it16:26
slaweqok, so that was all actions for today16:26
mlavallefwiw16:26
slaweqanything else to add or can we move on?16:27
mlavallenothing from me16:27
slaweqok, so next topic then16:27
slaweq#topic Python 316:27
*** openstack changes topic to "Python 3 (Meeting topic: neutron_ci)"16:27
slaweqnjohnston: bcafarel any updates from You?16:27
bcafarelfrom the previous week not much I think16:28
bcafarelslaweq: except someone digging into functional tests for py316:28
*** apetrich has quit IRC16:28
slaweqok, about these functional tests, it is a real problem16:29
njohnstonnothing from me because of PTO16:29
slaweqI pushed today some DNM patch to test those tests with less output: https://review.openstack.org/#/c/620271/16:29
slaweqand indeed it was better16:30
*** lpetrut has joined #openstack-meeting16:30
slaweqbut not perfect16:30
slaweqI also talked with mtreinish about it and he told me that it's know issue with stestr and too much output from tests16:30
bcafarel:/16:30
njohnston:-[16:30
slaweqso based on his comments I think the only workaround for this is to somehow make our tests produce less on stdout/stderr16:31
slaweqalso in my DNM patch I had 3 tests failing: http://logs.openstack.org/71/620271/2/check/neutron-functional/a7fd8ea/logs/testr_results.html.gz16:31
slaweqit looks for me that it's related to issue with SIGHUP16:32
slaweqso I'm not sure if we shouldn't skip those tests or mark them as unstable for now16:32
slaweqI will try this DNM patch once again, but with those 3 tests marked as unstable, to check how it goes then16:33
slaweqand we will see then16:33
slaweqif anyone has some idea how to fix/workaround this problem, that would be great16:34
slaweqpatch to switch functional tests to py3 is here: https://review.openstack.org/#/c/577383/16:34
bcafarelsounds good, we do have https://bugs.launchpad.net/neutron/+bug/1780139 open for the SIGHUP issue16:34
openstackLaunchpad bug 1780139 in neutron "Sending SIGHUP to neutron-server process causes it to hang" [Undecided,Triaged] - Assigned to Bernard Cafarelli (bcafarel)16:34
slaweqso thats all from me about py316:36
slaweqnjohnston: do You know how many other jobs we still should switch to py3?16:36
bcafarelslaweq: maybe worth going through https://bugs.launchpad.net/cinder/+bug/1728640 and see if we can grab some ideas, like this "Make test logging setup fixture disable future setup"16:37
openstackLaunchpad bug 1728640 in Cinder "py35 unit test subunit.parser failures" [Critical,Fix released] - Assigned to Sean McGinnis (sean-mcginnis)16:37
slaweqyes, that is a very similar issue to what we have with functional tests now :)16:38
slaweqI will check that this week16:38
njohnstonI believe the multinode grenade jobs still need to be switched, at a minimum; grenade-py3 does not relieve us of those sadly16:38
njohnstonI'll have to check the etherpad16:38
slaweq#action slaweq to continue fixing funtional-py3 tests16:38
slaweqok, thx njohnston16:39
njohnston#action njohnston to research py3 conversion for neutron grenade multinode jobs16:39
slaweqI will also check neutron-tempest-plugin jobs then16:39
slaweq#action slaweq to convert neutron-tempest-plugin jobs to py316:39
slaweqok, can we go on to the next topic then?16:40
mlavalleI think so16:40
njohnstongo ahead16:40
slaweq#topic Grafana16:40
*** openstack changes topic to "Grafana (Meeting topic: neutron_ci)"16:40
slaweq#link http://grafana.openstack.org/dashboard/db/neutron-failure-rate16:40
*** tpsilva has joined #openstack-meeting16:41
slaweqgate queue wasn't busy last week as there were not too many people with +2 power available :)16:41
*** shrasool has quit IRC16:42
mlavalleyeap16:42
slaweqWe have neutron-tempest-dvr-ha-multinode-full and neutron-tempest-plugin-dvr-multinode-scenario failing at 100% again16:42
*** apetrich has joined #openstack-meeting16:42
slaweqbut from what I was checking it's very often this issue with snat namespace, which should be fixed by https://review.openstack.org/#/c/618750/16:43
slaweqso we should be better next week I hope16:43
slaweqFrom other things, I spotted again couple of issues with cinder backups, like:16:43
slaweqhttp://logs.openstack.org/64/617364/19/check/tempest-slow/18519dc/testr_results.html.gz16:44
mlavalleyeah, let's track the effect of that16:44
slaweqhttp://logs.openstack.org/87/609587/11/check/tempest-multinode-full/2a5c5a1/testr_results.html.gz16:44
slaweqI will report this as a cinder bug today16:44
mlavalleslaweq: and I know I have an email from you with cinder failures16:44
mlavalleI will talk to Jay and Sean this week16:45
slaweqfrom other things, we still have occasional failures in functional tests (db-migrations timeout) and fullstack tests (mostly this issue with keepalived) and I'm trying to find out what is going on with both of them16:45
slaweqthx mlavalle :)16:45
slaweqone more thing related to grafana16:46
slaweqWe should add to grafana 2 new jobs:16:46
slaweqnetworking-ovn-tempest-dsvm-ovs-release16:46
slaweqtempest-slow16:46
slaweqany volunteer for that? :)16:46
njohnstonsure16:46
slaweqthx njohnston :)16:46
njohnston#action njohnston add tempest-slow and networking-ovn-tempest-dsvm-ovs-release to grafana16:47
slaweqok, lets move on then16:47
slaweq#topic Tempest/Scenario16:47
*** openstack changes topic to "Tempest/Scenario (Meeting topic: neutron_ci)"16:47
slaweqI today found out that we have job neutron-tempest-dvr in our queue16:47
slaweqand it looks like it is a single-node dvr job16:48
slaweqis it intentional? do we want to keep it like that?16:48
slaweqIt looks the same as neutron-tempest-dvr-ha-multinode-full job in fact16:48
njohnstonISTR some discussion about this a long time ago, like in the newton timeframe16:48
slaweqonly difference is that this multinode job is non-voting16:48
*** jamesmcarthur has quit IRC16:48
njohnstonI think the goal was for the multinode job to end up being the voting one16:49
mlavalleyes, I think I have the same recollection16:49
*** jamesmcarthur has joined #openstack-meeting16:49
mlavallewe can discuss in the L3 meeting16:49
njohnston+116:49
slaweqnjohnston: that is not possible to have multinode job voting now ;)16:49
slaweqok, mlavalle please then add this to L3 meeting agenda if You can16:50
hongbinis the multinode job stable enough?16:50
mlavalleyes16:50
mlavallehongbin: not even close16:50
slaweq#action mlavalle to discuss about neutron-tempest-dvr job in L3 meeting16:50
slaweqhongbin: it depends what You mean by stable16:50
slaweqit's very stable now as it is on 100% of failures all the time :P16:51
hongbinslaweq: if it doesn't block the merging too much after turning into voting, then it is fine16:51
slaweqhongbin: it will block everything currently but I agree that we should focus on stabilizing it16:52
slaweqand we have been working on it for some time16:52
hongbinack16:52
slaweqok, lets move on then16:53
slaweq#topic Periodic16:53
*** openstack changes topic to "Periodic (Meeting topic: neutron_ci)"16:53
slaweqI just want to mention that we still have neutron-tempest-postgres-full failing all the time16:53
slaweqbut it's nova issue16:53
slaweqbug reported: https://bugs.launchpad.net/nova/+bug/180427116:53
openstackLaunchpad bug 1804271 in OpenStack Compute (nova) "nova-api is broken in postgresql jobs" [High,In progress] - Assigned to Matt Riedemann (mriedem)16:53
slaweqFix in progress: https://review.openstack.org/#/c/619061/16:53
slaweqso we should be good once this is merged16:53
mriedemslaweq: here is a tip,16:54
mriedemshow up in the nova channel and ask that another core look at that already +2ed fix for the postgres job16:54
mriedemi would, but i've already spent some review request karma today16:54
slaweqmriedem: ok, I will :)16:54
slaweqthx16:54
slaweqlast topic then16:55
slaweq#topic Open discussion16:55
*** openstack changes topic to "Open discussion (Meeting topic: neutron_ci)"16:55
slaweqanyone wants to discuss about anything?16:55
hongbini have one16:56
slaweqgo on hongbin16:56
hongbini don't like the long list of extensions in zuul job, so i propose a patch: https://review.openstack.org/#/c/619642/16:56
hongbini want to know if this is what you guys prefer to do?16:56
hongbinor it is not a good idea16:57
slaweqyes, IMO it is easier to read in a diff16:57
bcafarelit certainly better fits the screen16:57
njohnstonWould it be possible to use reusable snippets like we do with *tempest-irrelevant-files now?16:58
*** vishalmanchanda has quit IRC16:58
hongbinyes, it possibly will fix the frequent merge conflicts between patches16:58
slaweqhongbin: njohnston: great ideas16:58
*** mordred has joined #openstack-meeting16:58
hongbinnjohnston: i am not sure, because the list of extensions looks different between jobs16:58
slaweqhongbin: not all jobs16:59
slaweqYou can define snippet "per branch" and reuse them if necessary16:59
slaweqat least for master branch it should be fine16:59
*** gyee has joined #openstack-meeting16:59
hongbinyes, we can possibly consolidate the stable branch list16:59
hongbini will look into that16:59
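For context on the "reusable snippets" idea: zuul job definitions are plain YAML, so the mechanism would be YAML anchors and aliases, the same one *tempest-irrelevant-files already uses. A hedged sketch (job names, variable names, and extension entries below are illustrative, not the real neutron job definitions):

```yaml
# Illustrative only: the anchor (&) defines the extension list once and
# the alias (*) reuses it in other jobs; names here are hypothetical.
- job:
    name: example-tempest-full
    vars:
      network_api_extensions: &api_extensions
        - address-scope
        - agent
        - allowed-address-pairs

- job:
    name: example-tempest-dvr
    vars:
      network_api_extensions: *api_extensions
```

One caveat: anchors resolve only within a single YAML file, so each stable branch's job file would need its own anchor, which lines up with defining the snippet "per branch".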
slaweqok, we have to finish now17:00
slaweqthx for attending17:00
slaweq#endmeeting17:00
njohnstonthanks all17:00
slaweqo/17:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"17:00
openstackMeeting ended Tue Nov 27 17:00:23 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/neutron_ci/2018/neutron_ci.2018-11-27-16.00.html17:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2018/neutron_ci.2018-11-27-16.00.txt17:00
openstackLog:            http://eavesdrop.openstack.org/meetings/neutron_ci/2018/neutron_ci.2018-11-27-16.00.log.html17:00
bcafarelo/17:00
mlavalleo/17:01
*** lpetrut has quit IRC17:01
*** ttsiouts has quit IRC17:06
*** ttsiouts has joined #openstack-meeting17:07
*** lpetrut has joined #openstack-meeting17:10
*** jawad_axd has joined #openstack-meeting17:11
*** ttsiouts has quit IRC17:11
*** jawad_axd has quit IRC17:15
*** diablo_rojo has quit IRC17:19
*** psachin has quit IRC17:21
*** apetrich has quit IRC17:30
*** dmacpher has quit IRC17:32
*** mlavalle has left #openstack-meeting17:32
*** zaneb has quit IRC17:34
*** diablo_rojo has joined #openstack-meeting17:35
*** davidsha has quit IRC17:37
*** apetrich has joined #openstack-meeting17:50
*** imacdonn has quit IRC17:54
*** imacdonn has joined #openstack-meeting17:55
*** ijw has joined #openstack-meeting17:56
*** jawad_axd has joined #openstack-meeting17:57
*** yamahata has quit IRC17:58
*** iyamahat has quit IRC17:58
*** zaneb has joined #openstack-meeting18:01
*** ijw has quit IRC18:02
*** bnemec has quit IRC18:06
*** bnemec has joined #openstack-meeting18:06
*** sridharg has quit IRC18:09
*** kopecmartin is now known as kopecmartin|off18:16
*** gary_perkins has quit IRC18:17
*** iyamahat has joined #openstack-meeting18:18
*** gary_perkins has joined #openstack-meeting18:18
*** bobh has quit IRC18:32
*** lpetrut has quit IRC18:33
*** armax has joined #openstack-meeting18:37
*** yamahata has joined #openstack-meeting18:37
*** zbitter has joined #openstack-meeting18:49
*** tssurya has quit IRC18:50
*** zaneb has quit IRC18:52
*** shrasool has joined #openstack-meeting18:53
*** Linkid has quit IRC18:54
*** eharney has joined #openstack-meeting18:54
*** Shrews has joined #openstack-meeting18:55
*** dmsimard has joined #openstack-meeting18:59
clarkbanyone here for the infra meeting?19:00
corvusye19:00
ianwo/19:00
dmsimard\o19:00
fricklero/19:00
Shrewshola19:00
clarkb#startmeeting infra19:01
openstackMeeting started Tue Nov 27 19:01:18 2018 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.19:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:01
*** openstack changes topic to " (Meeting topic: infra)"19:01
openstackThe meeting name has been set to 'infra'19:01
rfolcoo/19:01
clarkb#link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting19:01
clarkb#topic Announcements19:01
*** openstack changes topic to "Announcements (Meeting topic: infra)"19:01
gary_perkinsHi o/19:01
clarkbThe only announcement I have is that openstack-discuss is replacing a bunch of openstack mailing lists that will be retired in 6 days. Please subscribe if you haven't already19:02
clarkbfungi: ^ anything else to add to that?19:02
*** zbitter is now known as zaneb19:02
clarkbprobably worth mentioning here as well that I've picked up some sort of infant/toddler plague over the weekend so may not be 100% the next few days19:03
clarkb#topic Actions from last meeting19:04
*** openstack changes topic to "Actions from last meeting (Meeting topic: infra)"19:04
clarkb#link http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-11-20-19.01.txt minutes from last meeting19:04
clarkbit was a slow week due to the US holiday so not much there. However I did want to double check on the pypi cleanup19:05
clarkbianw: ^ any update on that or help that can be provided?19:05
*** jawad_axd has quit IRC19:05
ianwi got the reviews on the removal of the symlink from the mirrors yesterday, and just left it to make sure there wasn't any issues popping up19:06
ianwwhich i'm guessing no from scrollback :)  so i'll clear it out probably today19:06
clarkbgreat, then fallout from that is fedora 29 mirroring?19:07
*** ralonsoh has quit IRC19:07
ianwfedora29 we merged our client-side fix to ignore the upstream permissions19:07
ianwhowever i'd like some eyes on https://review.openstack.org/#/c/618416/ to remove f27 which will also free up space19:08
clarkb#link https://review.openstack.org/#/c/618416/ cleanup f27 mirrors to free up afs disk space19:08
ianwout of an abundance of caution i depended that on https://review.openstack.org/#/c/614375/ to remove f27 from nodepool too19:09
fungiwhoops! i got distracted replying to someone's e-mail, sorry. no nothing to add on the ml merger19:09
clarkb#topic Specs approval19:09
*** openstack changes topic to "Specs approval (Meeting topic: infra)"19:10
clarkbI don't think there are specs ready for approval but did want to call out a couple specs that could use your review/attention19:10
clarkb#link https://review.openstack.org/#/c/607377/ Storyboard attachments spec19:10
Shrewsianw: i +A'd 61437519:10
*** jlvillal has left #openstack-meeting19:11
clarkb#link https://review.openstack.org/#/c/581214/ Anomaly detection in CI jobs. This one is still marked as a draft but tristanC and dirk demoed their work in berlin and its really neat (and I think very useful) stuff19:11
clarkbif you've got a moment to review those it would be very much appreciated19:11
clarkb#topic Priority Efforts19:11
*** openstack changes topic to "Priority Efforts (Meeting topic: infra)"19:11
clarkb#topic Storyboard19:11
*** openstack changes topic to "Storyboard (Meeting topic: infra)"19:11
clarkbOther than the attachments spec is there other work or items that the infra team should be aware of here?19:12
diablo_rojoLooks like I have a few comments to address on the spec, but if anyone else wants to pile on before I update it, feel free!19:13
mordredo/19:13
diablo_rojoAt the summit we had talked about moving the db to a different node?19:13
* diablo_rojo racks brain for other things..19:13
clarkbthe db is currently a trove instance right? are we talking about a new trove instance or a self managed database server?19:14
fungiyes19:15
fungiwe were talking about possibly moving to a local mysql server on the sb instance19:15
fungito reduce round-trip latency on queries19:15
clarkbgotcha19:15
*** jawad_axd has joined #openstack-meeting19:16
fungithough some sort of query caching might also bring similar performance benefits19:16
*** tobiash has joined #openstack-meeting19:16
mordredwell, having the server locally would make it easier to diagnose what's going on too19:16
fungiultimately, i expect we just need someone with competence in the field to help perform some query profiling and help make them more efficient19:16
clarkb#info Storyboard could use someone with mysql/database skillz to profile queries and help make them more efficient19:17
diablo_rojoyes plz19:17
fungii'll set up a modest shrine in my hallway dedicated to their worship19:17
*** erlon has quit IRC19:17
diablo_rojoI will put their picture on my wall next to my top monitor19:18
clarkbanything else storyboard before we move to the next item?19:18
corvusdid we ever get past the testing issue?19:18
corvusi tried to pitch in, but never got any help on this: https://review.openstack.org/55310219:18
fungino, we were hoping to get some intern/mentee vigor on rewriting the test framework to be maintainable i think19:19
diablo_rojoI have not had time to dig in yet :/19:19
clarkb#link https://review.openstack.org/#/c/553102/ corvus attempt to improve testing of storyboard subscribers. Could use help19:19
* diablo_rojo opens corvus's link19:19
corvusi think it would be a lot easier (read: "possible") to make database improvements to storyboard if the subsystem were testable.  but i can't make heads or tails of it.  :(19:19
*** zaneb has quit IRC19:20
diablo_rojocorvus, I will add it to my todolist.19:20
diablo_rojoI'll see if I can get some SotK eyes on it too19:20
clarkbdiablo_rojo: thanks19:21
corvusi feel pretty sure i would be able to contribute bite-sized improvements, but i don't think i have time for a subsystem overhaul :(19:22
clarkb#topic Update Config Management19:23
diablo_rojocorvus, I can say I greatly appreciate anything and everything you do for StoryBoard no matter how big or small :)19:23
*** openstack changes topic to "Update Config Management (Meeting topic: infra)"19:23
clarkb(sorry for moving on but seems like we have a plan forward on last item and we have other agenda items to cover)19:23
corvusdiablo_rojo: :)19:23
diablo_rojoclarkb, no worries19:24
corvusyeah, i don't mean to derail, just thought i'd mention it because it may facilitate new contributions in the db area19:24
clarkb++19:24
*** electrofelix has quit IRC19:25
clarkbok updating config management19:26
clarkbI think the bulk of the work happening here now is with ianw's graphite docker chain19:26
* mordred needs to read it19:26
clarkblast week I had become concerned that we had an iptables problem, but mordred reminded me that we plan not to use a network namespace19:26
corvusdoes that affect https://review.openstack.org/605585 ?19:27
clarkbcorvus: it shouldn't the network namespace disablement is a container creation argument/option aiui19:27
clarkbcorvus: getting a docker in place should be the same either way19:27
mordredwell - I think we might not want to disable the iptables tests though19:28
corvusright, but my -1 on that was mostly because ^19:28
clarkbmordred: corvus yes the -1 is still valid. I think we should assert that our rules end up in place as expected19:28
mordred++19:28
corvusi guess my question is, does the new thinking on namespaces mean that most of that iptables complexity goes away and we don't really need to change anything about it?19:28
mordredthat way we can also verify that non-network-namepsaced containers work properly and the iptables stuff doesn't go to the bad place19:28
mordredcorvus: yah. I think so19:29
clarkbsort of, docker will still try to manage the iptables rules19:29
clarkbthe base set happen from the daemon starting19:29
clarkbhowever it shouldn't create any container specific rules because we use the host namespace19:29
clarkb(so our testing will have to accommodate that those rules may exist)19:29
mordredagree. we still want the tests to test that docker isn't breaking what we're doing with iptables19:29
corvusok, so we still need some iptables changes, but we should be able to modify our tests to accommodate that19:30
corvus(rather than drop them)19:30
mordredyah19:30
clarkbyes to our test accommodating that. I'm not sure we need to change anything in how we do iptables?19:30
mordredwell, we may need some iptables changes - we may not19:30
mordredwhat clarkb said19:30
corvussorry, i guess i meant iptables-testing-related changes19:30
*** zaneb has joined #openstack-meeting19:31
corvusnot changes to our base iptables roles19:31
mordredwe _shouldn't_ need to care about docker iptables - but should is dangerous :)19:31
clarkbthe important thing here is our tests can ensure that docker doesn't suddenly stop being nice to our rules19:31
clarkband if that changes we should notice via the testing19:31
clarkbso ++ to testing it19:31
mordred++19:31
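To make the plan above concrete: with docker's `--network=host` flag (a real flag) the container shares the host's network namespace, so docker creates no per-container NAT/forwarding rules for it, leaving only the base rules from the daemon start. A minimal sketch; the helper function and image name are hypothetical, not part of the actual config management changes:

```python
# Hypothetical helper sketching the container start discussed above.
# `--network=host` is a real docker flag; the image name is made up.
def docker_run_cmd(image, host_network=True):
    cmd = ["docker", "run", "-d"]
    if host_network:
        # Share the host network namespace: docker then skips the
        # per-container iptables rules it would otherwise create.
        cmd.append("--network=host")
    cmd.append(image)
    return cmd

print(" ".join(docker_run_cmd("example/graphite")))
```

The tests mentioned above would then assert that the host's own rules survive alongside whatever base rules the docker daemon installs.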
clarkbas far as other work I think cmurphy has a stack at topic:puppet-4 that could use second reviewers (I have reviewed a bunch last week?)19:32
clarkband the other standout item for this topic is zuul doing CD. I think my changes to add a zuul cd user to bridge are still outstanding if we want to pick that back up again19:32
mordredI feel like there was something else blocking zuul cd ... but that was just the per-project key we landed ages ago perhaps?19:34
clarkbmordred: ya I think that was the major blocker and now it's getting ssh into bridge working (which is what my changes will allow for)19:35
mordredsweet19:36
clarkbmordred: please do go review ianw's stack though I think it gets some major chunks of the docker story into place for us19:36
clarkb#topic General Topics19:37
*** openstack changes topic to "General Topics (Meeting topic: infra)"19:37
clarkbianw had some good comments on how a weekly standing meeting has been useful for cross timezone coordination19:37
corvusi think this is a good use of time19:38
clarkbI proposed a slight modification of my proposal from last week where instead of cancelling the meeting if there are no agenda updates we can cover the standing topics (our priority items mostly)19:38
ianw++19:38
clarkbbut retain the 24 hour agenda update so that people can selectively skip the meeting if that helps them get dinner or sleep19:38
clarkbI think this gives us a good balance between making it easier for people to prepare and possibly skip meetings while still having the sync up regularly19:39
fungii agree19:39
mordred++19:39
*** cloudrancher has quit IRC19:39
clarkbGoing forward I'm going to try and be better about sending out a 24 hour pre notice with the agenda copied in19:40
*** cloudrancher has joined #openstack-meeting19:40
clarkbthen its clear to anyone following the list that 1) there is a meeting happening and 2) this is what is going to occur19:40
clarkbbut maybe we'll find that is just noise. Willing to try it though19:40
*** cloudrancher has quit IRC19:40
ianwthat sounds good; i think that as i mentioned a bit more structure with dates in the wiki page might help too19:41
*** cloudrancher has joined #openstack-meeting19:41
clarkbianw: ++ and I think it was you that suggested cleaning up the agenda as part of the meeting? I worry we won't have time for that but I can take it as a chair item to use the 10 minutes after a meeting to do that19:41
ianweither or, it can just get a bit confusing as to what's been discussed and what's upcoming on the wiki19:42
clarkbthat is a good point19:42
clarkbnext is a holdover from last week but I kept it because still relevant today. We have good news on the opendev.org DNS situation. Our dns servers are now the dns servers for this domain19:43
*** b1airo has joined #openstack-meeting19:43
clarkbWe are waiting on them to create the DS record so that dnssec is happy but finally jamesmcarthur got through to the registrar :)19:43
corvusnever make jamesmcarthur mad19:44
jamesmcarthurlol19:44
clarkbcorvus: ^ assuming dnssec is working in the next day as we've been told is the next step there to migrate zuul-ci.org to the new servers?19:44
fungiyou won't like him when he's angry! ;)19:44
jamesmcarthurit takes a lot, but eventually i get pushed over the edge19:44
fungialso, he needs to stop ripping his shirts19:44
corvusclarkb: maybe set up a static website first, exercise it hosting opendev.org a bit, then move zuul-ci.org over i think19:45
clarkbthat seems reasonable. And after that clean up the old dns servers.19:45
corvusyep19:45
clarkband from there we can start tackling the actual migrationy bits of this process19:45
fungiyes, i think the opendev-website repo is likely the next one on the docket after zone-opendev.org19:45
clarkb#info opendev-website next after zone-opendev.org and associated DNS registrar situation is sorted19:46
clarkb#topic Open Discussion19:47
*** openstack changes topic to "Open Discussion (Meeting topic: infra)"19:47
clarkbWe've got a bit of time (not surprising given the very short week last week for some of us). Feel free to bring up other topics.19:47
ianwsince we were talking about fedora29, there's a surprisingly large stack of stuff related in https://review.openstack.org/#/q/topic:fedora29+(status:open+OR+status:merged)19:47
clarkbOne I've got is should we remove the open discussion item from the agenda? so that we don't sneak things into the agenda after someone has decided it is safe to sleep in?19:48
ianwmostly related to getting glean plugged in with networkmanager on redhat rpm platforms19:48
ianwreviews welcome, but ci looking good19:48
fungii'm good with dropping open discussion. that's basically 95% of the #openstack-infra channel's purpose anyway19:48
fricklerclarkb: IMO discussing things when there is time is fine, but avoid decision making?19:48
ianw++, it's usually mostly an ad-hoc call for reviews?  (e.g. mine :)19:49
clarkbfrickler: works for me19:50
fungii wouldn't mind seeing topical entries for the "standing" items on the agenda too, but wouldn't want to make a lot of extra work for the chair19:50
clarkbianw: the comments at https://review.openstack.org/#/c/618964/7..11/glean/init/glean-nm%2540.service have me slightly confused. My local xenial install doesn't have an /etc/sysconfig/network19:52
clarkbthat said glean-nm should only be used by red hatty things in the current setup so this is fine I think19:52
clarkbfungi: I think if the chair or others have time to add topical items to the standing items that would be much appreciated19:53
clarkbbut I also don't think it is necessary particularly since that is our sanity check catch up period19:53
ianwclarkb: /etc/sysconfig/network/ifcfg-* is what glean will write out on Xenial systems19:54
*** aojeagarcia has quit IRC19:55
clarkbianw: I think that is for suse?19:55
clarkband debuntu writes to /etc/networking/interfaces19:55
ianwoh, hrm, yes maybe.  what's the xenial path then?19:56
fungiit's /etc/network/interfaces(.d/something)19:56
clarkbya the per interface stuff goes into interfaces.d/something19:56
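Summarizing the paths just mentioned as a hypothetical lookup (the suse and debian entries are the paths stated in the discussion; the redhat entry is an assumption based on the usual ifcfg convention; glean's actual code is organized differently):

```python
# Hypothetical summary of the per-distro interface-config locations
# discussed above; not glean's actual implementation.
INTERFACE_CONFIG_DIRS = {
    "suse": "/etc/sysconfig/network",           # ifcfg-* files, per the log
    "debian": "/etc/network/interfaces.d",      # Debian/Ubuntu (e.g. xenial)
    "redhat": "/etc/sysconfig/network-scripts", # assumption: usual RH ifcfg dir
}

def interface_config_dir(distro_family):
    """Return where per-interface network config files would be written."""
    return INTERFACE_CONFIG_DIRS[distro_family]
```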
ianwyeah, sorry i mean do we not have a xenial service file?19:56
fungiifupdown?19:57
ianwin the install, we only do "if os.path.exists('/usr/lib/systemd/system')" then install the service ...19:58
fungiahh, networking19:58
clarkbianw: I think that condition is broken on xenial so glean will run each time?19:58
fungiis /etc/init/networking.conf what you mean by "service file"?19:58
clarkbI think the systemd unit file that glean installs19:59
ianwclarkb: hrm, yes maybe xenial is falling through ... let's follow up in infra ...19:59
clarkbianw: see you there19:59
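The suspicion above, that a single `os.path.exists('/usr/lib/systemd/system')` test falls through on xenial, comes down to Debian/Ubuntu shipping unit files under /lib/systemd/system instead. A hedged sketch of a broader check (the candidate list and the `root` parameter are illustrative, not glean's actual code):

```python
import os

# Candidate unit directories: /usr/lib/... on Red Hat/SUSE style systems,
# /lib/... on Debian/Ubuntu (e.g. xenial). Illustrative, not glean's code.
SYSTEMD_UNIT_DIRS = ("usr/lib/systemd/system", "lib/systemd/system")

def systemd_unit_dir(root="/"):
    """Return the first existing systemd unit dir under root, else None."""
    for rel in SYSTEMD_UNIT_DIRS:
        candidate = os.path.join(root, rel)
        if os.path.isdir(candidate):
            return candidate
    return None
```

Checking both locations (or following the /usr/lib -> /lib layering systemd itself documents) would keep xenial from falling through the condition.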
clarkband with that we are out of time19:59
clarkbthank you everyone19:59
fungithanks clarkb!19:59
clarkb#endmeeting19:59
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"19:59
openstackMeeting ended Tue Nov 27 19:59:43 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)19:59
openstackMinutes:        http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-11-27-19.01.html19:59
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-11-27-19.01.txt19:59
openstackLog:            http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-11-27-19.01.log.html19:59
*** cloudrancher has quit IRC20:00
*** cloudrancher has joined #openstack-meeting20:00
*** _hemna has quit IRC20:04
*** dustins has quit IRC20:06
*** jamesmcarthur has quit IRC20:13
*** tobiash has left #openstack-meeting20:19
*** awaugama has quit IRC20:21
*** Shrews has left #openstack-meeting20:26
*** imacdonn has quit IRC20:37
*** imacdonn has joined #openstack-meeting20:37
*** slaweq_ has joined #openstack-meeting20:44
*** eharney has quit IRC20:45
*** oneswig has joined #openstack-meeting20:55
*** rossella_s has quit IRC20:57
*** jawad_axd has quit IRC20:58
oneswigb1airo: would you like to do the honours?20:59
oneswigyou there?21:00
oneswig#startmeeting scientific-sig21:00
openstackMeeting started Tue Nov 27 21:00:41 2018 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: scientific-sig)"21:00
openstackThe meeting name has been set to 'scientific_sig'21:00
oneswigyou snooze you lose!21:00
*** trandles has joined #openstack-meeting21:00
oneswig#link Agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_November_27th_201821:01
b1airohowdy21:01
oneswigGreetings Americans, I trust Thanksgiving was good21:01
oneswighey b1airo, g'day21:01
trandlesomnomnom21:01
oneswig#chair b1airo21:01
openstackCurrent chairs: b1airo oneswig21:01
* b1airo has the power!21:01
*** janders has joined #openstack-meeting21:02
jandersg'day everyone21:02
oneswigI'm on a train, wifi's likely to get interesting as we move21:02
b1airohi janders21:02
oneswigg'day janders21:02
oneswig#topic SC roundup21:02
*** openstack changes topic to "SC roundup (Meeting topic: scientific-sig)"21:02
jandershey Blair - how was SC?21:02
oneswigHow indeed?21:02
b1airogood. that enough...? :-P21:03
*** slaweq_ has quit IRC21:03
oneswigHow did Wojtek break his leg, can you share that?21:03
b1airothe container sessions outnumbered the cloud sessions this year21:03
janderssounds like Berlin :)21:04
b1airoi can share a video of him in emergency. but actually the incident was pretty lame, he must just be getting old21:04
jandersIs SC changing name anytime soon?21:04
oneswigAnything on Charliecloud trandles?21:04
b1airoone particular thing on the container front that i found interesting was an understanding of Docker's strategy in the HPC space...21:05
trandlesNothing interesting on Charliecloud.  Just more progress going into supporting what seems like an increasing number of MPI implementations and interconnects21:05
trandlesb1airo: you have an understanding of docker's HPC strategy?  I don't and I've been talking to them for the past year. :(21:06
trandlesalthough, I get different strategies from different people within docker21:06
oneswigDocker's strategy right now is to marvel at that bright light at the end of the tunnel, it seems21:06
oneswig... and wonder what the chuffing noise might be...21:07
trandlesoneswig: that's my impression as well21:07
jandersI hope the strategy is not "we don't care and keep going our own way, the HPC folks will figure it out"21:07
b1airobasically, because everyone in research and big HPC wants stuff for free, Docker Inc can't commit engineering to related feature development. So Christian's strategy is to go after ML workloads in enterprise, supporting GPU, then MPI, and presto - HPC is supported.21:07
b1airothat's the simplistic version21:08
jandersit's a bit like RHAT&mlnx, HPC, SRIOV and Telco21:08
trandleslast I heard from Christian is "we'll replace [slurm|pbs|torque|etc] with docker swarm, voila!"21:08
oneswigMPI support that I've seen (kubeflow's mpi-operator) isn't going to scale - but that doesn't mean it will always be like that21:08
b1airothere is also some uid/gid mapping stuff they have in the works for FS namespaces, i guess that will be a "within this container, squash all I/O to this volume to these IDs"21:09
trandlesWell, LANL's position is that money, power, and cooling aren't going to scale much further so we might as well tackle trying to get better than 1% efficiency from our platforms21:10
oneswigThat has bitten us21:10
oneswig1%? Sounds like my household finances.21:10
b1airotrandles: lol, that'll never catch on21:10
trandleswe're seeing something around 3-6% efficiency with our new ARM systems21:10
jandershave you guys heard anything interesting from the other side ( [slurm|pbs|torque|etc] )?21:11
b1airodidn't you hear Trump, he doesn't believe climate change will have any economic impacts, and besides, it was cold last weekend21:11
jandersare they looking at running their own components containerised - and consuming k8s resources?21:11
trandlesI had lunch == no one is hungry any more!21:11
jandersb1airo: damn right! It's bloody cold here even though we're almost in December.. burn more coal, quickly!21:12
b1airotrandles: more seriously though, what is your definition of efficiency for these purposes?21:12
trandlesjanders: I haven't heard much of anything from the resource manager side RE: containers21:12
oneswigjanders: that sounds a little far fetched - apart from for univa grid engine, who have done something likethat21:12
jandersyeah I hear those vendors trying to orchestrate docker themselves - which I find misguided21:12
b1airothere was one interesting suggestion during the Q&A in one of the BoFs - that Slurm should support one of the OCI interfaces (whichever one is responsible for launching containers)21:13
jandershaving slurm run containers for HPC and k8s run containers for non-HPC within a single site sounds anything but efficient21:13
trandlesb1airo: application performance as percent of theoretical peak IIRC...looking for that now...21:13
oneswigWere there new faces for OpenStack at SC?21:13
b1airooneswig: yes, both in the audience and on the panel, but mostly that was because half of them were container peeps21:14
trandlesbigger problem is no one knows how to launch MPI applications at scale but the existing resource managers (kinda) and the MPI implementations (for now)21:14
*** raildo has quit IRC21:14
oneswigyour caveats, most beguiling...21:15
oneswigcare to elaborate?21:15
trandlesin the OpenMPI space, it sounds like before long no one will be able to launch anything...orte is dead, PMIx is almost unsupported, and mpirun is deprecated21:15
oneswigI hadn't realised that pmix is itself written like an hpc application21:15
trandlesb1airo: did you or anyone you know attend the OpenMPI BoF?  I wasn't there long enough.  Rumor had it lots of folks were showing up to make a stink about the lack of future for things like mpirun21:16
oneswigsounds awkward21:17
b1airounfortunately not trandles , sounds like i missed out though. it's possible one of the Monash folks did (will ask)...21:17
trandlesat LANL, we use slurm's support of PMI2 to use srun to wire up the job21:17
trandlesnot sure what IBM is doing RE: jsrun and lrun21:17
trandlesbut they have Spectrum-MPI anyway21:18
oneswigtrandles: that's what I tend to use for openhpc and slurm21:18
oneswig(pmi2)21:18
oneswigWe should also round up the OpenStack summit activity21:18
oneswigReady to flip over?21:19
trandlesone last thing21:19
oneswiggo for it, columbo21:19
trandleswithout details, I get the impression that some vendors are looking to implement container-launching plugins for resource managers21:19
b1airooneswig: i'm afraid my memory of the session needs jogging slightly - it was on the afternoon following my night in the hospital with Wojtek, so i was a little spacey. there was not a lot of OpenStack specific conversation though, more general higher-level cloud workload issues. a couple of the people asking questions had some very... strange... problems too (of the "i can't believe you ended up here" nature)21:20
trandlesand it's not like "have slurm run docker" it's more like "slurm job_launch plugin that makes the right syscalls to set up the namespaces"21:20
*** imacdonn has quit IRC21:20
janderstrandles: this is good to hear! :)21:21
oneswigtrandles: will all those people who bemoaned the docker daemon's root privilege now turn on slurmd, I wonder?21:21
b1airogood question oneswig21:21
trandlesslurmd already does things on behalf of the user21:21
trandlesie. the user doesn't need sudo to use slurm ;)21:21
oneswigindeed, and gets little of the flak that docker gets21:22
trandlesmaybe if docker didn't require sudo (or setuid) and direct user control it would be different21:22
b1airois that just because some of Slurm's guts are setuid ?21:22
oneswiganyway, trandles, you've got a pretty good implementation of container launch, do you think it will come in?21:23
trandlesmaybe...I certainly hope so21:23
oneswigtrandles: fair point on the direct user control21:23
oneswigOK, move on?21:24
trandlesanyway, that's all I got...Berlin?21:24
*** imacdonn has joined #openstack-meeting21:24
oneswig#topic Berlin roundup21:24
*** openstack changes topic to "Berlin roundup (Meeting topic: scientific-sig)"21:24
*** _hemna has joined #openstack-meeting21:24
oneswigjanders: we had a pretty vibrant meeting this time round, wouldn't you say?21:24
oneswigmore than half the crowd were there for the first time21:24
oneswigThe room was full, I'd guess 70 upwards21:25
jandersoneswig: I think we can definitely say that!21:25
oneswigperhaps even 10021:25
jandersit almost felt like we need a bigger room21:25
oneswigwhat was interesting was that the majority of attendees had bioinformatics use cases21:25
jandersif we weren't that far from all the other presentations we would likely get a few more people :)21:26
oneswigsome HEP, some generic university workloads, some AI/ML21:26
janderstrue, however looking back at the last five years of OpenStacking I think nearly all of my "interesting" users were from the bioinformatics domain21:26
oneswigThere's continuing interest in how to manage sensitive data, particularly from them.21:27
b1airoit's slightly depressing that so many people need a whole different infrastructure approach just to make bioinformaticians happy21:27
oneswigI don't think it's different, just different processes21:27
b1airoi could be projecting some current frustration with supporting the Genomics community down here21:28
jandersb1airo: other than the storage backend, what differences are you thinking?21:28
oneswigThere's a lot of common ground around questions like how to implement a safe haven21:28
21:28 *** martial__ has joined #openstack-meeting
21:28 *** eharney has joined #openstack-meeting
21:28 <oneswig> Somebody today made the point it's just like data in banking (or similar), really
21:29 <martial__> (sorry, computer issues)
21:29 *** yamamoto has quit IRC
21:29 <oneswig> hey martial__
21:29 <oneswig> you're growing underscores again!
21:29 <oneswig> #chair martial__
21:29 <openstack> Current chairs: b1airo martial__ oneswig
21:29 <martial__> I know, fun times :)
21:29 <oneswig> We had a visit today to our office from a team from Monash - Komathy and Jerico
21:30 <oneswig> Had a really useful discussion with them on where they are and what they need
21:30 <oneswig> They are keen to make contact with fellow travellers from the US. They've been speaking with half a dozen or so around Europe and the UK on their tour
21:31 *** tpsilva has quit IRC
21:31 <oneswig> Ah, I should update the topic
21:31 <oneswig> #topic fellow travellers for controlled-access data
21:31 *** openstack changes topic to "fellow travellers for controlled-access data (Meeting topic: scientific-sig)"
21:32 <oneswig> trandles: I imagine sensitive data handling is totally different in your domain
21:32 <trandles> oneswig: I think those folks need to make contact with Khalil and his federation efforts
21:32 <trandles> hrm, for us it's a pain in the arse
21:33 <oneswig> Does ORCA cover that kind of use case?
21:33 <oneswig> martial__: ?
21:33 <martial__> (sorry, was looking into logs of a crash)
21:34 <oneswig> you're not in the car, I hope?
21:34 <martial__> (no, computer reboot)
21:34 <oneswig> just checking :-)
21:34 <martial__> so Khalil and the ORCA people do not cover the conversation about data sensitivity
21:35 <martial__> it covers the case of access to information, and it can be either an RBAC or ACL solution
21:35 <oneswig> ah ok
21:35 <trandles> I'm not sure how they can work on federation without considering data sensitivity
21:36 <martial__> not sure if those overlap with your definition of sensitivity?
21:36 <martial__> (use case of)
21:36 <oneswig> I guess the first use case is to enable sharing, and the follow-up use case becomes how to control it
21:37 <b1airo> i went to the federation panel at SC, but to be perfectly honest it still seems very academic
21:37 <martial__> the idea is that a user's access to a subset of data is controlled by who the user is, per their access rights
21:38 <martial__> it's closer to an ACL-type solution
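[Note: the RBAC-vs-ACL distinction drawn above can be illustrated with a minimal sketch. This is illustrative Python only, not ORCA or OpenStack code; the role and user names are invented.]

```python
# Toy contrast of the two access-control models mentioned above:
# RBAC grants access to anyone holding a role; an ACL names individual
# users on each object.

def rbac_allowed(user_roles, required_role):
    """RBAC: access follows from holding a role, not from who you are."""
    return required_role in user_roles

def acl_allowed(acl, user):
    """ACL: access follows from the user being named on the object's list."""
    return user in acl

# A dataset readable by anyone with a (hypothetical) 'genomics-reader' role...
assert rbac_allowed({"genomics-reader", "member"}, "genomics-reader")
# ...versus a dataset whose ACL names specific users.
assert acl_allowed({"alice", "bob"}, "alice")
assert not acl_allowed({"alice", "bob"}, "mallory")
```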
21:38 <oneswig> b1airo: from an OpenStack perspective there are many gaps for federated users, agreed
21:38 <martial__> OpenStack has federated users (Keystone to Keystone)
21:39 <oneswig> It does indeed, but (for example) they cannot create heat stacks or use application credentials
21:39 <oneswig> a federated user is a bit ghostly
21:39 <martial__> no, indeed
21:40 <janders> oneswig: I wasn't aware of that. What are the reasons? Bugs in heat? Something else?
21:40 <martial__> the concept of the work done for ORCA is the use of the IEEE P2302 model to drive an implementation (proof of concept)
21:40 <martial__> that allows clouds to interconnect
21:40 <oneswig> It relates to trusts in Keystone, janders
21:41 <oneswig> beyond that I am not sure, but people on our team have been through the issues.
21:41 <martial__> there is an "agent" to communicate rights, roles and privileges for the users from a remote cloud to the local cloud
21:41 <martial__> (the "cloud broker", in a way)
21:41 <janders> right... one would think that a user either holds a token or doesn't - so every service works or no service works
21:41 <oneswig> martial__: ORCA is working on a PoC implementation?
21:41 <janders> but I guess that was pretty naive and the reality is much less binary
21:41 <janders> :)
21:42 <martial__> so IEEE P2302 is getting a NIST SP 500 draft in the coming month
21:42 <janders> noted! thank you, knowing this will help me stay out of trouble down the track
21:42 <oneswig> janders: my understanding is that it relates to actions performed in the user's name when they've gone home for the night
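[Note: the point about trusts above - a service acting in the user's name after their token has expired - can be sketched as a toy model. This is not the Keystone API; the class, service and action names are invented.]

```python
# Toy model of delegation: a short-lived token versus a longer-lived,
# scoped "trust" that lets a service keep acting on the user's behalf.
import time

class Token:
    """Short-lived credential held by the user."""
    def __init__(self, user, ttl):
        self.user = user
        self.expires = time.time() + ttl

    def valid(self):
        return time.time() < self.expires

class Trust:
    """Delegation from trustor to trustee, scoped to specific actions."""
    def __init__(self, trustor, trustee, allowed_actions):
        self.trustor = trustor
        self.trustee = trustee
        self.allowed_actions = set(allowed_actions)

    def authorize(self, trustee, action):
        return trustee == self.trustee and action in self.allowed_actions

# The user's token has expired overnight...
token = Token("janders", ttl=-1)
assert not token.valid()

# ...but a trust lets the orchestration service continue a scoped action
# in the user's name, without holding the user's token.
trust = Trust(trustor="janders", trustee="heat", allowed_actions={"scale"})
assert trust.authorize("heat", "scale")
assert not trust.authorize("heat", "delete")
```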
21:43 <martial__> ORCA will benefit from it, and there was a big effort at SC18 to integrate sites into the framework to help drive this PoC
21:44 <oneswig> good to hear there is some forward progress for them
21:44 <oneswig> anyway, I feel we should do what we can to gather interested parties around best practice for sensitive data.
21:44 <martial__> yes, Stig, I agree
21:44 <oneswig> Seems we are light on the ground today, but if we can tap a few regulars, perhaps
21:45 <martial__> although ORCA is "for academic use", so less of an issue of data sensitivity (we are not talking Secret, TS, ... are we?)
21:45 <oneswig> The other area of interest from the SIG session - a new one for our discussions to date - best practice for AI/ML
21:46 <trandles> martial__: it doesn't have to venture into classifications, it can be simple export control
21:46 <oneswig> I wasn't sure what demands that entails
21:46 *** pcaruana has quit IRC
21:46 <oneswig> but then I met a guy from Sweden wanting to put 8x100G networking into each GPU node...
21:47 *** priteau has quit IRC
21:47 <b1airo> sorry, sidetracked in another meeting on Zoom...
21:47 <oneswig> np, hope you're not sharing your desktop :-)
21:48 <b1airo> related to the federated roles, privileges etc. discussion above - sounds like what SciTokens might be designed to help with
21:48 <b1airo> oneswig: me too!
21:49 <oneswig> the AI use case is tangential to our cohort but becoming increasingly used at a platform level by users I'm aware of
21:49 <oneswig> we may not realise our infra is already doing this kind of work...
21:50 <oneswig> OK, I forgot the other matter arising (from Pawsey)
21:50 <oneswig> Anyone using Manila to manage Lustre? Or interested in doing so?
21:51 <oneswig> #topic manila and lustre
21:51 *** openstack changes topic to "manila and lustre (Meeting topic: scientific-sig)"
21:51 <martial__> yes, we have been looking into SciTokens indeed
21:51 <oneswig> martial__: is that you with your ORCA hat on?
21:53 <martial__> nah, the P2302. We have had a ton of conversations on the different models out there ... it needs to "interconnect" after all :)
21:54 <oneswig> There was some interest in orchestrating dynamic shares on Lustre filesystems using Manila. I'm aware of a few people interested in pursuing it.
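[Note: for readers unfamiliar with the idea, the "dynamic shares" workflow being discussed - a tenant requests a share, a backend driver carves it out of a filesystem and hands back an export location - can be sketched as a toy model. This is not the real Manila driver API; the class, filesystem and share names are invented.]

```python
# Toy model of the share lifecycle a Manila backend driver would mediate.

class ShareDriver:
    """Stand-in for a hypothetical Lustre-native share driver."""
    def __init__(self, fs_name):
        self.fs_name = fs_name
        self.shares = {}

    def create_share(self, name, proto, size_gb):
        """Provision a share and return its export location."""
        if proto not in ("LUSTRE", "NFS"):
            raise ValueError("unsupported protocol: %s" % proto)
        export = "%s:/%s" % (self.fs_name, name)
        self.shares[name] = {"proto": proto, "size": size_gb, "export": export}
        return export

    def delete_share(self, name):
        """Tear the share down again when the tenant is done with it."""
        del self.shares[name]

driver = ShareDriver("lustre01")
export = driver.create_share("scratch-janders", "LUSTRE", 100)
assert export == "lustre01:/scratch-janders"
driver.delete_share("scratch-janders")
assert "scratch-janders" not in driver.shares
```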
21:54 <oneswig> This was raised last week in the EMEA session
21:55 <janders> just from my perspective, doing it via NFS is less interesting
21:55 *** shrasool has quit IRC
21:56 <oneswig> I agree that the native transport makes most sense, given the effort of getting Lustre to that point.
21:56 <janders> I hope to look at BeeGFS as a nova/glance/cinder/manila/swift backend instead (sorry I haven't managed to get back to you on this, oneswig - chasing up some other things this week)
21:56 <oneswig> janders: np, we've all got stuff to do :-)
21:56 <janders> native Lustre orchestration could also be of strong interest to us, depending on the outcome of a tender
21:57 <janders> we have BeeGFS for sure, Lustre might or might not come into play
21:57 <oneswig> janders: may the best filesystem win :-)
21:57 <trandles> gotta run to another meeting, toodle pip
21:57 <oneswig> cheers trandles
21:57 *** trandles has quit IRC
21:57 <janders> GPFS would be another area of interest, though I feel that BeeGFS might be better going forward
21:58 <janders> oneswig: exactly! :)
21:58 <oneswig> I'm sure you're not the only one weighing these things up
21:58 <janders> indeed
21:58 <oneswig> OK, we are nearly out of time
21:58 <oneswig> final thoughts?
21:59 <janders> I'm good. Thank you all!
21:59 <oneswig> I'll mail openstack-discuss to try to gather more interest on the data
21:59 *** jmlowe has joined #openstack-meeting
21:59 <oneswig> OK y'all, thanks and goodnight
21:59 <martial__> cool, thanks :)
21:59 <oneswig> #endmeeting
21:59 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
21:59 <openstack> Meeting ended Tue Nov 27 21:59:55 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
21:59 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/scientific_sig/2018/scientific_sig.2018-11-27-21.00.html
21:59 <oneswig> until next week
22:00 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/scientific_sig/2018/scientific_sig.2018-11-27-21.00.txt
22:00 <openstack> Log:            http://eavesdrop.openstack.org/meetings/scientific_sig/2018/scientific_sig.2018-11-27-21.00.log.html
22:00 <b1airo> ah, my last messages got lost
22:00 <b1airo> we need to update the mailing list references on the wiki

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!