Tuesday, 2019-08-06

*** jamesmcarthur has quit IRC00:09
*** mattw4 has quit IRC00:12
*** diablo_rojo has joined #openstack-meeting00:12
*** jamesmcarthur has joined #openstack-meeting00:16
*** gyee has quit IRC00:19
*** diablo_rojo has quit IRC00:25
*** diablo_rojo__ has joined #openstack-meeting00:25
*** mriedem has quit IRC00:38
*** jamesmcarthur has quit IRC00:38
*** markvoelker has joined #openstack-meeting00:41
*** markvoelker has quit IRC00:51
*** diablo_rojo__ has quit IRC00:54
*** igordc has quit IRC01:00
*** yamamoto has joined #openstack-meeting01:01
*** markvoelker has joined #openstack-meeting01:17
*** yamamoto has quit IRC01:24
*** yamamoto has joined #openstack-meeting01:25
*** mhen has quit IRC01:26
*** altlogbot_2 has quit IRC01:37
*** altlogbot_3 has joined #openstack-meeting01:38
*** markvoelker has quit IRC01:38
*** tetsuro has joined #openstack-meeting01:39
*** jamesmcarthur has joined #openstack-meeting01:44
*** jamesmcarthur has quit IRC01:49
*** nitinuikey has joined #openstack-meeting01:55
*** yamamoto has quit IRC02:02
*** apetrich has quit IRC02:08
*** tetsuro has quit IRC02:12
*** ricolin has joined #openstack-meeting02:16
*** jamesmcarthur has joined #openstack-meeting02:18
*** yamamoto has joined #openstack-meeting02:28
*** slaweq has quit IRC02:35
*** markvoelker has joined #openstack-meeting02:42
*** tetsuro has joined #openstack-meeting02:44
*** nitinuikey has quit IRC02:48
*** hongbin has joined #openstack-meeting02:49
*** tetsuro_ has joined #openstack-meeting02:51
*** jamesmcarthur has quit IRC02:53
*** tetsuro has quit IRC02:53
*** jamesmcarthur has joined #openstack-meeting02:54
*** jamesmcarthur has quit IRC02:59
*** jamesmcarthur has joined #openstack-meeting03:04
*** whoami-rajat has joined #openstack-meeting03:07
*** markvoelker has quit IRC03:15
*** jamesmcarthur has quit IRC03:17
*** tetsuro_ has quit IRC03:17
*** jamesmcarthur has joined #openstack-meeting03:40
*** tetsuro has joined #openstack-meeting03:55
*** enriquetaso has quit IRC03:56
*** nitinuikey has joined #openstack-meeting04:00
*** cheng1 has quit IRC04:04
*** hongbin has quit IRC04:11
*** ircuser-1 has joined #openstack-meeting04:34
*** jamesmcarthur has quit IRC04:40
*** Lucas_Gray has joined #openstack-meeting04:41
*** ykatabam has quit IRC04:51
*** jhesketh has joined #openstack-meeting04:54
*** nitinuikey has quit IRC04:54
*** janki has joined #openstack-meeting05:04
*** Luzi has joined #openstack-meeting05:05
*** jamesmcarthur has joined #openstack-meeting05:09
*** jamesmcarthur has quit IRC05:15
*** tetsuro has quit IRC05:23
*** tetsuro has joined #openstack-meeting05:24
*** tetsuro has quit IRC05:27
*** markvoelker has joined #openstack-meeting05:28
*** tetsuro has joined #openstack-meeting05:30
*** markvoelker has quit IRC05:33
*** Lucas_Gray has quit IRC05:35
*** jchhatbar has joined #openstack-meeting05:38
*** jchhatbar has quit IRC05:41
*** janki has quit IRC05:41
*** jchhatbar has joined #openstack-meeting05:42
*** jchhatbar has quit IRC05:43
*** links has joined #openstack-meeting05:47
*** links has quit IRC06:00
*** jamesmcarthur has joined #openstack-meeting06:11
*** yamamoto has quit IRC06:12
*** jamesmcarthur has quit IRC06:16
*** yamamoto has joined #openstack-meeting06:18
*** yamamoto has quit IRC06:24
*** yamamoto has joined #openstack-meeting06:30
*** vishalmanchanda has joined #openstack-meeting06:36
*** markvoelker has joined #openstack-meeting06:36
*** belmoreira has joined #openstack-meeting06:36
*** belmoreira has quit IRC06:37
*** belmoreira has joined #openstack-meeting06:37
*** yamamoto has quit IRC06:42
*** jamesmcarthur has joined #openstack-meeting06:46
*** apetrich has joined #openstack-meeting06:47
*** yamamoto has joined #openstack-meeting06:47
*** jamesmcarthur has quit IRC06:51
*** kopecmartin|off is now known as kopecmartin06:58
*** jamesmcarthur has joined #openstack-meeting07:00
*** tssurya has joined #openstack-meeting07:04
*** jamesmcarthur has quit IRC07:05
*** hyunsik_m has joined #openstack-meeting07:06
*** slaweq has joined #openstack-meeting07:06
*** hyunsikyang__ has quit IRC07:08
*** hyunsikyang has joined #openstack-meeting07:08
*** markvoelker has quit IRC07:09
*** jbadiapa has joined #openstack-meeting07:18
*** dkushwaha has joined #openstack-meeting07:21
*** jbadiapa has quit IRC07:27
*** tesseract has joined #openstack-meeting07:30
*** keiko-k has joined #openstack-meeting07:30
*** rubasov has joined #openstack-meeting07:35
*** jamesmcarthur has joined #openstack-meeting07:39
*** ralonsoh has joined #openstack-meeting07:43
*** jamesmcarthur has quit IRC07:43
*** hyunsik_m has quit IRC07:53
*** hyunsik_m has joined #openstack-meeting07:54
*** hyunsik_m has quit IRC07:54
*** iyamahat has quit IRC07:56
*** hyunsik_m has joined #openstack-meeting07:58
*** joxyuki has joined #openstack-meeting07:58
*** nitinuikey has joined #openstack-meeting08:01
dkushwaha#startmeeting tacker08:02
openstackMeeting started Tue Aug  6 08:02:13 2019 UTC and is due to finish in 60 minutes.  The chair is dkushwaha. Information about MeetBot at http://wiki.debian.org/MeetBot.08:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.08:02
*** openstack changes topic to " (Meeting topic: tacker)"08:02
openstackThe meeting name has been set to 'tacker'08:02
dkushwaha#topic Roll Call08:02
*** openstack changes topic to "Roll Call (Meeting topic: tacker)"08:02
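The hash commands openstack lists in its banner above (#action, #agreed, #info, #link, #topic, #startvote) are what MeetBot uses to assemble the minutes linked at the end of each meeting. As a rough illustration of how such commands can be picked out of a log line — the function name and return shape here are hypothetical, not MeetBot's actual parser:

```python
import re

# Commands MeetBot advertises in its "Useful Commands" banner.
MEETBOT_COMMANDS = {"action", "agreed", "help", "info", "idea", "link", "topic", "startvote"}

def parse_command(line):
    """Return (command, argument) if the line starts with a known MeetBot
    command, else None. Illustrative helper, not MeetBot's real code."""
    match = re.match(r"#(\w+)\s*(.*)", line)
    if match and match.group(1) in MEETBOT_COMMANDS:
        return match.group(1), match.group(2).strip()
    return None

# Example: the chair's line from this log.
print(parse_command("#topic Roll Call"))  # → ('topic', 'Roll Call')
```

Lines with no leading command (ordinary chat) fall through to None and appear only in the full log, not in the minutes.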
dkushwahawho is here for Tacker weekly meeting ?08:02
nitinuikeyHi08:02
keiko-ko/08:02
*** takahashi-tsc has joined #openstack-meeting08:02
hyunsikyangHi08:02
takahashi-tscHi08:03
joxyukihi08:03
dkushwahahowdy all08:03
*** rcernin has quit IRC08:04
dkushwaha#chair joxyuki08:04
openstackCurrent chairs: dkushwaha joxyuki08:04
dkushwaha#topic BP08:04
*** openstack changes topic to "BP (Meeting topic: tacker)"08:04
*** hjwon has joined #openstack-meeting08:05
*** ociuhandu has joined #openstack-meeting08:05
*** hokeeeeun has joined #openstack-meeting08:05
dkushwahaenable updating VNF parameters08:05
dkushwaha#link https://review.opendev.org/#/c/672199/08:05
*** jaewook_oh_ has joined #openstack-meeting08:05
*** ociuhandu has quit IRC08:05
dkushwahajoxyuki, I don't have any comment on that08:06
joxyukidkushwaha, thanks for your review08:06
dkushwahajoxyuki, could you please update it, so we will merge it08:06
joxyukiok. will do it soon08:06
*** peschk_l has joined #openstack-meeting08:07
dkushwahajoxyuki, I have one thing, like how to update the image, I mean about software update things, which we need to handle, but I do not have a comment on it for now08:07
*** jamesmcarthur has joined #openstack-meeting08:08
joxyukidkushwaha, it depends on heat. As for the image, I think the target instance is re-created.08:09
joxyukiso the user has to be careful about what he is going to do.08:10
*** hyunsik_m has quit IRC08:10
*** lpetrut has joined #openstack-meeting08:10
*** keiko-k has quit IRC08:11
dkushwahajoxyuki, yes, but the cases like: 1: how to handle already attached block storages. 2: what if the user wants to just apply some patch to its OS.08:11
*** electrofelix has joined #openstack-meeting08:11
*** keiko-k has joined #openstack-meeting08:12
*** tpatil has joined #openstack-meeting08:12
*** jamesmcarthur has quit IRC08:12
dkushwahajoxyuki, so, for now I suggest supporting update parameters, and later we can work on such other cases08:12
dkushwahathoughts..08:13
joxyukidkushwaha, what do you mean other cases?08:14
*** jamesmcarthur has joined #openstack-meeting08:14
*** tetsuro has quit IRC08:14
joxyukiAre they case1 and 2 just you mentioned above?08:14
dkushwahajoxyuki, not sure about all cases, but among them the 2 I mentioned in the above comment08:15
*** hokeen has joined #openstack-meeting08:15
joxyukidkushwaha, got it.08:15
joxyukiAs for case 2, I think tacker needs to issue commands, such as apt/yum/patch, in the instance.08:17
*** ociuhandu has joined #openstack-meeting08:17
joxyukibecause heat doesn't support such use case, maybe.08:18
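The distinction joxyuki draws — a stack update that changes the image recreates the instance, while some other parameters apply in place — can be sketched as a client-side pre-check that warns the user before updating. The property set below is an assumption for illustration, not an authoritative map of Heat's update-replace rules:

```python
# Properties assumed (for this sketch only) to force instance replacement
# on a heat stack update; anything else is treated as an in-place update.
REPLACEMENT_PROPERTIES = {"image", "flavor", "key_name"}

def classify_update(changed_properties):
    """Split changed properties into those expected to recreate the
    instance and those expected to apply in place."""
    recreate = sorted(p for p in changed_properties if p in REPLACEMENT_PROPERTIES)
    in_place = sorted(p for p in changed_properties if p not in REPLACEMENT_PROPERTIES)
    return recreate, in_place

# A user updating the image should be warned the VM will be rebuilt.
print(classify_update({"image", "metadata"}))  # → (['image'], ['metadata'])
```

A real implementation would read the replacement behaviour from the resource schema rather than hard-coding it.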
*** hokeeeeun has quit IRC08:18
*** shubham_potale has joined #openstack-meeting08:18
dkushwahajoxyuki, I see.08:18
*** jamesmcarthur has quit IRC08:20
dkushwahajoxyuki, ok, so please update the spec, I will give my +2, and if there are no further comments by others, we will merge08:20
joxyukidkushwaha, yes08:21
tpatildkushwaha: I want to discuss about VNF packages support for VNF onboarding specs08:21
dkushwahatpatil, sure08:22
tpatilspecs: https://review.opendev.org/#/c/58293008:22
tpatilWe are planning to add new RPC API in tacker-conductor for processing vnf packages08:22
tpatilso I would like to ask question whether tacker-conductor service is installed on controller node or on a separate node in the production env.08:23
*** JangwonLee has joined #openstack-meeting08:23
dkushwahatpatil, it's installed on the controller08:24
tpatilGenerally for HA, tacker.service will be installed on multiple controller nodes ( 2 or 3)08:24
tpatilI have seen one patch where monitoring is moved to tacker.conductor08:24
dkushwahatpatil, which patch? you mean mistral-monitoring patch?08:25
tpatilthat patch is not yet merged, but monitoring same vnF from 2 or 3 controller nodes would be problematic08:25
tpatilyes08:25
*** hokeeeeun has joined #openstack-meeting08:26
dkushwahatpatil, I need to re-check that patch, but as I remember, the conductor is for communication, not for monitoring08:27
tpatilin our specs, we want to process vnf packages in tacker conductor, for that, we need to extract the csar zip in a folder which will be made configurable.08:28
tpatilonce the csar zip is extracted, we want to keep the files as is until the vnf package is deleted08:28
*** hokeen has quit IRC08:29
tpatilnow if tacker-conductor is running on multiple nodes for HA, we will need to clean up the extracted data from the folder on all nodes when the vnf package is deleted from tacker-conductor08:29
*** ociuhandu has quit IRC08:30
dkushwahatpatil, just trying to understand, why new API on conductor?08:31
tpatilfor that we will need to introduce periodic tasks in conductor for clean up of deleted VNF packages08:31
tpatilwe want to add processing of vnf package code in conductor08:32
tpatilas it would be a lengthy task08:32
dkushwahamakes sense08:32
tpatiland also in the conductor manager, we can introduce the periodic task for cleanup08:32
joxyukitpatil, why is it priodic? when VNF package delete is called, tacker will delete it.08:34
joxyukis/priodic/periodic/08:34
tpatilbut if you run multiple tacker.conductor service, the request will be processed by only one service08:35
*** e0ne has joined #openstack-meeting08:35
joxyukiunderstand08:35
tpatilin that case, some of the extracted csar data from one of the controller node won't be deleted08:36
dkushwahatpatil, seems we missed this case in the spec.08:38
dkushwahai needs to re-llok into it08:38
nitinuikey@tpatil so you mean periodic task will clean up vnf data from all the tacker conductor nodes?08:38
dkushwahare-look08:38
tpatilyes, I will update the specs as now I'm clear that tacker.conductor will be installed on the controller node08:38
hyunsikyangIMO, if you want to change the conductor architecture, it is another issue.08:39
hyunsikyangdkushwaha, does tacker now support multiple conductors and services?08:39
tpatilnitinuikey: it will be deleted from one of the controller nodes when the user deletes the vnf package, and on the other controller nodes, if any data is there, it will be cleaned up by the periodic task08:39
nitinuikeytpatil understood08:40
dkushwahahyunsikyang,  some actions cannot access tacker database directly08:43
dkushwahahyunsikyang, so conductor server was introduced to do database access for those actions08:43
dkushwahahyunsikyang, but yes, it looks like an issue to have multiple conductors08:44
*** tpatil has quit IRC08:44
*** jamesmcarthur has joined #openstack-meeting08:44
hyunsikyangdkushwaha, yes. I think so. thanks08:45
shubham_potaleFYI tpatil lost internet connection08:46
dkushwahatpatil, please update spec, i will check again08:46
dkushwahaoh08:46
nitinuikeydkushwaha we will inform you if he is not able to reconnect08:46
shubham_potaledkushwaha: tpatil here, sure i will update the specs08:47
dkushwahahyunsikyang, could you please help to review https://review.opendev.org/#/c/58293008:47
dkushwahatpatil, thanks08:47
*** panda|pubholiday is now known as panda08:49
dkushwahamoving next..08:49
dkushwahaPrometheus plugin support08:49
dkushwaha#link https://review.opendev.org/#/c/540416/08:49
*** iyamahat has joined #openstack-meeting08:50
dkushwahajaewook_oh_, any update from your side?08:50
jaewook_oh_Umm I updated the bp and I checked your comments08:50
*** ociuhandu has joined #openstack-meeting08:51
dkushwahajaewook_oh_, I just commented some nits.08:52
jaewook_oh_Yes, and I've updated the bp from patch set 29 to patch set 30. Some new comments from the reviewers would be appreciated.08:54
*** sridharg has joined #openstack-meeting08:54
dkushwahaFolks, As we have to freeze spec soon, so please help to review specs on priority.08:54
dkushwahajaewook_oh_, ok08:54
jaewook_oh_And as you said it is not for container-based vnf only, and I've changed the commit title, but that made some error.08:55
jaewook_oh_That's why I couldn't change it... and I think creating new bp would be nice in this case :(08:56
dkushwahamoving next..08:56
*** tpatil has joined #openstack-meeting08:57
dkushwaha#topic Open Discussion08:57
*** openstack changes topic to "Open Discussion (Meeting topic: tacker)"08:57
dkushwahatpatil, as in the last meeting's discussion about cp-auto-heal08:58
*** tetsuro has joined #openstack-meeting08:58
dkushwahahttps://github.com/openstack/tacker/blame/master/tacker/vnfm/policy_actions/vdu_autoheal/vdu_autoheal.py#L5108:58
dkushwahatpatil, it does not heal CP values, but only its name.08:59
*** Lucas_Gray has joined #openstack-meeting08:59
dkushwahaso once a vnf (i.e. VDU) heals, it loses its ip, and a new one is created08:59
tpatilyes, but if the MAC address is there, it would assign the same ip address09:00
dkushwahaso every time we come up with a new ip09:00
*** tetsuro has quit IRC09:00
*** tetsuro has joined #openstack-meeting09:01
dkushwahaoh, time up folks09:01
dkushwahaClosing this meeting09:01
takahashi-tscFYI, we checked how it works, and the IP address is not changed.09:02
dkushwahathanks all for joining09:02
tpatilI don't recollect everything at this point, will update later09:02
dkushwahatakahashi-tsc, tpatil we can continue on tacker channel for further discussion09:02
dkushwaha #endmeeting09:02
tpatilsure09:03
*** tetsuro has quit IRC09:03
*** hokeen has joined #openstack-meeting09:03
*** tpatil has left #openstack-meeting09:05
*** ociuhandu has quit IRC09:05
*** hokeeeeun has quit IRC09:06
*** hokeen has quit IRC09:08
*** hjwon has quit IRC09:17
*** hyunsikyang__ has joined #openstack-meeting09:17
*** jamesmcarthur has quit IRC09:18
*** hyunsikyang has quit IRC09:20
*** hyunsikyang has joined #openstack-meeting09:21
*** hyunsikyang__ has quit IRC09:22
*** hyunsikyang has quit IRC09:23
*** yamamoto has quit IRC09:27
*** Lucas_Gray has quit IRC09:27
*** yamamoto has joined #openstack-meeting09:28
*** dtrainor_ has quit IRC09:28
*** Lucas_Gray has joined #openstack-meeting09:29
*** dtrainor has joined #openstack-meeting09:31
*** keiko-k has quit IRC09:32
*** yamamoto has quit IRC09:33
*** abishop_ has joined #openstack-meeting09:40
*** abishop has quit IRC09:41
*** ociuhandu has joined #openstack-meeting09:47
*** yamamoto has joined #openstack-meeting09:51
*** jamesmcarthur has joined #openstack-meeting09:55
*** jamesmcarthur has quit IRC09:59
*** yamamoto has quit IRC10:00
*** yamamoto has joined #openstack-meeting10:05
*** yamamoto has quit IRC10:06
*** yaawang has quit IRC10:21
*** dkushwaha has quit IRC10:30
*** takahashi-tsc has quit IRC10:31
*** rfolco|ruck is now known as rfolco|doctor10:33
*** bbowen has quit IRC10:38
*** yaawang has joined #openstack-meeting10:38
*** yaawang has quit IRC10:44
*** yamamoto has joined #openstack-meeting10:48
*** rakhmerov has joined #openstack-meeting10:55
*** jamesmcarthur has joined #openstack-meeting10:55
*** jamesmcarthur has quit IRC11:00
*** nitinuikey has quit IRC11:05
*** links has joined #openstack-meeting11:12
*** ociuhandu has quit IRC11:13
*** ociuhandu has joined #openstack-meeting11:16
*** ociuhandu has quit IRC11:21
*** jamesmcarthur has joined #openstack-meeting11:31
*** jamesmcarthur has quit IRC11:35
*** jamesmcarthur has joined #openstack-meeting11:35
*** ociuhandu has joined #openstack-meeting11:35
*** links has quit IRC11:36
*** bbowen has joined #openstack-meeting11:36
*** carloss has joined #openstack-meeting11:37
*** ociuhandu has quit IRC11:40
*** yamamoto has quit IRC11:42
*** yaawang has joined #openstack-meeting11:43
*** ociuhandu has joined #openstack-meeting11:49
*** panda is now known as panda|lunch11:49
*** brinzhang_ has joined #openstack-meeting11:50
*** brinzhang has quit IRC11:50
*** brinzhang has joined #openstack-meeting11:50
*** dviroel has joined #openstack-meeting11:53
*** brinzhang_ has quit IRC11:54
*** rubasov has quit IRC12:00
*** ociuhandu has quit IRC12:01
*** zbr is now known as zbr|lunch12:01
*** Lucas_Gray has quit IRC12:02
*** markvoelker has joined #openstack-meeting12:04
*** Lucas_Gray has joined #openstack-meeting12:04
*** markvoelker has quit IRC12:06
*** markvoelker has joined #openstack-meeting12:06
*** rubasov has joined #openstack-meeting12:07
*** yamamoto has joined #openstack-meeting12:14
*** links has joined #openstack-meeting12:18
*** jamesmcarthur has quit IRC12:20
*** Lucas_Gray has quit IRC12:22
*** hongbin has joined #openstack-meeting12:34
*** zbr|lunch is now known as zbr12:38
*** yamamoto has quit IRC12:43
*** enriquetaso has joined #openstack-meeting12:57
*** mriedem has joined #openstack-meeting13:00
*** eharney has joined #openstack-meeting13:00
*** enriquetaso has quit IRC13:03
*** Lucas_Gray has joined #openstack-meeting13:07
*** enriquetaso has joined #openstack-meeting13:08
*** ociuhandu has joined #openstack-meeting13:08
*** panda|lunch is now known as panda13:12
*** lseki has joined #openstack-meeting13:14
*** jamesmcarthur has joined #openstack-meeting13:17
*** yamamoto has joined #openstack-meeting13:19
*** hongbin has quit IRC13:24
*** yamamoto has quit IRC13:34
*** ociuhandu has quit IRC13:40
*** belmoreira has quit IRC13:41
*** belmoreira has joined #openstack-meeting13:42
*** Lucas_Gray has quit IRC13:43
*** yamamoto has joined #openstack-meeting13:44
*** yamamoto has quit IRC13:44
*** yamamoto has joined #openstack-meeting13:45
*** yamamoto has quit IRC13:46
*** yamamoto has joined #openstack-meeting13:46
*** yamamoto has quit IRC13:46
*** Lucas_Gray has joined #openstack-meeting13:47
*** yamamoto has joined #openstack-meeting13:47
*** yamamoto has quit IRC13:51
*** Luzi has quit IRC13:56
*** abishop_ is now known as abishop13:57
*** shintaro has joined #openstack-meeting14:02
*** JangwonLee has quit IRC14:08
*** liuyulong has joined #openstack-meeting14:10
*** ociuhandu has joined #openstack-meeting14:11
*** altlogbot_3 has quit IRC14:12
*** altlogbot_2 has joined #openstack-meeting14:14
*** ociuhandu has quit IRC14:17
*** Lucas_Gray has quit IRC14:17
*** rfolco|doctor is now known as rfolco14:19
*** rfolco is now known as rfolco|ruck14:19
*** Lucas_Gray has joined #openstack-meeting14:19
*** brinzhang has quit IRC14:24
*** brinzhang has joined #openstack-meeting14:25
*** brinzhang has quit IRC14:26
*** ociuhandu has joined #openstack-meeting14:27
*** brinzhang has joined #openstack-meeting14:29
*** brinzhang has quit IRC14:29
*** brinzhang has joined #openstack-meeting14:30
*** Lucas_Gray has quit IRC14:31
*** Lucas_Gray has joined #openstack-meeting14:32
*** yamamoto has joined #openstack-meeting14:45
*** belmoreira has quit IRC14:49
*** yamamoto has quit IRC14:50
*** Lucas_Gray has quit IRC14:52
*** links has quit IRC14:58
*** shintaro has quit IRC15:00
*** cheng1 has joined #openstack-meeting15:01
*** diablo_rojo has joined #openstack-meeting15:02
*** belmoreira has joined #openstack-meeting15:05
*** belmoreira has quit IRC15:13
*** Lucas_Gray has joined #openstack-meeting15:14
*** nfakhir has quit IRC15:26
*** gyee has joined #openstack-meeting15:29
*** enriquetaso has quit IRC15:35
*** ociuhandu has quit IRC15:36
*** sridharg has quit IRC15:47
*** igordc has joined #openstack-meeting15:48
*** ociuhandu has joined #openstack-meeting15:48
*** rsimai is now known as rsimai_away15:51
*** Wryhder has joined #openstack-meeting16:00
slaweq#startmeeting neutron_ci16:00
openstackslaweq: Error: Can't start another meeting, one is in progress.  Use #endmeeting first.16:00
slaweqhi16:00
*** mlavalle has joined #openstack-meeting16:00
slaweq#endmeeting16:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"16:00
openstackMeeting ended Tue Aug  6 16:00:47 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/tacker/2019/tacker.2019-08-06-08.02.html16:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/tacker/2019/tacker.2019-08-06-08.02.txt16:00
openstackLog:            http://eavesdrop.openstack.org/meetings/tacker/2019/tacker.2019-08-06-08.02.log.html16:00
slaweq#startmeeting neutron_ci16:00
openstackMeeting started Tue Aug  6 16:00:55 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
*** openstack changes topic to " (Meeting topic: neutron_ci)"16:00
openstackThe meeting name has been set to 'neutron_ci'16:01
ralonsohhello16:01
slaweqhi16:01
mlavalleLO, I thought I had missed the meeting by 1 hour16:01
mlavalleLOL16:01
slaweq:D16:01
*** Lucas_Gray has quit IRC16:01
slaweqmlavalle: You're just in time16:01
*** Wryhder is now known as Lucas_Gray16:01
mlavalleyes, I realized that right away, but for a split second I got confused16:02
slaweqNate is no PTO this week16:02
mlavalleon16:02
slaweqbut lets wait 2 more minutes for haleyb and others16:02
haleybhi16:02
slaweqhi haleyb16:03
*** ociuhandu has quit IRC16:03
haleybhi, will need to leave about :15 early16:03
slaweqhaleyb: ok16:03
slaweqso lets start this meeting16:03
slaweqGrafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate16:03
slaweq#topic Actions from previous meetings16:04
*** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)"16:04
slaweqmlavalle to report bug with router migrations16:04
mlavalleI did report the bug16:04
mlavallehttps://bugs.launchpad.net/neutron/+bug/183844916:04
openstackLaunchpad bug 1838449 in neutron "Router migrations failing in the gate" [Medium,Confirmed] - Assigned to Miguel Lavalle (minsel)16:04
mlavalleand I spent time debugging it16:04
mlavalleto make a long story short16:04
mlavallethe neutron server never receives notification that the router_interface_distributed is down in the controller ovs agent16:06
mlavalleI can see that L3 agent removed the interface16:06
mlavalleI can also see that the ovs agent removes the port16:06
mlavallebut the notification never gets to the server16:06
mlavalleand therefore the port remains in status UP from the point of the neutron server and the API16:07
slaweqand do You see that notification was sent by ovs-agent?16:07
mlavallechecking that is my next step16:07
mlavalleI might need to add some LOG statements in a DNM patch16:08
slaweqyes, some additional logs might be useful for that16:08
mlavallebut definitely the problem is somewhere in the notification path16:09
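The DNM-patch approach mlavalle describes — temporary LOG statements around the suspect agent-to-server notification — amounts to something like the following. The function name, client object, and payload are stand-ins for illustration, not the actual neutron agent code:

```python
import logging

LOG = logging.getLogger(__name__)

def notify_port_status(rpc_client, port_id, status):
    """Stand-in for the agent-to-server port status notification, with the
    kind of extra logging a throwaway (DNM) debugging patch would add."""
    LOG.debug("Sending port status update: port=%s status=%s", port_id, status)
    try:
        rpc_client.update_port_status(port_id, status)
        LOG.debug("Port status update sent: port=%s", port_id)
    except Exception:
        # An exception swallowed here would explain a server that never
        # learns the port went DOWN, so log it loudly before re-raising.
        LOG.exception("Port status update failed: port=%s", port_id)
        raise
```

Bracketing the send with "sending"/"sent" lines distinguishes a notification that was never attempted from one that was dropped in transit.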
slaweqso it sounds like a real bug, not a test-only issue16:09
mlavalleyes it is16:09
*** mattw4 has joined #openstack-meeting16:09
slaweqthx mlavalle for update and for working on this16:10
mlavallein my dev system the test always succeeds16:10
mlavallebut I am executing it without load16:10
mlavalleso it must be related with the system under load16:10
slaweqyes, probably16:10
mlavallethis type is tough to debug16:10
slaweqas many of our gate failures16:10
slaweqI think we already fixed easy ones ;)16:11
slaweqok, lets move on to the next action16:11
slaweqralonsoh to report a bug about failed test neutron.tests.fullstack.test_qos.TestMinBwQoSOvs.test_bw_limit_qos_port_removed16:11
ralonsohslaweq, I talked to you last week16:12
*** igordc has quit IRC16:12
ralonsohThe logs are gone and I didn't see this error reproduced16:12
mlavalle#action mlavalle will continue debugging router migration bug16:12
*** Lucas_Gray has quit IRC16:12
slaweqralonsoh: ok, I probably forgot about it16:12
ralonsohso I didn't open a bug because we don't have a log to report16:12
slaweqnow I remember16:12
slaweqok, sounds good for me16:12
ralonsoh(but I'm still checking the CI)16:13
slaweqmlavalle: thx for adding action for Yourself16:13
slaweqralonsoh: thx :)16:13
slaweqok, next one than16:13
slaweqslaweq to check midonet job and report bug(s) related to it16:13
slaweqI checked that there was issue with expired SSL cert for some midonet domain16:13
slaweqyamamoto worked around that with https://review.opendev.org/#/c/674538/16:14
slaweqbut today I found out that there is still one test failing in this job, so I reported bug https://bugs.launchpad.net/networking-midonet/+bug/183916916:14
openstackLaunchpad bug 1839169 in networking-midonet "Tempest test tempest.api.compute.admin.test_migrations.MigrationsAdminTest is failing 100% times" [Undecided,New]16:14
slaweqand I just sent patch to skip this test in this midonet job running on neutron repo16:15
*** tssurya has quit IRC16:15
slaweqhttps://review.opendev.org/67485816:15
slaweqand the last one from last week was16:16
slaweqslaweq to report and try to fix bug in neutron_tempest_plugin.api.test_port_forwardings.PortForwardingTest16:16
slaweqI reported this bug, started looking into it and than found out that this was releated to the patch on which it was running, so no bug at all :P16:17
slaweqI just lost about 30 minutes of my life on that one :D16:17
mlavalleLOL16:18
slaweqmlavalle: yeah, that was my reaction when I noticed that :)16:18
slaweqok, any questions/comments?16:18
*** mattw4 has quit IRC16:19
slaweqok, lets move on then16:19
slaweq#topic Stadium projects16:20
*** openstack changes topic to "Stadium projects (Meeting topic: neutron_ci)"16:20
slaweqpython 3 migration16:20
*** ricolin has quit IRC16:20
slaweqStadium projects etherpad: https://etherpad.openstack.org/p/neutron_stadium_python3_status16:20
*** mattw4 has joined #openstack-meeting16:20
slaweqI think there is nothing to update here today16:20
slaweqok, next one16:21
slaweqtempest-plugins migration16:21
slaweqEtherpad: https://etherpad.openstack.org/p/neutron_stadium_move_to_tempest_plugin_repo16:21
slaweqI know that tidwellr is working on patch for neutron-dynamic-routing16:22
slaweqand still have some problems with some tests there16:22
mlavalleI did spend some time with the vpn one16:22
slaweqany update about vpnaas mlavalle ?16:22
mlavalledidn't make much progress though16:22
mlavalleI'll continue this week16:23
slaweqok, thx for taking care of it16:23
*** enriquetaso has joined #openstack-meeting16:23
slaweqany questions/comments about stadium projects?16:24
*** hongbin has joined #openstack-meeting16:24
slaweqok, lets move on then16:26
slaweqnext topic16:26
slaweq#topic Grafana16:26
*** openstack changes topic to "Grafana (Meeting topic: neutron_ci)"16:26
slaweqlink was given earlier: http://grafana.openstack.org/dashboard/db/neutron-failure-rate16:26
slaweqin overall it looks quite good IMO16:27
mlavalleand summer vacation has hit us16:27
mlavalleit seems there is not a lot of activity16:28
slaweqwe had problem with functional/fullstack jobs at the end of last week but it's fixed now by frickler https://review.opendev.org/#/c/674426/16:28
slaweqmlavalle: right, we have far fewer patches, especially in the gate queue16:28
slaweqthat is my impression too16:28
*** hongbin has quit IRC16:29
slaweqany other comments related to grafana in overall?16:30
mlavallenone from me16:30
slaweqok, so lets move on16:30
slaweqI don't have anything related to functional/fullstack jobs for today16:30
slaweqso lets skip to16:31
slaweq#topic Tempest/Scenario16:31
*** openstack changes topic to "Tempest/Scenario (Meeting topic: neutron_ci)"16:31
*** jamesmcarthur has quit IRC16:31
slaweqhere I have only one thing which I want to mention16:31
slaweqrecently I realized that we are not running neutron-tempest-plugin tests on any non-dvr openvswitch environment16:31
slaweqso I proposed 2 new jobs:16:31
slaweq    neutron-tempest-plugin-scenario-openvswitch - https://review.opendev.org/#/c/670738/ - merged already16:31
slaweq    neutron-tempest-plugin-scenario-openvswitch-iptables-hybrid https://review.opendev.org/#/c/674274/16:32
*** markvoelker has quit IRC16:32
slaweqboth are proposed as voting and gating because this is in fact our most common and most supported configuration16:32
slaweqso it should works fine16:32
slaweqDescription of those jobs is proposed in https://review.opendev.org/#/c/674272/16:32
*** enriquetaso has quit IRC16:32
slaweqplease review those patches and tell me what You think about them :)16:33
slaweqanything else You want to talk about, related to tempest/scenario jobs?16:34
mlavallethe description is already merged16:34
slaweqno, it's not, as it has a Depends-On on other patches16:34
slaweqso it's approved but not merged yet16:34
mlavalleok16:36
mlavalleapproved the other one16:36
slaweqmlavalle: thx a lot16:36
slaweqI will push also patch to update grafana and add those 2 new jobs16:37
mlavallethanks16:37
slaweqLast one thing which I have for today is new neutron-tempest-plugin release16:38
slaweqI proposed it yesterday: https://review.opendev.org/#/c/674573/16:39
slaweqthe reason why I did it so fast after the last one is that it has some quite important new test cases, like multicast traffic test16:39
slaweqso mlavalle if You can look at it too, that would be nice16:39
slaweqamotoki already gave +1 for it :)16:40
mlavalleok16:40
mlavalledone16:40
slaweqmlavalle: thx16:41
slaweqok, that's all from my side for today16:41
mlavalleI don't have anything else16:41
slaweq(faster than ever)16:41
mlavalleit must be the summer vacation spirit16:41
slaweqif You don't have anything else to talk about, we can finish this meeting earlier today :)16:41
slaweqmlavalle: that's right :)16:41
mlavalleat least for us in the Northern hemisphere16:41
slaweqyeah :)16:42
mlavalleThe Aussies are in the middle of their winter16:42
ralonsoh(I'm going to the pool right now)16:42
slaweqbut do we have anyone from the Southern hemisphere here?16:42
mlavalleno16:42
slaweq:)16:42
slaweqralonsoh: have fun16:42
mlavalleI am probably the one farthest south16:42
slaweqok, thx for attending16:43
mlavalleboth now and where I was born, which is even further south16:43
mlavalleo/16:43
ralonsohbye!16:43
slaweqhave a great day and week :)16:43
slaweqsee You online16:43
slaweq#endmeeting16:43
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"16:43
slaweqo/16:43
openstackMeeting ended Tue Aug  6 16:43:47 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:43
openstackMinutes:        http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-08-06-16.00.html16:43
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-08-06-16.00.txt16:43
openstackLog:            http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-08-06-16.00.log.html16:43
*** markvoelker has joined #openstack-meeting16:44
*** mlavalle has left #openstack-meeting16:44
*** e0ne has quit IRC16:48
*** ociuhandu has joined #openstack-meeting16:49
*** ociuhandu has quit IRC16:57
*** ociuhandu has joined #openstack-meeting16:58
*** yamahata has quit IRC17:03
*** iyamahat has quit IRC17:03
*** ralonsoh has quit IRC17:08
*** ociuhandu has quit IRC17:22
*** iyamahat has joined #openstack-meeting17:22
*** enriquetaso has joined #openstack-meeting17:23
*** ociuhandu has joined #openstack-meeting17:23
*** iyamahat_ has joined #openstack-meeting17:26
*** ociuhandu has quit IRC17:27
*** ociuhandu has joined #openstack-meeting17:28
*** iyamahat has quit IRC17:29
*** senrique_ has joined #openstack-meeting17:33
*** enriquetaso has quit IRC17:35
*** yamahata has joined #openstack-meeting17:42
*** lpetrut has quit IRC17:44
*** diablo_rojo has quit IRC17:45
*** ociuhandu has quit IRC17:48
*** diablo_rojo has joined #openstack-meeting17:55
*** ociuhandu has joined #openstack-meeting18:02
*** lpetrut has joined #openstack-meeting18:03
*** ociuhandu has quit IRC18:04
*** ociuhandu has joined #openstack-meeting18:05
*** igordc has joined #openstack-meeting18:08
*** ociuhandu has quit IRC18:09
*** armax has joined #openstack-meeting18:15
*** bbowen_ has joined #openstack-meeting18:15
*** bbowen has quit IRC18:17
*** e0ne has joined #openstack-meeting18:17
*** igordc has quit IRC18:25
*** e0ne has quit IRC18:31
*** senrique_ has quit IRC18:41
*** igordc has joined #openstack-meeting18:46
*** jbadiapa has joined #openstack-meeting18:47
*** tesseract has quit IRC18:52
*** Shrews has joined #openstack-meeting18:55
*** jamesmcarthur has joined #openstack-meeting18:57
clarkbanyone else here for the infra meeting? I'll get things started in a few minutes18:59
Shrewso/19:00
ianwo/19:00
*** jgriffith has joined #openstack-meeting19:00
clarkb#startmeeting infra19:01
openstackMeeting started Tue Aug  6 19:01:07 2019 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.19:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:01
*** openstack changes topic to " (Meeting topic: infra)"19:01
openstackThe meeting name has been set to 'infra'19:01
clarkb#link http://lists.openstack.org/pipermail/openstack-infra/2019-August/006437.html Today's Agenda19:01
clarkb#link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting Edit the agenda at least 24 hours before our scheduled meeting to get items on the agenda19:02
clarkb#topic Announcements19:02
*** openstack changes topic to "Announcements (Meeting topic: infra)"19:02
clarkbNext week I will be attending foundation staff meetings and will not be able to run our weekly meeting. I expect fungi is in the same boat. We will need a non clarkb or fungi volunteer to chair the meeting19:03
clarkbor we can decide to skip it if people prefer that19:03
fungiyup19:03
*** jbadiapa has quit IRC19:03
clarkbAlso expect that I won't be much help next week in general19:03
fungithe boat is probably a metaphor19:03
*** e0ne has joined #openstack-meeting19:04
fungii won't be bringing any fishing gear19:04
clarkb#topic Actions from last meeting19:04
*** openstack changes topic to "Actions from last meeting (Meeting topic: infra)"19:04
clarkb#link http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-07-30-19.01.txt minutes from last meeting19:04
clarkbI think mordred did github things last week19:05
fungihe did indeed19:05
clarkbgithub.com/openstack-infra repos should all be updated with a note on where they can now be found as well as archived19:05
diablo_rojoo/19:05
clarkbmordred: was the opendev admin account created too?19:05
fungis/updated with/replaced by/19:05
corvusi think i saw mordred say he did that19:06
clarkbawesome19:06
clarkbThe other action listed was updating the gitea sshd container to log its sshd logs19:06
clarkbI do not think this happened; however, I can take a look at that today so I'll assign the action to myself19:07
clarkb#action clarkb Have gitea sshd logs recorded somewhere19:07
*** mriedem has quit IRC19:08
clarkb#topic Priority Efforts19:08
*** openstack changes topic to "Priority Efforts (Meeting topic: infra)"19:08
mordredo/19:08
clarkb#topic OpenDev19:08
*** openstack changes topic to "OpenDev (Meeting topic: infra)"19:08
*** jamesmcarthur has quit IRC19:08
clarkbThat is a good jump into recent opendev things19:09
mordredyes - I did github things19:09
clarkbwe do still have the OOM problem however it seems less prevalent19:09
mordredand the opendevadmin account19:09
clarkbmordred: tyty19:09
clarkb#link https://etherpad.openstack.org/p/debugging-gitea08-OOM19:09
clarkbLast week I dug into that a bit and tried to collect my thoughts there19:09
*** mriedem has joined #openstack-meeting19:09
clarkbit would probably be good if someone else could review that and see if I missed anything obvious and consider my ideas there19:10
corvuswe have no reason to think that gitea 1.9.0 will improve anything, but mordred has a patch to upgrade; assuming we move forward that will be a variable change19:10
mordredclarkb: I pushed up a patch just a little bit ago to upgrade us to gitea 1.9 - it's also possible that 1.9 magically fixes the oom problems19:10
mordredcorvus: yes - I agree - I have no reason to believe it fixes anything19:10
fungiright, it's also possible magical elves will fix the oom ;)19:11
mordredso we could also hold off so as not to move variables19:11
clarkbya I think the memory issues largely come down to big git repos being a problem and gitea holding open requests against big git repos for significant periods of time so they pile up19:11
corvusyeah, but unfounded optimism is great, i endorse it :)19:11
mordred\o/19:11
corvusi don't think we need to hold back for further debugging; i think we should do the 190 upgrade and just be aware of the change19:11
clarkbwe have a 2 minute haproxy timeout (and haproxy seemed to time out these requests because i could not map them to gitea logs based on timestamps) but gitea logs show requests going on for hours19:11
clarkbcorvus: ++19:11
clarkbone idea I had was maybe digging into having gitea timeout requests because a single 500 error is better than OOMing and crashing gitea19:12
fungian option there, just spitballing, might be to use an haproxy health check which includes some resource metrics like memory19:12
corvusclarkb: oh that's an interesting data point i missed19:12
clarkb(I have not done that yet, but possibly can later this week)19:12
fungiprobably involves running a check agent on the gitea servers though19:12
corvusclarkb: if the remote side has hung up, i don't see why gitea should continue whatever it was doing19:12
clarkbcorvus: ya that is at the bottom of the etherpad19:12
clarkbcorvus: exactly19:12
fungibut that would force additional requests to get redistributed if there's a pileup on one of the backends19:13
fungion the other hand it could just end up taking the entire pool offline19:13
*** e0ne has quit IRC19:13
clarkbfungi: ya I think if this continues to be a problem (we've failed at making gitea/git more efficient first) then improving haproxy load balancing methods is our next step19:13
clarkbcorvus: you have some grasp of the gitea code base maybe I can take a look at it later this week and if I run into problems ask for help?19:14
*** e0ne has joined #openstack-meeting19:14
corvusclarkb: absolutely19:14
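The mismatch discussed above — haproxy hanging up after 2 minutes while gitea keeps working the request for hours — is bounded on the haproxy side by its server timeout; an illustrative backend stanza (backend/server names and values are hypothetical, not the production config):

```haproxy
# Illustrative haproxy backend for a gitea pool.
# "timeout server" caps how long haproxy waits on a backend
# response; once it fires, haproxy closes the client side, but
# the backend must also notice the dead socket or it will keep
# working (the gitea behavior described above).
backend gitea
    balance source
    timeout connect 10s
    timeout server  2m
    server gitea08 gitea08.opendev.org:3000 check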
clarkbgreat. Any other opendev related business before we move on?19:15
*** e0ne has quit IRC19:15
corvusoh one thing19:15
corvusi think tobiash identified the underlying cause of the zuul executors ooming; the fix has merged and if we restart them, things should be better19:16
corvusthis is the thing ianw discovered19:16
*** jbadiapa has joined #openstack-meeting19:16
clarkbcorvus: is that related to the executor gearman worker class fix?19:16
corvushttps://review.opendev.org/674762 is the fix19:16
corvusyes19:16
corvusuneven distribution of jobs makes executors use too much memory and either the log streaming process gets killed, or the executor itself (making the problem worse)19:17
corvuswhat's really cool is --19:17
corvusif you take a look at the graphs right now, you can actually see that some of them are graphs of noisy neighbors19:17
corvus(they have oscillations which have no relationship to zuul itself)19:18
corvusbecause absent our leveling algorithm, external inputs like hypervisor load and network topology have an outsize effect19:18
clarkbbecause gearman is a "you get jobs as quickly as you can service them" system19:18
corvusyep19:19
corvusnanoseconds count19:19
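The "you get jobs as quickly as you can service them" property corvus describes can be seen in a few lines of simulation — a hypothetical sketch, not Zuul or gearman code: workers pull from one shared queue, and whichever worker is momentarily faster (less noisy-neighbor load, shorter network path) ends up taking the larger share of jobs, and in the executor case the larger share of memory.

```python
import heapq

def distribute(num_jobs, service_times):
    """Simulate greedy pull-based dispatch: each worker takes the
    next job the moment it becomes free (gearman-style), so job
    counts track whichever worker is momentarily fastest."""
    # (ready_time, worker_index) -- the earliest-free worker pulls next
    workers = [(0.0, i) for i in range(len(service_times))]
    heapq.heapify(workers)
    counts = [0] * len(service_times)
    for _ in range(num_jobs):
        ready, i = heapq.heappop(workers)
        counts[i] += 1
        heapq.heappush(workers, (ready + service_times[i], i))
    return counts

# A worker even slightly faster ends up with the larger share.
print(distribute(1000, [1.0, 1.1]))
```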
clarkb#topic Update Config Management19:20
*** openstack changes topic to "Update Config Management (Meeting topic: infra)"19:20
clarkbianw has changes up to deploy an ansible based/managed backup server19:20
* clarkb finds links19:20
ianw#link https://review.opendev.org/67454919:20
clarkbianw wins19:20
ianw#link https://review.opendev.org/67455019:20
ianwi can fiddle that today if i can get some eyes and see if we can get something backing up to it19:20
ianwshould run in parallel with existing backups, so no flag day etc19:21
clarkbianw: ya the original backup design had us backing up to two locations anyway19:22
ianw(well, no client is opted into it yet either, the first one i'll babysit closely)19:22
clarkbI expect that the puppetry will handle that fine19:22
clarkbI think we are in a good spot to start pushing on the CD stuff too?19:23
clarkbcorvus: ^ I unfortunately tend to page that out more than I should. You probably know what the next step is there19:23
corvusfor jobs triggered by changes to system-config, yes19:23
corvusfor the dns stuff, no19:23
corvushttps://review.opendev.org/671637 is the current hangup for that19:24
clarkb#link https://review.opendev.org/671637 Next step for CD'ing changes to system-config19:24
corvusthat's how i wanted to solve the problem, but logan pointed out a potentially serious problem19:24
corvusso we either need to put more brainpower into that, or adopt one of our secondary plans (such as, opendev takes over the zuul-ci.org zone from the zuul project).  that could be a temporary thing until we have the brainpower to solve it better.19:25
clarkber that is for the dns stuff not system-config right?19:26
clarkb#undo19:26
openstackRemoving item from minutes: #link https://review.opendev.org/67163719:26
corvusclarkb: correct19:26
clarkb#link https://review.opendev.org/671637 Next step for CD'ing changes to DNS zones19:26
corvusno known obstacles to triggering cd jobs from changes to system-config19:26
clarkbgot it19:26
clarkbAnything else on this subject?19:26
corvus(project-config is probably ok too)19:27
clarkb#topic Storyboard19:28
*** openstack changes topic to "Storyboard (Meeting topic: infra)"19:28
fungino updates for sb this week that i'm aware of19:28
clarkbfungi: I know mnaser reported some slowness with the dev server, but I believe that was tracked back to sql queries actually being slow?19:28
* mordred did not help on the sql this past week19:28
clarkb(so there isn't an operational change we need to be aware of?)19:28
* mordred will endeavor to do so again this week19:29
fungiit seemed to be the same behavior, yes. if i tested the same query i saw mysql occupy 100% of a vcpu until the query returned19:29
fungi(api query, i mean)19:29
*** kopecmartin is now known as kopecmartin|off19:29
clarkbdiablo_rojo: Do you have anything to add?19:29
diablo_rojomordred, should I just actively bother you in like...two days or something? Would that be helpful?19:29
fungiso the bulk of the wait was in one or more database queries presumably19:30
diablo_rojoclarkb, the only other thing we did during the meeting last week was start to talk about how we want to try to do onboarding in Shanghai: one session for users and another for contributors19:30
diablo_rojoThat's all.19:30
clarkbdiablo_rojo: related to that can I assume that you have or will handle space allocation for storyboard? or should I make a formal request similar to what I did for infra?19:30
fungiahh, yep, and covered that trying to facilitate remote participation in shanghai might be harder than normal19:31
mordreddiablo_rojo: yeah - actually - if you don't mind19:31
mordreddiablo_rojo: I keep remembering on tuesday morning - when I look at the schedule and think "oh, infra meeting"19:31
diablo_rojomordred, happy to be annoying ;) I'll try to find/create a quality gif or meme for your reminder19:31
*** jbadiapa has quit IRC19:31
mordredhah19:31
diablo_rojomordred, lol19:31
clarkbdiablo_rojo: just let me know if I need to do anything official like for storyboard presence in shanghai. Happy to do so19:32
diablo_rojoclarkb, I will handle space for StoryBoard :)19:32
clarkbawesome19:32
diablo_rojoclarkb, I know the person with the form ;)19:32
clarkbindeed19:32
diablo_rojo^^ bad joke I will continue to make19:32
clarkb#topic General Topics19:33
*** openstack changes topic to "General Topics (Meeting topic: infra)"19:33
clarkbFirst up is trusty server replacements.19:33
clarkbfungi: are you planning to do the testing of wiki-dev02?19:33
clarkbiirc the planned next step was to redeploy it to make sure puppet works from scratch?19:33
fungiyep, have been sidetracked by other responsibilities unfortunately, but that's still high on my to do list19:33
fungiwiki-dev02 can simply be deleted and re-launched at any time19:34
funginothing is using it19:34
fungii'll try to get to that this week19:34
clarkbthank you19:34
clarkbcorvus has also made great progress with the swift log storage (whcih means we can possibly get rid of logs.openstack.org)19:34
clarkbcorvus: at this point you are working through testing of individual cloud behaviors?19:35
corvusclarkb: yes, i believe rax, and vexxhost are ready, confirming ovh now (i expect it's good)19:35
corvusso we'll be able to randomly store logs in one of six regions19:36
clarkband I know you intended to switch over to the zuul logs tab with logs.o.o backing it first. Are we ready to start planning that move or do we want to have the swift stuff ready to happen shortly after?19:36
corvus(job/log region proximity would be nice, but not relevant at the moment since our logs still go through the executor)19:36
corvusyeah, we're currently waiting out a deprecation period for one of the roles which ends monday19:37
clarkbexciting we might be switched over next week then?19:37
corvusafter that, i think we can switch to zuul build page as the reporting target (but we need a change to zuul to enable that behavior)19:37
corvusand then i think we'll have the swift stuff ready almost immediately after that19:38
clarkbthat is great news19:38
corvusmaybe we plan for a week between the two changes, just to give time for issues to shake out19:38
clarkbwfm19:38
mordred++19:39
fungiyes, it's timely, given we've had something like 3 disruptions to the current log storage in a couple weeks time19:39
corvusthough... hrm, timing might be tight on that cause i leave for gerrit user summit soon19:39
*** lpetrut has quit IRC19:39
corvusi leave on aug 2219:40
clarkbwe can probably take our time then and do the switches we are comfortable with bit by bit as people are around to monitor19:40
corvusassuming we don't want to merge it the day before i leave, we really only have next week to work with19:40
corvusi return sept 319:40
clarkbk19:40
corvusso we either do both things next week, or build page next week and swift in september19:41
* mnaser curious which swifts are being used19:41
clarkband we can probably decide on when to do swift based on how smoothly the build logs tag change goes?19:41
clarkbs/tag/tab/19:41
corvusmnaser: vexxhost, rax, ovh19:42
mnaserjust wondering how much data is expected to be likely hosted?19:42
clarkband fortnebula has hinted that a swift install there might happen too19:42
*** bbowen__ has joined #openstack-meeting19:42
corvusmnaser: i think we're currently estimating about 2GB for starters (much less than when we initially discussed it with you)19:42
corvuser 2TB19:43
mnasercool, thank you for that info19:43
* mnaser hides again19:43
clarkbWhich is a good lead into the next agenda item. State of the clouds19:43
clarkbI wanted to quickly give a status update on fn and was hoping mordred could fill us in on any changes with MOC19:43
clarkbfn is now providing 100 test instances and we seem to be quite stable there now19:44
clarkbWe have noticed odd mirror throughput when pulling things from afs19:44
mordredapp-creds are working in moc now - so next steps are getting teh second account created and creating the mirror node19:44
clarkbif we manually pull cold files we get about 1MBps and if we pull a warm file we get about 270MBps. But yum installing packages reports 12MBps19:44
*** jamesmcarthur has joined #openstack-meeting19:44
*** bbowen_ has quit IRC19:45
clarkbI am not sure that the afs mirror performance behavior is a major issue as the impact on job runtimes is low19:45
clarkbbut something I wanted to make note of19:45
clarkbmordred: exciting19:45
corvusclarkb: only yum?19:45
mnaseryum being slow is nothing new :(19:45
clarkbcorvus: I haven't looked at the other package managers yet, but the examples donnyd dug up were yum19:45
clarkbcorvus: but that is a good point we should check apt-get too19:45
mnaserOSA's centos jobs take almost twice as long and there isn't a lot of different things happening19:46
mnaserfor context19:46
clarkbmnaser: good to know19:46
clarkbmordred: do you need anything to push MOC along or is that mostly you filing a ticket/request for the second account?19:46
mordrednope- just filing a ticket19:47
donnydIm a little late to the party, but swift will surely be happening. Just a matter of when19:47
*** bbowen__ has quit IRC19:47
clarkbgreat19:48
clarkbnext up is making note of a couple of our distro mirrors' recent struggles19:48
ianwre mirrors i think overall some macro apache throughput stats would be useful, for this and also for kafs comparisons.  working on some ideas19:48
clarkbianw: thanks!19:48
clarkbfungi has found reprepro won't create a repo until it has packages (even if an empty repo exists upstream)19:49
clarkbthis is causing problems for debian buster jobs as buster updates does not exist19:49
clarkbfungi: ^ have we managed to work around that yet?19:49
ianwoh ... maybe a skip like we added for the security when it wasn't there?19:49
fungiyeah, the first buster stable point release is scheduled to happen a month from tomorrow, so the buster-updates suite won't exist until then19:49
fungior rather it will have no packages in it until then19:50
fungiianw: well, except we also add it to sources.list on test nodes19:50
fungiso would need to actually omit that19:50
fungior find a way to convince reprepro to generate an empty suite, but i haven't been able to identify a solution in that direction19:50
*** e0ne has joined #openstack-meeting19:51
ianwahh .. can we just fake an empty something?19:51
clarkbfungi: if we touch a couple files does that result in a valid empty repo or is it more involved than that?19:51
clarkbor maybe even mirror empty repos exactly as upstream rather than building from scratch19:51
fungiit's mostly that...19:51
*** senrique_ has joined #openstack-meeting19:51
fungii mean, sure we could fake one but need to then find a way to prevent reprepro from removing it19:52
clarkbah19:52
fungisince it's a suite in an existing mirror19:52
fungiexisting package repository i mean19:53
fungiso not as simple as something like debian-security which is a different repository we're mirroring separately19:53
clarkbok something to dig into more outside of the meeting I guess19:53
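One of the workarounds floated above — omitting the not-yet-populated buster-updates suite from test-node sources.list — could probe the mirror for the suite's Release file before emitting the entry; a hypothetical sketch (the mirror URL and suite names are illustrative, not the actual node configuration):

```shell
#!/bin/sh
# Emit a sources.list line per suite, skipping any suite whose
# Release file is absent on the mirror (e.g. buster-updates before
# the first stable point release populates it).
emit_sources() {
    mirror="$1"; shift
    for suite in "$@"; do
        if curl -sfI "$mirror/dists/$suite/Release" >/dev/null; then
            echo "deb $mirror $suite main"
        fi
    done
}

emit_sources "https://mirror.example.org/debian" \
    buster buster-updates buster-backports
```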
clarkbwe are almost at time and have a few more thing to bring up really quickly19:54
*** jamesmcarthur has quit IRC19:54
fungiyeah, we can move on19:54
clarkbthe fedora mirror has also been struggling. It did not update for about a month because a vos release timed out (presumably that is why the lock on the volume was held)19:54
clarkbI have since manually updated it and returned the responsibility for updates to the mirror update server.19:54
clarkbOne thing I did though was to reduce the size of that mirror by removing virtualbox and vagrant image files, old atomic release files, and power pc files19:55
clarkbThat dropped repo size by about 200GB which should make vos releases quicker19:55
fungiyeah, the debian mirror was similarly a month stale until we worked out which keys we should be verifying buster-backports with19:55
clarkbthat said it is still a large repo and we may want to further exclude things we don't need19:55
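The pruning clarkb describes (dropping virtualbox/vagrant images, old atomic releases, and power pc trees) is typically expressed as exclude patterns on the mirror sync; a minimal sketch of the selection logic, with illustrative patterns rather than the real mirror-update configuration:

```python
from fnmatch import fnmatch

# Illustrative exclude list in the spirit of the cleanup above.
EXCLUDES = [
    "*/Virtualbox/*",
    "*/Vagrant/*",
    "*/Atomic*/*",
    "*/ppc64le/*",
]

def should_mirror(path, excludes=EXCLUDES):
    """Return True when a repo path survives the exclude patterns."""
    return not any(fnmatch(path, pat) for pat in excludes)

paths = [
    "releases/30/Everything/x86_64/os/repodata/repomd.xml",
    "releases/30/Everything/ppc64le/os/repodata/repomd.xml",
]
print([p for p in paths if should_mirror(p)])  # only the x86_64 path survives
```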
ianwthanks; i haven't quite got f30 working which is why i guess nobody noticed ... we should be able to drop f28 then19:55
clarkbI'm watching it now to make sure automatic updates work19:56
clarkbianw: I think tripleo depends on 28 to stand in for rhel8/centos819:56
clarkbianw: so we might not be able to drop 28 until they also drop it, but ya that will also reduce the size19:56
fungialso... magnum? uses f27 still right?19:56
clarkbfungi: the atomic image only19:56
fungiaha19:56
clarkbfungi: which I don't think uses our mirrors19:56
fungigot it19:56
clarkbAnd finally we have PTG prep as a topic19:57
clarkbfriendly reminder we can start brainstorming topics if we have them19:57
clarkb#link https://etherpad.openstack.org/p/OpenDev-Shanghai-PTG-201919:57
clarkb#topic Open Discussion19:58
*** openstack changes topic to "Open Discussion (Meeting topic: infra)"19:58
clarkbwe have a couple minutes for any remaining items19:58
clarkbI will be doing family things tomorrow so won't be around19:58
Shrewsfwiw, i think i've identified the sdk bug that is causing us to leak swift objects from uploading images to rax. if we fail to upload the final manifest after the segments, we don't retry and don't cleanup after ourselves. seems to happen at least once every few days or so according to what logs we have.19:58
clarkbShrews: the manifest is the special object that tells swift about the multiobject file?19:58
mordredyah19:59
Shrewsclarkb: correct. uploaded last19:59
corvus\o/19:59
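The failure mode Shrews describes — segments land, the final manifest PUT fails, and nothing retries or cleans up — is the classic large-object leak; a hypothetical sketch of the shape of a fix (the `client` API here is invented for illustration, not the actual sdk code):

```python
def upload_large_object(client, container, name, segments, retries=3):
    """Upload segments, then the manifest that ties them together.
    If the manifest can't be stored even after retries, delete the
    segments so they aren't leaked (the bug discussed above)."""
    segment_names = []
    for i, data in enumerate(segments):
        seg = "%s/%08d" % (name, i)
        client.put_object(container, seg, data)
        segment_names.append(seg)

    manifest = [{"path": "%s/%s" % (container, s)} for s in segment_names]
    for attempt in range(retries):
        try:
            client.put_manifest(container, name, manifest)
            return True
        except IOError:
            continue
    # Manifest never landed: clean up so segments don't leak.
    for s in segment_names:
        client.delete_object(container, s)
    return False
```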
clarkband we are at time. That is an excellent find re image uploads. Thank you everyone!20:00
clarkb#endmeeting20:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"20:00
openstackMeeting ended Tue Aug  6 20:00:23 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-08-06-19.01.html20:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-08-06-19.01.txt20:00
openstackLog:            http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-08-06-19.01.log.html20:00
diablo_rojoThanks clarkb!20:00
fungithanks clarkb!!!20:00
*** Shrews has left #openstack-meeting20:00
*** lpetrut has joined #openstack-meeting20:08
*** slaweq has quit IRC20:09
*** diablo_rojo has quit IRC20:12
*** jamesmcarthur has joined #openstack-meeting20:14
*** jamesmcarthur has quit IRC20:15
*** jamesmcarthur has joined #openstack-meeting20:17
*** jamesmcarthur has quit IRC20:19
*** eharney has quit IRC20:22
*** jamesmcarthur has joined #openstack-meeting20:25
*** slaweq has joined #openstack-meeting20:25
*** slaweq has quit IRC20:30
*** yamamoto has joined #openstack-meeting20:36
*** eharney has joined #openstack-meeting20:37
*** yamamoto has quit IRC20:41
*** mriedem is now known as mriedem_afk20:46
*** e0ne has quit IRC20:48
*** priteau has joined #openstack-meeting20:51
*** oneswig has joined #openstack-meeting20:54
*** whoami-rajat has quit IRC20:56
*** b1airo has joined #openstack-meeting21:00
oneswig#startmeeting scientific-sig21:00
openstackMeeting started Tue Aug  6 21:00:18 2019 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: scientific-sig)"21:00
openstackThe meeting name has been set to 'scientific_sig'21:00
*** janders has joined #openstack-meeting21:00
*** armax has quit IRC21:00
oneswigaway we go21:00
b1airoMorning21:00
jandersg'day guys21:00
oneswig#link Agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_August_6th_201921:00
*** strigazi has joined #openstack-meeting21:01
oneswigMorning b1airo janders how's Wednesday going?21:01
*** armax has joined #openstack-meeting21:01
jandersoneswig starting slowly :)  how are you?21:01
b1airoI'm just back from holiday and organising late start with the kids etc, so will be a bit fleeting21:01
priteauHello everyone21:01
oneswigHi priteau!21:01
oneswigjanders: I'm well, thanks.  Been a fairly intense week working on some customer projects, but a good one.21:02
jandersthat's good to hear21:02
oneswigI was anticipating jmlowe and trandles coming along today21:03
oneswigjanders: what's the latest on supercloud?21:03
jandersnot much of an update - we're at this time in the year where all the projects are getting reshuffled - so more paperwork than anything else really21:04
janderssome interesting challenges with dual port ConnectX6es - happy to touch on this in AOB if there is time21:04
oneswigwas talking with somebody recently with an interest in a Manila driver for BeeGFS21:05
oneswigjanders: always happy to have some grunty networking issues to round the hour out.21:05
janders:)  sounds good21:06
b1airoYeah Manila BeeGFS sounds useful21:06
janders+1!!!21:06
oneswig#topic OpenStack user survey21:06
*** openstack changes topic to "OpenStack user survey (Meeting topic: scientific-sig)"21:06
oneswig#link yer tiz https://www.openstack.org/user-survey/survey-2019/landing21:06
oneswigGet it filled in and make sure your scientific clouds are stood up and counted.21:07
*** trandles has joined #openstack-meeting21:07
oneswigThat's all about that.21:07
oneswig#topic Monitoring for Chargeback and Accounting21:07
*** openstack changes topic to "Monitoring for Chargeback and Accounting (Meeting topic: scientific-sig)"21:07
tbarronjust need someone to write and maintain a manila driver for BeeGFS and we commit (manila maintainers) to help it integrate/merge/etc.21:07
* tbarron apologizes for jumping before current topic, is done21:08
oneswigHi tbarron - ears burning :-)21:08
oneswigThanks for joining in.21:08
*** lpetrut has quit IRC21:08
tbarrononeswig: :)21:08
trandleso/ I'm late but oh well21:09
oneswigI fear the maintaining bit is often overlooked.  But I think there's a good opportunity here to do something good.21:09
oneswighi trandles, you made it!21:10
oneswigtbarron: we'll see if we can get an interested group together21:11
*** igordc has quit IRC21:12
oneswig#action oneswig to canvas for interest on Manila driver for BeeGFS21:12
oneswigOK, let's return to topic, ok?21:13
oneswigWe actually discussed this last week when priteau did some interesting work investigating CloudKitty drawing data from Monasca instead of the Ceilometer family.21:13
priteauoneswig: Telemetry family :)21:14
oneswigI stand corrected21:14
oneswig#link CloudKitty and Monasca (episode 1) https://www.stackhpc.com/cloudkitty-and-monasca-1.html21:14
oneswigThis article sets the scene21:15
*** senrique_ has quit IRC21:15
oneswigpriteau: I think you've been busy since and assume you've no further developments to report?21:15
priteauI am afraid I've been otherwise engaged21:16
priteauBut hopefully the summer is not yet over for a sequel blog post21:16
oneswigI hope not - you can't leave us hanging off this cliff-edge!21:17
oneswigb1airo: were you using CloudKitty at Monash?21:17
b1airoNo, looked at it a few times, but never attempted putting it all together21:19
oneswigAh ok, I remember you mentioning it.21:20
*** mriedem_afk is now known as mriedem21:20
oneswigWe have an interest in generating billing data but an aversion to pulling in additional telemetry to do it.21:21
priteauCloudKitty itself isn't very complex to configure, it's more having the right data collected that can be tricky21:21
oneswigI'll be interested to see how it works with data from the OpenStack exporter, or nova instance data21:24
*** bbowen__ has joined #openstack-meeting21:24
priteauNova is actually the easiest service to charge because there are various ways to collect usage metrics. Other services, like charge for image or volume storage, will be more challenging.21:26
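The chargeback priteau describes boils down to pricing collected usage samples against configured rates; a toy sketch in the spirit of CloudKitty's hashmap rating (rates and metric names here are made up for illustration):

```python
# Toy rating pass: each collected usage sample (metric, quantity
# over the billing period) is priced against a flat per-unit rate.
RATES = {
    "instance.vcpu_hours": 0.02,   # currency units per vCPU-hour
    "volume.gb_hours": 0.0001,
    "image.gb_hours": 0.0001,
}

def rate_samples(samples, rates=RATES):
    """samples: iterable of (metric, quantity). Returns total cost,
    silently skipping metrics without a configured rate."""
    return sum(q * rates[m] for m, q in samples if m in rates)

usage = [
    ("instance.vcpu_hours", 8 * 24),   # 8 vCPUs for a day
    ("volume.gb_hours", 100 * 24),     # 100 GB volume for a day
    ("unrated.metric", 5),             # no rate -> ignored
]
print(round(rate_samples(usage), 4))  # -> 4.08
```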
oneswigA pity jmlowe's not around to tell us how they use xdmod (I assume) for this21:28
*** shubham_potale has quit IRC21:28
*** igordc has joined #openstack-meeting21:31
oneswigOK, time for janders ConnectX6 issue?21:31
oneswig#topic AOB21:31
*** openstack changes topic to "AOB (Meeting topic: scientific-sig)"21:31
jandersok!21:32
oneswigjanders: what's been going on?21:32
jandersdoes any of you have any experience with dual-port CX6s?21:32
oneswigonly CX-5, alas21:32
clarkbThought I would point out http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008304.html as some of you may be able to answer those questions21:33
jandersI think CX5s are a bit easier to work with :)21:33
jandersI wanted to use one CX6 port as 50GE and the other as HDR20021:33
oneswigHi clarkb - saw that earlier tonight, thanks for raising it.21:34
jandersit seems that with the current firmware it's hard to get eth/ib the ports to work concurrently21:34
jandersit's a bit of "one or the other"21:34
oneswigWhat happened to VPI?21:34
jandersanother angle: do you guys use splitter cables (eg 100GE> 2x50GE)?21:35
b1airoYes21:35
oneswigjanders: yes, on a CX-4 system, had a good deal of trouble initially21:35
jandersinteresting.. hitting the same21:35
janderswhat sorts of issues did you have?21:36
jandersso far I've seen that the support for splitters sometimes varies across firmware versions (say 1.1 supports it, then you upgrade to 1.2 and stuff stops working, and support says this version doesn't support splitters)21:36
jandersalso support guys seem very confused about splitters in general (once I was told these are meant to connect switches not nodes, which seems insane)21:37
oneswigWe had 100G-4x25G splitter with 25G SFP+, but the NICs had QSFP sockets.  Mellanox do a passive slipper that would take the SFP cable in the QSFP socket - needed some firmware tweaks to get it going.21:38
*** markvoelker has quit IRC21:38
jandersright!21:38
jandersdid they make these tweaks mainstream in the end, or did these remain tweaks?21:39
*** jamesmcarthur has quit IRC21:39
oneswigI think they were mainstream but I'll check the fw version now21:39
jandersIIRC our "splitters" are QSFP on both sides21:40
oneswigThe CX4 NICs are running 12.20.101021:41
oneswigProbably ~1.5 years old21:41
jandersok!21:41
jandersso - I think between dual-port CX6es and the splitter cables we might have a bit more "fun" before everything works21:41
oneswigWhat's preventing running the NIC dual-protocol?  Do you think it's the splitter?21:42
jandersI was told there are missing bits in the CX6 firmware but that came through a reseller not Mellanox directly21:42
jandersso not 100% sure if this is accurate21:42
oneswigThat would be a surprising piece to be missing.21:43
jandersalso given we're using the dual-PCI-slot form factor I have a sneaking suspicion that VPI might be trickier than it used to be21:43
oneswigWhat do you get with CX-6, apart from 200G?21:43
jandersthanks, PCIe3.021:43
jandersnothing21:43
jandersI'm actually considering asking to have some cards replaced with CX5s if this can't be resolved promptly21:43
jandersHDR200 would be handy for my GPFS cluster though21:43
jandersit's NVMe based (similar design to our BeeGFS) so could definitely use that bandwidth21:44
jandersbut... 2x CX5 could do that, too :)21:44
oneswigYou'd get to put one on each socket as well, and perhaps exploit some numa locality21:45
jandersindeed21:45
jandersI'm anticipating an update from the reseller & Mellanox this week, I will report back if I learn anything intresting21:45
jandersit's very useful to know you guys had issues with splitters, too21:46
jandersI will be less keen to put them in any larger systems now :(21:46
oneswigWe did but I think the QSA28 slipper was part of our problem21:46
jandersit's a shame cause these could be used to build some really funky IO-centric topologies21:46
jandersI haven't given up on them entirely just yet but it's definitely an amber light in terms of resilient systems design21:47
oneswigI wouldn't give up on them just yet, it would rule out many options21:47
jandersone thing seems certain: taking up a new HCA generation + splitters at the same time is painful21:48
jandersthat's pretty much all I have on this topic for now. Thank you for sharing thoughts, I will keep you posted! :)21:49
oneswigThanks janders, feel your pain but slightly envy your kit at the same time :-)21:50
oneswigOK, shall we wrap up?  Any more to add today?21:50
b1airoI think being an early adopter of anything new from MLNX has a certain degree of pain and perplexedness involved21:50
janders+1 :)21:50
oneswighow about being an early adopter of OPA, in fairness?21:51
jandersLOL!21:51
jandersI suppose you can be among the first and the last ones, all at the same time21:52
b1airoHehe21:53
b1airoDid you guys see ARDC has released a call for proposals for new Research Cloud infra?21:53
oneswigWhat's going on there b1airo?21:54
b1airoOnly ~$5.5m worth though it seems21:54
b1airoProbably just aiming to replace existing capacity. Was $20m 6-7 years ago, so $5m probably buys at least the same number of vcores now21:55
oneswigARDC is New Nectar?21:56
b1airoYeah, Nectar-ANDS-RDS conglomerate21:57
*** raildo has quit IRC21:57
b1airoAlso, quickly on topic of public vs private/hybrid21:58
b1airoWas thinking of reaching out to some server vendors to see if they have some good base modelling on costs, anyone know of some?21:58
oneswigI think Cambridge Uni had some models for cost per core hour that might be relevant but I'm not sure how general (or public) they are21:59
oneswigI'll check21:59
oneswigOK, time to close22:00
b1airoSounds like I'll need to do some direct legwork22:00
oneswigThanks all22:00
oneswig#endmeeting22:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"22:00
openstackMeeting ended Tue Aug  6 22:00:29 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-08-06-21.00.html22:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-08-06-21.00.txt22:00
openstackLog:            http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-08-06-21.00.log.html22:00
b1airoCheers gang22:01
oneswigcheers b1airo, have a good one22:01
*** trandles has left #openstack-meeting22:03
*** oneswig has quit IRC22:09
*** jamesmcarthur has joined #openstack-meeting22:20
*** efried has left #openstack-meeting22:21
*** priteau has quit IRC22:21
*** ekcs has quit IRC22:23
*** armax has quit IRC22:42
*** joxyuki has quit IRC22:43
*** igordc has quit IRC22:56
*** yamamoto has joined #openstack-meeting23:00
*** markvoelker has joined #openstack-meeting23:13
*** janders has quit IRC23:14
*** markvoelker has quit IRC23:18
*** igordc has joined #openstack-meeting23:20
*** rcernin has joined #openstack-meeting23:26
*** armax has joined #openstack-meeting23:33
*** mriedem has quit IRC23:40
*** mattw4 has quit IRC23:42
*** jamesmcarthur has quit IRC23:42
*** mattw4 has joined #openstack-meeting23:42
*** jamesmcarthur has joined #openstack-meeting23:47
*** jamesmcarthur_ has joined #openstack-meeting23:54
*** jamesmcarthur has quit IRC23:54

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!