Tuesday, 2019-02-19

*** jamesmcarthur has quit IRC00:22
*** sdake has joined #openstack-meeting00:26
*** tetsuro has joined #openstack-meeting00:34
*** ijw has quit IRC00:35
*** ijw has joined #openstack-meeting00:35
*** jamesmcarthur has joined #openstack-meeting00:35
*** lbragstad has quit IRC00:39
*** jamesmcarthur has quit IRC00:40
*** slaweq has quit IRC00:41
*** erlon has joined #openstack-meeting00:45
*** ijw has quit IRC00:52
*** sdake has quit IRC00:58
*** markvoelker has joined #openstack-meeting01:10
*** sdake has joined #openstack-meeting01:16
*** sdake has quit IRC01:16
*** sdake has joined #openstack-meeting01:22
*** whoami-rajat has joined #openstack-meeting01:22
*** ijw has joined #openstack-meeting01:29
*** lbragstad has joined #openstack-meeting01:32
*** hongbin has joined #openstack-meeting01:33
*** erlon has quit IRC01:38
*** ijw has quit IRC01:41
*** ijw has joined #openstack-meeting01:41
*** sdake has quit IRC01:42
*** markvoelker has quit IRC01:44
*** sdake has joined #openstack-meeting01:45
*** jamesmcarthur has joined #openstack-meeting01:46
*** zaneb has joined #openstack-meeting01:50
*** sdake has quit IRC01:54
*** jamesmcarthur has quit IRC01:56
*** jamesmcarthur has joined #openstack-meeting01:58
*** sdake has joined #openstack-meeting02:00
*** armstrong has joined #openstack-meeting02:11
*** sdake has quit IRC02:22
*** _alastor_ has joined #openstack-meeting02:33
*** _alastor_ has quit IRC02:38
*** markvoelker has joined #openstack-meeting02:41
*** jamesmcarthur has quit IRC02:41
*** armstrong has quit IRC02:44
*** psachin has joined #openstack-meeting02:59
*** markvoelker has quit IRC03:13
*** apetrich has quit IRC03:14
*** ya has joined #openstack-meeting03:15
*** ya has quit IRC03:20
*** yaawang has joined #openstack-meeting03:22
*** dklyle has joined #openstack-meeting03:23
*** rfolco has quit IRC03:36
*** tpatil has joined #openstack-meeting03:43
*** armstrong has joined #openstack-meeting03:45
*** psachin has quit IRC03:50
*** armstrong has quit IRC03:51
*** psachin has joined #openstack-meeting04:00
*** samP has joined #openstack-meeting04:01
samPHi all for masakari04:01
tpatilHi04:01
samP#startmeeting masakari04:02
openstackMeeting started Tue Feb 19 04:02:11 2019 UTC and is due to finish in 60 minutes.  The chair is samP. Information about MeetBot at http://wiki.debian.org/MeetBot.04:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.04:02
*** openstack changes topic to " (Meeting topic: masakari)"04:02
openstackThe meeting name has been set to 'masakari'04:02
*** tashiromt has joined #openstack-meeting04:02
samPtpatil: hi04:02
samPsorry, a bit late04:02
samP#topic bugs and tickets04:02
*** openstack changes topic to "bugs and tickets (Meeting topic: masakari)"04:02
*** sw3 has joined #openstack-meeting04:02
samPAny critical bugs or patches to discuss?04:03
tpatil#link : https://launchpad.net/bugs/181565704:03
openstackLaunchpad bug 1815657 in masakari "python 3 failures - /usr/bin/masakari-wsgi and unit tests" [Undecided,In progress] - Assigned to Corey Bryant (corey.bryant)04:03
tpatilPatch is up for review: https://review.openstack.org/#/c/636749/04:03
tpatilI will reproduce this issue in 18.0404:03
tpatiland then review the patch04:03
samPtpatil: thanks.04:04
tpatilIn patch https://review.openstack.org/#/c/637486/ I have updated hacking version > 1.1.004:06
tpatilplease review this patch04:06
samPI will review this too. But I don't have 18.04 to reproduce it, so I will wait for your results04:06
samPtpatil: sure I will04:06
tpatilsamP: OK04:06
tpatilNeeds review : https://review.openstack.org/63571004:07
tpatilI have voted +2 on this patch04:08
*** andreaf has quit IRC04:08
samPtpatil: Thanks. lgtm, I will merge this.04:09
*** andreaf has joined #openstack-meeting04:09
tpatilsamP: Ok, Thanks04:09
*** belmoreira has quit IRC04:10
tpatil#link : https://review.openstack.org/#/c/636553/204:10
tpatilThis patch addresses LP bug : https://bugs.launchpad.net/masakari/+bug/181571804:10
openstackLaunchpad bug 1815718 in masakari "masakari should ignore notification when host status is "UNKNOWN"" [Undecided,In progress] - Assigned to Lucas hua (lucas.hua)04:10
tpatilif host status is UNKNOWN, then the notification should be ignored04:11
tpatilfor notification_event STOPPED04:12
tpatilIf host_status is "UNKNOWN", that means we don't know the power state of the host; evacuating the nova compute host could cause a VM split-brain. Just ignore the notification04:12
samPbefore host-monitor sends the notification, pacemaker (in the general case) has already killed the node, hasn't it?04:13
tpatilfrom the host-monitor point of view, maybe it's not reproducible. but if some service calls the notification API, this issue could happen04:14
*** janki has joined #openstack-meeting04:14
*** imsurit_ofc has joined #openstack-meeting04:15
samPThat's true. but the main point is, masakari is not responsible for fencing a node. the monitor or service which sends the notification should be responsible for fencing the node before sending the notification04:16
samPI am not against ignoring this notification, but I need to check the nova code that puts a host into an unknown state.04:19
tpatilsamP: Ok, Got it04:20
samPtpatil: I will put my comments on the patch04:20
tpatilsamP: OK04:21
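The guard being discussed could be sketched roughly like this (a hedged illustration with made-up names, not the code in the review under discussion):

```python
# Illustrative sketch (hypothetical names): drop a STOPPED notification
# when the reported host status is UNKNOWN, because the power state of
# the host cannot be trusted and evacuating could cause a VM split-brain.
HOST_STATUS_UNKNOWN = 'UNKNOWN'
EVENT_STOPPED = 'STOPPED'


def should_ignore_notification(notification_event, host_status):
    """Return True when the notification must be ignored."""
    return (notification_event == EVENT_STOPPED
            and host_status == HOST_STATUS_UNKNOWN)
```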
samPany other bugs or patches?04:21
tpatilNo04:21
samPthanks04:22
samP#topic Stein work items04:22
*** openstack changes topic to "Stein work items (Meeting topic: masakari)"04:22
samPplease share any updates on Stein work items04:22
tpatilEvent Notification : Specs https://review.openstack.org/#/c/473057/04:24
tpatilneeds review04:24
tpatilImplementation is ready : https://review.openstack.org/#/q/topic:bp/notifications-in-masakari+(status:open+OR+status:merged)04:24
samPtpatil: Thanks, I will review those. sorry for the delay04:25
tpatilsamP : NP04:25
tpatilAdd progress details for recovery workflow: Specs https://review.openstack.org/#/c/632079/04:26
tpatilNeeds review04:26
*** jamesmcarthur has joined #openstack-meeting04:26
tpatilImplementation is in progress04:26
tpatilPresently taskflow doesn't support fetching progress details in the order of workflow execution04:27
*** zaneb has quit IRC04:27
tpatilWe have reported this issue in taskflow04:27
samPtpatil: thanks04:27
ShilpaSDhere is the link https://bugs.launchpad.net/taskflow/+bug/181573804:27
openstackLaunchpad bug 1815738 in taskflow "Taskflow doesn't support to retrieve tasks progress details in the order of workflow task execution" [Undecided,New]04:27
tpatiluntil this issue is fixed, we will write extra logic to sort the progress details into the required order04:27
samPShilpaSD: thanks for the link04:28
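Until the taskflow bug is fixed, the extra sorting logic mentioned above might look roughly like this (the record fields here are assumptions for illustration, not masakari's actual schema):

```python
from operator import itemgetter


def order_progress_details(records):
    """Sort progress records by creation time to recover execution order."""
    # ISO-8601 timestamps in a uniform format sort lexicographically
    # in chronological order, so a plain string sort suffices.
    return sorted(records, key=itemgetter('created_at'))


# Hypothetical progress details as taskflow might hand them back,
# out of workflow-execution order.
details = [
    {'task': 'confirm_evacuation', 'created_at': '2019-02-19T04:03:00'},
    {'task': 'prepare_instances', 'created_at': '2019-02-19T04:01:00'},
    {'task': 'evacuate_instances', 'created_at': '2019-02-19T04:02:00'},
]
```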
tpatilAdd functional CI job and tests for segments: In progress04:29
tpatil#link : https://review.openstack.org/#/c/633663/04:29
tpatilwaiting for the author to address the review comments04:30
samPtpatil: thanks04:30
tpatilDevstack support to install and run masakari-monitors: Good progress04:31
*** jamesmcarthur has quit IRC04:31
tpatilPatch will be uploaded this week, except for host-monitor04:32
tpatilAs it has a dependency on pacemaker04:32
tpatilthe host-monitor fails to start if pacemaker is not installed and running04:32
samPtpatil: got it.04:32
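The pacemaker dependency described above could be guarded in a devstack plugin with a check along these lines (a sketch; the binary path and messages are assumptions, not the actual plugin code):

```shell
# Hypothetical guard: only start masakari-hostmonitor when the pacemaker
# daemon binary is present, since the monitor fails to start without a
# running pacemaker cluster stack.
maybe_start_hostmonitor() {
    local pacemakerd_path=$1
    if [ -x "$pacemakerd_path" ]; then
        echo "starting masakari-hostmonitor"
    else
        echo "pacemaker not installed: skipping masakari-hostmonitor"
    fi
}

maybe_start_hostmonitor /usr/sbin/pacemakerd
```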
*** radez has quit IRC04:33
samPI will try to review all the pending patches asap04:34
tpatilsamP: Ok, Thanks04:34
samPthanks for the updates.04:35
samP#topic AOB04:36
*** openstack changes topic to "AOB (Meeting topic: masakari)"04:36
samPAny other items to discuss?04:36
tpatilNothing from my side04:37
samPOK then, a bit early, but let's finish the meeting here.04:37
tpatilsamP: Ok04:38
samPPlease use IRC #openstack-masakari or ML for further discussion.04:38
samPThank you all.04:38
samP#endmeeting04:38
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"04:38
openstackMeeting ended Tue Feb 19 04:38:26 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)04:38
openstackMinutes:        http://eavesdrop.openstack.org/meetings/masakari/2019/masakari.2019-02-19-04.02.html04:38
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/masakari/2019/masakari.2019-02-19-04.02.txt04:38
openstackLog:            http://eavesdrop.openstack.org/meetings/masakari/2019/masakari.2019-02-19-04.02.log.html04:38
*** zaneb has joined #openstack-meeting04:42
*** vishakha has joined #openstack-meeting04:45
*** tpatil has quit IRC04:47
*** irclogbot_1 has quit IRC04:57
*** jamesmcarthur has joined #openstack-meeting04:58
*** jamesmcarthur has quit IRC05:03
*** bdperkin has quit IRC05:08
*** bdperkin has joined #openstack-meeting05:25
*** ijw has quit IRC05:27
*** ijw has joined #openstack-meeting05:29
*** ijw has quit IRC05:34
*** jamesmcarthur has joined #openstack-meeting05:38
*** hongbin has quit IRC05:41
*** markvoelker has joined #openstack-meeting05:41
*** jamesmcarthur has quit IRC05:43
*** ralonsoh has joined #openstack-meeting05:48
*** lbragstad has quit IRC05:53
*** sridharg has joined #openstack-meeting05:53
*** jamesmcarthur has joined #openstack-meeting06:10
*** markvoelker has quit IRC06:14
*** jamesmcarthur has quit IRC06:15
*** e0ne has joined #openstack-meeting06:25
*** e0ne has quit IRC06:26
*** apetrich has joined #openstack-meeting06:38
*** Luzi has joined #openstack-meeting06:48
*** zhubx has joined #openstack-meeting06:49
*** zhubx has quit IRC06:51
*** tashiromt has quit IRC06:56
*** kopecmartin|off is now known as kopecmartin07:02
*** slaweq has joined #openstack-meeting07:10
*** markvoelker has joined #openstack-meeting07:11
*** jamesmcarthur has joined #openstack-meeting07:11
*** jamesmcarthur has quit IRC07:15
*** belmoreira has joined #openstack-meeting07:16
*** yaawang has quit IRC07:19
*** e0ne has joined #openstack-meeting07:23
*** rcernin has quit IRC07:25
*** yaawang has joined #openstack-meeting07:26
*** aojea has joined #openstack-meeting07:29
*** takamatsu has joined #openstack-meeting07:40
*** ociuhandu has joined #openstack-meeting07:40
*** cloudrancher has quit IRC07:43
*** cloudrancher has joined #openstack-meeting07:43
*** markvoelker has quit IRC07:43
*** ociuhandu has quit IRC07:47
*** cloudrancher has quit IRC07:47
*** jamesmcarthur has joined #openstack-meeting07:48
*** hadai has joined #openstack-meeting07:50
*** jamesmcarthur has quit IRC07:53
*** nnsingh has joined #openstack-meeting07:56
*** shubham_potale_ has joined #openstack-meeting07:56
*** pcaruana has joined #openstack-meeting08:11
*** e0ne has quit IRC08:13
*** sdake has joined #openstack-meeting08:27
*** cloudrancher has joined #openstack-meeting08:30
*** tssurya has joined #openstack-meeting08:31
*** liuyulong_ has joined #openstack-meeting08:33
*** _alastor_ has joined #openstack-meeting08:36
*** sdake has quit IRC08:37
*** electrofelix has joined #openstack-meeting08:38
*** _alastor_ has quit IRC08:40
*** markvoelker has joined #openstack-meeting08:40
*** sdake has joined #openstack-meeting08:45
*** jamesmcarthur has joined #openstack-meeting08:50
*** liuyulong_ has quit IRC08:54
*** jamesmcarthur has quit IRC08:55
*** ttsiouts has joined #openstack-meeting08:57
*** priteau has joined #openstack-meeting08:59
*** ociuhandu has joined #openstack-meeting09:02
*** hadai has quit IRC09:04
*** ttsiouts has quit IRC09:08
*** imsurit_ofc has quit IRC09:08
*** ttsiouts has joined #openstack-meeting09:08
*** imsurit_ofc has joined #openstack-meeting09:09
*** ttsiouts has quit IRC09:13
*** markvoelker has quit IRC09:14
*** ociuhandu has quit IRC09:23
*** yamamoto has joined #openstack-meeting09:23
*** sdake has quit IRC09:25
*** ttsiouts has joined #openstack-meeting09:25
*** sdake has joined #openstack-meeting09:28
*** ociuhandu has joined #openstack-meeting09:31
*** slaweq has quit IRC09:31
*** slaweq has joined #openstack-meeting09:32
*** ociuhandu has quit IRC09:35
*** ttsiouts has quit IRC09:38
*** ttsiouts has joined #openstack-meeting09:38
*** tetsuro has quit IRC09:41
*** sdake has quit IRC09:49
*** jamesmcarthur has joined #openstack-meeting09:51
*** e0ne has joined #openstack-meeting09:53
*** jamesmcarthur has quit IRC09:55
*** ociuhandu has joined #openstack-meeting09:56
*** ttsiouts has quit IRC09:56
*** ociuhandu has quit IRC09:57
*** ociuhandu has joined #openstack-meeting09:58
*** ttsiouts has joined #openstack-meeting09:58
*** markvoelker has joined #openstack-meeting10:11
*** jamesmcarthur has joined #openstack-meeting10:27
*** jamesmcarthur has quit IRC10:32
*** ttsiouts has quit IRC10:40
*** priteau has quit IRC10:41
*** ttsiouts has joined #openstack-meeting10:41
*** ttsiouts_ has joined #openstack-meeting10:42
*** cloudrancher has quit IRC10:43
*** markvoelker has quit IRC10:43
*** cloudrancher has joined #openstack-meeting10:43
*** cloudrancher has quit IRC10:44
*** ttsiouts has quit IRC10:45
*** cloudrancher has joined #openstack-meeting10:46
*** lpetrut has joined #openstack-meeting10:46
*** carlos_silva has joined #openstack-meeting10:50
*** cloudrancher has quit IRC10:54
*** yamamoto has quit IRC11:00
*** yamamoto has joined #openstack-meeting11:06
*** yamamoto has quit IRC11:11
*** yamamoto has joined #openstack-meeting11:11
*** JangwonLee_ has joined #openstack-meeting11:19
*** JangwonLee__ has joined #openstack-meeting11:20
*** cloudrancher has joined #openstack-meeting11:21
*** JangwonLee has quit IRC11:23
*** JangwonLee_ has quit IRC11:24
*** jawad_axd has joined #openstack-meeting11:28
*** jamesmcarthur has joined #openstack-meeting11:29
*** cloudrancher has quit IRC11:32
*** cloudrancher has joined #openstack-meeting11:33
*** jamesmcarthur has quit IRC11:33
*** markvoelker has joined #openstack-meeting11:40
*** thgcorrea has joined #openstack-meeting11:41
*** imsurit_ofc has quit IRC11:46
*** raildo has joined #openstack-meeting11:49
*** jawad_axd has quit IRC11:50
*** jawad_axd has joined #openstack-meeting11:55
*** ttsiouts_ has quit IRC11:57
*** jawad_axd has quit IRC12:03
*** enriquetaso has joined #openstack-meeting12:07
*** erlon has joined #openstack-meeting12:14
*** markvoelker has quit IRC12:14
*** erlon has quit IRC12:14
*** erlon has joined #openstack-meeting12:15
*** erlon has quit IRC12:16
*** erlon has joined #openstack-meeting12:16
*** cloudrancher has quit IRC12:22
*** rfolco has joined #openstack-meeting12:33
*** cloudrancher has joined #openstack-meeting12:39
*** janki has quit IRC12:41
*** janki has joined #openstack-meeting12:44
*** janki has quit IRC12:52
*** janki has joined #openstack-meeting12:53
*** janki has quit IRC12:54
*** janki has joined #openstack-meeting12:54
*** JangwonLee_ has joined #openstack-meeting12:58
*** JangwonLee__ has quit IRC13:01
*** jamesmcarthur has joined #openstack-meeting13:06
*** markvoelker has joined #openstack-meeting13:11
*** jamesmcarthur has quit IRC13:11
*** ttsiouts has joined #openstack-meeting13:11
*** cloudrancher has quit IRC13:11
*** jamesmcarthur has joined #openstack-meeting13:12
*** ttsiouts has quit IRC13:18
*** ttsiouts has joined #openstack-meeting13:19
*** sdake has joined #openstack-meeting13:23
*** mjturek has joined #openstack-meeting13:31
*** vishakha has quit IRC13:31
*** jamesmcarthur has quit IRC13:32
*** mjturek has quit IRC13:32
*** mjturek has joined #openstack-meeting13:37
*** markvoelker has quit IRC13:43
*** jamesmcarthur has joined #openstack-meeting13:49
*** janki has quit IRC13:54
*** eharney has joined #openstack-meeting13:55
*** jamesmcarthur has quit IRC13:56
*** ttsiouts has quit IRC14:04
*** ttsiouts has joined #openstack-meeting14:04
*** sdake has quit IRC14:05
*** rfolco has quit IRC14:07
*** rfolco has joined #openstack-meeting14:08
*** liuyulong has quit IRC14:08
*** ttsiouts has quit IRC14:09
*** ttsiouts has joined #openstack-meeting14:09
*** yamamoto has quit IRC14:10
*** sdake has joined #openstack-meeting14:14
*** sdake has quit IRC14:19
*** yamamoto has joined #openstack-meeting14:24
*** mriedem has joined #openstack-meeting14:25
*** yamamoto has quit IRC14:26
*** yamamoto has joined #openstack-meeting14:27
*** JangwonLee_ has quit IRC14:30
*** lbragstad has joined #openstack-meeting14:34
*** radez has joined #openstack-meeting14:35
*** aagate has joined #openstack-meeting14:37
*** sdake has joined #openstack-meeting14:38
*** _alastor_ has joined #openstack-meeting14:38
*** mriedem has left #openstack-meeting14:40
*** mriedem has joined #openstack-meeting14:40
*** markvoelker has joined #openstack-meeting14:41
*** _alastor_ has quit IRC14:43
*** munimeha1 has joined #openstack-meeting14:45
*** sdake has quit IRC14:53
*** sdake has joined #openstack-meeting15:00
*** whoami-rajat has quit IRC15:01
*** awaugama has joined #openstack-meeting15:03
*** vishakha has joined #openstack-meeting15:04
*** markvoelker has quit IRC15:14
*** whoami-rajat has joined #openstack-meeting15:18
*** Luzi has quit IRC15:19
*** sdake has quit IRC15:28
*** cloudrancher has joined #openstack-meeting15:33
*** sdake has joined #openstack-meeting15:35
*** KurtB has joined #openstack-meeting15:45
*** lpetrut has quit IRC15:45
*** njohnston_ has joined #openstack-meeting15:51
*** sdake has quit IRC15:52
*** sdake has joined #openstack-meeting15:59
*** e0ne has quit IRC16:00
*** mlavalle has joined #openstack-meeting16:00
*** _alastor_ has joined #openstack-meeting16:00
slaweq#startmeeting neutron_ci16:00
openstackMeeting started Tue Feb 19 16:00:31 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
slaweqhi16:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
*** openstack changes topic to " (Meeting topic: neutron_ci)"16:00
openstackThe meeting name has been set to 'neutron_ci'16:00
mlavalleo/16:00
njohnston_o/16:00
*** ianychoi has quit IRC16:00
slaweqlets wait few minutes for others16:01
*** jamesmcarthur has joined #openstack-meeting16:01
slaweqmaybe bcafarel and hongbin will join too16:01
slaweqI know that haleyb is on PTO today16:01
bcafarelo/16:01
bcafarelthanks for the ping slaweq, I was writing some doc (easy to forget the clock then)16:02
slaweqbcafarel: :)16:02
slaweqok, let's start then16:03
slaweq#topic Actions from previous meetings16:03
*** sdake has quit IRC16:03
*** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)"16:03
*** e0ne has joined #openstack-meeting16:03
slaweqmlavalle to check if adding KillFilter for neutron-keepalived-state-change will solve issues with L3 agent in dvr jobs16:03
mlavalleI proposed patch https://review.openstack.org/#/c/636710/16:03
*** hongbin has joined #openstack-meeting16:03
hongbino/16:03
slaweqhi hongbin :)16:03
mlavalleand actually dougwig made an interesting comment16:03
mlavallewhich I intend to explore16:04
njohnston_yeah, that is interesting16:04
slaweqbut the patch basically looks like it is helping to solve some of the failing tests in the multinode job, right?16:05
mlavalleI agree that we should keep the door as narrowly open as possible16:05
bcafarel+116:05
slaweq++16:05
mlavalleslaweq: I haven't had time to check on that since yesterday. Did you look?16:06
slaweqI looked at test's results right now: http://logs.openstack.org/10/636710/2/check/neutron-tempest-plugin-dvr-multinode-scenario/f44d655/testr_results.html.gz16:06
slaweqit looks that "only" 2 tests failed which is much better than it was16:07
mlavalleoh yeah, in fact the trunk lifecycle test is passing16:07
mlavallewhich I haven't seen in a long time16:07
mlavalleso it looks we are moving in the right direction16:08
slaweqalso from a quick look at the l3-agent's log, it looks much better and IMO the agent was working properly the whole time16:08
bcafarelso there really was a rootwrap filter missing all along?16:08
mlavalleit seems so16:08
slaweqbcafarel: for some time at least16:08
mlavalleit was only for some months16:08
slaweqbcafarel: I removed them when I removed old metadata proxy code16:08
bcafarelslaweq: ok, missing for some months makes more sense :)16:09
slaweq:)16:09
mlavallebcafarel: when I started suspecting the filters were the cause, I had the exact same question in my mind16:09
mlavallehow come this worked before?16:09
mlavallebut we really introduced the bug recently16:10
mlavalleok, so I'll play with some more rechecks of the patch16:10
slaweqI think that this bug was exposed by switch to python316:10
slaweqon python2.7 it is "working" somehow16:11
mlavalleand I'll explore dougwig's suggestion, which is very sensible16:11
slaweqat least agent is not dying when this issue occurs16:11
mlavalleyeah, I think that in python2.7 we still miss the filter16:11
*** cloudrancher has quit IRC16:11
*** markvoelker has joined #openstack-meeting16:11
mlavallebut for some reason we don't kill the agent16:11
*** ianychoi has joined #openstack-meeting16:12
mlavalleand therefore we don't have a chain reaction in the tests16:12
slaweqyep16:12
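For context, the kind of rootwrap entry being discussed looks roughly like this (a hedged illustration of oslo.rootwrap KillFilter syntax — filter name, run-as user, target program, allowed signals — not necessarily the exact lines in the patch under review):

```ini
# rootwrap.d/l3.filters (illustrative fragment)
[Filters]
# Allow the L3 agent to send SIGTERM/SIGKILL to the
# neutron-keepalived-state-change monitor under either interpreter.
kill_keepalived_monitor_py: KillFilter, root, python, -15, -9
kill_keepalived_monitor_py3: KillFilter, root, python3, -15, -9
```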
slaweqok, moving on to the next one then16:13
slaweqbcafarel to continue work on grafana jobs switch to python 316:13
bcafarels/grafana/grenade/16:13
slaweqbcafarel: right :)16:13
bcafarelI commented on #openstack-neutron yesterday on it, I think it will be better to have grenade job definition in grenade repo once it is full zuul v3 (which will allow devstack_localrc vars to work)16:14
bcafarelso I updated https://review.openstack.org/#/c/636356/ to simply switch to py3 in the meantime16:14
bcafareljust got zuul +1 :)16:15
slaweqand now it looks like it is indeed running on python 3 :)16:15
slaweqfor example here: http://logs.openstack.org/56/636356/7/check/neutron-grenade-multinode/0ea452a/logs/screen-q-l3.txt.gz#_Feb_19_14_14_16_09279016:15
bcafarelyes, wrong variable in my first try sorry, now it looks good16:16
slaweq+2 it already16:16
slaweqIMO it will be good if we just have single node grenade on python 2.716:16
slaweqand others on python316:17
njohnston_and the nature of the grenade job is that it upgrades from an old version running py27 to a new version running py316:17
njohnston_so regardless of what code is running the grenade harness we get some testing on both versions of neutron code16:18
slaweqnjohnston_: I think that in those jobs it runs on python3 for both old and new16:18
slaweqsee e.g. here: http://logs.openstack.org/56/636356/7/check/neutron-grenade-multinode/0ea452a/logs/old/local_conf.txt.gz16:18
slaweqif it would be possible to do upgrade from py27 to py3 that would be great IMO16:19
slaweqbut maybe that should be discussed with QA team?16:19
njohnston_I checked the grenade output, for example16:20
slaweqnjohnston_: in grenade-py3 job (defined in grenade repo) it is also like that, all on py3: http://logs.openstack.org/56/636356/7/check/grenade-py3/b075864/logs/old/local_conf.txt.gz16:20
*** cloudrancher has joined #openstack-meeting16:20
njohnston_old: http://logs.openstack.org/56/636356/7/check/grenade-py3/b075864/logs/grenade.sh.txt.gz#_2019-02-19_13_45_48_138 "New python executable in /opt/stack/old/requirements/.venv/bin/python2"16:20
njohnston_new: http://logs.openstack.org/56/636356/7/check/grenade-py3/b075864/logs/grenade.sh.txt.gz#_2019-02-19_14_19_45_871 "New python executable in /opt/stack/new/requirements/.venv/bin/python3.5"16:20
hongbinif the upgrade from py3 to py3 succeeds, is there any scenario where an upgrade from py2 to py3 breaks?16:21
slaweqnjohnston_: but is devstack deployed in a venv in the grenade job? I really don't think so16:22
slaweqbut I'm not grenade expert so may be wrong here :)16:22
slaweqhongbin: we don't have any other jobs which test upgrades16:22
njohnston_slaweq: I can ask the QA team to be sure16:23
slaweqnjohnston_: ok, thx16:24
hongbinok, for me, i think testing the py3 to py3 upgrade is enough16:24
slaweqnjohnston_: but still, the patch from bcafarel is "consistent" with this single-node grenade job so IMO we can go forward with it :)16:24
njohnston_slaweq: Absolutely, no disagreement there16:24
slaweqnjohnston_: great :)16:25
slaweqok, can we move on?16:25
njohnston_yep16:27
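For reference, the py3 switch being discussed boils down to a devstack setting like the following (USE_PYTHON3 is the standard devstack knob; whether a given job sets it via local.conf or via zuul's devstack_localrc depends on the job definition):

```ini
[[local|localrc]]
# Run OpenStack services, including neutron, under python 3.
USE_PYTHON3=True
```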
slaweqok, next one then16:27
slaweqslaweq to propose patch with new decorator skip_if_timeout in functional tests16:27
slaweqI did patch https://review.openstack.org/#/c/636892/16:27
slaweqplease review it if You can16:27
njohnston_looks like functional tests failed on the last run16:28
njohnston_I guess that would be the run still running16:29
slaweqnjohnston_: where?16:29
bcafarelhttp://logs.openstack.org/92/636892/1/gate/neutron-functional/85da30c/logs/testr_results.html.gz ?16:29
slaweqahh, in the gate16:30
slaweqok, so maybe I will need to investigate it more :)16:30
slaweqthx16:30
njohnston_http://logs.openstack.org/92/636892/1/gate/neutron-functional/85da30c/job-output.txt.gz#_2019-02-19_15_32_02_80110816:30
*** yamamoto has quit IRC16:31
slaweqso it looks like it may not work properly, I will investigate that tomorrow morning then16:31
slaweq#action slaweq to fix patch with skip_if_timeout decorator16:31
bcafarelyou did get your test failure in the end at least :)16:31
slaweqLOL, indeed16:31
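Conceptually, a decorator like the one in that patch is along these lines (a self-contained sketch with stand-in exception classes and a made-up bug number, not the actual neutron code):

```python
import functools


class TestTimeoutException(Exception):
    """Stand-in for the timeout the functional test harness raises."""


class SkipTest(Exception):
    """Stand-in for the skip exception a test runner understands."""


def skip_if_timeout(bug):
    """Skip a known-flaky test, pointing at the tracking bug, on timeout."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except TestTimeoutException:
                # Convert the timeout into a skip instead of a failure.
                raise SkipTest("Timeout in test, see bug %s" % bug)
        return wrapper
    return decorator
```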
slaweqok, that's all for actions from last week16:32
slaweqnext topic is:16:32
slaweq#topic Python 316:32
*** openstack changes topic to "Python 3 (Meeting topic: neutron_ci)"16:32
slaweqas we already talked today, we have (now in the gate even) patch for grenade multinode jobs to switch to py316:33
slaweqthx bcafarel :)16:33
slaweqthere is also patch https://review.openstack.org/633979 for neutron-tempest-dvr-ha-multinode-full job16:33
*** yamamoto has joined #openstack-meeting16:33
*** yamamoto has quit IRC16:33
slaweqbut this one is still failing with some tests16:33
*** yamamoto has joined #openstack-meeting16:33
*** yamamoto has quit IRC16:33
slaweqI'm not sure if that is an issue with the tests or maybe a similar problem to what we have in the neutron-tempest-plugin-dvr-multinode-scenario job16:34
slaweqso I will probably split it into 2 pieces: the migration to zuulv3 and then a second patch with the switch to py316:34
slaweqdo You agree?16:35
mlavalleyes16:35
bcafarelsounds good yes16:35
slaweqthanks :)16:35
njohnston_+116:35
slaweq#action slaweq to split patch https://review.openstack.org/633979 into two: zuulv3 and py3 parts16:35
slaweqand I think that this will be all for switch to py316:36
slaweqwe will still have some experimental jobs to switch but that can be done slowly later I think16:36
*** jaypipes has quit IRC16:36
*** jaypipes has joined #openstack-meeting16:36
bcafarelyeah it would be a bad sign if important tests needed for proper python3 support were hidden in experimental jobs16:37
bcafarelso we can do these "leisurely"16:37
mlavalleI like leisurely16:37
slaweqLOL16:38
slaweqme too16:38
slaweqok, any other questions/something to add about python3?16:38
slaweqor can we move on?16:38
mlavallenot from me16:38
njohnston_I need to send an email to to openstack-discuss to ask the stadium projects about their py3 status16:39
bcafarelall good here16:39
* mlavalle will have to drop off at 45 minutes after the hour16:39
slaweqnjohnston_++16:39
slaweq#action njohnston to ask stadium projects about python3 status16:39
njohnston_I can help with some, but I wouldn't want to butt in to projects like midonet that I know little about16:39
slaweqnjohnston_: yes, same for me16:40
njohnston_I believe I sent an email a while back and got no requests for help, but let's see if anyone is so motivated now16:40
slaweqthx njohnston_16:40
slaweqok, lets move on quickly16:40
slaweq#topic Grafana16:40
*** openstack changes topic to "Grafana (Meeting topic: neutron_ci)"16:40
slaweq#link http://grafana.openstack.org/dashboard/db/neutron-failure-rate16:40
slaweqI was looking at grafana today and TBH I don't see anything very bad on it in the last few days16:42
slaweqbut maybe You noticed something that You want to discuss16:42
slaweqeven all tempest jobs are in quite good shape this week16:43
mlavallelooks pretty good to me16:43
*** markvoelker has quit IRC16:44
mlavallethe only thing I see is functional py27 in the gate16:44
slaweqmlavalle: yes, functional tests are not very good currently16:44
slaweqand I have the culprits for it16:44
slaweq#topic fullstack/functional16:44
*** openstack changes topic to "fullstack/functional (Meeting topic: neutron_ci)"16:44
*** cloudrancher has quit IRC16:44
slaweqwe recently noticed at least 3 bugs in functional tests:16:45
slaweq- https://bugs.launchpad.net/neutron/+bug/1816239 - patch proposed https://review.openstack.org/#/c/637544/16:45
openstackLaunchpad bug 1816239 in neutron "Functional test test_router_processing_pool_size failing" [High,In progress] - Assigned to Brian Haley (brian-haley)16:45
slaweq- https://bugs.launchpad.net/neutron/+bug/1815585 - if there is no one else to look at this, I will try to debug it16:45
openstackLaunchpad bug 1815585 in neutron "Floating IP status failed to transition to DOWN in neutron-tempest-plugin-scenario-linuxbridge" [High,Confirmed]16:45
slaweq- https://bugs.launchpad.net/neutron/+bug/1816489 - same here, we need volunteer for this one16:45
openstackLaunchpad bug 1816489 in neutron "Functional test neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase. test_ha_router_lifecycle failing" [High,Confirmed]16:45
slaweqthe first one is hit the most often, I think, and it already has a fix proposed16:45
slaweqso we should be in better shape after it is merged16:46
slaweqthe other 2 need volunteers to debug :)16:46
mlavalleoh, oh, do we depend on haleyb for this? we are in trouble  :-)16:46
slaweqI can take a look at https://bugs.launchpad.net/neutron/+bug/1815585 but someone else could look at the last one maybe :)16:46
openstackLaunchpad bug 1815585 in neutron "Floating IP status failed to transition to DOWN in neutron-tempest-plugin-scenario-linuxbridge" [High,Confirmed]16:46
mlavalleok, I'll try to look at the last one16:47
slaweqmlavalle: no, fortunately the fix was done by liuyulong :)16:47
slaweqthx mlavalle16:47
mlavalleLOL16:47
slaweq#action mlavalle to check bug https://bugs.launchpad.net/neutron/+bug/181648916:47
openstackLaunchpad bug 1816489 in neutron "Functional test neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase. test_ha_router_lifecycle failing" [High,Confirmed]16:47
*** macza has joined #openstack-meeting16:47
slaweq#action slaweq to check bug https://bugs.launchpad.net/neutron/+bug/181558516:47
mlavalleok guys I got to leave16:47
slaweqok, thx mlavalle16:47
slaweqsee You later16:47
mlavalleo/16:47
slaweqbasically that is all I have for today16:48
bcafarelo/ mlavalle16:48
slaweqother jobs are in pretty good shape16:48
slaweqso do You have something else You want to talk about today?16:48
bcafarelnothing from me16:49
slaweqok, so lets have 10 minutes back today :)16:49
slaweqthx for attending and see You all around16:49
slaweqo/16:49
bcafarel\o/16:49
slaweq#endmeeting16:49
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"16:49
openstackMeeting ended Tue Feb 19 16:49:45 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:49
openstackMinutes:        http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-02-19-16.00.html16:49
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-02-19-16.00.txt16:49
openstackLog:            http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-02-19-16.00.log.html16:49
mlavalleslaweq: ahh, didn't miss much16:50
slaweqmlavalle: LOL16:50
slaweqmlavalle: indeed16:50
bcafarelmlavalle: just to make you feel bad, we waited until you left16:50
slaweq:)16:50
mlavalleslaweq: yeah, you made me feel guilty... LOL16:51
slaweqmlavalle: I didn't want that, really ;P16:51
*** hongbin has quit IRC16:54
*** ttsiouts has quit IRC16:56
*** ttsiouts has joined #openstack-meeting16:56
*** mattw4 has joined #openstack-meeting16:57
*** pcaruana has quit IRC16:57
*** mattw4 has quit IRC16:59
*** tssurya has quit IRC17:01
*** ttsiouts has quit IRC17:01
*** raildo_ has joined #openstack-meeting17:02
*** raildo has quit IRC17:03
*** raildo has joined #openstack-meeting17:06
*** raildo_ has quit IRC17:07
*** raildo has quit IRC17:08
*** raildo has joined #openstack-meeting17:09
*** yamamoto has joined #openstack-meeting17:10
*** raildo has quit IRC17:14
*** bbowen has joined #openstack-meeting17:15
*** aojea has quit IRC17:16
*** yamamoto has quit IRC17:16
*** artom has quit IRC17:24
*** raildo has joined #openstack-meeting17:27
*** lpetrut has joined #openstack-meeting17:27
*** Vadmacs has joined #openstack-meeting17:32
*** raildo has quit IRC17:33
*** ociuhandu_ has joined #openstack-meeting17:33
*** e0ne has quit IRC17:35
*** ociuhandu has quit IRC17:36
*** eharney has quit IRC17:37
*** ociuhandu_ has quit IRC17:38
*** psachin has quit IRC17:39
*** lpetrut has quit IRC17:40
*** markvoelker has joined #openstack-meeting17:41
*** raildo has joined #openstack-meeting17:48
*** ociuhandu has joined #openstack-meeting17:48
*** ociuhandu has quit IRC17:52
*** bbowen has quit IRC17:53
*** kopecmartin is now known as kopecmartin|off17:53
*** raildo_ has joined #openstack-meeting18:02
*** ociuhandu has joined #openstack-meeting18:04
*** raildo has quit IRC18:05
*** ociuhandu has quit IRC18:08
*** ijw has joined #openstack-meeting18:08
*** raildo has joined #openstack-meeting18:11
*** raildo_ has quit IRC18:12
*** raildo has quit IRC18:13
*** raildo_ has joined #openstack-meeting18:14
*** markvoelker has quit IRC18:14
*** jamesmcarthur has quit IRC18:17
*** jamesmcarthur has joined #openstack-meeting18:18
*** raildo has joined #openstack-meeting18:20
*** raildo_ has quit IRC18:21
*** tssurya has joined #openstack-meeting18:28
*** mattw4 has joined #openstack-meeting18:29
*** raildo_ has joined #openstack-meeting18:30
*** raildo has quit IRC18:30
*** ociuhandu has joined #openstack-meeting18:30
*** artom has joined #openstack-meeting18:32
*** artom has quit IRC18:33
*** artom has joined #openstack-meeting18:33
*** hongbin has joined #openstack-meeting18:34
*** hongbin has quit IRC18:35
*** hongbin has joined #openstack-meeting18:35
*** lpetrut has joined #openstack-meeting18:39
*** e0ne has joined #openstack-meeting18:41
*** e0ne has quit IRC18:42
*** jamesmcarthur has quit IRC18:53
*** e0ne has joined #openstack-meeting18:54
fungialoha, infrateers19:00
clarkbhello there19:00
clarkbwe'll get started momentarily19:00
clarkb#startmeeting infra19:01
openstackMeeting started Tue Feb 19 19:01:08 2019 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.19:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:01
*** openstack changes topic to " (Meeting topic: infra)"19:01
openstackThe meeting name has been set to 'infra'19:01
clarkb#link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting19:01
gary_perkinso/ Hi!19:01
ianwo/19:01
*** Shrews has joined #openstack-meeting19:01
clarkbI seem to remember scribbling a note to link to the mailing list threads when I send those out instead /me scribbles note bigger to do that19:02
clarkb#topic Announcements19:02
*** openstack changes topic to "Announcements (Meeting topic: infra)"19:02
*** prometheanfire has joined #openstack-meeting19:02
clarkbI have nothing to announce. Any announcements others would like to call out?19:02
funginothing springs to mind19:03
*** electrofelix has quit IRC19:03
clarkbI guess I should note the openstack TC election nomination period is now and ends shortly19:04
clarkbso get your self nomination pushed if that is something you would like to do19:04
fungi~4.75 hours remaining19:04
clarkbThen be on the lookout for ballots in the near future19:04
clarkb#topic Actions from last meeting19:05
*** openstack changes topic to "Actions from last meeting (Meeting topic: infra)"19:05
clarkb#link http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-02-12-19.01.txt minutes from last meeting19:05
clarkbWe didn't call this out as an action in the meeting notes but ianw took the action of updating the letsencrypt spec to consider dns based acme validation. This has been done and is the subject of our next topic19:05
clarkb#topic Specs approval19:06
*** openstack changes topic to "Specs approval (Meeting topic: infra)"19:06
clarkb#link https://review.openstack.org/#/c/587283/ LetsEncrypt Spec now updated with DNS ACME19:06
clarkbI don't think this is ready for approval based on the good conversations this has started19:06
clarkbIt does seem we are making good progress through the conversation though. So please take a look if this interests you at all19:06
clarkband thank you ianw for getting that put together with some poc ansible work to show how it might actually run in production19:07
fungiit seems to be zeroing in on a more concrete design19:07
ianwyes i've updated from comments yesterday, a small changelog is in the notes for ps1419:07
ianwi also updated the POC to provide self-signed certs in a testing environment19:08
ianw(one thought for future development is that we could run our own LE, containerised)19:08
ianwfuture use in CI I mean19:09
clarkbianw: as far as next steps go it seems like we are nearing where we need to start making some decisions and moving forward? do we think we might be able to put it up for approval next week? if we can get through another round of reviews and post a new ps with some decisions?19:09
ianwand responded to corvus' thoughts on breaking up base.yaml in https://review.openstack.org/63764519:09
clarkbcorvus: fungi ^ you've reviewed it recently too so your thoughts on how quickly we might be able to come to consensus is appreciated19:10
ianwclarkb: yes, i think so; i'll commit to responding to any review comments quickly so we can get things going19:10
corvusi think i'm generally in favor of what's written.19:10
*** markvoelker has joined #openstack-meeting19:11
corvusi would love one more update that said "we aren't going to run anything that touches proprietary rax dns"19:11
fungii haven't looked at the diff from the most recent patchset yet19:11
fungiskimming now19:11
clarkbas the purchasers of openstack.org certs for our services I'm ok with corvus' request and doing one last bulk purchase round19:12
clarkbif that helps swing any opinions19:12
corvusbut i'm at least happy enough that we're all on the same page that it's at least not something we'll use for any new services and will only shrink, that i can go along with what's written19:12
clarkbalright anything else on this before we move on? (thanks again to everyone that helped push this forward, ianw in particular)19:13
ianwmy feeling is that it really is quite a small change, and we can have certificates for anything under openstack.org which is something that we've talked about for a long time19:13
fungihaving previously obtained the certs for openstack.org for a number of years too, i agree that generating them all at once for a renewal is less painful than adding new ones for services we add over time (and new services should be going into opendev.org anyway)19:13
fungialso, moving our existing http-only services to opendev.org before switching them to https shouldn't pose significant burden19:14
clarkbseems like we should be able to sort that out in review. Please do review the spec. But I think we need to move to our next topic(s)19:15
*** ijw has quit IRC19:15
ianw++19:15
clarkb#topic Priority Efforts19:15
*** openstack changes topic to "Priority Efforts (Meeting topic: infra)"19:15
clarkb#topic Storyboard19:15
*** openstack changes topic to "Storyboard (Meeting topic: infra)"19:16
clarkbfungi you have good news for us on this topic iirc19:16
clarkbStoryboard prod and dev servers are now both running on xenial with local mysql DBs right?19:16
clarkblast remaining cleanups are to configure backups?19:16
fungioh, well we replaced the trusty server with a xenial deployment in production (and upped the flavor to help with performance), plus moved the database out of trove and onto the server itself19:17
mordredyay xenial!19:17
fungi#link https://review.openstack.org/637388 Clean up after StoryBoard replacement maintenance19:18
SotK\o/19:18
prometheanfiregood for another 2 years iirc (EOL)?19:18
fungithat's the cleanup change19:18
fungiand yeah, i still need to init the remote backups19:18
clarkbprometheanfire: yup19:18
*** lpetrut has quit IRC19:18
fungialso clarkb spotted that karbor has asked to move to storyboard19:19
fungi#link https://review.openstack.org/636574 Update karbor to use storyboard19:19
fungiand cinder is adding a new repo to sb to test it out too19:19
fungi#link https://review.openstack.org/637613 Add cinderlib project19:20
clarkbcinderlib is a library to consume cinder drivers without an api service19:20
fungiyeah, i get the impression that'll be handy for some container orchestration system use cases19:20
smcginnisoVirt is also adopting it.19:21
fungineat!19:21
clarkbanything else to be aware of or help with in storyboard land?19:21
SotKIf anyone has spare time to look at the attachments patches it'd be much appreciated :)19:22
*** vishakha has quit IRC19:22
clarkb#link https://review.openstack.org/#/q/status:open+topic:story-attachments storyboard file attachment changes19:22
funginothing else on my side19:23
diablo_rojo_phonMine either19:23
clarkb#topic Configuration Management Updates19:23
*** openstack changes topic to "Configuration Management Updates (Meeting topic: infra)"19:23
*** jamesmcarthur has joined #openstack-meeting19:24
clarkbNow that I've got pbx off my plate I intend to pick up the puppet 4 upgrades and futureparser changes again.19:24
clarkbAlso worth noting (because some had missed it) that we are now running a service with ansible + docker and no puppet. This service is the insecure image registry that will be used for zuul jobs that build docker images19:24
fungii had forgotten after reviewing many (most?) of those changes even19:25
clarkbI think work here has slowed because it's starting to become our new normal. Though still more work to be done with puppet upgrades and other service flips to docker.19:26
clarkbAre there any other items we need to call out on this topic?19:26
corvusi'm planning on having zuul-preview deployed that way too19:27
corvusi think it's easy and fast :)19:27
clarkbya once the underlying tooling is in place it seems to be pretty straightforward19:27
fungiand sounds like the plan for now is to also do gitea that way?19:28
clarkbthe big bootstrapping need was the image building wheel and we have that now19:28
clarkbfungi: thats our next topic :)19:28
* fungi returns to the present in that case19:28
clarkbSo maybe time to move on to that19:28
clarkb#topic OpenDev19:28
*** openstack changes topic to "OpenDev (Meeting topic: infra)"19:28
clarkbLast we met at this venue corvus expected we'd have the gitea cluster in a production like state ready for us to really start interacting with it19:29
corvusreality happened19:29
clarkbThen we did a couple gitea upgrades and discovered that gitea had never intended for shared everything gitea clusters to "work" and they fixed it19:29
*** sridharg has quit IRC19:29
clarkblong story short you kind of need to run a single gitea process today the way it is built19:29
corvus#link gitea bug for shared indexes https://github.com/go-gitea/gitea/issues/579819:30
corvusto be fair it's the search engine that has this problem19:30
corvusbut the search engine is a major selling point19:30
clarkbso after a discussion in #opendev earlier today I think we are leaning towards running multiple shared nothing gitea instances similar to how we run multiple shared nothing cgit instances today19:30
fungiwhich does mean independently replicating to them for now19:31
mordred++19:31
clarkbAnd longer term it sounds like they'll add support for elasticsearch backed search indexes for the code which will allow us to go back to the shared everything cluster model in k8s19:31
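The shared-nothing deployment being discussed amounts to one standalone gitea per server managed with docker-compose; a purely illustrative sketch (image tag, ports and volume path are placeholders, not the opendev configuration):

```yaml
# One shared-nothing gitea instance; replication pushes to each such
# instance independently until shared search indexes are supported.
version: '3'
services:
  gitea:
    image: gitea/gitea:1.7
    ports:
      - "3000:3000"   # web ui / git-over-https
      - "222:22"      # git-over-ssh
    volumes:
      - ./gitea-data:/data
```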
fungialso, i think we didn't touch on this, but we'll likely need some manner of client affinity in the load balancing layer to deal with replication being out of sync between cluster members19:32
fungi(if memory serves we do this in the cgit haproxy instance today?)19:32
corvusi'd like to keep the k8s cluster up and running to continue work in parallel on deployment and testing tooling19:32
corvusfungi: yeah, i expect us to keep exactly the haproxy config we have now with docker-compose gitea.19:33
clarkbfungi: we switched to least connections backend with haproxy but can go back to client affinity based on ip19:33
clarkbiirc19:33
mordredcorvus: basically just switching the backend ips really, right?19:33
corvusmordred: yep19:33
mordred\o/19:33
*** eharney has joined #openstack-meeting19:34
fungiclarkb: oh, did we not actually end up with any git fetch errors from dropping the client affinity?19:34
*** sdake has joined #openstack-meeting19:34
fungiin that case it's probably safe to do the same with gitea19:34
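The client-affinity option fungi raises could look roughly like the following haproxy backend; a sketch only, with illustrative server names and addresses rather than the real opendev config:

```
backend gitea_https
    balance source        # client-IP affinity; 'leastconn' is the alternative discussed
    hash-type consistent  # keep client-to-backend mappings stable across backend outages
    server gitea01 192.0.2.11:3000 check
    server gitea02 192.0.2.12:3000 check
```

Switching between `balance source` (affinity) and `balance leastconn` is a one-line change, so either behaviour would be easy to trial.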
corvusthough, maybe it's more complex because we're also changing hostnames... so... i dunno, maybe it's a duplicate haproxy.19:34
corvusand actually...19:34
corvusdo we want to host the docker gitea instances in vexxhost?19:34
corvus(that's where we put the k8s cluster)19:34
fungiis there a reason not to?19:34
corvusonly reason i can think of is ipv6, which should be fixed before it matters to us.19:35
corvusbut if we do, then we definitely want a new lb, since that should be in the same dc as the backends :)19:35
fungihave we gotten working reverse dns for v6 addresses in vexxhost yet?19:36
corvusfungi: we don't have v6 in vexxhost sjc1 yet.  but it should be there RSN.19:36
fungiahh19:36
clarkbcorvus: what was the reason for using sjc1 over montreal?19:36
corvuswell, originally, because magnum was newer there :)19:36
clarkbthere is working ipv6 in the other region today19:37
corvuswe could ask mnaser where he'd like us to put them :)19:37
clarkb++19:37
corvuseither way, it'll be a new load balancer.19:37
clarkbOther news in opendev land is we've started booting replacement servers with opendev dns names and CNAMEing openstack records to them19:38
clarkbI updated the launch docs with some short notes on how this changes our server launch processes19:38
clarkbplease do propose updates to that little README if there are inconsistencies with reality19:38
mordredsweet19:39
*** e0ne has quit IRC19:39
fungiyeah, the storyboard servers are this way now19:39
clarkbas is the new pbx19:39
*** e0ne has joined #openstack-meeting19:39
clarkbAnything else on opendev?19:40
clarkb#topic General Topics19:41
*** openstack changes topic to "General Topics (Meeting topic: infra)"19:41
clarkbFirst up frickler points out that I have been bad with emacs and gpg agent and the secrets file19:41
prometheanfiredo we know why image sizes jumped? https://nb01.openstack.org/images/19:41
*** mjturek has quit IRC19:42
*** diablo_rojo has joined #openstack-meeting19:42
corvusme too19:42
clarkbprometheanfire: we can follow up after the agenda'd topics19:42
fricklerclarkb: I was also wondering whether we could somehow automate this to be safer, like having a cronjob that checks for running gpg-agent19:42
corvuswe should write a /usr/local/bin script19:43
clarkbfrickler: or maybe create a small script to use to run emacs for that file19:43
corvuseditpw or something19:43
prometheanfireclarkb: k19:43
clarkbcorvus: jinx19:43
mordredI have also been bad at emacs19:43
mordredclarkb, corvus: ++19:43
clarkbcorvus: I think that solution will be easy for me to remember once i use it a couple times19:43
clarkbfrickler: ^ what do you think?19:43
fricklersounds good, yes19:43
corvusyeah.  there's no way i'll remember "gpg-agent --daemon emacs"19:43
ianwwas this an email?  i've missed this19:43
corvusno one think about that too much.19:44
ianwre: gpg agent19:44
fungican we configure gpg-agent to run on-demand and terminate immediately by default?19:44
clarkbianw: no frickler pointed it out in irc then added it to the meeting agenda19:44
*** markvoelker has quit IRC19:44
clarkbianw: the tldr is running emacs to open the gpg encrypted file starts a gpg agent that never goes away19:44
fungii want to say i've seen instructions for how to do that when building and signing debian packages in a chroot19:44
fricklerianw: and it caches the passphrase for an hour or so19:45
clarkbfungi: the gpg-agent --daemon emacs command is the command version of that19:45
clarkbfungi: so it is possible19:45
ianwoh right, yeah from time to time i've killed gpg-agents after emacs went bananas and totally messed up the terminal too19:45
*** e0ne has quit IRC19:45
mordredfungi: we were unable to find any way to cause the correct behavior when we were looking at this in berlin19:46
clarkbI think the editpw script solves this in a pretty straightforward way19:46
clarkbso I'm good with that option19:46
mordredfungi: other than the gpg-agent --daemon emacs ... but if there is a way to just configure the system to not be terrible, that would bae awesome19:46
mordredclarkb: ++19:47
clarkbfrickler: is that something you would like to push up?19:47
fungiyeah, i'm hunting for a config option there19:48
clarkbok we've only got ~12 minutes so lets get to the next item19:48
fricklerclarkb: I can give it a try, but if someone else wants to do it, I'm not too sad either19:48
clarkbfrickler: ok19:48
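A minimal sketch of the `editpw` wrapper brainstormed above, built around the `gpg-agent --daemon emacs` invocation from the discussion; the name and interface are only what was proposed here, not a settled implementation:

```shell
# editpw: open an encrypted file in emacs under a one-shot gpg-agent so
# no long-lived agent is left behind caching the passphrase afterwards.
editpw() {
    # ${1:?...} aborts with a usage message when no file is given
    gpg-agent --daemon emacs "${1:?usage: editpw <file.gpg>}"
}
```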
clarkbNext up Trusty server upgrades19:48
clarkb#link https://etherpad.openstack.org/p/201808-infra-server-upgrades-and-cleanup19:48
clarkbfungi and I have made recent progress on this19:48
clarkbthere are still a few outstanding and potentially getting more difficult servers in that list.19:49
fungiyeah, frickler just now approved the inventory addition for the openstackid.org xenial replacement so we're probably close to swapping it out19:49
clarkbIf you can grab one or two it will go quickly. Another thought was maybe a sprint sometime after openstack release candidate processes quiet down? (I think that is mid-March?)19:49
fungiwe probably need to plan another in-place ubuntu release upgrade for lists.o.o19:50
clarkbI can send out an email to try and organize a sprint if we think we'd sit down for a few days to focus on this. Maybe let me know in the infra channel today if you think you could help with a sprint19:50
clarkbprometheanfire: ok image size questions. The easiest way to start sorting that out would likely be to grab the qcow2's from before and after the size jump, and either nbd mount them or boot them and run du?19:51
clarkbprometheanfire: I do not know why there is a size jump19:51
clarkbmost of that image size cost is git repos iirc. So it is possible we've got a git repo growing quickly19:51
ianwi will go back and see why we turned off the automatic size reporting in the dib logs19:51
corvusi'd like to let folks know that i'm continuing work on using the intermediate docker registry.  still hitting bugs, but hopefully soon we'll be able to have speculative docker images, which means we can test changes to images in our deployment tooling before deploying them.19:51
ianwit was intended for exactly this :)19:52
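The qcow2 inspection clarkb describes could be scripted roughly as below; it assumes root, qemu-utils and the nbd kernel module, and the device and mountpoint paths are placeholders:

```shell
# image_du: attach a qcow2 read-only via qemu-nbd, mount it, and report
# the largest top-level directories so before/after images can be compared.
image_du() {
    img="$1"
    sudo modprobe nbd max_part=8
    sudo qemu-nbd --read-only --connect=/dev/nbd0 "$img"
    sudo mount -o ro /dev/nbd0p1 /mnt
    sudo du -shx /mnt/* 2>/dev/null | sort -h | tail -n 20
    sudo umount /mnt
    sudo qemu-nbd --disconnect /dev/nbd0
}
```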
clarkbcorvus: that is exciting19:52
clarkbcorvus: also yay for ansible bugs/behavior19:52
mordredcorvus: \o/19:52
clarkb#topic Open Discussion19:52
*** openstack changes topic to "Open Discussion (Meeting topic: infra)"19:52
prometheanfireclarkb: ya, will probably do that19:52
clarkbwe've started discussing open things anyway :)19:52
*** sdake has quit IRC19:52
prometheanfireptg schedule?19:52
clarkbprometheanfire: I brought it up last week and have an etherpad to start putting ideas on19:53
* clarkb finds link19:53
clarkb#link https://etherpad.openstack.org/2019-denver-ptg-infra-planning19:53
clarkbI think we're moving at a relatively quick pace on a lot of items so it is hard to know exactly what we'll want to work on at the PTG19:54
clarkbbut there should be plenty of topics when we get there19:54
prometheanfirek19:54
corvusi think we should do some containerization, but i don't think we'll know what until much closer.19:55
*** sdake has joined #openstack-meeting19:55
*** bbowen has joined #openstack-meeting19:56
clarkbOne thing I've struggled with (and maybe this means I need to try hard to use gertty again) is keeping track of outstanding reviews that need to be done for these various efforts. I think we have been reasonably good about posting stacks/topics that need eyeballs but maybe I should look at using storyboard worklists again19:57
ianwthis is where i got sidetracked with graphite; it's probably better to just do the easy upgrade to xenial, then tackle containerisation separately?19:57
clarkbianw: I think that is where we've ended up on a couple other upgrades.19:58
corvusclarkb: i try to make sure the topic is set on opendev-gerrit changes19:58
*** sdake has quit IRC19:58
corvusit's not perfect19:58
* corvus changes some topics19:58
clarkbcorvus: ya maybe what I really need is a gerrit dashboard to collect a number of those topics for me19:58
clarkb(I'm mostly just brainstorming out loud about this)19:58
ianwclarkb: gerrit-dash-creator might help too, certainly a shared one that highlights current important topics19:58
corvuswell, we're supposed to have that in the specs page19:59
corvushttp://specs.openstack.org/openstack-infra/infra-specs/19:59
clarkbcorvus: ya, you can fine tune it a bit more with a dashboard though19:59
corvusthat url seems broken19:59
corvushttps://review.openstack.org/#/q/status:open+AND+(topic:storyboard-migration+OR+topic:opendev-gerrit)19:59
*** thgcorrea has quit IRC19:59
clarkblike show all the changes with one +2 and no -1's to flush them quickly etc19:59
fungisphinx's url highlighting may simply be broken19:59
ianwoh right, yeah we could do a little fancier one that filters out the different votes20:00
fungilooks like it doesn't include the trailing ) in the hyperlink and instead shows it as text20:00
corvussure, but we don't need that for priority efforts20:00
clarkbcorvus: that is true20:00
corvusthere are never more than a handful of those20:00
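A gerrit-dash-creator definition along the lines ianw suggests might look like this; the section split (one +2 and no -1, versus unreviewed) follows clarkb's example, and the topics come from corvus's query above, but the file itself is only a sketch:

```
[dashboard]
title = Infra Priority Efforts
description = Reviews for current priority effort topics
foreach = status:open (topic:storyboard-migration OR topic:opendev-gerrit)

[section "Ready to approve (one +2, no -1)"]
query = label:Code-Review=2 NOT label:Code-Review<=-1

[section "Needs first review"]
query = NOT label:Code-Review>=1 NOT label:Code-Review<=-1
```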
clarkband we are at time20:00
fungithanks clarkb!20:00
clarkbThank you everyone. I'll send you back to your regularly scheduled day20:00
corvusso what i'd love is if we kept up making sure the topics are set right20:00
clarkb#endmeeting20:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"20:00
openstackMeeting ended Tue Feb 19 20:00:40 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-02-19-19.01.html20:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-02-19-19.01.txt20:00
openstackLog:            http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-02-19-19.01.log.html20:00
corvusthen we can use that link and simple dashboards/queries etc20:00
clarkbcorvus: ++ I can clean that up today since i brought it up20:00
*** prometheanfire has left #openstack-meeting20:01
*** Shrews has left #openstack-meeting20:01
*** bbowen has quit IRC20:02
*** b1airo has joined #openstack-meeting20:22
*** efried has quit IRC20:29
*** efried has joined #openstack-meeting20:29
*** tssurya has quit IRC20:32
*** ttsiouts has joined #openstack-meeting20:36
*** awaugama has quit IRC20:36
*** e0ne has joined #openstack-meeting20:37
*** Vadmacs has quit IRC20:38
*** markvoelker has joined #openstack-meeting20:41
*** erlon has quit IRC20:55
*** igordc has joined #openstack-meeting20:58
*** martial has joined #openstack-meeting21:00
martialHello21:00
martial(On my phone for now)21:00
*** ttsiouts has quit IRC21:00
*** ttsiouts has joined #openstack-meeting21:01
*** janders has joined #openstack-meeting21:01
b1airomorning martial21:01
b1airo#startmeeting Scientific-SIG21:02
openstackMeeting started Tue Feb 19 21:02:11 2019 UTC and is due to finish in 60 minutes.  The chair is b1airo. Information about MeetBot at http://wiki.debian.org/MeetBot.21:02
jandersgood morning, good evening All21:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:02
*** openstack changes topic to " (Meeting topic: Scientific-SIG)"21:02
openstackThe meeting name has been set to 'scientific_sig'21:02
b1airo#chair martial21:02
openstackCurrent chairs: b1airo martial21:02
b1airohi janders21:02
jandersI just had a really interesting chat about baremetal cloud & networking21:03
janderswould you be interested in spendning some time on this?21:03
b1airooh yeah! who with?21:03
jandersRHAT, Mellanox and Bright Computing21:03
*** eharney has quit IRC21:03
b1airodo tell21:03
jandershave you guys played much with pxe booting baremetals from neutron tenant networks (as opposed to the nominated provisioning network)?21:04
b1airono, not at all, but it's clearly one of the key issues in multi-tenant bare-metal...21:05
janderswith Bright, we're trying to make Bright Cluster Manager a Cloud-native app21:05
*** ttsiouts has quit IRC21:05
jandersit might seem out of the blue but I do see a ton of benefits of this approach and few disadvantages (other than the initial work to make it work)21:05
b1airoright, that's a very sensible approach for them21:05
jandersnow - they're all about pxe booting (as you probably know - you're a Bright user right?)21:06
b1airopresumably they'd make a new agent or something that sat inside each tenant21:06
*** gt1437 has joined #openstack-meeting21:06
jandersso you need to spin up frontend and backend networks. You put BCM-cloud on both. Then you pxeboot the compute nodes off BCM's backend21:06
b1airoNeSI has Bright (BCM and BOS) so I'm picking some stuff up, but I'm not hands on in the ops stuff here21:07
janderssame as me - big bright user but we have bright gurus in the team so I know what it does but log in maybe once a month21:07
*** raildo_ has quit IRC21:07
*** njohnston_ has quit IRC21:08
janderson Ethernet - it's all easy and just works. Networking-ansible is plumbing VLANs, there's no NIC side config21:08
janderson IB though provisioning_network pkey needs to be pre-set in the HCA FW21:08
*** njohnston_ has joined #openstack-meeting21:08
jandersafter that's done it's rock-solid but one negative side effect is you can't pxeboot off tenant networks. We tried some clever hacks with custom compiled ipxe but nah. The silicon will drop any packets that do not match the pre-set pkey21:09
janderslooks like dead end21:09
jandersso - back to the drawing board21:09
jandersthe kit I've got does have ethernet ports, they are just shut off cause the switching has no SDN21:09
jandersit does have ssh though hence... networking-ansible!21:10
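For context, networking-ansible is configured with one host section per SSH-managed switch in the ML2 config, roughly as below; the driver list, switch name, network OS value and credentials are placeholders, not a tested config for the MXL:

```
[ml2]
mechanism_drivers = ansible,openvswitch

# one section per managed switch; networking-ansible reaches it over SSH
[ansible:force10-mxl-1]
ansible_network_os = dellos9
ansible_host = 192.0.2.50
ansible_user = admin
ansible_ssh_pass = secret
```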
b1airohmm, until MLNX creates a new firmware with some pre-shared trusted key or some such21:10
jandersI was told it can be made work in parallel to Mellanox SDN, both can be configured at the same time just for different physnets21:10
jandersI think this can be made work (pending networking-mellanox testing on these switches)21:10
b1airowhich kit do you have?21:11
jandersM1000es from Dell21:11
janders(I have many different platforms but that's the one I got in large numbers for OpenStack to start with)21:11
jandershere I wanted to ask you what you think about all this and what is your experience with networking-ansible21:11
jandersI'll VPN into work to get the exact switch details (RHAT are asking too) bear with me if I drop out21:12
jandersok.. still here?21:13
*** yamamoto has joined #openstack-meeting21:13
*** markvoelker has quit IRC21:13
b1airohaven't used networking-ansible myself yet, though it was on the radar when looking at trying to insert an Ironic Undercloud into things at Monash, where our networks are all 100G Mellanox Spectrum running Cumulus21:14
jandersForce10 MXL Blade21:16
jandersthat's my eth platform21:16
janderson that - what are your thoughts on NEO+Mellanox ethernet vs networking-ansible+Mellanox ethernet?21:17
*** yamamoto has quit IRC21:18
jandersI bet there won't be feature parity yet but conceptually it's quite interesting21:18
*** ralonsoh has quit IRC21:19
jandersvery different approach21:19
jandersI suppose the same question applies to Juniper SDNs etc21:19
b1airowe first looked at NEO over 2 years ago, it wasn't very polished or nice from a UI perspective yet, so was difficult to actually realise the potential value add. i had been promising the MLNX folks we'd try again for at least 6 months before I left Monash - no doubt way back in the pile for them now21:19
b1airowe spent quite a lot of effort moving to Cumulus + Ansible for basic network management and I was/am very keen to see that progress to full automation of the network as a service21:21
*** whoami-rajat has quit IRC21:21
gt1437We will get there21:21
jandersyeah this tenant-pxeserver on baremetal/IB challenge might actually lead to interesting infrastructure changes21:22
janderson my side21:23
jandersrunning SDN and networking-ansible side-by-side does add more complexity but I think more capability going forward21:23
b1airoaiming for bare-metal multi-tenant networking with Mellanox RoCE capable gear - it seemed like we would need to try Cumulus + Neutron native (networking-ansible would be a possible alternative there) and also NEO + Neutron. interesting thing there is that value of Cumulus suddenly becomes questionable if you're managing the BAU network operations via Neutron, given e.g. NEO can do all the day0 switch provisioning21:24
b1airostuff21:24
jandersOpenStack internal traffic (other than storage) couldn't care less about IB - maybe it's better if these are kept separate so there is even more IB bandwidth for the workloads (and less jitter)21:24
janderswith Cumulus, do they have the same SDN-controller-centric architecture as NEO?21:25
jandersDo you configure Cumulus URL in Neutron SDN section, and the flow is Neutron>Cumulus>Switching gear?21:25
b1airoyeah our basic cloud hpc networking architecture used dualport eth with active-backup bonding for host stuff (openstack, management, etc) and sriov into tenant machines on top of the default backup interface21:26
jandershttps://docs.cumulusnetworks.com/display/CL35/OpenStack+Neutron+ML2+and+Cumulus+Linux21:27
jandersI see Cumulus has HTTP service listening21:27
jandersHowever, in Neutron, individual switches are configured21:27
b1airono central controller that i'm aware of for Cumulus, the driver has to talk to all the devices21:27
jandersok.. so what's the HTTP with REST API service for? Do switches talk to it?21:27
b1airobut yes, all restful apis etc, no ssh over parallel python ;-)21:28
*** e0ne has quit IRC21:28
jandersor is the REST API running on each switch?21:30
b1airothe rest api is on the switches themselves21:30
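Per the linked Cumulus ML2 docs, Neutron is pointed at each switch's REST endpoint directly rather than at a central controller; a rough sketch of that shape (option names and addresses are illustrative and should be checked against the Cumulus documentation):

```
[ml2]
mechanism_drivers = cumulus,openvswitch

[ml2_cumulus]
# each switch runs its own REST API; the driver talks to all of them
switches = 192.0.2.21,192.0.2.22
protocol_port = 8080
```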
b1airoya21:30
jandersok!21:30
jandersright...21:30
jandersnice - no two sources of truth (Neutron DB and then SDN DB)21:30
jandershowever troubleshooting will be fun (log aggregation will help to a degree)21:31
b1airothat's true21:31
janderswith centralised SDN my experience is - they work 99% - but when they stop, life sucks21:31
b1airoexactly, one of the things we were trying to wrangle with - monitoring, visibility, etc etc21:31
jandersmanual syncing of the DBs would be a nightmare and a high risk operation21:32
b1airook, i'm at a conference and talk i'm in (Barbara Chapman talking about OpenMP for Exascale) just finished21:32
b1airoso i'm going to have to scoot pretty soon21:33
martialwhat are the other topics to cover?21:33
b1airobetter touch on the ISC19 BOF in case anyone is lurking21:33
janderswhat conference? :)21:33
b1airo...21:33
b1airoeResearchNZ :-)21:33
jandersdo you guys know if we're still on track for speaker notifications for Denver?21:34
jandersit would be great to book travel by the end of this week if possible21:34
b1airohad a plenary before that on the Integrated Data Infrastructure at Stats NZ - enviable resource coming from the Aus context, clearly digital health in australia didn't talk to them before launching my health record...21:35
jandershaha :)21:35
martialjanders: I heard they are runing a day late21:35
<b1airo> ah yes, the track chairing process got extended till Sunday just gone as there was a slight booboo that locked everyone out of the tool  21:35
<janders> right.. so no notifications this week?  21:36
<martial> I heard the 20th instead of the 19th  21:36
<b1airo> but we've done our job now, so i guess foundation staff will be doing the final checks and balances before locking in  21:36
<janders> ah ok  21:37
<janders> I will keep this in mind while making plans  21:37
<janders> thanks guys - this is very helpful  21:37
<b1airo> sounds possible martial , let me check my email  21:37
<b1airo> janders: just quietly, you can confidently make plans  21:37
<janders> :) :) :) thank you  21:39
<janders> b1airo: not sure if this is relevant to you but Qantas double status credits are on  21:41
<janders> or are you with AirNZ these days?  21:41
<b1airo> yeah kiwi-air all the way ;-)  21:42
<janders> Nice. Good airline and nice aircraft.  21:43
<b1airo> ok i'd better run. martial , shall we finish up now or do you want to kick on and close things out?  21:43
<janders> I don't have anything else.  21:43
<martial> I can close  21:43
<b1airo> agreed janders , they can be a bit too progressive/cringe-worthy with their safety videos at times though  21:43
<martial> although not sure if we have much to add  21:43
<martial> #topic ISC BoF  21:44
*** openstack changes topic to "ISC BoF (Meeting topic: Scientific-SIG)"  21:44
<b1airo> ok, i'm off to the coffee cart. o/  21:44
<janders> safe travels mate!  21:44
<martial> like @b1airo mentioned, we are proposing an OpenStack HPC panel for ISC 19 (Frankfurt, June)  21:44
<janders> I haven't managed to get anything out of my colleagues who might be going to ISC so unfortunately can't contribute  21:44
<gt1437> I put in a talk in vHPC, so I might be there for ISC  21:45
<martial> gt1437: cool :)  21:46
<janders> nice!  21:46
<gt1437> and just realised the one after Denver is Shanghai, that's cool  21:47
<janders> oh wow  21:47
<janders> thanks, I didn't know (Shanghai was in my top 3 guesses though)  21:48
<gt1437> yeah thought it was going to be beijing, anyway, still good  21:49
<janders> any goss about the exact dates?  21:49
<janders> (the website states just "nov 2019")  21:50
<gt1437> no idea  21:50
<martial> no dates yet  21:51
<gt1437> I'm off to meetings.. ciao  21:54
<janders> have a good day  21:55
<martial> have a good day all,  21:55
<martial> anything else?  21:55
<martial> otherwise just a reminder that we will have the Lightning talk at the summit :)  21:55
<janders> I think we're good  21:56
<janders> thank you Martial  21:56
<martial> #endmeeting  21:57
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"  21:57
<openstack> Meeting ended Tue Feb 19 21:57:10 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  21:57
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-02-19-21.02.html  21:57
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-02-19-21.02.txt  21:57
<openstack> Log:            http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-02-19-21.02.log.html  21:57

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!