16:01:40 <slaweq> #startmeeting neutron_ci
16:01:41 <openstack> Meeting started Tue Dec 17 16:01:40 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:43 <slaweq> hi again
16:01:45 <openstack> The meeting name has been set to 'neutron_ci'
16:01:46 <ralonsoh> hi
16:01:48 <njohnston> o/
16:02:43 <bcafarel> o/
16:02:52 <slaweq> let's start and do this quickly :)
16:03:02 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:03:09 <slaweq> #topic Actions from previous meetings
16:03:17 <slaweq> njohnston to check failing NetworkMigrationFromHA in multinode dvr job
16:03:58 <njohnston> sigh.  I got distracted from this, going to circle back to it today.
16:04:06 <slaweq> sure :)
16:04:11 * njohnston writes it in marker on his arm
16:04:16 <slaweq> I will add it as a reminder for next time
16:04:23 <slaweq> #action njohnston to check failing NetworkMigrationFromHA in multinode dvr job
16:04:32 <slaweq> ok, next one
16:04:34 <slaweq> slaweq to check and try to optimize neutron-grenade-multinode jobs memory consumption
16:04:43 <slaweq> I just sent patch https://review.opendev.org/699441 - let's see how it goes
16:04:59 <slaweq> I want to disable swift, etcd and cinder-backup
16:05:19 <ralonsoh> +1
16:05:20 <slaweq> I'm not sure if we can disable other services too
16:05:26 <ralonsoh> try and error
16:05:53 <slaweq> where is error?
16:05:57 <ralonsoh> no no
16:06:04 <ralonsoh> I mean: to try to disable other services
16:06:11 <ralonsoh> and see if there is an error
16:06:12 <njohnston> *trial and error
16:06:15 <ralonsoh> sorry!
16:06:17 <ralonsoh> trial
16:06:18 <njohnston> :-)
16:06:27 <slaweq> ahh, ok
16:06:30 <slaweq> :)
16:07:02 <slaweq> I will also try to disable cinder completely - maybe that will be fine too
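A minimal sketch of what disabling those services might look like, assuming the job can be driven through a devstack local.conf and that the usual service names apply (s-* for swift, etcd3 for etcd, c-bak for cinder-backup); https://review.opendev.org/699441 may do this differently, e.g. via job variables:

    # hedged sketch, not the contents of the patch linked above:
    # turn off services the neutron grenade job does not exercise, to save memory
    [[local|localrc]]
    disable_service s-account s-container s-object s-proxy   # swift
    disable_service etcd3                                     # etcd
    disable_service c-bak                                     # cinder-backup
    # and, as mentioned above, possibly the rest of cinder too:
    # disable_service c-api c-sch c-vol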
16:07:22 <slaweq> ok, next one
16:07:24 <slaweq> ralonsoh to check ironic-tempest-ipa-wholedisk-bios-agent_ipmitool-tinyipa job
16:07:31 <ralonsoh> yes
16:07:47 <ralonsoh> we found that dhcputils package was missing
16:08:07 <ralonsoh> #link https://review.opendev.org/#/c/698425/
16:08:15 <ralonsoh> ^^ slaweq this is tours
16:08:19 <ralonsoh> *yours
16:08:25 <slaweq> yes, that was funny because all neutron jobs were actually passing by accident :)
16:08:42 <ralonsoh> yeah...
16:08:48 <ralonsoh> (btw, dnsmasq-utils package)
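As an illustration of the fix being described, a rough sketch assuming it lands in a devstack-style install path (is_service_enabled and install_package are standard devstack helpers; the exact location and condition used in https://review.opendev.org/#/c/698425/ may differ):

    # hedged sketch of the idea, not the actual change: make sure dnsmasq and
    # its helper utilities are installed wherever the DHCP agent is enabled,
    # instead of relying on another job or package pulling them in by accident
    if is_service_enabled q-dhcp neutron-dhcp; then
        install_package dnsmasq dnsmasq-utils
    fi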
16:10:48 <slaweq> ok, next one
16:10:51 <slaweq> slaweq to check tripleo job
16:11:12 <slaweq> I checked it and it is failing because this job is run on Centos 7 and using python 2.7 stilk
16:11:15 <slaweq> *still
16:11:25 <bcafarel> which is kind of problematic with master
16:11:45 <slaweq> in ooo repos they pin to a specific neutron commit which works with py27 for now
16:11:51 <slaweq> but we can't do that
16:12:17 <slaweq> so we will probably need to wait until tripleo works on centos 8, and then we will switch this job to centos 8
16:12:47 <slaweq> and next one
16:12:49 <slaweq> ralonsoh to check periodic mariadb job failures
16:13:01 <ralonsoh> #link https://review.opendev.org/#/c/698980/
16:13:19 <ralonsoh> in devstack, if bionic and mariadb
16:13:29 <ralonsoh> then we add the mariadb repos for 10.4
16:13:39 <ralonsoh> I tested this version manually
16:13:42 <ralonsoh> and it's working
16:14:04 <slaweq> thx ralonsoh for checking that
16:14:09 <ralonsoh> yw
16:14:39 <slaweq> ralonsoh: but is this fixup_ubuntu function called in devstack jobs?
16:14:52 <slaweq> I know it was used e.g. in functional job some time ago
16:15:03 <slaweq> but I'm not sure if it's used on devstack based scenario jobs
16:15:34 <ralonsoh> it is called in neutron/tools/configure_for_func_testing.sh
16:15:58 <ralonsoh> ok, I'll check that
16:16:15 <slaweq> ralonsoh: so this script is used in functional and fullstack tests
16:16:19 <slaweq> *job
16:16:20 <ralonsoh> I know
16:16:27 <slaweq> but probably not in tempest jobs
16:17:04 <ralonsoh> I'll check that
16:17:07 <slaweq> thx
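To make the devstack change ralonsoh describes above more concrete, a rough sketch of that kind of check, assuming devstack's usual DISTRO and MYSQL_SERVICE_NAME variables; the mirror URL and key handling here are placeholders, not what https://review.opendev.org/#/c/698980/ actually uses:

    # hedged sketch only: on Ubuntu Bionic with MariaDB selected, add the
    # MariaDB 10.4 repository before installing the server packages
    if [[ "$DISTRO" == "bionic" && "$MYSQL_SERVICE_NAME" == "mariadb" ]]; then
        # placeholder mirror; a real change would also import the repository key
        sudo add-apt-repository -y \
            "deb [arch=amd64] http://mariadb.mirror.example.org/repo/10.4/ubuntu bionic main"
        sudo apt-get update
    fi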
16:17:18 <slaweq> that's all actions from last week
16:17:22 <slaweq> next topic
16:17:24 <slaweq> #topic Stadium projects
16:17:38 <slaweq> I want to say that we finally finished the tempest-plugins migration
16:17:49 <slaweq> thx all of You for help with that
16:18:12 <njohnston> \o/
16:18:14 <slaweq> gmann should be happy that we don't have neutron-related tempest plugins in the projects' repos directly
16:18:23 <slaweq> but all in neutron-tempest-plugin repo now
16:18:38 <njohnston> do we need to let the QA team know?  ISTR there was something they were going to optimize/fix with the build process once this was done
16:18:53 <bcafarel> nice to see this one finally completed :)
16:19:10 <slaweq> njohnston: yes, I will ping gmann to let him know about it
16:19:21 <njohnston> +1 thanks
16:19:33 <slaweq> #action slaweq to talk with gmann about the completion of the tempest plugins migration
16:20:07 <slaweq> next stadium projects related topic is dropping py27 support
16:20:14 <slaweq> Etherpad: https://etherpad.openstack.org/p/neutron-train-zuulv3-py27drop
16:20:32 <slaweq> we talked about it 2 hours ago so I think we are good with it now
16:20:39 <bcafarel> just one point, I added a link to the etherpad that should grab most of the open reviews
16:20:55 <njohnston> #success Neutron team concluded long-running project to migrate stadium tempest tests to neutron-tempest-plugin
16:21:22 <slaweq> thx bcafarel
16:21:26 <slaweq> and thx njohnston :)
16:21:52 <njohnston> bcafarel: You may want to tinyurl that, clicking on it in etherpad does not handle the parentheses well
16:22:27 <bcafarel> njohnston: oh indeed
16:22:41 <njohnston> nice use of gerrit regexing
16:23:13 <slaweq> will be back in 1 minute
16:23:31 <bcafarel> thanks, this is a variation on what I use to watch other repos on stable branches
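For readers without the etherpad at hand, a hypothetical example of the kind of Gerrit regex query being discussed; the project regex below is an assumption about its shape, not bcafarel's actual link:

    # hypothetical illustration of a Gerrit "regexing" query:
    # all open reviews across stadium repositories matched by a project regex
    #   project:^openstack/(neutron-.*|networking-.*) status:open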
16:24:38 * slaweq is back
16:25:06 <slaweq> ok, next topic
16:25:07 <slaweq> #topic Grafana
16:25:32 <slaweq> #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:26:24 <njohnston> quite a week
16:26:25 <slaweq> there was some issue with some package, like cryptography or something like that, a few days ago
16:26:37 <slaweq> and now we have the issue with hacking
16:26:44 <slaweq> but other than that the graphs look good IMO
16:27:16 <njohnston> lol.  As my father-in-law would say, "Other than that Mrs. Lincoln, how was the play?"
16:27:57 <slaweq> and the dashboard is now updated, so it also includes some ovn jobs
16:28:24 <njohnston> excellent
16:28:43 <bcafarel> nice (and njohnston++ nice one)
16:29:09 <slaweq> do You have anything else about grafana?
16:29:20 <ralonsoh> yes
16:29:26 <ralonsoh> neutron-grenade-dvr-multinode is failing a lot of the time
16:29:34 <ralonsoh> both in the check queue and in the gate
16:29:54 <ralonsoh> we can see that in THE patch: https://review.opendev.org/#/c/699021/
16:30:07 <bcafarel> heh, I was about to make the same comment :)
16:30:08 <ralonsoh> we have been trying to merge it for the last 3 days
16:31:00 <njohnston> yet in Grafana it looks like - the problem from this weekend aside - it's at a 10-20% failure rate
16:31:04 <njohnston> in the check queue
16:32:20 <slaweq> did You check the reasons for those failures?
16:32:39 <slaweq> from what I checked last week it was always related to those timeouts from the placement api
16:32:45 <slaweq> or almost always
16:33:12 <ralonsoh> and the memory problems (swap problems)?
16:33:30 <slaweq> that's what coreycb said last week IIRC
16:33:40 <slaweq> so we can try to optimize it a bit
16:34:03 <ralonsoh> I know, thanks for your previous patch
16:34:44 <slaweq> I know that grenade jobs are currently our biggest pain, but I don't have an easy answer for what to do with them
16:34:57 <slaweq> we can switch them to non-voting but I don't think that's a good idea
16:35:13 <ralonsoh> me neither, I tried to debug them without any luck
16:37:14 <slaweq> let's try to optimize those jobs a bit and check how it works then
16:37:21 <ralonsoh> +1
16:38:00 <slaweq> anything else regarding grafana for today?
16:38:11 <slaweq> if not, let's move on to the next topic
16:39:07 <slaweq> ok, let's move on then
16:39:31 <slaweq> I don't have any new issues for fullstack/functional tests, nor for tempest jobs
16:39:37 <slaweq> so that's very good news IMO
16:39:51 <slaweq> and I have just one last topic for today
16:40:00 <slaweq> #topic On demand agenda
16:40:37 <slaweq> as I said in the neutron team meeting, I'm going to cancel these meetings for the rest of this year
16:40:42 <slaweq> are You fine with that?
16:40:56 <ralonsoh> ok
16:41:00 <slaweq> we will meet again on January 7th
16:41:18 <slaweq> when we will all be back from the holidays
16:41:24 <slaweq> fine for You?
16:41:41 <bcafarel> oh yes
16:41:58 <slaweq> :)
16:42:08 <slaweq> great, I will also send an email about it this week
16:42:20 <slaweq> and that's all from me for today
16:42:33 <slaweq> do You have anything else to talk about for today?
16:43:06 <ralonsoh> no
16:43:12 <bcafarel> all good, just hoping THE patch gets in with this recheck
16:43:21 <ralonsoh> THE patch!!
16:43:31 <slaweq> LOL
16:43:37 <slaweq> THE Patch - good one :)
16:44:16 <slaweq> ok, so have great holidays and a merry Christmas
16:44:31 <slaweq> and see You after this break in 2020
16:44:35 <bcafarel> same!
16:44:38 <slaweq> all the best
16:44:40 <ralonsoh> see you all next year!!
16:44:41 <slaweq> #endmeeting