16:00:05 <slaweq> #startmeeting neutron_ci
16:00:06 <openstack> Meeting started Tue Apr 16 16:00:05 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:08 <slaweq> hi
16:00:10 <openstack> The meeting name has been set to 'neutron_ci'
16:00:23 <ralonsoh> hi
16:00:28 <mlavalle> o/
16:00:44 <slaweq> first of all
16:00:46 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:00:54 <slaweq> please open it now
16:01:11 <bcafarel> o/
16:01:23 <slaweq> #topic Actions from previous meetings
16:01:36 <slaweq> first one: mlavalle to debug reasons of neutron-tempest-plugin-dvr-multinode-scenario failures
16:01:57 <mlavalle> slaweq: I didn't spend as much time on it as I wanted, so no conclusions here yet
16:02:25 <slaweq> sure, let's assign it to You for next week as a reminder, ok?
16:02:52 <mlavalle> yes
16:02:56 <mlavalle> please
16:03:00 <slaweq> #action mlavalle to debug reasons of neutron-tempest-plugin-dvr-multinode-scenario failures
16:03:02 <slaweq> thx
16:03:13 <slaweq> so let's go to the next one
16:03:15 <slaweq> ralonsoh to report and triage new fullstack test_min_bw_qos_policy_rule_lifecycle failure
16:03:39 <ralonsoh> slaweq, I think that was solved
16:03:53 <ralonsoh> do you have the launchpad id?
16:04:06 <slaweq> no, I don't have it now
16:04:52 <ralonsoh> slaweq, ok, let me check it, I'll report it later
16:05:17 <slaweq> ralonsoh: was it this one https://bugs.launchpad.net/neutron/+bug/1824138 ?
16:05:18 <openstack> Launchpad bug 1824138 in neutron "Fullstack QoS tests should not handle other tests port events" [High,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
16:05:36 <ralonsoh> exactly
16:05:46 <slaweq> ok, so it's not fixed yet
16:05:48 <ralonsoh> and the patch: https://review.openstack.org/#/c/651523/
16:06:01 <slaweq> please review this patch if You have some time, mlavalle :)
16:06:21 <slaweq> thx ralonsoh for fixing this bug :)
16:06:24 <ralonsoh> np
16:06:29 <mlavalle> slaweq: ack
16:06:31 <ralonsoh> we need to backport it to stein
16:07:04 <slaweq> I saw You also added the backport tag; will You take care of it once it's merged?
16:07:09 <ralonsoh> sure
16:07:16 <slaweq> thx a lot
16:07:24 <slaweq> ok, next one then
16:07:26 <slaweq> slaweq to report and debug yet another db migration functional tests issue
16:07:35 <slaweq> I reopened old bug https://bugs.launchpad.net/neutron/+bug/1687027 for it
16:07:36 <openstack> Launchpad bug 1687027 in neutron "test_walk_versions tests fail with "IndexError: tuple index out of range" after timeout" [High,Fix committed] - Assigned to Slawek Kaplonski (slaweq)
16:07:43 <slaweq> And sent a patch https://review.openstack.org/651371
16:07:58 <slaweq> it's merged now and I haven't seen the same issue again in the last couple of days
16:08:05 <slaweq> so hopefully it will be better now
16:08:21 <slaweq> but if You see something like that, please ping me :)
16:08:59 <slaweq> ok, next one
16:09:01 <slaweq> * mlavalle will send DNM patch which will add tcpdump in routers' namespaces to debug ssh issue
16:09:27 <mlavalle> https://review.openstack.org/#/c/653021/
16:09:43 <bcafarel> nice, so these failing-again tests were "just" because of a different raised exception? good to have that fixed
16:10:30 <slaweq> bcafarel: yes, it seems so :)
16:10:40 <slaweq> mlavalle: thx, nice clean patch :)
16:10:54 <slaweq> I hope we will spot this issue and be able to debug something from it
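For context, a minimal sketch of the kind of capture such a DNM patch could run inside a router's namespace (the namespace ID and output path below are placeholders, not taken from the actual patch):

    # capture all traffic inside the router's namespace and save it for later analysis
    sudo ip netns exec qrouter-<router-id> tcpdump -i any -nn -w /opt/stack/logs/qrouter.pcap &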
16:11:10 <slaweq> next one was:
16:11:12 <slaweq> slaweq will send a DNM patch to tempest to dump the router's namespace state when ssh fails
16:11:27 <slaweq> and I didn't have time to do that :/ sorry
16:11:37 <slaweq> I will add it for myself for this week
16:11:43 <slaweq> #action slaweq will send a DNM patch to tempest to dump the router's namespace state when ssh fails
16:12:12 <mlavalle> slaweq: add an action for me to recheck the tcpdump patch and analyze the output
16:12:17 <mlavalle> please
16:12:41 <slaweq> #action mlavalle to recheck tcpdump patch and analyze output from ci jobs
16:12:52 <slaweq> here You go :)
16:12:56 <mlavalle> :-)
16:13:25 <slaweq> and the last one (but not least)
16:13:28 <slaweq> njohnston to move wsgi jobs to the check queue as non-voting
16:14:12 <slaweq> I don't think he did it this week
16:14:26 <slaweq> I will assign it to him for next time so we don't forget
16:14:31 <slaweq> #action njohnston to move wsgi jobs to the check queue as non-voting
16:14:55 <slaweq> ok, that's all actions from last week
16:15:03 <slaweq> anything You want to add/ask?
16:16:30 <slaweq> ok, so lets move on
16:16:37 <slaweq> #topic Python 3
16:16:43 <slaweq> Stadium projects etherpad: https://etherpad.openstack.org/p/neutron_stadium_python3_status
16:16:58 <slaweq> I don't think there was any progress on it recently
16:17:55 <slaweq> I'm looking into this etherpad now and I see that most of those py27 jobs are still defined as legacy jobs, so we should also try to move them to zuulv3
16:18:36 <slaweq> I will probably have some time to work on those slowly after PTG
16:18:54 <mlavalle> ok
16:19:00 <slaweq> but I also think that for projects like midonet or odl we will need some help from the teams responsible for them
16:19:15 <mlavalle> yeap
16:19:19 <slaweq> as e.g. I don't have any knowledge about those projects, their configs and so on
16:19:30 <slaweq> and probably none of us has such knowledge
16:20:26 <slaweq> mlavalle: do You think it is reasonable to add such a (short) topic to the PTG etherpad?
16:20:38 <mlavalle> we can ask if yamamoto and manjeets can help
16:20:47 <mlavalle> and yes, let's discuss it in Denver
16:20:56 <slaweq> ok, I will add it to the etherpad then
16:20:58 <slaweq> thx
16:21:18 <slaweq> anything You want to add regarding python3?
16:21:31 <mlavalle> not from me
16:21:40 <slaweq> ok, lets move on then
16:21:43 <slaweq> next topic
16:21:45 <slaweq> #topic tempest-plugins migration
16:21:51 <slaweq> Etherpad: https://etherpad.openstack.org/p/neutron_stadium_move_to_tempest_plugin_repo
16:22:06 <bcafarel> just checking, the agenda could probably have a "stadium in general" section (even if it will probably be short)
16:22:26 <slaweq> bcafarel: good point, I can change it :)
16:22:52 <slaweq> so regarding tempest plugins, I just sent the first PS for the bgpvpn project today: https://review.openstack.org/652991
16:23:08 <slaweq> let's see how much I will be missing there
16:23:28 <slaweq> I also saw that bcafarel sent a new patch set for sfc today, right?
16:23:31 <bcafarel> and I finally sent the first WIP for SFC https://review.openstack.org/#/c/653012
16:23:33 <bcafarel> :)
16:23:55 <mlavalle> I updated my patch
16:24:22 <slaweq> one question which I want to ask here
16:24:26 <bcafarel> I kept the same layout as njohnston suggested in fwaas: neutron_tempest_plugin/fwaas/{api|scenario|...}
16:24:57 <slaweq> e.g. in bcafarel's patch I see that there is a neutron_tempest_plugin/sfc/services/flowclassifier_client.py file
16:25:17 <slaweq> I moved the bgpvpn_client.py file to the neutron_tempest_plugin/services/network/json/ directory
16:25:36 <slaweq> do You think we should do it in a consistent way across all patches?
16:25:43 <mlavalle> yes
16:25:50 <mlavalle> I originally did what slaweq did
16:26:10 <mlavalle> but then I saw what njohnston and bcafarel did and I think it is better
16:26:26 <mlavalle> This is my latest revision https://review.openstack.org/#/c/649373/
16:26:55 <mlavalle> I suggest we all stick to the same approach
16:27:05 <slaweq> ok, so I will move it back in my patch :)
16:27:12 <slaweq> and we will be consistent with it
16:27:21 <mlavalle> I also have a question....
16:27:24 <bcafarel> "everything related to a stadium project's tests in its own subfolder", right?
16:27:45 <mlavalle> yeap bcafarel
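To illustrate the convention being agreed on here, the repo layout would look roughly like this (the fwaas part follows the layout bcafarel quoted above; the sfc and bgpvpn entries are the analogous assumption):

    neutron_tempest_plugin/
        fwaas/
            api/
            scenario/
        sfc/
            api/
            scenario/
            services/        # e.g. flowclassifier_client.py
        bgpvpn/
            api/
            scenario/
            services/        # e.g. bgpvpn_client.py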
16:27:59 <mlavalle> going back to my question
16:28:21 <mlavalle> as you can see, Tempest is now attempting to run the vpnaas tests: http://logs.openstack.org/73/649373/5/check/neutron-tempest-plugin-vpnaas/2019e3f/testr_results.html.gz
16:28:44 <mlavalle> but they are skipped, because vpnaas is not configured in the tempest config
16:28:48 <mlavalle> where is this file?
16:29:06 <mlavalle> it's been driving me crazy for the past 30 minutes
16:29:37 <slaweq> are You asking about http://logs.openstack.org/73/649373/5/check/neutron-tempest-plugin-vpnaas/2019e3f/controller/logs/tempest_conf.txt.gz ?
16:30:24 <mlavalle> correct
16:31:15 <mlavalle> I think I have to add vpnaas as a service to the extensions, right?
16:31:53 <mlavalle> to pass this check https://review.openstack.org/#/c/649373/5/neutron_tempest_plugin/vpnaas/api/test_vpnaas.py@44
16:32:18 <mlavalle> where do I add it?
16:32:30 <slaweq> yes, to the neutron extensions list I think
16:32:52 <mlavalle> yeah, but where in the tempest plugin repo is that file?
16:33:32 <slaweq> https://github.com/openstack/neutron-tempest-plugin/blob/master/.zuul.yaml#L17
16:33:40 <slaweq> it is defined in .zuul.yaml file
16:33:54 <mlavalle> ok, thanks
16:34:05 <mlavalle> it was driving me crazy for the past 30 minutes
16:34:08 <mlavalle> ufff!
16:34:22 <slaweq> You should do it in a similar way to https://github.com/openstack/neutron-tempest-plugin/blob/master/.zuul.yaml#L304
16:34:37 <slaweq> so use the "standard" ones and add "vpnaas" to them
16:34:44 * bcafarel takes notes for next sfc patchset too
16:34:53 <slaweq> or maybe setting only vpnaas will be enough, I'm not sure
16:34:59 <mlavalle> ok, thanks!
16:35:03 <slaweq> yw
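A rough sketch of the kind of .zuul.yaml job variable being discussed, assuming the extensions are passed to devstack via NETWORK_API_EXTENSIONS as in similar jobs (the job name here is hypothetical and the extension list is abbreviated; check the linked .zuul.yaml for the exact layout):

    - job:
        name: neutron-tempest-plugin-vpnaas
        parent: neutron-tempest-plugin-api
        vars:
          devstack_localrc:
            # the "standard" neutron extensions (abbreviated here) plus vpnaas
            NETWORK_API_EXTENSIONS: "agent,binding,...,router,vpnaas"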
16:35:41 <slaweq> ok, can we move on to the next topic then?
16:35:56 <bcafarel> good for me
16:36:01 <slaweq> #topic Grafana
16:36:03 <mlavalle> yes please
16:36:09 <slaweq> #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:37:56 <slaweq> I don't see anything that has been going bad recently
16:38:16 <slaweq> in fact I think it's quite good compared to what it was a couple of weeks ago
16:38:59 <mlavalle> yes, I agree
16:39:11 <slaweq> I even went through some results of recent jobs today to find examples of failures, and I only found some issues with SSH (which are known)
16:39:23 <slaweq> some failures not related to neutron directly
16:39:46 <slaweq> and some failures e.g. in fullstack or functional jobs related to the patch they were running on
16:40:07 <slaweq> do You see anything You want to discuss now?
16:40:19 <mlavalle> no, it looks good overall to me
16:40:34 <mlavalle> what do you think bcafarel ?
16:41:03 <bcafarel> looks good to me too :)
16:41:16 <bcafarel> I will send a patch soon to update http://grafana.openstack.org/d/pM54U-Kiz/neutron-failure-rate-previous-stable-release?orgId=1 (and "older stable" too) to point to stein and rocky now that stein is out
16:41:16 <slaweq> ok
16:41:38 <slaweq> thx bcafarel :)
16:41:58 <slaweq> so, let's talk a bit about the tempest jobs now
16:41:58 <bcafarel> np, it should not be too complicated :)
16:42:00 <slaweq> #topic Tempest/Scenario
16:42:16 <slaweq> I saw one strange failure in http://logs.openstack.org/27/642527/4/check/neutron-tempest-plugin-api/9aeecb7/testr_results.html.gz
16:42:54 <slaweq> it is strange to me because it looks like it failed because the GET returned no body
16:43:04 <slaweq> But from logs it looks that all was fine: http://logs.openstack.org/27/642527/4/check/neutron-tempest-plugin-api/9aeecb7/controller/logs/screen-q-svc.txt.gz#_Apr_15_12_46_02_995043
16:43:31 <slaweq> and what is strangest to me is that it failed at line https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/admin/test_network_segment_range.py#L209
16:43:43 <slaweq> and it somehow passed the assertions in the lines above
16:43:57 <slaweq> but I have no idea how it is possible
16:44:26 <slaweq> maybe someone wants to take a look at it and will find something I missed there :)
16:44:35 <ralonsoh> I'll try to debug this tomorrow
16:44:44 <slaweq> ralonsoh: thx a lot
16:44:44 <ralonsoh> I'll ping you if I find something
16:44:48 <slaweq> great
16:45:03 <slaweq> I will add it as an action for You so we don't forget about it, ok?
16:45:10 <ralonsoh> ok
16:45:32 <slaweq> #action ralonsoh to debug issue with neutron_tempest_plugin.api.admin.test_network_segment_range test
16:45:34 <slaweq> thx a lot :)
16:45:55 <slaweq> and that was the only new case which I found recently and wanted to raise here
16:46:02 <mlavalle> cool
16:46:06 <slaweq> anything else You want to talk about?
16:46:13 <mlavalle> not from me
16:46:38 <bcafarel> nothing new either
16:46:47 <slaweq> ok, so last topic for today
16:46:49 <slaweq> #topic Periodic
16:47:01 <slaweq> I have only one question regarding periodic jobs
16:47:10 <slaweq> we are still running those jobs with python 3.5
16:47:16 <slaweq> should we move them to py36?
16:47:24 <mlavalle> i think so
16:47:28 <slaweq> there was email from gmann about that recently
16:47:45 <bcafarel> probably yes, as they are getting dropped in "normal" jobs
16:47:54 <slaweq> ok, so I will propose a patch for that this week
16:48:10 <slaweq> #action slaweq to switch periodic jobs from py35 to py36
16:48:17 <gmann> yeah, we need to move the periodic jobs to py36. thanks slaweq
16:48:28 <slaweq> gmann: thx for confirmation :)
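For reference, the switch would be roughly this kind of change in the periodic pipeline definition (the job names follow the usual openstack-tox-pyXX convention and are illustrative, not copied from the eventual patch):

    - project:
        periodic:
          jobs:
            - openstack-tox-py36  # replaces openstack-tox-py35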
16:48:48 <slaweq> ok, that's all from my side for today
16:49:05 <slaweq> if You don't have anything else I will give You 10 minutes back today
16:49:19 <mlavalle> I'll take those 10 minutes
16:49:29 <mlavalle> how about you, ralonsoh and bcafarel?
16:49:41 <bcafarel> 10 minutes back sounds great indeed :)
16:49:42 <ralonsoh> nothing from me
16:49:49 <slaweq> Happy Easter and see You next week then :)
16:49:53 <slaweq> thx for attending
16:49:57 <slaweq> #endmeeting