18:02:46 #startmeeting networking_policy
18:02:46 Meeting started Thu Nov 2 18:02:46 2017 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:47 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:49 The meeting name has been set to 'networking_policy'
18:02:53 hello!
18:02:57 annakk: hi
18:03:08 hi annakk!
18:03:09 not much of an agenda from me today
18:03:22 i was hoping to have made more progress with my pike-prep patches
18:03:38 unfortunately there are still some UTs failing with the 2nd patch
18:04:00 and then there is a timeout issue with the newton branch
18:04:15 (ideally we shouldnt be dealing with the newton branch any more!)
18:04:46 annakk: wanted to check with you
18:05:12 but update you first about our meeting in SJ (involving rkukura and tbachman while they visited here last week)
18:05:37 we tried to scope out the work required to sync with Pike
18:05:48 some of that already started
18:06:06 tbachman would be looking at some of the apic repos if some work is needed
18:06:20 sounds good
18:06:31 annakk: you had mentioned that you had identified some potentially tricky item for Pike sync?
18:07:14 i remember there were more OVO in pike, but I don't recall anything particularly tricky
18:07:33 annakk: ah okay, that sounds encouraging
18:07:45 i was hoping that the OVO work was transparent to us
18:07:59 but we might need to do some work in the DB plugin, i havent looked into it
18:08:29 we hit some ovo-related issues in ocata
18:08:56 annakk: okay, can you summarize those if you recall them?
18:09:07 i think it was related to partial ovo in neutron
18:09:34 annakk: okay, i believe the OVO work was completed in Pike?
18:09:51 I'm not sure
18:10:02 I think all core objects are done
18:10:18 hopefully we'll be better off this time
18:10:23 annakk: okay
18:10:56 the current set of failures i am seeing in my patch are a bit weird, something seems to be failing in the API layer (response path) when creating address scopes
18:11:37 will reach out to you guys if i cant make more progress, but at this point i need to give it a little more time
18:12:05 ok
18:12:06 any suggestions on how to make progress on getting past the timeout issue for py27 in the stable/newton branch?
18:12:21 ok, i think address scopes and subnets were the problematic ovo areas last time
18:12:24 this is the patch: #link https://review.openstack.org/#/c/516766/
18:12:38 i tried disabling some tests, still failing
18:12:56 the root cause is that the job is not getting resources, i.e. worker threads
18:13:21 but we cant control that
18:13:51 i had made this observation before that EoL’ed branches seem to get fewer resources in the gate jobs
18:13:59 (which sounds logical to me)
18:14:03 is this a case where certain (or random) UTs are timing out, or some timeout in the setup or something?
18:14:11 we are not seeing the timeout issue in the ocata or master branches
18:14:15 no ideas from me.. (other than increase the timeout even more :))
18:14:19 lol
18:14:26 annakk: increasing the timeout does not help
18:14:35 since its not one particular test that times out
18:14:43 i already tried increasing the timeout
18:14:49 rkukura: thats the tough part
18:14:51 The testr.conf timeout is only for individual UTs, right?
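[Editor's note: the timeout rkukura asks about here is the per-test OS_TEST_TIMEOUT knob that OpenStack projects of this era threaded through .testr.conf, which bounds each individual test case rather than the whole gate job — which is why raising it does not help a run that is starved of worker threads. A typical fragment looked roughly like the sketch below; the 60-second default is illustrative, not necessarily this repo's value.]

```ini
[DEFAULT]
; OS_TEST_TIMEOUT caps each *individual* test case (enforced by the
; oslotest/fixtures base test class), not the job's overall wall-clock time.
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
             OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
             OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
             ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
```

[The job-level timeout that is actually firing lives in the gate job definition outside the repo, so the in-repo levers are limited to reducing the test count or the concurrency, as discussed below.]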
18:15:02 rkukura: yes
18:15:15 rkukura: the logs dont get persisted at the time the timeout happens
18:15:23 so you cant tell which test was executing at that time
18:16:06 i do see that the tests which regularly take the longest to execute, are actually completing in the timed out runs
18:16:55 i think the timeout just happens on the entire job, meaning if the job only gets 4 threads (or lesser), the job is not going to complete in 35 mins or so
18:17:10 if you see the master or ocata the jobs get 8 worker threads
18:17:36 SumitNaiksatam: that makes sense, I guess
18:17:47 so i guess the obvious thing to do would be to disable a few more tests :-(
18:17:51 Is there any easy way to try with most tests disabled?
18:17:59 rkukura: yeah just saying
18:18:12 Maybe just leave one module enabled?
18:18:19 i disabled the most obvious big chunk of tests
18:18:23 rkukura: okay
18:18:38 i really hope we dont have to keep going back to newton
18:18:46 but i guess people are still using newton
18:18:53 * tbachman nods
18:19:09 the patch in question here is #link https://review.openstack.org/#/c/515203/
18:19:25 which i believe is in response to some customer issue
18:19:48 anyway, i guess we have discussed this enough
18:20:05 oh just remembered, annakk weren’t you going to be in Sydney?
18:20:12 yes
18:20:28 * tbachman wonders if/when annakk is getting on a plane
18:20:42 should be pretty soon, if you don’t want to knackered when you start the summit
18:20:50 s/to/to be/
18:20:50 tomorrow evening
18:20:56 annakk: awesome
18:21:11 annakk: would love to hear about it when you return!
18:21:14 i am sure you are looking forward to it, will be fun!
18:21:21 tbachman: +1
18:21:51 annakk: have you attended the summit before, or is this your first one?
18:22:19 I've been once in vancouver, but I wasn't involved with openstack back then
18:22:33 annakk: right, i was guessing you might have attended that
18:22:48 vancouver was really nice, loved it!
18:22:54 i hope to see you all here in may
18:23:05 annakk: oh sure, hopefully
18:23:16 we might need to use this template #link https://www.openstack.org/blog/2017/10/dear-boss-i-want-to-attend-openstack-summit-sydney/
18:23:21 :-P
18:23:40 rkukura you are not going to Sydney, are you?
18:23:50 no
18:24:04 would love to go diving
18:24:15 rkukura: yeah, absolutely!
18:24:27 before it disappears
18:24:31 right
18:24:43 you made it to the caribbean and napa in time i guess!
18:24:59 maybe I shouldn’t go anywere
18:25:02 anywhere
18:25:04 * tbachman will have to leave in about 5-10 minutes
18:25:13 tbachman: sure
18:25:19 anything else we need to discuss today?
18:25:31 not from me
18:25:42 i was wondering about the failing jobs?
18:26:02 annakk: yes its on my todo list to fix those
18:26:06 sorry about the delay
18:26:16 the best thing would have been to migrate them to the devstack plugin
18:26:30 np
18:26:50 good point, i should have brought it up
18:26:58 those jobs are running with patched devstack
18:27:07 i just wanted to ask if we know what's wrong with them
18:27:08 the aim job uses the devstack plugin (and passes)
18:27:28 annakk: i do some patching to checkout the appropriate branches
18:27:33 that is somehow failing
18:27:49 its not even completing the devstack install
18:28:01 so its not the case that the tests are failing
18:28:16 annakk: you use the devstack plugin for your vmware job, right?
18:28:21 yes
18:28:43 yeah, so the ideal thing would be for us to migrate the setup for these jobs to use the devstack plugin
18:28:50 but thats a little more work
18:28:58 i will see how i can get this fixed at the earliest
18:29:18 annakk: what is time sensitivity on getting the pike branch ready?
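[Editor's note: the devstack plugin migration discussed above would replace the job's ad-hoc patching of devstack with a single `enable_plugin` line in the job's local.conf, which clones the repo and runs its devstack/plugin.sh hooks at each devstack phase. A minimal sketch follows; the repo URL and branch are illustrative assumptions, not the actual job configuration.]

```ini
[[local|localrc]]
; enable_plugin <name> <git-url> [branch] tells devstack to clone the repo
; and invoke its devstack/plugin.sh during stack.sh -- no patched devstack.
enable_plugin group-based-policy https://git.openstack.org/openstack/group-based-policy stable/pike
```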
18:30:06 (i do realize that as a matter of discipline we should be getting it ready asap)
18:30:32 I need to check, I think it's not urgent since it won't make it to next release anyway
18:30:50 annakk: okay, good to know
18:31:10 but we will try to sync this up at the earliest
18:31:15 thanks all for joining today
18:31:26 thanks!
18:31:26 annakk: enjoy your time in Sydney!
18:31:30 thanks!
18:31:31 SumitNaiksatam: thanks!
18:31:38 rkukura: tbachman: thanks
18:31:40 bye
18:31:46 #endmeeting