15:03:41 #startmeeting third-party
15:03:42 Meeting started Mon May 23 15:03:41 2016 UTC and is due to finish in 60 minutes. The chair is anteaya. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:46 The meeting name has been set to 'third_party'
15:03:49 hello
15:03:52 thank you lennyb
15:03:55 :)
15:04:10 o/
15:04:11 infra is having an operating system upgrade sprint this week
15:04:15 I was in mid-patch
15:04:17 anteaya, I see at #link https://wiki.openstack.org/wiki/Meetings/ThirdParty that there is still an 0800 utc meeting listed
15:04:25 thanks for the tap on the shoulder lennyb
15:04:43 lennyb: thank you
15:04:48 I'll clean that up
15:05:02 anteaya: you prevented me from doing it twice a week :)
15:05:19 lennyb: you were getting so good at it
15:05:21 :)
15:05:34 infra is having a sprint this week: https://wiki.openstack.org/wiki/VirtualSprints#Infra_Trusty_Upgrade
15:05:38 newcomers welcome
15:05:56 does anyone have anything they would like to discuss this week?
15:06:32 if it's only you and me again, I will let you go to infra
15:06:42 mmedvede: is here too
15:06:50 mmedvede: thanks for doing the meeting last week
15:07:12 anteaya: you're welcome
15:07:13 mmedvede: did you use/work with docker?
15:07:15 we just disabled a personal account leaving CI comments on cinder patches: https://review.openstack.org/#/c/319965/1
15:07:25 lennyb: no, I have not yet
15:07:25 so if you see it show up again, let us know
15:07:32 good morning
15:07:45 lennyb: you are doing investigations with docker, are you not?
15:07:48 asselin: morning
15:08:22 anteaya: yes. we are checking if we can move our ci to docker.
15:08:48 how is your investigation coming along?
15:09:21 too many issues :(
15:10:14 :(
15:10:18 since it's not a 'pure vm' and it uses the host kernel, there are a lot of modules and libs that are in conflict. but we are still on it
15:10:27 go you
15:11:45 anyone else doing anything with containers or looking at doing anything with containers?
15:12:13 i am in contact with wznoinsk
15:12:19 awesome
15:12:31 is wznoinsk also doing some of this kind of work?
15:12:35 we'd like to, but right now too swamped with just maintaining existing CI
15:13:10 mmedvede: I understand
15:13:27 lennyb: did you end up posting about this to a mailing list?
15:13:41 I think we discussed you doing so but I can't remember if you did
15:13:42 anteaya: no, not yet.
15:13:50 ah did we discuss you doing so?
15:14:04 anteaya: yes.
15:14:07 ah okay
15:14:15 glad my memory is still intact
15:14:57 so I do think a mailing list post would draw from a wider audience to see who else is going this direction
15:15:06 when you feel like posting something
15:15:19 anteaya: sure
15:15:22 thanks
15:15:30 any more on the topic of containers?
15:17:07 does anyone have any other topic they would like to discuss today?
15:17:59 does anyone have any objection to me closing the meeting?
15:18:03 does anyone have problems with their zuul-merger recently?
15:18:14 mmedvede: thank you
15:18:20 mmedvede: what kind of?
15:18:31 it often hangs while trying to get nova patches
15:18:57 mmedvede: have you something you can share in a paste?
15:19:17 i.e. you'll notice that the merge queue is not getting smaller, and then when you check the process list, one of the 'ssh review.o.o' subprocesses is hanging/taking too long
15:19:43 mmedvede: do you know when this began?
15:19:53 I saw it a while ago ( about a month ago )
15:19:53 anteaya: I noticed about a week ago
15:20:03 hmmm
15:20:20 anteaya: I suspect it could be either our connection, or review.o.o being overwhelmed
15:20:23 have either of you looked at zuul bugs to see if anything has been filed?
15:20:32 nope
15:20:41 mmedvede: either way it would be good to document what you are seeing
15:20:58 mmedvede: would you be willing to file a zuul story on it?
15:20:58 anteaya: we seem to have issues if the load is low, ssh connections seem to time out but not get closed. Though, we're still investigating this issue so I wouldn't say for sure it's a zuul bug, yet
15:21:14 ociuhandu: okay fair enough
15:21:15 hi all, btw :)
15:21:21 ociuhandu: hello
15:21:37 ociuhandu: is your system the same system mmedvede is talking about?
15:21:42 or a different one?
15:21:47 anteaya: ok. Asked in case someone sees it. We also have more patches now, I am trying to add more zuul-mergers
15:22:03 different one
15:22:06 mmedvede: yup, makes sense
15:22:10 it is definitely not what ociuhandu is talking about, as it happens when there are many patches
15:22:34 okay yeah if different systems are seeing issues with zuul mergers then it is worth documenting something somewhere
15:22:40 although I did see during slow times zuul also stops enqueueing patches sometimes
15:22:53 since the zuul, zuul merger, zuul launcher work is ongoing
15:23:16 worth noting we are pinning zuul, so any new work should not affect it
15:23:19 is there a thread on the infra mailing list about this?
15:23:32 or would that be the best place to document some things?
15:23:44 mmedvede: I saw something similar, also the /var/log/zuul/*log files have a size of 0 in such cases
15:23:46 I suspect jeblair is interested in hearing your experiences
15:23:56 and would like a chance to hear details and follow up
15:24:02 no, I did not want to start a thread until I know it is not something with upstream
15:24:07 if load is an issue then that needs to be conveyed
15:24:38 mmedvede: okay, well if 3 different systems are having zuul issues
15:24:51 I do think that is something that jeblair would like to know about
15:25:03 how should we share that information?
15:25:36 anteaya: I know that the "solution" would be to update to the latest zuul first
15:25:48 mmedvede: okay fair enough
15:25:57 in our case, we'll start a thread once we confirm the cause, as we have 2 environments in 2 datacenters and while both have the issue, one of them is showing it more frequently (i.e. once every 1-2 days, vs. less than once a week on the other one)
15:26:00 because we pin, anything I can provide would probably be of little value
15:26:13 unless the problem is due to review.o.o
15:26:14 hmmmm
15:26:35 mmedvede: it might be, but we have no way of investigating without having something filed
15:26:46 if it is due to review.o.o, I would expect more people to have that problem
15:26:53 I'm not sure where jeblair is this week so don't know when/if he will see these meeting logs
15:27:03 mmedvede: maybe they do
15:27:14 but maybe like you they don't want to say anything
15:27:32 hehe
15:27:33 so this is why I think at least a thread on the infra mailing list would be useful here
15:28:04 anteaya: ok, I'll try a couple of things first, and might start a thread
15:28:11 mmedvede: thank you
15:28:36 I appreciate you mentioning the issue
15:28:45 anything more on this topic?
15:29:28 any objection to me closing the meeting?
15:29:42 oh sorry, any other topic anyone would like to discuss?
15:29:48 I forgot to ask that question
15:29:49 not from me. Thanks ociuhandu, lennyb for the information
15:30:05 mmedvede: thanks for bringing up the topic
15:30:38 no other points from me, thanks
15:30:43 ociuhandu: thank you
15:30:47 nope
15:30:55 now I'll ask, does anyone object to me closing the meeting?
15:31:16 no
15:31:17 no, see you all next week
15:31:23 thank you all
15:31:32 I appreciate your attendance and participation
15:31:36 see you next week
15:31:39 #endmeeting
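
For anyone who wants to check for the zuul-merger hang described in the meeting (the merge queue stops shrinking while an 'ssh review.o.o' subprocess runs far too long), a minimal diagnostic sketch follows. It is not part of zuul or the infra tooling; the 600-second threshold and the review.openstack.org hostname (taken as the usual expansion of review.o.o) are assumptions for illustration, and the script simply scans the host's process list the way mmedvede described doing by hand.

#!/usr/bin/env python3
"""Hypothetical check for hung zuul-merger ssh subprocesses.

Sketch of the manual check described in the meeting: the merge queue
stops shrinking and an 'ssh review.o.o' child process of zuul-merger
keeps running without finishing.
"""

import subprocess

# Arbitrary threshold for illustration: treat an ssh process older than
# ten minutes as suspicious.
STUCK_AFTER_SECONDS = 600

# Assumed expansion of the review.o.o shorthand used in the discussion.
GERRIT_HOST = "review.openstack.org"


def find_stuck_ssh(threshold=STUCK_AFTER_SECONDS):
    """Return (pid, elapsed_seconds, command) for long-running gerrit ssh processes."""
    # etimes = elapsed seconds since the process started (Linux procps).
    out = subprocess.check_output(["ps", "-eo", "pid,etimes,args"], text=True)
    stuck = []
    for line in out.splitlines():
        fields = line.strip().split(None, 2)
        if len(fields) < 3 or not fields[0].isdigit():
            continue  # skip the header line and anything malformed
        pid, etimes, args = fields
        if "ssh" in args and GERRIT_HOST in args and int(etimes) > threshold:
            stuck.append((int(pid), int(etimes), args))
    return stuck


if __name__ == "__main__":
    for pid, elapsed, cmd in find_stuck_ssh():
        print("possible hung merger subprocess: pid=%d elapsed=%ds cmd=%s"
              % (pid, elapsed, cmd))

Run by hand when the merge queue looks stuck, or from cron, this would surface the PIDs of long-running gerrit ssh subprocesses so they can be investigated or killed before the merger backs up further.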