15:00:22 <anteaya> #startmeeting third-party
15:00:23 <openstack> Meeting started Mon Mar 23 15:00:22 2015 UTC and is due to finish in 60 minutes.  The chair is anteaya. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:27 <openstack> The meeting name has been set to 'third_party'
15:00:29 <anteaya> hello
15:00:54 <anteaya> do raise your hand if you are here for the third party meeting
15:01:00 <ameade> o/
15:01:05 <luqas> o/
15:01:11 <rhe00> hi
15:01:15 <akerr> o/
15:01:18 <asselin_> o/
15:01:21 <anteaya> hello
15:01:24 <ctlaugh> hi
15:01:27 <anteaya> how is everyone today?
15:01:37 <ctlaugh> great!
15:01:42 <anteaya> glad to hear it
15:01:44 * ameade sleepy
15:01:53 <anteaya> I can understand that
15:02:02 <anteaya> who has someting they would like to discuss?
15:02:07 <anteaya> something
15:02:17 <ctlaugh> I have some Jenkins-related questions...
15:02:26 <anteaya> okay
15:02:31 <anteaya> ctlaugh: why don't you start
15:02:52 <kaisers> Hi
15:02:59 <asselin_> my ci's been failing since around 3 am pacific time
15:03:10 <anteaya> hi kaisers
15:03:15 <anteaya> asselin_: hmmm
15:03:17 <ctlaugh> Ok, mainly looking for suggestions on what to try to solve problems where a running test dies with this: "hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel"
15:03:22 <wznoinsk> o/
15:03:29 <anteaya> asselin_: let's look at that after ctlaugh
15:03:59 <anteaya> does anyone have any suggestions for ctlaugh?
15:04:03 <ameade> ctlaugh: that's when a jenkins job was aborted, no?
15:04:04 <rhe00> ctlaugh: I saw that when jenkins tried to start a second test job ona node that had already run a job
15:04:19 <rhe00> make sure you have the zuul script set up to only run a single job
15:04:31 <ctlaugh> I have been able to get my zuul, nodepool configs working properly, and things will run along for the most part, but then I'll have the tempest runs (or sometimes even in the middle of devstack) fail like that.
15:05:01 <asselin_> when that happens, can you still ping the slave?
15:05:25 <ctlaugh> rhe00: I'll check, but I think it's only 1 job per slave
15:06:24 <anteaya> ctlaugh: can we hear from asselin_ while you check?
15:06:27 <ameade> ctlaugh: i think we had a similar issue and had to update our jenkins gearman plugin
15:06:29 <rhe00> ctlaugh: do you have these lines in your zuul layout: http://paste.openstack.org/show/195400/
15:07:05 <ctlaugh> asselin_: not sure -- I can try if I see it happen while some jobs are running right now.  I'm seeing these after having left my setup running all weekend.
15:07:23 <amotoki> do you configure gearmann jenkins plugin to offline the node after jenkins job finishes.
15:07:26 <ctlaugh> rhe00: yes
15:07:52 <akerr> ctlaugh: to expand on ameade's comment, the issue we had was gearman's plugin was too older to support the offline-node option.  Once it was updated we stopped getting nodes used multiple times
15:08:24 <amotoki> I think gearman plugin or zuul function has a configuration to offline a slave. It help your problem.
15:08:45 <ameade> ctlaugh: whats your gearman plugin version?
15:09:10 <ctlaugh> let me check
15:09:33 <ctlaugh> 0.0.7
15:09:42 <ctlaugh> shows 0.1.1 is available
15:09:58 <amotoki> another way to offline a slave node is to use Groovy postscript.
15:10:11 <akerr> I think 0.0.4 added the offline node support
15:10:24 <ctlaugh> I haven't updated any of the plugin versions after installation using os-ext
15:10:27 <amotoki> I execute "manager.build.getBuiltOn().getComputer().setTemporarilyOffline(true)" as Groovy postscript.
15:11:39 <ctlaugh> ctlaugh: I have started seeing this more only during tempest now, but last week, it would sometimes happen even before the devstack install was able to start.
15:12:03 <ameade> we did the groovy hack before updating gearman
15:12:16 <anteaya> ctlaugh: are you able to work on some of what others have provided and then report your status next week?
15:12:19 <ameade> we are on 0.1.1 for gearman plugin, not sure what we were at before
15:12:43 <ameade> 0.1.0 Update to work with Jenkins LTS ver 1.565.3 may have fixed the issue for use since we are on a later jenkins version
15:12:53 <anteaya> ctlaugh: it looks like you have a few directions, I don't think you are going to be able to try them all before the end of the meeting
15:12:53 <ctlaugh> ctlaugh: Yes, definitely.  Things are working a LOT better now -- our setup is mostly running, and the only real failures we are seeing are these jenkins exceptions.
15:13:03 <anteaya> ctlaugh: are you okay if we move on now?
15:13:08 <ctlaugh> anteaya: correct -- I'll need a bit
15:13:14 <ctlaugh> anteaya: thank you
15:13:15 <anteaya> ctlaugh: okay to move on?
15:13:16 <rhe00> ctlaugh: can you paste your layout.yaml?
15:13:24 <ctlaugh> anteaya: yes - move on, please
15:13:28 <anteaya> great
15:13:49 <anteaya> thanks all for your contributions, hopefully ctlaugh will have some success following up on those
15:13:50 <anteaya> moving on
15:13:57 <anteaya> asselin_: you had an item
15:14:23 <asselin_> yea...devstack is failing building some wheels. not sure why. STarted in the middle of the night.
15:14:28 <asselin_> http://15.126.198.151/97/164697/6/check/3par-fc-driver-master-client-pip-eos10-dsvm/5af4187/logs/devstacklog.txt.gz#_2015-03-23_10_43_00_435
15:15:04 <ctlaugh> rhe00: http://paste.ubuntu.com/10661423/
15:15:42 <anteaya> okay this moring in scrollback sdague mentioned a problem crept up on friday that affected setuptools
15:16:04 <anteaya> asselin_: I have not gone back to friday's scrollback to read what took place nor what the fix is
15:16:52 <anteaya> is anyone else experiencing what asselin_ is?
15:17:04 <kaisers> anteaya: currently not
15:17:11 <ameade> asselin_:  we havent hit the issue
15:17:23 <akerr> not the same issue really, but we have issues building the cryptography packages on anything smaller than 8G rackspace nodes
15:17:34 <asselin_> ok...maybe it's a network issue on our end....
15:17:41 <asselin_> akerr, these are 8GB nodes
15:18:11 <asselin_> I was thinking something got released this morning, but not sure what.
15:18:27 <akerr> asselin_: k, our problem usually manifests as segmentation errors and general lack of memory so I don't think its related
15:18:29 <anteaya> also I am not sure if it is related but there currently is a patch to global-requirements to bump the crypto version: https://review.openstack.org/#/c/164289/
15:18:31 <ameade> yeah looks either network or something wrong with the download mirror
15:19:07 <asselin_> yeah...maybe the package on the mirror is 'bad'
15:19:20 <anteaya> asselin_: that might be a place to begin
15:19:27 <asselin_> ok thanks
15:19:31 <anteaya> thank you
15:19:37 <anteaya> asselin_: let us know what you discover
15:19:44 <asselin_> sure
15:19:49 <anteaya> anything more for asselin_?
15:20:00 <anteaya> let's move on then
15:20:12 <anteaya> does anyone have anything else they would like to discuss?
15:20:46 <kaisers> If nobody else has, i could:
15:20:48 <rhe00> anteaya: would openstack-thirdparty be a good channel to continue discussion on these issues that people are having. in case the rest of us can help?
15:21:04 <wznoinsk> asselin_: a long shot but maybe bdist_wheel doesn't respect your proxy settings
15:21:08 <anteaya> is there an openstack-thirdparty channel?
15:21:33 <rhe00> anteaya: yes
15:21:43 <anteaya> oh okay, well discuss away
15:21:56 <anteaya> I'm not in it, I'm fragmented enough as it is
15:22:12 <anteaya> you are welcome to discuss anythign you like at any time
15:22:13 <ameade> i'll creep the channel now :)
15:22:16 <kaisers> :)
15:22:19 <kaisers> me too
15:22:21 <asselin_> wznoinsk, I will investigate that. thanks looks like a potential
15:22:22 <anteaya> do as you please
15:22:28 <rhe00> anteaya: not sure what it is for, really. I logged on and at some point another nick asked if that was the place to ask questions
15:22:40 <rhe00> I pointed him to openstack-infra at that time
15:22:46 <anteaya> well that is rather the point
15:22:59 <anteaya> since people end up in infra anyway
15:23:08 <anteaya> but if you want to talk there, noone is stopping you
15:23:16 <anteaya> but I can't spend any time tehre
15:23:28 <anteaya> I've stretched myself too thin as it is
15:23:34 <anteaya> so kaisers
15:23:42 <anteaya> did you want to mention something?
15:23:44 <kaisers> anteaya: ok, thanks
15:23:48 <kaisers> I have random httplib.ResponseNotReady exception in setup and teardown of tempest tests.
15:24:11 <kaisers> have been looking into this for quite some time now but i could not find anything wrong in the code.
15:24:21 <kaisers> openstack-infra hinted me on checking the configuration
15:24:30 <kaisers> as in everything :)
15:24:43 <kaisers> Q: Has anybody seen something like this?
15:25:15 <kaisers> I found a hint on similiar issues (not openstack related) that found the cause in some firewall settings. That's what i'm testing right now.
15:25:26 <kaisers> Just wanted to ask if anybody else has seen this...
15:25:53 <anteaya> is this ringing any bells for anyone?
15:26:06 <asselin_> no, never saw that
15:26:08 <ameade> haven't hit this one
15:26:26 <rhe00> I don't recall seeing that one
15:26:28 <ctlaugh> not yet
15:26:33 <kaisers> ok, thanks :)
15:26:41 <anteaya> kaisers: thanks for asking
15:26:43 <rhe00> can you paste a stack trace?
15:26:53 <kaisers> rhe00: yep
15:27:09 <kaisers> will take moment, please continue
15:27:28 <anteaya> does anyone else have anything else they would like to discuss today?
15:27:33 <luqas> hi, yes
15:27:38 <anteaya> hi luqas
15:27:40 <anteaya> go ahead
15:27:47 <luqas> a question on neutron moving api test from tempest to neutron
15:28:10 <luqas> and how this affects third party tests
15:28:22 <luqas> running on neutron patches
15:28:49 <anteaya> okay good question
15:28:51 <kaisers> rhe00: This test run hat ResponseNotReady all over the place: http://176.9.127.22:8081/refs-changes-63-165763-1/
15:28:55 <luqas> should we still run network/api tests from tempest or from neutron?
15:28:58 <anteaya> I don't know the answer
15:29:07 <anteaya> luqas: right, you need to know
15:29:31 <anteaya> luqas: can you make the neutron meeting tomorrow at 1400?
15:29:35 <kaisers> s/hat/had/
15:29:49 <luqas> anteaya: yes, I think so
15:30:01 <anteaya> luqas: great, I'm planning on being there as well
15:30:14 <anteaya> luqas: let's see if we can get you an answer
15:30:25 <anteaya> luqas: do you have a patch url that moves the test?
15:30:31 <luqas> anteaya: perfect, thanks
15:30:41 <anteaya> it helps to have code to give people context for the question
15:30:51 <luqas> anteaya: not right now
15:31:07 <luqas> but I can look for it later
15:31:10 <anteaya> luqas: okay how did you conclude they are moving the apit test?
15:31:17 <anteaya> ah so there is a patch
15:31:28 <anteaya> great yes, let's find that and take the patch url to the meeting
15:31:43 <anteaya> that increases the chances we will get an actionable response
15:31:44 <luqas> from an email form maru newby subject: [openstack-dev] [qa][neutron] Moving network api test development to Neutron repo
15:31:59 <anteaya> ah okay great, we can take the mailing list link
15:32:16 <anteaya> does everyone know how to find mailing list links that are publicly accessable?
15:32:23 <anteaya> they are all at lists.openstack.org
15:32:35 <anteaya> then you can find the list and search the archives
15:32:43 <anteaya> then post the link of the mailing list post
15:32:51 <anteaya> which is publicly accessable
15:32:57 <anteaya> very helpful in meetings
15:33:10 <amotoki> i have one question.
15:33:13 <anteaya> luqas: so we will find that post and take that to the meeting
15:33:20 <anteaya> amotoki: go ahead
15:33:25 <amotoki> I would like to confirm what is the official way to share CI status (e.g., power outage, local machine problem, some troubles of CI failures...)
15:33:38 <amotoki> third party wiki? third-party announce ml?
15:33:39 <luqas> I found the link to the email: https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg48304.html
15:33:56 <anteaya> luqas: awesome good work
15:34:16 <anteaya> luqas: are we okay to move on to amotoki's question?
15:34:26 <luqas> anteaya: sure
15:34:29 <anteaya> thanks
15:34:33 <anteaya> amotoki: https://wiki.openstack.org/wiki/ThirdPartySystems
15:35:01 <amotoki> If we have any status update, we should update corresponding page. right?
15:35:13 <anteaya> this is the list of all the third party ci accounts that have read the infra requirements and have indicated they are trying to comply by following them
15:35:18 <anteaya> amotoki: yes
15:35:22 <amotoki> my motivation of the question is to share this information among not only thirdparty folks but also all people including PTLs or some other menbers?
15:35:28 <anteaya> amotoki: is your ci account listed https://wiki.openstack.org/wiki/ThirdPartySystems
15:35:37 <amotoki> yes, of course.
15:35:44 <anteaya> amotoki: what is your system?
15:35:54 <amotoki> anteaya: my system is NEC CI.
15:36:14 <anteaya> update this page as you see fit: https://wiki.openstack.org/wiki/ThirdPartySystems/NEC_CI
15:36:20 <amotoki> I see similar questions several times in the mailing list, and just would like to confirm again for clarification.
15:36:25 <anteaya> and share that link with whomever you need to
15:36:30 <anteaya> yes
15:36:40 <amotoki> yes, thanks for the clarification.
15:36:41 <anteaya> we tell folks and tell folks and they don't pay any attention
15:36:47 <anteaya> so thanks for listening
15:36:49 <anteaya> :)
15:37:06 <amotoki> we might be better to update this information to https://wiki.openstack.org/wiki/ThirdPartySystems
15:37:19 <anteaya> amotoki: what do you mean?
15:38:08 <amotoki> I would like to add what you answered now to the top of ThirdPartySystem wiki page.
15:38:43 <anteaya> it is already at http://ci.openstack.org/third_party.html#requirements
15:39:05 <anteaya> All accounts must have a wikipage entry. Follow the instructions on the ThirdPartySystems wiki page to add your system. When complete, there should be a page dedicated to your system with a URL like: https://wiki.openstack.org/wiki/ThirdPartySystems/Example.
15:39:07 <anteaya> All comments from your CI system must contain a link to the wiki page for your CI system.
15:39:22 <amotoki> ah... I see.
15:39:24 <anteaya> what do you need to add?
15:39:33 <anteaya> more instructions don't help people
15:39:42 <anteaya> they just get more confused
15:39:54 <anteaya> we grew faster than our culture could keep up
15:40:09 <anteaya> now people just do random things and others copy the random things
15:40:18 <amotoki> I sometimes forget to check if the document on ci.openstack.org is updated...
15:40:36 <anteaya> the document on ci.openstack.org is fairly stable
15:40:40 <anteaya> on purpose
15:40:56 <anteaya> as any change doesn't actually have the intended effect
15:41:01 <anteaya> it just makes everyone panic
15:41:06 <anteaya> and do random things
15:41:10 <anteaya> which then get copied
15:41:18 <amotoki> totally agree.
15:41:23 <anteaya> so thank you for asking
15:41:25 <wznoinsk> anteaya: to come to the meeting asking for permission to comment is now a requirement I see :-)
15:41:45 <anteaya> please update your system page and share your updated system page with whoever you need to
15:42:25 <anteaya> wznoinsk: well I didn't add that
15:42:31 <anteaya> and I disagree it is a requirement
15:42:42 <anteaya> as a requirement if not followed gets your system disabled
15:42:55 <anteaya> and that if not followed won't get your system disabled from me
15:43:34 <wznoinsk> I think it would help for all these lost souls to avoid annoying any of the projects teams tho
15:43:52 <anteaya> I honestly don't know what would help anymore
15:44:15 <anteaya> since everything annoys the project teams
15:44:22 <anteaya> and I have been acting as mediator
15:44:36 <wznoinsk> you're right in saying to much information is bad for people, it's a matter of leveraging more info vs. how much confusion/question/ml traffic it would generate if not added
15:44:41 <anteaya> but if folks are going to add random requirements to the list, well I can't support them
15:44:57 <anteaya> the point is this group is not cohesive
15:45:09 <anteaya> you can't do anythign with a group that is not cohesive
15:45:23 <anteaya> first you have to have some common sense of what the group is or means
15:45:31 <anteaya> to get it moving in a common direction
15:45:35 <anteaya> that never happened
15:45:44 <anteaya> and it appears still isn't happening
15:45:51 <anteaya> so I will do what I say I will do
15:46:03 <anteaya> and I can't keep getting stretched in different directions
15:46:24 <anteaya> some people are cohesive
15:46:29 <anteaya> and I enjoy working with them
15:46:41 <anteaya> but unfortunately they are still the minority
15:46:52 <anteaya> so I will work with the minority that puts in the effort
15:46:58 <anteaya> shows up asks questions, helps others
15:47:07 <anteaya> and if the majority fall down, so be it
15:47:19 <anteaya> I really can't do anything more to help them
15:47:27 <anteaya> so enough of that
15:47:37 <anteaya> does anyone have anything else today?
15:48:25 <anteaya> if noone else has anything else today
15:48:30 <anteaya> let's wrap up
15:48:33 <kaisers> Only some positive comment: Last weeks session helped me get CI running at full, which in turn helped find a bug in our driver, which in turn will have the ci run more stable (fix is currently in review). So some things of this _do_ work
15:48:45 <kaisers> :)
15:48:54 <anteaya> kaisers: great, because you show up
15:48:56 <amotoki> :-)
15:49:06 <anteaya> showing up is the first and best thing anyone can do
15:49:13 <wznoinsk> there's no doubt these meetings are very helpful
15:49:24 <anteaya> kaisers: well done, and congratulations on improving the quality of your driver
15:49:30 <asselin_> kaisers, great news!
15:49:32 <anteaya> wznoinsk: I agree
15:49:34 <ctlaugh> And I have gotten quote a bit of help from many here on getting my setup working (still not perfect, and still not done, but lots of progress).  Thank you to everyone!
15:49:51 <anteaya> wznoinsk: and I'm quite willing to help folks that attend, but I can't be all things to all people
15:50:00 <anteaya> great
15:50:10 <anteaya> thanks for sharing your success stories
15:50:21 <anteaya> anyone else with a comment?
15:50:49 <anteaya> okay thanks everyone for your continued participation
15:50:56 <anteaya> you are what makes things work
15:50:58 <anteaya> thank you
15:51:07 <anteaya> have a good week and see you next week
15:51:11 <anteaya> #endmeeting