15:00:47 <lennyb> #startmeeting third-party
15:00:48 <lennyb> Hello
15:00:48 <openstack> Meeting started Mon Jun 27 15:00:47 2016 UTC and is due to finish in 60 minutes.  The chair is lennyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:51 <openstack> The meeting name has been set to 'third_party'
15:01:13 <mmedvede> hey
15:01:14 <lennyb> hi mmedvede. Strange that the bot has not started the meeting
15:01:31 <eantyshev> Hello everyone!
15:01:35 <lennyb> mmedvede, how are you?
15:01:48 <lennyb> eantyshev: hi
15:01:52 <mmedvede> I am good, our CI is failing due to pip timeouts
15:02:05 <lennyb> anything you would like to discuss?
15:02:23 <lennyb> mmedvede, yep, ours as well. I am rerunning jobs manually :(
15:03:10 <eantyshev> guys, I need some attention on this review: https://review.openstack.org/#/c/238988
15:03:24 <mmedvede> lennyb: are you using pypi.python.org or local mirror?
15:03:55 <lennyb> mmedvede, pypi, not the local mirror
15:04:06 <eantyshev> BTW, I use pypi.python.org and we don't see many timeouts
15:04:54 <mmedvede> we have 100% failure when we use pypi.python.org, so using a mirror is kind of a requirement now
15:04:54 <mmedvede> eantyshev: I will take a look at the patch
15:05:18 <lennyb> mmedvede, do you have the mirror inside your net, or is it regional?
15:07:02 <eantyshev> mmedvede: Thank you, it causes our CI to get stuck from time to time. It affects only those CIs which live behind a stateful firewall
15:07:06 <mmedvede> lennyb: inside our net, local geo
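(Sketch: a minimal pip configuration pointing at a local mirror, along the lines of what mmedvede describes; the mirror hostname is hypothetical and was not given in the meeting.)

    # ~/.config/pip/pip.conf (or the legacy ~/.pip/pip.conf); hostname is hypothetical
    [global]
    index-url = http://pypi-mirror.example.local/simple
    timeout = 60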
15:09:06 <mmedvede> lennyb: but the problem we now see is a strange timeout that happens even with the mirror, and the timeout is probably a red herring
15:09:07 <mmedvede> but I do not see any other CIs having similar problems
15:09:08 <lennyb> mmedvede, this is our timeout #link http://13.69.151.247/78/333478/5/check-cinder/Cinder-ISER-LIO/c214366/console.html.gz
15:09:15 <lennyb> mmedvede, curl: (7) Failed to connect to bootstrap.pypa.io port 443: Connection timed out
15:10:31 <lennyb> mmedvede, is it similar to your issue?
15:12:02 <mmedvede> lennyb: no, we fail during 'pip install XStatic-Angular-Bootstrap===0.11.0.7'
15:12:04 <lennyb> I also have another question: do any of you use the multijob plugin https://wiki.jenkins-ci.org/display/JENKINS/Multijob+Plugin ? I am building a multinode CI and I need to run multi-jobs there
15:12:32 <mmedvede> we are not using that plugin
15:13:21 <lennyb> mmedvede, do you have multi-jobs? I mean, a few jobs that should run one after another?
15:14:03 <mmedvede> lennyb: no
15:15:45 <lennyb> any ideas or tips for mmedvede? Or can we move to the next question/topic? (mmedvede, sorry, I have nothing to suggest)
15:16:25 <mmedvede> lennyb: I do not want to send you in the wrong direction, but doesn't zuul support some sort of job chaining?
15:16:51 <mmedvede> I think you can have one pipeline event triggering another pipeline
15:17:15 <lennyb> mmedvede, thanks, I will check it.
15:17:39 <eantyshev> lennyb: isn't that already done in upstream CI?
15:17:52 <lennyb> mmedvede, I need to run a few jobs, but only one (the global one) should comment/publish the result
15:18:37 <eantyshev> there is a support for multinode jobs in devstack-gate and nodepool
15:18:47 <lennyb> mmedvede, I have multinode_ci == start_controller -> start_compute -> run_tempest -> stop_controller -> stop_compute
15:19:05 <lennyb> so I don't want all those jobs to comment on gerrit
15:20:28 <mmedvede> lennyb: so the order does not matter? as eantyshev said, there is a multinode job in infra. It uses several VMs for each job, and normally has controller on one VM and compute on the other (I might be wrong on specifics)
15:21:10 <lennyb> it matters: if I start the compute before the controller, it fails to connect to the keystone service that runs on the controller
15:21:29 <lennyb> but I will check it
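(Sketch: one way to chain jobs in zuul v2 is a job tree in layout.yaml, where a nested job only runs after its parent succeeds; this is a different mechanism from the pipeline-triggering idea mmedvede mentions. The job names are lennyb's own, the project name is assumed from his Cinder CI, and only the start/test chain is shown. Zuul also normally reports once per pipeline run rather than once per job, which may address the "only one comment on gerrit" concern.)

    # layout.yaml (zuul v2-style job tree sketch; project name and exact layout are assumptions)
    projects:
      - name: openstack/cinder
        check:
          - start_controller:
            - start_compute:
              - run_tempest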
15:22:02 <mmedvede> that job uses nodepool multinode nodes, they appear to jenkins as a single node
15:22:28 <mmedvede> and then the jenkins job uses ansible to do whatever is necessary on the secondary node(s)
15:22:58 <lennyb> we are currently not working with nodepool and ansible :(
15:25:10 <mmedvede> ansible does not really matter in this case. You could also use ssh from the primary to the secondary node. But without nodepool you would have to set up your nodes for ssh access yourself
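(Sketch: the ssh-from-primary approach mmedvede mentions; the hostname, user, and devstack path are hypothetical, and key-based ssh access is assumed to be set up in advance.)

    # Run from the primary (controller) node once its services are up.
    # "stack@compute-node" and the devstack path are hypothetical.
    ssh -o BatchMode=yes stack@compute-node 'cd /opt/stack/devstack && ./stack.sh'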
15:25:39 <lennyb> mmedvede: we are. we use physical nodes due to HW limitations
15:28:23 <lennyb> I guess you provided me with enough ideas.
15:28:33 <lennyb> any other questions?
15:28:46 <lennyb> mmedvede: thanks.
15:29:07 <mmedvede> not from me, I need to go back to debugging our CI :)
15:29:28 <lennyb> mmedvede: good luck
15:30:12 <lennyb> anything that will prevent me from closing the meeting?
15:31:02 <lennyb> thank you all for being here.
15:31:07 <lennyb> see you next week
15:31:39 <lennyb> #endmeeting