15:03:17 <anteaya> #startmeeting third-party
15:03:18 <openstack> Meeting started Mon Mar 16 15:03:17 2015 UTC and is due to finish in 60 minutes.  The chair is anteaya. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:22 <openstack> The meeting name has been set to 'third_party'
15:03:24 <anteaya> hello
15:03:34 <kaisers> Hi
15:03:34 <luqas> o/
15:03:36 <patrickeast> hi
15:03:44 <anteaya> how is everyone today?
15:04:00 <anteaya> I seem to be getting into discussions right at meeting time
15:04:03 <anteaya> sorry about that
15:04:33 <patrickeast> anteaya: no worries
15:04:39 <patrickeast> i'm doing pretty well
15:04:44 <patrickeast> anteaya: how about yourself?
15:04:54 <anteaya> patrickeast: glad to hear it
15:04:56 <anteaya> good thanks
15:05:04 <anteaya> so we are into feature freeze week
15:05:08 <anteaya> #link https://wiki.openstack.org/wiki/Kilo_Release_Schedule
15:05:19 <anteaya> how is everyone's system operating?
15:05:45 <kaisers> Ok, shall i start?
15:05:47 <kaisers> :)
15:06:00 <anteaya> please do
15:06:22 <kaisers> Our system is still under construction but i'm planning to switch it live for the cinder project later today
15:06:30 <kaisers> Using sos-ci from jgriffith with some adaptations
15:06:41 <kaisers> currently running in openstack-dev/ci-sandbox
15:06:47 <anteaya> well done
15:06:51 <kaisers> Basically all is well
15:06:57 <anteaya> have you the url for a patch to share with us?
15:07:13 <kaisers> a tested patch you mean?
15:07:16 <anteaya> yes
15:07:19 <kaisers> sec
15:07:27 <anteaya> a ci-sandbox patch that shows your system operating
15:07:47 <kaisers> https://review.openstack.org/#/c/155735/
15:08:07 <anteaya> #link https://review.openstack.org/#/c/155735/
15:08:13 <anteaya> what is the name of your system?
15:08:18 <kaisers> quobyteci
15:08:31 <kaisers> but i have a question regarding this
15:08:53 <kaisers> this morning i lost some of my gerrit feedback (meaning somewhere between 10 and 4 hours ago)
15:09:02 <kaisers> That's untouched code that should be working
15:09:11 <kaisers> has anyone seen something like this?
15:09:25 <anteaya> what do you mean?
15:09:31 <kaisers> From the logs the test feedback went back into gerrit fine but it did not turn up on the frontend page
15:09:39 <kaisers> more detailed:
15:09:59 <asselin> hi
15:10:03 <kaisers> quobyteci tested a new patch set and had some results. It posted the results back to gerrit.
15:10:10 <kaisers> From this point of view all was looking fine
15:10:21 <kaisers> But the results never turned up on the gerrit page for the change set
15:10:39 <anteaya> from this point of view, you mean from your end?
15:10:45 <patrickeast> they should all be in the comment section if you click on the toggle ci button
15:10:47 <kaisers> yep
15:10:50 <anteaya> from the log output your system gave you
15:10:58 <kaisers> anteaya: yep
15:11:16 <anteaya> what is the command you are using to post the comments to gerrit?
15:12:03 <kaisers> digging, sec
15:12:08 <anteaya> can you put it in a paste please?
15:12:59 <kaisers> #link http://paste.openstack.org/show/192655/
15:14:48 <anteaya> so this is the command you post to gerrit?
15:14:50 <anteaya> gerrit review -m "* quobyteci-dsvm-volume http://176.9.127.22:8081/refs-changes-35-155735-142 : FAILURE " e259882f9127af77d83c4941c2bf70e9101a16e8
15:14:56 <kaisers> yep
15:15:23 <anteaya> does anyone else start off their string with *
15:16:14 <anteaya> kaisers: have you tried it without the *?
15:16:37 <kaisers> Not so far but will do now :)
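For reference, a minimal sketch (Python) of posting a CI comment over Gerrit's standard SSH interface; the message and commit SHA come from the paste above, while the account name is illustrative. One known pitfall with the string format: ssh joins the remote arguments into a single command string that Gerrit re-parses, so a multi-word -m message needs its own inner quotes:

    import subprocess

    # Message format used by many third-party CI systems: "* <job> <log-url> : <RESULT>"
    message = ("* quobyteci-dsvm-volume "
               "http://176.9.127.22:8081/refs-changes-35-155735-142 : FAILURE")

    # ssh joins the remote arguments with spaces into one command string, which
    # Gerrit re-parses; the -m value must carry its own quotes to survive as a
    # single argument. The account name here is illustrative.
    cmd = [
        "ssh", "-p", "29418", "quobyteci@review.openstack.org",
        "gerrit", "review",
        "-m", "'%s'" % message,
        "e259882f9127af77d83c4941c2bf70e9101a16e8",
    ]
    subprocess.check_call(cmd)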
15:17:05 <anteaya> while kaisers is trying that
15:17:12 <kaisers> will take some time
15:17:17 <anteaya> does anyone else have any input on his issue?
15:17:33 <anteaya> if no, shall we move on for now?
15:17:36 <patrickeast> i'm not sure what the expected output is, but it doesn't look like the output of the command is being checked https://github.com/j-griffith/sos-ci/blob/master/sos-ci/os_ci.py#L123
15:17:52 <patrickeast> maybe there is something in the stdout/stderr from the command that could give more hints
15:18:08 <kaisers> I'll look into that, thanks
15:18:10 <jgriffith> patrickeast: correct, there's no checking there
15:18:22 <jgriffith> patrickeast: kaisers it's just an launch/forget ssh command
15:18:29 <kaisers> yep
15:18:53 <jgriffith> kaisers: double check the format of the string being sent, that's caused people issues in the past (grab me after meeting)
15:19:03 <kaisers> will do, ok :)
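A minimal sketch of the output check patrickeast suggests, assuming the same ssh invocation as above; sos-ci's actual call at the linked os_ci.py line is fire-and-forget and discards this information:

    import subprocess

    # Same shape as the review command above; message and SHA are illustrative.
    cmd = ["ssh", "-p", "29418", "quobyteci@review.openstack.org",
           "gerrit", "review", "-m", "'test comment'",
           "e259882f9127af77d83c4941c2bf70e9101a16e8"]

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        # Gerrit's error text (bad SHA, malformed message, missing rights, ...)
        # would otherwise vanish with a fire-and-forget call.
        print("gerrit review failed (rc=%d): %s" % (proc.returncode, err))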
15:19:16 <anteaya> thanks jgriffith
15:19:24 <anteaya> anyone have anything else on this?
15:19:43 <anteaya> shall we move on?
15:19:55 <anteaya> does anyone else have anything they wish to discuss?
15:20:29 <rhe00> are there any infra changes planned for this week?
15:20:39 <anteaya> there are always infra changes
15:20:54 <anteaya> we don't have any outages planned for this week
15:20:56 <rhe00> the reason I ask is because on Friday a change was checked in that broke asselin's third-party nodepool scripts
15:21:08 <anteaya> our next scheduled outage is March 21
15:21:19 <anteaya> rhe00: that will happen
15:21:22 <rhe00> fortunately I got a hold of the devs on IRC and they fixed it
15:21:27 <anteaya> good
15:21:29 <anteaya> well done
15:21:55 <anteaya> unplanned breakages happen all the time
15:21:59 <anteaya> which is why we test
15:22:10 <anteaya> and why we are responsive to folks such as yourself
15:22:14 <anteaya> so thank you
15:22:37 <anteaya> asselin: did you want to say anything on the topic?
15:23:23 <anteaya> he must be pulled away
15:23:34 <anteaya> what else shall we discuss?
15:23:37 <ctlaugh> rhe00: I am currently working with his scripts, trying to get my CI setup... did the fix involve changes to asselin's scripts, or were they not affected?
15:24:02 <rhe00> it was a checkin to subunit2sql that changed the schema
15:24:36 <rhe00> one of the prepare_node scripts has to be modified to limit the version of subunit2sql to <=0.40
15:25:06 <rhe00> trying to dig up the details
15:25:23 <ctlaugh> rhe00: thank you
15:25:33 <rhe00> your nodepool image creation will blow up with a long stack trace if you hit this issue
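The workaround rhe00 describes amounts to pinning the package during image build, e.g. something like pip install 'subunit2sql<=0.40' in the affected prepare_node script; the exact script and pin line will vary per setup.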
15:27:12 <anteaya> rhe00: did you want to expand or are you finished?
15:27:27 <anteaya> I'm not sure if I'm waiting for you or not
15:28:23 <rhe00> I am still digging, but you can move on.
15:28:38 <anteaya> okay thanks for letting me know
15:28:41 <rhe00> my IRC history for openstack-infra rolled off the screen buffer
15:28:50 <anteaya> how about checking the logs?
15:29:02 <anteaya> #link http://eavesdrop.openstack.org/
15:29:25 <anteaya> since no one but you can read your screen buffer anyway, this tends to be more useful
15:29:35 * asselin reads back
15:29:42 <anteaya> so while rhe00 digs
15:29:55 <anteaya> does anyone have anything else they wish to discuss today?
15:30:12 <rhe00> https://review.openstack.org/#/c/164379/
15:30:19 <rhe00> that's the patch to nodepool
15:30:35 <anteaya> #link https://review.openstack.org/#/c/164379/
15:31:11 <patrickeast> just as an fyi for anyone setting up systems: make sure it's on a secure network and that others on the network know what you are up to; we got burned by that this week and have to rearrange our networking (again)
15:31:12 <asselin> rhe00, ok, hadn't seen that yet
15:31:37 <patrickeast> turns out there were some unrestricted hosts on the same engineering lab network
15:31:44 <anteaya> okay so staying with rhe00's point for a moment
15:31:53 <anteaya> rhe00: the linked patch has been reverted
15:31:55 <asselin> but yea, issues upstream also affect 3rd party ci
15:31:59 <patrickeast> so having a server running arbitrary python code all day that could access it was a pretty big problem
15:32:17 <anteaya> oh sorry, the revert patch is up for review
15:32:22 <anteaya> #link https://review.openstack.org/#/c/164730/
15:32:27 <anteaya> so please review
15:32:44 <rhe00> asselin: oh, didn't see that. thanks!
15:32:45 <asselin> patrickeast, can you elaborate?
15:33:13 <patrickeast> so, pretty much any host reachable from your ci build slaves is vulnerable
15:33:25 <anteaya> yes
15:33:35 <anteaya> since that is how our system works
15:33:39 <asselin> patrickeast, vulnerable to what?
15:33:47 <anteaya> so thanks for sharing your experience with others
15:33:58 <anteaya> asselin: to be told to build hosts I imagine
15:34:22 <anteaya> nodepool finds any connection and builds hosts
15:34:57 <anteaya> is that what happened in your case?
15:35:01 <patrickeast> yea, one thing for us was that if another host has a service on it (for example an http server) that is internal only
15:35:15 * asselin is lost
15:35:16 <patrickeast> someone could post a review that does a curl on it
15:15:20 <patrickeast> and get the data
15:35:26 <patrickeast> so like
15:35:37 <patrickeast> the other eng lab hosts are protected behind the firewall
15:35:48 <patrickeast> which means a lot of test/dev stuff is sometimes not locked down so well
15:36:08 <patrickeast> but our system lets any code posted for review get run, behind the firewall
15:36:11 <anteaya> asselin: nodepool assumes anything it can contact belongs to it
15:36:27 <asselin> patrickeast, ok yes, I'm following now....
15:36:30 <patrickeast> yep, it needs to be on a segregated network
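To make the exposure concrete, a hypothetical illustration: any patch set pushed for review gets executed on the build slave, so a test change containing nothing more than the snippet below would copy an internal-only service's response into the public test logs. The address and endpoint are illustrative:

    import urllib2  # Python 2, as was typical for CI jobs at the time

    # Illustrative internal-only endpoint reachable from the build slave's network.
    print(urllib2.urlopen("http://10.1.2.3:8080/internal-status").read())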
15:37:20 <anteaya> patrickeast: this sounds like it would be a good blog post
15:37:31 <rhe00> that's interesting. have to check mine now. :/
15:37:50 <asselin> patrickeast, well...at least the attack won't be anonymous...but yeah I'll have to check into that as well
15:37:59 <anteaya> asselin: ha ha ha
15:38:00 <patrickeast> yea it would be tracked
15:38:06 <patrickeast> but all you need is a fake email address
15:38:39 <patrickeast> anteaya: yea i can look into that (should see if my company has a blog site i can use)
15:38:52 <anteaya> patrickeast: please let me know how that goes
15:38:58 <anteaya> I don't think we have ever thought of it
15:39:11 <anteaya> since we work so hard to find every available resource and use it
15:39:31 <anteaya> but since you brought it up, yes that assumption should definitely be communicated
15:39:34 <asselin> anteaya, I guess infra doesn't have the issue since nodes running test jobs are in the public cloud which don't have access
15:39:46 <anteaya> right
15:40:04 <anteaya> we have more issues with making sure we can find everything that is there
15:40:21 <anteaya> we have never had a use case for checking if what is there should or should not be used
15:40:47 <anteaya> so easier to just ensure through network decisions that nodepool can't talk to anything it can't use
15:41:25 <asselin> anteaya, just a clarification: nodepool *nodes* can't talk to anything they can't use
15:41:44 <anteaya> best to clarify that with fungi or jeblair
15:41:47 <anteaya> but yes
15:42:01 <patrickeast> yea the scary issue is the nodes, if you have floating ips on an "external" network to give them access to the internet
15:42:14 <patrickeast> they have access to anything else on the same physical network those floating ips are on
15:42:16 <anteaya> I would suggest setting up a network for nodepool where the only hosts nodepool can reach are hosts it can build on
15:42:26 <patrickeast> which i suspect for most of us is part of some test lab setup
15:42:49 <fungi> yeah, our job workers don't run in trusted networks
15:42:54 <fungi> for good reason
15:43:00 <patrickeast> anteaya: yep, thats what we are switching to now
15:43:09 <anteaya> patrickeast: thanks for bringing it up
15:43:31 <fungi> in fact, since they run in clouds, we don't really consider there to be such a thing as a trusted network, and instead we explicitly firewall every server independently with local packet filters
15:44:10 <patrickeast> fungi: yea that makes sense
15:44:45 <anteaya> anyone with anything more on this topic?
15:44:57 <patrickeast> our problem was making an assumption about our internal eng network for the test/dev system, but it turns out it still had access to some private info
15:45:15 <patrickeast> so i guess the tl;dr is: go double check what's on your networks if you haven't already
15:45:39 <anteaya> thanks patrickeast
15:45:43 <anteaya> anything else?
15:45:57 <anteaya> okay to move on?
15:46:00 <patrickeast> thats all i had
15:46:08 <anteaya> thank you for sharing that patrickeast
15:46:29 <anteaya> anyone with anything else they wish to discuss today?
15:47:04 <anteaya> well if we are done for today I guess we can wrap up
15:47:18 <anteaya> thanks everyone for your attendance and participation
15:47:31 <anteaya> rhe00: please review that patch and post any comments you have
15:47:37 <anteaya> enjoy the rest of your day
15:47:38 <kaisers> anteaya: thanks for hosting the show
15:47:39 <anteaya> and thank you
15:47:42 <rhe00> anteaya: ok
15:47:44 <anteaya> kaisers: :)
15:47:49 <anteaya> see you next week
15:47:54 <anteaya> #endmeeting