19:06:00 <jeblair> #startmeeting infra
19:06:01 <openstack> Meeting started Tue Jan 27 19:06:00 2015 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:06:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:06:05 <openstack> The meeting name has been set to 'infra'
19:06:15 <cody-somerville> \o
19:06:26 <jeblair> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:06:55 <jeblair> #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-20-19.02.html
19:07:14 <asselin_> hi
19:07:20 <krtaylor> o/
19:07:24 <mmedvede__> o/
19:07:29 <jeblair> #topic libc CVE reboot
19:07:52 <jeblair> so first, as a public service announcement, we are in the process of rebooting all of our affected hosts due to a vulnerability in libc
19:08:14 <jeblair> we've completed nearly all of the hosts running precise, and are awaiting updated packages for our centos hosts
19:08:23 <jeblair> trusty is apparently not affected
19:09:00 <clarkb> have we confirmed that? I know I inferred it because it didn't have new packages yet
19:09:41 <pleia2> the security announcement said only 10.04 and 12.04 were impacted
19:09:47 <pleia2> http://www.ubuntu.com/usn/usn-2485-1/
19:09:54 <nibalizer> the cve said that it was fixed but the security implications were not recognized at that time
19:10:16 <fungi> right, patch to fix it landed a couple years ago
19:10:20 <clarkb> oh good usn has it now
19:10:22 <jeblair> so it seems plausible that the version in trusty is simply new enough.  i have not independently confirmed that
19:10:41 <clarkb> I see, so that's nice
19:10:46 <pleia2> yep
19:10:50 <clarkb> we should probably put more effort into the trusty all the things effort
19:11:26 <jeblair> clarkb: maybe?  but i don't think this is a reason to do so
19:12:12 <clarkb> jeblair: ya this particular thing isn't a reason, but it does illustrate why having newer packages can be a good thing
19:12:28 <fungi> we could just as easily be faced next time with a bug which was only present on trusty
19:12:34 <clarkb> yup
19:12:51 <jeblair> clarkb: in this case, yes, but quite often older packages are too old to have new bugs.  things fall on both sides of the knife
19:13:11 <fungi> if anything, newly-introduced security vulnerabilities are much more frequent than long-fixed newly-discovered ones
19:13:21 <jeblair> so, yeah, we should move to trusty because it will be supported further in the future than precise
19:13:46 <fungi> which we are doing, here and there at least
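A minimal sketch of the kind of independent check discussed above: report the running glibc version on a host and compare it against the upstream release said to contain the fix. The 2.18 threshold is an assumption drawn from the remark that the patch landed upstream a couple of years earlier; distro packages backport fixes, so the package changelog remains the real authority.

    import os

    def glibc_version():
        # e.g. "glibc 2.19" on trusty, "glibc 2.15" on precise
        return os.confstr("CS_GNU_LIBC_VERSION").split()[1]

    def looks_new_enough(version, fixed_in="2.18"):
        # assumed threshold: the upstream release containing the fix
        parse = lambda v: tuple(int(p) for p in v.split("."))
        return parse(version) >= parse(fixed_in)

    if __name__ == "__main__":
        v = glibc_version()
        status = "probably new enough" if looks_new_enough(v) else "check for an updated package and reboot"
        print(v, status)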
19:13:55 <jeblair> #topic  Schedule next project renames
19:14:14 <fungi> i'm open this weekend, or on friday if we want to do a friday afternoon thing
19:14:22 <jeblair> we still only have attic moves, but some have been pending for a while
19:14:37 <clarkb> I can do saturday or friday but sunday is MURICA day lite
19:14:45 <clarkb> I will be watching football and smoking meat
19:14:48 <anteaya> I know annegent_ was asking about the ones she is advocating for
19:15:14 <annegent_> please don't let sportsball stop you :)
19:16:18 <fungi> indeed, i had forgotten. the joys of not watching commercial television
19:16:39 <jeblair> friday then?
19:16:43 <clarkb> wfm
19:16:50 <fungi> sure
19:17:04 <fungi> 1900 utc? earlier? later?
19:17:47 <jeblair> 1900 wfm
19:17:51 <clarkb> 1900 wfm
19:18:08 <jeblair> #agreed rename gerrit projects at 1900utc friday jan 30
19:18:25 <jeblair> i'll send the email
19:18:50 <jeblair> #topic Priority Efforts (Swift logs)
19:19:20 <clarkb> I haven't seen jhesketh this morning so I can update here
19:19:49 <clarkb> jhesketh wrote a change to look for a magic number in the console log to know when the console log is complete when uploading to swift
19:20:17 <clarkb> unfortunately it went into an infinite loop because jenkins didn't appear to serve any additional bytes on top of what it originally served
19:20:45 <clarkb> current workaround for that is to try 20 wget attempts and upload whatever it gets at that point. still need to figure out why it wasn't serving any more data though
19:21:05 <clarkb> the other thing that went in recently was better index file generation; you should start seeing that today on swift-logged logs.
19:21:08 <jeblair> that is highly weird
19:21:32 <fungi> wondering if there's a jetty-side cache or something
19:21:38 <clarkb> ya, didn't get much time to debug it last night, and now libc. Hopefully we can figure it out soon
19:21:52 <jeblair> clarkb, jhesketh: thanks! :)
19:21:53 <fungi> didn't seem to be apache causing it at any rate
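A minimal sketch of the workaround clarkb describes above: poll the Jenkins console log for a completion marker and give up after a bounded number of attempts, uploading whatever was fetched. The URL, marker string, and sleep interval are illustrative assumptions, not the values from the actual change.

    import time
    import requests

    CONSOLE_URL = "https://jenkins.example.org/job/some-job/123/consoleText"  # assumed URL
    FINISH_MARKER = "[ZUUL] Job complete"   # hypothetical completion marker
    MAX_ATTEMPTS = 20                       # per the workaround described above

    def fetch_console_log():
        text = ""
        for _ in range(MAX_ATTEMPTS):
            text = requests.get(CONSOLE_URL).text
            if FINISH_MARKER in text:
                break                       # marker seen: the log is complete
            time.sleep(5)                   # Jenkins may still be writing; retry
        return text                         # after MAX_ATTEMPTS, upload whatever we have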
19:22:11 <jeblair> #topic Priority Efforts (Puppet module split)
19:22:19 <jeblair> Reminder for Sprint: Wednesday January 28, 2015 at 1500 UTC
19:22:35 <fungi> for those without calendars, that's TOMORROW!
19:22:57 <jeblair> #link https://wiki.openstack.org/wiki/VirtualSprints#Schedule_of_Upcoming_OpenStack_Virtual_Sprints
19:23:02 <jeblair> #link https://etherpad.openstack.org/p/puppet-module-split-sprint
19:23:09 <jeblair> please sign up on the etherpad ^
19:23:17 <jeblair> #link https://review.openstack.org/#/q/topic:module-split+status:open,n,z
19:23:43 <asselin_> #link https://storyboard.openstack.org/#!/story/302
19:23:49 <asselin_> Remember to add yourself here too ^^
19:23:55 <asselin_> link is in etherpad
19:24:19 <jeblair> asselin_: thanks for your prep work!
19:24:38 <asselin_> jeblair, you're welcome
19:25:12 <fungi> i'm looking forward to reviewing (and possibly breaking/troubleshooting) all the things
19:25:19 <jeblair> and we'll see everyone tomorrow and hopefully end the day with a bunch of new gerrit repos :)
19:25:19 <nibalizer> \o/
19:25:23 <clarkb> should be good fun
19:25:36 <jeblair> #topic Priority Efforts (Nodepool DIB)
19:25:53 <clarkb> no updates since last week on this. I was out most of last week.
19:26:03 <jeblair> mordred: anything blocking you?
19:26:14 <clarkb> I did however notice rax snapshot image builds for centos and precise were broken so that should be sorted with the next round of rebuilds
19:26:15 <mordred> I mean
19:26:19 <mordred> other than openstack?
19:26:21 <jeblair> mordred: anything we can help with? :)
19:26:34 <mordred> jeblair: keep me from killing our cloud providers out of rage?
19:26:43 <mordred> jeblair: I'm honestly getting quite close, which is why I'm ragey
19:27:03 <mordred> but I've discovered a new way in which the rackspace catalog is broken but current tools have been hiding it
19:27:30 <clarkb> I hope to start work on moving away from unittest specific slaves as soon as the fires go away. I really don't know what the scope of that work is yet though (do we do it for stable branches? etc)
19:27:41 <mordred> oh - also - any infra-core who isn't watching the shade repo for reviews please do
19:27:47 <fungi> gotta love breakage swept under the rug by tooling
19:28:16 <jeblair> mordred: and anyone with an interest in nodepool
19:28:20 <mordred> I'm hoping to have a first-pass nodepool patch up today
19:28:41 <fungi> clarkb: i'm guessing it's coming up with a new job macro which installs the additional packages we don't install on the devstack workers
19:28:43 <jeblair> since we want to have shade be good enough to present a sensible interface to nodepool
19:28:43 <mordred> it will not work - but will be something worth commenting on approach with
19:28:43 <yolanda> mordred, i've been taking a look at that shade project. Mostly what i'm missing is tests, but maybe that's because it's at an early stage?
19:28:59 <mordred> yolanda: yes - it's massively missing tests ... I'd love help :)
19:29:19 <clarkb> fungi: ya, which is a non-trivial set, which means we need to cache them on the images first, etc
19:29:23 <mordred> however - I have a question for folks on that ...
19:29:30 <fungi> clarkb: yep
19:29:34 <clarkb> fungi: since devstack isn't exactly the set of things you need
19:29:39 <mordred> any thoughts on how to test that interactions with rackspace and hp public clouds as they exist in the wild work?
19:29:54 <mordred> because I can test against devstack all day long without surfacing any of the issues we've been battling lately
19:29:57 <clarkb> mordred: spin up a nodepool
19:30:00 <fungi> mordred: we could do it pretty easily with flaky jobs
19:30:07 <clarkb> mordred: or do you mean generally?
19:30:11 <mordred> I mean generally
19:30:18 <fungi> false negative rate would be pretty high
19:30:28 <mordred> like, how do I functionally test shade to tell that it works with the real public clouds
19:30:32 <yolanda> mordred, have testing accounts on those providers and launch tests against them instead of fake providers?
19:30:42 <mordred> I suppose a set of massive fakes that return manually grabbed data as the clouds return it?
19:31:05 <nibalizer> puppet does that kind of testing, it turns into a massive workpile whenever the universe changes
19:31:12 <jeblair> mordred: have you seen mimic?  https://github.com/rackerlabs/mimic
19:31:28 <jeblair> mordred: glyph mentioned it to me a while ago
19:31:39 <mordred> neat!
19:31:43 <mordred> maybe we should use that
19:31:50 <mordred> I wonder if it's bug-for-bug compatible with rax?
19:32:12 <jeblair> mordred: heh, yeah, there's a thought.  :)
19:32:17 <jeblair> "Mimic is an API-compatible mock service for Openstack Compute and Rackspace's implementation of Identity and Cloud Load balancers. It is backed by in-memory data structure rather than a potentially expensive database."
19:32:40 <mordred> worth looking at
19:32:57 <jeblair> i'm kind of divided on this -- part of me is like "why is the word rackspace in there, why can't we have nice openstack things" but then, part of me says "hey, it's an implementation, it still may be worth testing against as-is"
19:33:15 <jeblair> since, after all, we _actually_ want to use shade with rackspace :)
19:33:27 <mordred> yah
19:33:31 <mordred> that's kinda where I'm at
19:33:45 <mordred> that rackspace may or may not be an openstack at this point is not the point
19:33:56 <mordred> the point is that it's one of my clouds and I need to use it
19:34:31 <jeblair> of course, if we have to write a patch to mimic to enable an hpcloud weirdness, that will be... instructive
19:34:52 <jeblair> or even a normal openstack behavior that rackspace doesn't exhibit
19:35:03 <yolanda> jeblair, mordred, and how can you be aware of any weirdness?
19:35:03 <mordred> yah. well, I'd hope we can do that with devstack
19:35:12 <mordred> yolanda: when it breaks my local tests :)
19:35:22 <yolanda> empiric testing :)
19:35:26 <mordred> I currently test with a set of very bad scripts in my homedir that I point at rax and hp accounts
19:35:56 <jeblair> mordred, yolanda: so maybe shade should have a test against devstack using a dsvm node, and a test against mimic
19:36:01 <mordred> jeblair: +1000
19:36:52 <jeblair> so, land of opportunity here :)
19:36:59 <jeblair> anything else on nodepool dib?
19:37:06 <clarkb> not from me
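A minimal sketch of the sort of functional smoke test jeblair and mordred converge on above, something that could be pointed at devstack, at mimic, or at a real cloud: authenticate against a Keystone v2 endpoint, walk the service catalog for a compute endpoint (the layer where the provider-specific quirks being discussed tend to surface), and list servers. The endpoint URL and credentials are placeholders, and this uses the raw HTTP API rather than shade itself.

    import requests

    AUTH_URL = "http://localhost:8900/identity/v2.0"   # assumed: mimic, devstack, or a real cloud
    USERNAME = "demo"                                  # placeholder credentials
    PASSWORD = "secret"
    TENANT = "demo"

    def list_servers():
        # Keystone v2 password authentication
        body = {"auth": {"passwordCredentials": {"username": USERNAME,
                                                 "password": PASSWORD},
                         "tenantName": TENANT}}
        access = requests.post(AUTH_URL + "/tokens", json=body).json()["access"]
        token = access["token"]["id"]

        # Walk the service catalog for a compute endpoint -- the step where
        # catalog differences between providers show up.
        compute = next(s for s in access["serviceCatalog"] if s["type"] == "compute")
        endpoint = compute["endpoints"][0]["publicURL"]

        resp = requests.get(endpoint + "/servers", headers={"X-Auth-Token": token})
        resp.raise_for_status()
        return resp.json()["servers"]

    if __name__ == "__main__":
        for server in list_servers():
            print(server["id"], server["name"])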
19:37:20 <jeblair> #topic Priority Efforts (Jobs on trusty)
19:37:49 <fungi> dhellmann has graciously started a thread on disabling py3k gating for oslo.messaging and oslo.rootwrap
19:37:55 <fungi> #link http://lists.openstack.org/pipermail/openstack-dev/2015-January/055270.html
19:38:03 <fungi> so far discussion seems to be accepting
19:38:14 <mordred> yay
19:38:16 <clarkb> I have pointed them at the bugs in question and answered a couple of the things that came up
19:38:22 <fungi> if there are no major concerns raised by friday, we can go ahead and cut over
19:38:55 <fungi> i was running a full recheck of all the projects currently successfully gating on 3.3 against 3.4 but lost my held nodes in the great reboot of 2015
19:39:05 <fungi> so in the process of restarting them right now
19:39:23 <jeblair> whoopsie
19:39:36 <fungi> but will know before the day is out if there are any new projects which are failing and have slipped through the cracks
19:39:43 <nibalizer> fungi: did you reboot the servers or just restart services?
19:39:55 <fungi> and plan to go ahead and whip up the change to switch them later this week
19:40:14 <fungi> nibalizer: we rebooted all ubuntu precise servers after updating glibc on them
19:40:36 * nibalizer nods
19:40:47 <fungi> anyway, nothing new on that front otherwise
19:40:59 <jeblair> fungi: thanks, progress!
19:41:00 <jeblair> #topic Priority Efforts (Zanata)
19:41:08 <fungi> eta on getting fixed python 3.4 in trusty seems to be sometime in march probably
19:41:17 <jeblair> pleia2: anything we can unblock?
19:41:50 <pleia2> I don't think so, just sorting out dependency issues now
19:41:59 <pleia2> https://review.openstack.org/#/c/147947/
19:42:08 <clarkb> pleia2: I said I would review when in a layover, then got distracted by image building and things
19:42:16 <clarkb> pleia2: will try to review that change sometime this week
19:42:32 <pleia2> I'll make a note in the review about what the order should be
19:43:00 <jeblair> #link https://review.openstack.org/#/c/147947/
19:43:32 <jeblair> #topic Options for procedural -2 (jeblair)
19:43:51 <jeblair> zaro: have you had a chance to look at the current status of the wip plugin?
19:44:12 <zaro> jeblair: sorry, i have not.  will do this week for sure
19:44:21 <jeblair> zaro: thanks!
19:45:13 <jeblair> #action zaro look into the current status of the wip plugin and find out if it is ready for use in gerrit 2.9 and could be used for procedural -2
19:45:24 <jeblair> #topic  Upgrading Gerrit (zaro)
19:45:52 <zaro> zuul-dev is still in a broken state.  need it working to continue testing.
19:45:57 <fungi> we've got an ubuntu trusty review-dev built
19:46:01 <zaro> fungi says he's gonna take a look.
19:46:04 <jeblair> what do we need to do to fix zuul-dev?
19:46:15 <fungi> and yeah, i'm looking at zuul-dev errors this afternoon
19:46:32 <jeblair> ok.  so we're at "identify problem" stage with zuul-dev
19:46:32 <zaro> i think something wrong with apache setup.
19:46:32 <fungi> i assume the idea is to test gerrit 2.9.x with zuul
19:46:52 <zaro> yes, that is the purpose
19:47:01 <clarkb> note 2.10 just released right? is it worth moving the target for any features we want/need?
19:47:03 <jeblair> zaro: you can run your own zuul locally to test
19:47:08 <fungi> it's almost certainly just apache 2.4 syntax issues with the vhost config
19:47:14 <jeblair> that's actually usually what i do
19:47:16 <clarkb> if not I don't think we go to 2.10 simply because its there
19:47:21 <fungi> but i haven't looked yet
19:47:22 <jeblair> i usually reserve zuul-dev for testing zuul
19:47:34 <zaro> jeblair: yes, but need to fix zuul-dev anyway right?
19:48:01 <zaro> yeah, i would hold off on 2.10 until users get to 'break it in'
19:48:07 <jeblair> zaro: i mean, it wouldn't hurt, but i don't think we need to block the gerrit upgrade on it
19:49:14 <jeblair> #topic  Open discussion
19:49:23 <zaro> understood.  i may take that route.
19:50:46 <pleia2> nibalizer and I are presenting at fosdem on bits of infra this weekend (so airplanes and things coming up)
19:51:03 <mordred> pleia2: enjoy!
19:51:10 <mordred> pleia2: also, don't forget to drink ALL THE BEER
19:51:11 <fungi> hoping to hack on a nodepool bare-debian dib image later this week or next to see if we can run all our python 3.4 jobs successfully on it, and then that opens us up to be able to test other things on debian if we like
19:51:26 <pleia2> mordred :D
19:51:39 <jeblair> pleia2, nibalizer: yay!  let us know if there's a video
19:51:43 <jeblair> fungi: ++
19:51:54 <pleia2> will do
19:51:57 <anteaya> pleia2: safe travels
19:52:00 <anteaya> nibalizer: you too
19:52:06 <nibalizer> thanks
19:52:13 <fungi> best of luck on your presentations. break several legs or something
19:52:17 <clarkb> also eat at toukoul
19:52:19 <nibalizer> :D
19:52:50 <nibalizer> i'll be at the puppet contributor summit after fosdem, so can bring up bugs directly with developers, if we have any nonstandard ones lingering
19:52:52 <fungi> remember to remind the entire audience that we need them to come hack on and review our code ;)
19:53:21 <zaro> looks like the proposal to add a separate notification channel in gerrit is dead: https://gerrit-review.googlesource.com/#/c/63259
19:53:46 <zaro> not sure if we want to do anything further to get it upstream?
19:54:05 <fungi> that's unfortunate
19:55:20 <fungi> i'm not quite following why sven abandoned that change
19:55:39 <fungi> it seemed to be active as of yesterday
19:56:09 <zaro> i think he basically thinks that it's fruitless to continue.
19:56:27 <clarkb> dave borowitz does seem to shoot it down pretty hard with comments yesterday
19:56:51 <jeblair> he also said 2 weeks ago "I'm glad this feature is getting implemented."
19:56:57 <fungi> https://gerrit-review.googlesource.com/58283 seems to still be open though
19:57:14 <fungi> looks like maybe it's just continuing there?
19:57:23 <zaro> it's just the same change. sven only split it up into multiple changes.
19:58:05 <zaro> 58283 was the one i worked on with David O.  such wasted effort.
19:59:19 <zaro> 58283 was the original change; it was too big, so sven split it up into 4 smaller changes. 63259 is one of the smaller changes.
19:59:21 <jeblair> it seems like the gist of dave borowitz's comments was about the specificity of the data
19:59:45 <jeblair> zaro: perhaps you could discuss with dave what kind of data he would be comfortable having in the db
20:00:07 <jeblair> zaro: because i don't think we need everything in there -- job name, result, and link are pretty universal
20:01:21 <jeblair> and even his last comment is a suggestion
20:01:45 <jeblair> zaro: anyway, i hope you find a way to continue, and thanks for trying regardless
20:01:59 <jeblair> we're out of time, thanks everyone
20:02:00 <jeblair> #endmeeting