19:00:44 <jeblair> #startmeeting infra
19:00:44 <openstack> Meeting started Tue Oct 21 19:00:44 2014 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:45 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:48 <ianw> go giants
19:00:49 <openstack> The meeting name has been set to 'infra'
19:00:50 <AJaeger_> o/
19:01:06 <jeblair> #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:01:09 <jeblair> #link http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-10-14-19.03.html
19:01:21 <jhesketh> Morning
19:01:27 <jeblair> #topic  Actions from last meeting
19:01:28 <anteaya> jhesketh: yay
19:01:39 * jeblair pleia2 to create etherpad with infra-facing pros/cons on zanata vs pootle
19:01:50 <anteaya> she is on a plane
19:01:53 <jeblair> neat
19:01:54 <AJaeger_> was done
19:02:02 <anteaya> I think this and a ml thread are in progress
19:02:12 <clarkb> o/
19:02:14 <AJaeger_> anteaya: correct
19:02:25 <jeblair> #topic  Priority Efforts
19:02:28 <jeblair> swift logs
19:02:36 <jeblair> #link https://etherpad.openstack.org/p/swift_logs_next_steps
19:02:44 <jeblair> jhesketh: can you link your blog post?
19:03:04 <jeblair> ah, is in etherpad
19:03:06 <jhesketh> http://josh.people.rcbops.com/2014/10/openstack-infrastructure-swift-logs-and-performance/
19:03:07 <jeblair> #link http://josh.people.rcbops.com/2014/10/openstack-infrastructure-swift-logs-and-performance/
19:03:38 <jeblair> i have not had a chance to read it yet
19:03:49 <nibalizer> o/
19:03:57 <jeblair> it looks like it will be very informative :)
19:04:05 <krtaylor> o/
19:04:21 <jeblair> it makes my firefox slow
19:04:26 <anteaya> mine too
19:04:34 <anteaya> I didn't want to say anything though
19:04:41 <anteaya> because tables and graphs
19:04:42 <jhesketh> All good. I don't have a lot more to add. I think my conclusion is let's keep moving with switching things over but I want other input
19:05:02 <jeblair> jhesketh: you had some ideas for what might be causing some slowness (authn issues was one, i think?)
19:05:05 <jhesketh> Heh, yeah, I had to make all those graphs in Firefox. Sorry. :-(
19:05:15 <jeblair> jhesketh: did you have a chance to look into that?
19:05:18 <clarkb> ya I skimmed it and I agree with the plan to keep going
19:05:26 <clarkb> we should look into fixing the intermittent 404s too
19:06:30 <jeblair> jhesketh: could you put instructions in the etherpad for how to request swift versions of logs for those of us who haven't tried it out yet?
19:06:38 <jhesketh> jeblair: I list the auth stuff as a source of error and a few vague options for avoiding it, but I don't think it's huge
19:06:47 <jhesketh> The 404s I didn't look into
19:07:06 <clarkb> path?source=swift
19:07:09 <clarkb> jeblair: ^
19:07:14 <jeblair> (basically, i'm not familiar with the auth problems or the 404s and don't know how to become more familiar)
19:07:19 <jeblair> clarkb: for what jobs?
19:07:25 <jhesketh> clarkb: yep, that's it
19:07:33 <clarkb> and you should be able to do it on config jobs
19:07:38 <clarkb> which are now system-config jobs
19:07:40 <jhesketh> For project-config
19:07:46 <jeblair> pretend i was in china for two weeks :)
19:07:59 <clarkb> I was using the puppet-apply tests
19:08:11 <jhesketh> Ah yeah they'll work too
19:08:38 <jeblair> so like this: http://logs.openstack.org/01/130001/1/check/gate-infra-puppet-apply-precise/77509cd/console.html?source=swift
19:08:42 <jeblair> that's a 404
19:08:49 <clarkb> ya now refresh
19:08:53 <clarkb> eventually you should get the file
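A minimal sketch (not from the meeting itself) of the retrieval pattern clarkb describes above: append ?source=swift to a logs.openstack.org URL and refresh until the object stops returning 404. Only the query parameter and the example URL come from the discussion; the requests library, retry count, and delay are illustrative assumptions.

    import time
    import requests  # third-party HTTP client

    LOG_URL = ("http://logs.openstack.org/01/130001/1/check/"
               "gate-infra-puppet-apply-precise/77509cd/console.html")

    def fetch_swift_copy(url, attempts=10, delay=5):
        """Poll the swift-backed copy of a log until it stops returning 404."""
        resp = None
        for _ in range(attempts):
            resp = requests.get(url, params={"source": "swift"})  # adds ?source=swift
            if resp.status_code == 200:
                return resp.text
            time.sleep(delay)  # the upload may not have completed yet
        resp.raise_for_status()

    # print(fetch_swift_copy(LOG_URL)[:200])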
19:09:09 <jeblair> not so much
19:09:20 <jhesketh> That may not be recent enough?
19:09:42 <clarkb> it is from today
19:09:57 <jeblair> i don't see any log entries about a swift upload
19:10:00 <jeblair> http://logs.openstack.org/01/130001/1/check/gate-infra-puppet-apply-precise/77509cd/console.html
19:10:10 <clarkb> hrm we don't seem to apply the swift stuff in that job
19:10:17 <clarkb> ya so in the course of changing everything over we may have broken that
19:10:37 <fungi> oh, possibly broken when we renamed?
19:10:55 <clarkb> ya that is my initial guess
19:11:09 <jeblair> http://logs.openstack.org/36/128736/5/check/gate-infra-puppet-apply-precise/80a3ead/console.html
19:11:15 <jeblair> but that works (is on a project-config change)
19:11:19 <anteaya> so I guess that is feedback for the next agenda item
19:11:19 <jhesketh> Hmm, I might have a look at what jobs are still running then. I only used the layout one
19:11:48 <jhesketh> It could be that because the job is failing it doesn't get to the swift upload part
19:12:14 <jeblair> okay, so folks think it's working well enough that we should proceed to ... adding it to a devstack-gate job?
19:12:35 <clarkb> jeblair: we should probably sort out the apply centos6 problem first
19:12:41 <clarkb> just to be sure it won't interfere with d-g
19:12:50 <jeblair> jhesketh: ah, yep.  we'll need to run it regardless of result
19:12:51 <clarkb> but ya I think that is the next big test as it will give us bigger files and lots of them
19:13:13 <jhesketh> jeblair: actually I think the next step should be to disable storing on disk for a job or two so we can find issues like this one
19:13:29 <jeblair> that might require changes to some job configs, or the use of a plugin that lets you run things after failure
19:14:05 <jeblair> (in this case, i think we can add a plugin if needed since this is something we can bake into turbo-hipster post-jenkins)
19:14:05 <jhesketh> Yeah I'll need to investigate this case as probably the very next thing to do
19:14:32 <jhesketh> (turbo hipster has support)
19:14:51 <jeblair> so i think we can _also_ upload to swift on devstack immediately, but we do need to solve the failure issue before we have any jobs that _don't_ upload to static.o.o
19:15:15 <jhesketh> Agreed
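The "run it regardless of result" requirement jeblair and jhesketh agree on above amounts to an always-run publish step. A minimal sketch of that pattern follows; the names run_job and upload_logs_to_swift are placeholders, not code from turbo-hipster or Jenkins.

    def run_job_and_publish(run_job, upload_logs_to_swift, log_dir):
        """Run a job, then upload its logs even if the job fails or raises."""
        success = False
        try:
            success = run_job()
        finally:
            # Failed runs are exactly the ones whose logs matter most,
            # so the upload must not depend on the job's result.
            upload_logs_to_swift(log_dir)
        return success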
19:15:27 <jeblair> cool, anything else on this?
19:16:02 <jeblair> #topic  Config repo split
19:16:14 <anteaya> my understanding is that we are done
19:16:27 <jeblair> so this is done, and i think all that remains is moving the spec to implemented.  and we'll drop this from future meetings.
19:16:29 <jeblair> yaay!
19:16:31 <anteaya> with the possible exception that swift upload on system-config jobs may be broken
19:16:38 <anteaya> yay
19:16:54 <jeblair> #topic  Nodepool DIB
19:17:07 <jeblair> are there any outstanding nodepool dib patches/issues we should discuss?
19:17:09 <clarkb> yes
19:17:16 <clarkb> the dib mutliple outputs change merged \o/
19:17:23 <clarkb> should release tomorrow if dib sticks to their schedule
19:17:38 <jeblair> clarkb: you have a change to start using that, right?
19:17:49 <clarkb> and I still intend on upgrading nodepool to trusty and have a change I need to update in order to use the new dib feature
19:18:01 <clarkb> jeblair: ya let me get a link; there are some -1's I need to address that I have ignored until the dib change merged
19:18:30 <clarkb> #link https://review.openstack.org/#/c/126747/
19:18:47 <clarkb> so as soon as grenade and jenkins are happy again I will probably context switch to the nodepool things
19:19:05 <jeblair> cool.  anything else?
19:19:09 <clarkb> and thats about it
19:19:16 <jeblair> Docs publishing
19:19:29 <jeblair> so this is waiting on us being happy with swift logs...
19:19:42 <fungi> agreed, since it will rely on similar mechanisms
19:19:53 <jeblair> ...or us considering the use of afs.
19:20:10 <jeblair> which, honestly, fits docs publishing pretty well.
19:20:18 <anteaya> awesome
19:20:37 <fungi> agreed, and is much closer than it was last week ;)
19:20:43 <jeblair> either way, still in a holding pattern
19:20:46 <jeblair> fungi: indeed :)
19:20:52 <jeblair> Jobs on trusty
19:21:12 <anteaya> do we need #topic
19:21:25 <jeblair> i probably should start doing that, yeah
19:21:37 <jeblair> anyway, who knows about the trusty move?
19:21:37 <anteaya> you did for nodepool dib
19:21:45 <fungi> #link https://etherpad.openstack.org/p/py34-transition
19:21:46 <jeblair> oh, huh
19:21:56 <fungi> the list of blockers is getting shorter
19:22:00 <jeblair> #topic Jobs on trusty
19:22:16 <fungi> glanceclient and to a lesser extent heatclient are problems
19:22:42 <anteaya> you punted nicely to the ml on glanceclient
19:23:08 <fungi> mostly problems with using unordered data types to build urls and json for fake api servers
19:23:19 <zaro> o/
19:23:31 <jeblair> fungi: is anyone working on the heat problems?
19:23:57 <fungi> i was starting to look into that one, and they inherit their fake server from oslo incubator
19:24:09 <fungi> so i think i need to fix it there
19:24:19 <fungi> but may also end up punting to them
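The "unordered data types" failure mode fungi mentions above typically looks like the sketch below (an assumed illustration, not code from glanceclient or heatclient): under Python 3.4's hash randomization, dict iteration order changes between runs, so tests that compare a generated query string or JSON body against a fixed literal fail intermittently unless the output is made deterministic.

    import json
    from urllib.parse import urlencode

    params = {"name": "img", "status": "active"}

    fragile_qs = urlencode(params)                 # key order depends on the dict
    stable_qs = urlencode(sorted(params.items()))  # deterministic key order

    fragile_body = json.dumps(params)                  # same issue for JSON bodies
    stable_body = json.dumps(params, sort_keys=True)

    assert stable_qs == "name=img&status=active"
    assert stable_body == '{"name": "img", "status": "active"}'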
19:24:23 <jeblair> i kind of feel like you shouldn't have to be fixing all of the things, unless you really feel like it :)
19:24:37 <clarkb> ++
19:24:43 <anteaya> I agree
19:24:46 <fungi> i'd personally rather not, but at least giving them some heads up on where the issues seem to be can help
19:25:05 <fungi> also stackforge/heat-translator seems to have issues, but it's stackforge so they'll need to look into that when i get around to letting them know
19:25:45 <jeblair> fungi: so at some point, you plan on notifying heat they have trusty problems, yeah?
19:26:02 <fungi> jeblair: yeah, today i think
19:26:08 <jeblair> at that point, maybe we can bring it up in the project meeting
19:26:26 <fungi> well, it's not that heat is broken on trusty, it's that it's untestable on trusty
19:26:33 <fungi> so could also be broken, who knows
19:26:42 <jeblair> fungi: yeah, i think you just said the same thing twice :)
19:26:50 <fungi> er, heatclient i mean
19:26:59 <fungi> and yeah, basically if it's not tested...
19:27:15 <clarkb> and the SRU stuff is all in
19:27:27 <jeblair> finally, if we don't see progress we might suggest a deadline to the tc
19:27:28 <clarkb> hopefully ubuntu picks that up after they release unicorn
19:27:30 <fungi> the two outstanding ubuntu bugs, yeah. blocked on their release freeze i believe
19:27:40 <clarkb> fungi: ya that is my understanding
19:27:42 <jeblair> but hopefully we won't get to that point
19:27:43 <fungi> jeblair: sounds good
19:28:01 <fungi> i've been pushing not-as-hard yet because we're also waiting on ubuntu
19:28:14 <fungi> but that does seem just over the horizon now
19:28:29 <jeblair> cool.  a plan.  anything else?
19:28:55 <jeblair> #topic  Kilo summit topic brainstorming pad (fungi)
19:29:12 <jeblair> whee this got bigger
19:29:29 <fungi> #link https://etherpad.openstack.org/p/kilo-infrastructure-summit-topics
19:29:58 <fungi> the items there seem to be getting fleshed out and discussed
19:30:14 <jeblair> are we ready to try to finalize it?
19:30:26 <anteaya> I told mtreinish about using the bottom part for the friday sessions
19:30:43 <fungi> i think we've had plenty of time to regurgitate our collective brains into it
19:30:43 <anteaya> so qa knows to add stuff to this etherpad
19:31:26 <anteaya> does afs still need an etherpad item?
19:31:35 <krotscheck> jeblair and I talked at the storyboard meeting, and having storyboard get a slot seemed a little superfluous.
19:32:02 <krotscheck> My goal at the summit is to actually get people to pay attention to StoryBoard. Most of the roadmap things are still pretty well known.
19:32:10 <jeblair> what do you think about voting on these?  add votes to the etherpad next to items you think would be good for sessions?
19:32:22 <jeblair> this would be a non-binding vote :)
19:32:26 <fungi> heh
19:32:55 <anteaya> lines 22-25 can be removed
19:32:59 <fungi> that seems reasonable. maybe as a first order effort people should remove topics which they've decided aren't needed
19:33:08 <anteaya> so I will remove them
19:33:37 <anteaya> someone in brown beat me to it, thank you
19:33:55 <AJaeger_> an unnamed brown ;)
19:33:56 <fungi> and yeah, voting in here is likely a poor use of meeting time
19:33:57 <anteaya> I'm guessing that is jeblair
19:34:22 <AJaeger_> so, should we add in the etherpad "+1 AJaeger" (or -1) ?
19:34:30 <fungi> but we should probably start a 24-hour clock for people to vote on their preferences
19:34:37 <fungi> or something along those lines
19:34:43 <jeblair> yep, i'm brown, i named myself
19:34:44 <nibalizer> jeblair: maybe give everyone a number of votes (3?) so they can vote for multiple things
19:35:12 <jeblair> i think just leave a positive vote next to things you think are useful
19:35:47 <jeblair> i left a sample vote next to infra-manual
19:35:54 <nibalizer> ok
19:36:07 <jeblair> so if that's clear, we can probably move on and vote asynchronously
19:36:15 <AJaeger_> Also on the QA topics?
19:36:25 <jeblair> oh good question
19:36:38 <jeblair> i was mostly thinking we need this for the 4 infra session slots we have, so just the top section
19:36:47 <AJaeger_> jeblair: ok
19:36:52 <jeblair> the bottom i think we can leave for the actual meetup at the summit
19:36:54 <zaro> ughh! my cursor keeps moving on me!
19:37:08 <fungi> agreed. just the session slots topics
19:37:55 <jeblair> #topic  Kilo cycle Infra liaisons... should we have them? (fungi)
19:38:30 <jeblair> so i think we mostly decided we don't need formal project liaisons right now; the informal contact we have is working okay
19:38:38 <fungi> qa, oslo, release management, docs, vulnerability management, stable maintenance are all starting to organize liaison lists
19:38:44 <AJaeger_> for liaisons, the first question is what do we expect from them
19:38:51 <fungi> i agree in general we don't have much need
19:38:56 <jeblair> however, this morning fungi and anteaya discussed the idea of formal third-party ci liaisons
19:39:03 <jeblair> which i think is a fabulous idea
19:39:04 <AJaeger_> Are those people that we would ask for a +1 for patches in their projects?
19:39:17 <AJaeger_> jeblair: +1 on 3rd-party liaisons
19:39:40 <clarkb> jeblair: anteaya so I was thinking about that again today because it came up
19:39:45 <jeblair> so projects would nominate people to represent nova when working with third-party ci system operators
19:39:45 <fungi> each project which has decided they are requiring third-party testing for parts of their codebase would identify points of contact
19:39:48 <fungi> defaulting to the ptl
19:39:49 <clarkb> and couldn't we delegate CI voting to each project?
19:39:59 <clarkb> and get out of the business completely? seems like we talked about that in atlanta
19:40:00 <anteaya> I'm hoping that during the course of the summit those folks already playing those roles become more willing to be public about it
19:40:06 <clarkb> it would require some gerrit group magic but should work
19:40:16 <jeblair> clarkb: yes.  you brought it up along with making the gerrit accounts self service.
19:40:26 <fungi> right, i think if we tweak to project-specific acls for third-party testing, these would be the people with control over that group
19:40:37 <clarkb> ya that too :) but I think they are orthogonal things; if we want to solve them separately that may be easier
19:40:38 <anteaya> I would be in favour
19:40:44 <jeblair> i think that's a great idea.  does someone want to make a formal proposal and do the acl testing?
19:40:47 <anteaya> if we have the gerrit features to do that
19:40:51 <jeblair> clarkb: i kind of think they are tied.
19:41:03 <clarkb> I can sign up for the testing of said stuff
19:41:08 <anteaya> I would except I don't know if I can get it done this week
19:41:11 <clarkb> since I have suggested it I might as well own it :)
19:41:13 <anteaya> and I'm off next
19:41:17 <anteaya> thanks clarkb
19:41:28 <fungi> i can write up something for the project-wide liaisons page
19:41:47 <clarkb> #action clarkb figure out gerrit per project third party voting ACLs and third party accounts via openid
19:41:47 <anteaya> can we let cinder have their meeting first?
19:41:47 <fungi> as a draft before we make it official on the wiki
19:42:02 <anteaya> so maybe write that stuff up thursday fungi
19:42:11 <anteaya> since thingee was supportive
19:42:13 <jeblair> hypothetically, if we made everything self-service and put projects in control of their own third-party ci voting... should there still be project third-party ci liaisons?
19:42:18 <fungi> #action fungi draft third-party testing liaisons section for wiki
19:42:29 <anteaya> but I do kind of want to let cinder own it, rather than it being a directive from us
19:42:54 <anteaya> well, in matters of what to do if systems misbehave, and education
19:42:57 <fungi> jeblair: well, did we have a solution to projects being able to disable commenting from accounts, or are we still the go-to for that?
19:43:02 <anteaya> I would like to deal with one person per project
19:43:06 <clarkb> jeblair: ya we may still need that role
19:43:10 <jeblair> fungi: i think that's why we need a proposal :)
19:43:10 <anteaya> rather than the horde
19:43:19 <fungi> if we're stuck disabling the misbehaving systems, i'd like there to be representatives from projects letting us know when to reenable
19:43:26 <jeblair> fungi: (because i don't know the answer to that)
19:43:31 <fungi> got it
19:43:49 <jeblair> fungi: i kind of suspect it would end up being projects can disable voting, but really bad misbehavior may require us to disable an account
19:44:00 <clarkb> jeblair: ya that is what I am thinking
19:44:10 <anteaya> that would be great
19:44:24 <anteaya> and projects are in charge of the steps to become reenabled
19:44:36 <fungi> but beyond that, the liaisons idea acts as a rallying point for the third-party testers on those projects in place of our infra team
19:44:42 <krtaylor> third-party liaisons would also be helpful for third-party systems, a point of contact for systems with questions
19:44:56 <jeblair> so it sounds like liaisons may still be useful even if we go to self-service, both for us (disabling for abuse) and for facilitating onboarding of new ci systems with the projects themselves
19:45:04 <jeblair> krtaylor: good point
19:45:20 <jeblair> so i think both of our action items here sound good.
19:45:40 <jeblair> anything else on the topic?
19:45:43 <rainya> fungi, anteaya, jeblair: i am late to the meeting, but this type of liaison thing sounds right up my alley, so I would like to consider being part of it
19:45:48 <fungi> right. at the moment many of them assume they need to come to us because there's no other published point of contact, when we're not actually the people setting this policy for the projects in question
19:46:17 <fungi> and yeah, nothing else on this topic
19:46:25 <fungi> other than thanks rainya!
19:46:54 <jeblair> rainya: cool, clarkb and fungi have action items, so hopefully we can review those at the next meeting and figure out what needs doing
19:47:03 * fungi nods
19:47:07 <jeblair> #topic  Publish devstack.org content under infra (anteaya)
19:47:20 <anteaya> I think this is done
19:47:30 <anteaya> fungi: did the redirect happen from devstack.org?
19:47:44 <jeblair> ooh, is there a direct url you can share?
19:47:45 <fungi> #link https://review.openstack.org/#/c/130001/
19:47:52 <fungi> needs reviews
19:47:56 <AJaeger_> #link http://docs.openstack.org/developer/devstack/
19:48:11 <jeblair> how cool!
19:48:29 <rainya> oooooh, nice!
19:48:30 <fungi> not quite sure yet why my change is failing
19:48:37 <anteaya> no kidding
19:48:44 <anteaya> but there it is
19:48:57 <jeblair> so once fungi solves all the problems, we should be able to set up a redirect from devstack.org
19:49:00 <rainya> (i am ETO today for grad school assignments and know what I will be playing with between papers)
19:49:02 <fungi> but anyway, yeah, once the redirect is in place on static.o.o we should be done
19:49:16 <clarkb> woot
19:49:21 <anteaya> fungi: yay
19:49:22 <jeblair> btw, i'm assuming we made a decision somewhere to do a redirect rather than continuing to host on devstack.org directly?
19:49:30 <anteaya> AJaeger_: thanks for the publish jobs!
19:49:49 <anteaya> last meeting or the one before?
19:49:52 <fungi> jeblair: yeah, i believe dtroyer came out in favor of that
19:50:09 <AJaeger_> And mordred fixed devstack so that we could publish easily
19:50:13 <fungi> he can chime in if he's around, but i'll add him to the redirect review too
19:50:24 <jeblair> wfm
19:50:38 <jeblair> #topic  Drop gate-{name}-python26 from python-jobs template and specify it explicitly - and python2.6 deprecation (krotscheck, fungi, ajaeger)
19:50:55 <krotscheck> https://review.openstack.org/#/c/128736/ is waiting on clarkb
19:50:58 <clarkb> I am going to review that change as soon as I have a moment
19:51:05 <clarkb> to double check the thing from my last -1
19:51:10 <clarkb> and will merge so we should be moving on that
19:51:23 * AJaeger_ is confused with the other reviews - I've added links to the meeting page.
19:51:51 <krotscheck> The other reviews seem to be in a weird merge conflict.
19:51:53 <AJaeger_> I see mixed messages on the removal of python26 jobs from projects - how do we want to continue here?
19:52:12 <AJaeger_> krotscheck: I'll fix the merge conflicts tomorrow - thanks for doing it last night!
19:52:20 <krotscheck> AJaeger_: Anytime!
19:52:22 <clarkb> for projects with stable icehouse and or juno we will only run py26 on those two branches
19:52:50 <clarkb> if a project is a library used by those projects with stable branches but does not itself have stable branches, we will continue to run py26 on master
19:52:59 <clarkb> that list is basically anything in the oslo program and the python-*clients
19:53:04 <clarkb> everything else should stop with py26 testing
19:53:06 <fungi> AJaeger_: on 129433 dhellmann did say to stop running it on oslo-incubator as i suspected we should
19:53:23 <pleia2> clarkb: stackforge projects stop too?
19:53:23 <AJaeger_> fungi, ok, I'll update 129433.
19:53:33 <krotscheck> If the PTL or a project member expresses a strong dislike (-1) for running only on branches, what do we do? Ignore them?
19:53:34 <clarkb> fungi: oslo-incubator has stable branches iirc
19:53:40 <clarkb> fungi: so we should test on their stable branches
19:53:44 <dhellmann> fungi: actually, we may need to wait until we graduate the bits that are used in the client libs
19:53:51 <fungi> clarkb: based on weird decisions between ttx and dhellmann some libs do have stable/icehouse and stable/juno branches where they backported out-of-order fixes
19:53:55 <clarkb> krotscheck: for openstack*/ projects, yes-ish, but we should explain why
19:53:56 <dhellmann> fungi: we expect that to happen this cycle
19:54:05 <clarkb> so not complete ignoring
19:54:12 * dhellmann adopts hurt expression
19:54:23 <clarkb> for stackforge projects I think we remove py26 and let projects add it back
19:54:28 * anteaya pats dhellmann consolingly
19:54:32 <clarkb> we should get this done early so that no one screams in a year
19:54:37 * ttx looks positively shocked
19:54:40 <pleia2> ok
19:54:40 <clarkb> screaming now is better than screaming then
19:54:44 <jeblair> yep
19:55:05 <ttx> "in stable/icehouse, no one hears you scream"
19:55:08 <AJaeger_> So, ignore any -1 on https://review.openstack.org/129434 and ask them to send a patch?
19:55:08 <fungi> dhellmann: oh, stable servers are going to depend on incubated libs from master?
19:55:27 <clarkb> AJaeger_: yes I think that will help us track it better via git history
19:55:45 <clarkb> AJaeger_: if they propose soonish we can merge it all together so that there is no period of not testing
19:56:10 <AJaeger_> should we send an email to openstack-dev pointing these changes out?
19:56:23 <clarkb> AJaeger_: ya we should probably do that once we have a set of changes we are happy with
19:56:23 <dhellmann> fungi: there are parts of the client libs in the incubator, and the client libs need to work with 2.6, so the incubator needs to work with 2.6
19:56:26 <clarkb> (maybe that is now?)
19:57:01 <fungi> dhellmann: yeah, i still have no idea what that means exactly, so i'm just going to take your word for it for now
19:57:11 <jeblair> someone want to volunteer to write that email?
19:57:36 <AJaeger_> clarkb: I need to rebase everything and address the comments on oslo first - should be ready tomorrow (unless somebody takes over the patch)
19:57:45 <rainya> I'd like clarification since it will impact current projects i'm working on (everything is 2.6 still in the public cloud): is "this cycle" Kilo or Juno?
19:57:53 <AJaeger_> rainya: kilo
19:58:01 <dhellmann> fungi: there's an incubator module called "apiclient" that is synced into some of the client libraries, and there's another called something like "cliutils" that is the same
19:58:01 <fungi> rainya: kilo
19:58:02 <AJaeger_> rainya: juno is out and finished
19:58:37 <fungi> dhellmann: oh, and the stable servers are going to start using those (directly or indirectly) from somewhere other than syncing from the incubator?
19:58:51 <rainya> AJaeger_, thank you; I knew juno was finished, but wanted to be explicit as I had heard it go back and forth earlier this month
19:58:51 <clarkb> I can write that email if no one else wants to be the endpoint for responses :)
19:59:08 <fungi> anyway, we can take the oslo-incubator design details discussion for later
19:59:13 <clarkb> #action clarkb write py26 deprecation email
19:59:22 * AJaeger_ shares the blame since he wrote the patches ;)
19:59:44 <AJaeger_> thanks, clarkb
19:59:52 <jeblair> we're at time.  thanks everyone!
19:59:58 <anteaya> thanks jeblair
20:00:01 <jeblair> #endmeeting