19:03:00 <fungi> #startmeeting infra
19:03:00 <openstack> Meeting started Tue Mar 29 19:03:00 2016 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:04 <openstack> The meeting name has been set to 'infra'
19:03:23 <sbelous_> hi there
19:03:24 <fungi> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:33 <fungi> #topic Announcements
19:03:54 <fungi> #info Reminder: add summit agenda ideas to the Etherpad
19:03:59 <fungi> #link https://etherpad.openstack.org/p/infra-newton-summit-planning Newton Summit Planning
19:04:04 <fungi> let's plan to try to do a little voting on them at next week's meeting
19:04:14 <fungi> #topic Actions from last meeting
19:04:16 <pleia2> #info happy second term PTL-ness to fungi :)
19:04:28 <fungi> heh, thanks (i think?)!
19:04:34 <fungi> #link http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-22-19.02.html
19:04:40 <fungi> none last week
19:04:57 <pabelanger> fungi: congrats!
19:05:04 <bkero> o/
19:05:05 <fungi> #topic Specs approval
19:05:11 <fungi> #info APPROVED: "Nodepool: Use Zookeeper for Workers"
19:05:16 <fungi> #link http://specs.openstack.org/openstack-infra/infra-specs/specs/nodepool-zookeeper-workers.html Nodepool: Use Zookeeper for Workers
19:05:22 <fungi> #info APPROVED: "Stackviz Deployment"
19:05:27 <fungi> #link http://specs.openstack.org/openstack-infra/infra-specs/specs/deploy-stackviz.html Stackviz Deployment
19:05:39 <fungi> those URLs _should_ be valid soon ;)
19:05:45 <hashar> I have missed looking at Zookeeper / Nodepool.  Would Zookeeper be optional ?
19:05:48 <jesusaur> o/
19:05:48 <fungi> i approved, but jobs still need to run
19:06:02 <hashar> as a third party user of Nodepool I have only a single image to build/update.  Just wondering really
19:06:36 <jeblair> hashar: i believe it would be required, however, it scales down very well, and i anticipate simply running it on the nodepool host will be fine
19:06:47 <jeblair> hashar: (and will be what we do for quite some time)
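(For scale like jeblair describes, a single-node ZooKeeper co-located on the nodepool host is essentially just the distro package. A minimal sketch, assuming an Ubuntu/Debian host; the package name and health check below are standard ZooKeeper bits, not anything from the spec itself:)

    # standalone zookeeper listening on localhost:2181
    sudo apt-get install zookeeperd
    # quick sanity check using zookeeper's four-letter commands
    echo ruok | nc localhost 2181    # prints "imok" when the server is healthy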
19:06:49 <fungi> hashar: the spec proposal (mentioned last week) is at https://review.openstack.org/278777 while we wait for the jobs to finish running/publishing
19:07:08 <fungi> if you want to read further
19:07:10 <hashar> fungi: thanks
19:07:22 <hashar> jeblair: yeah I guess I will survive and we have some Zookeeper instances already iirc
19:07:26 <hashar> jeblair: thx ;)
19:07:44 <fungi> also, there's still time to get involved! specs are not written in stone (more like silly putty really)
19:08:14 <fungi> #topic Priority Efforts: Infra-cloud
19:08:30 <fungi> added briefly to highlight cody-somerville's awesome weekly status reporting
19:08:49 <zigo> o/
19:08:56 <zigo> (sorry to be late)
19:09:00 <fungi> #link http://lists.openstack.org/pipermail/openstack-infra/2016-March/004090.html latest infra-cloud status
19:09:17 <fungi> note that the hardware from former "west" has arrived in houston now
19:09:43 <anteaya> that is good news
19:09:44 <fungi> so maybe we'll have access back for our desired priority hardware shortly and can pick up where we left off at the sprint!
19:10:07 <fungi> #topic Baremetal instances in Nodepool with Ironic (igorbelikov)
19:10:13 <igorbelikov> hey folks
19:10:21 <igorbelikov> this idea was brought up in chat by kozhukalov last week
19:10:39 <fungi> i know this has been brought up many times in the past few years, so i'll let people reiterate the usual concerns
19:11:17 <clarkb> I wrote an email about it
19:11:22 <igorbelikov> we really want to integrate our fuel deployment tests with infra, the issue is - we use baremetal nodes to launch a bunch of VMs and deploy openstack
19:11:22 <igorbelikov> there are a bunch of limitations we can't overcome yet to span the VMs across multiple baremetal nodes
19:11:23 <igorbelikov> and a lot more issues come to mind if we imagine doing it on top of a bunch of VMs requested from Nodepool
19:11:23 <igorbelikov> so we discussed this internally, and while we will still continue working to overcome these limitations
19:11:23 <igorbelikov> the most realistic-looking way is to use Ironic and its baremetal driver in Nova, so Nodepool will be able to request baremetal nodes without any serious changes to Nodepool logic or the current infra workflow
19:11:39 <clarkb> hard to dig up on my phone but that should cover the details
19:12:08 <igorbelikov> and I wanted to get some input from infra on this general idea
19:12:13 <angdraug> clarkb: do you remember date or subject key words?
19:12:14 <fungi> igorbelikov: how do we upload images to that?
19:12:27 <fungi> just curious if your plans include glance
19:12:28 <igorbelikov> baremetal nodes will be able to use dib-images
19:12:53 <igorbelikov> fungi: so basically the current workflow for nodepool vms, but with baremetal
19:13:18 <fungi> also running untrusted code on your servers has the opportunity to taint them if anyone proposes a patch which, say, uploads malicious firmware
19:14:10 <igorbelikov> fungi: fuel deployment tests will work just fine with restricted access from jenkins user. It doesn’t completely solve all security issues, but this can be discussed further
19:14:12 <clarkb> angdraug: was on infra list and adrian otto was on the thread
19:14:48 <fungi> i think that thread was having to do with bare metal testing for magnum or something?
19:15:06 <clarkb> yup
19:15:17 <fungi> anyway, yes it's a suggestion which has come up multiple times, as i've said, from multiple parties
19:15:22 <clarkb> but should cover general ironic + nodepool
19:15:47 <clarkb> and so far no one has given us a workable endpoint or attempted to
19:15:55 <igorbelikov> clarkb: thanks, I’ll dig up the thread, sadly I missed it
19:16:08 <angdraug> #link http://lists.openstack.org/pipermail/openstack-infra/2015-September/003138.html
19:16:12 <yolanda> from my perspective, once we have infra cloud up and running, there can be opportunity to start moving it
19:16:25 <fungi> i can see how it would be possible to implement, but more generally the usual needs for multiple separate environments, making sure the clouds providing those resources are staffed and maintained to keep them running, et cetera are typical concerns we have over any special regions in nodepool as well
19:16:40 <yolanda> deploy nova + ironic, use dib images, and figure how to deal with security problems
19:17:13 <fungi> the current goal with infra-cloud is to provide virtual machines, not baremetal test nodes, but it's possible that could act as an additional available region for those tests if we decided that was something we should implement
19:17:31 <fungi> "figure out how to deal with security problems" seems like a lot of handwaving to me
19:17:47 <igorbelikov> we’re ready to work on required infra-cloud change for that to happen
19:17:51 <yolanda> a spec should be needed of course
19:18:09 <fungi> it's a hard problem, and i know the tripleo and ironic crowd have struggled with it for a few years already, so looping them in early in such a conversation would be wise
19:18:12 <yolanda> and i see that as next steps once we have a stable infra cloud
19:18:16 <igorbelikov> yolanda: a spec is a must for this, sure:)
19:18:37 <pabelanger> I can understand the need for igorbelikov wanting bare metal nodes upstream, but I would be curious to see what else is needed to migrate more of ci.fuel-infra.org upstream personally.
19:19:07 <angdraug> pabelanger: http://lists.openstack.org/pipermail/openstack-dev/2015-November/079284.html
19:19:13 <fungi> anyway, i guess my point is that "modify nodepool to support ironic" is the simplest part of this. having good answers for the _hard_ parts first is what we'll need to be able to decide if we should do it
19:19:40 <yolanda> fungi, from our experience downstream using baremetal, we focused on two things: code review is very important, to ensure that no code is malicious. And also, periodical redeploys of baremetal servers, to ensure they are clean
19:19:46 <igorbelikov> pabelanger: the only things non-migratable right now are the deployment tests, we’re working on moving everything else upstream
19:20:21 <yolanda> nodepool supporting ironic should be a matter of using the right flavors. The complicated part should be the nova + ironic integration...
19:20:24 <fungi> yolanda: reviewing code before jobs run is also a significant departure from our current workflow/tooling so that's not a solution to be taken lightly
19:20:34 <igorbelikov> yolanda: periodical redeploys fit perfectly in the picture
19:20:39 * crinkle would like to see infra-cloud turned back on and providing reliable nodepool resources before thinking about new uses for the hardware
19:20:48 <pabelanger> angdraug: igorbelikov: thanks, will read up on it after meeting
19:20:59 <pabelanger> crinkle: ++
19:21:09 <fungi> i am in complete agreement there. let's table any discussion of what else infra-cloud could be useful for until we're using it for what we first wanted
19:21:27 <fungi> thanks for the reality check, crinkle
19:21:31 <crinkle> :)
19:22:06 <yolanda> yep, that should be a next step once we have the hardware on place and redeploy again. But I think that we have this possibility for the mid-term
19:22:07 <angdraug> btw there's a sizeable pool of hw behind ci.f-i.org, just saying )
19:22:35 <fungi> okay, so it seems like this is a topic which would be better moved to a ml thread, we can loop in people with experience in the problem areas and determine if there's a good solution that fits our tools and workflow, and make sure concerns brought up in previous iterations of the same discussion are addressed to our satisfaction
19:23:16 <fungi> angdraug: one sizable pool is insufficient. if it goes offline then any jobs which can only run there won't run, and projects depending on those jobs will be blocked
19:23:33 <angdraug> fungi: do you think it's too early to start a spec?
19:23:40 <fungi> we've seen this many times already with tripleo and are strongly considering switching them to third-party ci
19:24:04 <anteaya> fungi: I recall you made the same point in the nfv ci thread on the -dev mailing list
19:24:07 <fungi> because trying to shoehorn in their special cloud which only runs their tests and has no redundancy turns out not to be a great fit for our systems
19:24:09 <angdraug> good point, one more concern to address on ML/in spec
19:24:36 <angdraug> that's exactly what we want to avoid
19:24:43 <angdraug> right now we're using that hw in our own special way
19:24:53 <angdraug> we want this to become a generic pool of hw for any openstack ci jobs
19:24:53 <fungi> so, yes you can start with a spec but i think it may be easier to have an ml thread to work out bigger questions before you bother settling on a set of solutions to propose in a spec
19:24:57 <igorbelikov> there are actually 2 pools in different geographical locations, but it’s still a good point
19:25:05 <AJaeger> so, team up with tripleo and have two pools to share ;)
19:25:19 <angdraug> AJaeger: +1 :)
19:25:37 <fungi> well, tripleo's environment would need a complete redesign from scratch to be generally usable anyway
19:25:46 <angdraug> so does ours
19:26:00 <fungi> their model with brokers and precreated networks is very specific to the design of their jobs
19:26:08 <AJaeger> ah ;(
19:26:24 <pabelanger> AJaeger: fungi: I'm hoping to talk with the tripleo team in austin to see what can be done moving forward
19:26:39 <fungi> okay, meeting's half over, 6 topics to go. need to move on
19:26:50 <angdraug> sorry, thanks for giving us the time!
19:26:51 <igorbelikov> ok, moving this to mail thread, thanks!
19:27:04 <fungi> thanks angdraug, igorbelikov!
19:27:09 <fungi> #topic Gerrit tuning (zaro)
19:27:25 <fungi> zaro: saw you had more details on the ml thread this week!
19:27:27 <zaro> anybody get a chance to read the link
19:27:28 <zaro> ??
19:27:55 <anteaya> #link http://lists.openstack.org/pipermail/openstack-infra/2016-March/004077.html
19:28:00 <zaro> anyways yeah, performance is way better after running git gc
19:28:08 <AJaeger> zaro: yes, thanks for testing this!
19:28:21 <zaro> so was wondering if anybody had further questions about it?
19:28:35 <anteaya> can we look closer at git push origin HEAD:refs/for/master
19:28:52 <anteaya> the stats for user
19:28:55 <zaro> what do you mean look closer?
19:29:04 <anteaya> the before is 5 seconds, the after is 11s
19:29:13 <jeblair> has anyone looked into server performance with the resulting repositories?
19:29:18 <AJaeger> zaro: How can we run this? Is there a gerrit setting or is that manual? And while git gc runs, is the repo available for usage?
19:29:19 <anteaya> for user that looks like it takes twice as long to me
19:29:29 <jeblair> (not only gerrit, but cgit/git)
19:30:39 <fungi> i haven't looked into it. sounds like zaro is the only one who's run comparative stats so far
19:30:48 <zaro> anteaya: i didn't notice that, but that's a very odd result. i'm not sure why there's a discrepancy there.
19:30:58 <fungi> but i agree the server impact (gerrit and cgit) is still a missing piece
19:30:59 <anteaya> zaro: okay, I question it
19:31:00 <abregman> zaro: we tried it in our downstream environment and it really made a difference for some of our projects. so thanks for that.
19:32:14 <zaro> AJaeger: i ran it manually with the nova repo provided by jeblair
19:32:34 <zaro> AJaeger: i only ran locally on my own machine.
19:32:41 <anteaya> abregman: if you are able to collect any statistics and share them as a reply to that mailing list post, that would be wonderful
19:32:42 <fungi> since git gc (or jgit gc for that matter) is by definition a destructive action, it's not something we'll be easily able to recover from if we later discover an adverse impact somewhere, hence the need for thorough testing
19:33:01 <hashar> I can't remember if I mentioned it on the list but Gerrit upload-pack ends up sending all refs/changes/* to the client doing a git fetch :(
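(To get a feel for the refs/changes volume hashar describes, counting the refs Gerrit advertises for a busy project is a quick check; the URL form below is the usual anonymous-http path and may need adjusting for other Gerrit setups:)

    git ls-remote https://review.openstack.org/openstack/nova | grep -c 'refs/changes/'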
19:33:03 <abregman> anteaya: sure, np
19:33:06 <zaro> AJaeger: i suppose you can do the same
19:33:09 <anteaya> abregman: thank you
19:33:24 <AJaeger> zaro: I meant: Run on gerrit itself
19:33:26 <anteaya> abregman: include commands run and as much detail as you can
19:33:35 <abregman> ack :)
19:34:29 <fungi> #link https://tarballs.openstack.org/ci/nova.git.tar.bz2 a snapshot of the full nova repo from review.openstack.org's filesystem
19:34:31 <hashar> and somehow the git fetch is way faster over https compared to ssh (on my setup  and using Wikimedia Gerrit 2.8 .. ).  Long food: https://phabricator.wikimedia.org/T103990#2144157
19:35:15 <fungi> #link https://phabricator.wikimedia.org/T103990#2144157
19:35:20 <fungi> thanks for the details hashar
19:35:46 <hashar> feel free to poke me in your mornings if you want me to elaborate
19:36:00 <zaro> AJaeger: most things are cloning from git.o.o not review.o.o so i just tested directly.
19:36:08 <hashar> at least on my setup using https for fetch solved it.  I should try on your nova installation
19:36:53 <fungi> zaro: so anyway, it sounds like we're a lot closer to seeing performance benefit for this but more comfort about the potential impact to the server side of things is preferred before we decide it's entirely safe
19:37:22 <zaro> what would provide more comfort?
19:37:49 <anteaya> server performance with the resulting repositories is what jeblair has asked for
19:38:02 <fungi> zaro: indications that performance on git.o.o or review.o.o (on the servers) will improve or at least remain constant after a git gc (and definitely not get worse)
19:38:39 <fungi> e.g. is it more work for git to serve these after than it was before
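(One rough way to measure that locally, sketched here with illustrative file names rather than zaro's exact procedure: time a fresh clone from the snapshotted nova repo before and after gc'ing a copy of it, since clone/fetch time is dominated by the server-side pack work:)

    wget https://tarballs.openstack.org/ci/nova.git.tar.bz2
    tar xjf nova.git.tar.bz2
    cp -a nova.git nova-gc.git
    git -C nova-gc.git gc            # repack and prune; compare --aggressive separately if curious
    for repo in nova.git nova-gc.git; do
        rm -rf /tmp/clone-test
        echo "== $repo =="
        time git clone --quiet "file://$PWD/$repo" /tmp/clone-test
    done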
19:39:20 <fungi> anyway, continuation on the ml
19:39:22 <fungi> okay, need to continue pushing through as many of these topics as we can
19:39:36 <fungi> #topic Status of gerrit replacement node (anteaya, yolanda)
19:39:51 <anteaya> okay so on April 11th we commited to doing a thing: http://lists.openstack.org/pipermail/openstack-dev/2016-March/088985.html
19:40:02 <anteaya> and my understanding is that yolanda has a node up
19:40:06 <yolanda> i just wanted to confirm that nothing is pending for that node replacement
19:40:11 <anteaya> beyond that I don't know what the plan is
19:40:14 <yolanda> i created the node, it's on ansible inventory, and it's disabled
19:40:14 <fungi> this is just a quick check on the existing server replacement schedule, and making sure someone writes up the maintenance plan for it?
19:40:16 <anteaya> but I think there should be one
19:40:23 <anteaya> fungi: yes
19:40:30 <anteaya> I'm away next week
19:40:38 <anteaya> just want to hear someone is driving this
19:40:52 <anteaya> can be but doesn't have to be yolanda
19:40:56 <clarkb> basic process is stop review.o.o, copy git repos, index(es), start on new server
19:41:03 <fungi> we likely also need a one-week warning e-mail followup to the previous announcement
19:41:05 <yolanda> do we have pre-existing maintenance plans for gerrit?
19:41:16 <fungi> clarkb: git repos are in cinder now, i believe
19:41:24 <clarkb> oh right
19:41:37 <clarkb> so that's potentially even easier: unmount, detach, attach, mount, win
19:41:53 <yolanda> and db is on trove, so that should be fast
19:41:53 <abregman> zaro: did you run git gc --aggressive?
19:41:56 <fungi> so detach volume from old server, attach to new server, update dns with a short ttl if it hasn't already and then swap dns records right at the start of the outage
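(The volume shuffle fungi outlines looks roughly like the sketch below; the server and volume names are made up, and the gerrit2 paths are the conventional layout rather than anything confirmed in the meeting:)

    # stop gerrit on the old server and release the git volume
    ssh review-old.openstack.org 'sudo service gerrit stop && sudo umount /home/gerrit2/review_site/git'
    openstack server remove volume review-old gerrit-git-volume
    # attach the volume to the replacement and bring gerrit back up there
    openstack server add volume review-new gerrit-git-volume
    ssh review-new.openstack.org 'sudo mount /home/gerrit2/review_site/git && sudo service gerrit start'
    # lucene indexes still need copying (or an offline reindex), and DNS is
    # flipped at the start of the outage using the already-lowered TTL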
19:42:07 <yolanda> i can write an etherpad for it
19:42:09 <zaro> abregman: no.
19:42:12 <yolanda> if we don't have any
19:42:21 <fungi> thanks yolanda!
19:42:24 <anteaya> abregman: we've changed topics, we can continue discussion in the -infra channel
19:42:40 <fungi> #action yolanda draft a maintenance plan for the gerrit server replacement
19:42:51 <anteaya> thank you
19:42:55 <fungi> did you also want to send the followup announcement around the one-week mark?
19:42:58 <abregman> anteaya: oh k, sorry
19:43:01 <yolanda> i will also send a reminder on april 4th
19:43:03 <anteaya> abregman: no worries
19:43:21 <fungi> #action yolanda send maintenance reminder announcement to the mailing list on April 4
19:43:25 <fungi> thanks yolanda!
19:43:30 <anteaya> thanks yolanda
19:43:37 <yolanda> glad to help :)
19:43:38 <fungi> #topic Ubuntu Xenial DIBs (pabelanger)
19:43:45 <pabelanger> ohai
19:43:49 <pleia2> awesome work on these pabelanger :)
19:43:53 <fungi> #link https://review.openstack.org/#/q/topic:ubuntu-xenial+status:open
19:43:54 <pabelanger> so ubuntu-xenial dibs are working
19:44:06 <clarkb> pabelanger: including the puppet runs?
19:44:08 <pabelanger> even tested with nodepool launching to jenkins
19:44:09 <pabelanger> clarkb: yup
19:44:11 <anteaya> #link https://etherpad.openstack.org/p/infra-operating-system-upgrades
19:44:12 <clarkb> nice
19:44:20 <pabelanger> so, that link above has 1 review that needs merged
19:44:27 <pabelanger> and we can then turn them on in nodepool
19:44:30 <clarkb> pabelanger: have you run a devstack-gate reproduce.sh script against one of the images yet?
19:44:30 <fungi> oh, i see i skipped a topic a couple back, i'll thread that one in next (sorry zaro, hashar!)
19:44:34 <pabelanger> surprisingly it was straightforward
19:44:42 <pabelanger> clarkb: not yet.
19:44:46 <pabelanger> clarkb: I can do that later today
19:44:50 <hashar> fungi: or we can skip git-review and follow up on list
19:45:12 <pabelanger> either way, our puppet manifests and dib elements work well
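(For anyone wanting to poke at a xenial image locally, a hand-rolled approximation is below; the element list is illustrative and not the exact project-config definition pabelanger is landing:)

    pip install --user diskimage-builder
    export DIB_RELEASE=xenial
    # ubuntu-minimal builds from the xenial archive; simple-init adds glean for nodepool-style networking
    disk-image-create -o ubuntu-xenial ubuntu-minimal vm simple-init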
19:45:19 <pleia2> xenial isn't released until april 21st (beta 2 was last thursday), but I don't anticipate any ground-breaking changes between now and then
19:45:35 <yolanda> pabelanger, so not much changes needed right?
19:45:36 <anteaya> good work
19:45:48 <pabelanger> yolanda: right, see the topic above for all the patches
19:45:50 <clarkb> pleia2: ya we can just avoid switching any jobs over until the release has happened
19:46:01 <clarkb> python35 is the other potential place we will see issues
19:46:06 <pleia2> clarkb: yeah
19:46:20 <anteaya> xenial ships with 35?
19:46:25 <clarkb> anteaya: yes
19:46:28 <anteaya> great
19:46:30 <fungi> i anticipate the py34-py35 transition will be similar to how we did py33-py34 last year
19:46:49 <clarkb> fungi: well and we may need to decide if we want to do 34 and 35
19:46:55 <clarkb> but yes
19:47:37 <fungi> right, we had the luxury last time of considering py3k testing a convenience and dropped it from stable branches so we could just cut over to py34
19:48:13 <fungi> though in this case we're not maintaining special platforms just for py34 testing, so there's less incentive to drop it anyway
19:48:25 <fungi> py33 testing was kinda hacky
19:48:38 <sdague> pabelanger: do you have this in a project config somewhere that we could run an early devstack job on to shake out bugs?
19:49:08 <AJaeger> sdague: once all changes by pabelanger are merged, see above for the review link: yes
19:49:21 <fungi> that would be extremely easy to add once we have the patches in to start building images/booting nodes
19:49:24 <pabelanger> Yup, https://review.openstack.org/#/q/topic:ubuntu-xenial+status:open are the current patches needed to land
19:49:33 <pabelanger> fungi: indeed
19:50:07 <fungi> okay, any other questions we need to address on this topic in the meeting before i move on (or rather, back to the topic i unceremoniously skipped earlier)?
19:50:15 <pabelanger> none here
19:50:34 <fungi> #topic git-review release request (zaro, hashar)
19:50:40 <zigo> FYI, Py35 works perfectly with everything right now.
19:50:41 <fungi> #link http://lists.openstack.org/pipermail/openstack-infra/2016-March/004058.html git-review release request
19:50:50 <zigo> (tested at build time in Sid)
19:51:10 <hashar> so in short git-review last release is from June 12th 2015
19:51:21 <zaro> just wondering what needs to happen for a release?
19:51:37 <zaro> whether i can help with that?
19:51:40 <hashar> I could myself use the optional feature push url to be release.  That lets bypass the creation of an additional remote named "gerrit"
19:51:48 <hashar> which causes folks to fetch from both origin and gerrit remotes when ever they do git remote update
19:51:48 <zigo> A tag, then ping me to build the package in Debian, then I'll ping Ubuntu ppl?
19:51:57 <hashar> so I guess a tag
19:52:04 <fungi> i replied on the ml thread as well, but want to see someone get any remaining bug fixes or test improvements flushed from the review queue before we tag a new release. we should consider git-review feature frozen for the moment, but can figure out what new features make sense to add once we have the current master state polished and released
19:52:16 <hashar> on the list someone hinted at looking for open changes that one might want to get approved before tagging a release
19:52:24 <zigo> Remember: we have a few days for a freeze exception so that it reaches the next LTS. Do we want that new version in 16.04 ?
19:52:29 <clarkb> I ran into a bug the other day
19:52:37 <clarkb> git review -d fetched a patch from a different git repo
19:52:44 <clarkb> I swear it used to fail on that
19:52:52 <hashar> zigo: yup would be good to have it in before Ubuntu freeze
19:53:11 <zigo> hashar: *IF* there's no regressions! :)
19:53:13 <fungi> clarkb: that would definitely count as a regression, please get up with me later if you need help reproducing
19:53:17 <jesusaur> clarkb: yes, i still use an old version and it fails on that
19:53:24 <clarkb> jesusaur: what version?
19:53:31 <clarkb> will help us bisect
19:53:44 <zigo> hashar: Also, Ubuntu Xenial *IS* frozen, we just happen to have FFE for all OpenStack things until Mitaka is out.
19:53:46 <jesusaur> clarkb: 1.23
19:54:10 <zigo> (and I guess git-review could be included in the FFE)
19:54:14 <fungi> zigo: hashar: well, git-review shouldn't be an openstack-specific thing
19:54:16 <hashar> zigo: doh! :-) then it will be in the next stable or maybe we can push it via xenial-updates or similar
19:54:41 <zigo> https://wiki.ubuntu.com/XenialXerus/ReleaseSchedule
19:55:26 <fungi> okay, so it sounds like nothing significant to add to this topic aside from what is in the ml thread, so we should follow up there once someone has a chance to run back through the outstanding changes and make suggestions for missing fixes (including the regression clarkb spotted)
19:55:29 <zigo> hashar: The question is, are there features we *must* have, or is the current version in Xenial just fine?
19:55:41 <hashar> fungi: wikimedia community definitely uses git review
19:56:02 <hashar> zigo: no clue :/
19:56:20 <zigo> Let's switch topic then! :P
19:56:22 <fungi> hashar: yep! i definitely want to look at git-review as something developed by the openstack community for anyone using gerrit, not just for people using _our_ gerrit deployment
19:56:43 <fungi> #topic Infra cloud (pabelanger)
19:56:51 <fungi> #link http://lists.openstack.org/pipermail/openstack-infra/2016-March/004045.html
19:57:01 <pabelanger> This is a simple question, did we confirm we are doing 2 drops per server or 1?
19:57:01 <hashar> fungi: git-review has received a wide range of contribs from the Wikimedia community for sure :-}
19:57:12 <pabelanger> we talked about it at the mid-cycle, but haven't see it brought up
19:57:32 <pabelanger> if not, we should ask HP NOC team about it
19:57:52 <fungi> clarkb had mentioned that as a preferred deployment model so that we could skip the nasty bridge-on-bridge action we had in west
19:58:00 <rcarrillocruz> indeed
19:58:01 <rcarrillocruz> also
19:58:08 <rcarrillocruz> we had to do some stuff in glean
19:58:17 <clarkb> right but I think 10Gbe is probably more valuable than 2 drops
19:58:22 <clarkb> and we don't have that right now?
19:58:23 <rcarrillocruz> as it was not ready by the time to handle the vlans thingy we were using
19:58:26 <fungi> right, glean support for bridge configuration post-dated the west design
19:58:27 <crinkle> I don't think they are giving us 10G
19:58:35 <clarkb> crinkle: :(
19:58:37 <rcarrillocruz> oh
19:58:47 <rcarrillocruz> (sad trombone)
19:58:52 <crinkle> and I don't think we impressed hard enough that we wanted 2 drops
19:59:03 <clarkb> right because with 10Gbe we would deal
19:59:05 <clarkb> like we did before
19:59:14 <rcarrillocruz> hmm, so now just 1 nic 1GB
19:59:14 <rcarrillocruz> ?
19:59:17 <clarkb> but if we are only getting gig then I think we need to impress on them that we need it
19:59:18 <pabelanger> Is it too late to ask?
19:59:20 <fungi> it looked like the servers from west all had at least two 1gbe interfaces, but some had only one 10gbe while a few seemed to have two
19:59:29 <rcarrillocruz> i don't think it will be hard for venu to accommodate two nics
19:59:30 <pabelanger> for 2 drops
19:59:50 <rcarrillocruz> i mean
19:59:55 <rcarrillocruz> i've managed the gozer baremetal
19:59:57 <clarkb> fungi: it was 2x10Gbe with only one gbic installed and 2xgig iirc
20:00:01 <rcarrillocruz> and that' s the setup i had
20:00:06 <rcarrillocruz> 2 nics
20:00:17 <fungi> yeah, cat5e patch cables are likely no problem for them at all. copper 10gb sfps/switches on the other hand...
20:00:20 <anteaya> looks like we are at time
20:00:21 <rcarrillocruz> and never had any hold up on that
20:00:27 <crinkle> from venu's confirmation email: "We checked with DC Ops and they said all the nodes have only 1G NICs on them. So nodes to TOR switch are 1G connections."
20:00:36 <crinkle> i can follow up with them
20:00:40 <clarkb> crinkle: what, that's not true
20:00:50 <fungi> oh, also we're out of time
20:00:51 <rcarrillocruz> heh
20:00:52 <clarkb> crinkle: I am almost 100% positive we had 10Gbe nics in every one of the machines
20:01:03 <Clint> whee
20:01:12 <fungi> pleia2: we'll get to your topic first on the agenda next week if that's okay
20:01:17 <clarkb> it was those silly nics that caused kernel issues constantly
20:01:18 <fungi> thanks everyone!!!
20:01:22 <anteaya> thank you
20:01:23 <fungi> #endmeeting