20:00:15 <xgerman> #startmeeting octavia
20:00:16 <openstack> Meeting started Wed Sep 16 20:00:15 2015 UTC and is due to finish in 60 minutes.  The chair is xgerman. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:20 <openstack> The meeting name has been set to 'octavia'
20:00:20 <dougwig> o/
20:00:23 <johnsom> o/
20:00:23 <xgerman> #chair blogan
20:00:24 <rm_work> o/
20:00:26 <openstack> Current chairs: blogan xgerman
20:00:27 <sballe> o/
20:00:44 <bana_k> hi
20:00:51 <xgerman> #topic Announcements
20:00:52 <minwang2> o/
20:00:52 <blallau> hi
20:00:58 <sbalukoff> Howdy, howdy, folks!
20:01:00 <blogan> hello
20:01:03 <xgerman> blallau hi
20:01:03 <sballe> I have one quick annoucement
20:01:08 <xgerman> glad you could make it
20:01:17 <sbalukoff> Yeah, good to see you, sballe.
20:01:21 <sballe> I just wanted to let you know that I won’t be able to participate in LBaaS/Octavia moving forward. My new job doesn’t include LBaaS and Octavia. I have been gone for a while: first 4 weeks of vacation and then transitioning into my new job at Intel so I don’t blame you if you don’t remember me ;-) I talked to xgerman and told him I would like to
20:01:21 <sballe> help with the hands on lab on Octavia in Tokyo and he told me I am still welcome to do that even though I am moving on. I hope you guys agree…
20:01:46 <sballe> I hope you can read all this
20:02:01 <dougwig> sballe: congrats on your new job, hope to see you around. of course you're welcome. :)
20:02:13 <sbalukoff> That's too bad, Susanne. And yes, for what it's worth, I think it's worthwhile to have you help with the hands-on lab (though of course I don't determine that myself)
20:02:14 <fnaval> o/
20:02:16 <sballe> dougwig: cool! thx.
20:02:23 <ptoohill> 0/
20:02:25 <blogan> sballe: congrats, we'll miss you.  thanks for all your work
20:02:30 <crc32> hello.
20:02:31 <minwang2> congrats sballe
20:02:33 <sballe> :-)
20:02:37 <crc32> Good luck with intel
20:02:40 <sbalukoff> Well in any case, I hope the new job is a worthwhile promotion (and congrats, eh!)
20:02:42 <sballe> thx
20:02:50 <crc32> so uh. You're coming to Rackspace.
20:03:04 <crc32> we got flooded by intel people recently
20:03:13 <sballe> nice
20:03:17 <sbalukoff> Haha!
20:03:22 <blogan> intel is a small company
20:03:27 <dougwig> very small
20:03:28 <ptoohill> tiny
20:03:31 <sballe> yeah I haven't been asked to go and visit rax yet
20:03:35 <xgerman> yep + all roads lead to St Anton
20:03:36 <sbalukoff> They have like 3 employees, right?
20:03:37 <crc32> supposedly a big team is gonna sit adjacent to our block.
20:03:37 <johnsom> Hahaha, just 16,000+ in Oregon
20:03:41 <sbalukoff> 4 now, with Susanne.
20:03:45 <sballe> I am sure our paths will cross again
20:03:49 <blogan> its a nice little startup
20:03:56 <sballe> lol
20:04:01 <sbalukoff> Haha
20:04:10 <dougwig> they do have issues with product adoption.
20:04:34 <sbalukoff> dougwig: In that they aren't quite 100% of the CPU market yet.
20:04:53 <sballe> lol
20:05:10 <sbalukoff> I'm sure Susanne will fix that for them.
20:05:39 <crc32> wow I'm on all intel hardware apparently. At home and work.
20:05:40 <xgerman> I have faith in her
20:05:43 <crc32> model name	: Intel(R) Core(TM) i5-4690 CPU @ 3.50GHz
20:05:50 <sballe> I will miss you all but I will be in Tokyo working on the Octavia all hands and so we'll see each other then. I'll continue to attend the Octavia weekly meetings until then
20:05:58 <crc32> except for my PS4.
20:06:01 <sbalukoff> Sweet!
20:06:19 <sballe> yeah I have a PS$ too
20:06:24 <sballe> PS4
20:06:25 <crc32> we have 4 of us going to tokyo too.
20:06:31 <sballe> the rest of my stuff is Intel too
20:06:39 <xgerman> moving on...
20:06:43 <sballe> crc32: cool!
20:06:50 <xgerman> Mitaka design summit doc: #link https://etherpad.openstack.org/p/neutron-mitaka-designsummit
20:07:00 <xgerman> if you have ideas for topics please add
20:07:03 <sbalukoff> We need to get LBaaS and Octavia on there.
20:07:13 <sbalukoff> I don't see them mentioned at all yet.
20:07:15 <sballe> +1
20:07:36 <dougwig> we need to get them on there if we have things that need face time to discuss.  let's not add just to add.  just my personal IMO.
20:07:40 <xgerman> do we have stuff to discuss (e.g. Pool sharing)
20:07:50 <xgerman> ?
20:08:06 <blogan> im not sure we have to discuss that face to face
20:08:07 <johnsom> active-active aproaches?
20:08:09 <sbalukoff> pool sharing, active-active topology discussion, operator API discussion,
20:08:14 <sbalukoff> heat integration
20:08:19 <blogan> i think next steps for octavia woudl be great to discuss
20:08:43 <sbalukoff> There's a lot to discuss (again, there are IBM people already working on some of that)
20:09:03 <blogan> sbalukoff: working on the active active?
20:09:11 <sbalukoff> yes. And heat integration.
20:09:17 <johnsom> Great, so you guys can share
20:09:23 <xgerman> awesome
20:09:26 <xgerman> patches?
20:09:29 <sbalukoff> they jumped into it and then promptly went on vacation. I have a call with them tomorrow morning.
20:09:41 <xgerman> cool
20:09:43 <dougwig> sbalukoff: you putting that stuff onto the mitaka etherpad?
20:09:45 <blogan> sbalukoff: well i do worry about just moving forward with feature after feature, especially more complex features with stability and bug fixes languishing
20:09:48 <sbalukoff> And they've already promised to send designs my direction so that I can give them feedback before they go to the larger group.
20:10:00 <sbalukoff> blogan: Agreed.
20:10:01 <xgerman> nice
20:10:14 <sbalukoff> I'm trying to channel their enthusiasm in directions useful to the project.
20:10:30 <sbalukoff> dougwig: I will put it on the mitaka etherpad.
20:10:38 <xgerman> thanks sbalukoff
20:10:39 <sbalukoff> (Feel free to action-item me on that.)
20:10:40 <johnsom> blogan I agree, we have some cleanup to do
20:10:41 <blogan> i kinda feel we should keep the features limited this next cycle, and focus on technical debt, stability, and bug fixes
20:10:56 <xgerman> #action sbalukoff to put next steps Octavia on etherpad
20:11:05 <xgerman> blogan +1000
20:11:13 <johnsom> I would propose we get VRRP and multi-controller working in M
20:11:16 <dougwig> put the ideas on the etherpad.  hopefully we can have fewer sessions this time about how much neutron sucks, and more on useful topics. i'd still like to chat about separate rest vs neutron extension, but i don't think we need 40 minutes.
20:11:21 <blogan> johnsom: agreed
20:11:38 <sbalukoff> blogan: I'm with you on that. I'm hoping to convince the IBM folks to attack both stability and feature improvements by throwing more engineers at the problem.
20:11:45 <xgerman> I am still hoping for VRRP in L… but...
20:11:48 <blogan> dougwig: we can probably fit everything into a general "next steps for octavia" design session
20:11:57 <sbalukoff> Because I know they are probably making assumptions about project stability that aren't true yet. :)
20:12:11 <dougwig> blogan: i'm thinking fwaas and vpnaas as well on that prior topic.
20:12:14 <blogan> sbalukoff: if they assume stability they are making the wrong assumption :)
20:12:19 <sbalukoff> blogan: Haha!
20:12:22 <sbalukoff> True enough.
20:12:34 <johnsom> Yeah, I haven't given up on VRRP in L
20:12:36 <blogan> dougwig: in the same design session?
20:12:43 <sbalukoff> johnsom: +1
20:12:45 <dougwig> i'm beginning to notice how every summit we have a new corporate overlord.  mirantis, then hp, now ibm.
20:12:46 <sbalukoff> I want to see that, too!
20:12:46 <xgerman> no, that should get their own
20:12:57 <xgerman> also we at HP have a huge interest in stability
20:13:01 <sbalukoff> dougwig: Oh, IBM isn't nearly overlord status yet.
20:13:07 <blogan> johnsom: the gate problems havent helped it
20:13:17 <sbalukoff> HP and RAX are the biggest contributors by far right now, eh.
20:13:39 <johnsom> blogan yes, it is distracting me
20:13:45 <xgerman> ok, let’s keep moving
20:13:46 <sbalukoff> I figure if I can convince the IBM people that y'all are a threat (haha!), they might throw more engineers at this. And then we get better product faster! XD
20:13:46 <xgerman> Horizon work is going strong. TWC signed up; new exciting mocks in https://openstack.invisionapp.com/d/main#/projects/4716314
20:14:06 <sbalukoff> Just kidding on the threat thing, actually. But I am letting them know if they want influence, they have to commit more resources.
20:14:09 <xgerman> so there has been some movement and Time Warner Cable donated some resources
20:14:15 <blogan> awesome
20:14:17 <dougwig> sbalukoff: IBM needs to compete with HP for summit party quality.
20:14:25 <johnsom> xgerman +1
20:14:25 <xgerman> +1
20:14:28 <sbalukoff> dougwig: I'm working hard on that, too!
20:14:40 <sbalukoff> xgerman: +1
20:14:45 <blogan> rackspace will compete with me and the party i dont plan to throw, my part will still win
20:14:45 <johnsom> Yeah, maybe horizon panels are also a target for M
20:14:52 <blogan> party
20:14:58 <xgerman> johnsom looks like that
20:15:17 <sbalukoff> Anyway... er... what were we talking about again? ;)
20:15:19 <xgerman> anyhow, we have some nice new mocks which look like fun
20:15:25 <xgerman> rm_work is barbican getting a UI?
20:15:41 <xgerman> since we need to figure out the UX of the SSL cert
20:15:53 <dougwig> xgerman: how can i get an account on invisionapp ?
20:15:55 <rm_work> eventually
20:16:00 <redrobot> xgerman we hope to be in horizon eventually, but nobody is working on that currently.
20:16:01 <sbalukoff> dougwig: +1
20:16:05 <xgerman> dougwig ask in the ux project channel?
20:16:08 <sbalukoff> I don't have an account there either.
20:16:11 <rm_work> I don't know how soon though, it is not in their immediate plans
20:16:16 <sbalukoff> Oh! Which channel is that?
20:16:16 <rm_work> ah redrobot is here to answer cool :P
20:16:34 * redrobot just showed up
20:16:39 <sbalukoff> Yay, redrobot!
20:16:45 <xgerman> ok, since we can’t really put in our UI “now go to the CLI and run"
20:17:00 <sballe> lol
20:17:04 <blogan> xgerman: we can emulate a terminal in the UI
20:17:10 <rm_work> heh
20:17:56 <xgerman> #topic Liberty deadline stuff
20:18:10 <blogan> major thing is the gate and tempest tests
20:18:16 <xgerman> +1
20:18:19 <sbalukoff> +1
20:18:23 <xgerman> dougwig do we need the gate?
20:18:23 <blogan> i failed to even consider the scenario tests in my fixes because im a dumb shit
20:18:33 <dougwig> xgerman: yep
20:18:41 <dougwig> i'd be fine with just api tests.
20:18:44 <rm_work> lol
20:18:44 <johnsom> Yeah.  So question for dougwig, is there a way we can get bare metal for the tests?
20:18:56 <xgerman> can we use 3rd party CI for that?
20:19:03 <rm_work> or at least a different type of node that has vt-x
20:19:14 <dougwig> johnsom: i highly highly doubt it, since all of those tests run on donated instances.  but... they are donated from rax and hp, so... can you donate bare metal?
20:19:22 <johnsom> Yeah, or nested virtualization turned on
20:19:26 <blogan> or run a small subset of tests for the gate, temporarily
20:19:40 <rm_work> yeah
20:19:46 <dougwig> how does trove handle this?
20:19:49 <rm_work> maybe we can figure out a better way to share some of the created LBs?
20:19:56 <rm_work> like merge some of the tests onto the same base?
20:19:58 <xgerman> they only spin up three instances for all their tests
20:20:03 <johnsom> dougwig Trove boots just one or two VMS
20:20:30 <johnsom> We could make dsvm-1 dsvm-2 dsvm-3 that covers all of the tests.....
20:20:33 <johnsom> I guess
20:20:34 <dougwig> can we reuse amp's for test purposes?  and maybe use a periodic job that does it clean?
20:21:06 <xgerman> probably but fresh tests are better
20:21:20 <johnsom> The main issue is without the vt-x, we hit the two hour limit on the tests
20:21:22 <xgerman> if we reuse we need to troubleshoot crazy side effects
20:21:22 <blogan> i dont think we could do that, at least not in a way that would be quick and that the tests control
20:21:32 <cp16net> fwiw trove uses a devstack instance and spins up a few instances on the node but only a max of 2 at a time.
20:21:39 <sbalukoff> johnsom has code to eliminate the generation of the amp image if it's already there...  once we figure out the logic there (ie. so we don't skip that step if it's needed), that might help somewhat.
20:21:45 <cp16net> (i think this is what you guys are talking about)
20:22:01 <xgerman> cp16net you are right
20:22:15 <johnsom> sbalukoff yeah, we have been testing that today.  It buys us just ten minutes
20:22:18 <blogan> sbalukoff: i dont think it'll help enough
20:22:20 <dougwig> we can increase the timeout, but honestly anything >60mins is a big pain anyway.
20:22:27 <blogan> dougwig: agreed
20:22:33 <cp16net> xgerman: :)
20:22:33 <sbalukoff> Ok, so reworking the tests to use fewer instances is going to be key.
20:22:34 <xgerman> +1
20:22:44 <xgerman> or containers?
20:22:46 <johnsom> So what about breaking the tests down across multiple jobs?
20:22:59 <sbalukoff> xgerman: And then leave VM testing to 3rd party CI?
20:23:01 <xgerman> that’s just dirty?
20:23:06 <xgerman> sbalukoff +1
20:23:15 <xgerman> or say the future are containers :-)
20:23:16 <blogan> containers is the solution, but not in the RC1 timeframe
20:23:21 <sbalukoff> Haha! Indeed.
20:23:27 <sbalukoff> Dammit.
20:23:30 <sbalukoff> We
20:23:34 <sbalukoff> We're so close!
20:23:40 <johnsom> Yeah, the same with nested virt enablement
20:23:40 <xgerman> yeah
20:23:47 <xgerman> so, dougwig
20:23:54 <xgerman> 1) timing — when do we need to have something
20:24:10 <rm_work> oh actually, i don't honestly thing that's TOO bad
20:24:12 <dougwig> i just asked in -infra, fyi.
20:24:13 <rm_work> *think
20:24:20 <blogan> dougwig: what if we did a limited # of tests temporarily until containers or some other solution is completed?
20:24:27 <dougwig> johnsom: if you can get them all <60, that'd work for now.
20:24:28 <rm_work> if we broke the tests up into "apiv2-part1" "apiv2-part2" and they could run in parallel
20:24:31 <xgerman> or increase timeout...
20:24:40 <rm_work> different gate jobs
20:24:45 <xgerman> or do rm_works things
20:24:53 <xgerman> what would be acceptable?
20:24:59 <rm_work> was johnsom's proposal
20:25:11 <johnsom> dougwig half the problem is just getting the system setup is 30 minutes before the tests start
20:25:12 <rm_work> i just happen to agree that might work
20:25:34 <rm_work> johnsom: i think buying 10m with the downloaded image is a good start, and we shouldn't ignore that even if it doesn't solve the whole problem
20:25:45 <johnsom> Agreed
20:25:47 <rm_work> then splitting into multiple gate jobs might actually be ok
20:25:49 <xgerman> +1
20:25:55 <rm_work> how hard would that actually be?
20:26:15 <xgerman> well, would infra let us do that and would that make neutron happy
20:26:26 <dougwig> johnsom: those timeouts are configurable in the job definition.
20:26:27 <blogan> break it up by load balancer, listener, pool, member, and health monitor tests?
20:26:29 <rm_work> i don't know if infra would take objection to that
20:26:31 <dougwig> johnsom: but eek
20:26:42 <rm_work> blogan: yeah that seems good
20:26:57 <blogan> health monitor runs first, let me see how long that took
20:27:06 <rm_work> separately, they aren't that bad
20:27:07 <dougwig> given the operators in the room here, is there any chance of a 3rd party CI setup with faster nodes?
20:27:11 <rm_work> like 30-40m i think
20:27:20 <rm_work> dougwig: can 3rd party CI be voting?
20:27:21 <dougwig> or donate such to infra, if they can use them?
20:27:22 <johnsom> Yeah, I think the tests are fairly self contained.  It's just a gate wizard to get the jobs setup
20:27:23 <rm_work> and is that a good idea?
20:27:24 <dougwig> rm_work: aye
20:27:39 <rm_work> I like the "splitting by test type" approach
20:27:42 <blogan> looks like it'd clock in at an hour
20:27:43 <rm_work> apiv2-hm
20:27:49 <rm_work> apiv2-listener
20:27:52 <rm_work> etc
20:28:08 <rm_work> an hour is acceptable for dsvm, and they'd run in parallel
20:28:21 <johnsom> dougwig I'm working on an internal discussion to get the nested virt turned on, but I have no idea when it would happen
20:28:21 <rm_work> we'd just be claiming like 5 extra jenkins nodes
20:28:50 <blogan> looks like there are some random failures in the tests too
20:28:50 <xgerman> well, we have hardware so in theory we can set up a 3rd party CI with VT-X
20:29:09 <sbalukoff> xgerman: +1
20:29:38 <xgerman> but then we also need to maintain it so I like splitting tests better
20:29:52 <xgerman> (and our network hasn’t been happy the last few weeks)
20:29:55 <johnsom> My vote is to split the tests
20:30:02 <blogan> we can look at doing a bare metal instance here
20:30:12 <ptoohill> abiggun
20:30:23 <xgerman> I think we all have hardware ;-)
20:30:26 <crc32> so how much hardware do you guys want?
20:30:46 <rm_work> yeah i vote split tests first
20:30:52 <sbalukoff> Spitting the tests makes sense even with the bare metal.
20:30:57 <rm_work> and if that gets pushback or doesn't work, then we can continue investigating baremetal
20:31:07 <sbalukoff> It also feels like a lower-hanging fruit in this case.
20:31:12 <sbalukoff> So yes, let's split the tests.
20:31:19 <blogan> dougwig: thoughts?
20:31:24 <xgerman> agreed. that was my vote as well —
20:31:24 <blogan> on splitting the tests
20:31:33 <xgerman> it’s not like our solar is rock solid
20:31:41 <xgerman> solar=sonar
20:31:54 <dougwig> i agree on starting there, and then we can run one of them (pick a good subset) in the neutron check queue, all in ours.
20:32:26 <xgerman> oh, we just run all and pig out...
20:32:34 <sbalukoff> Haha!
20:32:43 <rm_work> ah yeah good point
20:32:45 <sbalukoff> We're important enough! I'm sure other projects will understand. XD
20:32:51 <rm_work> there is a neutron check for octavia as well
20:33:05 <rm_work> i agree with dougwig's assessment
20:33:07 <xgerman> yeah, once they donate hardware they can have a seat at the table :-)
20:33:24 <sbalukoff> Heh!
20:33:35 <dougwig> clarkb pointed me at this interesting thread: http://lists.openstack.org/pipermail/openstack-infra/2015-September/003138.html
20:34:15 <xgerman> mmh...
20:34:21 <dougwig> if we do want to look at finding donated hardware, a) it'd take time to sort out, and b) we'd need it from multiple providers, if we wanted a shot at getting it into infra's setup.  3rd party we could do today.  it can vote (non-binding) in the check queue.
20:35:09 <johnsom> Cool, let's find their tag and use it...<grin>
20:35:20 <crc32> we have an account that is a bottomless pit for vms but I don't see any baremetal on it.
20:36:05 <dougwig> sbalukoff: ha, no.
20:36:06 <dougwig> :)
20:36:12 <crc32> oh wow we do have baremetal
20:36:16 <crc32> suckers
20:36:24 <sbalukoff> I'll start poking my superiors about the possibility of donating hardware for this. I have no idea what that looks like at IBM, but I guess I'll find out. XD
20:36:38 <sballe> +1
20:36:43 <blogan> sbalukoff: it looks like you crying
20:36:49 <sbalukoff> that doesn't get us out of the immediate crisis yet, of course.
20:36:50 <rm_work> yeah, i'm prodding people internally about enabling vt-x
20:36:53 <sbalukoff> And neither does me crying.
20:36:55 <johnsom> In reality, we don't even need bare metal, just a host booted with the vt-x enabled
20:36:58 <rm_work> but yes, for now, splitting up the tests
20:37:05 <sbalukoff> So.... let's split tests and hope for the best for now?
20:37:15 <blogan> yeah
20:37:21 <johnsom> Split the tests!
20:37:21 <xgerman> that’s the plan
20:37:32 <xgerman> and then get one of those mainframes sbalukoff sells
20:37:35 <rm_work> split the tests! split the tests! </chant>
20:37:36 <sbalukoff> We need a fight song.
20:37:59 <sbalukoff> xgerman: I'll bet I can get my hands on a few AS/400's, eh. ;)
20:38:03 <johnsom> Who knows best how to get those setup?
20:38:11 * johnsom looks in dougwigs direction
20:38:21 <blogan> who set this job up?
20:38:26 <crc32> Yea sure along with a PDP-11
20:38:53 * dougwig hides.
20:39:03 <xgerman> ok, up next: Active-Passive
20:39:05 <blogan> each job would just do tox -e apiv2 neutron_lbaas.tests.tempest.api.v2.test_load_balancers
20:39:09 <blogan> or something like that
20:39:16 <blogan> and test_lsiteners
20:39:18 <blogan> etc
20:39:20 <xgerman> yep
20:39:21 <blogan> i cant typ
20:39:23 <dougwig> look at [testenv:apiv2] in tox.ini.  we just need more of those, that run subsets.
20:39:26 <xgerman> neither can I
20:39:27 <dougwig> then we can add jobs.
20:39:43 <blogan> dougwig: couldnt we just change the tox execution line
20:39:51 <blogan> instead of adding more tox sections
20:40:15 <xgerman> yeah, what’s the difference between the two?
20:40:22 <rm_work> not much
20:40:34 <xgerman> so let’s do what’s easier
20:40:44 <rm_work> more sections in tox.ini makes it easier to remember how to do it? i guess
20:40:45 <sbalukoff> +1
20:40:47 <dougwig> yeah, we could do it with a var.
20:41:10 <dougwig> the subsets will be defined somewhere... either in tox.ini or the gate hook script.
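[editor's note: dougwig's suggestion of adding more [testenv:apiv2]-style sections might look roughly like the following tox.ini fragment — the env names, test paths, and commands here are illustrative guesses, not the actual neutron-lbaas file contents:

```ini
# Hypothetical per-resource subsets alongside the existing [testenv:apiv2]
# section; each gate job would invoke one env so the subsets run in parallel.
[testenv:apiv2-listener]
commands = python -m testtools.run neutron_lbaas.tests.tempest.v2.api.test_listeners

[testenv:apiv2-healthmonitor]
commands = python -m testtools.run neutron_lbaas.tests.tempest.v2.api.test_health_monitors
```

Each job definition would then run e.g. `tox -e apiv2-listener` instead of the full suite.]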
20:41:25 <rm_work> well, one lets us control it with changes to openstack/octavia, the other would require us to make changes to openstack-infra/project-config if we needed to update it
20:41:40 <dougwig> no, it's tox.ini or gate hook in neutron-lbaas
20:41:48 <rm_work> ah, we'd do it in gate_hook?
20:42:03 <dougwig> i think post_test_hook, actually.
20:42:17 <dougwig> which is actually a "run this test" hook for us.
20:42:25 <rm_work> so the gate definition would pass "LISTENER" or "HEALTH_MANAGER" to the post_test_hook.sh
20:42:25 <blogan> sounds like we have the people who are going to do this sorted :)
20:42:28 <dougwig> but we're in the weeds.
20:42:29 <rm_work> and then we'd interpret that
20:42:34 <rm_work> lol yeah kk :P
20:42:48 <xgerman> ok, so ACTIVE-PASSIVE
20:42:49 <sbalukoff> dougwig: Agreed.
20:42:51 <xgerman> johnsom
20:43:00 <dougwig> rm_work: yeah, in the job template we put a var with the subset, then the hooks pick up on that, then they invoke tox with a specific env or specific args.
20:43:02 <sbalukoff> Yes, active-passive!
20:43:15 <rm_work> dougwig: kk
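[editor's note: the scheme dougwig describes — a subset variable set in the job template, picked up by the hook, which then invokes tox with specific args — could be sketched like this; the `TEST_SUBSET` variable name and dotted test paths are illustrative, not the real neutron-lbaas hook contents:

```shell
#!/bin/bash
# Hypothetical sketch of a post_test_hook.sh subset dispatcher: the gate
# job template exports TEST_SUBSET, and the hook maps it to a tempest
# test path before invoking tox.
subset_to_tests() {
    case "$1" in
        loadbalancer)  echo "neutron_lbaas.tests.tempest.v2.api.test_load_balancers" ;;
        listener)      echo "neutron_lbaas.tests.tempest.v2.api.test_listeners" ;;
        healthmonitor) echo "neutron_lbaas.tests.tempest.v2.api.test_health_monitors" ;;
        *)             echo "neutron_lbaas.tests.tempest.v2.api" ;;  # fall back to everything
    esac
}

# A real hook would end with something like:
#   tox -e apiv2 -- "$(subset_to_tests "${TEST_SUBSET:-all}")"
subset_to_tests "${TEST_SUBSET:-listener}"
```

Either way the subset-to-tests mapping lives in the repo, so updating it needs no project-config change.]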
20:43:27 <johnsom> So VRRP is in pretty good shape.  I have a race condition I'm fixing right now (when not watching gates timeout)
20:43:41 <xgerman> gates really got in the way
20:43:54 <johnsom> Yes, would have been done yesterday.
20:44:02 <crc32> so are we going to use ubuntu debian redhat or what on this Metal Trash can?
20:44:04 <xgerman> so given our deadlines — when do we need that dougwig?
20:44:22 <xgerman> aka when is the RC1 code freeze?
20:44:26 <blogan> id like to get the scenario tests passing with octavia as well, but looks like there are also some random failures in the api tests with octavia
20:44:34 <blogan> xgerman: 23rd no?
20:44:43 <rm_work> blogan: random as in "intermittent"?
20:44:48 <blogan> rm_work: yes
20:44:51 <rm_work> blegh
20:44:51 <dougwig> asap
20:44:56 <blogan> rm_work: the best kind
20:45:03 <xgerman> what is it? 23rd or asap?
20:45:44 <xgerman> ok, ASAP it is
20:45:50 <johnsom> Ok, so I am going to stop messing with the gate stuff and focus on VRRP
20:46:08 <rm_work> johnsom: well, i still think we need to finish up the pre-built image thing
20:46:09 <xgerman> yeah, I think it’s in rm_works and dougwigs capable hands
20:46:16 <blogan> well do we have it settled on who will split the tests up?
20:46:25 <xgerman> rm_work?
20:46:26 <blogan> how important is it to get vrrp in?
20:46:31 <blogan> im still uncertain about that
20:46:31 <johnsom> Folks can review it now, even run it.  I am just fixing a situation where it checks for spares incorrectly
20:46:40 <dougwig> we're technically in the bugfix period of things, and these gate changes are a "bug fix", so we're in danger with that not being present. i'm not sure there's a hard date, but whichever RC looks to be the final release, will be the deadline.
20:47:10 <xgerman> mmh, so time for hard choices?
20:47:58 <rm_work> dougwig: you want to do it? I am not sure what you had in mind exactly for the handling, but I can do it if you don't care how it is done :P
20:48:23 <crc32> One trash metal box spinning up
20:48:34 <xgerman> well, 12 minutes left
20:48:38 <dougwig> rm_work: take a crack at it.
20:48:43 <blogan> if vrrp getting in after L isn't too big of an issue, I'd feel much better not trying to rush review it while this gate stuff is going on
20:49:09 <sbalukoff> blogan: +1
20:49:33 <blogan> i feel the same way about the containers patches we have that we at rackspace need, but ive been holding that one off as well
20:49:45 <dougwig> blogan: +1
20:50:04 <dougwig> if it's not *solid* and in, i don't think we should be trying to get it in at this point. we're well past FF.
20:50:23 <xgerman> well, FF is that the patch is committed
20:50:47 <xgerman> but I agree we need to review it and if we don’t have the time we don't
20:52:22 <xgerman> ok, let’s move it out then
20:52:58 <sbalukoff> Bummer.
20:53:03 <xgerman> #decision to delay ACTIVE-PASSIVE to M
20:53:11 <xgerman> sbalukoff +1
20:53:14 <johnsom> Sad, but a good call
20:53:30 <blogan> it is sad, but we still have a lot done for L
20:53:39 <sbalukoff> By the way: We totally intend to use it once it's merged into master. So... when does L get tagged so we can merge it? ;)
20:53:55 <xgerman> I am still planning to demo it in Tokyo ;-)
20:54:01 <sbalukoff> xgerman: +1
20:54:07 <minwang2> +1
20:54:07 <johnsom> Hahaha  Sounds like sbalukoff is volunteering to do a review
20:54:22 <xgerman> #topic Octavia talk
20:54:24 <dougwig> well, octavia is release:independent, so i'd say as soon as we get the ref done, we can go crazy.
20:54:29 <sbalukoff> Oh, I've *been* doing reviews. Mostly in the neutron-lbaas and python-neutronclient stuff, though.
20:54:30 <dougwig> sbalukoff: ^^
20:54:35 <sbalukoff> I will redouble my efforts on Octavia.
20:54:46 <johnsom> Yeah, we can definitely demo in Tokyo
20:55:08 <xgerman> #info HP is getting 500 Octavia stickers
20:55:14 <sbalukoff> Sweet!
20:55:17 <rm_work> cool
20:55:29 <xgerman> but we need somebody who can get usb sticks we can hand out in our lab
20:55:29 <crc32> what flag should be present in /proc/cpuinfo?
20:55:46 <johnsom> crc32 vmx/smx
20:55:53 <xgerman> RAX did some for other projects...
20:56:03 <rm_work> not sure who we talk to about that
20:56:11 <rm_work> how many are we talking about?
20:56:12 <crc32> flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts
20:56:12 <crc32> dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
20:56:18 <rm_work> and how big?
20:56:27 <xgerman> how many people can be in the lab?
20:56:34 <xgerman> no idea but there was a limit
20:56:38 <sbalukoff> xgerman: I would like to be in it, eh!
20:56:41 <johnsom> crc32 vmx is there
20:56:44 <rm_work> crc32: yeah they're in there
20:56:44 <dougwig> stickers would be doubly cool if you attached them to iphones before handing them out.
20:56:44 <sbalukoff> Er... helping to run it, I mean.
20:56:50 <blogan> ill be spectating in the lab
20:56:53 <rm_work> lol dougwig
20:56:54 <sbalukoff> dougwig: +1
20:57:02 <crc32> ok those flags are there.
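[editor's note: the flag check johnsom and crc32 are doing by eye can be scripted; Intel VT-x advertises itself as `vmx` in the cpuinfo flags (AMD-V uses `svm`). The function name below is just for this sketch:

```shell
#!/bin/bash
# Check a cpuinfo "flags" line for hardware virtualization support:
# "vmx" indicates Intel VT-x, "svm" indicates AMD-V.
has_vt_flag() {
    printf '%s\n' "$1" | grep -E -q -w 'vmx|svm'
}

# On a live host you would read the real file, e.g.:
#   grep -E -w -c 'vmx|svm' /proc/cpuinfo
if has_vt_flag "fpu vme de pse vmx smx est"; then
    echo "nested-virt capable"
else
    echo "no VT-x/AMD-V flag"
fi
```

Note the flag only shows the CPU supports it; nested virt still has to be enabled by the hypervisor, which is the internal conversation johnsom mentions.]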
20:57:04 <xgerman> dougwig A10 is sponsoring iPhones?
20:57:11 <sbalukoff> Haha!
20:57:12 <blogan> id throw that sticker in the trash immediately
20:57:16 <xgerman> because with HP you get only iPaqs or Palm
20:57:18 <dougwig> you guys wanted to spend $1B on cloud. i'm just trying to help.
20:57:24 <johnsom> I think we want a vm with devstack on it
20:57:31 <dougwig> what do we need in the usb sticks?  size/quantity/etc ?
20:57:37 <crc32> who would be managing this box? I need to get more requirements. The box I spun up is only 32GB at 10 cores.
20:58:02 <xgerman> dougwig a few G just to put a VM with devstack on it
20:58:14 <crc32> wow make that 40. Guess I over shot it.
20:58:23 <johnsom> I can take an AI to get an idea of storage size and see what the limit is for the hands on lab head count
20:58:52 <rm_work> crc32: i think we're good for now with just splitting the tests into multiple jobs, we'll investigate bare-metal later if it is still necessary
20:58:58 <xgerman> #action johnsom figure out USB stick size + headcount for lab
20:59:08 <sbalukoff> Sweet.
20:59:11 <xgerman> #topic  Open Discussion
20:59:16 <xgerman> one minute — go!!
20:59:30 <sbalukoff> Someone merge this! https://review.openstack.org/#/c/208763/
20:59:31 <sbalukoff> ;)
20:59:46 <sbalukoff> That is all.
21:00:03 <xgerman> #endmeeting