20:00:15 #startmeeting octavia
20:00:16 Meeting started Wed Sep 16 20:00:15 2015 UTC and is due to finish in 60 minutes. The chair is xgerman. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:20 The meeting name has been set to 'octavia'
20:00:20 o/
20:00:23 o/
20:00:23 #chair blogan
20:00:24 o/
20:00:26 Current chairs: blogan xgerman
20:00:27 o/
20:00:44 hi
20:00:51 #topic Announcements
20:00:52 o/
20:00:52 hi
20:00:58 Howdy, howdy, folks!
20:01:00 hello
20:01:03 blallau hi
20:01:03 I have one quick announcement
20:01:08 glad you could make it
20:01:17 Yeah, good to see you, sballe.
20:01:21 I just wanted to let you know that I won't be able to participate in LBaaS/Octavia moving forward. My new job doesn't include LBaaS and Octavia. I have been gone for a while: first 4 weeks of vacation and then transitioning into my new job at Intel, so I don't blame you if you don't remember me ;-) I talked to xgerman and told him I would like to help with the hands-on lab on Octavia in Tokyo, and he told me I am still welcome to do that even though I am moving on. I hope you guys agree…
20:01:46 I hope you can read all this
20:02:01 sballe: congrats on your new job, hope to see you around. of course you're welcome. :)
20:02:13 That's too bad, Susanne. And yes, for what it's worth, I think it's worthwhile to have you help with the hands-on lab (though of course I don't determine that myself)
20:02:14 o/
20:02:16 dougwig: cool! thx.
20:02:23 0/
20:02:25 sballe: congrats, we'll miss you. thanks for all your work
20:02:30 hello.
20:02:31 congrats sballe
20:02:33 :-)
20:02:37 Good luck with Intel
20:02:40 Well in any case, I hope the new job is a worthwhile promotion (and congrats, eh!)
20:02:42 thx
20:02:50 so uh. You're coming to Rackspace.
20:03:04 we got flooded by Intel people recently
20:03:13 nice
20:03:17 Haha!
20:03:22 Intel is a small company
20:03:27 very small
20:03:28 tiny
20:03:31 yeah I haven't been asked to go and visit rax yet
20:03:35 yep + all roads lead to St Anton
20:03:36 They have like 3 employees, right?
20:03:37 supposedly a big team is gonna sit adjacent to our block.
20:03:37 Hahaha, just 16,000+ in Oregon
20:03:41 4 now, with Susanne.
20:03:45 I am sure our paths will cross again
20:03:49 it's a nice little startup
20:03:56 lol
20:04:01 Haha
20:04:10 they do have issues with product adoption.
20:04:34 dougwig: In that they aren't quite 100% of the CPU market yet.
20:04:53 lol
20:05:10 I'm sure Susanne will fix that for them.
20:05:39 wow, I'm on all Intel hardware apparently. At home and work.
20:05:40 I have faith in her
20:05:43 model name : Intel(R) Core(TM) i5-4690 CPU @ 3.50GHz
20:05:50 I will miss you all, but I will be in Tokyo working on the Octavia hands-on lab, so we'll see each other then. I'll continue to attend the Octavia weekly meetings until then.
20:05:58 except for my PS4.
20:06:01 Sweet!
20:06:19 yeah I have a PS4 too
20:06:25 we have 4 of us going to Tokyo too.
20:06:31 the rest of my stuff is Intel too
20:06:39 moving on...
20:06:43 crc32: cool!
20:06:50 Mitaka design summit doc: #link https://etherpad.openstack.org/p/neutron-mitaka-designsummit
20:07:00 if you have ideas for topics please add them
20:07:03 We need to get LBaaS and Octavia on there.
20:07:13 I don't see them mentioned at all yet.
20:07:15 +1
20:07:36 we need to get them on there if we have things that need face time to discuss. let's not add just to add. just my personal opinion.
20:07:40 do we have stuff to discuss (e.g. pool sharing)?
20:08:06 im not sure we have to discuss that face to face
20:08:07 active-active approaches?
20:08:09 pool sharing, active-active topology discussion, operator API discussion,
20:08:14 heat integration
20:08:19 i think next steps for octavia would be great to discuss
20:08:43 There's a lot to discuss (again, there are IBM people already working on some of that)
20:09:03 sbalukoff: working on the active-active?
20:09:11 yes. And heat integration.
20:09:17 Great, so you guys can share
20:09:23 awesome
20:09:26 patches?
20:09:29 they jumped into it and then promptly went on vacation. I have a call with them tomorrow morning.
20:09:41 cool
20:09:43 sbalukoff: you putting that stuff onto the mitaka etherpad?
20:09:45 sbalukoff: well i do worry about just moving forward with feature after feature, especially more complex features, with stability and bug fixes languishing
20:09:48 And they've already promised to send designs my direction so that I can give them feedback before they go to the larger group.
20:10:00 blogan: Agreed.
20:10:01 nice
20:10:14 I'm trying to channel their enthusiasm in directions useful to the project.
20:10:30 dougwig: I will put it on the mitaka etherpad.
20:10:38 thanks sbalukoff
20:10:39 (Feel free to action-item me on that.)
20:10:40 blogan I agree, we have some cleanup to do
20:10:41 i kinda feel we should keep the features limited this next cycle, and focus on technical debt, stability, and bug fixes
20:10:56 #action sbalukoff to put next steps for Octavia on etherpad
20:11:05 blogan +1000
20:11:13 I would propose we get VRRP and multi-controller working in M
20:11:16 put the ideas on the etherpad. hopefully we can have fewer sessions this time about how much neutron sucks, and more on useful topics. i'd still like to chat about separate REST vs neutron extension, but i don't think we need 40 minutes.
20:11:21 johnsom: agreed
20:11:38 blogan: I'm with you on that. I'm hoping to convince the IBM folks to attack both stability and feature improvements by throwing more engineers at the problem.
20:11:45 I am still hoping for VRRP in L… but...
20:11:48 dougwig: we can probably fit everything into a general "next steps for octavia" design session
20:11:57 Because I know they are probably making assumptions about project stability that aren't true yet. :)
20:12:11 blogan: i'm thinking fwaas and vpnaas as well on that prior topic.
20:12:14 sbalukoff: if they assume stability they are making the wrong assumption :)
20:12:19 blogan: Haha!
20:12:22 True enough.
20:12:34 Yeah, I haven't given up on VRRP in L
20:12:36 dougwig: in the same design session?
20:12:43 johnsom: +1
20:12:45 i'm beginning to notice how every summit we have a new corporate overlord. mirantis, then hp, now ibm.
20:12:46 I want to see that, too!
20:12:46 no, that should get its own
20:12:57 also we at HP have a huge interest in stability
20:13:01 dougwig: Oh, IBM isn't nearly overlord status yet.
20:13:07 johnsom: the gate problems havent helped it
20:13:17 HP and RAX are the biggest contributors by far right now, eh.
20:13:39 blogan yes, it is distracting me
20:13:45 ok, let's keep moving
20:13:46 I figure if I can convince the IBM people that y'all are a threat (haha!), they might throw more engineers at this. And then we get a better product faster! XD
20:13:46 Horizon work is going strong. TWC signed up; new exciting mocks at https://openstack.invisionapp.com/d/main#/projects/4716314
20:14:06 Just kidding on the threat thing, actually. But I am letting them know that if they want influence, they have to commit more resources.
20:14:09 so there has been some movement and Time Warner Cable donated some resources
20:14:15 awesome
20:14:17 sbalukoff: IBM needs to compete with HP for summit party quality.
20:14:25 xgerman +1
20:14:25 +1
20:14:28 dougwig: I'm working hard on that, too!
20:14:40 xgerman: +1
20:14:45 rackspace will compete with me and the party i dont plan to throw, my party will still win
20:14:45 Yeah, maybe horizon panels are also a target for M
20:14:58 johnsom looks like that
20:15:17 Anyway... er... what were we talking about again? ;)
20:15:19 anyhow, we have some nice new mocks which look like fun
20:15:25 rm_work is barbican getting a UI?
20:15:41 since we need to figure out the UX of the SSL certs
20:15:53 xgerman: how can i get an account on invisionapp?
20:15:55 eventually
20:16:00 xgerman we hope to be in horizon eventually, but nobody is working on that currently.
20:16:01 dougwig: +1
20:16:05 dougwig ask in the ux project channel?
20:16:08 I don't have an account there either.
20:16:11 I don't know how soon though, it is not in their immediate plans
20:16:16 Oh! Which channel is that?
20:16:16 ah redrobot is here to answer, cool :P
20:16:34 * redrobot just showed up
20:16:39 Yay, redrobot!
20:16:45 ok, since we can't really put in our UI "now go to the CLI and run"
20:17:00 lol
20:17:04 xgerman: we can emulate a terminal in the UI
20:17:10 heh
20:17:56 #topic Liberty deadline stuff
20:18:10 major thing is the gate and tempest tests
20:18:16 +1
20:18:19 +1
20:18:23 dougwig do we need the gate?
20:18:23 i failed to even consider the scenario tests in my fixes because im a dumb shit
20:18:33 xgerman: yep
20:18:41 i'd be fine with just api tests.
20:18:44 lol
20:18:44 Yeah. So question for dougwig, is there a way we can get bare metal for the tests?
20:18:56 can we use 3rd party CI for that?
20:19:03 or at least a different type of node that has vt-x
20:19:14 johnsom: i highly highly doubt it, since all of those tests run on donated instances. but... they are donated from rax and hp, so... can you donate bare metal?
20:19:22 Yeah, or nested virtualization turned on
20:19:26 or run a small subset of tests for the gate, temporarily
20:19:40 yeah
20:19:46 how does trove handle this?
20:19:49 maybe we can figure out a better way to share some of the created LBs?
20:19:56 like merge some of the tests onto the same base?
20:19:58 they only spin up three instances for all their tests
20:20:03 dougwig Trove boots just one or two VMs
20:20:30 We could make dsvm-1, dsvm-2, dsvm-3 that cover all of the tests.....
20:20:33 I guess
20:20:34 can we reuse amps for test purposes? and maybe use a periodic job that does it clean?
20:21:06 probably, but fresh tests are better
20:21:20 The main issue is that without vt-x, we hit the two-hour limit on the tests
20:21:22 if we reuse we need to troubleshoot crazy side effects
20:21:22 i dont think we could do that, at least not in a way that would be quick and that the tests control
20:21:32 fwiw trove uses a devstack instance and spins up a few instances on the node, but only a max of 2 at a time.
20:21:39 johnsom has code to eliminate the generation of the amp image if it's already there... once we figure out the logic there (i.e. so we don't skip that step if it's needed), that might help somewhat.
20:21:45 (i think this is what you guys are talking about)
20:22:01 cp16net you are right
20:22:15 sbalukoff yeah, we have been testing that today. It buys us just ten minutes
20:22:18 sbalukoff: i dont think it'll help enough
20:22:20 we can increase the timeout, but honestly anything >60 mins is a big pain anyway.
20:22:27 dougwig: agreed
20:22:33 xgerman: :)
20:22:33 Ok, so reworking the tests to use fewer instances is going to be key.
20:22:34 +1
20:22:44 or containers?
20:22:46 So what about breaking the tests down across multiple jobs?
20:22:59 xgerman: And then leave VM testing to 3rd party CI?
20:23:01 that's just dirty?
20:23:06 sbalukoff +1
20:23:15 or say the future is containers :-)
20:23:16 containers are the solution, but not in the RC1 timeframe
20:23:21 Haha! Indeed.
20:23:27 Dammit.
20:23:34 We're so close!
20:23:40 Yeah, the same with nested virt enablement
20:23:40 yeah
20:23:47 so, dougwig
20:23:54 1) timing — when do we need to have something
20:24:10 oh actually, i don't honestly think that's TOO bad
20:24:12 i just asked in -infra, fyi.
20:24:20 dougwig: what if we did a limited # of tests temporarily until containers or some other solution is completed?
20:24:27 johnsom: if you can get them all <60, that'd work for now.
20:24:28 if we broke the tests up into "apiv2-part1" and "apiv2-part2" they could run in parallel
20:24:31 or increase the timeout...
20:24:40 different gate jobs
20:24:45 or do rm_work's thing
20:24:53 what would be acceptable?
20:24:59 was johnsom's proposal
20:25:11 dougwig half the problem is that just getting the system set up takes 30 minutes before the tests start
20:25:12 i just happen to agree that might work
20:25:34 johnsom: i think buying 10m with the downloaded image is a good start, and we shouldn't ignore that even if it doesn't solve the whole problem
20:25:45 Agreed
20:25:47 then splitting into multiple gate jobs might actually be ok
20:25:49 +1
20:25:55 how hard would that actually be?
20:26:15 well, would infra let us do that, and would that make neutron happy
20:26:26 johnsom: those timeouts are configurable in the job definition.
20:26:27 break it up by load balancer, listener, pool, member, and health monitor tests?
20:26:29 i don't know if infra would take objection to that
20:26:31 johnsom: but eek
20:26:42 blogan: yeah that seems good
20:26:57 health monitor runs first, let me see how long that took
20:27:06 separately, they aren't that bad
20:27:07 given the operators in the room here, is there any chance of a 3rd party CI setup with faster nodes?
20:27:11 like 30-40m i think
20:27:20 dougwig: can 3rd party CI be voting?
20:27:21 or donate such to infra, if they can use them?
20:27:22 Yeah, I think the tests are fairly self-contained. It just takes a gate wizard to get the jobs set up
20:27:23 and is that a good idea?
20:27:24 rm_work: aye
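(A minimal sketch of the image-caching idea johnsom and sbalukoff describe above — not the actual patch. The path, variable name, and flags are illustrative; it assumes Octavia's diskimage-create script layout:)

```shell
#!/bin/bash
# Sketch: reuse a cached amphora image instead of rebuilding it on every
# gate run -- the build is a large chunk of the ~30-minute setup cost.
AMP_IMAGE=${AMP_IMAGE:-$HOME/amphora-x64-haproxy.qcow2}

if [ -f "$AMP_IMAGE" ]; then
    echo "Found cached amphora image at $AMP_IMAGE -- skipping build"
else
    # diskimage-create.sh is Octavia's image-build script; the -o flag
    # names the output file. Falls through to a fresh build when no
    # cached image exists (the "don't skip it if it's needed" case).
    ./diskimage-create.sh -o "$AMP_IMAGE"
fi
```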
20:27:39 I like the "splitting by test type" approach
20:27:42 looks like it'd clock in at an hour
20:27:43 apiv2-hm
20:27:49 apiv2-listener
20:27:52 etc
20:28:08 an hour is acceptable for dsvm, and they'd run in parallel
20:28:21 dougwig I'm working on an internal discussion to get nested virt turned on, but I have no idea when it would happen
20:28:21 we'd just be claiming like 5 extra jenkins nodes
20:28:50 looks like there are some random failures in the tests too
20:28:50 well, we have hardware, so in theory we can set up a 3rd party CI with VT-x
20:29:09 xgerman: +1
20:29:38 but then we also need to maintain it, so I like splitting tests better
20:29:52 (and our network hasn't been happy the last few weeks)
20:29:55 My vote is to split the tests
20:30:02 we can look at doing a bare metal instance here
20:30:12 abiggun
20:30:23 I think we all have hardware ;-)
20:30:26 so how much hardware do you guys want?
20:30:46 yeah i vote split tests first
20:30:52 Splitting the tests makes sense even with the bare metal.
20:30:57 and if that gets pushback or doesn't work, then we can continue investigating bare metal
20:31:07 It also feels like lower-hanging fruit in this case.
20:31:12 So yes, let's split the tests.
20:31:19 dougwig: thoughts?
20:31:24 agreed. that was my vote as well —
20:31:24 on splitting the tests
20:31:33 it's not like our sonar is rock solid
20:31:54 i agree on starting there, and then we can run one of them (pick a good subset) in the neutron check queue, and all of them in ours.
20:32:26 oh, we just run all and pig out...
20:32:34 Haha!
20:32:43 ah yeah, good point
20:32:45 We're important enough! I'm sure other projects will understand. XD
20:32:51 there is a neutron check for octavia as well
20:33:05 i agree with dougwig's assessment
20:33:07 yeah, once they donate hardware they can have a seat at the table :-)
20:33:24 Heh!
20:33:35 clarkb pointed me at this interesting thread: http://lists.openstack.org/pipermail/openstack-infra/2015-September/003138.html
20:34:15 mmh...
20:34:21 if we do want to look at finding donated hardware, a) it'd take time to sort out, and b) we'd need it from multiple providers if we wanted a shot at getting it into infra's setup. 3rd party we could do today. it can vote (non-binding) in the check queue.
20:35:09 Cool, let's find their tag and use it...
20:35:20 we have an account that is a bottomless pit for VMs, but I don't see any bare metal on it.
20:36:05 sbalukoff: ha, no.
20:36:06 :)
20:36:12 oh wow, we do have bare metal
20:36:16 suckers
20:36:24 I'll start poking my superiors about the possibility of donating hardware for this. I have no idea what that looks like at IBM, but I guess I'll find out. XD
20:36:38 +1
20:36:43 sbalukoff: it looks like you crying
20:36:49 that doesn't get us out of the immediate crisis yet, of course.
20:36:50 yeah, i'm prodding people internally about enabling vt-x
20:36:53 And neither does me crying.
20:36:55 In reality, we don't even need bare metal, just a host booted with vt-x enabled
20:36:58 but yes, for now, splitting up the tests
20:37:05 So... let's split tests and hope for the best for now?
20:37:15 yeah
20:37:21 Split the tests!
20:37:21 that's the plan
20:37:32 and then get one of those mainframes sbalukoff sells
20:37:35 split the tests! split the tests!
20:37:36 We need a fight song.
20:37:59 xgerman: I'll bet I can get my hands on a few AS/400s, eh. ;)
20:38:03 Who knows best how to get those set up?
20:38:11 * johnsom looks in dougwig's direction
20:38:21 who set this job up?
20:38:26 Yea sure, along with a PDP-11
20:38:53 * dougwig hides.
20:39:03 ok, up next: Active-Passive
20:39:05 each job would just do tox -e apiv2 neutron_lbaas.tests.tempest.api.v2.test_load_balancers
20:39:09 or something like that
20:39:16 and test_listeners
20:39:18 etc
20:39:20 yep
20:39:21 i cant typ
20:39:23 look at [testenv:apiv2] in tox.ini. we just need more of those, that run subsets.
20:39:26 neither can I
20:39:27 then we can add jobs.
20:39:43 dougwig: couldnt we just change the tox execution line
20:39:51 instead of adding more tox sections
20:40:15 yeah, what's the difference between the two?
20:40:22 not much
20:40:34 so let's do what's easier
20:40:44 more sections in tox.ini makes it easier to remember how to do it? i guess
20:40:45 +1
20:40:47 yeah, we could do it with a var.
20:41:10 the subsets will be defined somewhere... either in tox.ini or the gate hook script.
20:41:25 well, one lets us control it with changes to openstack/octavia, the other would require us to make changes to openstack-infra/project-config if we needed to update it
20:41:40 no, it's tox.ini or the gate hook in neutron-lbaas
20:41:48 ah, we'd do it in gate_hook?
20:42:03 i think post_test_hook, actually.
20:42:17 which is actually a "run this test" hook for us.
20:42:25 so the gate definition would pass "LISTENER" or "HEALTH_MANAGER" to post_test_hook.sh
20:42:25 sounds like we have the people who are going to do this sorted :)
20:42:28 but we're in the weeds.
20:42:29 and then we'd interpret that
20:42:34 lol yeah kk :P
20:42:48 ok, so ACTIVE-PASSIVE
20:42:49 dougwig: Agreed.
20:42:51 johnsom
20:43:00 rm_work: yeah, in the job template we put a var with the subset, then the hooks pick up on that, then they invoke tox with a specific env or specific args.
20:43:02 Yes, active-passive!
20:43:15 dougwig: kk
20:43:27 So VRRP is in pretty good shape. I have a race condition I'm fixing right now (when not watching gates time out)
20:43:41 gates really got in the way
20:43:54 Yes, it would have been done yesterday.
20:44:02 so are we going to use ubuntu, debian, redhat, or what on this metal trash can?
20:44:04 so given our deadlines — when do we need that, dougwig?
20:44:22 aka when is the RC1 code freeze?
20:44:26 id like to get the scenario tests passing with octavia as well, but it looks like there are also some random failures in the api tests with octavia
20:44:34 xgerman: the 23rd, no?
20:44:43 blogan: random as in "intermittent"?
20:44:48 rm_work: yes
20:44:51 blegh
20:44:51 asap
20:44:56 rm_work: the best kind
20:45:03 what is it? the 23rd or asap?
20:45:44 ok, ASAP it is
20:45:50 Ok, so I am going to stop messing with the gate stuff and focus on VRRP
20:46:08 johnsom: well, i still think we need to finish up the pre-built image thing
20:46:09 yeah, I think it's in rm_work's and dougwig's capable hands
20:46:16 well, do we have it settled on who will split the tests up?
20:46:25 rm_work?
20:46:26 how important is it to get vrrp in?
20:46:31 im still uncertain about that
20:46:31 Folks can review it now, even run it. I am just fixing a situation where it checks for spares incorrectly
20:46:40 we're technically in the bugfix period of things, and these gate changes are a "bug fix", so we're in danger with that not being present. i'm not sure there's a hard date, but whichever RC looks to be the final release will be the deadline.
20:47:10 mmh, so time for hard choices?
20:47:58 dougwig: you want to do it? I am not sure what you had in mind exactly for the handling, but I can do it if you don't care how it is done :P
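(A hedged sketch of the split mechanics settled on above: the gate job template exports a subset name, and post_test_hook.sh in neutron-lbaas maps it to a tox invocation. The variable name, the subset names beyond "LISTENER", and every module path except test_load_balancers are assumptions for illustration:)

```shell
# post_test_hook.sh (sketch) -- interpret the subset name exported by the
# gate job definition and run only that slice of the apiv2 tempest tests.
case "$LBAAS_TEST_SUBSET" in
    LOAD_BALANCER)
        tox -e apiv2 -- neutron_lbaas.tests.tempest.api.v2.test_load_balancers ;;
    LISTENER)
        tox -e apiv2 -- neutron_lbaas.tests.tempest.api.v2.test_listeners ;;
    *)
        tox -e apiv2 ;;  # no subset set: run the full suite
esac
```

The alternative raised above — one extra [testenv:...] section per subset in tox.ini — has the same effect; either way the subsets are controlled from the neutron-lbaas repo, and only the job definitions live in openstack-infra/project-config.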
20:48:23 One trash metal box spinning up
20:48:34 well, 12 minutes left
20:48:38 rm_work: take a crack at it.
20:48:43 if vrrp getting in after L isn't too big of an issue, I'd feel much better not trying to rush-review it while this gate stuff is going on
20:49:09 blogan: +1
20:49:33 i feel the same way about the containers patches we have that we at rackspace need, but ive been holding that one off as well
20:49:45 blogan: +1
20:50:04 if it's not *solid* and in, i don't think we should be trying to get it in at this point. we're well past FF.
20:50:23 well, FF is that the patch is committed
20:50:47 but I agree we need to review it, and if we don't have the time, we don't
20:52:22 ok, let's move it out then
20:52:58 Bummer.
20:53:03 #decision to delay ACTIVE-PASSIVE to M
20:53:11 sbalukoff +1
20:53:14 Sad, but a good call
20:53:30 it is sad, but we still have a lot done for L
20:53:39 By the way: we totally intend to use it once it's merged into master. So... when does L get tagged so we can merge it? ;)
20:53:55 I am still planning to demo it in Tokyo ;-)
20:54:01 xgerman: +1
20:54:07 +1
20:54:07 Hahaha. Sounds like sbalukoff is volunteering to do a review
20:54:22 #topic Octavia talk
20:54:24 well, octavia is release:independent, so i'd say as soon as we get the ref done, we can go crazy.
20:54:29 Oh, I've *been* doing reviews. Mostly in the neutron-lbaas and python-neutronclient stuff, though.
20:54:30 sbalukoff: ^^
20:54:35 I will redouble my efforts on Octavia.
20:54:46 Yeah, we can definitely demo in Tokyo
20:55:08 #info HP is getting 500 Octavia stickers
20:55:14 Sweet!
20:55:17 cool
20:55:29 but we need somebody who can get USB sticks we can hand out in our lab
20:55:29 what flag should be present in /proc/cpuinfo?
20:55:46 crc32 vmx/smx
20:55:53 RAX did some for other projects...
20:56:03 not sure who we talk to about that
20:56:11 how many are we talking about?
20:56:12 flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
20:56:18 and how big?
20:56:27 how many people can be in the lab?
20:56:34 no idea, but there was a limit
20:56:38 xgerman: I would like to be in it, eh!
20:56:41 crc32 vmx is there
20:56:44 crc32: yeah, they're in there
20:56:44 stickers would be doubly cool if you attached them to iPhones before handing them out.
20:56:44 Er... helping to run it, I mean.
20:56:50 ill be spectating in the lab
20:56:53 lol dougwig
20:56:54 dougwig: +1
20:57:02 ok, those flags are there.
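(The flag check crc32 just did can be condensed to one line. vmx is the Intel VT-x flag that nested virtualization needs; smx, also mentioned above, is Intel TXT and is not actually required for KVM:)

```shell
# Count logical CPUs advertising hardware virtualization; a non-zero
# count means VT-x (vmx) or AMD-V (svm) is exposed to this host.
grep -E -c -w 'vmx|svm' /proc/cpuinfo
```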
20:57:04 dougwig A10 is sponsoring iPhones?
20:57:11 Haha!
20:57:12 id throw that sticker in the trash immediately
20:57:16 because with HP you only get iPaqs or Palms
20:57:18 you guys wanted to spend $1B on cloud. i'm just trying to help.
20:57:24 I think we want a VM with devstack on it
20:57:31 what do we need on the USB sticks? size/quantity/etc.?
20:57:37 who would be managing this box? I need to get more requirements. The box I spun up is only 32GB and 10 cores.
20:58:02 dougwig a few GB, just to put a VM with devstack on it
20:58:14 wow, make that 40. Guess I overshot it.
20:58:23 I can take an AI to get an idea of storage size and see what the limit is for the hands-on lab head count
20:58:52 crc32: i think we're good for now with just splitting the tests into multiple jobs; we'll investigate bare metal later if it is still necessary
20:58:58 #action johnsom figure out USB stick size + headcount for lab
20:59:08 Sweet.
20:59:11 #topic Open Discussion
20:59:16 one minute — go!!
20:59:30 Someone merge this! https://review.openstack.org/#/c/208763/
20:59:31 ;)
20:59:46 That is all.
21:00:03 #endmeeting